Understanding the Market | Alibaba-W opened over 6% higher after releasing the open-source reasoning model QwQ-32B, which rivals DeepSeek-R1 with 1/20 the parameters

Zhitong
2025.03.06 01:31

According to Zhitong Finance APP, Alibaba-W (09988) opened more than 6% higher; as of the time of writing it was up 6.24% at HKD 138, with turnover of HKD 898 million.

On the news front, market reports on March 6 indicated that Alibaba released and open-sourced a new reasoning model, Tongyi Qianwen QwQ-32B, which matches DeepSeek-R1's overall performance in mathematics, coding, and general capabilities while cutting deployment costs enough to allow local deployment on consumer-grade graphics cards. Since the beginning of 2023, the Alibaba Tongyi team has open-sourced more than 200 models.
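To illustrate what "local deployment on consumer-grade graphics cards" typically involves, below is a minimal sketch using the Hugging Face transformers library. It assumes the open weights are published under the model ID "Qwen/QwQ-32B" (an assumption for illustration, not confirmed by this article) and uses 4-bit quantization, one common way to fit a 32-billion-parameter model onto a single high-end consumer GPU:

```python
# Minimal local-inference sketch for an open-weights 32B reasoning model.
# Model ID "Qwen/QwQ-32B" is assumed here; 4-bit quantization via
# bitsandbytes shrinks the weights enough for a single consumer GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Qwen/QwQ-32B"  # assumed checkpoint name

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4 bits
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for stability
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # place layers on the available GPU(s)
)

messages = [{"role": "user", "content": "How many prime numbers are below 30?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

As a rough sanity check, 32 billion parameters at 4 bits each come to about 16 GB of weights before runtime overhead, which is within reach of a 24 GB consumer card; actual memory use and throughput depend on the quantization scheme and hardware.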

According to the official introduction, the model, with only 32 billion parameters, not only performs on par with DeepSeek-R1, which has 671 billion parameters (of which 37 billion are activated per token), but also surpasses it in certain benchmarks. The Alibaba Qwen team stated that this result highlights the effectiveness of applying reinforcement learning to strong foundation models that have undergone large-scale pre-training, and that it hopes to demonstrate that combining strong foundation models with large-scale reinforcement learning may be a viable path toward artificial general intelligence.