
Revisiting the Never-Sleeping Computing Power: Is DeepSeek a "Computing Power Butcher" or a "New Year Red Envelope"? The Answer Has Been Verified

Amid sharp swings in NVIDIA's stock price and widespread market pessimism about computing power, DeepSeek should be seen as a "New Year red envelope" rather than a "computing power butcher." As DeepSeek is integrated into mainstream applications and AI models reach mass adoption, demand for computing power is expected to rise over the long term. Tencent's data show that the combined monthly active accounts of Weixin and WeChat reached 1.382 billion in the third quarter of 2024; a user base this large will raise the frequency of large-model usage and further lift the prosperity of the computing power industry. Meanwhile, DeepSeek's true training and inference costs should not be underestimated, and its efficiency gains may accelerate AI innovation and adoption going forward.
On the first day of the Lunar New Year, amid sharp fluctuations in NVIDIA's stock price and the market's most pessimistic moment on computing power, we published "Is DeepSeek a 'Computing Power Butcher' or a 'New Year Red Envelope'?", stating our view clearly: DeepSeek is a "New Year red envelope" rather than a "computing power butcher," and the computing power boom remains on a long-term upward trend. NVIDIA's stock has since risen more than 20% from its bottom, and recent events such as WeChat's integration of DeepSeek further validate that judgment. Viewed over a longer horizon, the breakthrough in model compute efficiency will in fact accelerate the popularization of and innovation in AI, driving a significant increase in computing power demand.
The democratization of large models creates substantial demand for model-powered applications, which is expected to drive long-term growth in computing power demand. Tencent stated that some test users can see an "AI Search" label at the top of the search entry in the WeChat chat window; by tapping it, they can use the full DeepSeek-R1 model free of charge for a more diversified search experience. If the entry is not displayed, the gray-scale test has not yet reached that account, and users will need to wait for the rollout to widen.
According to Tencent Holdings Limited's third-quarter 2024 report, the combined monthly active accounts of Weixin and WeChat reached 1.382 billion in the quarter. With such a large user base, the usage frequency of large models is expected to rise gradually, driving long-term growth in computing power demand.
According to a February 15 report by Economic Observer, NVIDIA GPUs have recently fallen back into short supply, from the high-performance H-series data-center GPUs to the high-end RTX 40-series graphics cards.
Growing integration and usage of large models is expected to further lift the prosperity of the computing power industry, and DeepSeek may be just a "small wave" in the vast sea of computing power. 1) DeepSeek's true training and inference costs should not be underestimated. According to the DeepSeek-V3 paper, the widely cited $5.56 million covers only the formal training run of DeepSeek-V3; it excludes the cost of preliminary research and ablation experiments on architecture, algorithms, and data. The formal training run that follows sufficient preliminary preparation is often relatively cheap: for example, Sky-T1-32B-Preview from the University of California, Berkeley, released in January 2025, had a formal training cost of only $450, yet it scores higher than OpenAI's o1-preview on benchmarks such as mathematical ability.
2) DeepSeek optimizes the underlying computing power ecosystem, which is expected to further unlock the potential of domestic computing power. In its DeepSeek-V3 technical report, DeepSeek stated that it uses NVIDIA's PTX (Parallel Thread Execution) language, which allows more precise control over how data, weights, and gradients are transferred between GPUs, thereby improving compute utilization. On the domestic side, leading domestic large models (represented by DeepSeek) are also expected to show strong capabilities for ecosystem adaptation and optimization, driving continuous improvement of the domestic computing power ecosystem. Several domestic AI chip manufacturers have already announced compatibility with the DeepSeek models.
3) Cost optimization is also an important driver of the popularization of AI inference. The DeepSeek-R1 API is priced at 1 yuan per million input tokens (cache hit) / 4 yuan per million input tokens (cache miss) and 16 yuan per million output tokens, a significant reduction relative to the input and output prices of o1-class reasoning models; a rough per-request comparison is sketched below.
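To make the pricing gap concrete, here is a minimal Python sketch of a per-request cost comparison. The DeepSeek-R1 prices are those quoted above; the o1-class prices (roughly $15 per million input tokens and $60 per million output tokens), the exchange rate of about 7.3 yuan per US dollar, and the request size are all assumptions made for illustration, not figures from the report.

```python
# Rough per-request cost comparison, for illustration only.
# DeepSeek-R1 prices are taken from the article; the o1-class prices and the
# exchange rate (~7.3 CNY/USD) are assumptions for this sketch.

CNY_PER_USD = 7.3  # assumed exchange rate

# DeepSeek-R1 API pricing (CNY per million tokens, as quoted in the text)
DS_INPUT_MISS = 4.0   # input, cache miss
DS_INPUT_HIT = 1.0    # input, cache hit
DS_OUTPUT = 16.0      # output

# Assumed o1-class pricing (USD per million tokens), converted to CNY
O1_INPUT = 15.0 * CNY_PER_USD
O1_OUTPUT = 60.0 * CNY_PER_USD

def cost_cny(input_tokens, output_tokens, price_in, price_out):
    """Cost in CNY for one request, with prices quoted per million tokens."""
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Hypothetical request: 2,000 input tokens (cache miss), 1,000 output tokens
req_in, req_out = 2_000, 1_000
ds = cost_cny(req_in, req_out, DS_INPUT_MISS, DS_OUTPUT)
o1 = cost_cny(req_in, req_out, O1_INPUT, O1_OUTPUT)
print(f"DeepSeek-R1: {ds:.4f} CNY/request, o1-class: {o1:.4f} CNY/request, "
      f"ratio ~{o1 / ds:.0f}x")
```

On these assumptions, a request of this size costs about 0.024 yuan on DeepSeek-R1 versus roughly 0.66 yuan on an o1-class API, a gap of well over an order of magnitude per call.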
Therefore, lowering entry barriers and costs in the AI industry will, over the long run, drive total demand up rather than down. All of DeepSeek's models are open source, which means every application vendor has access to large models that can compete with top-tier AI and can develop and deploy them flexibly, accelerating the development of AI applications. As model costs fall and open-source models mature, models will be deployed and invoked more frequently, producing greater overall usage. The well-known "Jevons paradox" in economics holds that when technological progress raises the efficiency of resource use, it does not reduce consumption of that resource; instead, lower usage costs stimulate greater demand and ultimately increase total resource consumption. From a longer-term perspective, therefore, DeepSeek's development will in fact accelerate the popularization of and innovation in AI, leading to a significant increase in computing power demand, especially for inference.
National-level applications integrating large models will continue to amplify computing power demand. As noted above, the combined monthly active accounts of Weixin and WeChat reached 1.382 billion in the third quarter of 2024, growing both year-on-year and quarter-on-quarter. With such massive user demand, the underlying demand for computing power will see new growth opportunities.
Search: According to the WeChat Search team at the 2023 WeChat Open Class PRO, WeChat Search had 800 million monthly active users in 2022. According to Xinbang's daily monitoring of a sample library of millions of official accounts, official accounts published more than 444 million articles in 2024, of which 307,800 exceeded 100,000 views, an increase of 53,300 such articles over 2023.
Video Accounts: According to account data publicly displayed on the Video Accounts mutual-selection platform (excluding unregistered accounts), as of January 10, 2025 there were at least 1,288 accounts with more than 1 million followers, an increase of about 2.6 times compared with 2023, and 44 accounts with more than 5 million followers, nearly a threefold increase compared with 2023.
Estimating the inference computing power demand from DeepSeek's integration into well-known leading applications: based solely on WeChat's current daily active users and average daily token call volume, we made conservative, neutral, and optimistic assumptions and applied the standard formula for large-model inference computing power demand. Under the three scenarios, the wide adoption of DeepSeek in WeChat Search is expected to raise inference demand and generate AI server capital expenditure of 29.4 billion, 52.9 billion, and 88.2 billion yuan, respectively. A generic sketch of this type of estimate is given below.
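The report does not reproduce its formula, so the Python sketch below shows one common back-of-envelope form of this kind of estimate. Every numeric input (daily active users, tokens per user per day, activated parameters, per-GPU throughput, utilization, server configuration and price) is a hypothetical placeholder rather than the authors' assumption, so the output will not and is not meant to match the 29.4/52.9/88.2 billion yuan scenarios; the point is only to show how DAU and token volume translate into AI server capital expenditure.

```python
# Back-of-envelope estimate of inference compute demand and AI-server capex.
# Structure: DAU x tokens/day x FLOPs-per-token -> required FLOPS -> GPUs ->
# servers -> capex. All numbers below are hypothetical placeholders, not the
# report's assumptions, and attention/KV-cache overhead is ignored.

SECONDS_PER_DAY = 86_400

def ai_server_capex_cny(
    dau,                          # daily active users calling the model
    tokens_per_user_per_day,      # average tokens generated per user per day
    active_params,                # parameters activated per token (MoE models activate a subset)
    gpu_peak_flops,               # peak FLOPS of one accelerator
    utilization,                  # average achieved fraction of peak (MFU)
    gpus_per_server=8,
    server_price_cny=1_500_000,   # assumed price of one 8-GPU AI server
):
    # ~2 FLOPs per activated parameter per generated token (forward pass)
    daily_flops = dau * tokens_per_user_per_day * 2 * active_params
    required_flops = daily_flops / (SECONDS_PER_DAY * utilization)
    gpus = required_flops / gpu_peak_flops
    servers = gpus / gpus_per_server
    return servers * server_price_cny

# Hypothetical scenario: 400M DAU, 5,000 tokens/user/day, 37B activated
# parameters, 1e15 peak FLOPS per GPU (H100-class), 10% utilization.
capex = ai_server_capex_cny(
    dau=4e8,
    tokens_per_user_per_day=5_000,
    active_params=3.7e10,
    gpu_peak_flops=1e15,
    utilization=0.1,
)
print(f"Illustrative AI-server capex: ~{capex / 1e9:.1f} billion CNY")
```

Real provisioning would also need headroom for peak concurrency and redundancy, so an actual capacity plan would sit above this kind of average-load estimate.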
It should also be noted that several mainstream Chinese apps have similarly high usage and search frequency. According to QuestMobile, WeChat's average MAU exceeded 1 billion in the third quarter of 2024, and several other domestic apps also have MAUs in the hundreds of millions, such as Taobao (about 900 million), Douyin (800 million), Alipay (900 million), Meituan (500 million), and Weibo (500 million). It is therefore reasonable to expect that as these apps integrate large models such as DeepSeek and reach wide adoption, demand for inference computing power will continue to grow rapidly.
Article authors: Lü Wei: S0100521110003, Guo Xinyu: S0100518120001, Source: Computer Command, Original title: "Revisiting the Never-Sleeping Computing Power: Is DeepSeek a 'Computing Power Butcher' or a 'New Year Red Envelope' Already Verified"
Risk Warning and Disclaimer
The market carries risks, and investment requires caution. This article does not constitute personal investment advice and does not take into account the specific investment objectives, financial situation, or needs of individual users. Users should consider whether any opinions, views, or conclusions in this article are appropriate to their particular circumstances. Any investment made on this basis is at the investor's own risk.