
The $500 billion revenue forecast is too conservative! NVIDIA CFO says it "will definitely be higher"; Jensen Huang says demand from Chinese customers is strong

NVIDIA CFO Colette Kress said that since the company gave its forecast of $500 billion in existing and future data center chip revenue in October 2025, customer interest has only continued to grow. The company's optimistic outlook on AI applications is driven not only by AI demand itself but also by growing demand for enterprise data processing, which is fueling the next generation of computing needs.
NVIDIA executives have repeatedly sent optimistic signals about the sales prospects of the Blackwell and next-generation Rubin chips.
On Tuesday the 6th, Eastern Time, NVIDIA Chief Financial Officer Colette Kress said at an event hosted by JPMorgan that, thanks to strong demand, NVIDIA is now more optimistic about its data center business: by the end of 2026, revenue from NVIDIA's data center chips "will definitely" exceed the $500 billion forecast given in October last year.
The $500 billion Kress referred to is the figure Jensen Huang gave at the GTC conference more than two months ago, when he said that NVIDIA's existing and future data center chips would generate approximately $500 billion in revenue by the end of 2026. Goldman Sachs noted at the end of October that this forecast was about 12% above the market consensus at the time.
Kress's remarks extend the optimistic tone of NVIDIA CEO Jensen Huang's keynote at the Consumer Electronics Show (CES) on Monday, where NVIDIA officially announced the Rubin chip. In the keynote, Huang said the new-generation Vera Rubin platform is in full production, with inference costs cut to one-tenth of those on the Blackwell platform.
In a media interview on Tuesday, Huang also emphasized that NVIDIA's new chips deliver 10 times the performance of the previous generation, and said that demand from Chinese customers is strong.
Despite the positive signals from Kress and Huang, NVIDIA's stock has not reversed its recent slide. After rising more than 2% in early trading on Tuesday, the shares gave back their gains, turned slightly negative at midday, and closed down nearly 0.5%, a second consecutive daily decline. NVIDIA rose about 39% in 2025; while it remained a major driver of the US stock market's advance, the gain was far smaller than the more-than-170% rise in 2024, weighed down mainly by concerns about an AI bubble and competition from Google's TPU.

Revenue Expectations Continue to Rise
In her remarks on Tuesday, Kress made clear that customer interest has kept growing since the $500 billion forecast was given in October 2025. Discussing the revenue outlook for data center chips, she said, "The $500 billion (forecast) number will definitely be higher."
Kress noted that NVIDIA's optimistic expectations for AI applications come not only from AI demand itself but also from demand for enterprise data processing, which is driving the growth of next-generation computing needs and could help push overall investment to trillions of dollars by the end of 2030.
This statement echoes Jensen Huang's prediction at the GTC conference on October 28, 2025. At the time, Huang said the company has "visibility" into roughly $500 billion of cumulative data center revenue over 2025-2026, covering products based on the Blackwell and next-generation Rubin architectures. According to Goldman Sachs' analysis, this target is 12% higher than the $447 billion consensus reflected in Visible Alpha Consensus Data and 10% higher than Goldman Sachs' own forecast of $453 billion.
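As a quick sanity check on the percentages cited above, using only the figures in this paragraph (the $447 billion consensus and the $453 billion Goldman Sachs forecast), the gaps work out roughly as stated:

```latex
% Rough check of the percentage gaps quoted above (figures in billions of USD)
\begin{align*}
\frac{500 - 447}{447} &\approx 0.119 \approx 12\% && \text{vs. the Visible Alpha consensus} \\
\frac{500 - 453}{453} &\approx 0.104 \approx 10\% && \text{vs. Goldman Sachs' own forecast}
\end{align*}
```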
In early December 2025, Kress said that the $500 billion booking figure does not include any work NVIDIA is doing on the next phase of the OpenAI agreement, and revealed that the previously announced $100 billion investment agreement with OpenAI has not yet been finalized. She said at the time that the company "definitely has the opportunity to secure more orders on top of the announced $500 billion."
At Tuesday's event, Kress was asked about the potential role of the Chinese market in NVIDIA's revenue. She said the U.S. government is working through license applications and NVIDIA has received orders from customers, but the specific outcomes remain unclear.
Rubin Platform in Full Production
On Monday at CES in Las Vegas, Jensen Huang announced that the next-generation Vera Rubin AI platform is now in full production. He said the platform achieves large leaps in inference cost and training efficiency through the integrated design of six new chips, with the first customers set to receive deliveries in the second half of 2026.
Huang described the Rubin GPU as "a huge monster" and laid out the design logic: "The inference cost of AI needs to fall by a factor of ten each year, while the number of tokens generated by AI 'thinking' grows fivefold each year." He stressed that this performance leap, which exceeds what Moore's Law would predict, comes from "extreme co-design": a comprehensive rebuild spanning the CPU, GPU, networking chips, and cooling system.
According to reports, the Rubin GPU delivers 50 PFLOPS of inference performance at NVFP4 precision, five times that of Blackwell, and 35 PFLOPS of training performance, 3.5 times the previous generation. The cost of generating inference tokens can fall to one-tenth of that on the Blackwell platform. Each GPU is packaged with eight stacks of HBM4 memory offering up to 22 TB/s of bandwidth. Microsoft's next-generation AI "super factory" will deploy hundreds of thousands of Vera Rubin chips.
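As a back-of-the-envelope reading of the multiples quoted above (derived only from this article's figures, not from official NVIDIA spec sheets), the implied previous-generation per-GPU baselines would be:

```latex
% Implied previous-generation baselines, derived solely from the multiples above
\begin{align*}
\text{NVFP4 inference:} \quad & 50~\text{PFLOPS} \div 5 = 10~\text{PFLOPS} \\
\text{Training:} \quad & 35~\text{PFLOPS} \div 3.5 = 10~\text{PFLOPS}
\end{align*}
```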

To address AI's "memory" bottleneck, NVIDIA has built an inference context-memory storage platform based on the BlueField-4 DPU, adding 16TB of high-speed shared memory per GPU on top of the existing 1TB, connected at 200 Gb/s, to ease the long-context "memory wall" problem.
On energy efficiency, the Rubin NVL72 rack is fully liquid-cooled and supports an inlet water temperature of 45 degrees Celsius, meaning data centers can dissipate its heat without energy-hungry chillers. Huang said this will save global data centers 6% of their electricity.
Jensen Huang's Speech Leads to Surge in Storage Stocks and Plunge in Data Center Cooling Stocks
Jensen Huang's speech at CES had a significant impact on the stock prices of related industries. He emphasized the demand for memory and storage in AI systems, stating, "This is a market that has never existed before and is likely to become the largest storage market in the world, essentially carrying the working memory of global AI."
Driven by those remarks, storage chip giant SanDisk became the best-performing stock in the S&P 500 index on Tuesday, with shares rising about 27.6%, the largest single-day gain since February 18, 2025.

SanDisk's stock has surged over 40% in the first three trading days of this year, and has skyrocketed about 1050% since the low point in April last year. Storage device manufacturers Western Digital and Seagate Technology also recorded double-digit gains on Tuesday.
In contrast, shares of data center cooling system makers fell sharply. Johnson Controls dropped as much as 11% on Tuesday and closed down more than 6.2%. Modine Manufacturing fell 21% intraday before narrowing its loss to close down nearly 7.5%, while Trane Technologies and Carrier Global closed down 2.5% and 0.5%, respectively.

The drop in these stocks stemmed from Huang's comments that the Rubin chip can use warm-water cooling without chillers. At CES, Huang said that server racks equipped with the new Rubin chip can be cooled with water at temperatures warm enough that no chillers are needed, with airflow requirements comparable to racks equipped with Blackwell chips.
Industry research indicates that chillers are the "primary" equipment provided to data centers by companies like Trane and Johnson Controls.
Baird analyst Timothy Wojs noted in his report that Huang's comments "raised some questions and concerns about the long-term positioning of cooling equipment in data centers, especially as liquid cooling technology becomes increasingly prevalent."
Barclays analyst Julian Mitchell pointed out, "Given NVIDIA's dominant position in the entire AI ecosystem, his remarks, while seemingly exaggerated, should not be underestimated."
