
NVIDIA (Minutes): GB300 has shipped; Rubin is scheduled for mass production next year.
The following are Dolphin Research's minutes of NVIDIA's FY2026 Q2 earnings call. For an interpretation of the results, please refer to "NVIDIA: The Universe's Top Stock, Is It a Crime Not to Be Explosive?"
I. $NVIDIA (NVDA.US) Key Financial Information Review
1. Overall Performance and Financial Highlights (Q2)
a. Total Revenue: Reached a record $46.7 billion, exceeding expectations, with sequential growth across all market platforms.
b. Data Center Revenue: Increased by 56% year-over-year. Despite a $4 billion decline in H20 product revenue due to Chinese policies, the data center business still achieved sequential growth.
c. Gross Margin: GAAP gross margin was 72.4%, and non-GAAP gross margin was 72.7%. Excluding gains from H20 inventory release, non-GAAP gross margin was 72.3%, still above expectations.
d. Shareholder Returns: Returned $10 billion to shareholders through stock repurchases and cash dividends. The board authorized an additional $60 billion in stock repurchase.
e. Inventory: Increased from $11 billion to $15 billion to support new product ramp-up.
II. Detailed Content of NVIDIA's Earnings Call
2.1 Executive Statements Key Information
1. Product and Technology Progress:
a. Blackwell: Strong demand, reaching record revenue levels with a 17% sequential increase. It has become the new standard for AI inference performance. GB300 began production and shipping in Q2 and is now in full production at roughly 1,000 racks per week, with output expected to accelerate further in Q3.
- Performance: GB300 delivers 10 to 50 times better energy efficiency per token than Hopper. A $3 million investment in GB200 can generate roughly $30 million in token revenue. Using the NVFP4 format, GB300 trains 7 times faster than H100 at FP8.
- Ecosystem Adoption: Widely adopted by all major cloud providers and AI companies, including AWS, Google, Microsoft, OpenAI, Meta, Mistral, etc.
b. Hopper (H100/H200): Shipments of H100 and H200 increased this quarter, indicating widespread demand for accelerated computing.
c. Rubin: Chips have entered the fab, and the platform is scheduled for mass production next year, continuing NVIDIA's pace of annual new product innovation.
d. Networking: Revenue reached a record $7.3 billion, up 46% sequentially and 98% year-over-year.
- Spectrum-X Ethernet: Annualized revenue has exceeded $10 billion.
- InfiniBand: Benefited from XDR technology, revenue nearly doubled sequentially.
- NVLink: Bandwidth is 14 times that of PCIe Gen 5, with strong growth.
2. Performance of Business Segments:
a. Data Center: Remains the core growth engine, benefiting from the wave of AI infrastructure construction.
- Sovereign AI has become an important growth driver, with expected revenue in this area exceeding $20 billion this year, more than double last year.
- RTX PRO servers are in full production, expected to become a multi-billion-dollar product line serving enterprise AI applications and digital twins.
b. Gaming: Revenue reached a record $4.3 billion, up 14% sequentially and 49% year-over-year. Growth driven by RTX 5060 desktop GPUs and new Blackwell architecture products. GeForce NOW cloud gaming service will receive a major upgrade in September, introducing Blackwell performance.
c. Professional Visualization: Revenue of $601 million, up 32% year-over-year, driven by high-end RTX workstation GPUs.
d. Automotive: Revenue of $586 million, up 69% year-over-year, mainly driven by autonomous driving solutions. The new-generation SoC Thor has begun shipping, with an order-of-magnitude performance improvement over Orin.
3. Discussion on the Chinese Market: Revenue from China in Q2 fell to a low single-digit percentage of data center revenue. The U.S. government has begun reviewing licenses for H20 sales to China; although some customers have been approved, NVIDIA has not yet shipped. The U.S. government has indicated it intends to take 15% of H20 sales revenue, but no legislation has been enacted.
a. Impact on Q3 Outlook: The Q3 outlook does not include any H20 sales revenue to China. If the issue is resolved, Q3 could see an increase of $2 billion to $5 billion in H20 revenue.
b. NVIDIA continues to advocate for the approval of Blackwell and other products for sale to China, believing it benefits the U.S. economy and technological leadership.
4. Third Quarter Performance Outlook (Q3):
a. Total Revenue: Expected to be $54 billion (±2%), implying a sequential increase of over $7 billion.
b. Gross Margin: Expected non-GAAP gross margin of 73.5% (±50 basis points), and expected to reach around 75% by year-end.
c. Operating Expenses: Expected full-year operating expenses to grow 35-40% year-over-year, higher than previous expectations, to seize growth opportunities.
5. Industry Trends: Management believes we are at the beginning of an AI-driven industrial revolution. By 2030, annual AI infrastructure spending is expected to reach $3 trillion to $4 trillion.
a. Growth Drivers: ① The enormous demand for computing power from inference and agentic AI. ② Global sovereign AI construction. ③ Widespread adoption of enterprise AI. ④ The rise of physical AI and robotics.
b. Robotics: The Thor computing platform and Omniverse digital twin platform are being adopted by leading companies like Amazon, Boston Dynamics, and Figure AI, becoming a long-term driver of data center demand.
2.2 Q&A
Q: Considering the 12-month cycle from wafer to rack shipment, and you confirmed the Rubin platform will begin mass production in the second half of next year, can you elaborate on the company's vision for growth in 2026, especially the growth prospects for data center and networking businesses?
A: The core driver of our long-term growth is the revolutionary evolution of AI from "single query" to "reasoning and agentic AI." In the past, a chatbot took one prompt and generated one answer; now, AI can research, think, plan, and even use tools, requiring hundreds or even thousands of times more computing power than before. This evolution greatly enhances AI's effectiveness, significantly reduces hallucinations, and enables AI to perform real tasks, opening breakthroughs for enterprise applications and physical AI (such as robotics and autonomous driving systems).
We started preparing for this years ago, launching the Blackwell NVL72 "rack as a computer" system, achieving a massive leap in computing node scale. Although technically challenging, the result is a significant improvement in performance, energy efficiency, and cost-effectiveness. Looking ahead, from now to 2030, we will continue to expand and seize this $3 to $4 trillion AI infrastructure construction opportunity through Blackwell, Rubin, and subsequent products. In the past two years, the capital expenditure of the four major cloud service providers alone has doubled to about $600 billion, and we believe this is just the beginning of this massive construction wave. Advances in AI technology are enabling it to truly penetrate and solve problems across various industries.
Q: Regarding the potential $2 billion to $5 billion revenue from the Chinese market, what conditions need to be met to achieve this? Additionally, what do you think the sustainable revenue level for the China business will be after entering the fourth quarter?
A: To ship H20 products, several key factors are currently in play. We have received the first batch of licenses, customers are interested in H20 products, and we are ready with the corresponding supply. Therefore, we estimate the potential to ship $2 billion to $5 billion this quarter. However, the situation remains unclear due to ongoing geopolitical issues between governments and dynamic changes in customer procurement decisions, so we are still uncertain about the final outcome. If more customer interest and licenses are approved, we also have the capacity to produce and ship more.
Q: Regarding the competitive landscape, your major customers are developing or planning their own ASIC chips. Competitors also predict significant growth in their AI business next year. Do you think the market could shift from NVIDIA's GPUs to ASICs? What feedback have you received from customers? How do they balance using commercial chips (like NVIDIA products) and self-developed ASICs?
A: NVIDIA's products are fundamentally different from ASICs. While many companies have initiated ASIC projects, few have actually gone into production because accelerated computing is an extremely complex full-stack co-design problem, far from simply compiling software onto a processor. Today's AI factories are the most extreme challenges in computer science history, with model architectures (such as from autoregressive to diffusion models) iterating rapidly, and NVIDIA's platform can flexibly adapt to all these changes and accelerate the entire workflow from data processing to final inference.
Our core advantage lies in building a ubiquitous, unified ecosystem. From the cloud to on-premises to robotics, developers can work under the same programming model (CUDA), making NVIDIA the preferred choice for all AI frameworks worldwide, ensuring the highest utility and longest lifecycle for data centers built on our platform. Moreover, we offer an extremely complex system-level solution, far beyond a single GPU. For example, the Blackwell and Rubin platforms integrate custom CPUs, SuperNICs, fifth-generation NVLink switches, Spectrum-X networking, and the latest cross-data center connectivity technology, a level of system complexity unmatched by a single ASIC.
Finally, the key reason we are chosen by all cloud providers is our extreme economic efficiency. In an era of power-constrained data centers, we have the industry's best "performance per watt," directly translating into revenue-generating capacity for customers. Simultaneously, our unparalleled "performance per dollar" provides customers with substantial profit margins. In summary, NVIDIA offers a holistic full-stack solution for AI factories, which is why we are the industry's preferred choice.
Q: You recently raised your market size expectation for AI infrastructure from the previous $1 trillion (mainly referring to computing) to $3 to $4 trillion by 2030. Does this mean you predict that by 2030, spending on the "computing" portion alone could exceed $2 trillion? What market share do you expect NVIDIA to capture by then? To achieve this $3 to $4 trillion scale, do you see potential bottlenecks like "power"?
A: The $3 to $4 trillion market size is a reasonable forecast. We see that the capital expenditure of the four major cloud service providers (CSPs) alone has doubled to $600 billion annually over the past two years, and this is only part of the market, excluding other global cloud providers, national sovereign AI, and substantial enterprise on-premises deployments. Over the next five years, as AI becomes more widespread, we expect the global distribution of computing facilities to align more closely with GDP distribution, and AI itself will accelerate GDP growth.
In a typical AI factory or data center, NVIDIA plays the role of an AI infrastructure company. For an AI factory costing $50 billion to $60 billion at gigawatt power levels, NVIDIA's content accounts for about 35% of the total. We provide far more than GPU chips; we deliver a complete set of complex systems. For example, building a Rubin supercomputer requires six different types of chips and scaling them out to hundreds of thousands of computing nodes.
Regarding bottlenecks, power is indeed one of the main constraints in the future. Therefore, our core mission is to continuously improve "performance per watt." For an AI factory with a fixed power (e.g., 100 megawatts), its output (i.e., the number of tokens it can generate) directly determines its revenue. NVIDIA's technology can extract the most computing performance from each unit of energy, thereby directly enhancing customers' revenue-generating capacity. Simultaneously, our extremely high "performance per dollar" ensures customers achieve the best profit margins. Therefore, the key to overcoming bottlenecks like power lies in continuously improving technological efficiency, which is our focus.
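The economics in the answer above reduce to simple arithmetic. A minimal sketch: the 35% value share and the $50-60 billion factory cost are taken from the answer, while the tokens-per-joule and token-price figures are purely hypothetical placeholders, not NVIDIA-disclosed numbers.

```python
# Back-of-envelope sketch of the AI-factory economics described above.
# The 35% content share and $50-60B factory cost come from the call;
# tokens_per_joule and the token price are hypothetical illustrations.

SECONDS_PER_YEAR = 365 * 24 * 3600

def nvidia_content_value(factory_cost_usd: float, share: float = 0.35) -> float:
    """NVIDIA's share of a gigawatt-class AI factory's total cost."""
    return factory_cost_usd * share

def annual_token_revenue(power_mw: float, tokens_per_joule: float,
                         usd_per_million_tokens: float) -> float:
    """Revenue of a power-limited factory: energy budget -> tokens -> dollars."""
    energy_joules = power_mw * 1e6 * SECONDS_PER_YEAR
    tokens = energy_joules * tokens_per_joule
    return tokens / 1e6 * usd_per_million_tokens

# 35% of a $50-60B factory is roughly $17.5-21B of NVIDIA content.
low, high = nvidia_content_value(50e9), nvidia_content_value(60e9)

# At fixed power (100 MW), revenue scales linearly with performance per watt:
base   = annual_token_revenue(100, tokens_per_joule=500,  usd_per_million_tokens=2.0)
better = annual_token_revenue(100, tokens_per_joule=1000, usd_per_million_tokens=2.0)
assert better == 2 * base  # doubling tokens/joule doubles revenue at the same power
```

The second pair of calls is the "performance per watt" lever in one line: with energy fixed, revenue moves one-for-one with tokens generated per joule.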
Q: You mentioned that China has half of the world's AI software talent. How much growth potential does NVIDIA have in this market? In the future, how important is it for the company's business development to obtain licenses and sell advanced architectures like Blackwell in China?
A: I estimate that without restrictions, the Chinese market should have been a $50 billion opportunity for us this year, and it will grow at about 50% annually, similar to the global AI market. China is the world's second-largest computing market and has about half of the world's AI researchers. Many leading open-source models, such as DeepSeek, Qwen, and Kimi, originated in China, and these models are crucial for driving global enterprises, SaaS companies, and the robotics field to adopt AI. Therefore, it is very important for U.S. technology companies to serve this market. We have been communicating with the U.S. government, emphasizing the importance of allowing U.S. companies to compete in the Chinese market, which helps the U.S. technology stack become the global standard and win the AI race. Currently, H20 has been approved for sale to companies not on the entity list, and many licenses have been approved. Given this, I believe it is a real possibility to introduce the Blackwell architecture into the Chinese market in the future, and we will continue to advocate for this.
Q: What is the market opportunity for your recently released Spectrum-XGS (for connecting multiple AI clusters)? Can we understand it as a new layer of data center interconnect (DCI)? In your Ethernet product portfolio, which has already exceeded $10 billion in annual revenue, how much business scale can Spectrum-XGS contribute in the future?
A: We now offer three different levels of networking technology: NVLink for "Scale-Up," InfiniBand and Spectrum-X Ethernet for "Scale-Out," and the newly released Spectrum-XGS for "Scale-Across."
1. Scale-Up: The core technology is NVLink, which is used to build the largest possible virtual GPU computing nodes. The revolutionary NVLink 72 technology in the Blackwell platform greatly enhances memory bandwidth, which is crucial for long-thinking agentic AI and inference systems, and is key to achieving a significant generational leap in performance.
2. Scale-Out: We offer two options. First is InfiniBand, which has unparalleled low latency and low jitter, making it the clear choice for supercomputing and top model developers, delivering the best AI factory performance. Second is Spectrum-X Ethernet, which is not ordinary Ethernet but technology optimized specifically for AI, with designs like congestion control, making its performance far superior to other Ethernet and very close to InfiniBand, suitable for data centers wishing to unify their Ethernet technology stack.
3. Scale-Across: This is where Spectrum-XGS is positioned, a giga-scale technology used to connect multiple independent AI factories or data centers into a massive "super factory" system.
For AI factories, choosing the right networking technology is crucial. An efficient network can increase the overall computing throughput efficiency of a factory from 65% to 85% or even 90%. For a $50 billion giga-scale AI factory, an efficiency improvement of several percentage points means creating an additional $10 billion to $20 billion in value. This enormous return makes investing in advanced networking almost "free." This is why we acquired Mellanox five and a half years ago and have continued to invest heavily in networking technology. Our Spectrum-X Ethernet has grown into a sizable business just a year and a half after its launch, achieving great success. We believe all three layers of networking technology will achieve outstanding development.
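The value-of-efficiency claim above can be checked with one line of arithmetic. A minimal sketch, assuming efficiency points map proportionally onto the factory's capital value, which is the speaker's own simplification:

```python
# Check of the networking-efficiency arithmetic in the answer above.
# Assumes "efficiency points" translate proportionally into factory value,
# as the speaker's framing implies.

def incremental_value(capex_usd: float, eff_before: float, eff_after: float) -> float:
    """Extra delivered-compute value from raising throughput efficiency."""
    return capex_usd * (eff_after - eff_before)

capex = 50e9  # $50B giga-scale AI factory, per the call
gain_85 = incremental_value(capex, 0.65, 0.85)  # ~$10B
gain_90 = incremental_value(capex, 0.65, 0.90)  # ~$12.5B
```

Under this strictly proportional reading, the 65% to 85-90% lift is worth roughly $10 to $12.5 billion; the wider $10-20 billion range quoted on the call presumably folds in additional assumptions about factory size or efficiency gain.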
Q: The Q3 guidance implies a sequential revenue increase of roughly $7 billion. How should we understand the allocation of this increment among Blackwell, Hopper, and Networking? Do you think Hopper's strong performance will continue? How should we break this $7 billion increment down?
A: Looking at the growth from Q2 to Q3, Blackwell will clearly remain the absolute main driver of our data center growth. You should expect Blackwell to be the core engine of our performance growth. But remember that strong Blackwell sales simultaneously drive both our Compute and Networking businesses, because we sell complete systems that include the NVLink technology mentioned earlier.
As for Hopper, we are indeed still selling H100 and H200, especially in the HGX system form. However, Blackwell's revenue contribution share will be the largest. While we cannot provide more specific product line breakdown data within the quarter, the core information is: the Blackwell platform will be the main growth driver.
Q: How do you view the future transition strategy from Blackwell to Rubin? Compared to Blackwell, what new, incremental features and value can the Rubin platform bring to customers? From a performance improvement perspective, do you think Rubin's progress compared to Blackwell is greater, smaller, or a similar leap as Blackwell's improvement over Hopper?
A: We have entered a cadence of releasing a new architecture every year. The reason we can do this, and can recommend that customers plan and build data centers on an annual rhythm, is that it maximizes cost reduction and revenue generation for customers.
The key metric is "performance per watt," i.e., the number of tokens that can be generated per unit of energy consumed. Blackwell's performance per watt in inference systems is already an order of magnitude higher than Hopper. Since all data centers are constrained by energy supply, this means using Blackwell can generate more revenue under the same energy consumption than any previous architecture, while its excellent "performance per dollar" can also help customers improve gross margins. As long as we can continue to come up with great new ideas, we will continue to enhance customers' revenue-generating capacity, AI capabilities, and profit margins by releasing new architectures.
The Rubin platform also carries a lot of new, groundbreaking ideas. However, I cannot disclose specific details at this time.
The focus for this year and next is to fully ramp the production capacity and data center deployment of Grace Blackwell and Blackwell Ultra. This year will clearly be a record year, and I expect next year to set new records again. We will keep moving toward two goals: technically, advancing toward artificial superintelligence (ASI) by continuously improving AI performance; commercially, continuously enhancing the revenue-generating capacity of our hyperscale data center customers.
Q: You said the AI market's compound annual growth rate (CAGR) could reach 50%. Considering your visibility into next year's business, can this 50% growth rate be seen as a reasonable reference target for your data center business growth next year?
A: We are very confident about future growth. On one hand, we have received very substantial business forecasts for next year from large customers, we are continuously winning new business, and new AI startups keep emerging. On the other hand, market demand is extremely strong: H100 and H200 are sold out, and startups and large cloud service providers alike are competing for computing power, fully demonstrating the reality and strength of the demand. We believe we are still in the early stages of the AI explosion.
Several key drivers are behind this growth. First, AI-native startups are experiencing explosive growth, with their funding and revenue growing tenfold. Second, the popularity of open-source models is driving large enterprises, SaaS companies, and industrial giants to fully join the AI revolution, opening up new growth sources. In the long run, the annual capital expenditure (CapEx) of large cloud service providers has reached about $600 billion, and it is entirely reasonable for us to occupy an important share in this huge market. Therefore, we foresee a period of rapid and significant growth in the coming years and up to 2030.
Looking ahead, Blackwell, the world's most anticipated next-generation AI platform, is ramping production at full speed to meet extraordinary demand. Our next-generation platform Rubin, with its six core new chips, has already been taped out. We have entered an annual iteration rhythm, and the Blackwell and Rubin platforms will support the global build-out of $3 to $4 trillion of AI factories by 2030. The scale of the AI factories customers are building is also expanding exponentially: from tens-of-megawatts facilities with thousands of GPUs, to hundred-megawatt facilities with hundreds of thousands of GPUs, and soon to gigawatt-level super AI factories with millions of GPUs spanning multiple regions.
The demand itself is also evolving. Simple chatbots are upgrading to "agentic AI" capable of researching, planning, and using tools, which increases computing demand by several orders of magnitude and opens up a vast enterprise market. More excitingly, the era of physical AI has arrived, opening a new chapter in robotics and industrial automation, where every industrial company will need both a physical factory and an AI factory.
In summary, we are at the beginning of a new industrial revolution, the opportunities ahead are enormous, and the AI race is in full swing.
Risk Disclosure and Statement of This Article: Dolphin Research Disclaimer and General Disclosure