
NVIDIA (Minutes): Blackwell contributes 70% of data center compute revenue
NVIDIA (NVDA.O) released its Q1 FY2026 earnings report (for the quarter ended April 2025) after the U.S. market close on May 29, Beijing time. Key details are as follows:
Below are the minutes of NVIDIA's FY26Q1 earnings call. For earnings analysis, please refer to NVIDIA: No Doubt, Still the Universe's Top Stock!
1. NVIDIA (NVDA.US) Key Earnings Highlights
2. Detailed Content of NVIDIA's Earnings Call
2.1 Key Executive Statements
1. H20 Export Controls: Q1 recognized $4.6 billion in H20 revenue (all shipped before the ban took effect), and recorded a $4.5 billion inventory and purchase-obligation write-down (lower than initially expected because some materials could be reused). A further $2.5 billion of H20 orders could not be delivered in Q1, and Q2 China data center revenue is expected to "decline significantly." Losing the China AI accelerator market (estimated at nearly $50 billion) will weigh on the long-term business.
2. Data Center Business:
a. Products:
- Blackwell is the fastest-adopted product in company history, contributing 70% of data center compute revenue. The Hopper architecture transition is largely complete.
- GB200 NVL72: Supports data-center-scale workloads and lowers inference costs; manufacturing yields are improving. Racks have been delivered to enterprise and sovereign clients.
- GB300: Sampling began this month for cloud service providers (CSPs), with mass production in late Q2. Compatible with GB200, featuring 50% more HBM and 50% higher FP4 inference performance.
- Blackwell Ultra: The next phase of the roadmap, building on GB200 experience for a smooth transition.
b. Inference Demand:
- Hyperscalers deploy nearly 1,000 NVL72 racks weekly (72,000 Blackwell GPUs; see the arithmetic sketch after this list). Clients like Microsoft and OpenAI are accelerating GB200 adoption.
- Microsoft processed 100 trillion tokens in Q1, up 5x YoY, with surging demand for Azure OpenAI.
- NVIDIA Dynamo (an inference-serving framework running on Blackwell) boosts inference throughput 30x and cuts latency 5x for clients like Capital One.
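To make the deployment figure concrete, here is a minimal Python sanity check of the rack arithmetic above; the annualized pace is our own extrapolation under a constant-rate assumption, not a figure from the call:

```python
# Back-of-envelope check of the deployment figures cited above.
RACKS_PER_WEEK = 1_000   # "nearly 1,000 NVL72 racks weekly"
GPUS_PER_RACK = 72       # each NVL72 rack holds 72 Blackwell GPUs

gpus_per_week = RACKS_PER_WEEK * GPUS_PER_RACK
print(f"GPUs deployed per week: {gpus_per_week:,}")        # 72,000

# Hypothetical annualized pace IF the weekly rate were sustained
# (our assumption, not a company forecast):
print(f"Annualized racks: {RACKS_PER_WEEK * 52:,}")        # 52,000
print(f"Annualized GPUs:  {gpus_per_week * 52:,}")         # 3,744,000
```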
c. Ecosystem Advantages:
- GB200 NVL72 delivers 30x higher inference throughput than H200 in Llama 3.1 benchmarks, with software optimizations improving performance 1.5x monthly.
- Nearly 100 AI factories are under construction (up 2x YoY), with the average GPU count per factory doubling. AT&T, BYD, and Foxconn have adopted the full-stack architecture.
3. Innovation: Launched the Llama Nemotron open reasoning models, boosting the performance of enterprise agentic-AI platforms (accuracy +20%, inference speed +5x).
4. Networking & Hardware:
a. NVLink72: Single-rack bandwidth of 130 TB/s (comparable to global internet peak traffic), with Q1 shipments exceeding $1 billion.
b. NVLink Fusion: Lets partners connect semi-custom CPUs and accelerators to NVIDIA platforms, with partners like MediaTek and Qualcomm joining.
c. Spectrum-X Switches: Annualized revenue run rate over $8 billion, with Google Cloud and Meta added as customers. Co-packaged optics improve energy efficiency 3.5x.
5. Q2 Financial Guidance:
a. Revenue: $45 billion (±2%); the Blackwell ramp drives growth across platforms, partially offset by the decline in China data center revenue (ranges worked out in the sketch after this list).
b. Gross Margin: GAAP 71.8%, non-GAAP 72% (±50 bps), with Blackwell driving sequential improvement.
c. Expenses: Full-year OpEx growth target ~30%.
d. Shareholder Returns: Returned $14.3 billion in Q1 (dividends + buybacks).
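For modeling purposes, a small Python sketch converting the guidance midpoints and tolerances above into explicit ranges (the arithmetic is ours, not management's):

```python
# Revenue guidance: $45B midpoint, plus or minus 2%.
revenue_mid = 45.0
revenue_tol = 0.02
lo = revenue_mid * (1 - revenue_tol)
hi = revenue_mid * (1 + revenue_tol)
print(f"Revenue range: ${lo:.1f}B to ${hi:.1f}B")          # $44.1B to $45.9B

# Non-GAAP gross margin: 72% midpoint, plus or minus 50 bps (0.5 points).
gm_mid = 72.0
gm_tol = 0.5
print(f"Non-GAAP gross margin: {gm_mid - gm_tol:.1f}% "
      f"to {gm_mid + gm_tol:.1f}%")                        # 71.5% to 72.5%
```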
2.2 Full Transcript of Jensen Huang's Remarks
On Export Controls
China is one of the world's largest AI markets and a springboard to global success. Half of the world's AI researchers are based there; the platform that wins China's developers is positioned to lead globally. However, this $50 billion market is now effectively closed to U.S. firms. The H20 ban ended our Hopper data center business in China, and no further cut-down of Hopper can comply with the new rules, forcing multi-billion-dollar write-downs. We are exploring limited alternatives, but Hopper is no longer an option. With or without U.S. chips, China's AI will advance; it needs compute to train and deploy advanced models. The question is not whether China will have AI (it already does) but whether one of the world's largest AI markets will run on U.S. platforms. Shielding Chinese chipmakers from U.S. competition only strengthens them abroad and weakens the U.S. Export restrictions have spurred Chinese innovation and scale.
AI competition is not just about chips but about which tech stack the world runs on. As that stack expands to include 6G and quantum computing, U.S. infrastructure leadership is at risk. U.S. policy has assumed China cannot make AI chips; that assumption was always dubious and is now clearly false. China has formidable manufacturing capability. Ultimately, the platform that wins the world's AI developers wins the race. Export controls should strengthen U.S. platforms, not push half the world's AI talent to rivals.
On Open-Source Models Like DeepSeek
China's DeepSeek and Qwen are among the finest open-source models. Released free, they have gained global popularity. DeepSeek R1, in a moment reminiscent of ChatGPT, introduced reasoning AI: better answers in exchange for more thinking. Reasoning supports step-by-step problem-solving, planning, and tool use, turning models into agents.
Reasoning is compute-intensive, requiring hundreds to thousands of times more tokens per task, and it is driving a step-function increase in demand. AI's scaling laws now apply to inference. DeepSeek also highlights open-source AI's strategic value: when popular models are trained and optimized in the U.S., they drive usage, feedback, and improvement, cementing U.S. stack leadership.
The U.S. must remain the preferred platform for open-source AI, supporting collaboration with top global developers, including China’s. When models like DeepSeek run best on U.S. infrastructure, the U.S. wins.
On Domestic Manufacturing
President Trump outlined a bold vision to revive advanced manufacturing, create jobs, and bolster security. Future factories will be highly automated. We share this vision. TSMC is building six fabs and two advanced packaging plants in Arizona for NVIDIA chips. Process certification is underway, with volume production expected by year-end. SPIL and Amkor are also investing in Arizona for packaging and testing.
In Houston, we're partnering with Foxconn on a 1M sq ft AI supercomputer factory, and Wistron is building a similar facility in Fort Worth, Texas. To support these investments, we've made long-term purchase commitments, a deep investment in U.S. AI manufacturing. Our goal is to build everything from chips to supercomputers in the U.S. within a year. Each GB200 NVL72 rack contains 1.2M components and weighs nearly 2 tons. No one has produced supercomputers at this scale before; our partners are making extraordinary efforts.
On the AI Diffusion Rule
President Trump rescinded the "counterproductive" AI Diffusion Rule, proposing a new policy to promote U.S. AI technology with trusted partners. During his Middle East trip, he announced historic investments. I was honored to join him in announcing Saudi Arabia's 500MW AI infrastructure project and the UAE's 5GW AI campus. President Trump wants U.S. technology to lead the future. These deals are win-wins: they create jobs, advance infrastructure, increase tax revenue, and reduce the trade deficit.
The U.S. will always be NVIDIA’s largest market and infrastructure hub. Every nation now sees AI as the next industrial revolution’s core—a new industry producing intelligence and critical infrastructure. Countries are racing to build national AI platforms. At Computex, we announced Taiwan’s first AI factory with Foxconn and the local government. Last week, I launched Sweden’s first national AI infrastructure. Japan, Korea, India, Canada, France, the UK, Germany, Italy, and Spain are all building national AI factories to empower startups, industries, and societies. Sovereign AI is NVIDIA’s new growth engine.
2.3 Q&A
Q: You’ve discussed inference scaling for a year, with visible results. Can you elaborate on current capacity, inference business scale, and whether future inference will require full NVL72 racks?
A: Capacity: On track to meet most demand. Grace Blackwell NVLink72 is the ideal solution, now in full production.
Tech Edge: Reasoning inference generates 100-1,000x more tokens than a single chat exchange. Grace Blackwell offers 40x speed and throughput gains over Hopper, cutting costs and improving response quality.
Future: Full NVLink72 racks are essential for complex inference tasks. Their design prioritizes efficient, high-quality responses for prolonged deliberation.
Q: On China’s impact, you cited ~$15 billion total, with $8 billion lost in Q2. Will remaining quarters see further losses? How to model this?
A: Q2 Loss: China data center revenue drops sharply, with $8 billion in unfilled H20 orders.
Ongoing Impact: Beyond Q2, other orders are unfulfillable, reflected in the $4.5 billion write-down.
Long-Term: The unaddressable China AI accelerator TAM (~$50 billion) will hurt long-term business.
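One plausible reconciliation of the ~$15 billion cited in this question, assuming it is simply the sum of the three H20 figures quoted elsewhere in these minutes; the call did not confirm this exact breakdown, so treat the decomposition as an assumption:

```python
# Rough reconciliation of the ~$15B total H20 impact cited in the question.
# All figures in $B, taken from elsewhere in these minutes; summing them
# (including the write-down alongside unshipped orders) is our assumption.
q1_unshipped   = 2.5   # H20 orders that could not be delivered in Q1
inventory_hit  = 4.5   # inventory / purchase-obligation write-down
q2_lost_orders = 8.0   # H20 revenue assumed lost from Q2 guidance

total = q1_unshipped + inventory_hit + q2_lost_orders
print(f"Implied total H20 impact: ~${total:.1f}B")   # ~$15.0B
```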
Q: At GTC, you outlined a ~$1 trillion AI spend path. Progress? Is this evenly distributed (CSPs, sovereigns, enterprises)? Any client digestion periods?
A: Progress: AI is in early infrastructure buildout, with inference driving new growth. Cloud, enterprise, telecom, and manufacturing AI factories are nascent.
Distribution:
- Cloud: U.S. CSPs lead (largest install base), but AI is spreading to enterprises (e.g., RTX Pro servers), telecom (AI-powered 6G), and manufacturing (AI factories).
- Sovereigns: National AI platforms (e.g., Middle East, Europe) are new growth engines.
Digestion: Not directly addressed.
Q: Many large GPU cluster announcements (Saudi, UAE, Oracle, xAI). Any unannounced projects? How do these affect Blackwell lead times and mid-2025 capacity visibility?
A: Unannounced: Orders have grown since GTC. ~100 AI factories are underway globally, with many projects unannounced. AI is becoming essential infrastructure.
Capacity: Expanding supply chains (especially U.S. manufacturing). Blackwell lead times remain stable with rising capacity. Mid-2025 visibility is strong.
Q: The $8 billion Q2 H20 loss exceeds expectations by ~$3 billion. To hit $45 billion in revenue, must other businesses overdeliver by $2-3 billion? Is the non-China business outperforming? What are the drivers?
A: Logic: Without export controls, we would have had ~$8 billion in H20 orders.
Non-China: Blackwell adoption exceeds expectations, driven by broad client uptake and supply chain ramp.
Q: With relaxed AI rules, sovereign demand, and supply chain improvements, is management more confident in sustained sequential growth?
A: Inference AI Demand: Problem-solving breakthroughs drive exponential growth (e.g., agentic AI).
Rule Relaxation: U.S. tech stack expands globally (e.g., Middle East, Europe), with AI as critical infrastructure.
Enterprise AI: RTX Pro servers integrate compute, storage, and networking, enabling global deployment.
Industrial AI: Global factory builds (chips, electronics) and Omniverse/robotics fuel AI factory demand.
These four drivers are turbocharging growth.
Q: Does the July-quarter guidance assume no H20 replacement for China? Have any modified products been approved? Was the Q2 shipment shortfall due to lack of approval?
A: Export Impact: New rules block Hopper products (e.g., H20) from China. The Q2 (July-quarter) guidance reflects this.
Alternatives: None approved yet. Research continues, but the strict rules mean there is no timeline.
Q: Prior China revenue was ~$7-8 billion quarterly. If alternatives are approved, can this level return? When?
A: No specifics given.
Q: What drives networking growth (especially Ethernet at CSPs)? Any attach rate changes?
A: Drivers:
- Spectrum-X Ethernet: Boosts AI cluster utilization from 50% to 85-90% via InfiniBand-like technology, enhancing ROI (e.g., a 40-point utilization gain on a $10 billion cluster is worth ~$4 billion; worked through in the sketch below).
- Platform Synergy: NVLink, InfiniBand, Spectrum-X, and BlueField form a networking moat.
- CSP Adoption: Two major CSPs added Spectrum-X last quarter.
Attach Rates: No data shared.
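Working the quoted ROI example through in Python; the 50% and 85-90% utilization figures and the $10 billion cluster cost come from the answer above, while the choice of the 90% endpoint and the direct mapping of utilization points to dollar value are the speaker's simplification, reproduced here as stated:

```python
# The Spectrum-X ROI example from the answer above, made explicit.
cluster_cost = 10.0   # $B invested in the AI cluster
util_before  = 0.50   # baseline utilization
util_after   = 0.90   # with Spectrum-X (top of the quoted 85-90% range)

gain_points = util_after - util_before        # 0.40, i.e. 40 points
value = cluster_cost * gain_points            # speaker's simplification:
                                              # each point of utilization
                                              # ~= a point of cluster value
print(f"Utilization gain: {gain_points:.0%}")  # 40%
print(f"Implied value:   ~${value:.1f}B")      # ~$4.0B
```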
<End>
Disclosures & Disclaimers: Dolphin Research Disclosures