
A visual guide to the key points of Jensen Huang's speech

In his speech, Jensen Huang emphasized several advancements in AI technology: Blackwell Ultra, whose inference performance is 40 times higher than Hopper's and which is expected to launch in 2025; the Llama Nemotron models for optimized AI inference; the Isaac GR00T N1 open-source robot AI model; DGX Spark and DGX Station, which enable local LLM training; a collaboration with General Motors on autonomous driving technology; silicon photonics networks that raise AI factory efficiency; the Vera Rubin and Rubin Ultra GPUs, which will drive cost and efficiency improvements in AI data centers; and the CUDA-X libraries, which accelerate engineering simulation on superchips.
→ Blackwell Ultra delivers inference performance 40 times higher than Hopper, improving AI factories and AI inference systems. It is slated to launch in the second half of 2025.
→ The Llama Nemotron model optimizes AI inference for multi-step problem solving in commercial and technical applications.
→ Isaac GR00T N1 is an open-source robotic AI model announced for the robotics and automation field.
→ DGX Spark and DGX Station bring Grace Blackwell AI to the desktop, enabling local LLM training.
→ General Motors is collaborating with NVIDIA to develop AI for autonomous driving, factories, and robotics.
→ Silicon photonics networks (Spectrum-X, Quantum-X) increase AI factory efficiency by 3.5 times and network resilience by 10 times.
→ NVIDIA Vera Rubin and Rubin Ultra GPUs will drive cost and efficiency improvements in AI data centers from 2026 to 2027.
→ The CUDA-X libraries running on GH200 and GB200 superchips have increased engineering simulation speeds by 11 times.