
Jensen Huang appears in Taipei, revealing the keywords of NVIDIA's next "AI blueprint": Rubin, silicon photonics, and the Chinese market

NVIDIA CEO Jensen Huang recently visited Taipei, emphasizing that the decision on future versions of the H20, the AI chip supplied specially for the Chinese market, rests with the U.S. government. He revealed that the Rubin-architecture AI GPU and a silicon photonic processor have completed initial tape-out. The visit comes amid escalating tech competition between China and the U.S.; while in Taipei, Huang met with senior TSMC executives to discuss cooperation and management philosophy. NVIDIA faces challenges over alleged security risks of its AI chips in the Chinese market and is reportedly preparing a next-generation AI chip for China based on the Blackwell architecture.
According to Zhitong Finance APP, NVIDIA (NVDA.US) CEO Jensen Huang arrived in Taipei, Taiwan on Friday to visit the chip giant's long-time foundry partner, TSMC, the world's largest chip manufacturer. As the most valuable company in the world, NVIDIA faces mounting friction over import and export access for its industry-leading AI chips, the most critical AI infrastructure hardware, amid escalating tensions between Washington and Beijing. Huang also revealed that the Rubin-architecture AI GPU and a silicon photonic processor have completed initial tape-out.
His visit comes just days before NVIDIA releases its earnings report next Wednesday Eastern Time. The chipmaker has reportedly asked some suppliers to halt manufacturing or packaging and testing work related to the H20 AI chip, owing to Beijing's cautious stance on the chip's security risks. There are also reports that the company is preparing a next-generation AI chip for the Chinese market based on the newly launched Blackwell architecture.
In an interview in Taipei, Huang stated that the final decision on the next generation of the China-specific AI chip, namely the successor product to the H20 AI chip, lies with the U.S. government.
"The main purpose of my visit is to meet with TSMC," he said in front of reporters in Taipei, adding that he would only stay for a few hours and would leave after having dinner with TSMC's executives. All statements made during the interview were broadcast live by local media at the Taipei airport upon his arrival on a private jet.
Huang mentioned that TSMC's management had asked him to give a speech. TSMC said in a statement that Huang would deliver an internal talk on his "management philosophy," without providing further details.
Rubin and Silicon Photonics
Huang stated that his visit was to thank TSMC, and said the two companies have completed tape-out work on six new AI chips, including a new AI GPU based on the Rubin supercomputer architecture and a brand-new silicon photonic processor. "Tape-out" typically refers to the point at which a chip design is finalized and handed to the foundry to begin small-scale production.
"This is the first time in our history that every chip is brand new and revolutionary," he said during the interview. "We have completed almost all the tape-outs for the next generation of chips."
The Rubin architecture is positioned as the direct successor to Blackwell, with mass production targeted for 2026. The key upgrades include a move to HBM4 (roughly 13 TB/s of bandwidth, a significant step up from Blackwell's roughly 8 TB/s), faster NVLink (a total bandwidth of about 260 TB/s), and support for higher-density racks (such as a 600 kW rack target). In addition, AI server clusters pairing the Vera CPU with the Rubin GPU will succeed the current "Grace-Blackwell" combination.
The silicon photonic processor is most likely intended for AI data center networking and ultra-high-speed interconnects, that is, silicon photonic switching/transceiver chips rather than members of the AI GPU line, according to independent semiconductor research firms such as SemiAnalysis. Earlier media reports indicated that NVIDIA has integrated its silicon photonic engine into the Quantum-X (InfiniBand) and Spectrum-X (Ethernet) high-performance switch ASICs for rack- and cluster-level optical interconnects (the co-packaged optics, or CPO, concept), supporting larger-scale AI GPU interconnection in the Rubin era.
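For a rough sense of the generational step, the minimal Python sketch below works through the memory-bandwidth uplift implied only by the approximate figures quoted above (about 8 TB/s of HBM bandwidth on Blackwell versus about 13 TB/s of HBM4 on Rubin, plus the roughly 260 TB/s NVLink total); these are the article's rounded numbers, not confirmed NVIDIA specifications.

```python
# Back-of-the-envelope comparison using the approximate figures quoted above.
# These are the article's rounded numbers, not confirmed NVIDIA specifications.

blackwell_hbm_tbps = 8.0         # approx. HBM bandwidth per Blackwell GPU (TB/s), as quoted above
rubin_hbm4_tbps = 13.0           # approx. HBM4 bandwidth per Rubin GPU (TB/s), as quoted above
rubin_nvlink_total_tbps = 260.0  # approx. total NVLink bandwidth cited for Rubin-era systems (TB/s)

hbm_uplift = rubin_hbm4_tbps / blackwell_hbm_tbps
print(f"HBM bandwidth uplift, Blackwell -> Rubin: ~{hbm_uplift:.2f}x")
print(f"Rubin-era NVLink total bandwidth cited: ~{rubin_nvlink_total_tbps:.0f} TB/s")

# For memory-bandwidth-bound inference workloads, per-GPU throughput tends to
# scale roughly with HBM bandwidth, so a ~1.6x step is a meaningful per-chip
# gain even before counting NVLink and rack-density improvements (such as the
# 600 kW rack target mentioned above).
```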
NVIDIA has also revealed that silicon photonic technology may first be applied to switch ASICs, meaning CPO will initially land on the switch side while the AI GPU side continues to rely mainly on high-speed copper interconnects, for reliability and cost reasons. The "silicon photonic processor" of the Rubin era is therefore likely to appear first in network switching and interconnect devices, providing the optical bandwidth foundation for larger-scale AI GPU pooling and flatter AI server cluster networks.
The demand for AI computing power driven by inference is vast, and it is expected to keep the artificial intelligence computing infrastructure market growing exponentially; Jensen Huang also sees "AI inference systems" as NVIDIA's largest source of future revenue. Ever-larger demand for AI compute inevitably means enormous demand for optical interconnects, so silicon photonics has considerable potential in large-scale scenarios that require high bandwidth, low power consumption, and low heat output, such as high-speed data communication and data center interconnects. As cloud AI computing services built on training and inference workloads, along with generative AI applications such as ChatGPT, continue to gain adoption, demand for AI computing power will surge and silicon photonics will play an increasingly important role.
With Moore's Law approaching its limits, the performance gains of traditional electronic chips have slowed markedly. Chip packaging based on silicon photonics offers an optics-based path to further performance scaling, allowing chip performance to keep expanding even under the constraints of nanometer-scale process technology. Silicon photonics integrates optical components such as lasers with silicon-based integrated circuits, using light rather than electrical signals to achieve high-speed data transmission, longer transmission distances, and low power consumption. Compared with ordinary electrical-signal chips, silicon photonic chips can also deliver much lower latency.
In the landscape of silicon photonic technology, "Co-Packaged Optics (CPO)" and "Optical I/O" form two complementary yet distinctly oriented paths: the former prioritizes solving the power consumption and panel density bottlenecks of rack-level switching ASIC interfaces, while the latter positions optical transceivers as chiplets, serving as the next-generation off-chip bus between computing chips like CPU/GPU/NPU.
Is the Chinese market about to welcome Blackwell architecture AI chips?
Earlier this month, U.S. President Donald Trump opened the door to selling more advanced NVIDIA chips than the H20 to China and reached an agreement with NVIDIA and the other AI chip leader, AMD (AMD.US), under which the U.S. government will receive a 15% share of revenue from certain advanced AI chips sold in the Chinese market. According to media reports this week, NVIDIA is developing a new custom chip for the Chinese market, tentatively named "B30A" and based on its latest Blackwell architecture, which would outperform the H20 AI chip built on the previous-generation Hopper architecture.
Compared with NVIDIA's previous-generation AI GPU, the H100, the H20 is significantly weaker in AI training, particularly multi-GPU parallel training. Its biggest advantages, however, are NVIDIA's unique CUDA ecosystem and the H20's single-card inference efficiency and throughput, which give AI inference clusters in the Chinese market an ecosystem and deployment-efficiency edge and make the chip hard to replace in China's ultra-large-scale AI inference workloads.
When asked about the B30A, Jensen Huang stated that NVIDIA is in discussions with the Trump administration to provide a successor to the H20 AI chip for China, but this is not something the company can decide on its own. "Of course, it depends on the U.S. government. We are in dialogue with them, but it is too early to draw conclusions," he said in an interview.
NVIDIA only received U.S. government approval to resume sales of the H20 in July of this year. The chip was developed specifically for the Chinese market after the Biden administration imposed comprehensive export restrictions on NVIDIA's AI chip lineup in 2023; the Trump administration then abruptly ordered a halt to sales in April before granting the July approval.
Shortly after Washington's approval, NVIDIA reportedly placed orders with TSMC for up to 300,000 H20 chips to add to its existing inventory, given strong demand from Chinese tech companies. Days later, however, the chips faced allegations of potential security risks. NVIDIA insists its chips carry no backdoor risks.
Media reports on Friday cited two informed sources stating that Foxconn has been asked by NVIDIA to stop any manufacturing or packaging testing related to the H20 chip. A third source indicated that NVIDIA wants to first digest its existing H20 inventory. Foxconn did not immediately respond to requests for comment.
Tech media The Information reported on Thursday that NVIDIA has instructed Amkor Technology, a chip packaging leader based in Arizona, to halt processes related to the H20 chip this week, while also notifying South Korea's Samsung Electronics to suspend related work.
Amkor provides advanced chiplet packaging processes for this chip, while Samsung supplies the HBM memory system for this model. Both companies did not immediately respond to requests for comment.
When asked whether NVIDIA had requested suppliers to stop production, Jensen Huang told reporters in Taipei that NVIDIA has prepared a large number of H20 chips and is currently waiting for purchase orders from Chinese customers. "When we receive orders, we will be able to procure more."
"We continuously manage the supply chain to respond to market conditions," an NVIDIA spokesperson stated in a statement, adding, "As recognized by both governments, the H20 is neither a military product nor used for government-related infrastructure." Jensen Huang stated that shipping H20 to China is not a national security issue, and "we are deeply grateful" to be able to deliver H20 AI chips to China