Broadcom CEO: AI revenue to surpass all other revenue combined within two years; cloud giants will dominate custom ASICs while enterprises continue to rely on GPUs

Wallstreetcn
2025.09.10 07:11

Broadcom CEO Hock E. Tan expects the company's AI-related revenue to exceed the combined revenue of its software and non-AI businesses within two years. He has also set a target of up to $120 billion in AI revenue by fiscal year 2030, a goal directly linked to his compensation. He pointed out that the AI chip market will diverge: large cloud service providers will dominate the adoption of customized ASIC chips, while the broader base of enterprise customers will continue to rely on general-purpose GPUs.

Author: Long Yue

Source: Hard AI

Chip giant Broadcom is placing artificial intelligence at the core of its strategy.

According to a Goldman Sachs report released on September 9, President and CEO Hock E. Tan stated plainly at the recent Goldman Sachs Communacopia + Technology Conference that meeting the AI computing needs of specific customers is the company's top priority, and that he expects the company's AI-related revenue to exceed the combined revenue from software and other non-AI businesses within the next two years.

Surge in AI Revenue: Expected to Become an Absolute Pillar in Two Years, Targeting $120 Billion by 2030

The most striking point in the report is Hock E. Tan's aggressive forecast for AI revenue. He explicitly stated that meeting the AI computing needs of a few core customers is the company's top priority, and he expects that "Broadcom's AI revenue will exceed the total of software and non-AI revenue within the next two years." This marks AI's shift from a high-growth business unit to the absolute core pillar of the entire company.

Supporting this ambition is a newly filed executive incentive plan, the "Tan PSU Award." Under its core terms, CEO Hock E. Tan will receive corresponding awards if Broadcom reaches specific AI revenue thresholds before fiscal year 2030 (FY2030). In effect, his compensation is now strongly tied to AI revenue.

Details of the plan show that the highest performance target set for the CEO is annual AI revenue of $120 billion by fiscal year 2030. According to Goldman Sachs' research report, that figure is six times the bank's forecast of $20 billion in AI revenue for Broadcom in fiscal year 2025, underscoring management's extreme confidence in the AI business.
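As a rough sketch of the arithmetic implied by these figures (the $20 billion FY2025 forecast and the $120 billion FY2030 target come from the article; the implied growth-rate calculation is illustrative, not from the report):

```python
# Illustrative arithmetic based on the figures cited above (not from the report).
fy2025_forecast_bn = 20   # Goldman Sachs' FY2025 AI revenue forecast, in $ billions
fy2030_target_bn = 120    # top PSU performance target for FY2030, in $ billions
years = 5                 # FY2025 -> FY2030

multiple = fy2030_target_bn / fy2025_forecast_bn                           # 6.0x
implied_cagr = (fy2030_target_bn / fy2025_forecast_bn) ** (1 / years) - 1  # ~43% per year

print(f"FY2030 target is {multiple:.0f}x the FY2025 forecast")
print(f"Implied compound annual growth rate ≈ {implied_cagr:.0%}")
```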

Market Differentiation: Cloud Giants Favor ASICs, Enterprise Market Still Dominated by GPUs

Regarding the highly anticipated AI accelerator market, Hock E. Tan offered a clear judgment: the market will diverge.

He expects custom ASIC chips to be adopted mainly by large cloud service providers (hyperscalers). These giants have both the capability and the willingness to deeply customize chips for specific workloads, such as their large language models (LLMs), in pursuit of maximum performance and cost efficiency. The report notes that Broadcom's XPU (custom processor) business opportunity comes mainly from seven existing and potential customers.

At the same time, Hock E. Tan believes a large number of enterprise customers may continue to use commercial GPUs. For enterprises that lack the capability or the need to develop custom chips, general-purpose GPUs, typified by NVIDIA's, will remain the preferred choice for deploying AI applications.

New Battlefield: Ethernet Will Dominate AI Clusters

Beyond computing chips, Hock E. Tan also emphasized the growth potential of the AI networking business. He believes Ethernet, a mature technology validated over the past two to three decades, will play an increasingly important role in AI networks. Its growth momentum comes mainly from two sources: first, AI networks generally adopt Ethernet technology; second, as AI computing clusters continue to scale, demand for Ethernet in "scale-up networks" has surged. Hock E. Tan expects Ethernet to begin large-scale deployment in these scale-up networks within the next 18-24 months.

This article is from the WeChat Official Account "Hard AI".