Broadcom single-handedly revitalizes "AI faith"! AI ASIC demand embarks on "NVIDIA-style surge trajectory"

Zhitong
2025.09.05 01:29

Broadcom delivered strong results in its fiscal Q3 2025 earnings report, with its stock rising nearly 5% in after-hours trading. As a core supplier in the AI ASIC field, Broadcom's performance and optimistic outlook have revitalized investor confidence in AI, demonstrating continued growth in demand for AI computing power. Despite mounting market concerns about tech stocks, Broadcom's success shows that spending on AI infrastructure remains robust, driving a rebound in chip stocks.

According to Zhitong Finance APP, Broadcom (AVGO.US), one of the biggest winners of the global AI boom, announced its fiscal Q3 2025 results for the period ended August 3 on the morning of September 5, Beijing time. Broadcom is a core chip supplier to Apple and other large tech companies, a key supplier of the high-performance Ethernet switch chips used in large AI data centers worldwide, and a maker of the customized AI chips (AI ASICs) that are crucial for AI training and inference. After the report was released, the stock rose nearly 5% in after-hours trading, significantly revitalizing the recently sluggish "AI faith" and showing investors that spending on AI computing infrastructure by tech giants like Google and Meta and AI leaders like OpenAI remains strong.

Broadcom's strong results and outlook have fully revitalized U.S. tech investors' belief in artificial intelligence, driving chip stocks to regain upward momentum in after-hours trading, a feat even the "AI chip king" NVIDIA (NVDA.US) failed to achieve at the end of August. Mixed results from Salesforce (CRM.US), Marvell Technology (MRVL.US), and NVIDIA had led investors cautious about the "monetization path of artificial intelligence" to sell popular tech stocks heavily; they broadly believed the AI investment frenzy had inflated a bubble in tech stocks. Compounded by rising market expectations of "stagflation" in the U.S. economy under Trump's tariffs, U.S. tech stocks had declined steadily since early September.

However, Broadcom, the strongest force in the AI ASIC field with a market capitalization of $1.4 trillion, used its strong results and optimistic outlook to tell investors that demand for AI computing power is still growing explosively. In particular, demand for AI ASICs and high-performance Ethernet chips is expanding at a pace comparable to the unprecedented surge in NVIDIA's data center AI GPU demand in 2023-2024. Broadcom CEO Hock Tan told Wall Street analysts on the conference call that the company's AI-related revenue prospects will expand "significantly" in fiscal 2026, easing market concerns about a slowdown in AI computing growth.

In the conference call following the quarterly earnings announcement, Hock Tan mentioned that the chip company is collaborating with more potential major clients to develop AI training/inference acceleration chips—this market is currently dominated by NVIDIA's AI GPUs, but the AI ASIC route led by Broadcom is beginning to achieve a surge in market scale in the AI training/inference field. This year, Broadcom's stock price has repeatedly hit historical highs driven by the unprecedented AI investment boom, and together with NVIDIA and TSMC, it has propelled the entire AI computing power industry chain into a bullish market trend.

"In the last financial quarter, one potential customer placed a large-scale mass production order with Broadcom related to AI infrastructure," Tan said, without disclosing the customer's name to analysts. "We now expect the revenue outlook related to AI infrastructure construction for the fiscal year 2026 to expand significantly compared to the already strong growth rate we mentioned last quarter."

On the previous earnings call, Tan had said AI-related revenue for 2026 would follow a growth trajectory similar to this year's, projected at about 50% to 60%. Now, with the addition of a new major customer, whose name Tan did not disclose, bringing "timely and massive demand," AI-related revenue growth will be upgraded in a "substantive and considerable" manner, Tan said.

Broadcom said in the report, released on the morning of September 5 Beijing time, that management expects overall revenue for the fourth fiscal quarter (ending in October) of approximately $17.4 billion, above Wall Street analysts' average estimate of about $17.05 billion and representing year-on-year growth of about 25%.

Before the earnings report was released, the market already had very high expectations for Broadcom's results and outlook, so beating those expectations has significantly boosted bullish sentiment toward Broadcom and the entire AI computing supply chain. Since its year-to-date low in April, Broadcom's stock has more than doubled, adding about $730 billion to the company's market value and making it the third-best performer in the Nasdaq 100 index, with a stronger gain than NVIDIA's.

Investors have been looking for signs that AI computing power spending remains strong. Last week, NVIDIA provided mixed earnings guidance, raising concerns in the market about a potential bubble burst in the artificial intelligence industry.

Although Broadcom has not experienced the same explosive market value expansion as NVIDIA, whose market value has increased by over $3 trillion since the start of 2023, it is still viewed by the market as a core beneficiary of the AI boom. Large customers that develop and operate continuously updated large AI models, such as Google and Facebook's parent company Meta, rely heavily on Broadcom's customized AI ASIC chips and high-performance networking equipment to handle massive AI workloads.

On Google's and Meta's earnings calls, Sundar Pichai and Mark Zuckerberg both said they would strengthen collaboration with chipmaker Broadcom to launch self-developed AI ASICs. Both giants partner with leaders in custom silicon; the TPU (Tensor Processing Unit) co-developed by Google and Broadcom is a typical example of an AI ASIC.

During the earnings call, Tan said he and the board have agreed that he will serve as Broadcom's CEO at least until 2030.

Earnings data shows that for the third fiscal quarter ended August 3, Broadcom's overall revenue grew 22% to nearly $16 billion. Excluding certain items, adjusted profit was $1.69 per share, beating Wall Street analysts' average estimates of about $15.8 billion in revenue and $1.67 in earnings per share, both of which had been revised upward repeatedly in recent weeks. Broadcom's AI-infrastructure-related semiconductor revenue in the quarter was approximately $5.2 billion, up 63% year-on-year and above Wall Street's average estimate of $5.11 billion. Management expects this category to reach about $6.2 billion in the fourth fiscal quarter, higher than analysts' prior estimate of approximately $5.82 billion.
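As a quick sanity check, the growth figures above are internally consistent. The sketch below back-calculates the implied year-ago bases from the reported growth rates; the dollar figures are the article's own numbers (in $ billions), while the derived bases are illustrative, not reported figures.

```python
# Back-of-the-envelope check of the quoted Broadcom figures.
# Reported: Q4 guidance $17.4B at ~25% YoY; Q3 AI revenue $5.2B at 63% YoY;
# Q4 AI guidance $6.2B. Derived values are implied, not disclosed.

q4_guide = 17.4                      # guided Q4 FY2025 revenue ($B)
q4_prior = q4_guide / 1.25           # implied year-ago quarter at 25% growth
print(f"Implied Q4 FY2024 revenue: ~${q4_prior:.1f}B")

ai_q3 = 5.2                          # Q3 AI-related semiconductor revenue ($B)
ai_q3_prior = ai_q3 / 1.63           # implied year-ago AI revenue at 63% growth
print(f"Implied year-ago AI revenue: ~${ai_q3_prior:.2f}B")

ai_q4_guide = 6.2                    # guided Q4 AI revenue ($B)
qoq = ai_q4_guide / ai_q3 - 1        # sequential growth implied by the guide
print(f"Guided sequential AI growth: ~{qoq:.0%}")
```

The guided $6.2 billion thus implies roughly 19% sequential growth in AI revenue on top of the 63% year-on-year rate.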

In recent days, other chip manufacturers focused on AI computing infrastructure have performed poorly. One of Broadcom's competitors in the customized semiconductor market, Marvell Technology Inc., saw its stock price plummet 19% last Friday after the company's data center business revenue fell short of expectations.

In addition to collaborating with major clients like Google to develop customized AI accelerators, namely AI ASIC chips, Broadcom has also been upgrading its high-performance networking equipment to better move data between the AI server systems at the heart of AI data centers. As Tan's latest comments suggest, Broadcom is making good progress in winning major clients that need high-performance equipment for heavy AI training/inference workloads.

Through years of mergers and acquisitions, Tan has transformed Broadcom into a behemoth spanning both software and hardware. In addition to its semiconductor business closely tied to AI infrastructure, the chip giant, headquartered in Palo Alto, California, also supplies critical wireless connectivity components for Apple's iPhone.

The "AI ASIC Super Wave" Led by Google and Meta is Coming

As American tech giants double down on artificial intelligence, the biggest beneficiaries include not only NVIDIA but also AI ASIC giants like Broadcom, Marvell Technology, and Taiwan's MediaTek. Microsoft, Amazon, Google, and Meta, as well as generative AI leaders like OpenAI, are all collaborating with Broadcom or other ASIC giants to iterate AI ASIC chips for massive inference-side deployments. The future market-share expansion of AI ASICs is therefore expected to significantly outpace AI GPUs, moving toward parity rather than today's situation, in which NVIDIA dominates the AI GPU market with a staggering 90% share.

With its absolute technological leadership in inter-chip communication and high-speed data transmission between chips, Broadcom has become the most important player in the ASIC customized chip market for AI in recent years. For instance, in Google's self-developed server AI chip—TPU AI acceleration chip, Broadcom is a core participant, collaborating with Google's team in the research and development of the TPU AI acceleration chip. In addition to chip design, Broadcom also provides Google with critical inter-chip communication intellectual property and is responsible for manufacturing, testing, and packaging new chips, thereby safeguarding Google's expansion into new AI data centers.

The high-performance Ethernet switch chips led by Broadcom are primarily used in data centers and server clusters, processing and transmitting data streams efficiently and rapidly. Broadcom's Ethernet chips are essential for building AI hardware infrastructure, as they ensure high-speed data transmission between GPU processors, storage systems, and networks. That is crucial for generative AI applications like ChatGPT, and especially for those that ingest large data inputs and require real-time processing, such as DALL·E or the text-to-video model Sora.

Building on its unique inter-chip communication technology and numerous patents covering data transmission, Broadcom has become the most important player in the AI ASIC chip market. Not only does Google continue to partner with Broadcom to design customized AI ASIC chips, but giants like Apple and Meta, along with more data center operators, are expected to work with Broadcom long term on high-performance AI ASICs. Broadcom's management projected on an earnings call earlier this year that the potential market for the AI components it builds for global data center operators (Ethernet chips plus AI ASICs) could reach USD 60-90 billion by fiscal year 2027.

It is reported that one of Broadcom's major clients, Google, disclosed the latest details of the Ironwood TPU (its seventh-generation TPU) at a recent conference, showcasing remarkable performance improvements. Compared to TPU v5p, Ironwood's peak FLOPS performance has increased tenfold, with an efficiency improvement of 5.6 times. Compared to Google's TPU v4 launched in 2022, Ironwood's single-chip computing power has increased by more than 16 times.

The data released by Google clearly charts the TPU platform's performance evolution. Ironwood delivers a peak of 4,614 TFLOPS per chip, with 192 GB of HBM and up to 7.4 TB/s of memory bandwidth. By comparison, the TPU v4 released in 2022 offered 275 TFLOPS per chip with 32 GB of HBM and 1.2 TB/s of bandwidth, and the TPU v5p launched in 2023 offered 459 TFLOPS per chip with 95 GB of HBM and 2.8 TB/s of bandwidth.

Performance comparisons show Ironwood achieving 4.2 TFLOPS per watt, slightly below the 4.5 TFLOPS per watt of NVIDIA's B200/B300 GPUs. JPMorgan commented that these figures show purpose-built AI ASICs rapidly narrowing the performance gap with market-leading AI GPUs, pushing hyperscale cloud providers to invest more in cost-effective custom ASIC projects.
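The generation-over-generation multiples quoted above follow directly from the per-chip spec numbers cited in the article. A minimal sketch, using only those cited figures:

```python
# Cross-check of the TPU generation multiples quoted in the article.
# All spec values are per chip, as cited (peak TFLOPS, HBM GB, bandwidth TB/s).
specs = {
    "TPU v4 (2022)":  {"tflops": 275,  "hbm_gb": 32,  "bw_tbs": 1.2},
    "TPU v5p (2023)": {"tflops": 459,  "hbm_gb": 95,  "bw_tbs": 2.8},
    "Ironwood":       {"tflops": 4614, "hbm_gb": 192, "bw_tbs": 7.4},
}

iron = specs["Ironwood"]
v4 = specs["TPU v4 (2022)"]
v5p = specs["TPU v5p (2023)"]

flops_vs_v5p = iron["tflops"] / v5p["tflops"]   # the article's "tenfold"
flops_vs_v4 = iron["tflops"] / v4["tflops"]     # the article's "more than 16x"
bw_vs_v4 = iron["bw_tbs"] / v4["bw_tbs"]

print(f"Peak FLOPS vs v5p: {flops_vs_v5p:.1f}x")
print(f"Peak FLOPS vs v4:  {flops_vs_v4:.1f}x")
print(f"HBM bandwidth vs v4: {bw_vs_v4:.1f}x")
```

The computed ratios (about 10.1x over v5p and 16.8x over v4 in peak FLOPS) match the "tenfold" and "more than 16 times" claims in the text.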

According to the latest forecast from Wall Street financial giant JPMorgan, the chip, built on an advanced 3nm process developed in collaboration with Broadcom, is expected to enter mass production in the second half of 2025, and Ironwood is anticipated to bring Broadcom approximately USD 10 billion in revenue over the next 6-7 months.

It is noteworthy that some media reports indicate Google has recently approached several cloud service providers that primarily lease NVIDIA AI GPU server clusters, asking that their data centers also deploy Google TPU clusters. Representatives of companies involved in the deals privately told the media that Google has reached an agreement with at least one cloud provider, including London-headquartered Fluidstack, which will deploy Google TPU clusters in its New York data center. Gil Luria, a well-known analyst at the Wall Street investment firm D.A. Davidson, said a growing number of cloud providers and large AI application developers are interested in Google's TPU, hoping to reduce their reliance on NVIDIA. After speaking with researchers and engineers at several cutting-edge AI laboratories, D.A. Davidson found that engineers rate Google's custom AI training/inference chip very highly.

After DeepSeek's Global Shock, Broadcom's Stock Has Risen More Strongly Than NVIDIA's; Wall Street Expects New Highs to Continue

Globally, explosive demand for AI computing power, ever-larger AI infrastructure investment programs led by the U.S. government, and tech giants' continued massive spending on large data centers all suggest, for long-term investors in NVIDIA and the AI computing supply chain, that the sweeping global "AI faith" has not yet finished its "super catalysis" of computing-power leaders' stock prices. They are betting that AI supply-chain companies led by NVIDIA, TSMC, and Broadcom will continue to trace a "bull market curve," driving the global equity bull market onward.

Epic stock-price surges by AI computing leaders such as NVIDIA, Google, TSMC, and Broadcom, together with their consistently strong results this year, have fueled an unprecedented AI investment boom across U.S. and global stock markets, driving the global benchmark MSCI Global Index sharply higher since April to recent record highs.

Since the launch of DeepSeek R1 at the end of January shocked Silicon Valley and Wall Street and triggered a record single-day drop in the U.S. AI computing sector, Broadcom's shares have risen more strongly than those of AI chip leader NVIDIA.

As DeepSeek completely initiates an "efficiency revolution" in AI training and inference, focusing future AI large model development on "low cost" and "high performance," the demand for AI ASIC in cloud AI inference computing power is entering a trajectory of even stronger demand expansion than the 2023-2024 AI boom period. Major clients such as Google, OpenAI, and Meta are expected to continue investing heavily to collaborate with Broadcom in developing AI ASIC chips.

As large-model architectures gradually converge on a few mature paradigms (such as standardized Transformer decoders and diffusion-model pipelines), more cost-effective AI ASICs can more easily handle mainstream inference workloads. In addition, some cloud providers and industry giants may deeply couple their software stacks, making their ASICs compatible with common neural-network operators and providing excellent developer tools, which will accelerate the adoption of ASIC inference in standardized, high-volume scenarios. NVIDIA's AI GPUs may focus more on ultra-large-scale frontier exploratory training, rapid experimentation with fast-changing multimodal or novel architectures, and general-purpose computing for HPC, graphics rendering, and visual analytics.

Given the continued explosive growth in demand for Broadcom's Ethernet switch chips and AI ASICs, Wall Street is broadly bullish on Broadcom's stock, expecting it to keep setting new highs. Evercore recently raised its 12-month price target on Broadcom significantly, from $304 to $342, while Morgan Stanley raised its target from $338 to $357.

In addition, "silicon photonics technology" is expected to become an important catalyst driving Broadcom's stock into a new bull-market curve. The wave of silicon photonics led by global chip giants such as NVIDIA, TSMC, and Broadcom is set to evolve into an unprecedented "silicon photonics revolution" sweeping the entire AI computing supply chain, meaning CPO and optical I/O technologies will soon accelerate from cutting-edge laboratories into global deployment.

On one hand, Broadcom is developing its own high-performance CPO switch solutions (integrating its flagship Tomahawk series switch chips); on the other, it has been accumulating optical-interconnect technology, including through acquisitions such as its earlier purchase of Brocade. With a broad global customer base among cloud providers and a mature switch ASIC business, Broadcom stands to significantly enhance the competitiveness of its switching systems as CPO technology is introduced at scale.