
Marvell ignites expectations of a surge in AI ASIC demand; could Broadcom be the biggest beneficiary?

Marvell Technology drew high praise from Wall Street analysts for its custom AI chip (AI ASIC) business, and its stock rose more than 10% in early trading on Wednesday. Some analysts argue that Broadcom will be the biggest long-term beneficiary of the AI ASIC demand wave that Marvell has highlighted. Evercore ISI analyst Mark Lipacis noted that the new custom AI chip design projects are expected to ramp up quickly between 2026 and 2027, each potentially generating billions of dollars in lifecycle revenue; he maintains an "Outperform" rating on Marvell with a $133 price target.
According to Zhitong Finance APP, Marvell Technology (MRVL.US), one of Amazon AWS's largest partners for custom AI chips (i.e., AI ASICs), saw its stock rise more than 10% in early Wednesday trading in the U.S. The surge followed strong praise from several top Wall Street analysts for the company's technology roadmaps and the market-moving announcements around its custom AI chip business, with many taking firmly bullish positions on the stock.
As Marvell fires up the market's imagination around demand for custom AI chips, analysts have stressed that, from an investment standpoint, Broadcom (AVGO.US), known as the "AI ASIC king," will be the biggest long-term beneficiary of the demand wave Marvell has spotlighted.
Among the most notable announcements are two new AI ASIC design wins at hyperscale cloud computing companies. Mark Lipacis, an analyst at Wall Street investment firm Evercore ISI, said these custom AI chip design projects are expected to ramp up quickly in the 2026-2027 timeframe. In addition, Marvell has secured 12 new XPU-Attach design wins, for a total of 13 when combined with the previously announced large AI chip project with Facebook parent Meta Platforms (META.US).
"Marvell believes that each customized AI chip design win can generate billions of dollars in lifecycle revenue within 1.5 to 2 years, while each XPU Attach win can contribute hundreds of millions of dollars in lifecycle revenue per socket within 2 to 4 years," Lipacis wrote in his latest report to clients. This analyst maintains a "outperform" rating on Marvell, with a target price set at $133. As of the beginning of Wednesday's U.S. stock market, Marvell's stock price had risen over 9%, hovering around $76.
XPU is a general term for "extended" processor architectures, typically meaning custom AI ASICs (including TPUs), FPGAs, and other purpose-built AI accelerator hardware, excluding NVIDIA's AI GPUs.
Marvell Sparks AI ASIC Storm
Joseph Moore, a well-known analyst at Wall Street financial giant Morgan Stanley, also takes a positive view of Marvell's AI business, noting that the company raised its financial targets beyond expectations and offered "significant positive comments" on "long-term new growth opportunities in the AI field."
"These numbers are ambitious, much like NVIDIA's performance outlook in its early years. However, we would prefer to see them significantly exceed expectations in recent performance rather than just raising their 2028 targets. Nonetheless, it is undeniable that the substantial growth opportunities for Marvell's AI ASICs are real, and the stock price will react aggressively," analyst Moore wrote.
Morgan Stanley's Moore maintains a "market perform" rating on Marvell's stock and summarizes the key points of the report as follows: The TAM (total addressable market) for custom data-center chips has been raised sharply to $94 billion, up 26% from the figure presented at last year's AI event;
Marvell aims to capture at least 20% of this market, with more than 50% of its data center revenue coming from AI computing demand tied to custom AI chips, i.e., AI ASICs (see the back-of-envelope sketch after this list);
Two new large XPU Compute projects and four new XPU Attach sockets have been added;
The potential lifetime revenue of the company's custom silicon pipeline across all projects totals $75 billion;
The Maia custom AI chip project with Microsoft (MSFT.US) remains on track;
At least one XPU Attach socket will be paired with Trainium3, the AI processor from Amazon (AMZN.US)'s cloud computing division AWS.
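As a rough illustration of what those targets imply (the $94 billion TAM, the 26% increase, and the 20% share goal come from the report summarized above; the derived figures are back-of-envelope arithmetic only, not company guidance or an analyst estimate), a minimal sketch in Python:

```python
# Back-of-envelope check of the Morgan Stanley summary figures above.
# The TAM and share targets are from the report; the derived numbers
# are illustrative arithmetic only, not guidance.

tam_2028 = 94e9          # data-center custom-chip TAM raised to $94B
tam_growth = 0.26        # stated 26% increase vs. last year's AI event
target_share = 0.20      # Marvell's goal of at least 20% market share

implied_prior_tam = tam_2028 / (1 + tam_growth)   # ~ $74.6B
implied_custom_revenue = tam_2028 * target_share  # ~ $18.8B

print(f"Implied prior-year TAM: ${implied_prior_tam / 1e9:.1f}B")
print(f"Implied custom AI revenue at a 20% share: ${implied_custom_revenue / 1e9:.1f}B")
```

In other words, hitting the stated share goal against the raised TAM would imply custom AI chip revenue on the order of $19 billion or more by 2028, which is the scale the analysts are reacting to.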
"Marvell has already sparked an AI ASIC storm and is clearly among the winners of AI on Wall Street, despite recent market sentiment being somewhat negative; we maintain a 'market perform' rating, still preferring Broadcom, which is also in the ASIC field, but expect Marvell's stock price to trend upwards," analyst Moore added.
Prominent Bank of America analyst Vivek Arya pointed out that the company's raised 2028 targets imply potential earnings per share of $8, a full 60% above Wall Street's estimates.
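Taken at face value (the $8 figure and the 60% gap are from Arya's comment; the consensus number below is simply backed out from them as an illustration), that implies a Street consensus of roughly $5 in 2028 EPS:

```python
# Backing out the implied Wall Street consensus from Arya's comment:
# $8 of EPS is said to be 60% above consensus, so consensus ≈ 8 / 1.6.
implied_2028_eps = 8.00
premium_over_street = 0.60

implied_street_consensus = implied_2028_eps / (1 + premium_over_street)
print(f"Implied Street 2028 EPS estimate: ${implied_street_consensus:.2f}")  # ~$5.00
```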
"We remain optimistic about Marvell: the wave of AI capital expenditure expansion from cloud computing giants will drive this company, which focuses on data center business and possesses leading IP across AI computing infrastructure, XPU, networking, electro-photonics, security, and storage/memory, to achieve fundamental upside potential," Arya wrote in a research report.
Arya reiterated Bank of America's "Buy" rating on Marvell Technology and raised his price target from $80 to $90, saying the AI event, along with the product pipeline and earnings-target updates, "may reassure investors and help the stock catch up with its AI chip peers, especially considering that Marvell's current NTM P/E of only 23x is below its historical median of 32x and far below most chip peers."
The biggest beneficiary may not be Marvell, but Broadcom?
Since the beginning of this year, and especially since the emergence of the DeepSeek-R1 large model that shocked Silicon Valley and Wall Street, the bull-market narrative around AI ASICs has been steadily reinforced. However, as Morgan Stanley noted, from a long-term investment perspective the company that stands to benefit most from the AI ASIC demand Marvell is now highlighting may well be Broadcom.
Broadcom is one of the core chip suppliers to Apple and other large tech companies, a key supplier of Ethernet switch chips for large AI data centers worldwide, and the core supplier of the custom AI chips that are crucial for AI training and inference. In the niche market of AI ASICs (non-GPU accelerators) for cloud computing vendors, Broadcom is currently the absolute dominant player with a market share of about 60%; Marvell follows with 13%-15%, while other vendors, including Global Unichip, Alchip, and Samsung LSI, split the remainder.
In the AI ASIC field, Broadcom's strong results over multiple quarters and upbeat outlook tell investors that, under the "ultra-low-cost AI large-model computing paradigm" led by DeepSeek, demand for AI computing power is still growing explosively, and the so-called "oversupply of AI computing power" is merely market overthinking. Especially as DeepSeek ignites an "efficiency revolution" in AI training and inference, pushing future AI large-model development to focus on the two core goals of low cost and high performance, AI ASICs are entering a demand-expansion trajectory even stronger than that of the 2023-2024 AI boom, with major customers such as Google, OpenAI, and Meta expected to keep investing heavily in co-developing AI ASIC chips with Broadcom.
With its clear technological lead in inter-chip communication and high-speed data transfer, Broadcom has in recent years become one of the most important players in custom AI ASIC design. In Google's self-developed server AI chip, the TPU accelerator, Broadcom is a core participant, working alongside Google's team on research and development. Beyond chip design, Broadcom also provides Google with critical inter-chip communication IP and is responsible for manufacturing, testing, and packaging the new chips, underpinning Google's build-out of new AI data centers.
With American tech giants committed to heavy investment in AI, the biggest beneficiaries are likely to be AI ASIC players such as Broadcom, Marvell Technology, and Taiwan's Wistron. Microsoft, Amazon, Google, and Meta, along with generative AI leader OpenAI, are all working with Broadcom or other ASIC specialists to iterate AI ASIC chips for large-scale deployment of inference-side AI computing power. As a result, AI ASICs are expected to gain market share significantly faster than AI GPUs, moving toward rough parity rather than today's situation, in which AI GPUs command a staggering 90% of the AI chip market. The four major American tech giants are expected to spend as much as $330 billion on AI computing power in 2026, nearly 10% more than this year's record level.
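For context on that last figure (both numbers are stated in the paragraph above; the current-year estimate is simply the arithmetic complement, offered as a rough illustration):

```python
# Rough arithmetic behind the capex claim: if ~$330B in 2026 represents
# roughly 10% growth over this year, this year's spend is about $300B.
capex_2026 = 330e9
growth_rate = 0.10

implied_current_year = capex_2026 / (1 + growth_rate)
print(f"Implied current-year AI capex: ${implied_current_year / 1e9:.0f}B")  # ~$300B
```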