Dolphin Research
2025.03.07 06:33

Broadcom (Minutes): ASIC customers increase from 2 to "3+4"; M&A not under consideration for now

Broadcom (AVGO.O) released its Q1 2025 financial report (for the period ending January 2025) after U.S. stock market hours on March 7, Beijing time:

Below are the minutes from Broadcom's Q1 FY2025 earnings call. For the interpretation of the financial report, please refer to "Marvell Collapse, NVIDIA Lurking? Broadcom to be the Stabilizing Force".

I. $Broadcom(AVGO.US) Core Financial Information Review

  1. FY25 Q1:

Revenue: Total revenue for Q1 FY2025 reached $14.9 billion, up 25% year-on-year; adjusted EBITDA hit a record $10.1 billion, up 41% year-on-year. Semiconductor revenue growth in Q1 was driven by AI: AI revenue was $4.1 billion, up 77% year-on-year, exceeding the guidance of $3.8 billion, while non-AI semiconductor revenue was $4.1 billion, down 9% quarter-on-quarter on seasonality, with the overall recovery remaining slow. Infrastructure software revenue was $6.7 billion in Q1, up 47% year-on-year and 15% quarter-on-quarter.

Profit Margin: Q1 gross margin was 79.1% of revenue, better than originally expected, thanks to higher infrastructure software revenue and a more favorable semiconductor revenue mix. Total Q1 operating expenses were $2 billion, of which R&D was $1.4 billion. Gross margin for the semiconductor solutions segment was approximately 68%, up 70 basis points year-on-year; gross margin for the infrastructure software segment was 92.5%, versus 88% in the same period last year. Q1 operating income was $9.8 billion, up 44% year-on-year, for an operating margin of 66% of revenue. Adjusted EBITDA reached a record $10.1 billion, or 68% of revenue, above the 66% guidance target; this EBITDA figure excludes $142 million of depreciation.
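As a quick sanity check of the reported figures (illustrative only; the inputs are the rounded numbers above, so small rounding differences versus the stated percentages are expected):

```python
# Rough consistency check of Broadcom's reported Q1 FY25 margins.
# All inputs are the rounded figures cited above; output will differ
# slightly from the reported percentages due to rounding.
revenue = 14.9          # total Q1 revenue, $B
gross_margin = 0.791    # 79.1% of revenue
opex = 2.0              # operating expenses, $B
op_income = 9.8         # reported non-GAAP operating income, $B
ebitda = 10.1           # adjusted EBITDA, $B

gross_profit = revenue * gross_margin                  # ~11.8 $B
implied_op_income = gross_profit - opex                # ~9.8 $B, matches the report
print(f"implied operating income: {implied_op_income:.1f} $B")
print(f"operating margin: {op_income / revenue:.1%}")  # ~66% of revenue
print(f"EBITDA margin:    {ebitda / revenue:.1%}")     # ~68% of revenue
```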

Cash Flow and Capital Expenditure: Free cash flow for Q1 was $6 billion, or 40% of revenue, held back by cash interest expense on VMware acquisition debt, cash taxes driven by the mix of U.S. taxable income, the continued delay in the re-enactment of Section 174, and the corporate AMT. Capital expenditure was $100 million. Days sales outstanding at the end of Q1 were 30 days, down from 41 days in the same period last year. Inventory at the end of Q1 was $1.9 billion, up 8% quarter-on-quarter to support future revenue, with 65 days of inventory on hand. At the end of Q1, Broadcom had $9.3 billion in cash and $68.8 billion of gross principal debt. During the quarter, Broadcom used new senior notes, commercial paper, and cash on hand to repay $495 million of fixed-rate debt and $7.6 billion of floating-rate debt, reducing debt by a net $1.1 billion. Broadcom also paid $2.8 billion in cash dividends to shareholders and spent $2 billion to repurchase 8.7 million AVGO shares from employees for tax withholding.

  2. FY25 Q2 Guidance:

Revenue: Total revenue for Q2 is expected to be approximately $14.9 billion, up 19% year-on-year. Total semiconductor revenue is expected to be around $8.4 billion, up 17% year-on-year, with AI revenue expected to reach $4.4 billion, up 44% year-on-year, and non-AI semiconductor revenue expected to be $4 billion. Infrastructure software revenue is expected to be approximately $6.5 billion in Q2, up 23% year-on-year.
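As a simple cross-check, the guidance components roughly add up (figures are rounded as given above, so the totals are approximate):

```python
# Cross-check of the Q2 FY25 guidance components cited above ($B, rounded).
ai = 4.4
non_ai = 4.0
software = 6.5
semis_total_guided = 8.4
total_guided = 14.9

# AI + non-AI should roughly equal guided semiconductor revenue,
# and semiconductors + software should roughly equal guided total revenue.
assert abs((ai + non_ai) - semis_total_guided) < 0.1
assert abs((semis_total_guided + software) - total_guided) < 0.1
print("guidance components are internally consistent")
```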

Gross Profit: In Q2, due to the revenue mix of infrastructure software and the product mix of semiconductors, the overall gross margin is expected to decline by approximately 20 basis points quarter-on-quarter, with adjusted EBITDA expected to account for about 66% of revenue.

Share Count: The Q2 non-GAAP diluted share count is expected to be approximately 4.95 billion shares.

Broadband, having hit bottom in Q4 FY2024, rebounded by a double-digit percentage quarter-on-quarter in Q1, and similar growth is expected in Q2 as service providers and telecom operators increase spending. Server storage revenue declined by a single-digit percentage quarter-on-quarter in Q1, but high single-digit sequential growth is expected in Q2.

With customers continuing to work through channel inventory, the enterprise networking business is expected to remain roughly flat through the first half of FY25. The wireless business declined quarter-on-quarter on seasonality but was roughly flat year-on-year; a slight year-on-year decline is also expected for wireless in Q2. Industrial resales declined by double digits in Q1, and a further decline is expected in Q2.

II. Broadcom Earnings Call Detailed Content

2.1 Key Information from Executive Statements

Semiconductor Business:

Artificial intelligence has driven growth in semiconductor revenue, with AI revenue reaching $4.1 billion in Q1, a year-on-year increase of 77%. Broadcom's performance exceeded expectations due to increased shipments of network solutions for AI supercomputers.

Broadcom is increasing R&D investment in two areas: first, developing next-generation accelerators, taping out the industry's first 2-nanometer AI XPU in 3.5D packaging as it drives toward a 10,000-teraflops XPU; second, scaling clusters of 500,000 accelerators for hyperscale customers, doubling the radix capacity of the existing Tomahawk switch and taping out a next-generation 100-terabit-per-second switch running 200G SerDes to deliver 1.6-Tbps bandwidth. Samples will be delivered to customers in the coming months.
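To illustrate why switch radix matters for cluster scale (this is not from the call; it is a rough sketch assuming a standard non-blocking two-tier leaf-spine fabric, where the maximum endpoint count is radix²/2, and the radix values below are hypothetical examples rather than Broadcom product specs):

```python
# Illustrative only: how switch radix limits the number of accelerators
# reachable in a non-blocking two-tier (leaf-spine) Ethernet fabric.
# Standard result: max endpoints = radix**2 / 2 (half the leaf ports face
# hosts, half face spines). The radix values are hypothetical examples.
def max_endpoints_two_tier(radix: int) -> int:
    return radix * radix // 2

for radix in (128, 256, 512):
    print(f"radix {radix:>3}: up to {max_endpoints_two_tier(radix):,} accelerators in two tiers")
# Doubling radix roughly quadruples the endpoints a two-tier fabric can host,
# which is why higher-radix switches matter for scaling toward very large clusters.
```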

Broadcom is collaborating with hyperscale partners who are investing aggressively in next-generation models, which require higher-performance accelerators and larger AI data center clusters. By FY2027, these three hyperscale customers are expected to represent a serviceable addressable market of $60 billion to $90 billion. In addition, Broadcom is collaborating with two other hyperscalers to develop their own custom AI accelerators, and two more have since selected Broadcom to develop custom accelerators for their next-generation frontier models; these four additional engagements are with partners working closely with Broadcom on their own accelerators, and they are not included in the projected market size for fiscal 2027. Because new frontier models and techniques put so much pressure on AI systems that a single system design point cannot serve all types of models, Broadcom believes the move toward custom XPUs is a multi-year journey.
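A rough back-of-the-envelope reading of that market figure (illustrative only; the call gave no per-customer breakdown, so an even split is an assumption):

```python
# Illustrative only: average FY2027 serviceable addressable market (SAM)
# per existing hyperscale customer, assuming the $60-90B range is spread
# roughly evenly across the three customers.
sam_low, sam_high = 60e9, 90e9
customers = 3
print(f"implied average SAM per customer: "
      f"${sam_low / customers / 1e9:.0f}B - ${sam_high / customers / 1e9:.0f}B")
# => roughly $20B - $30B of annual AI revenue opportunity per customer by FY2027.
```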

In 2025, the deployment of XPU and network products will steadily increase. The recovery of non-artificial intelligence semiconductors remains slow.

Infrastructure Software:

The significant growth in the software division reflects two factors: the transition from largely perpetual licenses to a fully subscription-based model (more than 60% complete so far), and upselling customers to the full-stack VCF (VMware Cloud Foundation), which virtualizes the entire data center and enables customers to build their own private cloud environments on premises.

As of the end of the first quarter, approximately 70% of Broadcom's largest 10,000 customers have adopted VCF, with further growth opportunities in the future as enterprises adopt artificial intelligence and run AI workloads in their internal data centers.

Broadcom has partnered with NVIDIA to launch the VMware Private AI Foundation, which virtualizes the GPU so that enterprises can import AI models and run them against their own data on premises; it has 39 enterprise customers to date.

2.2 Q&A

Q: You mentioned that there are 4 new customers coming online; can you elaborate on the trends you are currently seeing? Could these 4 customers reach the scale of the existing 3? What does this mean for the trend toward customized chips, and for Broadcom's long-term outlook and growth potential?

A: These 4 are not yet customers by Broadcom's strict definition. In developing XPUs, Broadcom is not the true creator; it helps hyperscale partners create the chips and the computing systems, which must integrate tightly with the partner's models and software and with the networks that connect many XPUs into clusters for training frontier models. The hardware Broadcom builds has to work with the partner's software models and algorithms to reach full, scalable deployment, so Broadcom only counts a partner as a customer once it knows the partner will deploy at scale and production orders have been placed. These 4 are partners trying to do the same thing as the first 3 customers: running and training their own frontier models. Developing a first chip typically takes about 1.5 years, and Broadcom has proven frameworks and methods to accelerate the process. There is no reason to believe these 4 partners cannot generate demand like the first 3 customers, but it may come later, since they are starting later.

Q: Can you describe how the AI business is expected to grow in the second half of this fiscal year compared to the first half? Does the picture look better or worse than it did 90 days ago? Is the capacity increase for the 3-nanometer products mentioned 90 days ago still on track?

A: The Q2 data is quite encouraging, partly due to improved shipment volumes of networking products, and there has also been progress on AI accelerators (and in some cases GPUs) for hyperscale data centers. In addition, some fiscal 2025 orders are being pulled forward and delivered ahead of schedule. However, it is difficult to speculate on customers' thinking, so Broadcom is not offering a view on the second half of the year.

Q: From the headlines, tariffs and DeepSeek may cause some disruption, and some customers and complementary suppliers seem somewhat at a loss, finding it hard to make tough decisions. Has Broadcom been affected by these dynamics? Beyond adding customers in the AI field, will Broadcom change in any larger way as a result?

A: It is still too early to assess the impact of tariffs, as the situation regarding chip tariffs is not yet clear and the specific structure is unknown. The disruption Broadcom is currently experiencing is a positive one, namely the impact of generative AI on the semiconductor industry. Generative AI is accelerating the development of semiconductor technology, including process, packaging, and design, toward higher-performance accelerators and networking functions. On the XPU side, hyperscale partners and customers are asking Broadcom to optimize for their frontier models, which involves balancing multiple variables such as compute, network bandwidth, memory capacity, and latency, and presents a great challenge for engineers. Furthermore, AI not only drives enterprise hardware but also influences enterprise data center architecture, where data privacy and control become important. Large enterprises may pause pushing workloads to the public cloud and instead consider upgrading their own data centers to run AI workloads locally, a trend observed over the past 12 months. On tariffs, a clearer picture may emerge in 3 to 6 months.

Q: What is the conversion rate from design to deployment? Is there a significant range of fluctuations? What methods can help Broadcom understand this situation?

A: Broadcom's view of winning an order differs from the outside world's. Broadcom only considers an order won when the product is in mass production and actually deployed in production. The journey from tape-out to delivering the product to the partner and then to mass production typically takes a year or even longer; in Broadcom's experience, going from delivering the product to the partner to reaching mass production takes 6 to 12 months. Moreover, in Broadcom's view, producing and deploying 5,000 XPUs does not count as true mass production. When selecting partners, Broadcom chooses customers with substantial demand, primarily those with ongoing needs in training frontier large language models. Broadcom has always run its ASIC business this way, selecting sustainable customers and developing multi-year roadmaps with them, and avoiding partnerships with startups.

Q: New regulations or AI diffusion rules will be introduced in May. Will this affect order intake or product shipments for the three customers that currently have large-scale deployments? Is there a possibility that these customers enter the Chinese market or supply Chinese customers?

A: Amid the current geopolitical tensions and the significant actions being taken by various governments, there are concerns in general, but Broadcom does not share those worries. Broadcom does not comment on whether it will enter or supply the Chinese market.

Q: If the business mix in the artificial intelligence market shifts more towards inference workloads, what changes will occur in Broadcom's business opportunities and market share? Will this cause the total addressable market (TAM) to exceed $60-90 billion, or will it remain unchanged but with a different product mix? In the coming year, will markets with a higher proportion of inference business be more favorable for GPU development?

A: Broadcom also focuses on inference as an independent product line, as the architecture of inference chips is quite different from that of training chips. The $60-90 billion target market size is the sum of training and inference businesses. However, so far, most of Broadcom's revenue still comes from training business rather than inference business.

Regarding the AI chip market, forecasts suggest that the GPU segment will hold the largest share. GPUs can efficiently handle the heavy computational loads required for training and running deep learning models, which makes them crucial in data centers and AI research as the rapid growth of AI applications demands efficient hardware. At the same time, the inference segment of the AI chip market is expected to grow at the highest rate over the forecast period. Therefore, if the market's workload composition shifts toward inference, Broadcom has the opportunity to expand its market share, especially if its products can effectively meet the demands of inference workloads. Furthermore, while GPUs dominate in data centers and other high-performance computing environments, other types of chips, such as ASICs and FPGAs, may also find opportunities in specific application scenarios as inference workloads grow.

Q: When customers choose network cards, how do they consider the choice between vendors with the best network switching and ASIC capabilities (like Broadcom) and those with computing capabilities, and what key points do they ultimately focus on?

A: For hyperscale data center customers, the choice is primarily performance-driven when connecting and scaling AI accelerators (whether XPU or GPU). If they want to achieve the best performance from hardware while training and continuously training cutting-edge models, customers will first choose proven hardware and systems. Broadcom has at least 10 years of experience in network switching and routing, which gives it a significant advantage in this area. Moreover, with the development of AI, Broadcom continues to increase its investment, upgrading from 800Gbps bandwidth to 1.6T and even 3.2T, and is accelerating the development of next-generation products, such as the planned Tomahawk 6, 7, and 8, primarily for a few customers with large market potential.

Q: Previously, it was mentioned that the number of XPU units will grow from about 2 million last year to about 7 million by 2027-2028. Will the addition of 4 new customers increase this 7 million figure, or will it just fill up to that number?

A: The market size currently being discussed, including the unit numbers derived from it, pertains only to the existing 3 customers. The additional 4 are partners and are not yet considered customers, so they are not included in the serviceable addressable market.

Q: How does Broadcom support the optimization of six large-scale frontier models through the expansion of its product portfolio? To what extent does it help customers meet their goals for compute per dollar of capital expenditure and floating-point operations per watt? And in what areas might customers choose not to share work with Broadcom due to differentiation needs?

A: Broadcom only provides foundational technologies in semiconductors, allowing customers to utilize these technologies and optimize based on their specific models and algorithms. Broadcom performs a certain degree of optimization for each customer, with about five degrees of freedom. The optimization methods are related to the needs of partners, involving aspects such as performance and power, which ultimately affect the total cost of ownership. Additionally, optimization is related to cluster size and usage scenarios (such as training, pre-training, post-training, inference, and scaling during testing). Regarding the so-called "Chinese wall" issue, Broadcom considers it a technical problem and has no relevant comments.

Q: With a complete portfolio of connectivity products, how does Broadcom view the opportunities for scaling new undeveloped projects, and how do these opportunities manifest in optics, copper cables, etc., and what benefits do they bring to Broadcom?

A: Broadcom deals with many large-scale customers, most of whom are expanding; these are almost all new projects that lean toward adopting next-generation technologies, which presents significant opportunities. Broadcom is capable in copper interconnect but sees many opportunities in providing network connectivity through optics, including active components such as multi-mode lasers (VCSELs) and single-mode lasers. Beyond Ethernet, Broadcom is also a leader in other protocols such as PCI Express, providing both smart switches (similar to certain series) and downlink switches (Tomahawk). Taken together, this connectivity portfolio may account for about 20% of Broadcom's total AI revenue as previously mentioned, can reach 30%, and was nearly 40% last quarter, but that is not the norm; over time it averages around 30%, while accelerators (XPUs) account for roughly 70%.

Q: With the increase in operating expenses, where are these expenses directed in terms of AI opportunities, and are they related to R&D?

A: In terms of R&D, Broadcom's consolidated R&D spending in the first quarter was $1.4 billion, and it will continue to increase. Broadcom emphasizes R&D across all product lines to keep next-generation products competitive. The focus is on two areas: first, taping out the industry's first 2-nanometer AI XPU in 3.5D packaging; second, doubling the radix capacity of the existing Tomahawk switch to enable AI customers to scale toward one million accelerators (XPUs) over Ethernet.

Q: How has the AI business grown quarter-on-quarter on the networking side, what are the thoughts on future mergers and acquisitions, and how does Broadcom view the many news reports surrounding Intel's products and projects?

A: On networking, the mix in the first quarter was roughly 60% compute and 40% networking, but that is not the normal ratio, which is closer to 70/30. The second quarter is expected to continue this trend, but it is only a temporary phenomenon; over the long term the ratio will normalize around 70/30. Regarding mergers and acquisitions, Broadcom is currently focused on AI and VMware and is not considering acquisitions at this time.
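As an illustration of what those mix ratios imply in dollar terms (not figures given on the call; derived from the $4.1 billion of Q1 FY25 AI revenue cited earlier):

```python
# Illustrative only: applying the stated Q1 mix (60% compute / 40% networking)
# and the long-run 70/30 mix to the $4.1B of Q1 FY25 AI revenue cited above.
ai_revenue = 4.1  # $B, Q1 FY25 AI revenue
for label, compute_share in (("Q1 actual mix (60/40)", 0.60), ("normal mix (70/30)", 0.70)):
    compute = ai_revenue * compute_share
    networking = ai_revenue - compute
    print(f"{label}: compute ≈ ${compute:.1f}B, networking ≈ ${networking:.1f}B")
```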

Risk Disclosure and Disclaimer of this Article: Dolphin Research Disclaimer and General Disclosure