
Broadcom (Minutes): AI revenue growth will exceed 60% in the next fiscal year.
The following are the minutes of Broadcom's (AVGO) Q3 FY2025 earnings call, compiled by Dolphin Research. For an earnings interpretation, see "NVIDIA, AMD lackluster? Broadcom takes over as AI flag bearer."
I. Broadcom (AVGO.US) Key Financial Highlights
1. Overall Financial Performance:
a. Gross Margin: Benefited primarily from the high proportion of software revenue and an optimized semiconductor product mix.
b. Operating Margin: 65.5%. Despite a sequential decline in gross margin due to product mix changes, operating margin improved sequentially due to operating leverage.
c. Total Operating Expenses: $2 billion (excluding stock-based compensation and other expenses), with R&D investment at $1.5 billion, indicating continued investment in technological innovation.
2. Segment Financial Analysis:
a. Semiconductor Solutions (AI + non-AI) revenue was $9.2 billion (up 26% YoY), accounting for 57% of total revenue. Gross margin was approximately 67%, slightly down YoY on product mix changes. Operating margin was 57%, up 130 basis points YoY, as scale effects absorbed continued AI R&D investment.
b. Software Services revenue was $6.8 billion (up 17% YoY), accounting for 43% of total revenue. Gross margin reached 93% (versus 90% in the same period last year). Operating margin was approximately 77%, well above last year's 67%, reflecting the profit uplift from the completed VMware integration.
3. Cash Flow: Free cash flow reached $7 billion, accounting for 44% of revenue. Capital expenditure was $142 million, showing spending restraint.
4. Operational Efficiency: Inventory was $2.2 billion, up 8% sequentially to support Q4 growth expectations. Inventory turnover days fell from 69 in Q2 to 66, demonstrating disciplined inventory management. Days sales outstanding were 37 days.
5. Capital Structure: Cash at the end of the quarter was $10.7 billion. Total debt was $66.3 billion, mostly fixed-rate long-term debt (average rate 3.9%, average term 6.9 years). $2.8 billion in cash dividends were paid to shareholders in Q3.
II. Detailed Content of the Broadcom (AVGO) Earnings Call
2.1 Key Information from Executive Statements
1. Overall Financial Performance: Total revenue reached a record $16 billion, up 22% YoY. Adjusted EBITDA reached a record $10.7 billion, up 30% YoY, accounting for 67% of revenue, above the 66% guidance. Revenue growth was mainly driven by the outperformance of AI semiconductors and the continued growth of VMware business. The company's consolidated backlog reached a record $110 billion.
2. AI Semiconductor Business: Revenue was $5.2 billion, up 63% YoY, achieving strong growth for 10 consecutive quarters.
a. XPU: The business accelerated, accounting for 65% of AI revenue. Demand from the three major existing customers continues to grow, and a newly qualified fourth customer placed production orders exceeding $10 billion for AI products based on our XPUs. The company therefore significantly raised its AI revenue expectations for FY2026.
b. AI Networking Business: Strong demand, with Broadcom addressing AI cluster expansion challenges through technological innovation.
- Scale-Up: Launched an open Ethernet solution, scalable to 512 compute nodes, far exceeding the current mainstream solution of 72 nodes.
- Scale-Out: Introduced Tomahawk 6, reducing network layers from three to two and lowering latency and power consumption (see the sketch after this list).
- Scale-Across: Launched Jericho4, supporting ultra-large-scale clusters with over 200,000 compute nodes across data centers.
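To make the "three layers to two" claim concrete, here is a back-of-the-envelope sketch (ours, not from the call) using the standard non-blocking folded-Clos capacity formulas. The port carvings are illustrative assumptions, e.g., a 102.4 Tb/s Tomahawk 6-class switch split into 512 x 200G ports versus a 51.2 Tb/s part split into 256 x 200G ports.

```python
# Back-of-the-envelope sketch (not from the call): why doubling switch radix
# lets a large cluster fabric drop from three network tiers to two.
# Standard non-blocking folded-Clos capacity formulas:
#   2-tier leaf-spine : radix^2 / 2 endpoints
#   3-tier fat-tree   : radix^3 / 4 endpoints
# Port counts are illustrative assumptions: a 102.4 Tb/s Tomahawk 6-class
# switch carved into 512 x 200G ports vs. a 51.2 Tb/s part at 256 x 200G.

def max_endpoints(radix: int, tiers: int) -> int:
    """Maximum endpoints of a non-blocking folded-Clos fabric."""
    if tiers == 2:
        return radix ** 2 // 2   # leaf-spine
    if tiers == 3:
        return radix ** 3 // 4   # classic fat-tree
    raise ValueError("only 2- and 3-tier fabrics are modeled here")

for radix in (256, 512):
    two, three = max_endpoints(radix, 2), max_endpoints(radix, 3)
    print(f"radix {radix}: 2-tier caps at {two:,} endpoints; 3-tier at {three:,}")
# radix 256: a 100K-XPU cluster exceeds the 32,768 2-tier cap -> three tiers.
# radix 512: the same cluster fits under the 131,072 2-tier cap -> two tiers,
# removing an entire switching layer's worth of hops, optics, and power.
```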
3. Non-AI Semiconductor Business (slow recovery): Q3 revenue was $4 billion, flat sequentially.
a. Segment Performance: Strong sequential growth in broadband business, offset by declines in enterprise networking and server storage; wireless and industrial businesses were flat.
4. Software Services: Q3 revenue was $6.8 billion, up 17% YoY, exceeding expectations. Total contract value for the quarter exceeded $8.4 billion.
a. Officially launched VMware Cloud Foundation (VCF) 9.0, a fully integrated private cloud platform developed over two years, enabling enterprise customers to run all applications, including AI, in their own data centers or in the cloud. Management sees this as a true alternative to public cloud.
5. FY2025 Q4 Guidance:
a. Total Revenue: Expected to be approximately $17.4 billion, up 24% YoY.
b. Adjusted EBITDA: Expected to account for 67% of total revenue.
c. AI Semiconductor Revenue: Expected to be approximately $6.2 billion, up 66% YoY, continuing high growth.
d. Non-AI Semiconductor Revenue: Expected to be approximately $4.6 billion, benefiting from seasonal factors, achieving low double-digit sequential growth. Broadband, server storage, and wireless businesses are expected to improve, while enterprise networking is expected to decline sequentially.
e. VMware Revenue: Expected to be approximately $6.7 billion, up 15% YoY.
f. Gross Margin: Expected to decline approximately 70 basis points sequentially, mainly due to a revenue mix tilted toward lower-margin XPU and seasonal wireless products.
g. Tax Rate: Expected non-GAAP tax rate to remain at 14%.
2.2 Q&A Session
Q: You mentioned that you now expect AI business growth to be "significantly faster" than last quarter's expectations. What are the main drivers of this change in outlook? Is it because the new potential customer has become a formal customer and brought in that $10 billion backlog order, or has demand from your existing three major customers also become stronger than expected?
A: It's a combination of both. The main driver of the raised expectations is the addition of a fourth custom chip customer; we will begin large-scale shipments to them in early 2026. This new customer brings direct, substantial demand, which, coupled with steadily growing orders from our existing three customers, fundamentally changes our outlook for 2026 performance.
Q: Regarding the non-AI semiconductor business, you previously mentioned that this part of the business is near the cyclical bottom and recovering slowly. How should we view the strength of this cyclical recovery? Given your 30 to 40-week delivery cycle, have you seen sustained order improvements in the non-AI business, and does this suggest that the recovery momentum can continue into the next fiscal year?
A: Our Q4 guidance does show a slight 1-2% YoY growth in non-AI business, but it's not worth focusing too much on. The current situation is that, aside from the seasonal factors we see, the performance of various sub-segments is mixed, and overall, there is no clear recovery trend. The only consistently strong growth over the past three quarters has been in the broadband business, while other areas have not shown sustainable upward momentum. In summary, the situation hasn't worsened, but we haven't seen the expected V-shaped rebound either. I tend to think this will be a U-shaped slow recovery, perhaps not seeing more meaningful recovery until mid to late 2026. But for now, the outlook remains unclear.
Q: Considering your long 40-week delivery cycle, have you started to see signs of recovery in order trends (for non-AI business)?
A: We have indeed seen it. But we've experienced similar situations before... Bookings are growing, up more than 20% YoY, 23% to be precise. Of course, that can't compare to AI bookings, but 23% growth is still quite good.
Q: Last time you mentioned that AI revenue could grow by 60% in FY2026. After accounting for the $10 billion order from the new customer, what is the updated growth expectation now? Looking ahead to 2026, will the revenue mix of "custom chips" and "network chips" in the AI business remain roughly the same as in the past year, or will it lean more towards custom chips?
A: I need to clarify: last quarter I only said that growth in 2026 would "reflect" the trend in 2025, roughly a 50%-60% YoY growth rate; I did not give a specific number. What I want to convey now, more precisely, is that we see growth accelerating, not just holding at a stable 50%-60% level. We expect the 2026 growth rate to exceed 2025's 50%-60%. I know you want a specific number, but we cannot provide a 2026 forecast at this time; the best way to describe it is a considerable step up. (See the arithmetic sketch after this answer.)
As I mentioned earlier, one of the major drivers of growth in 2026 will be XPU. On one hand, our share among these three existing customers is continuously increasing, as their technology iterations lead to more adoption of XPU; on the other hand, we have added a fourth very important customer. These two factors combined mean our business will be more inclined towards XPU.
While we will also gain network chip business from these four major customers, the proportion of network business from outside these four will be diluted and appear smaller. Therefore, it is foreseeable that by 2026, the percentage of network chips in total AI business revenue will decrease.
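To put rough numbers on "exceeding 50%-60%", here is a minimal arithmetic sketch (our own, not management's). The Q3 figure and Q4 guide come from these minutes; the Q1/Q2 figures are approximate prior-quarter numbers and should be read as assumptions.

```python
# Implied FY2026 AI revenue under different growth rates (our arithmetic).
# Q3 ($5.2B actual) and Q4 ($6.2B guide) are from these minutes; the Q1/Q2
# figures (~$4.1B, ~$4.4B) are assumptions based on earlier FY2025 reports.

q_fy2025 = [4.1, 4.4, 5.2, 6.2]      # $B per quarter; Q4 is guidance
fy2025_ai = sum(q_fy2025)             # ~ $19.9B

for growth in (0.50, 0.60, 0.70):
    print(f"FY2026 AI revenue at {growth:.0%} growth: ~${fy2025_ai * (1 + growth):.1f}B")
# "Faster than 50-60%" therefore points above roughly $30B of FY2026 AI
# revenue, before considering how the new customer's $10B order is phased in.
```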
Q: Regarding the $110 billion backlog, how is this number specifically composed? How much do AI business, non-AI business, and software business each account for? What is the approximate delivery cycle for these orders?
A: We typically don't provide specific breakdown data for the backlog; it's mainly to show the strong momentum of the company's overall business. The growth of this number is largely driven by the AI business. Meanwhile, the software business is steadily increasing, and the non-AI semiconductor business has also achieved double-digit growth, but it pales in comparison to the strong growth of AI. If I had to give a rough idea, at least half (50%) of this is semiconductor business. It can be reasonably assumed that within the semiconductor portion of the orders, AI-related business is far more than non-AI business.
Q: Besides the existing four major customers, how are the conversations progressing with the other three potential customers? How do you view the growth momentum brought by these potential new customers starting in 2027? How will it shape the company's future development?
A: For this question, I don't want to give a subjective judgment because, frankly, I can't give an accurate answer. In practice, we sometimes encounter obstacles in production at unexpected times, and projects can also be delayed. Therefore, I prefer not to give you more details about these potential customers. I can only tell you that these are real potential customers, and we are maintaining very close cooperation with each of them, and they are all investing their resources in development, with a clear intention to enter the mass production stage, just like the four customers we have today.
Q: Is the goal of reaching a shipment volume of 1 million chips across these seven customers (4 existing + 3 potential) still valid?
A: No, what I said before was only for our existing customers. I have no comments or judgments about those three potential customers. But for our existing four customers, yes.
Q: Beyond these confirmed seven customers (4 existing + 3 potential), how many more potential customers do you think are worth developing custom AI chips for in this market? How do you evaluate these additional potential customers? Considering your usual selectivity regarding the number of customers and the scale they can bring, can you describe the opportunity landscape you see beyond the "seven major customers"?
A: We divide this market into two major segments: the first category is customers developing large language models (LLM) themselves. The second category is the enterprise market, which is the market for running AI workloads for enterprises, whether through on-premises deployment or cloud services. Frankly, we do not engage in the enterprise market because it is difficult to penetrate, and our organizational structure is not designed for it.
Our focus is entirely on the first category, which is developers of large language models (LLM). As I have mentioned many times, this is a very narrow market dominated by a few players. These players are leading the development of cutting-edge models, continuously accelerating towards "superintelligence." They not only need to invest heavily in the training phase to build larger and more powerful accelerator clusters, but they also have to be accountable to shareholders and need to create cash flow to sustain their development path. Therefore, they must invest heavily in inference capabilities to commercialize and monetize the models. These are our target customers. Each of them will invest heavily in computing power, but such players are very rare. I have identified seven, four of which are our customers, and the other three are potential customers we are continuously engaging with. We are very selective and cautious in screening who meets the criteria—they must be building or already have a platform and are heavily investing in leading large language models.
We might see one more potential customer, but even making that judgment, we would be extremely thoughtful. So, it can be said with certainty that we now have seven. For now, that's all we have.
Q: You mentioned Jericho4, and NVIDIA is also talking about XGS switches. It sounds like the switch market designed specifically for AI clusters is really taking off. Can you explain why you expect this business to see a "substantial increase" in revenue? As AI workloads increasingly shift towards inference, why are switch chips like Jericho4 becoming more important, and what key role do they play in inference scenarios?
A: Indeed. Let's first set the terms: 'scale-up' and 'scale-out' deploy compute nodes within a single rack or a single data center; what we increasingly talk about now is 'scale-across', which is expansion across data centers.
When you build an ultra-large-scale cluster with over 100,000 GPUs or XPUs (custom AI chips), you quickly hit physical limits: the footprint or power capacity of a single data center may simply be insufficient. So we see almost all major customers adopting a new strategy: establishing multiple data center sites in close proximity (e.g., within 100 kilometers) and deploying homogeneous AI chips at those sites.
The coolest part is connecting these three or four sites through a network, allowing them to work together as a single, massive cluster. This is 'scale-across.'
To achieve this cross-regional connection, a special network technology is required, which must have deep buffering and intelligent congestion control capabilities. This is not a new technology; it has been used in network routing in the telecommunications field (such as AT&T, Verizon) for many years. Now, we are applying it to the more complex scenario of AI workloads.
In the past two years, we have supplied several ultra-large-scale customers with Jericho3 to connect their clusters. As the bandwidth demand for AI training continues to expand, we have now launched the new generation Jericho4, which has 51.2T bandwidth to handle larger traffic.
The key is that this is based on mature technology that has been fully tested and verified over the past one or two decades. It runs on Ethernet, which is very stable and reliable. It is not something newly created for AI but an application upgrade of mature technology.
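As a concrete illustration of why "deep buffering" matters at 100 km scale, here is a worked example (ours, not from the call); the 800 Gb/s port rate is an illustrative assumption, and the fiber speed is the standard ~200,000 km/s for light in glass.

```python
# Why deep buffers matter for scale-across: bandwidth-delay product (ours).
# Assumptions: light travels ~200,000 km/s in fiber (refractive index ~1.5);
# sites ~100 km apart as in the answer above; 800 Gb/s per port is illustrative.

FIBER_KM_PER_S = 200_000

def bandwidth_delay_product_mb(rate_gbps: float, distance_km: float) -> float:
    """Bytes in flight on one link: link rate times round-trip time."""
    rtt_s = 2 * distance_km / FIBER_KM_PER_S
    return rate_gbps * 1e9 * rtt_s / 8 / 1e6

print(f"RTT over 100 km of fiber: {2 * 100 / FIBER_KM_PER_S * 1e3:.1f} ms")
print(f"In-flight data on one 800G link: {bandwidth_delay_product_mb(800, 100):.0f} MB")
# ~1 ms RTT means ~100 MB is already in flight per port when congestion
# backpressure is signaled; a lossless fabric must absorb roughly that much,
# which is why deep-buffered routing silicon (the Jericho line described
# above) is used between sites, while shallow-buffered switches suffice
# inside a single building.
```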
Q: Have you converted all of your top 10,000 large customers from a single vSphere product to a VMware Cloud Foundation (VCF) subscription that includes all components? (Last quarter this ratio was 87%; how far along are you now?)
A: Over 90% of our top 10,000 large customers have purchased VCF (VMware Cloud Foundation). But I need to clarify the wording: they have purchased the deployment license for VCF, which does not mean they have completed the full deployment.
This is precisely the second phase of our work: working with these customers to help them successfully deploy, operate, and scale their envisioned private cloud on their local infrastructure. This is the core work for the next two years. As deployment deepens, we expect VCF to continue to expand in their IT environments. The first phase of the VMware story was convincing customers to switch from perpetual licenses to VCF subscriptions; the second phase is helping customers truly realize the private cloud value they expect on VCF. On this basis, we will also sell advanced services such as security, disaster recovery, and even AI workloads in the future, which is very exciting.
Q: After these enterprise customers adopt the VMware solution, have you seen specific, tangible cross-selling growth in Broadcom's original hardware business (such as semiconductors, storage, network products)?
A: No. These two are completely independent. In fact, when we help customers virtualize their data centers, we consciously accept the fact that we are 'commoditizing' the underlying hardware, including servers, storage, and even networks. This is okay because, in this way, we actually help enterprises reduce their investment costs in data center hardware.
Q: Besides these large customers, how is the interest of the broader enterprise customer base in adopting VCF?
A: We have achieved some success, but we do not expect the same degree of success, for two main reasons: (1) for small and medium-sized enterprises, the total cost of ownership (TCO) savings from deploying VCF may not be as significant; (2) more importantly, the technical capability required to continuously operate and maintain such a private cloud platform may exceed what many smaller enterprises can support.
This part of the market is still a learning and exploration process for us. VMware has a total of 300,000 customers, and we believe the value of deploying VCF private cloud for the top 10,000 customers is enormous. As for whether the next 20,000 to 30,000 medium-sized enterprises will see the same value, it remains to be seen.
Q: You expect overall gross margin to decline by only about 70 basis points (bps), but a heavier mix of AI chips (XPU) and the seasonal wireless business typically drags down semiconductor margins, and software revenue declines sequentially, so this decline seems too small. Can you help me understand which segment's strength offsets these negatives to produce such a mild decline? Is there really that much room for improvement in software margins?
A: We expect revenue from AI chips (XPU/GPU) and the seasonal wireless business to grow, while software revenue also increases slightly. Our fiscal fourth quarter is usually the heaviest shipping quarter of the year for the wireless business. So you will see a larger revenue contribution from wireless and AI chips, which carry relatively low gross margins, while software revenue remains relatively stable. These are the main dynamics affecting overall gross margin.
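To see why the analyst pushes back, here is a quick mix-shift check (our arithmetic, using only segment figures from these minutes; holding segment gross margins flat is a simplifying assumption).

```python
# Mix-shift arithmetic (ours) behind the analyst's skepticism. From these
# minutes: Q3 semis ~67% GM on $9.2B, software ~93% GM on $6.8B; the Q4 guide
# implies ~$10.8B semis ($6.2B AI + $4.6B non-AI) and ~$6.7B software.

def blended_gm(parts):
    """parts: list of (revenue in $B, gross margin) tuples."""
    total = sum(r for r, _ in parts)
    return sum(r * m for r, m in parts) / total

q3 = blended_gm([(9.2, 0.67), (6.8, 0.93)])
q4 = blended_gm([(10.8, 0.67), (6.7, 0.93)])   # segment margins held flat
print(f"Q3 ~{q3:.1%} -> Q4 guide ~{q4:.1%}: {(q4 - q3) * 1e4:+.0f} bps")
# Pure mix shift at flat segment margins implies roughly -110 bps, so landing
# at only ~-70 bps requires some offset within the segments (e.g., software
# margin ticking up), which is exactly what the question is probing.
```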
Q: You previously categorized your AI customers into two types: one is hyperscale cloud service providers (such as customers 4 and 5), and the other is native large model developers (such as customers 6 and 7). Can you help us categorize the custom AI chip customers mentioned this time? Can you disclose the delivery timeframe for the $10 billion AI chip order you mentioned?
A: Regarding customer classification, it's actually difficult to strictly distinguish. Ultimately, all seven of our custom chip customers are related to large language models (LLM). Although not every customer has a large existing platform like the leading cloud providers, it is foreseeable that they will eventually build or have their own platform. So, the boundary of distinction is becoming increasingly blurred.
As for the $10 billion order you asked about, we expect deliveries to begin in the second half of FY2026; more precisely, delivery of this $10 billion order is likely to be completed within the third quarter of FY2026.
Q: How do you evaluate the current market momentum of your scale-up Ethernet solution? How does it compare in competitiveness with other industry approaches (such as UALink and PCIe)? How important is the launch of the lower-latency Tomahawk Ultra switch chip for the attractiveness of your scale-up solution? Looking ahead to next year, how big a business opportunity can scale-up Ethernet become within your AI networking business?
A: First, Ethernet solutions are highly decoupled from the AI accelerators (XPU/GPU) themselves. We firmly believe Ethernet should be an open standard that gives customers choice, rather than being tied to a specific AI chip.
For customers using our custom AI chips (XPU), we will co-optimize components such as network switches with the chips during development to ensure optimal signal transmission performance. This hand-in-hand development experience allows us to handle Ethernet interfaces very efficiently. Therefore, in our customer ecosystem, we are very openly promoting Ethernet as the preferred network protocol.
This strategy is effective, especially among hyperscale cloud service providers. They have the ability to separate the design of GPU cluster construction from the network architecture, particularly in large-scale clusters that require scale-out. In this scenario, we have sold a large number of Ethernet switches. We expect that as this decoupling trend further develops, the application of Ethernet will become more widespread.
In summary, whether within our own ecosystem or among leading cloud providers capable of independently building network architectures, Ethernet has become the de facto standard. And as a leader in this field, we naturally benefit from it.
Q: Do you think emerging interconnect technologies like UALink or PCIe have the potential to replace Ethernet and become the mainstream solution for AI cluster interconnects around 2027 when they are expected to be widely applied?
A: I am biased, but this bias stems from an obvious fact: Ethernet is time-tested and deeply rooted. For all the engineers and architects designing AI data centers in hyperscale cloud providers, Ethernet is their most familiar and natural logical choice, and they are already widely using and focusing on it. In my view, abandoning such a mature ecosystem to develop a completely new, independent protocol is unimaginable.
The only real point of contention is latency, which is why technologies like NVLink emerged. But even so, reducing Ethernet latency is not difficult: we (and several other manufacturers) only need to adjust the switches to achieve latency below 250 nanoseconds, better than NVLink or InfiniBand, and we have already done it. This reflects our 25 years of technical accumulation in the Ethernet field.
Therefore, I believe Ethernet is the direction of the future. Its openness also means the existence of competition, which is one of the reasons hyperscale cloud providers like it—they don't want to be locked in by a single supplier. We welcome this open competition.
Q: How do you view the competitive threat from other manufacturers in the US and Asia in the custom ASIC field? Do you think this competition is intensifying or diminishing?
A: In this field, the only way we maintain our lead is through continuous investment, striving to surpass all competitors in innovation.
Our advantage lies in having pioneered the custom AI chip (ASIC) business model. We have a world-class semiconductor IP portfolio: the most advanced SerDes (serializer/deserializer), top-notch packaging technology, and excellent low-power design capabilities.
What we can do is continue to increase investment to maintain our lead in the competition. So far, we are doing well in this regard.
Q: You currently have 3 to 4 major cloud service provider customers deploying AI clusters on a large scale. As these data centers grow larger, the demand for efficiency and differentiation through customization becomes stronger, which is the value of XPU. Based on this logic, why shouldn't we believe that, in the long run, the market share of XPU within your core customers will eventually exceed that of general-purpose GPUs?
A: The facts are indeed moving in that direction. We see this as a gradual, multi-year evolution.
We have developed more than one generation and more than one version of custom chips (XPU) for each existing customer. With each new generation of products, we observe a clear trend: as customers' confidence in chip performance increases, and their software stacks (such as libraries and models) on our chips continue to mature and stabilize, they significantly increase the procurement and deployment of XPU.
So, the conclusion is affirmative. Within these customers who have successfully deployed our solutions, their own chips (XPU) will undoubtedly occupy an increasingly higher proportion in their computing landscape. We are witnessing this trend firsthand, which is also why our market share can continue to grow.
Risk Disclosure and Statement of This Article: Dolphin Research Disclaimer and General Disclosure