Dolphin Research
2026.05.08 04:57

CoreWeave (Transcript): Inference now >50% of compute usage

Dolphin Research transcript of $CoreWeave(CRWV.US) FY26Q1 earnings call. For the full analysis, see "CoreWeave: Can Nvidia’s ‘godson’ truly rise on pedigree?".

I. Key takeaways

1. Guidance: Q2 revenue of $2.45bn-$2.6bn, Q2 Adj. OP of $30mn-$90mn, Q2 interest expense of $650mn-$730mn, and Q2 capex of $7bn-$9bn. FY2026 capex raised to $31bn-$35bn on component inflation. Year-end 2026 annualized revenue run-rate lifted to $18bn-$19bn (low end +$1bn); 2027 year-end run-rate remains $30bn+, with 75%+ already contracted.

2. Key financials: Q1 revenue of $2.1bn (+112% YoY, +32% QoQ). Adj. OP of $21mn with a 1% margin, which management expects to be the annual trough. Net loss of $740mn (vs. $315mn YoY), interest expense of $536mn (vs. $264mn YoY), and capex of $6.8bn.

3. Capital markets & financing: YTD financing exceeded $20bn (debt + equity). DDTL 4.0 is an $8.5bn investment-grade (A-) delayed-draw term loan, implied cost <6%, non-recourse to the parent, marking the first IG debt backed by HPC infrastructure. Also secured $2bn in equity tied to an expanded NVIDIA relationship and a $1bn strategic investment from Jane Street; DDTL 5.0 (for OpenAI and Cohere contracts) was priced in the public loan market with the spread 50bps tighter vs. initial marketing. S&P outlook moved to 'Positive' from 'Stable'. No debt maturities before 2029 (ex. self-amortizing contract-backed debt and OEM financing). Weighted avg. debt cost fell ~600bps from 2023 to 2025, and declined another ~80bps YTD.

4. Balance sheet: Period-end cash, restricted cash and marketable securities exceeded $3.3bn. Construction in progress (CIP) was roughly flat QoQ.

II. Call details

2.1 Management highlights

1. Demand and customer expansion

a. New customer commitments topped $40bn in Q1, with backlog near $100bn (+~50% QoQ, ~4x YoY). About 36% is expected to be recognized over the next 24 months, and 75% over the next four years.

b. All four top-tier global AI model developers are on CoreWeave Cloud, and nine of the top ten AI leaders outside China are customers. This underscores CoreWeave’s positioning among leading AI builders.

c. Anthropic was added in Q1, supporting the Claude family for training and deployment. Also signed multiple new orders with Meta, including the $21bn agreement announced in Apr.

d. Ten customers have committed to spending at least $1bn on CoreWeave. This reflects deep, multi-year demand.

e. Backlog in financial services is near $10bn, with Jane Street adding $6bn of capacity in Q1. Hudson River Trading joined as a new customer.

f. Physical AI and spatial computing backlog contribution has surpassed $1bn. New customers include WorldLabs, PhysicsX and Sunday Robotics.

g. The share of backlog from non-IG AI-native firms and foundation model labs fell to below 30%. Mix is shifting toward higher-credit counterparties.

h. Avg. pricing rose QoQ across all GPU generations (A100, H100, H200, L40), and recent capacity is largely sold out. Pricing power remains solid amid tight supply.

2. Cloud platform & products

a. Over 90% of reserved instance customers use at least two products, and over 75% use three or more. Cross-sell remains strong.

b. Storage continues to grow rapidly, and software, CPU and networking are each expected to reach $100mn+ ARR by year-end. Diversification is improving.

c. Launched CoreWeave Trust Center for enterprise security/compliance, plus flex reservation and spot pricing to enable elastic consumption. These expand addressable workloads.

d. Rolled out CoreWeave Interconnect with Google Cloud to simplify multi-cloud workload management. This enhances hybrid strategies.

e. Introduced CoreWeave Omni to deploy the full cloud stack in customers’ own data centers. It has drawn strong early interest from cloud, enterprise and sovereign customers.

f. Examples: Perplexity runs next-gen inference on CoreWeave and uses Weights & Biases for training. Advaita Bio compresses biology data analysis from weeks to minutes.

3. Infrastructure build-out & execution

a. Active power surpassed the 1GW milestone, making CoreWeave one of the few clouds at this scale. It is the only platform purpose-built for AI.

b. Added 400MW of signed power in Q1, taking total signed capacity above 3.5GW. The vast majority is expected to come online by end-2027.

c. Operating nearly 50 data centers, with no single data center provider over 17% of active infrastructure. This limits concentration risk.

d. Active power is expected to reach or exceed 1.7GW by end-2026, and surpass 8GW by 2030. Scale-up remains on track.

e. The first self-built data center is slated to go live later this year. It should provide greater operational control and long-term financial upside.

f. NVIDIA certified CoreWeave’s software solution as a reference architecture. NVIDIA also provides a 5GW infrastructure expansion opportunity.

4. Margin outlook

a. Q1 marked the margin trough, with expansion expected each quarter through the year. The inflection is seen from Q2 into Q3, with Q4 returning to low double-digit Adj. OPM.

b. Variability is timing-driven rather than economic. Once a powered shell is received, lease and power costs begin and servers/DC equipment start depreciating; the 1-2 month fit-out period carries costs with no revenue, and revenue starts in month three, with contribution margins normalizing to the mid-20s.

c. In H2, Adj. OP growth is expected to outpace revenue growth. Operating leverage should improve.
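The ramp mechanics described above can be sketched numerically. A minimal illustration, assuming a hypothetical single deployment with invented cost and revenue figures (the only inputs taken from the call are the 1-2 month fit-out lag and the mid-20s steady-state contribution margin):

```python
# Illustrative-only sketch of the deployment ramp described on the call:
# costs begin when the powered shell is received, revenue starts in month 3,
# and contribution margin normalizes to the mid-20s.
# All dollar figures are invented for illustration.

def monthly_contribution(month, revenue=10.0, cost=7.5):
    """Contribution ($mn) for one deployment, by month since shell handover.

    Months 1-2: fit-out period -- lease, power and depreciation accrue
    with no revenue, so contribution is negative.
    Month 3+: revenue begins; margin = (10 - 7.5) / 10 = 25%.
    """
    if month <= 2:
        return -cost          # costs with no offsetting revenue
    return revenue - cost     # steady-state contribution

ramp = [monthly_contribution(m) for m in range(1, 7)]
print(ramp)  # [-7.5, -7.5, 2.5, 2.5, 2.5, 2.5]

# Cumulative contribution stays negative for several months, which is why
# a quarter in which many new shells land shows depressed margins even
# though each deployment is economically sound.
cumulative, total = [], 0.0
for c in ramp:
    total += c
    cumulative.append(total)
print(cumulative)  # [-7.5, -15.0, -12.5, -10.0, -7.5, -5.0]
```

The sketch shows why management frames the variability as timing-driven: each deployment front-loads cost, then converges to the same steady margin.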

2.2 Q&A

Q: How do component price hikes affect profitability, and what changed substantively in the NVIDIA relationship? How does the 5GW fit into the 8GW 2030 target?

A: Component inflation is a system-wide reality across cloud and AI infrastructure. Over the past 6-9 months, some parts saw sharp shortages and price increases. CoreWeave is a success-driven company and prices contracts to reflect the full cost of the components required to deliver infrastructure, so we can pass through component inflation and still meet the mid-20s unit margin target cited by Nitin. This is a challenge, but supply-chain management and partnerships are strong, and the P&L impact is embedded in today’s guidance.

Regarding NVIDIA, two key developments occurred. First, NVIDIA certified our software solution as its reference architecture, a major validation of software quality. Second, the 5GW infrastructure angle: CoreWeave signed 2GW on its own over the past 12 months, including 400MW in Q1, and we can secure capacity at scale without relying on external support; the 5GW gives us the ability to accelerate when opportunities arise and meet demand at truly remarkable scale.

Q: H1 Adj. OP is only about $81mn, but H2 guidance implies ~$919mn; what drives the confidence?

A: Q2 is fully in line with expectations. Everything is governed by the cadence of active power and capacity coming online, which is non-linear, and both revenue growth and margin trajectory are tracking the path laid out on the Q4 call. Q1 is the margin trough, margin expansion is expected each quarter, and the revenue and margin inflection is seen between Q2 and Q3; we reiterate full-year revenue and Adj. OP guidance, expect to exit the year with low double-digit Adj. OPM, and raise the lower end of year-end 2026 ARR. Adj. OP growth in H2 will outpace revenue growth.

As context, CoreWeave operates around 50 data centers with no single DC provider over 17%, and works with multiple OEMs and ODMs. We have built a resilient supply chain across all components to ensure delivery, which underpins our confidence in reiterating guidance.

Q: For signed but not-yet-online contracts, do component and energy cost increases cause margins to deviate from initial expectations? Is there a mechanism for price adjustment or contract restructuring?

A: We have purchase orders in hand for the infrastructure required at the time we price deals. Power costs for today, next year and through the contract term are known and contracted. Component and power costs are effectively pass-through at signing, and minor capex adjustments reflect component inflation already embedded in guidance.

Q: Q1 revenue materially beat; why not flow that directly into full-year revenue guidance?

A: As stated last call, 2026 capacity is largely sold out, and that still holds. The upside shows up in two ways: we raised the lower end of exit-2026 ARR, implying we will end 2026 stronger than expected a few months ago, and we reaffirmed 2027 ARR at $30bn+ with over 75% already contracted, excluding unexercised renewals. For 2026 as a whole, capacity is basically sold out.

Q: How does backlog convert into recognized revenue? Does it stair-step as data centers go live? Also, gross margin fell from 78% to 68% over five quarters; what drives improvement from here?

A: Margin volatility is mainly timing-related. When we receive a powered shell, lease and power costs begin and servers/DC equipment start depreciating; the fit-out takes about 1-2 months, incurring costs with no revenue, so new deployments have negative contribution margins during that stage. Revenue begins in month three, and contribution margins normalize to the mid-20s; as OPM inflects, GM should turn as well, with expansion expected as we move from Q2 into Q3.

Active power has nearly tripled over the past year. The margin decline reflected rapid scale-up: adding 300MW on a 50MW base hits GM hard, but adding 50MW on a 2,000MW base has a far smaller effect; this is the 'escape velocity' we mean — the installed base is now large enough to absorb each new compute unit, hall and DC coming online. Revenue is recognized once capacity is deployed, GPUs are tested and handed over to the end customer, and then straight-line over the contract term.
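The two mechanics in this answer can be sketched with simple arithmetic. A hedged illustration, where the contract size and term are invented examples (only the straight-line recognition method and the 300MW/50MW vs 50MW/2,000MW comparison come from the call):

```python
# Illustrative-only sketches; contract figures are invented, not disclosures.

def straight_line_revenue(total_value, term_months, months_elapsed):
    """Revenue recognized to date under straight-line recognition,
    which starts once capacity is tested and handed over."""
    monthly = total_value / term_months
    return monthly * min(months_elapsed, term_months)

# A hypothetical $1,200mn contract over 48 months recognizes $25mn/month.
print(straight_line_revenue(1200, 48, 24))  # 600.0 -> half recognized

# The 'escape velocity' point: margin drag scales with the share of
# installed power that is still in its negative-margin ramp-up phase.
def ramping_share(base_mw, added_mw):
    return added_mw / (base_mw + added_mw)

print(f"{ramping_share(50, 300):.1%}")   # 85.7% -- small base, big GM hit
print(f"{ramping_share(2000, 50):.1%}")  # 2.4%  -- large base absorbs it
```

The second calculation shows why the same absolute capacity addition compresses margins far less once the installed base is large.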

Q: How fast is inference growing as a share of installed power? What does that imply for utilization and contracted economics for H100/A100 over the next 12-18 months?

A: We built flexible AI infrastructure that can toggle between training and inference, with customers optimizing usage. We cannot pinpoint each unit’s split at any given time, but by monitoring power consumption we estimate inference has materially exceeded 50% of compute usage, which is a strong signal that customers are monetizing AI investments via downstream model-driven revenue. Hopper and Ampere demand remains robust; customers often buy cutting-edge infrastructure for training, then reassign it to lower-intensity inference, and bring in next-gen architectures for training the next wave of models. H100s are sold out, A100s are sold out, and prices are rising as inference demand increasingly competes for these resources, a very bullish sign for the industry.

Q: How much of the components required for the $12bn-$13bn 2026 revenue are locked in?

A: The vast majority is locked. 2026 capacity is largely sold out, and we have placed purchase orders, secured infrastructure and power, and everything needed for the delivery roadmap. That is why we are confident in the guidance.

Q: How should we think about the cadence of signed power additions? Q1 added 400MW; Q4 2025 was 200MW; Q3 2025 was 600-700MW.

A: The 400MW in Q1 was incremental, and our pipeline of power contracts with DC providers is very large. We are evaluating numerous sites and deals where infrastructure can be built, while also advancing a set of self-built DCs that increase control over the operations pipeline. We added 2GW over the past 12 months; we cannot provide precise MW numbers per quarter for the next three years, but the pipeline is strong and we pace infrastructure acquisition to customer demand signals.

Q: The near-$100bn backlog roughly corresponds to ~2GW of allocated capacity, with 3.4GW+ of total signed power and ~1.4GW yet to be allocated. Is 'allocation' the right term, and is unallocated capacity up or down vs. a quarter ago?

A: 'Allocation' is a fair term. We are in a unique position where demand is so strong that we can be very deliberate in allocating infrastructure across customers — whether life sciences, foundation model labs or inference products — to build an ecosystem of industry leaders. That said, constraints extend beyond power to labor, memory, storage and delivery capacity; we coordinate all these factors to ensure customers can rely on the quality and scale of what we deliver to power their businesses.


Risk Disclosure & Statement: Dolphin Research Disclaimer & General Disclosure