NEW YORK — Anthropic has agreed to lease data center computing capacity from CoreWeave in a multiyear deal, according to a Bloomberg report published Friday, as the maker of the Claude AI models moves to secure additional infrastructure to keep pace with surging demand. CoreWeave CEO Michael Intrator said the agreement gives Anthropic access to Nvidia GPU architectures at the company's U.S. data centers.
Financial terms of the deal were not disclosed. CoreWeave shares gained in premarket trading Friday following the announcement. The deal adds another high-profile AI lab to CoreWeave's growing customer roster, which already includes Meta Platforms, OpenAI, and Perplexity, and underscores Anthropic's standing as one of the most aggressive infrastructure buyers in frontier AI.
CoreWeave's Customer Base | $87.8 Billion Revenue Backlog
The Anthropic agreement arrives during a period of explosive growth for CoreWeave. The company expanded its partnership with Meta just this week, announcing a $21 billion agreement on Thursday to provide AI cloud capacity through December 2032, building on a prior $14.2 billion contract signed in September 2025. That deal brought CoreWeave's total revenue backlog to roughly $87.8 billion, an extraordinary figure for a company that went public on the Nasdaq only in March 2025.
| CoreWeave Metric | Detail |
|---|---|
| Total revenue backlog | ~$87.8 billion |
| Meta deal (April 2026) | $21 billion through December 2032 |
| Meta prior contract (Sept 2025) | $14.2 billion |
| Nvidia investment (Jan 2026) | $2 billion |
| Compute capacity target | 5+ gigawatts by 2030 |
| IPO | Nasdaq, March 2025 |
| Key customers | Meta, OpenAI, Perplexity, Anthropic |
CoreWeave has positioned itself as a specialist GPU cloud provider closely aligned with Nvidia, which invested $2 billion in CoreWeave in January 2026. Unlike hyperscale cloud providers that offer general-purpose compute alongside AI capacity, CoreWeave focuses exclusively on GPU workloads, giving it a performance and pricing advantage for AI training and inference at scale. The company is targeting more than five gigawatts of AI computing capacity by 2030, a figure that would make it one of the largest dedicated AI infrastructure providers globally.
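To put a five-gigawatt target in rough perspective, the back-of-envelope sketch below converts facility power into an accelerator count. The power usage effectiveness and per-chip draw are illustrative assumptions, not CoreWeave disclosures:

```python
# Back-of-envelope sizing for a 5 GW AI cloud. All inputs besides the
# headline power target are illustrative assumptions.

FACILITY_POWER_GW = 5.0    # CoreWeave's stated 2030 target
PUE = 1.3                  # assumed power usage effectiveness (cooling, overhead)
KW_PER_ACCELERATOR = 1.2   # assumed all-in draw per GPU, incl. host share

it_power_kw = FACILITY_POWER_GW * 1e6 / PUE   # power available for IT load
accelerators = it_power_kw / KW_PER_ACCELERATOR

print(f"IT power: {it_power_kw / 1e6:.2f} GW")
print(f"Rough accelerator count: {accelerators:,.0f}")
# ~3.2 million accelerators under these assumptions
```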
Anthropic's Compute Hunger | $30 Billion Run Rate, 1,000+ Clients
Anthropic has been on a rapid infrastructure buildout as demand for its Claude models accelerates. The company disclosed earlier this month that its revenue run rate has surpassed $30 billion, more than tripling from approximately $9 billion at the end of 2025. Its business customer base has doubled in recent months to more than 1,000 clients, a growth rate that has strained existing compute capacity across all three major cloud platforms.
The revenue trajectory is staggering by any measure. Court filings disclosed approximately $5 billion in total revenue since Anthropic began commercializing Claude, and the company's $10 billion joint venture with Blackstone signaled the scale of enterprise adoption. A $30 billion run rate implies monthly revenue approaching $2.5 billion, driven primarily by enterprise API usage, the Claude Pro subscription, and the Blackstone partnership's portfolio-wide deployment.
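The run-rate arithmetic is simple annualization, as the sketch below shows; the only inputs are the two figures reported above:

```python
# A run rate annualizes a recent revenue snapshot; the monthly figure
# below simply reverses that annualization.

run_rate_annual = 30e9                  # reported ~$30B annualized run rate
monthly_revenue = run_rate_annual / 12  # ≈ $2.5B per month

growth = run_rate_annual / 9e9          # vs. ~$9B at the end of 2025
print(f"Implied monthly revenue: ${monthly_revenue / 1e9:.2f}B")
print(f"Growth multiple since late 2025: {growth:.1f}x")
```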
That growth creates an insatiable demand for compute. Training next-generation models requires cluster-scale GPU deployments measured in tens of thousands of chips, while inference (serving Claude to millions of concurrent users) requires distributed capacity across multiple regions. Every new enterprise client adds sustained inference load that compounds over time.
Multi-Cloud Strategy | AWS, Google, Azure, and Now CoreWeave
On April 6, Anthropic announced a separate agreement with Google and Broadcom for roughly 3.5 gigawatts of next-generation TPU capacity expected to come online starting in 2027, which the company called its largest compute commitment to date. Anthropic trains and runs Claude on a mix of hardware, including Google TPUs, Amazon Trainium chips, and Nvidia GPUs, spread across the three major cloud platforms: AWS, Google Cloud, and Microsoft Azure.
| Compute Partnership | Details |
|---|---|
| CoreWeave (April 2026) | Multiyear lease, Nvidia GPU architectures, U.S. data centers |
| Google + Broadcom (April 2026) | 3.5 GW next-gen TPU capacity, online starting 2027 |
| Amazon Web Services | Primary cloud provider and training partner, Trainium chips |
| Microsoft Azure | Inference hosting, enterprise distribution |
| Google Cloud | TPU access, training and inference |
The CoreWeave deal represents a fourth compute channel that operates outside the hyperscaler ecosystem. That diversification is strategic. Dependence on any single cloud provider creates both pricing leverage risk and concentration risk. If AWS experiences capacity constraints during a training run, Anthropic can shift workloads to CoreWeave or Google. If Google's TPU roadmap slips, Nvidia GPUs at CoreWeave provide a fallback. The multi-source approach is expensive, but the alternative, being capacity-constrained while competitors scale, is existentially worse.
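As an illustration of what that fallback logic looks like in practice, the sketch below routes a training job to whichever provider has spare capacity at the lowest cost. The provider names mirror the article, but the capacity figures, prices, and routing code are hypothetical, not a description of Anthropic's actual scheduler:

```python
# Illustrative capacity-aware routing across multiple compute providers.
# All numbers and the scheduling API are hypothetical.

from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    free_gpus: int            # assumed available accelerators
    cost_per_gpu_hour: float  # assumed price

PROVIDERS = [
    Provider("AWS", free_gpus=2_000, cost_per_gpu_hour=4.10),
    Provider("Google Cloud", free_gpus=1_500, cost_per_gpu_hour=3.90),
    Provider("CoreWeave", free_gpus=5_000, cost_per_gpu_hour=3.60),
]

def place_job(gpus_needed: int) -> Provider:
    """Pick the cheapest provider that can absorb the whole job, so a
    capacity crunch at one cloud simply shifts the workload elsewhere."""
    candidates = [p for p in PROVIDERS if p.free_gpus >= gpus_needed]
    if not candidates:
        raise RuntimeError("no single provider has capacity; split the job")
    return min(candidates, key=lambda p: p.cost_per_gpu_hour)

job = place_job(gpus_needed=4_096)
print(f"Routing training job to {job.name}")  # CoreWeave in this toy example
```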
The AI Infrastructure Race | Demand Outstripping Supply
The deal underscores a broader reality in the AI industry: demand for frontier-model-grade compute continues to outstrip available supply, even as billions flow into new data center construction. Nvidia's Blackwell GPU architecture, the current standard for AI training, remains allocation-constrained through at least Q3 2026. Companies that locked in capacity early, as CoreWeave did through its direct Nvidia relationship, hold significant advantage.
The competitive dynamics are intensifying. OpenAI, Anthropic's primary rival, has its own massive infrastructure agreements, including a reported $100 billion Stargate partnership with SoftBank and Microsoft. Google is expanding its proprietary TPU fleet. Meta is spending more than $40 billion on AI infrastructure in 2026 alone. Political backlash against data center expansion has added permitting delays in key markets, further constraining supply.
For Anthropic, the CoreWeave deal is another step in a broader push to lock in computing resources from multiple providers, reducing dependence on any single source as competition for AI infrastructure intensifies across the industry. The company is also reportedly exploring designing its own custom AI chips, a move that would follow Google, Amazon, and Meta in pursuing in-house silicon optimized for specific workloads.
What This Means for CoreWeave | Diversification Beyond Meta
For CoreWeave, the Anthropic agreement further diversifies its customer base beyond its largest client. Meta accounts for the largest single share of CoreWeave's backlog, a concentration risk that investors flagged following the IPO. Adding Anthropic alongside OpenAI and Perplexity spreads that exposure across several of the most commercially successful AI companies and reduces the impact of any single customer scaling down.
CoreWeave's model, purpose-built GPU clouds leased on multiyear contracts, has proven remarkably capital-efficient. The company raises debt against committed revenue, then deploys the capital into new GPU clusters that are pre-sold before they come online. The Nvidia investment provides both capital and preferential access to new chip architectures, creating a flywheel that traditional cloud providers struggle to replicate at the same speed.
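A toy model makes that flywheel concrete. Every parameter below is an illustrative assumption, and the loop deliberately ignores debt service, delivery timelines, and utilization, but it shows how borrowing against pre-sold contracts compounds the backlog:

```python
# Toy model of the contract-backed flywheel described above: committed
# revenue supports debt, debt buys GPU capacity, new capacity is
# pre-sold, enlarging the backlog. Re-levering the full backlog each
# year overstates growth; this is a sketch, not a financial model.

backlog = 87.8e9        # committed revenue backlog, per the article
advance_rate = 0.5      # assumed share of backlog lenders will finance
cost_per_mw = 35e6      # assumed all-in buildout cost per megawatt
revenue_per_mw = 12e6   # assumed annual contracted revenue per megawatt

for year in range(1, 4):
    debt_raised = backlog * advance_rate       # borrow against contracts
    new_mw = debt_raised / cost_per_mw         # deploy into new clusters
    backlog += new_mw * revenue_per_mw * 5     # pre-sell ~5-year terms
    print(f"Year {year}: +{new_mw:,.0f} MW, backlog ${backlog / 1e9:.0f}B")
```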
The growing regulatory scrutiny of frontier AI models adds a layer of uncertainty to the infrastructure buildout. If international regulators impose compute caps or licensing requirements on the most capable models, the demand curve for GPU capacity could shift. But for now, every major AI lab is operating under the assumption that more compute equals better models, and the race to secure that compute shows no signs of slowing.
Written by
Jack Brennan