OBJECTWIRE

Independent · Verified · In-Depth

Tech · AI · Finance · 8 min read

Anthropic $40B Deal | $30B Milestones, IPO Plan, Claude vs Gemini

The $10 billion upfront is liquidity. The remaining $30 billion is a high-stakes roadmap: three milestone categories tied to Mythos model deployment, TPU v8 infrastructure migration, and enterprise client expansion, all timed to set up an October 2026 IPO with hardware risk removed from the prospectus.


Technology and Finance Desk

1. The $30B Roadmap | Three Milestone Categories

When Google announced its commitment to Anthropic on April 24, the headline was the $10 billion upfront. The strategic substance is the $30 billion that follows, contingent on Anthropic hitting a set of performance milestones across three interconnected categories. According to reporting from Bloomberg and TechCrunch, those categories are: the internal deployment and safety vetting of the Mythos model family, the migration of core training workloads to Google's TPU v8 architecture, and the continued expansion of enterprise "whale" clients. Together they define a five-year operating roadmap that Google can verify and, implicitly, influence. For the deal overview, see ObjectWire's initial $40B deal coverage.

BY THE NUMBERS

$10B Upfront Capital · $30B Milestone Capital · $40B Total Deal Value

Milestone 1 | Mythos Integration

Unreleased model, limited partner access

A significant portion of the contingent capital is linked to the development and internal deployment of Mythos, Anthropic's next-generation model. The model is reportedly restricted to a select group of partners including Google for cybersecurity and safety vetting before any broader release.

Milestone 2 | TPU v8 Migration

5GW of capacity, shifting from Nvidia clusters

Anthropic must successfully migrate core training workloads from Nvidia-based GPU clusters to Google's proprietary TPU v8 architecture. Milestones unlock progressively as Anthropic ingests the 5 gigawatts of power capacity Google Cloud has committed.

Milestone 3 | Enterprise Whale Expansion

1,000+ clients at $1M+ annually, targeting growth

With Anthropic already at $30B annualized revenue and over 1,000 enterprise customers spending $1M+ annually, the third category targets continued expansion of this high-value tier. Specific client count and revenue thresholds have not been disclosed.

Why Milestones Instead of Cash

The milestone structure gives Google meaningful visibility into Anthropic's technical roadmap and creates a dependency on Google infrastructure that compounds over five years. For Anthropic, it trades some strategic flexibility for the certainty of guaranteed capital without diluting below its chosen valuation floor.

2. Claude Code vs Gemini | The Benchmark War at Cloud Next '26

Despite the capital partnership, Google and Anthropic are competing head-to-head for enterprise developer workflows. Benchmarks released around the Google Cloud Next '26 conference show a near-parity landscape with distinct specialized edges that matter enormously for enterprise procurement decisions.

SWE-bench Scores | April 2026

Claude Code (Anthropic): 82.1% — leading the industry. Best for large multi-file refactoring and deep codebase understanding. Unique edge: 14.5-hour autonomous task horizon with Opus 4.6, enabling multi-session agentic work without human checkpoints.

Gemini Enterprise Agent (Google): 80.6% — close second. Strongest on speed, Firebase integration, and Android Studio workflows. Unique edge: native multimodality, meaning it can watch a screen recording of a bug and write the fix directly from the video input.

The 1.5-percentage-point SWE-bench gap is narrow enough that enterprise buyers are making decisions on workflow integration rather than raw benchmark performance. Claude Code's dedicated terminal interface targets engineering teams who want an agentic coding layer outside the IDE. Gemini's direct integration into the Agentic Data Cloud targets teams building production-ready applications within Google's own infrastructure. The two products are converging on the same buyer, but approaching from opposite directions. For context on the coding agent competitive landscape, see earlier ObjectWire reporting on Google DeepMind's internal response to Claude Code's enterprise momentum.
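The near-parity argument reduces to simple arithmetic. A minimal sketch using the April 2026 figures cited above (the dictionary and variable names are illustrative, not from any vendor API):

```python
# SWE-bench scores reported around Google Cloud Next '26, in percent.
scores = {
    "Claude Code (Anthropic)": 82.1,
    "Gemini Enterprise Agent (Google)": 80.6,
}

# Identify the leader and the gap in percentage points.
leader = max(scores, key=scores.get)
gap_pp = round(
    scores["Claude Code (Anthropic)"] - scores["Gemini Enterprise Agent (Google)"],
    1,
)

print(leader, f"leads by {gap_pp} percentage points")
```

A 1.5-point spread on a benchmark with run-to-run variance is exactly the regime where, as noted above, workflow integration rather than raw score decides procurement.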

BY THE NUMBERS

82.1% Claude Code SWE-bench · 80.6% Gemini Enterprise SWE-bench · 14.5 hrs Opus 4.6 Task Horizon

3. The Dual-Engine Infrastructure | Google Trains, Amazon Deploys

CFO Krishna Rao has described Anthropic's approach as "disciplined scaling," and the architecture of the Google and Amazon deals together reveals what that means in practice. The two cloud commitments are not redundant; they are deliberately complementary, with each provider assigned a distinct role in Anthropic's infrastructure stack.

Google Cloud | Training Engine

5GW capacity, TPU v8 silicon, Broadcom JV from 2027

Google provides the custom TPU silicon and raw power capacity needed to train massive future models including Mythos. The Broadcom joint venture adds 3.5GW of next-generation custom AI chips beginning in 2027, extending the training capacity runway well beyond current model generations.

Amazon AWS | Global Inference Engine

$100B ten-year deal, Trainium, Inferentia, every region

Amazon's $25B package is the visible portion of a decade-long infrastructure commitment that analysts estimate at $100B total. Via AWS Bedrock, Amazon ensures Claude is available in every global region with low latency, handling the inference workload that Anthropic's 1,000-plus enterprise clients demand at scale.

"We are building the capacity necessary to serve the exponential growth we have seen. This is a disciplined approach to scaling."

Krishna Rao, CFO, Anthropic

The dual-engine model means Anthropic is not betting its infrastructure on any single provider's hardware roadmap. If Google's TPU v8 migration hits delays, Amazon's Trainium clusters can absorb training demand; if AWS suffers regional outages, Google Cloud can pick up inference traffic. The architecture is also a negotiating hedge: neither provider can unilaterally cut capacity without triggering a competitive disadvantage that the other provider would immediately exploit. See also the ObjectWire Nvidia hub for context on why Anthropic's TPU migration away from Nvidia-based clusters is a significant chip market signal.

4. The IPO Factor | Removing Hardware Risk from the Prospectus

The timing of the Google and Amazon deals is not incidental. Analysts tracking Anthropic's trajectory have pointed to an October 2026 IPO as the likely destination, and the combined $65 billion in compute commitments from both providers serve a specific function in that context: they eliminate hardware shortage as a risk factor that a prospective public shareholder would need to price.

The IPO Calculation

An AI company going public in late 2026 without guaranteed compute faces a credible existential risk: if demand outpaces supply and training runs get queued, revenue growth stalls. By locking in $65B in committed infrastructure across two providers before filing, Anthropic can include a five-year compute roadmap directly in its S-1, converting what would otherwise be a speculative risk into a contractual certainty. That single structural change likely expands the addressable investor base significantly.

The valuation trajectory supports the IPO thesis. Anthropic's Series G in February 2026 set a formal valuation of $380 billion. The Google deal was structured at a slight discount, $350 billion, which may reflect either negotiating leverage on Google's part or Anthropic's preference for a clean round at a defensible number over a higher mark that could invite scrutiny. Bloomberg has reported that some private investors offered valuations as high as $800 billion, all of which Anthropic declined. A company that has walked away from $800 billion private offers does not need capital. It needs a public market event that crystallizes the valuation for institutional holders who cannot participate in private rounds.
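The figures reported above can be reconciled with back-of-envelope arithmetic. A sketch, using only the publicly cited numbers (all values in USD billions; the variable names are illustrative, not deal terms):

```python
# Google deal structure as reported: upfront plus milestone-contingent capital.
upfront = 10
milestones = 30
google_total = upfront + milestones  # the headline $40B

# Visible portion of the Amazon AWS package.
amazon_visible = 25

# Combined committed compute cited in the IPO thesis.
combined_compute = google_total + amazon_visible

# Valuation discount implied by the Google deal relative to Series G.
series_g = 380
google_deal_valuation = 350
discount_pct = round((series_g - google_deal_valuation) / series_g * 100, 1)

print(combined_compute, discount_pct)  # prints: 65 7.9
```

The roughly 7.9% haircut from the Series G mark is what the article characterizes as a "slight discount"; the $65B figure excludes the estimated but undisclosed remainder of the decade-long AWS commitment.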

The remaining variable is whether regulators in the EU and UK, both of which are actively reviewing the market structure implications of large cloud providers making compute-for-equity deals with frontier AI companies, will clear the arrangements before an October filing window. The reviews are ongoing as of April 2026. For continued coverage of this story, see the ObjectWire Google hub and the OpenAI hub for competitive context as the frontier model IPO race develops.

