Alphabet announced on April 24, 2026, that it is committing up to $40 billion to Anthropic, the AI research firm behind the Claude model family, in a deal structured as $10 billion upfront and $30 billion tied to specific performance milestones. The transaction values Anthropic at $350 billion, a discount to the valuation set in the company's most recent funding round. The announcement came days after Amazon committed a similarly structured $25 billion package, $5 billion upfront and $20 billion in milestones, through an expanded version of its existing AWS Bedrock hosting arrangement. Together, the two commitments mark the most concentrated single-week capital deployment in AI infrastructure history. For the full picture on Google's AI strategy, see the ObjectWire Google hub.
The Compute Currency | Gigawatts, TPUs, and Broadcom's 2027 Silicon
Both deals are structured as much around physical infrastructure as cash. The real strategic prize for Anthropic is guaranteed compute capacity at a scale the company could not procure on the spot market as its workloads grow.
Google's contribution includes 5 gigawatts of power capacity over five years and access to millions of its custom Tensor Processing Units (TPUs), the proprietary AI accelerator chips that underpin Google Cloud's training and inference infrastructure. Amazon's package provides additional multi-gigawatt capacity routed through Trainium and Inferentia chips, AWS's own custom silicon optimized for large-model training and inference, respectively.
A third hardware partner enters the picture in 2027: Broadcom is working with both Google and Anthropic to deliver 3.5 gigawatts of next-generation custom AI silicon, designed to support Anthropic's projected demand growth beyond what the current generation of training infrastructure can serve. Broadcom's involvement signals that Anthropic is hedging its chip dependency across multiple architectures rather than committing exclusively to any one provider's proprietary hardware stack. For related coverage on the custom silicon arms race, see ObjectWire's reporting on Nvidia's position in the AI chip market.
Anthropic's Revenue Trajectory | $30B Annualized, 1,000 Enterprise Customers at $1M-Plus
The investment scale is underpinned by a revenue picture that has moved faster than most analysts forecast. Anthropic's annualized revenue had reached $30 billion as of April 2026, up from approximately $9 billion in late 2025, an increase of more than threefold in about five months. The company now counts over 1,000 enterprise customers each spending more than $1 million annually, a concentration of large-contract adoption that reflects how deeply Claude, and particularly Claude Code, has embedded itself in enterprise engineering workflows.
CFO Krishna Rao framed the company's approach as deliberate restraint in the face of a valuation environment that has offered far more capital than Anthropic has chosen to accept: "We are building the capacity necessary to serve the exponential growth we have seen. This is a disciplined approach to scaling." Bloomberg has reported that some private investors have offered valuations as high as $800 billion, which Anthropic declined in favor of the structured compute-for-equity arrangements with Google and Amazon. The company's most recent formal valuation was set at $380 billion in its Series G round in February 2026.
The Google Frenemy Problem | Vertex AI Sells Claude While Gemini Competes Against It
Google's position in the Anthropic deal involves a structural tension that neither party has resolved. Google Cloud's Vertex AI platform currently sells Anthropic's Claude models as a managed service to enterprise customers, generating meaningful revenue for both companies. At the same time, Google's own Gemini Enterprise Agent Platform competes directly against Claude in the same enterprise segment, targeting the same customers with overlapping use cases.
This "frenemy" dynamic is not unique to the AI industry, but the capital scale of the current deal raises the stakes considerably. A $40 billion commitment with milestone-linked disbursements creates a long-term dependency that constrains both parties' strategic flexibility. For Anthropic, heavy reliance on Google infrastructure while competing against Google products introduces negotiating complexity that will compound as the company approaches potential IPO discussions. For Google, funding a competitor's growth while selling that competitor's products creates a conflict that regulators in the EU and UK have already flagged as a concern in their ongoing reviews of AI market concentration.
The outcome of those reviews, combined with the structure of the performance milestones that govern the remaining $30 billion, will define how much of the announced capital actually transfers over the five-year term. For context on the regulatory environment shaping these deals, see earlier ObjectWire coverage of Nvidia and Google Cloud's agentic AI infrastructure partnership and the ObjectWire OpenAI hub for competitive context across the frontier model landscape.