OBJECTWIRE

Independent · Verified · In-Depth

Technology

NVIDIA’s Blackwell B300 Demand Has ‘Completely Broken’ Data Center Planning Models

52-week lead times. Colocation sold out through 2027. Power grids overwhelmed. Hyperscalers are ordering at a pace the industry has never seen.

7 min read

By the Numbers:
  • B300 lead time (current): 52 weeks
  • US colocation capacity sold out through: Q4 2027
  • NVIDIA data center revenue — Q1 2026 (est.): $41B
  • Power draw per GB300 NVL72 rack: 120 kW
  • Hyperscaler GPU capex 2026 (combined): $320B+

The phrase making the rounds among data center executives this week: NVIDIA’s Blackwell B300 GPU demand has “completely broken” every planning model the industry built over the past three years. Lead times on the GB300 NVL72 rack system — NVIDIA’s flagship 72-GPU Blackwell Ultra configuration — have stretched to 52 weeks at major resellers, while enterprise customers report being told to expect 14–18 months for large-scale deployments.

The surge is not a supply problem, executives emphasize. TSMC is producing Blackwell dies at record speed. NVIDIA’s CoWoS-L packaging capacity at TSMC has expanded significantly since the Hopper-era bottleneck of 2023–2024. The issue is demand velocity — hyperscalers are ordering at a rate that makes the H100 wave look modest by comparison.

The Hyperscaler Arms Race

Microsoft, Google, Amazon, Meta, and Oracle collectively announced over $320 billion in AI infrastructure capital expenditure for 2026 in their most recent earnings calls. Analysts at Raymond James estimate that roughly 65% of that capex will flow directly to NVIDIA GPU procurement — the largest concentration of spend on a single vendor in tech history.
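The implied dollar figure is worth making explicit. A quick back-of-envelope check, using only the combined-capex and NVIDIA-share numbers reported above:

```python
# Back-of-envelope check on the capex figures cited in the article.
# Both inputs come from the text; nothing else is assumed.

combined_capex_2026 = 320e9   # $320B+ combined hyperscaler AI capex for 2026
nvidia_share = 0.65           # Raymond James estimate of capex flowing to NVIDIA

nvidia_bound_capex = combined_capex_2026 * nvidia_share
print(f"Estimated 2026 capex flowing to NVIDIA: ${nvidia_bound_capex / 1e9:.0f}B")
# → roughly $208B on these figures
```

That $208 billion estimate is why analysts describe this as the largest single-vendor concentration of spend in tech history.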

Amazon Web Services placed its largest-ever single hardware order in February: a reported 850,000 Blackwell B300 GPUs over 18 months, enough to fill multiple new data center campuses. Microsoft’s Azure team is in the process of commissioning 14 new data center sites across North America, Europe, and Asia specifically to house Blackwell deployments. Google DeepMind confirmed it is running its Gemini Ultra 2 training runs exclusively on GB300 NVL72 clusters.

The Power Problem Is Now the Constraint

The limiting factor has shifted from silicon to electricity. A single GB300 NVL72 rack draws 120 kilowatts at peak load — more than triple the power draw of a standard H100 rack. A 1,000-rack Blackwell cluster requires 120 megawatts of dedicated power, roughly equivalent to the peak load of a small city.
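The rack-to-cluster arithmetic above scales linearly, and facility planners typically add a cooling and power-distribution overhead on top of the raw IT load. A minimal sketch — the 120 kW rack figure is from the article; the PUE value is an illustrative assumption, not a reported number:

```python
# Rack-to-cluster power math for a Blackwell deployment.
# RACK_PEAK_KW is the article's figure; the PUE is an assumed
# overhead factor for cooling and power distribution.

RACK_PEAK_KW = 120            # GB300 NVL72 peak draw (article figure)
racks = 1_000

it_load_mw = racks * RACK_PEAK_KW / 1_000   # kW → MW
print(f"IT load for {racks} racks: {it_load_mw:.0f} MW")   # 120 MW, as stated

pue = 1.2                                    # assumed facility overhead (illustrative)
facility_mw = it_load_mw * pue
print(f"Facility load at PUE {pue}: {facility_mw:.0f} MW")
```

Even before overhead, 120 MW of dedicated IT load is the scale utilities plan small-city service around, which is why interconnection queues, not chips, now gate new builds.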

Data center developers report that power purchase agreements and utility interconnection queues are now the binding constraint for new builds. In Northern Virginia — the world’s largest data center market — the interconnection queue for new facilities has grown to over 40 gigawatts of pending capacity, according to Dominion Energy filings. At current approval rates, new facilities in the region won’t receive utility power until 2028 at the earliest.

The bottleneck has pushed hyperscalers toward aggressive alternatives: on-site natural gas generation, nuclear power purchase agreements (Microsoft’s Three Mile Island deal being the highest-profile example), and dedicated transmission line construction. Amazon has filed for permits to build a 2.4-gigawatt wind farm in Texas specifically to power its Blackwell cluster expansion.

Colocation Sold Out Through 2027

The secondary impact is a complete lockout of enterprise customers from colocation capacity. Equinix, Digital Realty, and Iron Mountain all report their US AI-grade colocation inventory — facilities with the power density and cooling infrastructure to support Blackwell racks — is sold out through the fourth quarter of 2027.

For enterprise AI teams that don’t operate their own data centers, the options are narrow: pay cloud provider rates for on-demand GPU access (which has risen 35–50% year-over-year for H100-equivalent compute), wait for new colocation capacity in 2028, or relocate workloads to international markets where power is more available — the Middle East, Southeast Asia, and Scandinavia are all seeing aggressive data center investment as a result.

NVIDIA’s Revenue Trajectory

Wall Street analysts are revising NVIDIA estimates upward for the third consecutive quarter. The consensus Q1 2026 data center revenue estimate has risen to $41 billion — up from $35.6 billion in Q4 2025, itself a record. Full-year 2026 data center revenue projections now range from $160 billion to $185 billion, which would represent NVIDIA becoming one of the highest-revenue technology companies in history based on a single product category.

Jensen Huang addressed the demand environment at a customer event in San Jose last week: “The world is building the infrastructure for a new kind of intelligence. Every country, every company is now racing to build it. We are going as fast as we physically can.” He confirmed that NVIDIA is working with TSMC on the next generation — Rubin — with a planned production ramp starting in late 2026, though he declined to give specific timeline commitments given how quickly the demand picture is moving.

What This Means for Enterprise Buyers

For companies that don’t have existing GPU allocations locked in, the practical advice from procurement consultants is blunt: you are not getting Blackwell hardware at scale in 2026. The allocation pipeline is controlled by hyperscalers and a small number of sovereign AI programs (UAE, Saudi Arabia, Singapore, India). Enterprise orders placed today are being quoted Q1–Q2 2027 delivery windows.

The alternative ecosystem is growing quickly as a result. AMD’s MI355X, Intel’s Gaudi 3, and a growing number of custom silicon providers (Groq, Cerebras, SambaNova) are all seeing dramatically increased enterprise interest from companies that cannot wait in the NVIDIA queue. Whether any of these can meaningfully fill the gap is an open question — NVIDIA’s software moat via CUDA and the NVLink interconnect fabric remains a significant barrier to switching.


Filed under

#NVIDIA · #AI · #Blackwell · #Data Centers · #GPU · #Infrastructure · #Cloud

Discussion

Comments post live to the ObjectWire Discord server.

Written by

Conan Boyle

Technology Editor