Technology

Sam Altman Calls for Government Oversight of AI After Controversial Pentagon Deal

The OpenAI CEO said the U.S. government should hold 'more authority' over private AI companies. The statement arrived days after OpenAI signed a classified DoD contract and closed a $110 billion funding round at an $840 billion valuation.

Alfanasa
March 5, 2026 · 6 min read

OpenAI CEO Sam Altman stated on March 3, 2026, that the U.S. government should hold “more authority” over private AI companies than it currently does. The comment arrived in response to widespread criticism following OpenAI's March 1 announcement of a contract to deploy its models within classified Department of Defense networks, just hours after the Trump administration blacklisted rival Anthropic from all federal use.

The sequence of events unfolded over a compressed five-day window between February 27 and March 3, 2026, and has triggered a public reckoning on AI governance, military deployment, and corporate positioning in a sector where the line between national security tool and commercial product grows thinner by the quarter.

📊 $840 billion: OpenAI's post-money valuation after closing a $110 billion funding round on March 1, 2026, the same day it announced the Pentagon contract.

Timeline of Events: February 27 to March 3

Negotiations between Anthropic and the Department of Defense collapsed on February 27, 2026, after Anthropic CEO Dario Amodei declined to waive safety guardrails that would have prohibited Claude models from being used for mass domestic surveillance or fully autonomous lethal weapons. The Trump administration responded by labeling Anthropic a “supply-chain risk” and directing all federal agencies to cease using Anthropic technology immediately, with a six-month phase-out period. The blacklisting effectively removed one of the most capable AI systems from the government's toolset overnight.

By midnight PST on February 28 (early March 1 on the East Coast), OpenAI had announced a contract to integrate its models into DoD classified networks for intelligence analysis and decision support. The agreement included technical restrictions prohibiting use for domestic surveillance of U.S. persons or autonomous lethal weapons. The timing drew immediate scrutiny: within 24 hours, OpenAI appeared to have stepped into the vacancy left by Anthropic's refusal.

Between March 1 and March 3, the hashtag #CancelChatGPT trended across social media platforms, with users accusing OpenAI of opportunistically filling the gap created by Anthropic's principled stance. On March 1, OpenAI closed a $110 billion funding round valuing the company at $840 billion post-money. On March 3, Altman issued a public statement and internal memo acknowledging that the rollout “looked opportunistic and sloppy” but defending the technical safeguards embedded in OpenAI's models.

Altman's Position on Government Authority

Altman argued that private companies should not hold ultimate decision-making power over AI systems with national-security implications. “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons,” he stated. “We are going to amend our deal to add language that explicitly prohibits the use of our systems for domestic surveillance of U.S. persons.” He further called for expanded government authority to set binding red lines, stating that technical and contractual guardrails alone are insufficient without regulatory backing.

The position places Altman in an unusual rhetorical space. He is simultaneously the CEO of a company that just signed a defense contract and the public voice arguing that companies like his should have less autonomy in these decisions. Critics have noted the tension: calling for regulation after signing the deal, rather than before, suggests the call itself functions as reputation management rather than genuine policy advocacy. Supporters counter that Altman is one of the few tech executives willing to publicly concede the limits of self-governance.

Technical Guardrails vs. Contractual Refusal

The core disagreement between OpenAI and Anthropic centers on methodology rather than principle. Both companies oppose mass surveillance and autonomous weapons, but they enforce those positions differently. Anthropic insisted on a blanket contractual refusal to supply models for any such use cases. When the DoD required flexibility on those terms, Amodei walked away. This approach drew respect from safety advocates but ultimately cost Anthropic its federal access.

OpenAI claims its approach differs by embedding prohibitions directly into model architecture. Specific measures include hard-coded refusal prompts that block queries related to domestic surveillance of U.S. persons, output filters that detect and reject content consistent with autonomous lethal-weapon targeting, and audit trails that log all DoD interactions for third-party review. The question is whether architectural guardrails can withstand the institutional pressures of a classified environment where the operator has both the motivation and the clearance to push boundaries. This question has no definitive answer as of March 2026, and OpenAI has not disclosed which third party will conduct audits.
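OpenAI has not published how these safeguards are implemented. As a purely illustrative sketch, assuming nothing about the actual system, the general pattern the company describes, a refusal filter paired with an append-only audit trail, might look like this (all names and the marker list are hypothetical):

```python
# Illustrative sketch only: a generic "refusal filter + audit trail" pattern,
# NOT OpenAI's actual implementation, which is not public.
import hashlib
import json
import time

# Hypothetical policy markers standing in for a real classifier.
PROHIBITED_MARKERS = ("domestic surveillance", "autonomous targeting")

audit_log = []  # stand-in for an append-only, third-party-reviewable store


def guarded_query(prompt: str) -> str:
    """Refuse prohibited requests and log every interaction for audit."""
    refused = any(marker in prompt.lower() for marker in PROHIBITED_MARKERS)
    entry = {
        "ts": time.time(),
        # Log a hash rather than the classified prompt text itself.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "refused": refused,
    }
    audit_log.append(json.dumps(entry))
    if refused:
        return "REFUSED: request falls under a prohibited use category."
    return "OK: request forwarded to the model."


print(guarded_query("Summarize open-source reporting on supply routes"))
print(guarded_query("Plan domestic surveillance of U.S. persons"))
```

Even in this toy form, the weakness critics point to is visible: the filter is only as good as its marker list, and the operator of the classified environment controls both the list and the log.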

Funding Round and Valuation Context

OpenAI closed the $110 billion funding round on March 1, 2026, led by existing investors and new participants, bringing its post-money valuation to $840 billion. The round was announced on the same day as the DoD contract, prompting questions about whether valuation pressure influenced the timing. For comparison, Anthropic's last reported valuation stood at $61.5 billion in late 2025, roughly 7 percent of OpenAI's current figure. The gap partly reflects diverging revenue trajectories in a market that rewards speed and scale over caution; AI tool companies such as Cursor, for example, have reportedly doubled ARR to $2 billion in three months.

Current Status of Major AI Providers With the U.S. Government

As of March 5, 2026, the federal AI landscape has shifted dramatically. OpenAI is active and deploying in DoD classified networks with amended technical safeguards. Anthropic has been blacklisted, with a six-month phase-out ordered for all federal use. xAI, Elon Musk's AI company, is active and cleared for DoD use despite federal agencies raising safety concerns about its Grok chatbot, which the GSA described as “sycophantic and too susceptible to manipulation.” The DoD has not disclosed the scope or value of the OpenAI contract beyond “intelligence analysis and decision support” applications.

The arrangement positions OpenAI and xAI as the two primary AI providers for classified government work, while Anthropic's Claude, which had been the sole model cleared for classified use until Grok's approval, faces complete removal from federal systems. The irony is not lost on observers: the company that refused to compromise on safety now has no seat at the table, while the company that signed the contract calls for someone else to set the rules.

Upcoming Scrutiny and Public Appearances

Altman is scheduled to speak at the Game Developers Conference in San Francisco on March 18, 2026, where Microsoft is also expected to discuss Project Helix hardware. The DoD contract and funding round are likely topics of discussion. Amazon CEO Andy Jassy, who appeared alongside Altman on CNBC on February 27, made separate comments about AI reducing headcount across many long-standing roles, reflecting the broader corporate consensus that AI deployment will accelerate regardless of the governance framework surrounding it.

Congressional hearings on the Anthropic blacklisting and the OpenAI contract are expected in the coming weeks, with senators from both parties signaling interest in examining whether the administration's vendor decisions were driven by security assessments or political considerations. The outcome of those hearings could determine whether Altman's call for government oversight becomes policy or remains a press statement.

When a CEO calls for more government authority the same week his company closes a $110 billion round and signs a Pentagon deal, the only thing clearer than the timeline is the number of headlines it generates.

Tags

#OpenAI · #Sam Altman · #Pentagon · #Department of Defense · #Anthropic · #Dario Amodei · #xAI · #Grok · #AI Regulation · #CancelChatGPT · #AI Governance · #National Security


Written by

Alfanasa

Technology Reporter

Part of ObjectWire coverage