OpenAI CEO Admits 'Rushed' Deal with Defense Department After Backlash

Sam Altman says OpenAI will amend contract to limit use of AI for domestic surveillance

Mar. 2, 2026 at 9:15pm

OpenAI CEO Sam Altman acknowledged that the company "shouldn't have rushed" its recent deal with the U.S. Department of Defense and said OpenAI would amend the contract to include language prohibiting the use of its AI systems for domestic surveillance of U.S. persons. Altman said the company was trying to "de-escalate things" but that the rushed deal "looked opportunistic and sloppy." The announcement follows a public feud between the Pentagon and rival AI company Anthropic over safeguards on Anthropic's AI tools.

Why it matters

The deal between OpenAI and the Defense Department has raised concerns about powerful AI systems being used for surveillance and other purposes without proper oversight or safeguards. Altman's admission that the deal was rushed and needs revision highlights the challenges tech companies face in navigating sensitive partnerships with government agencies.

The details

In a post on X, formerly known as Twitter, Altman said OpenAI would amend the contract with the Defense Department to include language stating "the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals." He also said the Pentagon had affirmed that OpenAI's tools would not be used by intelligence agencies like the NSA. Altman admitted he "shouldn't have rushed" to get the deal out on Friday, just hours after the White House directed federal agencies to stop using rival AI company Anthropic's tools.

  • On Friday, OpenAI announced its deal with the Defense Department, just hours after the White House directed federal agencies to stop using Anthropic's AI tools.
  • That same day, the U.S. carried out strikes on Iran.

The players

Sam Altman

The CEO of OpenAI, the company behind the popular ChatGPT AI assistant.

Anthropic

A rival AI company whose tools, including the Claude AI system, have been the subject of a public feud with the U.S. Defense Department over safeguards and use cases.

U.S. Department of Defense

The federal agency that struck a deal with OpenAI, which has raised concerns about the potential use of powerful AI systems for surveillance and other purposes without proper oversight.


What they’re saying

“There are many things the technology just isn't ready for, and many areas we don't yet understand the tradeoffs required for safety.”

— Sam Altman, CEO, OpenAI (X)

“We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”

— Sam Altman, CEO, OpenAI (X)

“In my conversations over the weekend, I reiterated that Anthropic should not be designated as a [supply chain risk], and that we hope the [Department of Defense] offers them the same terms we've agreed to.”

— Sam Altman, CEO, OpenAI (X)

What’s next

OpenAI will work with the Pentagon on technical safeguards to ensure its AI systems are not used for domestic surveillance or other inappropriate purposes.

The takeaway

The controversy surrounding OpenAI's deal with the Defense Department highlights the delicate balance tech companies must strike when partnering with government agencies, particularly on sensitive issues like surveillance and national security. Altman's concession that the deal was rushed underscores how much thorough planning and stakeholder engagement matter if such partnerships are to uphold ethical principles and public trust.