OpenAI Amends A.I. Deal With the Pentagon

The new pact includes additional protections to prevent the use of the company's technology for mass surveillance of Americans.

March 2, 2026, 9:47 p.m.

After a weekend of criticism, OpenAI said on Monday that its deal to provide artificial intelligence technologies for the Defense Department's classified systems now included additional protections to prevent its technology from being used in mass surveillance of Americans. The amendment prohibits the "deliberate tracking, surveillance or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information."

Why it matters

The deal between OpenAI and the Pentagon has raised concerns about the potential use of AI technology for domestic surveillance, which would conflict with OpenAI's stated principles on the ethical development of AI. The amendment to the agreement aims to address those concerns and ensure the technology is not used unlawfully.

The details

Under the original deal, OpenAI agreed to let the Pentagon use its AI systems for any lawful purpose. The amended agreement now specifically prohibits the use of OpenAI's technology "for domestic surveillance of U.S. persons and nationals," in line with relevant federal laws, and adds language barring the "deliberate tracking, surveillance or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information."

  • On Friday, OpenAI announced its original agreement with the Pentagon.
  • On Monday, OpenAI said the deal had been amended to include additional protections.

The players

OpenAI

An artificial intelligence company that has now amended its deal with the Pentagon to include additional protections against the use of its technology for domestic surveillance.

The Pentagon

The U.S. Department of Defense, which had originally negotiated a deal with OpenAI to use the company's AI systems for any lawful purpose.

Sam Altman

The chief executive of OpenAI, who said the amended deal was meant to "protect the civil liberties of Americans" and that the company will "continue to learn and refine as we go."

Anthropic

An AI company that had tussled with the Pentagon over how its AI could be used, ultimately failing to reach an agreement by the Pentagon's deadline.

Pete Hegseth

The U.S. Defense Secretary, who declared Anthropic a "supply-chain risk to national security" after the company failed to reach a deal with the Pentagon.


What they’re saying

“It's critical to protect the civil liberties of Americans, and there was so much focus on this, that we wanted to make this point especially clear.”

— Sam Altman, Chief Executive, OpenAI

“Just like everything we do with iterative deployment, we will continue to learn and refine as we go.”

— Sam Altman, Chief Executive, OpenAI

What’s next

The Defense Department and Anthropic did not immediately respond to a request for comment on the amended OpenAI deal.

The takeaway

The amended OpenAI-Pentagon deal highlights the ongoing tensions and concerns around the use of AI technology, particularly when it comes to issues of privacy and civil liberties. This case underscores the need for clear guidelines and safeguards to ensure AI is developed and deployed responsibly, especially in sensitive government and military applications.