OpenAI CEO Addresses Military Use of AI Tech

Altman says company doesn't control how Pentagon deploys its models

Published on Mar. 6, 2026

OpenAI CEO Sam Altman told employees that the company doesn't get to decide how the Pentagon uses its artificial intelligence technology, including for military operations. This comes after OpenAI reached a deal to deploy its models on the Pentagon's classified network, shortly after a similar agreement between Anthropic and the Department of Defense fell apart.

Why it matters

The deal between OpenAI and the Pentagon raises questions about who controls the use of AI technology: the companies that develop it or the government agencies that deploy it. This issue has become a point of contention, with Anthropic refusing to allow its models to be used for autonomous weapons or mass surveillance, leading to the collapse of its Pentagon deal.

The details

Altman said in an all-hands meeting that OpenAI doesn't "get to weigh in" on how the military uses its AI, such as whether the Iran strike or Venezuela invasion was good or bad. OpenAI announced its Pentagon deal days after Anthropic's agreement fell apart over similar red lines. The Department of War, as it has been informally renamed, argued it needs Anthropic's technology for all lawful use cases, leading to the breakdown in talks. After the deal with OpenAI faced criticism, Altman said the company "shouldn't have rushed" into it and will add stronger language prohibiting domestic surveillance.

  • On March 5, 2026, OpenAI announced its deal with the Pentagon to deploy its models on the classified network.
  • The announcement came hours after the Anthropic-Pentagon deal fell apart.

The players

Sam Altman

The chief executive officer of OpenAI Inc.

Anthropic

An AI company responsible for the chatbot Claude, which was in talks with the Pentagon before the deal fell apart.

Department of War

The informally renamed Department of Defense, which is seeking to use advanced AI models like Anthropic's for a variety of military purposes.

Pete Hegseth

The Defense Secretary who declared Anthropic would be designated a 'supply chain risk', effectively cutting the company off from government work.

Sean Parnell

A spokesperson for the Department of War who addressed Anthropic's red lines in a social media post.


What they’re saying

“So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don't get to weigh in on that.”

— Sam Altman, CEO, OpenAI (Source familiar with the meeting)

“The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement. Here's what we're asking: Allow the Pentagon to use Anthropic's model for all lawful purposes. This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk.”

— Sean Parnell, Spokesperson, Department of War (Social media post)

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

— Sam Altman, CEO, OpenAI (Statement)

“We are going to amend our deal to add this language, in addition to everything else: 'Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.'”

— Sam Altman, CEO, OpenAI (Statement)

What’s next

OpenAI says it will amend its Pentagon agreement to add explicit language prohibiting the intentional use of its AI systems for domestic surveillance of U.S. persons.

The takeaway

This dispute highlights the ongoing tension between tech companies and the military over the use of advanced AI systems, with both sides seeking to maintain control over how the technology is deployed. The outcome could set important precedents for the future of AI governance and the balance of power between the private sector and government.