OpenAI CEO Addresses Military Use of AI Tech

Altman says company doesn't control how Pentagon uses its models

Published on Mar. 6, 2026

OpenAI CEO Sam Altman told employees that the company doesn't get to decide how the Pentagon uses its artificial intelligence technology, including in military operations. This comes after OpenAI reached a deal to deploy its models on the Pentagon's classified network, following the breakdown of a similar agreement between the military and Anthropic.

Why it matters

The deal between OpenAI and the Pentagon raises a question about who controls the use of AI technology: the companies that develop it or the government that deploys it. The issue has become a point of contention, with Anthropic refusing to allow its models to be used for autonomous weapons or mass surveillance of Americans.

The details

Altman said at an all-hands meeting that OpenAI doesn't "get to weigh in" on how the military uses its AI, such as whether the Iran strike or Venezuela invasion was good or bad. This follows OpenAI's announcement that it had reached an agreement with the Pentagon to deploy its models on the military's classified network, hours after Anthropic's deal with the Pentagon fell apart over similar issues.

  • On March 6, 2026, OpenAI CEO Sam Altman addressed employees about the company's deal with the Pentagon.
  • Last week, the Pentagon addressed Anthropic's red lines, saying it needs the company's technology for all lawful use cases.

The players

Sam Altman

The CEO of OpenAI, the company behind the popular AI chatbot ChatGPT.

Anthropic

An AI company responsible for the chatbot Claude, which was in talks with the Pentagon before the negotiations fell apart.

Department of War

The informally renamed Department of Defense, which is seeking to use AI technology from companies like OpenAI and Anthropic for military operations.


What they’re saying

“So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don't get to weigh in on that.”

— Sam Altman, CEO, OpenAI (Source familiar with the meeting)

“The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement. Here's what we're asking: Allow the Pentagon to use Anthropic's model for all lawful purposes. This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk.”

— Sean Parnell, Spokesperson, Department of War (Social media post)

What’s next

OpenAI will move ahead with deploying its models on the Pentagon's classified network, while the Pentagon continues to press Anthropic to drop its restrictions and allow its models to be used for all lawful purposes.

The takeaway

The episode highlights the ongoing debate over military use of AI: companies like OpenAI and Anthropic are weighing how much control to retain over how their models are deployed, while the Pentagon argues it needs unfettered access to the technology for all lawful purposes.