OpenAI CEO Says Company Can't Control Military's Use of Its Tech

Altman tells employees OpenAI doesn't get to weigh in on how Pentagon uses its AI models.

Published on Mar. 5, 2026

OpenAI CEO Sam Altman told employees at an all-hands meeting that the company doesn't "get to make operational decisions" about how its artificial intelligence technology is used by the Pentagon, according to a source familiar with the meeting. Altman's comments came days after OpenAI announced an agreement to deploy its models on the Pentagon's classified network, and hours after a similar deal between Anthropic and the Department of Defense fell apart.

Why it matters

The dispute between AI companies and the Pentagon over the use of their technology highlights the tension between private firms and government agencies over who gets to control how advanced AI is deployed, especially for military and national security purposes.

The details

Altman said in the meeting that OpenAI doesn't "get to weigh in" on decisions like "whether the Iran strike was good and the Venezuela invasion was bad." The Pentagon has argued that it needs access to the most advanced AI models for all lawful use cases, while companies like Anthropic have pushed back, drawing red lines around autonomous weapons and domestic surveillance. After talks with Anthropic broke down, the Pentagon said it would designate the company a "supply chain risk," effectively cutting it off from government work.

  • On March 3, 2026, OpenAI announced its agreement to deploy models on the Pentagon's classified network.
  • On March 4, 2026, the talks between Anthropic and the Pentagon fell apart over the company's red lines.

The players

Sam Altman

The chief executive officer of OpenAI Inc., a prominent artificial intelligence company.

Anthropic

The AI company behind the chatbot Claude, which was in talks with the Pentagon before negotiations fell apart.

Department of War

The Department of Defense, informally rebranded as the Department of War, which is seeking access to advanced AI models for military and national security purposes.


What they’re saying

“So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don't get to weigh in on that.”

— Sam Altman, CEO, OpenAI (source)

“The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement. Here's what we're asking: Allow the Pentagon to use Anthropic's model for all lawful purposes. This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk.”

— Sean Parnell, Spokesperson, Department of War (Social media post)

What’s next

A judge is expected to decide Tuesday whether Anthropic can continue working with the Pentagon.

The takeaway

Neither side has an easy exit: the Pentagon wants unrestricted access to the most advanced models for all lawful purposes, while AI companies must weigh lucrative government contracts against the ethical red lines they have publicly drawn.