OpenAI CEO Sides with Anthropic on Military AI 'Red Lines'

Altman says he shares Anthropic's restrictions on use of AI models for domestic surveillance and autonomous weapons.

Feb. 27, 2026 at 8:03pm

OpenAI CEO Sam Altman has voiced support for the 'red lines' set by rival Anthropic on how the military can use AI models, amid an escalating feud between Anthropic and the Pentagon. Altman said he shares Anthropic's restrictions on using AI for domestic mass surveillance or fully autonomous weapons, even as the Defense Department has threatened to cancel Anthropic's $200 million contract if it doesn't comply with the military's demands to allow unrestricted use of its technology.

Why it matters

The standoff between Anthropic and the Pentagon highlights the growing tensions between AI companies and the military over the use of advanced technologies. As AI becomes more powerful and ubiquitous, there are concerns about its potential misuse, particularly for surveillance and autonomous weapons. Altman's support for Anthropic's position could complicate the Pentagon's efforts to replace the company if it follows through on its threat to cancel the contract.

The details

The Department of Defense has given Anthropic a deadline of 5:01 p.m. ET today to drop restrictions that bar its AI model, Claude, from being used for domestic mass surveillance or fully autonomous weapons. The Pentagon has said it doesn't intend to use AI in those ways, but requires AI companies to allow their models to be used 'for all lawful purposes.' Defense officials have threatened to invoke the Korean War-era Defense Production Act (DPA) to compel Anthropic to allow use of its tools and have warned they would label Anthropic a 'supply chain risk,' potentially blacklisting it from lucrative government contracts.

  • On Thursday evening, OpenAI CEO Sam Altman sent an internal note to staff saying the company is seeking to negotiate a deal with the Pentagon to deploy its models in classified systems, with exclusions preventing their use for surveillance in the U.S. or to power autonomous weapons without human approval.

The players

Sam Altman

The CEO of OpenAI, who has voiced support for the 'red lines' set by rival Anthropic on how the military can use AI models.

Anthropic

An AI company that has set restrictions on how its AI model, Claude, can be used by the military, including for domestic mass surveillance or fully autonomous weapons.

Department of Defense

The U.S. military agency that has threatened to cancel Anthropic's $200 million contract if the company does not comply with the Pentagon's demands to allow unrestricted use of its AI technology.

Emil Michael

The Pentagon's undersecretary for research and engineering, who has accused Anthropic CEO Dario Amodei of having a 'God-complex' and putting the nation's safety at risk.


What they’re saying

“I don't personally think the Pentagon should be threatening DPA against these companies.”

— Sam Altman, CEO, OpenAI (CNBC)

“We cannot in good conscience accede to their request.”

— Dario Amodei, CEO, Anthropic (Anthropic statement)

“He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk.”

— Emil Michael, Undersecretary for Research and Engineering, Department of Defense (X)

What’s next

The judge in the case will decide on Tuesday whether to allow Anthropic to maintain its restrictions on the military's use of its AI model.

The takeaway

The standoff comes down to who sets the limits on military use of AI: the Pentagon, which wants its contractors' models available 'for all lawful purposes,' or the companies that build them. With Altman publicly backing Anthropic's red lines on surveillance and autonomous weapons, the Pentagon faces pushback from both of its leading AI suppliers, even as it threatens contract cancellation and the DPA to force compliance.