Pentagon Brands Anthropic CEO a 'Liar' with 'God Complex' as Deadline Looms

The Department of War has given Anthropic until Friday to remove restrictions on military use of its AI, or face being labeled a national security threat.

Feb. 27, 2026 at 3:52pm

The Department of War has publicly questioned the character of Anthropic CEO Dario Amodei, accusing him of being a 'liar' with a 'God-complex', as a dispute over the military's use of Anthropic's AI technology reaches a critical deadline. Anthropic has refused to remove restrictions it has placed on the use of its AI models for autonomous weapons and mass surveillance, leading the Pentagon to threaten severe consequences if the company does not comply by Friday.

Why it matters

The Anthropic-Pentagon fight now threatens to spiral into an industry-wide rebellion among AI company employees over how the military uses the systems they build. The outcome could set important precedents regarding the independence of AI companies and their ability to uphold ethical standards when working with the U.S. government.

The details

Anthropic CEO Dario Amodei has published a statement explaining the company's position, arguing that frontier AI systems are not reliable enough to power fully autonomous weapons and that powerful AI can be misused for mass surveillance. The Pentagon has demanded Anthropic remove these restrictions by Friday or face having its $200 million contract canceled or being labeled a 'supply-chain risk'. The Department of War has also threatened to invoke the Cold War-era Defense Production Act to compel Anthropic to hand over an unrestricted version of its AI model.

  • The Pentagon has given Anthropic until 5:01 p.m. on Friday, February 27, 2026 to comply with its demands.
  • In 2023, the Biden Administration invoked the Defense Production Act to compel frontier AI labs to hand over information about the safety of their AI models.

The players

Dario Amodei

The CEO of Anthropic, an AI company that has refused to remove restrictions on how the U.S. military can use its AI models.

Emil Michael

The U.S. Under Secretary of War, who has publicly called Amodei 'a liar' with a 'God-complex'.

Anthropic

An AI company that has placed restrictions on the use of its AI models for autonomous weapons and mass surveillance, leading to a dispute with the U.S. Department of War.

Pentagon

The U.S. Department of War, which has demanded Anthropic remove its restrictions on military use of its AI models or face severe consequences.

Seán Ó hÉigeartaigh

The executive director of Cambridge's Centre for the Study of Existential Risk, who has warned that the Pentagon's actions against Anthropic set a 'very dangerous precedent'.


What they’re saying

Frontier AI systems are “not reliable enough to power fully autonomous weapons” and, without proper oversight, “cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day.”

— Dario Amodei, CEO, Anthropic

“Using it against a domestic company for reasons of them not being willing to bend on some principles of this sort is really quite escalatory and unprecedented.”

— Seán Ó hÉigeartaigh, Executive Director, Centre for the Study of Existential Risk

“They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand.”

— Anonymous current employee, Google DeepMind or OpenAI

What’s next

The judge in the case will decide on Tuesday whether or not to allow Anthropic to continue operating under its current contract terms with the Pentagon.

The takeaway

This dispute highlights the growing tension between the tech industry and the U.S. military over the use of advanced AI systems. Whether Anthropic holds its line by Friday, and at what cost, will shape how much latitude AI companies have to set ethical limits on government use of their technology.