Pentagon Official Clashes with Anthropic Over Autonomous Warfare

Defense Undersecretary says AI company's ethical restrictions are an 'irrational obstacle' to military's autonomous weapons plans

Published on Mar. 7, 2026

A top Pentagon official, U.S. Defense Undersecretary Emil Michael, said he clashed with AI company Anthropic over the use of its technology in autonomous weapons systems. Michael claimed Anthropic's ethical restrictions on the use of its chatbot Claude were an 'irrational obstacle' as the military pursues greater autonomy for drones, vehicles and other machines to compete with rivals like China. Anthropic has disputed parts of Michael's account and emphasized that its protections were narrow and not based on existing uses of Claude.

Why it matters

This dispute highlights the growing tension between the military's push for more autonomous weapons and AI companies' ethical concerns about the use of their technology. It reflects the broader debate over the role of AI in warfare and the need to balance national security priorities with ethical considerations.

The details

Michael, the Pentagon's chief technology officer, said the dispute arose from debates over how AI could be used in President Trump's planned Golden Dome missile defense program, which aims to put U.S. weapons in space. Michael said he needs a 'reliable, steady partner' that won't 'wig out' over the military's autonomous weapons plans. The Pentagon has formally designated Anthropic as a supply chain risk, cutting off its defense work, a move the company plans to challenge in court.

  • In August 2025, Michael said he took over the military's 'AI portfolio' and began scrutinizing Anthropic's contracts.
  • In late 2025, the Pentagon began insisting Anthropic and other AI companies allow for 'all lawful use' of their technology, which Anthropic resisted.

The players

Emil Michael

U.S. Defense Undersecretary and the Pentagon's chief technology officer.

Anthropic

An AI company based in San Francisco that has disputed parts of Michael's account of the dispute.

Dario Amodei

CEO of Anthropic.

Donald Trump

U.S. President who ordered federal agencies to stop using Anthropic's chatbot Claude.

David Sacks

Former PayPal executive who is now Trump's AI czar and has been a vocal critic of Anthropic.


What they’re saying

“I need a reliable, steady partner that gives me something, that'll work with me on autonomous, because someday it'll be real and we're starting to see earlier versions of that. I need someone who's not going to wig out in the middle.”

— Emil Michael, U.S. Defense Undersecretary (All-In podcast)

“Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.”

— Dario Amodei, CEO, Anthropic (Anthropic statement)

What’s next

The next stage of the dispute between the Pentagon and Anthropic will likely play out in court, as Anthropic has vowed to sue over the Pentagon's designation of the company as a supply chain risk.

The takeaway

The standoff shows how unsettled the rules of engagement remain between the military and the AI companies it depends on. Until clear policies govern how commercial AI can be used in warfare, clashes like this one — pitting national security priorities against companies' ethical restrictions — are likely to recur.