Pentagon Clashes with AI Firm Anthropic Over Autonomous Warfare

Defense official says Anthropic's ethical restrictions were an "irrational obstacle" as military pursues greater AI autonomy.

Published on Mar. 7, 2026

A top Pentagon official, U.S. Defense Undersecretary Emil Michael, said Anthropic's dispute with the government over the use of its AI technology in autonomous weapons grew out of a debate over how AI could be used in future military programs. Michael criticized the ethical restrictions Anthropic places on its chatbot Claude, viewing them as an obstacle as the U.S. seeks to give drones, vehicles, and other machines greater autonomy to compete with rivals like China.

Why it matters

The clash between the Pentagon and Anthropic highlights the growing tension between the military's desire for more autonomous weapons systems and AI companies' concerns about the ethical implications of such technology. This dispute could have broader implications for the regulation and use of AI in the defense sector.

The details

Michael, the Pentagon's chief technology officer, said he began scrutinizing Anthropic's contracts and questioned the company over terms of use that he deemed too restrictive. Anthropic resisted the Pentagon's demand for "all lawful use" of its technology, while other AI firms like Google and OpenAI agreed to the change. Anthropic also did not want its technology used for mass surveillance of Americans, another sticking point in the negotiations.

  • The Pentagon formally designated Anthropic a supply chain risk in March 2026, cutting off its defense work.
  • President Trump ordered federal agencies to stop using Anthropic's chatbot Claude, though he gave the Pentagon six months to phase out the product.

The players

Emil Michael

U.S. Defense Undersecretary and the Pentagon's chief technology officer.

Dario Amodei

CEO of Anthropic, the AI company that clashed with the Pentagon over the use of its technology in autonomous weapons.

Anthropic

An AI company based in San Francisco that has vowed to sue the Pentagon over its designation as a supply chain risk.

President Donald Trump

The Republican president who ordered federal agencies to stop using Anthropic's chatbot Claude.

Golden Dome

A missile defense program that aims to put U.S. weapons in space, which was discussed in the Pentagon's negotiations with Anthropic.

What they’re saying

“I need a reliable, steady partner that gives me something, that'll work with me on autonomous, because someday it'll be real and we're starting to see earlier versions of that. I need someone who's not going to wig out in the middle.”

— Emil Michael, U.S. Defense Undersecretary and Pentagon Chief Technology Officer ("All-In" podcast)

“Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.”

— Dario Amodei, CEO, Anthropic (Anthropic statement)

What’s next

The next stage of the dispute between the Pentagon and Anthropic will likely happen in court, as Anthropic has vowed to sue over the Pentagon's designation of the company as a supply chain risk.

The takeaway

The Pentagon's push for autonomous weapons has collided with an AI company's ethical limits on its own technology, and the courts may now decide where the line falls. The outcome could set important precedents for the regulation and use of AI in the defense sector.