Pentagon labels AI company Anthropic a supply chain risk

The move could force other government contractors to stop using Anthropic's AI chatbot Claude.

Published on Mar. 6, 2026

The Pentagon has officially designated artificial intelligence company Anthropic as a supply chain risk, effective immediately. This unprecedented move could force other government contractors to stop using Anthropic's AI chatbot Claude. The Pentagon claims Anthropic is restricting the military's ability to use its technology for all lawful purposes, potentially putting warfighters at risk. Anthropic CEO Dario Amodei says the company cannot in good conscience comply with the Pentagon's demands to remove safeguards against mass surveillance and fully autonomous weapons. The decision has drawn criticism from lawmakers and former defense officials who say it is a misuse of a tool meant to address threats from foreign adversaries, not American innovators.

Why it matters

This designation could have major implications for Anthropic's business, as it may force other government contractors to cut ties with the company. It also sets a concerning precedent of the U.S. government using national security powers to pressure a domestic tech company over ethical concerns about its technology.

The details

The Pentagon said it has "officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately." This decision appears to shut down the opportunity for further negotiation after President Trump and Defense Secretary Hegseth accused Anthropic of endangering national security last week. Anthropic CEO Amodei said the company will challenge the decision in court, as it "does not believe this action is legally sound." The Pentagon claims it is simply seeking to use Anthropic's technology for "all lawful purposes," while Amodei says the company's narrow exceptions relate to high-level usage areas, not operational decision-making.

  • On Friday, March 6, 2026, the Pentagon officially designated Anthropic as a supply chain risk.

The players

Anthropic

An artificial intelligence company that sells the AI chatbot Claude to a variety of businesses and government agencies.

Dario Amodei

The CEO of Anthropic.

U.S. Department of Defense

The federal agency that has designated Anthropic as a supply chain risk.

Sean Parnell

The Assistant Secretary of War, who said the Pentagon is simply asking to use Anthropic's technology for all lawful purposes.

Kirsten Gillibrand

A U.S. Senator and member of the Senate Armed Services Committee and Senate Intelligence Committee who criticized the Pentagon's decision as a "dangerous misuse" of a tool meant to address threats from foreign adversaries.


What they’re saying

“We do not believe this action is legally sound, and we see no choice but to challenge it in court.”

— Dario Amodei, CEO, Anthropic (kcra.com)

“This has been about one fundamental principle: the military being able to use technology for all lawful purposes. The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk.”

— Sean Parnell, Assistant Secretary of War (kcra.com)

“This reckless action is shortsighted, self-destructive, and a gift to our adversaries.”

— Kirsten Gillibrand, U.S. Senator (kcra.com)

What’s next

Anthropic plans to challenge the Pentagon's decision in court, as the company believes the action is not legally sound.

The takeaway

The Pentagon's unprecedented move to designate Anthropic as a supply chain risk highlights the growing tensions between the government's desire to use advanced AI technology and the ethical concerns raised by companies like Anthropic. This decision could have far-reaching implications for the AI industry and the relationship between the tech sector and the military.