Pentagon Deems Anthropic a 'Supply Chain Risk'

The decision could force other government contractors to stop using the AI chatbot Claude.

Mar. 6, 2026 at 1:22pm

The Trump administration has officially designated the artificial intelligence company Anthropic as a supply chain risk, a move that could force other government contractors to stop using Anthropic's AI chatbot Claude. The Pentagon said it has informed Anthropic that the company and its products are deemed a supply chain risk, effective immediately. The decision appears to shut down the opportunity for further negotiation with Anthropic, after the company refused to back down over concerns that its products could be used for mass surveillance of Americans or autonomous weapons.

Why it matters

The Pentagon's decision to apply a rule designed to address supply threats posed by foreign adversaries to a domestic American company has been met with broad criticism. Some argue this is a "dangerous misuse" of the tool and a "profound departure" from its intended purpose, which is to protect the U.S. from infiltration by foreign companies beholden to adversaries like China or Russia, not American innovators.

The details

The Trump administration is following through on its threat to designate Anthropic as a supply chain risk, an unprecedented move that could force other government contractors to stop using the AI chatbot Claude. The Pentagon said it has "officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately." The decision appears to shut down the opportunity for further negotiation with Anthropic, after the company's CEO, Dario Amodei, refused to back down over concerns that the company's products could be used for mass surveillance or autonomous weapons. Amodei said Anthropic will challenge the decision in court, as the company does not believe the action is legally sound.

  • On March 6, 2026, the Pentagon officially designated Anthropic as a "supply chain risk."

The players

Anthropic

An artificial intelligence company that sells the AI chatbot Claude to a variety of businesses and government agencies.

Dario Amodei

The CEO of Anthropic.

Donald Trump

The President of the United States.

Pete Hegseth

The Defense Secretary.

Kirsten Gillibrand

A U.S. Senator from New York and a member of the Senate Armed Services Committee and Senate Intelligence Committee.


What they’re saying

“We do not believe this action is legally sound, and we see no choice but to challenge it in court.”

— Dario Amodei, CEO, Anthropic

“This reckless action is shortsighted, self-destructive, and a gift to our adversaries.”

— Kirsten Gillibrand, U.S. Senator

“The use of this authority against a domestic American company is a profound departure from its intended purpose and sets a dangerous precedent.”

— Former defense and national security officials

What’s next

Anthropic plans to challenge the Pentagon's decision in court, arguing that the action is not legally sound.

The takeaway

The Pentagon's designation of Anthropic as a "supply chain risk" has sparked widespread criticism, with many arguing it is a misuse of a tool meant to address threats from foreign adversaries, not American companies. The decision could have far-reaching consequences for the U.S. AI sector and the military's ability to access the best technology.