Pentagon Deems Anthropic a Supply Chain Risk, Forcing Contractors to Cut Ties

The Trump administration's unprecedented move could impact the AI company's partnerships across the government.

Mar. 6, 2026 at 12:52am by Ben Kaplan

The Pentagon has officially designated artificial intelligence company Anthropic as a supply chain risk, effective immediately. This decision could force other government contractors to stop using Anthropic's AI chatbot Claude. The move comes after Anthropic CEO Dario Amodei refused to back down from his concerns that the company's products could be used for mass surveillance of Americans or in autonomous weapons, leading to a standoff with the Trump administration.

Why it matters

This unprecedented action against a domestic American company sets a concerning precedent, as the supply chain risk designation is typically used to address threats posed by foreign adversaries, not U.S. firms. Critics argue the move could hurt the U.S. AI sector and the military's ability to access the best technology, while also deepening the rivalry between Anthropic and OpenAI, which has struck a deal to replace Claude with its own ChatGPT in classified military environments.

The details

The Pentagon said it has “officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately.” This decision appears to shut down the opportunity for further negotiation, nearly a week after President Trump and Defense Secretary Hegseth accused Anthropic of endangering national security. Some military contractors, like Lockheed Martin, have already started cutting ties with Anthropic as a result of the designation.

  • On March 6, 2026, the Pentagon officially designated Anthropic as a supply chain risk.
  • Last Friday, on the eve of the Iran war, President Trump and Defense Secretary Hegseth threatened punitive action against Anthropic.

The players

Anthropic

An artificial intelligence company based in San Francisco that sells the AI chatbot Claude to a variety of businesses and government agencies.

Dario Amodei

The CEO of Anthropic, who refused to back down from his concerns that the company's products could be used for mass surveillance of Americans or in autonomous weapons.

Donald Trump

The President of the United States, who accused Anthropic of endangering national security.

Pete Hegseth

The Defense Secretary who, along with President Trump, accused Anthropic of endangering national security.

Lockheed Martin

A major military contractor that said it will “follow the President's and the Department of War's direction” and look to other providers of large language models.


What they’re saying

“This reckless action is shortsighted, self-destructive, and a gift to our adversaries.”

— Kirsten Gillibrand, U.S. Senator and member of the Senate Armed Services Committee and Senate Intelligence Committee

“The use of this authority against a domestic American company is a profound departure from its intended purpose and sets a dangerous precedent.”

— Michael Hayden, Former CIA Director

What’s next

It's unclear whether the Pentagon's designation aims to block Anthropic's use by all federal government contractors or only those that partner with the military. The dispute has also further deepened Anthropic's rivalry with OpenAI, which has announced a deal to replace Claude with ChatGPT in classified military environments.

The takeaway

This unprecedented action against a domestic AI company raises serious concerns about the government's use of supply chain risk designations, which are typically reserved for foreign adversaries. The decision could have far-reaching consequences for the U.S. AI sector and the military's access to the best technology, while also setting a dangerous precedent that could embolden other countries to take similar actions against American companies.