Anthropic Sues Federal Government Over Supply-Chain Risk Label

AI company alleges Pentagon designation is retaliation for refusing unlimited AI use in defense

Published on Mar. 9, 2026

Artificial intelligence company Anthropic has filed a lawsuit against the federal government, alleging that the Pentagon improperly labeled it a supply-chain risk in retaliation for the company's refusal to allow unlimited use of its AI model Claude, including for mass surveillance and for autonomous weapons operating without human approval.

Why it matters

This case highlights the growing tensions between tech companies and the government over the use of AI, particularly in defense and national security applications. Anthropic's lawsuit challenges the government's authority to designate companies as supply-chain risks and cancel their contracts as punishment for exercising their free speech rights.

The details

Anthropic signed a $200 million contract with the Pentagon in July 2025, but the company said its AI model could not be used for mass surveillance in the U.S. or for autonomous weapons without human approval. On February 27, 2026, the Pentagon gave Anthropic a deadline to comply with its demands. Before the deadline arrived, President Trump announced that no government workers could use Anthropic's services. The Pentagon then labeled Anthropic a supply-chain risk, blocking it from any government contracts.

  • In July 2025, Anthropic signed a $200 million contract with the Pentagon.
  • On February 27, 2026, the Pentagon gave Anthropic a 5 p.m. deadline to comply with its demands.
  • Before the February 27 deadline, President Trump announced that no government workers could use Anthropic's services.
  • On February 27, 2026, the Secretary of Defense labeled Anthropic a supply-chain risk.

The players

Anthropic

An artificial intelligence company that filed a lawsuit against the federal government over being labeled a supply-chain risk.

Department of Defense

The federal agency that labeled Anthropic a supply-chain risk, a designation that blocks the company from any government contracts.

Dario Amodei

The CEO of Anthropic, who said the company's AI model Claude could not be used for mass surveillance or for autonomous weapons without human approval.

Donald Trump

The U.S. President, who announced that no government workers could use Anthropic's services before the Pentagon's deadline had passed.

Pete Hegseth

The Secretary of Defense who labeled Anthropic a supply-chain risk.


What they’re saying

“Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners.”

— Anthropic spokesperson (CNN)

“These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.”

— Anthropic (Anthropic complaint)

What’s next

Anthropic's lawsuit will be heard in the U.S. District Court for the Northern District of California. The company also plans to file suit in the U.S. Court of Appeals for the D.C. Circuit.

The takeaway

Beyond the immediate contract dispute, Anthropic's lawsuit challenges the government's authority to retaliate against companies for exercising their free speech rights, raising questions about the limits of executive power and the judiciary's role in reviewing national security designations.