Anthropic Suffers Setback in Bid to Lift 'Supply Chain Risk' Label

U.S. Appeals Court rules against AI company in dispute over military use of its technology

Apr. 9, 2026 at 9:56pm

As the military continues to deploy Anthropic's AI technology, the company's public stance against certain uses has put it at odds with the Pentagon.

Artificial intelligence company Anthropic has suffered a setback in its legal battle with the U.S. government over the military's use of its Claude AI model. A U.S. Appeals Court panel denied Anthropic's request to lift the 'supply chain risk' label imposed by the Pentagon, ruling that the balance of harms favored the government, which argued it should not be forced to keep dealing with an unwanted vendor of critical AI services during an ongoing military conflict.

Why it matters

This case highlights the growing tensions between tech companies and the government over the use of AI technology, particularly in military applications. Anthropic's public stance against allowing its AI to be used for autonomous weapons or surveillance has put it at odds with the Pentagon, leading to the 'supply chain risk' designation that now bars the company from new contracts and Pentagon systems.

The details

The U.S. Appeals Court panel ruled that 'the equitable balance here cuts in favor of the government,' despite acknowledging that the 'supply chain risk' label will continue to exclude Anthropic from new contracts and Pentagon systems. The court said granting a stay would 'force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict.'

  • The feud between Anthropic and the Trump administration publicly escalated in February 2026.
  • Oral arguments in the case are set for May 19, 2026.

The players

Anthropic

An artificial intelligence company now fighting the Pentagon both over the 'supply chain risk' designation and over the military's use of its technology in an ongoing conflict.

U.S. Government

The U.S. government, specifically the Pentagon, has designated Anthropic as a 'supply chain risk' and is continuing to deploy the company's Claude AI model in military operations.

U.S. District Judge Rita F. Lin

The judge who previously said the Pentagon's ban on Anthropic 'looked like an attempt to cripple' the company.

Dario Amodei

The CEO of Anthropic, who announced that he would not allow the company's Claude AI model to be used for autonomous weapons or to surveil American citizens.


What’s next

Oral arguments in the case are set for May 19, 2026, when the appeals court will take up Anthropic's appeal in full.

The takeaway

For now, the 'supply chain risk' designation stands: Anthropic remains shut out of new contracts and Pentagon systems even as the military continues to deploy its Claude model, and the company's next opportunity to reverse the label comes at oral arguments in May.