U.S. Judge Temporarily Blocks Pentagon's Blacklisting of Anthropic

Anthropic's lawsuit alleges the designation violated its free speech and due process rights

Mar. 27, 2026 at 1:34am by Ben Kaplan

A U.S. District Judge has temporarily blocked the Pentagon's blacklisting of Anthropic, the AI company behind the chatbot Claude. Anthropic sued the government, alleging that its designation as a national security supply-chain risk was unlawful and violated its constitutional rights. The case is still pending, but the ruling is a temporary win for Anthropic as it fights a designation that could cost the company billions in lost business.

Why it matters

This case highlights the growing tensions between tech companies and the military over the use of AI, particularly in autonomous weapons and surveillance. Anthropic's stance on AI safety has put it at odds with the Pentagon, which wants unfettered access to the company's technology. The outcome could set an important precedent for how the government regulates and partners with AI firms in the future.

The details

U.S. District Judge Rita Lin, an appointee of former President Joe Biden, temporarily blocked the Pentagon's blacklisting of Anthropic after the company sued, alleging the designation violated its free speech and due process rights. Anthropic refused to allow the military to use its AI chatbot Claude for surveillance or autonomous weapons, prompting Defense Secretary Pete Hegseth to label Anthropic a national security supply-chain risk. This designation would have blocked Anthropic from certain military contracts. Anthropic says the move could cost it billions in lost business and reputational harm. The government argued Anthropic's refusal to lift the restrictions could cause uncertainty and risk disabling military systems.

  • On March 9, 2026, Anthropic filed a lawsuit in California federal court over the Pentagon's blacklisting.
  • On March 26, 2026, U.S. District Judge Rita Lin temporarily blocked the Pentagon's blacklisting of Anthropic.

The players

Anthropic

An AI company that created the chatbot Claude. Anthropic is fighting the Pentagon's designation of the company as a national security supply-chain risk.

Pete Hegseth

The U.S. Defense Secretary who designated Anthropic as a national security supply-chain risk, blocking the company from certain military contracts.

Rita Lin

A U.S. District Judge appointed by former President Joe Biden who temporarily blocked the Pentagon's blacklisting of Anthropic.


What’s next

The judge's ruling is not final, and the case is still pending. Anthropic has a second lawsuit pending in Washington, D.C., over a separate Pentagon supply-chain risk designation that could lead to its exclusion from civilian government contracts.

The takeaway

The dispute underscores the widening rift between tech companies and the military over the use of AI, particularly in sensitive applications like autonomous weapons and surveillance. However the case is ultimately decided, it could set an important precedent for how the government regulates, and partners with, AI firms.