U.S. Judge Temporarily Blocks Pentagon's Blacklisting of Anthropic
Anthropic's lawsuit alleges the designation violated its free speech and due process rights
Mar. 27, 2026 at 1:34am by Ben Kaplan
A U.S. District Judge has temporarily blocked the Pentagon's blacklisting of Anthropic, the AI company behind the chatbot Claude. Anthropic sued the government, alleging that its designation as a national security supply-chain risk was unlawful and violated its constitutional rights. The case is still pending, but the ruling is a temporary win for Anthropic as it fights a designation that could cost the company billions in lost business.
Why it matters
This case highlights the growing tensions between tech companies and the military over the use of AI, particularly in autonomous weapons and surveillance. Anthropic's stance on AI safety has put it at odds with the Pentagon, which wants unfettered access to the company's technology. The outcome could set an important precedent for how the government regulates and partners with AI firms in the future.
The details
U.S. District Judge Rita Lin, an appointee of former President Joe Biden, temporarily blocked the Pentagon's blacklisting of Anthropic after the company sued, alleging the designation violated its free speech and due process rights. Anthropic refused to allow the military to use its AI chatbot Claude for surveillance or autonomous weapons, prompting Defense Secretary Pete Hegseth to label Anthropic a national security supply-chain risk. This designation would have blocked Anthropic from certain military contracts. Anthropic says the move could cost it billions in lost business and reputational harm. The government argued Anthropic's refusal to lift the restrictions could cause uncertainty and risk disabling military systems.
- On March 9, 2026, Anthropic filed a lawsuit in California federal court over the Pentagon's blacklisting.
- On March 26, 2026, U.S. District Judge Rita Lin temporarily blocked the Pentagon's blacklisting of Anthropic.
The players
Anthropic
An AI company that created the chatbot Claude. Anthropic is fighting the Pentagon's designation of the company as a national security supply-chain risk.
Pete Hegseth
The U.S. Defense Secretary who designated Anthropic as a national security supply-chain risk, blocking the company from certain military contracts.
Rita Lin
A U.S. District Judge appointed by former President Joe Biden who temporarily blocked the Pentagon's blacklisting of Anthropic.
What’s next
The judge's ruling is not final, and the case is still pending. Anthropic has a second lawsuit pending in Washington, D.C., over a separate Pentagon supply-chain risk designation that could lead to its exclusion from civilian government contracts.