Judge blocks ban on Anthropic's AI, calling it illegal 'retaliation'
Anthropic wins court order pausing Trump administration's plan to sever ties with the AI company over national security concerns
Mar. 27, 2026 at 2:09pm by Ben Kaplan
Anthropic PBC, the maker of the Claude chatbot, won a court order blocking a Trump administration ban on government use of the company's artificial intelligence technology. U.S. District Judge Rita F. Lin issued a preliminary injunction, pausing the administration's plan to sever all ties with Anthropic while a legal fight plays out in San Francisco federal court. The judge said the ban appeared to be "classic illegal First Amendment retaliation" and not directed at legitimate national security interests.
Why it matters
This case highlights the growing tensions between the government and private AI companies over the use and regulation of advanced technologies. Anthropic argued the ban could cost it billions in lost revenue, while the government cited national security concerns in trying to sever ties with the company. The ruling could set an important precedent for how the government can restrict the use of AI by federal agencies.
The details
Anthropic sued earlier this month to block a declaration by the Defense Department that the company posed a threat to the U.S. supply chain. The startup wanted assurances its AI wouldn't be used for mass surveillance of Americans or autonomous weapons deployment, while the government cited national security in arguing it couldn't accept any restrictions. In her ruling, Judge Lin said the U.S. Justice Department had no "legitimate basis" to determine that Anthropic's stance on AI restrictions could lead it to "become a saboteur." An attorney for Anthropic pointed out that the Pentagon can review any AI model before deploying it, and that Anthropic has no way to stop a model from working, change how it works, turn it off, or see how it's being used by the military.
- On March 27, 2026, U.S. District Judge Rita F. Lin issued a preliminary injunction blocking the Trump administration's ban on government use of Anthropic's AI technology.
- The judge put the order on hold for seven days to give the government a chance to appeal.
The players
Anthropic PBC
An artificial intelligence company that developed the Claude chatbot. Anthropic sued to block a government ban on federal use of its AI technology, arguing the move could cost it billions in lost revenue.
U.S. District Judge Rita F. Lin
The judge who issued the preliminary injunction blocking the Trump administration's ban on government use of Anthropic's AI. She said the ban appeared to be "classic illegal First Amendment retaliation" and not directed at legitimate national security interests.
Trump administration
The presidential administration that sought to ban government use of Anthropic's AI technology, citing national security concerns.
Defense Department
The government agency that declared Anthropic posed a threat to the U.S. supply chain, leading to the attempted ban on the company's AI.
Emil Michael
The under secretary of defense for research and engineering, who called the judge's decision "a disgrace" and claimed "there are dozens of factual errors" in the judgment.
What they’re saying
“If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, these measures appear designed to punish Anthropic.”
— U.S. District Judge Rita F. Lin
“While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”
— Anthropic
“There are dozens of factual errors in the judgment.”
— Emil Michael, Under Secretary of Defense for Research and Engineering
What’s next
The government has a seven-day window to appeal the judge's preliminary injunction blocking the ban on Anthropic's AI technology.
The takeaway
The dispute underscores the contentious relationship between the government and private AI companies as both sides grapple with national security, free speech, and the appropriate use of advanced technologies. If upheld, the ruling could shape how far the government may go in restricting federal agencies' use of AI.