Anthropic Faces Potential $5 Billion Loss in Pentagon Dispute

AI researchers from rival companies rally behind Anthropic as legal battle with the government escalates.

Published on Mar. 10, 2026

Anthropic, a leading AI company, is embroiled in a legal dispute with the Pentagon that could cost it up to $5 billion in lost revenue. The dispute stems from a breakdown in negotiations over how Anthropic's AI models could be used, particularly for mass domestic surveillance and autonomous lethal weapons. The Pentagon has labeled Anthropic a "supply-chain risk," a designation that could discourage other companies from working with the startup. Anthropic has sued the government, arguing that the decision violates its First Amendment rights and amounts to unlawful retaliation. More than 30 researchers from rival companies, including OpenAI and Google, have filed a joint amicus brief supporting Anthropic.

Why it matters

This case highlights the growing tensions between the tech industry and the government over the use of AI technology, particularly in the context of national security and defense. The outcome of this dispute could have significant implications for the broader US AI industry and its competitiveness on the global stage.

The details

Anthropic executives have warned that the fallout from the Pentagon's decision is already impacting the company's finances. The company's chief financial officer, Krishna Rao, stated that hundreds of millions of dollars in expected revenue tied to Pentagon-related work are at risk this year. If the government succeeds in discouraging companies from working with Anthropic more broadly, the company could ultimately lose up to $5 billion in sales, which is roughly equivalent to its total revenue since commercializing its AI technology in 2023. Anthropic's chief commercial officer, Paul Smith, also noted that the pressure from the government is causing business partners to take steps that "reflect deep distrust and a growing fear of associating with Anthropic."

  • Anthropic commercialized its AI technology in 2023.
  • Last month, Defense Secretary Pete Hegseth said that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."

The players

Anthropic

A leading AI company that is embroiled in a legal dispute with the Pentagon over the use of its AI models.

Pentagon

The U.S. Department of Defense, which has labeled Anthropic a "supply-chain risk" and barred its contractors, suppliers, and partners from doing business with the company.

OpenAI

A rival AI company whose employees, including CEO Sam Altman, have rallied in support of Anthropic.

Google DeepMind

A rival AI company whose chief scientist, Jeff Dean, has signed an amicus brief in support of Anthropic.

Amazon

A major cloud provider that has said it will continue offering Anthropic's Claude AI models to customers without ties to the Pentagon.


What they’re saying

“If allowed to proceed, this effort to punish one of the leading US AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence and beyond.”

— Employees of OpenAI and Google (Joint amicus brief)

“Enforcing the supply chain risk designation would be very bad for our industry and our country.”

— Sam Altman, CEO, OpenAI (Social media)

What’s next

Anthropic is seeking a temporary court order that would allow it to continue working with military contractors while the legal fight continues. The first hearing could take place in San Francisco as soon as Friday.

The takeaway

Whatever the courts decide, this case will test whether AI companies can set limits on how their technology is used while still competing for government business — and how far the Pentagon can go in punishing those that refuse its terms. With rivals like OpenAI and Google rallying behind Anthropic, the outcome could reshape the relationship between the US AI industry and one of its largest potential customers.