Microsoft Seeks Court Action to Protect $5B Anthropic Investment

Tech giant files motion to block Pentagon's ban on AI firm Anthropic, citing risks to military contracts and US tech leadership.

Mar. 11, 2026 at 12:52 a.m.

Microsoft has asked a U.S. court to block the Pentagon's decision to temporarily classify the artificial intelligence company Anthropic as a supply-chain risk. Microsoft, which recently invested up to $5 billion in Anthropic, argues that such a move could disrupt the military's access to advanced AI systems and jeopardize its investment. The tech giant has filed a motion seeking a temporary restraining order to prevent the Pentagon's ban from being applied to Anthropic's existing defense contracts.

Why it matters

This case highlights the growing tensions between the government's national security concerns and the tech industry's desire to freely develop and deploy advanced AI technologies. The outcome could have significant implications for the Pentagon's access to cutting-edge AI tools, as well as the broader competitiveness of the U.S. in the global AI race.

The details

Microsoft claims the Pentagon's designation of Anthropic as a supply-chain risk is unprecedented and could force companies working with the Defense Department to rapidly transition away from Anthropic's AI models, potentially disrupting military operations. Anthropic has also sued the government over the decision, alleging it is unlawful and could severely damage its business. The dispute centers on Anthropic's AI models, called Claude, and the company's insistence on limiting their use in autonomous weapons or mass surveillance.

  • Last week, the Pentagon formally barred Anthropic's technology from defense contracts and designated the company a supply-chain risk.
  • In November 2025, Microsoft announced it would invest up to $5 billion in Anthropic, one of the fastest-growing AI firms in the U.S.

The players

Microsoft

A multinational technology company and one of the largest investors in the artificial intelligence industry, having recently committed up to $5 billion to Anthropic.

Anthropic

An American artificial intelligence company that has developed advanced AI models, including one called Claude, which has become a point of contention with the Pentagon.

U.S. Department of Defense

The federal agency responsible for national defense, which has designated Anthropic as a supply-chain risk, barring the use of its technology in defense contracts.


What they’re saying

“This may potentially disrupt US warfighters at a crucial moment.”

— Microsoft

“If the Pentagon was dissatisfied with its contract with Anthropic, it could simply have ended the agreement and chosen another provider instead of labeling the company a supply-chain threat.”

— Researchers from OpenAI and Google DeepMind

What’s next

The U.S. District Court for the Northern District of California will rule on Microsoft's request for a temporary restraining order to prevent the Pentagon's ban from being applied to Anthropic's existing defense contracts.

The takeaway

The court's ruling will test how far the government can go in restricting a major AI supplier on national security grounds, and how much leverage investors like Microsoft hold when defense policy collides with billions of dollars in AI commitments. Beyond this dispute, the decision could shape the Pentagon's access to cutting-edge AI tools and the broader competitiveness of the U.S. in the global AI race.