Anthropic Denies Ability to Sabotage Military AI Tools

Company says it lacks access to disable or alter its generative AI model Claude used by the Pentagon

Mar. 21, 2026 at 12:03am by Ben Kaplan

Anthropic, the AI developer behind the generative model Claude, has denied allegations from the U.S. Department of Defense that it could manipulate or sabotage the AI tools used by the military during wartime. In a court filing, Anthropic executives stated that the company does not have the ability to disable, alter the functionality of, or otherwise influence Claude once it is in use by the military. The Pentagon has labeled Anthropic a "supply-chain risk" and banned federal agencies from using its software, prompting the company to file lawsuits challenging the constitutionality of the ban.

Why it matters

The dispute between Anthropic and the Department of Defense highlights the growing tensions around the use of AI technology for national security purposes. The Pentagon's concerns about potential sabotage or disruption of critical military systems by Anthropic raise questions about the level of control and oversight the government should have over AI tools developed by private companies.

The details

In a court filing, Anthropic's head of public sector, Thiyagu Ramasamy, stated that the company "does not maintain any back door or remote 'kill switch'" and that its personnel cannot log into Department of Defense systems to modify or disable the models during operations. Ramasamy also said Anthropic can only provide updates to Claude with the approval of the government and its cloud provider, Amazon Web Services. Anthropic has filed lawsuits challenging the Pentagon's ban and is seeking an emergency order to reverse it, but some customers have already begun canceling deals with the company.

  • On March 4, 2026, Anthropic proposed a contract to the Department of Defense that would guarantee the company does not have the right to control or veto lawful operational decision-making.
  • On March 21, 2026, Anthropic submitted a court filing denying that it has the ability to sabotage or manipulate Claude as used by the military.
  • A hearing in one of Anthropic's lawsuits is scheduled for March 24, 2026, in federal district court in San Francisco.

The players

Anthropic

An AI developer and the creator of the generative AI model Claude, which is being used by the U.S. Department of Defense.

Thiyagu Ramasamy

Anthropic's head of public sector, who filed a court declaration denying that the company can sabotage or manipulate the AI tools it provides to the military.

Pete Hegseth

The U.S. Defense Secretary who labeled Anthropic a "supply-chain risk," leading to a ban on federal agencies using the company's software.

Sarah Heck

Anthropic's head of policy, who filed a court statement claiming the company was willing to guarantee it does not have the right to control or veto lawful Department of Defense operational decision-making.

Amazon Web Services

The cloud provider that Anthropic says must approve any updates to the Claude model used by the military.


What they’re saying

“Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations.”

— Thiyagu Ramasamy, Head of Public Sector, Anthropic

“Anthropic does not maintain any back door or remote 'kill switch'. Anthropic personnel cannot, for example, log into a DoW system to modify or disable the models during an operation; the technology simply does not function that way.”

— Thiyagu Ramasamy, Head of Public Sector, Anthropic

“For the avoidance of doubt, [Anthropic] understands that this license does not grant or confer any right to control or veto lawful Department of War operational decision‑making.”

— Sarah Heck, Head of Policy, Anthropic

What’s next

A hearing in one of Anthropic's lawsuits challenging the Pentagon's ban is scheduled for March 24, 2026, in federal district court in San Francisco. The judge could decide on a temporary reversal of the ban soon after.

The takeaway

The dispute between Anthropic and the Department of Defense highlights the complex issues surrounding the use of AI technology for national security purposes. While the Pentagon is concerned about potential sabotage or disruption, Anthropic maintains it lacks the ability to manipulate its AI tools once they are in use by the military, raising questions about the appropriate level of control and oversight over these powerful technologies.