Anthropic's Claude AI App Faces Technical Issues Amid Pentagon Clash

The popular AI chatbot experiences 'elevated errors' as tensions rise over its use by the military

Apr. 10, 2026 at 4:41pm

[Image: a glowing 3D illustration of a circuit board with neon cyan and magenta lights, representing the AI infrastructure behind the Claude app.] The technical complexities of AI development collide with ethical concerns and government oversight, as the Claude app faces performance issues amid a clash with the Pentagon. (Washington Today)

Anthropic's Claude AI app, a top free app on the Apple App Store, has experienced 'elevated errors' in its latest Opus 4.6 model, according to the company's status updates. The outage comes amid a heated clash between Anthropic and the Pentagon over the military's use of Claude, with the government ordering a halt to Anthropic's services across all agencies. The rivalry intensified when OpenAI, a competing AI firm, signed a deal with the Department of Defense shortly after the government's decision.

Why it matters

The issues with the Claude AI app and the ongoing tensions with the Pentagon highlight the complex ethical and regulatory challenges surrounding the development and deployment of advanced AI technologies. As AI continues to advance, the debate over its responsible use and the role of government oversight becomes increasingly relevant, with implications for national security, innovation, and the public good.

The details

Anthropic, the company behind the Claude AI app, signed a $200 million contract with the Pentagon last July. Tensions arose when Anthropic asked the government to guarantee that its AI models would not be used for fully autonomous weapons or mass domestic surveillance. The Pentagon's response was firm, demanding that the military be allowed to use the platform for any lawful purpose. The standoff came to a head on Friday, when President Donald Trump ordered a halt to the use of Anthropic's technology across all government agencies. Defense Secretary Pete Hegseth followed suit, labeling Anthropic a 'supply-chain risk to national security'.

  • On July 1, 2025, Anthropic signed a $200 million contract with the Pentagon.
  • In late July 2025, tensions arose between Anthropic and the Pentagon over the use of the Claude AI models.
  • On April 10, 2026, President Donald Trump ordered a halt to the use of Anthropic's technology across all government agencies.
  • On April 10, 2026, Defense Secretary Pete Hegseth labeled Anthropic a 'supply-chain risk to national security'.

The players

Anthropic

The company behind the Claude AI app, which signed a $200 million contract with the Pentagon in July 2025.

Claude AI

A popular AI chatbot app that has experienced 'elevated errors' in its latest Opus 4.6 model.

Pentagon

The U.S. Department of Defense, which clashed with Anthropic over the use of the Claude AI models.

Donald Trump

The President of the United States, who ordered a cease of Anthropic's technology use across all government agencies.

Pete Hegseth

The U.S. Defense Secretary, who labeled Anthropic a 'supply-chain risk to national security'.


What they’re saying

“We must ensure our AI models are not used for fully autonomous weapons or mass domestic surveillance.”

— Anthropic, Company

“The military must be allowed to use the platform for any lawful purpose.”

— Pentagon, Department of Defense

What’s next

A federal judge is expected to decide on Tuesday whether Anthropic may continue providing its AI technology to the government.

The takeaway

The dispute underscores the ongoing debate over the responsible development and use of AI technology, with implications for national security, innovation, and the public good. As AI capabilities advance, the need for clear ethical guidelines and effective government oversight grows more pressing.