Hegseth to Meet with Anthropic CEO as AI Principles Clash with Military Contracting

The meeting highlights the debate over AI's role in national security and concerns about how the technology could be used in high-stakes situations.

Published on Feb. 24, 2026

Defense Secretary Pete Hegseth plans to meet with Anthropic CEO Dario Amodei, as the AI company remains the only one of its peers that has not supplied its technology to a new U.S. military internal network. Anthropic has raised ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and AI-assisted mass surveillance. The meeting underscores the tension between Anthropic's safety-minded approach and the Pentagon's push to adopt AI systems without "ideological constraints" that limit military applications.

Why it matters

The meeting between Hegseth and Amodei highlights the ongoing debate over AI's role in national security and the risks of how the technology could be used, particularly in high-stakes situations involving lethal force, sensitive information, or government surveillance. It also comes as the Pentagon seeks to root out what it calls a "woke culture" in the armed forces, putting Anthropic's ethical stance at odds with the military's priorities.

The details

Anthropic, the maker of the chatbot Claude, is nonetheless the only AI company approved for classified military networks, where it works with partners such as Palantir. The other major AI companies, including Google, OpenAI, and Elon Musk's xAI, are operating only in unclassified environments for now. Hegseth has criticized AI models "that won't allow you to fight wars" and has highlighted xAI and Google as preferred partners, while Anthropic has maintained its stance on the ethical use of AI.

  • The meeting between Hegseth and Amodei is scheduled for Tuesday, February 24, 2026.

The players

Pete Hegseth

The current U.S. Defense Secretary, who has vowed to root out what he calls a "woke culture" in the armed forces.

Dario Amodei

The CEO of Anthropic, an AI company that has raised ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and AI-assisted mass surveillance.

Anthropic

An AI company that has positioned itself as the most responsible and safety-minded of the leading AI companies, and the only one of its peers that has not supplied its technology to the new U.S. military internal network.

xAI

An artificial intelligence company founded by Elon Musk that has been highlighted by Hegseth as a preferred partner for the Pentagon's AI initiatives.

Google

A technology company whose AI models have also been highlighted by Hegseth as preferred for the Pentagon's AI initiatives.


What they’re saying

“A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow.”

— Dario Amodei, CEO, Anthropic (Fortune)

“Anthropic's peers, including Meta, Google and xAI, have been willing to comply with the department's policy on using models for all lawful applications. So the company's bargaining power here is limited, and it risks losing influence in the department's push to adopt AI.”

— Owen Daniels, Associate Director of Analysis and Fellow, Georgetown University's Center for Security and Emerging Technology (Fortune)

What’s next

Hegseth and Amodei are scheduled to meet on Tuesday, February 24, 2026.

The takeaway

This meeting highlights the ongoing tension between Anthropic's ethical stance on AI and the Pentagon's push to adopt the technology without "ideological constraints," raising concerns about the potential misuse of AI in high-stakes national security contexts.