Pentagon Gives Anthropic Ultimatum Over Military AI Use
Defense Secretary demands unrestricted access to Anthropic's AI technology, raising concerns over AI safety and ethics
Feb. 25, 2026 at 11:33pm
The Pentagon has given the AI company Anthropic an ultimatum: allow the U.S. military unrestricted use of its AI technology, known as Claude, or face a ban from all government contracts. At the heart of the issue is a dispute over who controls how artificial intelligence models are used. Anthropic seeks to impose certain guardrails to prevent misuse, while the Pentagon wants unfettered access to the technology for national security purposes.
Why it matters
This standoff highlights the growing tension between the military's desire to rapidly adopt and utilize powerful AI capabilities, and AI companies' concerns about the potential for misuse and unintended consequences. It raises questions about the appropriate role of the private sector in shaping the development and deployment of transformative technologies like AI, especially when it comes to national security applications.
The details
The Pentagon awarded Anthropic a $200 million contract last year to develop AI capabilities for advancing U.S. national security. Anthropic is currently the only AI company to have its model deployed on the Pentagon's classified networks, through a partnership with data analytics firm Palantir. However, Anthropic has repeatedly asked the Pentagon to agree to certain guardrails, such as restricting the use of its AI technology, Claude, for mass surveillance of Americans or final targeting decisions in military operations without human involvement. The Pentagon has expressed concerns that such restrictions could prevent the military from taking critical actions, such as responding to an ICBM launch.
- In July 2025, the Pentagon awarded Anthropic a $200 million contract to develop AI capabilities.
- In January 2026, the U.S. military used Anthropic's Claude technology during the operation to capture former Venezuelan President Nicolás Maduro.
- The Pentagon has set February 28, 2026, as the deadline for Anthropic to agree to unrestricted military use of its AI technology or face being blacklisted from government contracts.
The players
Anthropic
An AI company that developed the technology known as Claude, which the Pentagon has been using on its classified networks. Anthropic has sought to impose certain guardrails on the military's use of its technology to prevent misuse.
Pete Hegseth
The U.S. Defense Secretary, who has given Anthropic an ultimatum to allow unrestricted military use of its AI technology or face being banned from government contracts.
Dario Amodei
The CEO of Anthropic, who has been vocal in expressing concerns about the potential dangers of AI and has advocated for sensible AI regulation to mitigate risks.
Emil Michael
The Undersecretary of Defense for Research and Engineering, who has expressed concerns that Anthropic's proposed guardrails could prevent the military from using the technology in urgent situations.
Palantir
A data analytics company that has partnered with Anthropic to deploy its AI technology on the Pentagon's classified networks.
What they’re saying
“We will not employ AI models that won't allow you to fight wars. We will judge AI models on this standard alone; factually accurate, mission relevant, without ideological constraints that limit lawful military applications. Department of War AI will not be woke. It will work for us. We're building war-ready weapons and systems, not chatbots for an Ivy League faculty lounge.”
— Pete Hegseth, Defense Secretary (CBS News)
“A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow.”
— Dario Amodei, CEO, Anthropic (CBS News)
What’s next
The Pentagon is considering invoking the Defense Production Act to compel Anthropic to comply with its demands on national security grounds. If an agreement cannot be reached, defense officials have discussed declaring Anthropic a "supply chain risk" to push the company out of government contracts.
The takeaway
The Pentagon's ultimatum forces a choice that the broader AI industry has so far avoided: whether companies can retain control over how their models are used once those models become embedded in national security operations. However the standoff resolves, it underscores the need for clear guidelines and oversight governing the deployment of transformative technologies like AI in military applications.