Anthropic's Claude AI Used by US Military in Venezuela Raid
Safeguards-focused AI company's technology played direct role in deadly operation, reports say
Published on Feb. 14, 2026
According to reports from Axios and The Wall Street Journal, the US military actively used Anthropic's Claude AI model during last month's operation to capture Venezuelan President Nicolas Maduro. The precise role of the AI remains unclear, though the military has previously used AI models to analyze intelligence in real time. Anthropic has publicly emphasized its commitment to AI safeguards, but the revelation lands at an awkward moment as the company negotiates with the Pentagon over loosening restrictions on deploying AI for military applications.
Why it matters
This development raises concerns about the use of advanced AI in military operations, particularly by a company that has positioned itself around AI safety and ethics. It also highlights the tension between Anthropic's public stance on AI safeguards and the real-world deployment of its technology for military purposes.
The details
According to the reports, Claude was used during the active operation in Venezuela, not merely in preparatory phases. No Americans lost their lives in the January 3 raid, but dozens of Venezuelan and Cuban soldiers and security personnel were killed. Anthropic's usage policies explicitly prohibit its technology from being used to "facilitate violence, develop weapons or conduct surveillance." However, the company is reportedly negotiating with the Pentagon over whether to loosen those restrictions.
- The US military operation in Venezuela took place on January 3, 2026.
- Anthropic CEO Dario Amodei has warned repeatedly of the existential dangers posed by unconstrained use of artificial intelligence.
- On Monday, the head of Anthropic's Safeguards Research Team, Mrinank Sharma, abruptly resigned with a cryptic warning that "the world is in peril."
The players
Anthropic
An AI research company based in San Francisco that has publicly emphasized its commitment to AI safety and ethics.
Nicolas Maduro
The President of Venezuela who was the target of the US military operation.
Dario Amodei
The CEO of Anthropic who has warned of the dangers of unconstrained use of artificial intelligence.
Mrinank Sharma
The former head of Anthropic's Safeguards Research Team who abruptly resigned with a warning that "the world is in peril."
Pete Hegseth
The US Defense Secretary who has vowed not to use AI models that "won't allow you to fight wars."
What they’re saying
“We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise. Any use of Claude - whether in the private sector or across government - is required to comply with our Usage Policies.”
— Anthropic spokesperson (Axios)
“the world is in peril.”
— Mrinank Sharma, Former head of Anthropic's Safeguards Research Team (Axios)
“won't allow you to fight wars.”
— Pete Hegseth, US Defense Secretary (Axios)
What’s next
Anthropic is reportedly negotiating with the Pentagon over whether to loosen restrictions on deploying AI for autonomous weapons targeting and domestic surveillance. The standoff has stalled a contract worth up to $200 million.
The takeaway
This case highlights the complex and often contradictory relationship between advanced AI technology and its potential military applications, even for companies that have positioned themselves as focused on AI safety and ethics. It raises important questions about the oversight and regulation of AI systems, especially when they are deployed in sensitive national security contexts.