Anthropic Faces Backlash After Leaks Expose Internal Files
Cybersecurity lapses at the AI firm raise concerns about its safety protocols.
Apr. 1, 2026 at 2:49am by Ben Kaplan
Anthropic, the San Francisco-based artificial intelligence company known for its focus on AI safety, is facing a major reputational crisis after two separate security incidents led to the exposure of sensitive internal files and code related to its flagship language model, Claude.
Why it matters
As one of the leading AI research firms, Anthropic's missteps raise questions about the industry's ability to safeguard critical technologies and data. The leaks could undermine public trust in Anthropic's commitment to responsible AI development.
The details
The first incident involved the accidental release of Anthropic's Claude model code and internal files due to a packaging error. The second, more serious incident exposed additional confidential information. While Anthropic maintains that neither leak was the result of a malicious attack, the incidents have nonetheless raised concerns about the company's security practices.
- The first security incident occurred on April 15, 2025.
- The second, more serious breach happened on April 25, 2025.
The players
Anthropic
A San Francisco-based artificial intelligence company known for its focus on AI safety and development of the Claude language model.
What they’re saying
“We take security and privacy extremely seriously at Anthropic. These incidents were the result of human error, not a malicious attack, and we are taking immediate steps to strengthen our protocols and regain the trust of the public and our partners.”
— Dario Amodei, Chief Executive Officer, Anthropic
What’s next
Anthropic has promised a full investigation into the security lapses and plans to release a detailed report on its findings and new security measures within the next two weeks.
The takeaway
The Anthropic security incidents underscore the critical importance of robust cybersecurity practices in the AI industry, where the stakes are high and public trust is paramount. The leaks will likely prompt renewed scrutiny of AI firms' data protection protocols and their ability to safeguard sensitive technologies.