Anthropic Accidentally Exposes Unreleased AI Model and Internal Data
Leaked information included details on an upcoming 'step change' in AI capabilities.
Mar. 28, 2026 at 2:04am
AI company Anthropic unintentionally exposed details of an unreleased AI model, an exclusive CEO event, and other internal information through a security lapse in its content management system. The leaked data, which included images and PDFs, was publicly accessible before the company secured it.
Why it matters
This incident highlights the risks and challenges tech companies face in managing sensitive information and protecting their intellectual property, especially as they increasingly rely on AI-powered tools for internal processes like software development.
The details
The leaked information was made available through Anthropic's content management system, which left nearly 3,000 unpublished assets publicly accessible. The data included details about an upcoming 'step change' in AI capabilities, with improvements in reasoning, coding, and cybersecurity. Anthropic attributed the issue to 'human error in the CMS configuration' and said the leaked materials were 'early drafts of content considered for publication' that did not involve core infrastructure, AI systems, customer data, or security architecture.
- On March 26, 2026, Fortune informed Anthropic of the security lapse.
- Anthropic took steps to secure the information after being notified.
The players
Anthropic
An AI company that develops the Claude AI models and has automated a significant portion of its internal software development using AI-powered coding agents.
Alexandre Pauwels
A cybersecurity researcher at the University of Cambridge who reviewed the leaked Anthropic material.
What they’re saying
“An issue with one of our external CMS tools led to draft content being accessible.”
— Anthropic spokesperson
What’s next
Anthropic will likely conduct a thorough review of its content management and security practices to prevent similar incidents in the future.
The takeaway
This case highlights the importance of robust data management and security protocols, especially for tech companies working on cutting-edge AI technologies. It serves as a cautionary tale about the risks of inadvertently exposing sensitive information, even for industry leaders like Anthropic.