New AI Model Exposes Critical Software Flaws

Anthropic's Claude Mythos Preview raises alarms about global cybersecurity risks.

Apr. 8, 2026 at 8:05pm

A luminous visualization of the complex digital vulnerabilities that could threaten critical infrastructure if left unaddressed. (NYC Today)

A new AI model developed by Anthropic, called Claude Mythos Preview, can identify and exploit vulnerabilities in major operating systems, web browsers, and critical infrastructure such as power grids, hospitals, and banking systems. While Anthropic is limiting access to about 40 major firms so the issues can be addressed, experts warn that if the tool spreads beyond responsible players, even bored kids could hack into vital systems, posing a threat on par with nuclear weapons.

Why it matters

This AI model represents a significant turning point in the advancement of artificial intelligence, with profound geopolitical implications. The ability to uncover thousands of serious software vulnerabilities in core systems could be devastating if accessed by bad actors, making urgent US-China cooperation essential to address the risks.

The details

Anthropic's Claude Mythos Preview model not only writes software code better than any previous model; when directed by a user, it can also identify and exploit subtle vulnerabilities in every major operating system and web browser. The company has been consulting with the US government and is limiting access to about 40 major firms, including tech giants and financial institutions, in an effort to address the vulnerabilities before malicious actors can exploit them.

  • Anthropic's new model, Claude Mythos Preview, was recently unveiled.

The players

Anthropic

An artificial intelligence company that has developed the Claude Mythos Preview model, which can identify and exploit vulnerabilities in critical software and infrastructure.

Thomas L. Friedman

A New York Times columnist who has written an op-ed warning about the implications of Anthropic's new AI model.

Craig Mundie

A tech advisor who has joined Friedman in warning about the significance of Anthropic's tool, comparing it to the emergence of nuclear weapons.


What they’re saying

“Those worried about advances in artificial intelligence should know an alarming 'turning point' has already arrived sooner than expected, and with 'profound geopolitical implications.'”

— Thomas L. Friedman, New York Times Columnist

“If this tool spreads beyond responsible players, even bored kids would have the ability to hack any major infrastructure system.”

— Thomas L. Friedman, New York Times Columnist

What’s next

Anthropic will continue restricting access to the Claude Mythos Preview model to roughly 40 major firms while the identified vulnerabilities are patched. Meanwhile, experts are calling for urgent US-China cooperation to mitigate the global cybersecurity risks posed by this advanced AI tool.

The takeaway

Anthropic's Claude Mythos Preview model marks a significant milestone in the advancement of artificial intelligence, with the potential to expose critical vulnerabilities in global software and infrastructure. Its capabilities raise serious cybersecurity concerns and demand immediate action, including international cooperation, before bad actors can exploit these risks.