AI researchers warn of dangers as they depart top companies

Departures from OpenAI, Anthropic, and xAI highlight growing concerns over AI safety and ethics

Feb. 11, 2026 at 7:47pm

A wave of high-profile AI researchers and executives is leaving top companies like OpenAI, Anthropic, and xAI, using their exits to sound the alarm about the potential dangers of the technology they helped develop. They warn of AI's "potential for manipulating users in ways we don't have the tools to understand, let alone prevent," and that "the world is in peril" as these companies race toward IPOs and growth.

Why it matters

The mass departures from leading AI companies come as the industry faces increasing scrutiny over the societal impacts and ethical implications of rapidly advancing AI systems. These researchers are voicing concerns that the companies are prioritizing growth and revenue over safety and responsible development, raising questions about the industry's ability to self-regulate and mitigate the risks of transformative AI technologies.

The details

Researchers like Zoë Hitzig from OpenAI and Mrinank Sharma from Anthropic have publicly resigned, citing "deep reservations" about their employers' practices and warning about the dangers of AI's potential to manipulate users. OpenAI also recently disbanded its "mission alignment" team focused on ensuring AI benefits all of humanity. Meanwhile, multiple co-founders and staff have departed xAI, the AI company merging with Elon Musk's SpaceX, amid concerns over its controversial Grok chatbot generating nonconsensual and hateful content.

  • On February 8, 2026, Zoë Hitzig announced her resignation from OpenAI.
  • On February 7, 2026, Mrinank Sharma announced his departure from Anthropic.
  • Over the past week, multiple co-founders and staff have left xAI.

The players

Zoë Hitzig

A researcher who worked at OpenAI for the past two years and cited "deep reservations" about the company's practices in her resignation.

Mrinank Sharma

The former head of Anthropic's Safeguards Research team, who warned that "the world is in peril" as he announced his departure from the company.

OpenAI

An artificial intelligence company, the maker of ChatGPT, that has faced growing concerns over the safety and ethics of its AI systems.

Anthropic

An AI research company behind the Claude chatbot, which has faced scrutiny over the departure of its head of Safeguards Research.

xAI

An AI company that is merging with Elon Musk's SpaceX, which has faced backlash over its Grok chatbot generating nonconsensual and hateful content.


What they’re saying

“The world is in peril”

— Mrinank Sharma, Former head of Anthropic's Safeguards Research team (Anthropic)

“[The technology has] a potential for manipulating users in ways we don't have the tools to understand, let alone prevent”

— Zoë Hitzig, Former OpenAI researcher (CNN)

What’s next

As these high-profile departures continue, the AI industry and regulators will likely face increasing pressure to address the ethical and safety concerns raised by these researchers, potentially leading to new oversight and governance frameworks for AI development and deployment.

The takeaway

The exodus of AI researchers from leading companies underscores the growing divide between those focused on the potential benefits of transformative AI technologies and those deeply concerned about the risks. It highlights the urgent need for the AI industry to prioritize responsible development and transparency to maintain public trust and ensure these powerful tools are used for the greater good.