OpenAI Considered Alerting Canadian Police About Suspected School Shooter Months Before Attack

The AI company said it evaluated notifying authorities about a person who later committed a mass shooting.

Feb. 21, 2026 at 2:39am

OpenAI, the creator of ChatGPT, stated on Friday that last year it evaluated alerting Canadian police about the activities of a person who months later committed one of the deadliest school shootings in the country's history. The company said it ultimately decided not to report the individual to authorities due to concerns about privacy and the potential for false positives.

Why it matters

This incident raises questions about the ethical responsibilities of AI companies when they encounter potential threats of violence or other criminal behavior. It also highlights the challenges of balancing privacy rights with public safety concerns when deploying powerful AI systems.

The details

The individual had interacted with OpenAI's language models and exhibited behavior the company believed could be a precursor to violence. OpenAI ultimately decided not to report the person to Canadian police, citing worries about violating the individual's privacy and the possibility of making an incorrect assessment.

  • In 2025, the individual interacted with OpenAI's language models and exhibited concerning behavior.
  • Months later, the same individual committed one of the deadliest school shootings in Canadian history.

The players

OpenAI

The artificial intelligence company that created the ChatGPT language model.

Canadian Police

The law enforcement authorities that OpenAI considered notifying about the suspected school shooter.


What they’re saying

“We must balance the need to protect individual privacy with our responsibility to prevent harm when we have credible information about a potential threat.”

— Sam Altman, CEO, OpenAI

What’s next

OpenAI has stated that it will be reviewing its policies and procedures for handling potential threats identified through its AI systems in order to improve its response in the future.

The takeaway

This incident highlights the complex ethical challenges facing AI companies as their technologies become more capable of surfacing potential threats to public safety. Striking the right balance between privacy rights and public safety will be an ongoing challenge for the industry.