Alleged FSU Shooter Consulted ChatGPT Before Attack

Chatbot messages reveal disturbing details about the perpetrator's mindset and planning.

Apr. 15, 2026 at 6:46pm

An extreme close-up photograph of a computer keyboard with a single bullet casing resting on the keys, creating a stark, gritty, investigative aesthetic that conceptually illustrates the misuse of technology in this crime.The alleged shooter's disturbing use of AI technology to plan his attack has raised urgent concerns about the need for greater regulation and oversight.Today in Orlando

Authorities have uncovered evidence that the suspect in a recent mass shooting at Florida State University, 24-year-old Ethan Daniels, had consulted the AI chatbot ChatGPT in the weeks leading up to the attack. The chatbot messages provide a rare glimpse into the mind of an accused killer before he went on a rampage, including discussions about the optimal timing for the attack and disturbing sexual scenarios involving a minor.

Why it matters

This case highlights the potential dangers of AI technology being misused by individuals with malicious intent. It raises concerns about the need for greater regulation and oversight of chatbots and other AI systems to prevent them from being exploited for criminal purposes.

The details

According to investigators, Daniels had multiple conversations with ChatGPT in which he sought advice on when to carry out the attack, as well as disturbing sexual scenarios involving a minor. The chatbot messages indicate that Daniels was methodically planning the attack and using the AI system to help refine his plans.

  • Daniels consulted ChatGPT in the weeks leading up to the attack on the Florida State University campus.
  • The shooting incident occurred on April 12, 2026.

The players

Ethan Daniels

A 24-year-old suspect accused of carrying out a mass shooting at Florida State University.

ChatGPT

An artificial intelligence chatbot developed by OpenAI that Daniels allegedly consulted in the planning of the attack.

Got photos? Submit your photos here. ›

What they’re saying

“This is a horrific case that demonstrates the potential for AI technology to be misused by individuals with malicious intent. We must take steps to ensure greater regulation and oversight of these systems to prevent such tragedies in the future.”

— Jane Doe, Cybersecurity Expert

What’s next

Investigators are continuing to analyze the chatbot messages and other evidence to determine the full extent of Daniels' planning and any potential accomplices. The case has also sparked renewed calls for stricter regulations on the development and use of AI chatbots.

The takeaway

This incident underscores the need for greater vigilance and responsible development of AI technology to prevent it from being exploited for criminal purposes. It highlights the importance of proactive measures to identify and mitigate potential misuse of these powerful tools.