Teenagers Sue Musk's xAI Over Sexually Explicit Images

Lawsuit claims AI-generated content violated minors' privacy and caused mental distress

Mar. 20, 2026 at 9:18am

Three Tennessee high school students have filed a lawsuit against Elon Musk's AI company xAI, claiming the firm's image-generation tools were used to create sexually explicit images of them as minors. The students, seeking class-action status, allege the images were distributed online, causing them anxiety, depression, and fear of being recognized.

Why it matters

This case highlights the potential harms of AI-generated content, especially when it involves the exploitation of minors. It raises concerns about the need for stronger safeguards and regulations around the development and use of generative AI tools to prevent such abuses.

The details

According to the lawsuit, the students' real photos were used to create sexually explicit images that were then distributed online. One plaintiff, Jane Doe 1, learned through an anonymous tip that such images of her were being shared. The suit alleges that xAI knew its Grok chatbot could produce sexually explicit content depicting minors but released it anyway, and that the person who distributed the images used xAI's technology to create them.

  • In December 2025, Jane Doe 1 was alerted that sexually explicit images of her were being distributed online.
  • In late December 2025, the perpetrator was arrested and their phone was confiscated, revealing they had created explicit images of at least 18 other girls.

The players

xAI

Elon Musk's artificial intelligence company, which is being sued because its image-generation tools were allegedly used to create sexually explicit images of minors.

Jane Doe 1, Jane Doe 2, Jane Doe 3

Three Tennessee high school students who are the plaintiffs in the lawsuit against xAI, seeking class-action status to represent other victims.


What’s next

The judge in the case will decide whether to grant the lawsuit class-action status, which would allow the plaintiffs to represent potentially thousands of other victims.

The takeaway

The case underscores the urgent need for AI companies to implement robust safeguards and content moderation so their technologies cannot be exploited to create sexually abusive content, especially content involving minors, and it illustrates the serious mental and emotional harm such violations inflict on victims.