Teens sue Musk's xAI over AI-generated child sexual abuse material

Lawsuit alleges xAI's Grok model was used to create explicit content of minors without consent

Mar. 17, 2026 at 12:34am

A class action lawsuit has been filed against Elon Musk's artificial intelligence company xAI, alleging that its Grok AI model was used to create and distribute child sexual abuse material based on real images of teenage girls. The lawsuit claims xAI "knowingly" participated in the production and dissemination of this illegal content.

Why it matters

This case highlights growing concerns over the potential misuse of powerful AI models like Grok, which can be used to generate nonconsensual and exploitative content, especially targeting minors. It raises questions about corporate responsibility and the need for robust safeguards when developing advanced AI technologies.

The details

The lawsuit was filed on behalf of several Tennessee families whose daughters' photos were allegedly used by a man to create sexually explicit content with xAI's Grok model. After the man's arrest in December 2025, police informed the families that child sexual abuse material depicting their minor daughters had been created. The lawsuit argues that xAI deliberately designed Grok to produce this type of harmful, illegal content for financial gain, without proper safeguards to protect children.

  • In late December 2025 and early January 2026, researchers estimated Grok generated approximately 3 million sexualized images and 23,000 images depicting apparent children.
  • On January 14, 2026, California's Attorney General announced an investigation into xAI over the nonconsensual, sexually explicit material produced by Grok.
  • On January 15, 2026, a lawsuit was filed against xAI by a conservative influencer who shares a child with Elon Musk, alleging Grok was used to create sexual images of her, including when she was 14 years old.
  • On January 23, 2026, an anonymous plaintiff filed a class action lawsuit against xAI in California, arguing the company knew Grok would be used to create sexually explicit deepfake images of women and girls.

The players

xAI

An artificial intelligence company founded by Elon Musk, which developed the Grok AI model at the center of the lawsuit.

Elon Musk

The founder and CEO of xAI, whom the lawsuit alleges deliberately designed Grok to produce sexually explicit content for financial gain.

Annika Martin

A lawyer from Lieff Cabraser, one of the law firms representing the families in the class action lawsuit against xAI.

Rob Bonta

The Attorney General of California, who announced an investigation into xAI over the nonconsensual, sexually explicit material produced by Grok.

Ashley St. Clair

A conservative influencer who filed a lawsuit against xAI, alleging Grok was used to create sexual images of her, including when she was 14 years old.


What they’re saying

“These are children whose school photographs and family pictures were turned into child sexual abuse material by a billion-dollar company's AI tool and then traded among predators. Elon Musk and xAI deliberately designed Grok to produce sexually explicit content for financial gain, with no regard for the children and adults who would be harmed by it.”

— Annika Martin, Lawyer, Lieff Cabraser (USA TODAY)

“The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking. This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the internet. I urge xAI to take immediate action to ensure this goes no further. We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or of child sexual abuse material.”

— Rob Bonta, Attorney General of California (USA TODAY)

What’s next

The judge in the case will decide whether the class action lawsuit against xAI over the Grok-related allegations can proceed.

The takeaway

This case highlights the urgent need for robust regulations and oversight around the development and deployment of powerful AI models, to prevent them from being exploited for the creation and distribution of nonconsensual and abusive content, especially targeting minors. It underscores the significant harm that can result when companies prioritize profits over ethics and user safety.