Abusive GenAI Content Proliferates Online, New Laws Aim to Curb Harm
Federal and state authorities take action to address the rise of nonconsensual deepfakes and other exploitative AI-generated content
Mar. 11, 2026 at 12:21am
A growing flood of abusive and exploitative generative AI (GenAI) content is overwhelming social media platforms and other online spaces. Malicious users are leveraging advanced AI tools to create nonconsensual deepfake images and videos, including sexually suggestive content of celebrities and ordinary individuals, as well as depictions of child sexual abuse. Lawmakers have responded with new federal legislation like the TAKE IT DOWN Act, which imposes notice-and-removal requirements on platforms and criminal penalties for perpetrators. State attorneys general and global regulators are also taking action to curb the proliferation of this harmful content.
Why it matters
The rapid rise of abusive GenAI content poses serious privacy and safety risks, especially for vulnerable individuals like minors. This crisis has prompted a flurry of legislative and regulatory activity at the federal, state, and international levels to establish new rules and enforcement mechanisms to protect victims and hold platforms and AI developers accountable.
The details
The TAKE IT DOWN Act prohibits knowingly posting or threatening to post intimate images or AI-generated deepfakes of another person without consent. Violations can result in up to 2 years in prison for offenses involving adults and up to 3 years for offenses involving minors, along with fines and mandatory restitution. The Act also requires platforms to implement formal notice-and-removal procedures allowing individuals to request the takedown of nonconsensual content. Failure to comply can be treated as an unfair or deceptive practice by the Federal Trade Commission. Alongside federal action, state attorneys general have used existing consumer protection and child safety laws to pressure platforms and AI developers to curb the creation and distribution of this exploitative content. Regulators worldwide, including in France, Indonesia, Malaysia, the UK, EU, India, and Australia, have also announced investigations and potential enforcement actions.
- The TAKE IT DOWN Act's formal compliance deadline for platforms is May 16, 2026.
- French cybercrime police recently executed searches of the Paris offices of xAI as part of an investigation into Grok's use to create and disseminate child sexual abuse material.
- Governments in Indonesia and Malaysia have temporarily blocked access to Grok, citing ineffective safeguards and violations of national online-safety and obscenity laws.
The players
TAKE IT DOWN Act
A federal law that prohibits knowingly posting or threatening to post intimate images or AI-generated deepfakes of another person without consent, and requires platforms to implement notice-and-removal procedures.
Federal Trade Commission (FTC)
The federal agency charged with enforcing the TAKE IT DOWN Act's notice-and-removal requirements, with the ability to treat platform noncompliance as an unfair or deceptive practice.
State Attorneys General
A bipartisan coalition of 35 state and territorial attorneys general that have used existing consumer protection, child safety, and criminal enforcement authorities to pressure platforms and AI developers to curb the creation and distribution of nonconsensual AI-generated intimate images.
xAI
The developer of the Grok chatbot, which has been used to generate nonconsensual intimate images and child sexual abuse material, leading to regulatory actions in several countries.
Zachary A. Myers
A former United States Attorney and member of the US Attorney General's Child Exploitation Working Group, who has extensive experience leading investigations and prosecutions of child exploitation and technology-facilitated crimes.
What they’re saying
“We must disable Grok's ability to produce nonconsensual intimate images and child sexual abuse material, eliminate existing exploitative content, take action against users generating illegal content, and provide users with control over whether their content can be altered by Grok.”
— State Attorneys General (wishtv.com)
What’s next
The judge in the case will decide on Tuesday whether to allow xAI to continue operating Grok while the investigation into the chatbot's use to create nonconsensual intimate images and child sexual abuse material continues.
The takeaway
The proliferation of abusive GenAI content has prompted a swift and coordinated response from lawmakers, regulators, and law enforcement at all levels of government, both in the US and globally. This crisis highlights the urgent need for platforms and AI developers to proactively implement robust safeguards, content moderation protocols, and accountability measures to protect users and prevent the exploitation of their technologies.