YouTube AI Videos Pose Dangers to Children

Disturbing new reports find AI-generated 'educational' videos on YouTube targeting young kids with harmful or nonsensical content.

Mar. 19, 2026 at 2:03pm

Recent investigations have uncovered a proliferation of AI-generated videos on YouTube targeting young children with dangerous or nonsensical content. These videos, often masquerading as educational programming or nursery rhymes, depict children engaging in unsafe behaviors such as walking into traffic or eating choking hazards. Experts warn that the mixed signals and incorrect information in these videos can seriously harm young, developing minds.

Why it matters

The rapid spread of these AI-generated videos on YouTube is raising major concerns about the platform's ability to protect children from harmful content. With more parents relying on YouTube to entertain and educate their kids, the influx of this 'AI slop' could have significant negative effects on child development and safety.

The details

Investigations by outlets like The 74 and Mother Jones have uncovered numerous examples of AI-generated YouTube videos aimed at children that provide either dangerous misinformation or complete nonsense. These include videos showing children riding without seatbelts, walking into traffic, and even eating choking hazards like whole grapes. Experts warn that the inconsistent and incorrect information in these videos can significantly delay a child's learning and 'wire the brain in incorrect ways' during critical early development.

  • The channel behind one of the AI nursery rhyme videos has uploaded over 10,000 videos in the past 7 months, averaging about 50 new videos per day.

The players

Carla Engelbrecht

Has worked for children's media brands like Sesame Street and PBS Kids.

Kathy Hirsh-Pasek

A professor of psychology and neuroscience at Temple University.

Dana Suskind

A professor of surgery and pediatrics at the University of Chicago and the author of the upcoming book 'Human Raised: Nurturing Connection, Curiosity, and Lifelong Learning in the Age of AI'.


What they’re saying

“We're at the beginning of a monster problem, and we have to get hold of it quickly.”

— Kathy Hirsh-Pasek, Professor of psychology and neuroscience at Temple University

“This is not neutral content. I think of this as toddler AI misinformation at an industrial scale. It's very risky for the developing brain.”

— Dana Suskind, Professor of surgery and pediatrics at the University of Chicago

“Mixed signals means you are delaying them learning the cause and effect of a thing. If you learn that red is blue and blue is red, that's a delay. If you're inconsistent, it takes that much longer to learn. Every delay they have means everything else gets pushed back. That's taking their executive function offline to go learn nonsense.”

— Carla Engelbrecht, Children's media expert

What’s next

YouTube has stated that it applies stricter 'quality principles' to content made for children, but many of these harmful AI videos are still slipping through the cracks. Experts warn that the platform must take urgent action to address this 'monster problem' before it causes further damage to young, developing minds.

The takeaway

The proliferation of dangerous and misleading AI-generated content aimed at children on YouTube underscores the urgent need for stronger content moderation and safeguards. Without them, vulnerable young audiences remain exposed to misinformation that could significantly impair their cognitive development.