A New York Times investigation found that over 40% of YouTube Shorts recommended after watching children's content appeared to be AI-generated.
After test accounts watched CoComelon, Bluey, and Ms. Rachel, more than 40% of the Shorts that YouTube recommended appeared to contain AI-generated visuals. That's according to a New York Times investigation published this week, and it should make every parent uncomfortable.
Let me state this clearly. YouTube's recommendation algorithm is directing children from legitimate, human-created content to AI-generated material, and YouTube has no requirement that animated AI videos for children be labeled as such. The platform places the entire burden of moderation on parents. That's a choice, not an accident.
What the Investigation Found
The methodology was straightforward. Researchers created fresh YouTube accounts and watched popular children's channels. After a few videos, they checked what Shorts the algorithm recommended. Over 40% of those recommendations contained what appeared to be AI-generated visuals.
These aren't abstract deepfakes or experimental art projects. They're brightly colored, fast-moving videos designed to look like legitimate children's content. Characters that look almost, but not quite, like familiar cartoons. Environments that feel familiar but slightly off. Stories that don't make narrative sense but move quickly enough to keep a toddler's eyes on the screen.
The problem isn't just that the content exists. It's that the algorithm actively promotes it. YouTube's recommendation engine is optimized for engagement, not quality. AI-generated content is cheap to produce at scale, and if it keeps kids watching, the algorithm rewards it. More views, more recommendations, more views. The loop feeds itself.
Why Labels Matter and Don't Exist
YouTube requires labels on AI-generated content in some contexts, particularly when it involves realistic depictions of real people. But animated AI content targeting children? No labeling requirement. A producer can generate hundreds of AI videos a day, upload them to YouTube, and face no obligation to tell viewers, parents, or YouTube itself that the content was machine-made.
This matters because parents use channel reputation as a proxy for content quality. If a child is watching CoComelon, the parent knows what they're getting. But when the algorithm takes that child from CoComelon to an AI-generated knockoff, the parent has no way to distinguish between real content and machine-produced filler without sitting down and watching every video.
Asking parents to monitor every second of their child's YouTube consumption isn't a solution. It's a deflection. Parents already have to worry about screen time, content appropriateness, and digital literacy. Adding "determine whether this cartoon was made by a human or a machine" to that list is unreasonable, especially when the platform itself won't do the basic work of labeling.
The Economics of AI Slop
Here's why this problem will get worse before it gets better. Producing a traditional children's animated show takes months and costs hundreds of thousands of dollars per episode. Teams of animators, writers, voice actors, and producers create each minute of content.
AI-generated children's content costs almost nothing. A single person with access to video generation tools can produce dozens of videos per day. The quality is lower, but quality isn't what the algorithm measures. It measures watch time. And AI slop is designed to maximize watch time through rapid visual stimulation, bright colors, and familiar character shapes.
The math is brutal. If a traditional creator invests $100,000 in a quality episode and an AI content farm spends $100 to generate something that gets comparable watch time, the algorithm doesn't care about the cost difference. It promotes whatever keeps eyes on screens.
This creates a race to the bottom. Legitimate children's content creators are competing against an unlimited supply of cheap AI-generated material that the algorithm treats equally. Over time, the AI content pushes real creators down in recommendations because there's simply more of it and it's optimized for the same engagement metrics.
What YouTube Could Do
YouTube has the tools to fix this. The company already uses AI to identify copyrighted music, extremist content, and spam. It could use similar technology to detect AI-generated visuals and either label them or exclude them from children's recommendations.
Content ID, YouTube's copyright detection system, processes millions of uploads daily and matches them against a database of protected content. Building a similar system for AI-generated detection is technically feasible. Google's own researchers have published papers on detecting AI-generated images and video. The capability exists inside the same company.
The question isn't whether YouTube can solve this. It's whether YouTube wants to. AI-generated content drives engagement. Engagement drives ad revenue. Kids are a massive and lucrative audience. From a purely financial perspective, YouTube has no incentive to reduce the flow of content that keeps children watching, regardless of how that content was made.
Whose Responsibility Is It?
YouTube will argue that parents should use YouTube Kids, the filtered version of the platform designed for younger viewers. But YouTube Kids has its own problems with low-quality content, and many parents use the main YouTube app on family devices.
YouTube will also argue that content moderation at scale is difficult. And it is. But other platforms have made more aggressive moves. TikTok requires AI-generated content labels. Instagram requires disclosure on AI-altered images. YouTube's position of not requiring labels on animated AI content for children is a policy choice, not a technical limitation.
The broader question is whether platforms should be responsible for the quality of content their algorithms recommend, not just the legality of it. YouTube isn't hosting anything illegal by recommending AI-generated cartoons to children. But it is making a choice about what children see, and right now that choice is driven entirely by engagement metrics with no quality filter.
Until that changes, the AI slop pipeline will keep flowing. And the algorithm will keep feeding it directly to kids.