YouTube has updated its rulebook for the era of deepfakes. Starting today, anyone uploading video to the platform must disclose certain uses of synthetic media, including generative AI, so viewers know what they’re seeing isn’t real. YouTube says the requirement applies to “realistic” altered media such as “making it appear as if a real building caught fire” or swapping “the face of one individual with another’s.”

The new policy shows YouTube taking steps that could help curb the spread of AI-generated misinformation as the US presidential election approaches. It is also striking for what it permits: AI-generated animations aimed at kids are not subject to the new synthetic content disclosure rules.

YouTube’s new policies exclude animated content altogether from the disclosure requirement. This means that the emerging scene of get-rich-quick, AI-generated content hustlers can keep churning out videos aimed at children without having to disclose their methods. Parents concerned about the quality of hastily made nursery-rhyme videos will be left to identify AI-generated cartoons by themselves.

YouTube’s new policy also says creators don’t need to flag use of AI for “minor” edits that are “primarily aesthetic” such as beauty filters or cleaning up video and audio. Use of AI to “generate or improve” a script or captions is also permitted without disclosure.

There’s no shortage of low-quality content on YouTube made without AI, but generative AI tools lower the bar to producing video in a way that accelerates its production. YouTube’s parent company Google recently said it was tweaking its search algorithms to demote the recent flood of AI-generated clickbait, made possible by tools such as ChatGPT. Video generation technology is less mature but is improving fast.

Established Problem

YouTube is a children’s entertainment juggernaut, dwarfing competitors like Netflix and Disney. The platform has struggled in the past to moderate the vast quantity of content aimed at kids. It has come under fire for hosting content that looks superficially suitable or alluring to children but on closer viewing contains unsavory themes.

WIRED recently reported on the rise of YouTube channels targeting children that appear to use AI video-generation tools to produce shoddy videos featuring generic 3D animations and off-kilter iterations of popular nursery rhymes.

The exemption for animation in YouTube’s new policy could mean that parents cannot easily filter such videos out of search results or keep YouTube’s recommendation algorithm from autoplaying AI-generated cartoons after setting up their child to watch popular and thoroughly vetted channels like PBS Kids or Ms. Rachel.

Some problematic AI-generated content aimed at kids does require flagging under the new rules. In 2023, the BBC investigated a wave of videos targeting older children that used AI tools to push pseudoscience and conspiracy theories, including climate change denialism. These videos imitated conventional live-action educational videos—showing, for example, the real pyramids of Giza—so unsuspecting viewers might mistake them for factually accurate educational content. (The pyramid videos went on to suggest that the structures can generate electricity.) The new policy would crack down on that type of video.

“We require kids content creators to disclose content that is meaningfully altered or synthetically generated when it seems realistic,” says YouTube spokesperson Elena Hernandez. “We don’t require disclosure of content that is clearly unrealistic and isn’t misleading the viewer into thinking it’s real.”

The dedicated kids app YouTube Kids is curated using a combination of automated filters, human review, and user feedback to find well-made children’s content. But many parents simply use the main YouTube app to cue up content for their kids, relying on eyeballing video titles, listings, and thumbnail images to judge what is suitable.

So far, most of the apparently AI-generated children’s content WIRED found on YouTube is poorly made in ways similar to more conventional low-effort kids’ animations: ugly visuals, incoherent plots, and zero educational value. But it is not uniquely ugly, incoherent, or pedagogically worthless.

AI tools make it easier to produce such content, and in greater volume. Some of the channels WIRED found upload lengthy videos, some well over an hour long. Requiring labels on AI-generated kids’ content could help parents filter out cartoons that may have been published with minimal human vetting, or none at all.
