YouTube to require creators to disclose ‘realistic’ AI-generated videos

YouTube said it plans next year to begin enforcing a new policy requiring creators to self-identify videos created with generative AI, which will be labeled as such on the platform.

YouTube, the video platform owned by Alphabet’s Google, will soon require video makers to disclose when they’ve uploaded manipulated or synthetic content that looks realistic — including video that has been created using artificial intelligence tools.

The policy update, which will go into effect sometime in the new year, could apply to videos that use generative AI tools to realistically depict events that never happened, or show people saying or doing something they didn’t actually do. “This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials,” Jennifer Flannery O’Connor and Emily Moxley, YouTube vice presidents of product management, said in a company blog post Tuesday. Creators who repeatedly choose not to disclose when they’ve posted synthetic content may be subject to content removal, loss of ad revenue or other penalties, the company said.

When content is digitally manipulated or generated, creators must select an option to display YouTube’s new warning label in the video’s description panel. For content about sensitive topics, YouTube will display the label more prominently on the video player itself. The company said it would work with creators before the policy rolls out to make sure they understand the new requirements, and that it is developing its own tools to detect when the rules are violated. YouTube also committed to automatically labeling content that creators generate using its own AI tools.

Google — which both makes tools that can create generative AI content and owns platforms that can distribute such content far and wide — is facing new pressure to roll out the technology responsibly. Earlier on Tuesday, Kent Walker, the company’s president of legal affairs, published a company blog post laying out Google’s “AI Opportunity Agenda,” a white paper with policy recommendations aimed at helping governments around the world think through developments in artificial intelligence.

“Responsibility and opportunity are two sides of the same coin,” Walker said in an interview. “It’s important that even as we focus on the responsibility side of the narrative that we not lose the excitement or the optimism around what this technology will be able to do for people around the world.”

Like other user-generated media services, Google and YouTube have been under pressure to mitigate the spread of misinformation across their platforms, including lies about elections and global crises such as the COVID-19 pandemic. Google already has started to grapple with concerns that generative AI could create a new wave of misinformation, announcing in September that it would require “prominent” disclosures for AI-generated election ads. Advertisers were told they must include language like, “This audio was computer generated,” or, “This image does not depict real events,” on altered election ads across Google’s platforms. The company also said that YouTube’s community guidelines, which prohibit digitally manipulated content that may pose a serious risk of public harm, already apply to all video content uploaded to the platform.

In addition to the new generative AI disclosures, the company said it will eventually make it possible for people to request the removal of AI-generated or synthetic content that simulates an identifiable person, using its privacy request process. A similar option will be provided for music partners to request the removal of AI-generated music content that mimics an artist’s voice, YouTube said.

The company said not all content would be automatically removed once a request is filed; rather, it would “consider a variety of factors when evaluating these requests.” If a removal request concerns video that includes parody or satire, for instance, or if the person making the request can’t be uniquely identified, YouTube could decide to leave the content up on its platform.
