YouTube requires users to disclose AI-created content

Welcome to The Hill’s Technology newsletter


The Big Story 

YouTube said it will now require users to disclose when their videos feature content created using artificial intelligence (AI), as part of a larger announcement by the platform about its approach to responsible AI innovation.

© Pavlo Gonchar/SOPA Images/LightRocket via Getty Images

The new policies require creators to disclose when they’ve created or incorporated AI content into their posts, including content that “realistically depicts an event that never happened” or shows “someone saying or doing something they didn’t actually do.”


“Creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties,” YouTube said in a news release on Tuesday.


“We’ll work with creators before this rolls out to make sure they understand these new requirements,” it added.


Viewers will also be able to submit a request for YouTube to remove AI content “that simulates an identifiable individual, including their face or voice,” our colleague Olafimihan Oshin reported.


The company noted that not all requests will be honored and that it will “consider a variety of factors when evaluating these requests.”


“This could include whether the content is parody or satire, whether the person making the request can be uniquely identified, or whether it features a public official or well-known individual, in which case there may be a higher bar,” the release said.


YouTube also said it plans to introduce a process through which its music industry partners can request the removal of AI content that “mimics an artist’s unique singing or rapping voice.”


The release comes after Meta, the parent company of Facebook and Instagram, announced last week that it would require political advertisers on its platforms to disclose when they use AI or other digital methods.



We’re Rebecca Klar and Julia Shapero, tracking the latest moves from Capitol Hill to Silicon Valley.

Did someone forward you this newsletter? Subscribe here.

Essential Reads 

How policy is impacting the tech sector now and in the future:




The Refresh 

News we’ve flagged from the intersection of tech and other topics:

Social media companies to face kids’ safety suits

Social media companies including Meta, ByteDance, Alphabet and Snap must proceed with lawsuits alleging that their platforms harm children’s mental health, after a judge rejected the companies’ motion to dismiss the dozens of cases, The Verge reported.

Instagram expands Close Friends feature

Instagram is expanding its Close Friends feature, which lets users share certain content with a select group, to its main feed rather than just the Stories feature, TechCrunch reported.

On Our Radar 

Upcoming news themes and events we’re watching:

  • The Business, Government, and Society Initiative at Stanford Graduate School of Business will hold an event discussing generative artificial intelligence and democracy Thursday from 3-4 p.m. ET

In Other News 

Branch out with other reads on The Hill:




What Others are Reading 

Two key stories on The Hill right now:




You’re all caught up. See you tomorrow! 

For the latest news, weather, sports, and streaming video, head to The Hill.