In a significant move given current global events, and with the midterm elections approaching, YouTube is expanding its likeness detection tool to include political and civic leaders, as well as journalists, in an effort to limit AI-generated content that may seek to mislead or deceive the platform's users.
Participating politicians and journalists (after their identities have been verified by YouTube) will be able to review videos flagged as containing their likeness and request removal if the videos violate YouTube's privacy policies. Generative AI, after all, has made it far easier to fake someone else's appearance or voice.
YouTube first announced the tool in December 2024, initially rolling it out to A-list actors and athletes. Last year, the company expanded it to top creators, and it now says about 4 million creators in the YouTube Partner Program have signed up to use it.
“We’ve always known there’s a need for this technology to go beyond just creators, so today we’re excited to announce that we’re expanding this beta to include journalists and government officials. We’ll start with a pilot group so we can see how these users protect their identities online,” Amjad Hanif, YouTube’s vice president of creator products, said in a press briefing ahead of the feature’s launch. “As we learn from the election cycle and how journalists use the tool, we will expand it to a broader group of people.”
“This expansion is really about the integrity of the public conversation,” adds Leslie Miller, YouTube’s VP of government affairs and public policy. “We know that the risks of AI impersonation are particularly high for those working in the civic sphere.”
The company declined to say which political leaders, civic leaders, and journalists will be invited to participate in the pilot, though it said it expects the program to ramp up quickly.
“We’ve been having regular conversations with people, and we encourage policymakers and others to reach out to us if they want to learn more and get involved in the ways we’re scaling this up,” Miller says.
Importantly, YouTube also makes clear that free speech principles will apply, lest politicians try to abuse the tool:
“As we introduce this new safeguard, we’re also being careful about how we use it,” Miller says. “Detection does not mean automatic removal. YouTube has a long history of protecting free speech, and this includes parody, satire, and political criticism. If a video of a world leader is a clear parody, it is likely to stay up.”
Hanif points out that among the celebrities and creators already using the tool, the number of takedown requests has been surprisingly small.
“They might be reviewing a lot of matches, and I think for a lot of them it was just about awareness of what was being created,” he says. “But the volume of takedown requests is actually quite low, because most of the matches turn out to be fairly harmless, or even additive to their overall business.”
The video platform also released a video explaining the tool and what it will mean for those who are now eligible to use it.

