YouTube Expands AI Likeness Detection to Government Officials, Journalists, and Political Candidates
YouTube has announced the expansion of its likeness detection technology to a pilot group of government officials, political candidates, and journalists, broadening a tool previously available to creators in the YouTube Partner Program.
According to a blog post, the tool functions similarly to YouTube’s existing Content ID system, scanning newly uploaded videos for AI-generated faces that match enrolled individuals. To enroll, participants must submit a video selfie and a government-issued photo ID. Approval can take up to five days. Once verified, participants can access a dashboard that displays flagged videos and request the removal of content they believe violates YouTube’s privacy guidelines.
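YouTube has not published technical details of its matching pipeline, but Content ID–style likeness scanning is commonly described as comparing face embeddings against a register of enrolled references. A minimal illustrative sketch, where the function names, embedding values, and similarity threshold are all hypothetical assumptions rather than YouTube's actual implementation:

```python
import math

# Hypothetical sketch of embedding-based likeness matching, loosely analogous
# to a Content ID-style flagging pass. Names, values, and the threshold are
# illustrative assumptions, not YouTube's actual system.

SIMILARITY_THRESHOLD = 0.9  # assumed cutoff for declaring a likeness match


def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def scan_upload(face_embeddings, enrolled):
    """Return the enrolled participants whose likeness appears in an upload.

    face_embeddings: one embedding per face detected in the new video.
    enrolled: mapping of participant id -> verified reference embedding.
    A match only flags the video to the participant's dashboard; removal
    remains a separate decision (detection does not guarantee removal).
    """
    flagged = set()
    for face in face_embeddings:
        for participant, reference in enrolled.items():
            if cosine_similarity(face, reference) >= SIMILARITY_THRESHOLD:
                flagged.add(participant)
    return flagged


enrolled = {"official_a": [0.9, 0.1, 0.4]}
faces_in_video = [[0.88, 0.12, 0.41], [0.1, 0.9, 0.2]]
print(scan_upload(faces_in_video, enrolled))  # flags "official_a"
```

The key point the sketch captures is the separation the article describes: the scan populates a dashboard of candidate matches, and removal is a distinct, human-initiated request reviewed against YouTube's privacy guidelines.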
Detection does not guarantee removal. YouTube will continue to preserve content qualifying as parody, satire, or public-interest critique.
“This expansion is really about the integrity of the public conversation,” Leslie Miller, YouTube’s VP of Government Affairs and Public Policy, said in a statement. “We know that the risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we’re also being careful about how we use it.”
Rollout and Background
YouTube originally launched likeness detection in December 2024 for high-profile actors and athletes before extending it to millions of creators in the YouTube Partner Program. The company has not confirmed which specific officials or journalists are among the initial pilot testers, but stated it intends to make the technology broadly available over time.
YouTube says the volume of removal requests from creators has been “very small,” with most flagged content proving “fairly benign.” The company acknowledged that dynamics may differ for government officials and journalists.
Personal data collected during enrollment, including the video selfie and government-issued ID, will be used solely for identity verification and will not be used to train Google’s generative AI models. Participants can deactivate the tool at any time, and scanning will stop within approximately 24 hours.
Legislative Context
YouTube is also backing the NO FAKES Act, bipartisan federal legislation that would establish a property right in a person’s AI-generated digital replica and create a notice-and-takedown process for unauthorized deepfakes. The bill, reintroduced in April 2025, includes exemptions for content protected by the First Amendment, such as parody, satire, and news commentary.
