YouTube is expanding its likeness detection technology, which spots AI-generated deepfakes, to a pilot group of government officials, political candidates, and journalists, the company announced Tuesday. Participants will get access to a tool that detects unauthorized AI-generated content featuring their likeness and lets them request its removal if it violates YouTube policy.
The technology was initially launched last year to about 4 million YouTube creators in the YouTube Partner Program, following earlier tests.
Similar to YouTube’s Content ID system, which identifies copyright-protected material in uploaded videos, the likeness detection feature scans uploads for AI-generated faces. Deepfakes are sometimes used to spread misinformation and alter perceptions, depicting public figures, such as politicians, doing or saying things they never actually did.
With the pilot program, YouTube aims to balance users’ free expression against the risks of AI technology that can create realistic likenesses of public figures.
“This expansion is about the integrity of public conversation,” said Leslie Miller, YouTube’s Vice President of Government Affairs and Public Policy. “The risks of AI impersonation are high for those in the civic space. But while providing this new shield, we’re cautious in its application,” she noted.
Miller explained that not every detected match will be removed on request. YouTube will assess each request against its existing privacy policy guidelines to determine whether the content is parody or political critique, both of which are protected forms of expression.
YouTube is also advocating for these protections at a federal level, supporting the NO FAKES Act, which would regulate AI usage for unauthorized recreations of voices and visual likenesses.
To use the new tool, eligible testers must verify their identity by uploading a selfie and a government ID. They can then create a profile, review matches as they appear, and request removals. YouTube plans to eventually let users block violating uploads before they go live, or possibly monetize the videos, much as the Content ID system does.
The company would not specify which politicians or officials would be initial testers, but aims to make the technology widely available over time.
AI videos will be labeled, but the label placement varies. For some, it appears in the video’s description, while for others on more sensitive topics, it appears at the start of the video. This approach aligns with YouTube’s treatment of all AI-generated content.
“There’s a lot of AI-produced content, but that distinction doesn’t affect the content itself,” explained Amjad Hanif, YouTube’s Vice President of Creator Products, speaking about label placement. “It could be a cartoon generated with AI, so there’s a judgment on whether it merits a visible disclaimer,” he said.
YouTube isn’t disclosing how many removals the detection technology has handled, but noted that so far the volume of removed content remains “very small.”
“For many creators, it’s about awareness of what’s created, and actual removal request volume is low as most content is benign or adds to their business,” Hanif said.
This might not hold true for government officials, politicians, or journalists.
YouTube plans to extend its deepfake detection technology to more areas, including recognizable spoken voices and other intellectual property, such as popular characters.
