YouTube has announced an expansion of its "likeness detection" technology, designed to identify AI-generated content, including deepfakes, to individuals in the entertainment sector. The move aims to strengthen protections for creators and public figures against unauthorized uses of their likenesses, a growing concern in today's digital landscape.
The technology mirrors YouTube's established Content ID system, which identifies copyrighted material in user-uploaded videos and lets rights holders either request removals or share in the revenue those videos generate. Similarly, the likeness detection feature focuses on simulated faces, helping celebrities and creators safeguard their identities from misuse in scam advertisements and other unauthorized contexts.
Initially rolled out to a select group of YouTube creators as part of a pilot program, the technology has seen broader implementation this spring, extending its reach to include politicians, journalists, and now, those within the entertainment industry, such as talent agencies and management firms. Major agencies like CAA, UTA, WME, and Untitled Management have provided valuable feedback to enhance this tool.
Importantly, the use of this likeness detection does not necessitate that entertainers have their own YouTube channels. Instead, the system scans for AI-generated content to find visual matches of registered individuals. Users can then opt to request the removal of videos that violate privacy policies, submit copyright claims, or take no action. YouTube clarifies that not all content will be removed, as it allows for parody and satire under its guidelines.
Looking ahead, YouTube plans to extend the technology with audio detection capabilities. Alongside these efforts, the platform is advocating in Washington, D.C. for the NO FAKES Act, federal legislation that would regulate the unauthorized use of AI to recreate an individual's voice and likeness.
While YouTube has yet to disclose how many AI deepfake removals the tool has facilitated, it acknowledged in March that the number was "very small."
This development marks a significant step towards ensuring the integrity of digital identities in an era where technology continually blurs the lines between reality and simulation. As AI capabilities expand, such protective measures will be crucial in shaping a safer online environment for creators and public figures alike.