- YouTube’s new policy allows users to request the removal of AI-generated content that simulates their face or voice.
- The request process falls under privacy violations, not misleading content.
- Content removal depends on various factors, including disclosure, public interest, and whether the content is satire or parody.
YouTube has rolled out a new policy allowing users to request the removal of AI-generated or synthetic content that mimics their face or voice.
This policy change, introduced quietly in June, expands YouTube’s approach to managing AI content, first outlined in its responsible AI agenda in November. Here’s a breakdown of what this means for users and creators.
Requesting Content Removal as a Privacy Violation
Previously, misleading AI content such as deepfakes could be flagged only as deceptive. The new policy addresses privacy concerns directly: individuals can now request the removal of AI-generated content that simulates their face or voice by citing a privacy violation.
According to YouTube’s updated Help documentation, requests must generally be filed by the affected person themselves, with exceptions for specific cases such as minors, deceased individuals, or people without computer access.
However, submitting a removal request doesn’t guarantee that the content will be taken down.
YouTube will evaluate each complaint based on several factors, including whether the content is clearly labeled as synthetic, if it uniquely identifies a person, and whether it’s considered parody, satire, or holds public interest.
Evaluating the Content
YouTube’s review process will consider multiple aspects:
- Disclosure: Is the content identified as AI-generated or synthetic?
- Identification: Does it uniquely identify a person?
- Nature of Content: Could it be seen as parody, satire, or have public value?
- Public Figures: Does it involve well-known individuals, and does it show them in sensitive situations like criminal acts or political endorsements?
These considerations are especially relevant in an election year, when AI-generated content could influence voter decisions.
Content Removal Process
Once a complaint is filed, YouTube gives the content uploader 48 hours to address it. If the content is removed within this period, the complaint is closed.
Otherwise, YouTube initiates a review. If removal is warranted, the content is fully removed from the site, and any personal information, including names and details from titles, descriptions, and tags, is also erased.
Simply making a video private is not sufficient, since it could be reverted to public status later.
Creator Responsibilities and Tools
YouTube is not entirely against AI content and has explored generative AI tools like comment summarizers and conversational features. However, even AI-labeled content must adhere to YouTube’s Community Guidelines.
For creators, a privacy complaint does not automatically result in a strike, since privacy violations are handled separately from Community Guidelines strikes. However, YouTube may take action against accounts with repeated privacy violations.
In March, YouTube introduced a tool in Creator Studio that allows creators to disclose when content is made with synthetic media.
Recently, YouTube tested a feature for users to add crowdsourced notes for additional context on videos, such as identifying parodies or misleading content.