X has announced a new policy aimed at creators who post AI-generated videos related to armed conflict without proper disclosure. Nikita Bier, the company's head of product, emphasized the importance of authentic information during sensitive times such as war. According to Bier, creators who fail to label their AI content will face a 90-day suspension from the Creator Revenue Sharing Program.
If these creators continue to share misleading AI content after their suspension period, they risk permanent removal from the program. X plans to identify such misleading posts using advanced tools designed to detect generative AI content, along with its community-driven fact-checking initiative, Community Notes.
Bier highlighted the ease with which AI can generate misleading visuals, stating, "It is critical that people have access to authentic information on the ground." This proactive measure aims to maintain the integrity of information shared on the platform during times of conflict.
The Creator Revenue Sharing Program allows users to earn income from popular posts by receiving a share of advertising revenue. While this initiative is intended to encourage engaging material, some critics argue that it may inadvertently promote sensationalism and clickbait, as creators might prioritize attention-grabbing content over factual accuracy.
Despite this new policy, concerns remain about the broader implications of AI-generated media. Because the policy targets armed conflict specifically, AI tools can still be used to create misleading political content or promote deceptive products, raising questions about the overall effectiveness of X's approach to content regulation.
As the landscape of digital content continues to evolve, X's initiative reflects a growing awareness of the need for transparency and accountability among creators, especially in the context of rapidly advancing AI technologies.