
YouTube's new rules for AI content safeguard user trust

Content creators on YouTube have increasingly begun producing videos with the help of generative AI. This AI-generated content can erode user trust when it spreads misinformation, and such videos also have the potential to deceive viewers by presenting fabricated content that appears real.

As a countermeasure, YouTube is implementing new rules for AI-generated content. Under the new rules, the platform will require creators to disclose whether they have used generative artificial intelligence to create realistic-looking videos.

The policy updates are outlined in a recent blog post, which states that creators who fail to disclose the use of AI tools for altered or synthetic videos may face penalties. YouTube may remove the offending content from the platform, and repeated failure to follow the guidelines can also lead to suspension from the platform's revenue-sharing program.

While YouTube wants users to retain trust in the authenticity of content on the platform, it also recognizes the artistic and innovative possibilities that generative artificial intelligence (AI) brings to content creation. YouTube aims to balance the creative potential of generative AI with its responsibility to protect the community.

The rules also extend to political ads. By next year, YouTube creators will have new options to indicate AI-generated content, particularly in videos that discuss sensitive topics. Viewers will be alerted to altered videos with prominent labels, and the platform will employ AI to identify and remove content that violates its rules.

Additionally, YouTube's privacy complaint process will be updated to allow the removal of AI-generated videos that simulate identifiable people. Music partners will also be able to request the takedown of AI-generated music that mimics an artist's unique voice.