
Meta to label AI-generated posts on its platforms ahead of elections

Meta, the parent company of Facebook, Instagram, and Threads, has announced a new policy to label posts created using generative artificial intelligence tools, including photorealistic images and deepfakes. The policy aims to enhance transparency and accountability on its platforms, especially ahead of upcoming elections in several countries, including the US and India.

According to Meta, the policy will apply to all AI-generated images that users post to its platforms, whether they were created with Meta's own tool, Imagine with Meta, or with tools from other companies, such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock. Meta said it has been working with these industry partners to align on common technical standards that signal when a piece of content has been created using AI, such as invisible watermarks and embedded metadata. Meta will then use its detection systems to identify these signals and label the AI-generated images accordingly.
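To make the metadata signal concrete: one of the industry standards in this space is the IPTC digital-source-type field, whose value `trainedAlgorithmicMedia` marks an image as AI-generated. The sketch below is an illustrative assumption, not Meta's actual detection pipeline; it simply scans a file's raw bytes for that identifier, which typically lives inside an embedded XMP metadata packet.

```python
# Minimal sketch: look for the IPTC "trainedAlgorithmicMedia"
# digital-source-type marker in an image file's raw bytes.
# This is a hypothetical illustration of metadata-based labeling,
# not Meta's real detection system.
AI_MARKER = b"trainedAlgorithmicMedia"

def has_ai_metadata_marker(path: str) -> bool:
    """Return True if the file contains the IPTC AI-generated
    digital source type identifier anywhere in its bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return AI_MARKER in data
```

A production detector would parse the XMP packet properly, verify C2PA provenance manifests, and check for invisible watermarks, none of which a plain byte scan can see; it would also have to cope with metadata being stripped when an image is re-encoded or screenshotted.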

Meta said it will roll out the expanded labeling in the coming months, and apply labels in all languages supported by each app. The company also said it will add a feature for users to disclose when they share AI-generated video or audio, and that users who fail to use this feature may face penalties, such as reduced distribution or removal of their content.

Meta's president of global affairs, Nick Clegg, said the policy is part of a broader effort to prevent misinformation and deception from spreading on its platforms, especially during a critical election year. He acknowledged, however, that the labels will not catch everything: detection depends on other companies embedding the agreed signals, and it is not yet possible to identify all AI-generated content, particularly audio and video produced with tools that do not include watermarks or metadata.

Clegg's announcement reflects Meta's proactive approach to an AI industry that has faced increasing scrutiny from regulators and lawmakers. While cautious about the risks of generative AI, he also acknowledged the technology's potential benefits and expressed support for a strong regulatory framework built on global collaboration rather than outright prohibition.

Meta's policy comes at a time when AI-generated content, such as deepfakes and synthetic media, poses a serious threat to the integrity and credibility of online information. Experts have warned that AI-generated content could be used to manipulate public opinion, spread false or misleading information, impersonate or defame individuals or groups, or interfere with democratic processes. In 2023, a doctored audio message of US President Joe Biden alarmed disinformation experts, many of whom warned that AI-generated content could play a pivotal role in the upcoming election if it is not labeled or removed quickly.