OpenAI Introduces Solutions to Tackle DALL-E 3 Image Misuse: A Step Forward in AI Ethics
OpenAI is launching a tool that can detect images generated by its text-to-image model DALL-E 3, the Microsoft-backed (MSFT.O) startup said Tuesday, amid growing concerns about the influence of AI-generated content on this year's elections around the world.
The company said the tool correctly identified DALL-E 3-generated images about 98% of the time during internal testing, and that common modifications such as compression, cropping, and saturation changes have only a small effect on its accuracy.
The ChatGPT creator also plans to add tamper-resistant watermarks that tag digital content, such as images or audio, with signals that are difficult to remove.
As part of the effort, OpenAI has also joined an industry group that includes Google, Microsoft, and Adobe, which aims to provide a standard for tracing the origin of media.
AI-generated content and deepfakes are increasingly appearing in elections in India, the US, Pakistan, Indonesia, and other parts of the world. OpenAI said it is joining Microsoft in launching a $2 million "societal resilience" fund to support AI education.
Internal testing also examined how the tool handles images created with AI models from other companies; in those cases, it flagged only 5% to 10% of the images.
Agarwal told the Journal that altering such images, for example by changing their colors, also significantly reduced the tool's effectiveness, another limitation OpenAI hopes to address with further testing.
Conclusion: By engaging with the challenges associated with DALL-E 3 image misuse, OpenAI is reaffirming its commitment to advancing AI in a manner that prioritizes societal well-being and ethical considerations.