
Government Issues Advisory on Labeling AI-Generated Content

According to the latest advisory on artificial intelligence, the government has dropped the requirement to seek permission before deploying untested AI models but emphasizes the need to label AI-generated content.

Instead of requiring prior permission for untested AI models, the new advisory, issued by the Ministry of Electronics and IT on Friday evening, effectively revises the compliance requirements under the IT Rules of 2021 and supersedes the earlier advisory of March 1, 2024, the ministry said.

According to the new advisory, intermediaries and platforms are often negligent in carrying out the due diligence obligations outlined under the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules.

The government has asked companies to label content created using their AI software or platforms and to inform users of any possible inherent fallibility or unreliability in the output generated by their AI tools.

"Where any intermediary through its software or any other computer resource permits or facilitates synthetic creation, generation or modification of a text, audio, visual or audio-visual information, in such a manner that such information may be used potentially as misinformation or deep fake, it is advised that such information created, generated, or modified through its software or any other computer resource is labeled that such information has been created, generated or modified using the computer resource of the intermediary," the advisory said.

It added that if any changes are made by a user, the metadata should be configured so that the user or computer resource that effected such changes can be identified.
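The advisory does not prescribe any technical format for such labels or metadata. Purely as an illustration, the sketch below shows one way a platform might attach a label and an edit-history record to synthetic content so that both the generating software and any later editor remain identifiable; the function name attach_provenance and the generator and user identifiers are hypothetical, not drawn from the advisory.

```python
import hashlib
import json
from datetime import datetime, timezone


def attach_provenance(content: bytes, generator_id: str, editor_id: str | None = None) -> dict:
    """Build a provenance record for a piece of synthetic content.

    Hypothetical sketch only: it shows the kind of label-plus-metadata
    an intermediary might attach, not a format mandated by the advisory.
    """
    record = {
        "label": "AI-generated content",                      # user-facing label
        "generator": generator_id,                             # software that created the content
        "content_sha256": hashlib.sha256(content).hexdigest(),  # ties metadata to this exact content
        "created_at": datetime.now(timezone.utc).isoformat(),
        "edit_history": [],
    }
    if editor_id:
        # Record the user or computer resource that effected a change,
        # so later modifications remain traceable.
        record["edit_history"].append({
            "editor": editor_id,
            "edited_at": datetime.now(timezone.utc).isoformat(),
        })
    return record


# Example: content generated by a hypothetical "acme-imagegen-v1" model,
# later modified by a hypothetical user "user-42".
metadata = attach_provenance(b"<image bytes>", "acme-imagegen-v1", editor_id="user-42")
print(json.dumps(metadata, indent=2))
```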

The move follows a controversy over the responses of Google's AI platform to questions about Prime Minister Narendra Modi, after which the government issued an advisory on March 1 asking social media and other platforms to label under-testing AI models and to prevent the hosting of unlawful content.

In that advisory to intermediaries and platforms, the Ministry of Electronics and Information Technology had warned of criminal action for non-compliance. It had earlier told platforms that untested or unreliable artificial intelligence (AI) models could be deployed only after receiving government approval and only after being labeled for the "possible and inherent fallibility or unreliability of the output generated."

Conclusion: The government's advisory on labeling AI-generated content is a milestone in promoting transparency, honesty, and accountability in the digital age. It highlights the need for government and industry to act together and innovate responsibly, harnessing the power of AI while mitigating its risks and challenges.