
High Court Grills Centre Over Lack of Deepfake Regulation: Unveiling Legal Dilemmas

On Wednesday, the Delhi High Court sought the Centre's response to veteran journalist Rajat Sharma's PIL against the non-regulation of deepfake technology in the country. The plea also sought directions to restrict public access to applications and software that enable the creation of such content.

Deepfake technology makes it easy to create synthetic videos, audio recordings, and images that can manipulate and deceive viewers by overlaying one person's likeness onto another and altering their words and actions, thereby creating false statements or spreading misinformation.

The bench said this was a serious problem and asked the central government whether it was willing to work on the issue, observing that political parties had also complained about it. "You do nothing," the court remarked.

Rajat Sharma, Chairman and Editor-in-Chief of Swatantra Samachar Seva Pvt Ltd (INDIA TV), said in his public interest litigation (PIL) that the proliferation of deepfake technology poses a severe threat to all sections of society, including through disinformation and misinformation campaigns that undermine public discourse and the integrity of the democratic process.

The PIL said there was a risk that the technology could be used for fraud, identity theft, harassment, damage to personal reputation, violations of privacy and security, erosion of trust in media and public institutions, and infringement of intellectual property and privacy rights.

The petition states that, given these threats, strict enforcement and proactive measures are needed to mitigate the potential harm caused by those who misuse the technology.

The petition said the lack of adequate legislation and safeguards against the misuse of deepfake technology poses a serious threat to the fundamental rights guaranteed under the Constitution of India, including the right to free speech and expression, the right to privacy, and the right to a fair trial.

It said the government should formulate a legal framework to define and categorize deepfakes and AI-generated content, and to prohibit their creation, distribution, and dissemination for malicious purposes.

Conclusion: As the legal battle unfolds, stakeholders across legal, technical, and civil society circles will be watching closely to see how the Centre responds to the court's inquiries and whether it takes concrete steps to draft and enforce regulations that adequately address deepfake technology.