Navigating the Legal Terrain: Ethical Considerations of Generative AI Models
Generative AI is a class of artificial intelligence systems capable of generating new content, be it text, images, or music. These systems learn from data and then create new, original output that resembles the material they were trained on. While generative AI has the potential to transform industries, it raises ethical concerns about bias and potential abuse. The way tech companies train most generative AI models may not be legitimate, according to a former executive at British startup Stability AI.
Ed Newton-Rex, who recently resigned as VP of Audio at the London-based AI unicorn over ethical concerns, told UKTN that companies like OpenAI and Google may be breaking copyright law by training their models on copyrighted works, though he acknowledged that "the jury is still out".
Before stepping down, Newton-Rex led Stability AI's audio team, which was developing a music generation tool. He resigned because he disagreed with "the company's view that training generative AI models on copyrighted works is 'fair use'".
His departure comes as rapid advances in generative AI push the limits of copyright law. Intellectual property lawyers say most existing copyright laws were not written with AI in mind.
Newton-Rex said the jury is still out on whether this kind of training is covered by exceptions to copyright law, but added that he does not believe it is.
Citing the recent viral announcement of OpenAI's video generation tool Sora, Newton-Rex highlighted what he sees as a tendency for AI companies to refuse to disclose whether copyrighted material was used to train their models.
The legal precedent for AI models trained on copyrighted material is still being established, and a few high-profile cases are set to send ripples through the industry.
The most prominent is the New York Times' lawsuit against OpenAI and Microsoft, which alleges that ChatGPT was trained on the newspaper's copyrighted articles. Meanwhile, Getty Images is embroiled in a lawsuit against Stability AI.
Stability AI has become one of the UK's leading AI companies, recently raising £89 million in its Series A round. The company has previously argued that AI development is an acceptable and transformative use of existing content.
Josh Little, a partner at law firm Marriott Harrison, told UKTN that the question of whether it is illegal to train large language models on unlicensed data does not come gift-wrapped with a straightforward answer, nor is it one that will be resolved quickly.
The UK’s AI approach is ‘naive’
AI technologies cut across geopolitical boundaries, making it difficult to establish how copyright laws should apply in the age of AI. Newton-Rex believes the UK's approach to AI policy is "incredibly lazy" and "naive".
The UK government has repeatedly stated that it does not plan to regulate AI in the "short term", hoping instead that a lighter-touch regulatory approach will help establish Britain as an AI superpower.
"It's an incredibly lazy approach," said Newton-Rex. "It's a naive approach because it ignores the fact that you can have both AI development and fair treatment of creatives."
Newton-Rex said new laws are not needed to protect artists, writers, and creators. "Just be clear that this type of training is illegal," he said. "You don't need a new law for that; you just point to an existing law and say no, it's not allowed."
Conclusion: The legitimacy of how generative AI models are trained is a multifaceted legal and ethical challenge. As these models continue to advance and proliferate, robust legal and regulatory frameworks will be needed to promote innovation while safeguarding the rights of creators.