EU proposes stricter rules for generative AI models in groundbreaking regulation effort
The European Union is moving forward with a comprehensive three-tiered approach to regulating generative AI models and systems, marking a pivotal step in its efforts to address the rapidly advancing technology. The proposed approach, revealed in a document seen by Bloomberg, is set to make the EU the first Western government to impose mandatory rules on artificial intelligence.
The EU's legislative framework, known as the AI Act, is currently under development. If approved, the act would require systems that can predict crimes or screen job applications to undergo risk assessments and comply with specific regulations. Negotiators intend to refine the legislation at their upcoming meeting on October 25, with the aim of finalizing it by year-end.
The proposed three-tier system for regulation includes the following categories:
1. All Foundation Models:
AI developers would face transparency requirements before bringing any model to market. Documentation of the model and its training process, the results of internal "red-teaming" efforts, and evaluations against standardized protocols would be mandatory. Companies would also need to provide information to businesses using their technology and enable them to test the foundation models.
2. Very Capable Foundation Models:
Stricter rules would apply to companies producing this tier of technology. Models would have to undergo regular red-teaming by external experts vetted by the EU's AI Office. Companies would introduce systems to detect systemic risks, and independent auditors and researchers would perform compliance controls. The EU is also considering a forum where companies could discuss best practices, along with a voluntary code of conduct.
3. General Purpose AI Systems at Scale:
These systems would also undergo red-teaming by external experts, with the results sent to the AI Office. Companies would introduce a risk assessment and mitigation system. A system with at least 10,000 registered business users or 45 million registered end users would be considered a GPAI at scale (see the sketch after this list).
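To make the third tier's threshold concrete, here is a minimal Python sketch of how the reported user counts might be checked. The function name, variable names, and the "at least" reading of the figures are illustrative assumptions; the document seen by Bloomberg specifies only the numbers, not exact semantics.

```python
# Illustrative sketch only: names and the ">=" interpretation of the
# thresholds are assumptions, not language from the proposal itself.

BUSINESS_USER_THRESHOLD = 10_000    # registered business users
END_USER_THRESHOLD = 45_000_000     # registered end users

def is_gpai_at_scale(business_users: int, end_users: int) -> bool:
    """Return True if a system meets either reported threshold for a
    'General Purpose AI system at scale' under the proposed third tier."""
    return (business_users >= BUSINESS_USER_THRESHOLD
            or end_users >= END_USER_THRESHOLD)

# Example: a system with 12,000 business users qualifies regardless of
# its end-user count.
print(is_gpai_at_scale(business_users=12_000, end_users=1_000_000))  # True
```

Note that the two criteria are independent: meeting either one would, on this reading, place a system in the most heavily regulated category.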
Additional discussions are needed to establish safeguards to prevent the generation of illegal and harmful content. The newly formed AI Office would oversee compliance with these additional rules, with the power to request documents, organize compliance tests, and, as a last resort, suspend models.