Why Is the EU Contemplating Regulating High-Risk AI Technology?

Artificial Intelligence

The AI regulatory law will establish responsibility for an AI system’s actions across the EU.

The European Commission today unveiled its plan to strictly regulate artificial intelligence (AI) that it terms high risk. The bloc recently issued its long-anticipated white paper on AI. This document is a precursor to new legislation and sets out the regulations that would govern technologies with global consequences.

The paper proposes that the 27-nation bloc adopt strict legal requirements for “high-risk” uses of AI. The commission defines high risk as “a risk of injury, death or significant material or immaterial damage; that produce effects that cannot reasonably be avoided by individuals or legal entities,” spanning the healthcare, transportation, energy, and government domains.


Banning the Black Box of AI

The European Commission will draft new laws that include a ban on “black box” AI systems, which are difficult for humans to interpret, especially in medical devices and self-driving cars. Although the regulations would be broader and stricter than their predecessors and counterparts, European Commission President Ursula von der Leyen said the plan would promote “trust, not fear.” The plan also includes measures to update the European Union’s 2018 AI strategy and pump billions into R&D over the next decade.

However, the measures in the proposals are not final. Over the next 12 weeks, experts, lobby groups, and the public can debate the plan before it is drafted into concrete legislation. Moreover, any final regulation will need the approval of the European Parliament and national governments, which is unlikely to happen this year.

 

The EU’s Cautious Approach

Europe is taking a more vigilant approach than its stronger counterparts, China and the USA, where policymakers are reluctant to impose restrictions as they battle for AI supremacy. EU officials hope the regulatory curbs will win consumers’ trust, thereby driving wider adoption of AI.

Andrea Renda, an AI policy researcher at the Centre for European Policy Studies and a member of the commission’s independent advisory group on AI, says, “The EU tries to exercise leadership in what they’re best at, which is a very solid and comprehensive regulatory framework.”

 

Establishing Responsibility for AI

The AI regulatory law will also establish responsibility for an AI system’s actions, assigning liability either to the company deploying the system or to the company that designed it. High-risk applications would have to be submitted to the commission for approval before being deployed in the European Union.

The commission says that besides drafting stricter laws on the use of AI, it will also launch a broad European debate on facial recognition systems, an AI application that can identify individuals in crowds without their consent and one the EU is particularly concerned about. Although countries such as Germany have announced plans for deployment, officials say facial recognition systems often violate EU privacy laws, including special rules for police work.

 

Supporting Sustainable AI

The white paper notes that AI systems would play a decisive role in achieving the EU’s sustainable development goals and in supporting the democratic process and social rights.

With its recent proposals on the European Green Deal, the continent is leading the way in tackling environmental challenges. The commission believes that digital technologies, artificial intelligence included, are a critical enabler for attaining the Green Deal objectives. Given the increasing importance of artificial intelligence, the environmental impact of AI algorithms must be duly considered, particularly the resources consumed in training models on data and in deploying them.

A common European approach to AI is necessary to reach the objectives of responsible AI and to accelerate its market adoption. Given the widespread applications of AI in finance, pharmaceuticals, aviation, and medical devices, the proposed plans should not duplicate existing functions but instead establish close links with other EU and national competent authorities. The long-term goal is to help existing authorities monitor and oversee the activities of economic operators involving AI systems and AI-enabled products and services.
