European Commission proposes a risk-based approach to regulate use of AI

April 20, 2021

The European Commission has proposed a new regulation aiming to turn Europe into the global hub for trustworthy AI. It hopes that this will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU. It has also proposed complementary rules on machinery, adapting existing safety rules with the aim of increasing users’ trust in new products.

The aim will be for European citizens to trust what AI has to offer. The Commission intends that proportionate and flexible rules will address the specific risks posed by AI systems and set high standards. In addition, a Coordinated Plan outlines policy changes and investment at member state level to develop human-centric, sustainable, secure, inclusive and trustworthy AI.

The new regulation will apply directly and in the same way across all member states, based on what the Commission says will be a future-proof definition of AI. The rules follow a risk-based approach with four tiers: unacceptable, high, limited and minimal risk.

Unacceptable risk

AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (eg toys using voice assistance to encourage dangerous behaviour in minors) and systems that allow ‘social scoring’ by governments.

High risk

AI systems identified as high-risk include AI technology used in:

  • Critical infrastructures (eg transport), that could put the life and health of citizens at risk;
  • Educational or vocational training, that may determine access to education and the professional course of someone’s life (eg scoring of exams);
  • Safety components of products (eg AI application in robot-assisted surgery);
  • Employment, workers management and access to self-employment (eg CV-sorting software for recruitment procedures);
  • Essential private and public services (eg credit scoring denying citizens the opportunity to obtain a loan);
  • Law enforcement that may interfere with people’s fundamental rights (eg evaluation of the reliability of evidence);
  • Migration, asylum and border control management (eg verification of authenticity of travel documents);
  • Administration of justice and democratic processes (eg applying the law to a concrete set of facts).

High-risk AI systems will be subject to strict obligations before they can be put on the market, including:

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • Clear and adequate information to the user;
  • Appropriate human oversight measures to minimise risk; and
  • High level of robustness, security and accuracy.

In particular, all remote biometric identification systems are considered high risk and will be subject to strict requirements. In principle, their live use in publicly accessible spaces for law enforcement purposes will be prohibited. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence). Such use will be subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

Limited risk

AI systems considered limited risk will be subject to specific transparency obligations. For example, when using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.

Minimal risk

The draft Regulation allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category, as they represent only minimal or no risk to people’s rights or safety.

Governance

In terms of governance, the Commission proposes that national competent market surveillance authorities supervise the new rules, while the creation of a European Artificial Intelligence Board will facilitate their implementation, as well as drive the development of standards for AI. Additionally, voluntary codes of conduct are planned for non-high-risk AI, as well as regulatory sandboxes to facilitate responsible innovation.

Coordinated Plan

The Coordinated Plan with member states aims to do the following:

  • Create enabling conditions for AI’s development and uptake through the exchange of policy insights, data sharing and investment in critical computing capacities;
  • Foster AI excellence ‘from the lab to the market’ by setting up a public-private partnership, building and mobilising research, development and innovation capacities, and making testing and experimentation facilities as well as digital innovation hubs available to SMEs and public administrations;
  • Ensure that AI works for people and is a force for good in society by being at the forefront of the development and deployment of trustworthy AI, nurturing talents and skills by supporting traineeships, doctoral networks and postdoctoral fellowships in digital areas, integrating trust into AI policies and promoting the European vision of sustainable and trustworthy AI globally; and
  • Build strategic leadership in high-impact sectors and technologies, including the environment (focusing on AI’s contribution to sustainable production), health (expanding the cross-border exchange of information), as well as the public sector, mobility, home affairs, agriculture and robotics.

New machinery products

Machinery products cover an extensive range of consumer and professional products, including robots, lawnmowers, 3D printers, construction machines and industrial production lines. A new Machinery Regulation will aim to ensure that the new generation of machinery is safe for users and consumers while encouraging innovation. Whereas the AI Regulation will address the safety risks of AI systems, the new Machinery Regulation will ensure the safe integration of AI systems into the overall machinery. Businesses will need to perform only a single conformity assessment.

Additionally, the new Machinery Regulation aims to respond to market needs by bringing greater legal clarity to the current provisions, reducing the administrative burden and costs for companies by allowing digital formats for documentation and adapting conformity assessment fees for SMEs, while ensuring coherence with the EU legislative framework for products.

Next steps

The European Parliament and the member states will need to adopt the Commission’s proposals through the ordinary legislative procedure. Once adopted, the Regulations will be directly applicable across the EU. In parallel, the Commission will continue to collaborate with member states to implement the actions announced in the Coordinated Plan.

The legal framework will apply to both public and private organisations inside and outside the EU, as long as the AI system is placed on the EU market or its use affects people located in the EU. It will therefore be relevant to parties in the UK doing business in the EU.