The EU Draft AI Regulation: what you need to know now

June 3, 2021

The EU has now published its draft Regulation on AI (“Draft Regulation”), which was promised by President von der Leyen shortly after her appointment in 2019. 

The Draft Regulation is the first of its kind: it is a comprehensive (and bold) legal framework for the regulation of AI systems; it is directly applicable in Member States (it is not, for example, a “directive”, which relies on domestic implementing legislation); and it puts the protection of EU citizens from the harm that AI systems could cause at its core.

The Draft Regulation adopts a “risk-based approach”: all AI systems are affected to some extent, and some AI uses are prohibited outright, but the main focus is on “high-risk AI systems”, which are subject to various onerous obligations.

The Draft Regulation has been referred to as the “GDPR of AI”, but there are, in reality, only a few similarities (notably, the extra-territorial reach of the Draft Regulation and the stringent financial penalties for non-compliance). It is actually more akin to a product safety regime for AI; in fact, it builds upon the EU’s “New Legislative Framework”, which was implemented in 2008 in an effort to harmonise product safety standards, conformity assessments and product certification / labelling.

The Draft Regulation will now be subject to the EU’s legislative process and is unlikely to come into force for at least a few years, during which time it is likely to be amended in various respects. It is nevertheless an important document. This article summarises its key points and suggests areas which organisations should consider in the context of AI governance more generally.

Applicability

The Draft Regulation is “multi-dimensional” in the sense that it seeks to regulate certain AI systems and, at the same time, the individuals or organisations involved with those AI systems. This is one reason why the framework is so comprehensive but also, in places, not straightforward to follow.

The Draft Regulation distinguishes between providers, users, importers and distributors of high-risk AI systems. “Providers” are made subject to the majority of the relevant obligations for high-risk AI systems, perhaps reflecting the EU’s approach to product safety more generally, where manufacturers bear the principal regulatory burden. However – and this could become a key provision – Article 28 of the Draft Regulation states that users, importers and distributors will be deemed to be providers where they: (i) place a high-risk AI system on the market, or put it into service, under their own name or trademark, (ii) modify the intended purpose of a high-risk AI system that is already on the market, or (iii) make a “substantial modification” to a high-risk AI system. Users of high-risk AI systems will need to consider this provision carefully, given the implications it could have for them.

The Draft Regulation also has a wide application in terms of its territorial and substantive scope:  

  • Like some other EU laws, the Draft Regulation is extra-territorial. Article 2(1) states that the Draft Regulation will apply to: (i) providers placing on the market or putting into service AI systems in the EU, irrespective of whether those providers are established in the EU or not, and (ii) providers or users of AI systems based in the EU or, if not based in the EU, where the “output produced by the system is used” in the EU. Limb (ii) is open to interpretation – it will not always be clear what an AI system’s “output” is, or where that output is “used” – and would benefit from clarification.
  • The core definition of “AI system”, which is fundamental to the scope of the Draft Regulation, is also (surprisingly) wide. Article 3(1) defines an AI system as:

“software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. 

Annex I contains 3 paragraphs: paragraph (a) refers to “machine learning approaches”, which is unsurprising; but paragraphs (b) and (c) refer, respectively, to “logic- and knowledge-based approaches” and “statistical approaches”. Whilst the concept of “AI” is inherently nebulous and difficult to define, these latter approaches are likely to catch software which is not conventionally considered to be AI and which could actually be relatively unsophisticated. By including this wording in Annex I, the EU might be trying to “future-proof” the Regulation against further technological development in AI. However, in doing so, it might be over-reaching and creating a hidden risk for many organisations. For example, a financial institution using AI in consumer-facing products is unlikely to be subject to the Draft Regulation, but if it uses a “logic-based approach” in its recruitment processes (recruitment being categorised as high-risk under the Draft Regulation), it could be subject to onerous requirements in relation to that system.
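To illustrate how far the Annex I wording could stretch, consider the following deliberately simple, hypothetical rule-based screening script (the rules, names and thresholds are invented for illustration). It involves no machine learning at all, yet it is arguably software developed with a “logic- and knowledge-based approach” that generates “recommendations” influencing its environment and, used in recruitment, it would sit in a high-risk category under Annex III:

    # Hypothetical rule-based CV screening - purely illustrative.
    # No machine learning involved: just hand-written "if" rules
    # (a "logic- and knowledge-based approach" in Annex I terms).

    def screen_candidate(years_experience: int,
                         has_degree: bool,
                         notice_period_weeks: int) -> str:
        """Return a recommendation for a job applicant."""
        if years_experience >= 5 and has_degree:
            return "invite to interview"
        if years_experience >= 3 and notice_period_weeks <= 4:
            return "invite to phone screen"
        return "reject"

    # The output is a "recommendation" influencing a hiring decision -
    # arguably enough to bring even this unsophisticated script within
    # the Draft Regulation's definition of an "AI system".
    print(screen_candidate(6, True, 8))   # -> invite to interview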

AI systems: categories of risk 

The “risk-based approach” in the Draft Regulation is evident from the 3 categories of AI systems which are subject to differing requirements:

  1. Prohibited AI systems: 3 types of AI systems are prohibited: (i) AI systems that deploy subliminal techniques to distort a person’s behaviour, or that exploit the vulnerabilities of specific groups, in a manner that causes or is likely to cause harm, (ii) “social scoring” systems used by public authorities to evaluate or classify the trustworthiness of individuals, and (iii) real-time remote biometric identification systems (e.g. facial recognition) used in publicly accessible spaces for law enforcement purposes (unless certain conditions are met).
  2. High-risk AI systems: these are: (i) AI systems which are themselves products, or form a safety component of products, covered by the EU legislation listed in Annex II of the Draft Regulation (e.g. medical devices), or (ii) the AI systems listed in Annex III, which includes: biometric identification systems, AI used in the management of critical infrastructure (e.g. public utilities), AI used to determine access to educational institutions, AI used in recruitment processes or in employment decisions, and AI used in various “public law” contexts (e.g. law enforcement, the administration of justice and migration, asylum and border control).
  3. Other AI systems: all other AI systems are subject to only minimal requirements; notably, where an AI system is intended to interact with individuals, providers must ensure that those individuals are made aware that they are interacting with an AI system. Member States are also encouraged to promulgate voluntary codes of conduct for these AI systems, but that might be wishful thinking on the part of the EU (at least in the short term).

Onerous requirements for high-risk AI systems

The main focus of the Draft Regulation is on high-risk AI systems. Practically speaking, therefore, if an organisation does not use or provide a high-risk AI system, and is not involved in any prohibited AI systems, then the Draft Regulation will have a limited impact on it. However, there may be hidden risks through, for example, the expansive definition of “AI system” and the list of EU laws in Annex II, which potentially widen the scope of AI systems which may be considered as “high-risk” under the Draft Regulation.  

Providers of high-risk AI systems are subject to extensive and onerous requirements, including in relation to: the datasets used to train and test the AI system, record-keeping, transparency, human oversight, accuracy and cybersecurity. Furthermore, these providers are obliged to put in place a risk management system and a quality management system to reduce the likelihood of any harm being caused by the AI system once it is deployed. There are also post-market monitoring obligations and a requirement to report “serious incidents”.

Conformity assessments for high-risk AI systems

In order to determine whether or not high-risk AI systems meet the requirements noted above, the Draft Regulation lays down a conformity assessment procedure. This will be a familiar regime for those with experience of the EU’s “New Legislative Framework” – in a similar way to that framework, high-risk AI systems must be assessed for conformity with the relevant requirements and then given a CE marking accordingly.

This is where the Draft Regulation becomes particularly complex, because it intersects with the existing New Legislative Framework. For example, Annex II contains two sets of EU laws relating to products which may already require conformity assessments. It appears that AI systems which constitute products, or components of products, covered by the Section A laws in Annex II will be assessed for conformity with the AI Regulation as part of the existing conformity assessments for those products. For the products and components covered by the Section B laws, by contrast, the Draft Regulation appears to envisage further rules about the conformity assessment procedure.

The conformity assessment may need to be approved by a third party (a “notified body”) using the Annex VII procedure, or it may be undertaken by the provider itself under the Annex VI procedure. Which procedure applies may also depend on whether there are any harmonised standards or common specifications in place for the relevant AI system.
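On one reading of the draft, that routing can be roughly sketched as follows. This is a simplification for illustration only (the hypothetical function and its inputs are invented), and the actual rules in Article 43 and Annexes VI and VII are considerably more nuanced:

    # Rough, illustrative sketch only: the Draft Regulation's actual
    # routing rules (Article 43, Annexes VI and VII) are more nuanced.

    def conformity_route(annex_ii_section_a: bool,
                         biometric_system: bool,
                         harmonised_standards_applied: bool) -> str:
        if annex_ii_section_a:
            # Assessed as part of the existing sectoral conformity
            # assessment for the product (e.g. medical devices)
            return "existing product conformity assessment (Section A)"
        if biometric_system and not harmonised_standards_applied:
            # Third-party assessment by a notified body
            return "notified body assessment (Annex VII procedure)"
        # Otherwise the provider assesses conformity itself
        return "provider self-assessment (Annex VI procedure)"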

This part of the Draft Regulation is not straightforward to navigate and will need further explanation and clarification by the EU in due course. 

Financial penalties

Perhaps the most striking feature of the Draft Regulation, demonstrating the EU’s commitment to the regulation of AI, is the value of the financial penalties that could be imposed for infringement (particularly on “companies”).

A company can be fined up to 4% of its worldwide annual turnover (or, if higher, €20 million) for non-compliance with the Draft Regulation, increasing to 6% of worldwide annual turnover (or, if higher, €30 million) where the non-compliance relates to the prohibited AI uses or the data-related obligations for high-risk AI systems.
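To put those caps in perspective, here is a minimal sketch of the arithmetic (the turnover figure is invented for illustration; the cap is the higher of the fixed amount and the percentage of turnover):

    # Illustrative arithmetic only: maximum fine caps for companies
    # under Article 71 of the Draft Regulation.

    def max_fine(turnover_eur: float, serious: bool = False) -> float:
        # "serious" = prohibited AI uses or data-related obligations
        fixed, pct = (30_000_000, 0.06) if serious else (20_000_000, 0.04)
        return max(fixed, pct * turnover_eur)

    # A company with (say) EUR 5 billion worldwide annual turnover:
    print(f"{max_fine(5e9):,.0f}")                 # 200,000,000
    print(f"{max_fine(5e9, serious=True):,.0f}")   # 300,000,000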

Framework for monitoring and compliance

The Draft Regulation addresses, albeit not in detail, how the legal framework it lays down will be monitored and enforced. 

In summary, each Member State will need to establish or designate a “national competent authority” to apply the Regulation and also a “national supervisory authority” to act as the notifying authority and market surveillance authority. At an EU level, there will be a “European Artificial Intelligence Board” to assist the European Commission and national supervisory authorities to ensure a consistent application of the Regulation.

Grace period

The final provision of the Draft Regulation, Article 85, contains an important clarification – the Regulation, once it comes into force, will have a grace period (currently 24 months) for organisations to ensure compliance. 

What next?

The Draft Regulation is currently open for feedback and it will then go through the EU’s legislative process, during which it is likely to be amended. As noted above, it may be a few years before it actually comes into force and there will then also be a grace period. 

Nevertheless, the EU has given a clear indication as to the direction of travel for AI regulation and, particularly in light of the onerous requirements laid down in the Draft Regulation and the potential fines for non-compliance, organisations should start to consider what, if anything, they should do now to prepare. One point to consider is how to deal with AI systems that are being developed or procured now and which may still be in use in a few years’ time. Under the Draft Regulation, users and providers will need detailed technical information about those AI systems – including how they were developed and trained – so it would be prudent to start gathering that information now.

The UK position

The Regulation will not apply to the UK. This leaves the UK with various options as to whether and how it introduces its own AI regulation. The UK is due to publish its AI strategy later in 2021, and that may give clues as to the UK’s position on this point.

In any case, given the extra-territorial effect of the EU’s Draft Regulation, a significant number of UK-based users and providers of AI (particularly those engaged in cross-border commercial activities) are likely to be subject to the Draft Regulation anyway.


Minesh Tanna is a solicitor-advocate at Simmons & Simmons, with a focus on telecommunications, media and technology (TMT) matters. He is also the firm’s AI Lead and regularly advises clients on AI-related legal issues. Minesh has recently been appointed as Chair of the SCL’s new AI Group.