Council of the EU adopts position on Artificial Intelligence Act

December 7, 2022

The European Commission proposed a draft AI Regulation in April 2021. The Council of the EU has now adopted its common position on the Regulation (called the Artificial Intelligence Act). Its aim is to ensure that AI systems placed on the EU market and used in the Union are safe and respect existing law on fundamental rights and Union values.

The proposal follows a risk-based approach and sets out a uniform, horizontal legal framework for AI that aims to ensure legal certainty. Its further aims are to promote investment and innovation in AI, enhance governance and effective enforcement of existing law on fundamental rights and safety, and facilitate the development of a single market for AI applications. The Council has made various amendments, which are described in this article.

  • Definition of an AI system – the Council’s text narrows down the definition to systems developed through machine learning approaches and logic- and knowledge-based approaches.
  • Prohibited AI practices – the text extends the prohibition on using AI for social scoring to private bodies. Furthermore, the provision prohibiting the use of AI systems that exploit the vulnerabilities of a specific group of people now also covers those who are vulnerable due to their social or economic situation. Regarding the prohibition of the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces by law enforcement authorities, the text clarifies the objectives for which such use is strictly necessary and for which law enforcement authorities should therefore be exceptionally allowed to use such systems.
  • Classification of AI systems as high-risk – the Council’s text adds an amendment aimed at ensuring that AI systems that are not likely to cause serious fundamental rights violations or other significant risks are not captured by the high-risk classification.
  • Requirements for high-risk AI systems – the revised text clarifies and adjusts the requirements for high-risk AI systems so that they are more technically feasible and impose less of a compliance burden. These include the requirements on data quality and on the technical documentation that SMEs should draw up to demonstrate that their high-risk AI systems comply with the requirements. Because AI systems are developed and distributed through complex value chains, the amended text includes changes clarifying the allocation of responsibilities and roles of the various organisations in those chains, especially providers and users of the AI systems. It also clarifies the relationship between responsibilities under the AI Act and responsibilities that already exist under other EU legislation, including financial services and data protection legislation.
  • General purpose AI systems – new provisions address situations where AI systems can be used for many different purposes (general purpose AI), and where general-purpose AI technology is subsequently integrated into another high-risk system. The text also provides that certain requirements for high-risk AI systems would apply to general purpose AI systems. However, a separate implementing act would specify how those requirements should be applied to general purpose AI systems, based on a consultation and detailed impact assessment and taking into account the specific characteristics of these systems and the related value chain, technical feasibility, and market and technological developments.
  • Scope and provisions relating to law enforcement authorities – the amended text explicitly excludes national security, defence, and military purposes from the scope of the AI Act. It also excludes AI systems used for research and development purposes, as well as people using AI for non-professional purposes, who fall outside the Act’s scope except for the transparency obligations.
  • Several changes have been made to provisions about the use of AI systems for law enforcement purposes. These reflect the need to respect the confidentiality of sensitive operational data relating to law enforcement activities.
  • Compliance framework and AI Board – the amended text simplifies the compliance framework and the market surveillance provisions. It also substantially modifies the provisions concerning the AI Board, aiming to ensure that it has greater autonomy and to strengthen its role in the governance architecture for the AI Act. There will also be more proportionate caps on administrative fines for SMEs and start-ups.
  • Transparency provisions – the text includes several changes that increase transparency regarding the use of high-risk AI systems. Some provisions have been amended to provide that certain users of a high-risk AI system that are public entities will also be obliged to register in the EU database for high-risk AI systems. In addition, users of an emotion recognition system will be required to inform natural persons when they are being exposed to such a system. A natural or legal person may submit a complaint to the relevant market surveillance authority concerning non-compliance with the AI Act and may expect that such a complaint will be handled in line with the dedicated procedures of that authority.
  • Measures to support innovation – the provisions concerning measures in support of innovation have been substantially modified. They include changes to AI regulatory sandboxes and permit unsupervised real-world testing of AI systems, under specific conditions and safeguards. Smaller companies will benefit from some limited and clearly specified derogations.

The adoption of the general approach will allow the Council to enter negotiations with the European Parliament, once the Parliament adopts its own position, with a view to reaching an agreement on the proposed Regulation.