Council of the EU agrees position to streamline rules on AI

March 19, 2026

The Council of the EU has agreed its position on the proposal to streamline rules on AI, as have two European Parliament committees.

The proposal forms part of the so-called “Omnibus VII” legislative package in the EU’s simplification agenda. The package includes proposals for two regulations aiming to simplify the EU’s digital legislative framework and the implementation of harmonised rules on AI.

The European Commission proposed to adjust the timeline for applying rules on high-risk AI systems by up to 16 months, so that the rules start to apply once the Commission confirms the needed standards and tools are available. The Commission also proposed extending certain regulatory exemptions for SMEs to small mid-caps (SMCs), reducing certain requirements in a very limited number of cases, allowing more processing of sensitive personal data for bias detection and mitigation, reinforcing the AI Office’s powers and reducing the fragmentation of AI governance.

Main amendments introduced by the Council

The Council broadly supports the Commission’s proposal. However, it has added a new provision in the AI Act which prohibits AI practices regarding the generation of non-consensual sexual and intimate content or child sexual abuse material. The text also introduces a fixed timeline for the delayed application of high-risk rules: the new application dates would be 2 December 2027 for stand-alone high-risk AI systems and 2 August 2028 for high-risk AI systems embedded in products.

Significantly, the Council wants to reinstate the obligation for providers to register AI systems in the EU database for high-risk systems, even where providers consider their systems to be exempted from classification as high-risk. It also reinstates the standard of strict necessity for the processing of special categories of personal data for the purpose of ensuring bias detection and correction.

In addition to these changes, the text postpones the deadline for the establishment of AI regulatory sandboxes by competent authorities at national level until 2 December 2027. It also clarifies the AI Office’s competences for supervising AI systems based on general-purpose AI models where the model and the system are developed by the same provider, listing exceptions in which national authorities remain competent, including law enforcement, border management, judicial authorities and financial institutions.

Finally, the Council mandate adds a new obligation for the Commission to provide guidance to assist economic operators of high-risk AI systems covered by sectoral harmonisation legislation in complying with the high-risk requirements of the AI Act in a manner that minimises compliance burden.

European parliamentary committee changes

The committees would like to introduce fixed dates for application of the AI Act with the aim of ensuring predictability and legal certainty.

  • For high-risk AI systems specifically listed in the regulation (including those involving biometrics, and used in critical infrastructure, education, employment, essential services, law enforcement, justice or border management), the MEPs propose 2 December 2027.
  • For AI systems that are covered (or used as safety components in products that are covered) by EU sectoral legislation on safety and market surveillance, the MEPs propose 2 August 2028.

MEPs are also in favour of giving providers more time to comply with rules on watermarking AI-created audio, image, video or text content to indicate its origin. However, they suggest a shorter extension, until 2 November 2026 (instead of 2 February 2027 as proposed by the Commission).

The Committees also want to introduce a new ban on so-called “nudifier” systems that use AI to create or manipulate images that are sexually explicit or intimate and resemble an identifiable real person without that person’s consent. The ban would not apply to AI systems with effective safety measures preventing users from creating such images.

The MEPs are in favour of allowing service providers to process personal data to detect and correct biases in AI systems but they want safeguards to ensure this is done only when strictly necessary.

To help EU companies scale up as they outgrow SME status (where they enjoy certain support measures), the MEPs backed the proposed extension of these measures to small mid-cap enterprises (SMCs).

To prevent overlapping application of sector-specific EU product safety rules and the AI Act, the MEPs argue that obligations under the AI Act can be less stringent for products already regulated under sectoral laws (for example, medical devices, radio equipment, toy safety, and others). They say that the Commission should address possible gaps by updating those rules accordingly.

Once Parliament’s mandate is approved in plenary (vote expected on 26 March), negotiations with Council can begin.