Innovation v Regulation – how might the EU’s AI Act affect the UK’s own AI legislation?

February 3, 2023

On 21 April 2021, the EU Commission published the first ever legal framework on AI. The Artificial Intelligence Act (“the Act”) was created both to address the apparent risks AI may pose and to take a global lead on AI innovation across the EU bloc. Following the publication of its National AI Strategy in September 2021, the UK Government set out its approach to regulating AI in an interim policy paper on 18 July 2022. That policy paper was released ahead of the white paper, which was originally intended to be published by the end of 2022. Fast forward to 2023 and it is clear that – legally – not all roads lead to Rome (or Brussels for that matter) since the UK’s departure from the EU. This article reflects on the critique of the Act so far and on how the Act differs from the UK’s current policy approach.

Flash points of the EU AI Act

In the wake of Open Loop’s recent report, Operationalizing the Requirements for AI Systems Part I, lawmakers at home and abroad have been able to digest the private tech sector’s response to the Act and consider whether there is a viable alternative to what is a substantial legislative draft.

Taxonomy of AI Actors (Article 3)

Article 3 sets out the definitions used in the Act and, in general, the businesses surveyed by Open Loop found the descriptions of the various AI actors to be ‘clear on paper’. Two of the key definitions examined were ‘user’ and ‘provider’. However, when seeking to apply those definitions to a supply chain involving an AI system, it was found that:

‘in reality the roles of users and providers are not as distinct as the Act presupposes, particularly in the context of the dynamic and intertwined relationships and practices that occur between the various actors involved in the development, deployment, and monitoring of AI systems.’

The survey participants highlighted the concern that the lines between different AI actors are sometimes blurred; for example, a user can also be a provider and vice versa, and there could be difficulty in differentiating liability as between users and end users. A user might provide a service to an end user (‘end user’ is not defined under the Act), whereas the provider is the party that placed the product on the market in the first place.

This led the Open Loop report to conclude that it ‘raises questions as to who should be held responsible for the requirements in the Act and who is responsible when these requirements are not met’. Of course, it is open to the parties to a contract to apportion liability in respect of these three roles and/or definitions. However, that approach raises further questions, such as whether the regulator is able to overrule a B2B contract in circumstances where, on the facts, liability clearly falls on one party over the other. While it is rarely this straightforward, it might be that a party assumes responsibility for end users even if both contracting parties can potentially be defined as user and provider.

At the moment it is still unclear who the Act would hold responsible for compliance. The taxonomy of AI actors in Article 3 will need to be revised to one extent or another and should, at the very least, describe more accurately the possible interactions between actors, especially where AI systems are co-created and open-source tooling is used.

Managing risk (Article 9)

One of the key features of Article 9 is that it stipulates that a risk management system should assess “reasonably foreseeable misuse”. While feedback so far has indicated that parties are willing to manage risks, even where their systems are not classified as ‘high risk’ under the Act, the Open Loop report remarked that ‘it was difficult for [participants] to predict and anticipate how users or third parties would use their AI systems.’

The report recommends that guidance on risk and risk assessment would be useful, including on what the format of a risk assessment might look like. There is also the practical question of how far the ‘testing procedures’ under Article 9 need to go. Could a party be absolved of liability (in full or in part) if the testing of a high-risk AI system is sufficiently thorough, even if that process was unable to reasonably foresee (with the benefit of hindsight) certain risk-related outcomes? In terms of the risks to natural persons, the more severe risks could include reputational damage, exclusion and discrimination.

While guidance on managing risk will go some way to allaying concerns about potential misuse of an AI system, there remain unanswered questions as to what that guidance may look like; how it might evolve, given the plethora of AI applications already in existence; and whether it carries any judicial weight. That said, it is positive that the feedback received so far signals the willingness of businesses across Europe to take direction on this issue.

Data quality and technical documentation (Articles 10 and 11)

One of the main criticisms of the Act is that it is perhaps too prescriptive with respect to data quality, and could therefore be difficult to implement. Article 10(3) states that ‘training, validation and testing data sets shall be relevant, representative, and to the best extent possible, free of errors and complete‘ [emphasis added]. And if a business wishes to place a high-risk AI system on the market, it will need to compile the required technical documentation prior to release and ensure it is kept up to date (Article 11(1)).

This “best effort” approach introduced by the European Parliament has been more positively received by the business community. However, just as with risk management, the importance of guidance to assist with the implementation of these requirements cannot be overstated. One of the notable comments from the Open Loop report made clear that ‘without further guidance, clear and objective methods, and metrics for establishing compliance with these data quality requirements, this provision in the AIA [the Act] is seen as impractical.’

It is worth emphasising that Articles 10 and 11 apply to high-risk AI systems, such as AI applications in robot-assisted surgery, critical infrastructures (e.g. transport), and employment tools (e.g. CV-sorting software for recruitment procedures). So while the underlying reasoning for having stringent data quality and documentation requirements is sound, industry will have to wait for guidance from the regulator and subordinate legislation to assist with compliance.

Transparency and human oversight (Articles 13 and 14)

It is clear that those drafting the Act intended there to be an accountable and transparent framework in place for high-risk systems, and for there to be adequate tools to enable human oversight. However, some lines are blurred when Articles 13 and 14 are read together.

For example, Article 13(1) provides that high-risk systems should be sufficiently transparent so that both users and the provider can comply with their obligations. Article 14(4) further states that ‘the individuals responsible for human oversight’ should be placed in a position to exercise that oversight. But these roles are not yet clear, which raises certain questions. Can a provider facilitate human oversight whilst also being a user of that very system? And if so, is this achievable in practice by certain users, such as doctors, engineers and human resources professionals, or will it require dedicated personnel in the form of AI risk managers?

In line with the survey feedback from Open Loop, it could be that different types of available information will allow different individuals to carry out these responsibilities effectively. There is also consensus that differentiating between the various actors and their responsibilities will assist. However, there is a clear balancing act between transparency, the performance of an AI system, and ensuring individuals comprehend their role(s) within an AI lifecycle. Clearly more information and guidance is needed as to how this might work in practice.

How might the Act affect the UK’s approach to regulating AI?

The UK Government’s interim policy paper, titled ‘Establishing a pro-innovation approach to regulating AI‘, set out a different path for the regulation of AI. The current UK approach is that AI development should not at present be constrained by fixed legislation, and so should be free to operate within a less restrictive legal landscape than that on the continent. The thinking is that regulation may stifle innovation in the tech sector, although the paper reserves the right to legislate, stating the position as: ‘While we currently do not see a need for legislation at this stage, we cannot rule out that legislation may be required as part of making sure our regulators are able to implement the framework.’

Since its introduction, it seems that the AI Act may not affect the UK’s approach as much as we might have first thought. The UK is currently looking to implement a decentralised, sector-specific approach, with existing regulators (as opposed to a newly created single regulator) being asked to issue guidance and put in place regulatory requirements that apply to the businesses they regulate. The policy paper identified the Information Commissioner’s Office (ICO), Competition and Markets Authority (CMA), Ofcom, Medicines and Healthcare products Regulatory Agency (MHRA), and Equality and Human Rights Commission (EHRC) as the key regulators in its new regime. The UK approach also differs from the EU’s central list of prohibited or high-risk use cases. The UK Government does not seek to ban specific uses of AI but will leave it to the above regulators to decide whether the use of AI in specific scenarios should not be allowed or should be subject to higher regulatory burdens.

To conclude, it seems that the EU and UK AI regulation regimes are on divergent paths. Innovation versus regulation. Pro-active versus reactive. That said, given the potential certainty provided by an EU legislative framework, and the fact that many UK businesses already seek to comply with EU regulations so that they may trade with the bloc, innovation in AI could yet be very much on the EU’s terms. The tech sector may also come to favour the relative certainty of a regulatory framework over the ‘principled’ approach proposed in the UK. Time will tell the extent to which the UK will ultimately seek to depart from the EU’s approach in its proposed AI regulation.

Jacob Gatley

Jacob Gatley, Solicitor, BDB Pitmans and a member of the Society for Computers and Law

Sources:

Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM/2021/206 final), published 21 April 2021.

Artificial Intelligence Act: A Policy Prototyping Experiment – Operationalizing the Requirements for AI Systems – Part I, published November 2022.