Liability for AI: European Commission publishes Experts' report

Report looks at liability for AI and other digital technologies.

The European Commission has published a report on liability for AI and other digital technologies. It says that artificial intelligence and other emerging digital technologies, such as the Internet of Things or distributed ledger technologies, have the potential to be transformational. However, sufficient safeguards are also needed to minimise the risk of harm, such as bodily injury, that these technologies may cause.

There are product safety regulations in force in the EU. However, these cannot completely exclude the possibility of damage resulting from the operation of AI and other digital technologies. If damage does result, victims will seek compensation. They typically do so on the basis of liability regimes under private law, in particular tort law, perhaps in combination with insurance. Only the strict liability of producers for defective products, which makes up just a small part of this kind of liability regime, is harmonised at EU level by the Product Liability Directive. All other regimes, apart from some exceptions in specific sectors or under special legislation, are regulated by the member states themselves.

The report assesses existing liability regimes and concludes that the ones in force in the member states ensure at least basic protection of victims whose damage is caused by new technologies. However, the specific characteristics of these technologies and their applications – including complexity, modification through updates or self-learning during operation, limited predictability, and vulnerability to cybersecurity threats – may make it more difficult to offer these victims a claim for compensation in all cases where this seems justified. It may also be the case that the allocation of liability is unfair or inefficient. To rectify this, certain adjustments need to be made to EU and national liability regimes. 

The key findings of the report are detailed below:

  • A person operating a permissible technology that nevertheless carries an increased risk of harm to others, for example AI-driven robots in public spaces, should be subject to strict liability for damage resulting from its operation. If a service provider ensuring the necessary technical framework has a higher degree of control than the owner or user of an actual product or service equipped with AI, this should be taken into account in determining who primarily operates the technology. 
  • A person using a technology that does not pose an increased risk of harm to others should still be required to abide by duties to properly select, operate, monitor and maintain the technology in use and – failing that – should be liable for breach of such duties if at fault. 
  • A person using a technology which has a certain degree of autonomy should not be less accountable for ensuing harm than if said harm had been caused by a human auxiliary. 
  • Manufacturers of products or digital content incorporating emerging digital technology should be liable for damage caused by defects in their products, even if the defect was caused by changes made to the product under the producer’s control after it had been placed on the market. 
  • For situations exposing third parties to an increased risk of harm, compulsory liability insurance could give victims better access to compensation and protect potential tortfeasors against the risk of liability. 
  • Where a particular technology increases the difficulties of proving the existence of an element of liability beyond what can be reasonably expected, victims should be entitled to facilitation of proof. 
  • Emerging digital technologies should come with logging features, where appropriate in the circumstances, and failure to log, or to provide reasonable access to logged data, should result in a reversal of the burden of proof so that it does not operate to the detriment of the victim.
  • The destruction of the victim’s data should be regarded as damage, compensable under specific conditions. 
  • It is not necessary to give devices or autonomous systems a legal personality, as the harm these may cause can and should be attributable to existing persons or bodies.

Published: 3 December 2019