The UK Jurisdiction Taskforce of Lawtech UK is consulting on a statement about liability for AI harms in England and Wales.
The Taskforce says that the current lack of any AI-specific liability regime in England and Wales (and elsewhere) creates legal uncertainty as to whether a person harmed by AI would have a right of recourse and, if so, against whom.
UKJT says that it is important to distinguish between those areas of true novelty (which are rare) and areas where, although the factual background may be novel, the application of well-established legal principles is reasonably straightforward. Many potential harms caused by AI fall into the latter category.
It points out that the EU AI Act is probably the most comprehensive and well-known example of AI legislation in any jurisdiction. However, although it contains extensive regulatory measures, it does not address private law liability for AI harms. The European Commission has withdrawn its proposal for a directive on AI liability specifically.
The UKJT has co-ordinated the preparation of an authoritative Legal Statement on Liability for non-deliberate AI harms under English private law and is now consulting on that paper.
The overarching question that it seeks to address is in what circumstances, and on what legal bases, English common law will impose liability for loss that results from the use of AI. The primary focus is on the liability of those who did not deliberately cause harm. English law generally has means of ensuring legal redress against deliberate wrongdoers, and the fact that AI may have been used as a tool to inflict deliberate harm is unlikely in most circumstances to affect the legal analysis. Therefore, the Legal Statement contains an analysis of the application of the law of negligence to physical and economic harms caused by AI. In addition, the Legal Statement addresses the following questions:
- Does the principle of vicarious liability apply to loss caused by AI?
- In what circumstances can a professional be liable for using or failing to use AI in the provision of their services? If AI used in the provision of professional services produces erroneous output, is the professional liable for loss resulting from the error?
- Can a person ever be liable for harms caused by use of AI where there is no fault on their part?
- Does liability attach to false statements made by an AI chatbot?
The UKJT’s approach to the legal analysis of liability for loss caused by AI starts from the premise that AI does not have legal personality in English law and that, therefore, it cannot itself be held legally responsible for physical or economic harm. Instead, liability for harms that arise from the use of AI must be attributed to legal persons, using ordinary legal principles.
In commercial situations, the question of whether a person is liable for a particular harm and, if so, which person, can often be governed by contractual agreements between parties. As between members of the AI supply chain, contract is the primary – and often sole – basis for allocating liability. The extent of liability and ability to pass losses up the chain are typically determined by warranties, indemnities, limitations and exclusions. It is in contexts where there is no relevant governing contract that the main legal issues for analysis arise.
Where there is no relevant governing contract, the question of whether a person is liable for physical or economic harm caused by use of AI turns on whether the law imposes a non-contractual duty on that person to protect against that harm. The primary (albeit not sole) legal framework governing liability for such harm is the law of negligence.
Therefore, a core part of the Legal Statement is an analysis of how the law of negligence can be applied to harms caused by AI and, in particular, how duties of care are likely to arise, what standards the courts are likely to apply to developers and users of AI, and how the courts are likely to approach causation in the context of technology that is “autonomous”. In the context of physical harm, the common law is supplemented by the Consumer Protection Act 1987, which implements the original EU Product Liability Directive and imposes “no fault” liability for defective products. The Legal Statement therefore considers the extent to which the CPA 1987 applies to AI harms.
Circumstances in which AI might be expected to cause economic harm include where a professional uses AI, or where an AI “chatbot” or equivalent produces false statements and thereby provides false information or advice. To address these areas, the Legal Statement analyses the application to AI of the law of professional negligence and the main common law principles applicable to the generation of false statements: negligent misstatement and defamation.
The consultation ends on 13 February 2026.