Artificial Intelligence: Who’s to blame?

Lynn Richmond looks at the way we apportion blame in law and the need to reconsider the relevant law in the light of the development of AI. She goes on to bring the issue close to home by considering the liability of lawyers in the future AI-influenced practice of law.

One of the fundamental concerns in relation to AI is the absence of a body of law which applies specifically to it. This is hardly a novel problem - since before the industrial revolution, the law has had to develop and adapt to deal with new technology. However, never before have we been faced with a situation where a particular form of technology may (one day) be truly autonomous.

While AI has not yet developed its own body of law, it does not sit in a complete vacuum: many existing areas of law have an impact on AI, most notably intellectual property, which is relevant for identifying and regulating the ownership of AI and its output. There is, however, no overarching legal framework which applies specifically to AI and, in particular, none which regulates who has responsibility and liability for it.

The autonomy and inscrutability of more advanced AI raise liability issues. When mistakes are made, who has ultimate liability? Where AI is employed to mimic a process carried out by a human, fault is likely to be easier to identify: the party who has provided the wrong data, the wrong pre-determined result or the wrong process is likely to be liable. But when AI writes its own code and applies its own algorithms, who is at fault then? This is a question which will affect every sector and area of business.

The possibility of holding AI itself liable seems to be out of the question. AI does not have legal personality and, from a natural justice perspective, it would seem perverse to provide that an entity which has no assets and no capacity for punishment is responsible for acts or omissions that cause damage to third parties.

That then brings us back to the issue of liability for the owners or controllers of the AI. Identifying the responsible party may not be an issue where the result is effectively pre-determined by an entity with legal personality, but where machine learning is involved this is often not the case.

Given the current state of the law, it is difficult to envisage how liability would be attributed in litigation where machine learning produces an unintended and unexpected consequence. How is causation demonstrated in a situation where no one could have anticipated the results? Will it simply be the case that no liability arises, or will the law have to adapt to deal with these situations? If the former, it is imperative that those potentially affected by AI are made aware that AI is being used and that they effectively agree to waive any claims insofar as its use is concerned. It seems highly unlikely that consumers or businesses would agree to that.

An alternative is simply to accept that if AI is used, the possible consequences will be potentially very far-reaching and, while not foreseeable to those employing the AI, are accepted as a risk of using the technology. In this scenario, insurance is likely to play a key role in meeting the costs of claims arising out of the use of artificial intelligence. This is certainly the path envisaged in the government proposals for the regulation of automated vehicles, with the insurer assuming liability for any accident caused by the driverless vehicle. Where no insurance is in place, the owner of the vehicle will be liable. Interestingly, the Vehicle Technology and Aviation Bill also allows the insurer to exclude liability for failures to update software and operating systems. This adds another layer of complexity to the problem, where a failure to run updates, or the installation of defective updates, may amount to an act which breaks the chain of causation.

There has been some discussion as to whether the application of vicarious liability is appropriate in the context of AI. While that proposition has a certain attraction (the human creator would have ultimate responsibility for their AI creation), the concept of vicarious liability as we currently understand it is ill-suited: even if AI has a certain degree of autonomy, it would still need separate legal personality before true vicarious liability could apply. Agency would fall short on the same principles.

Perhaps a model of liability based on the Consumer Protection Act 1987 would be appropriate, whereby the producer or the supplier may be found liable depending on the circumstances. Were such a model adopted, the end-user would have some comfort that a remedy may be pursued against the supplier if it proves impossible to identify the ‘producer’ or creator of the technology. This model would address some of the potential difficulties a claimant would face where several different parties are involved in the creation of the AI technology. From a public policy perspective, this is an attractive solution. However, supplier liability is generally viewed as a remedy of last resort, and any legislation regulating liability in this area must be sufficiently robust to avoid a situation where parties using AI to deliver a service simply offer up all the other parties involved without providing any greater clarity for the consumer.

One outcome which does seem certain is the increased use of contractual frameworks to regulate liability where AI is used. A growth in insurance cover and claims also seems likely and it is inevitable that parties using and developing AI will try to make provision for liability and responsibility for damages claims and potential fines levied as a result of the use of AI.

AI, Liability and Lawyers

Lawyers are not well known for embracing modern technology. We are seen as a staid profession that has not quite caught up with modern ways of working, and our obsession with detail is often perceived as an obstacle to business rather than a facilitator. How many lawyers have inwardly groaned when a client has insisted that a contract fit on one side of A4, when most of the contracts we deal with struggle to fit even the defined terms onto the first page? Despite this, artificial intelligence is now making inroads into the legal profession, particularly in litigation, in disputes ranging from road traffic accidents to PPI claims. But are lawyers ready for these changes and, more importantly, is the law?

Liability for artificial intelligence is of particular relevance to lawyers, not only because lawyers will no doubt be involved in more and more disputes regarding the liability of AI for loss and damage, but also because of the increasing use of technology in the legal sector.

In dealing with negligence claims, one of the first tasks of any lawyer is to identify the responsible party, whether that party is a natural or corporate body. The difficulty with AI is that it has no legal personality and cannot be held accountable for legal wrongs.

More basic forms of AI should be easier to regulate. Where AI is designed to reach a pre-determined outcome, it is easier to conceive that identifying the “responsible party” will simply be a case of looking to the users, owners or programmers depending on the particular facts of the case. But as we move to a form of AI that is truly autonomous, does that need to change?

While the use of AI can certainly assist lawyers in processing claims, the real use of AI technology at this stage tends to be limited to just that – a process. AI can be used to predict certain outcomes based on a set of facts as presented, but many current technologies would struggle with anything more complex: identifying the court which has jurisdiction based on anything more than the address of the defender, working out whether a claim may be time-barred, or applying recent changes in the law, particularly case law which may affect the prospects of success. One of the potential problems with machine learning is that the program learns from the previous problems it has processed, so the risk of providing “negligent” advice during that learning process seems relatively high. Perhaps no higher than engaging a human lawyer, some would argue, but the scope for using machine learning in situations which turn entirely on their own facts and circumstances seems some way off. For now the old maxim seems to apply – garbage in, garbage out.
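To illustrate just how limited that kind of “process” automation is, the sketch below shows a purely hypothetical rule-based time-bar check in Python. The five-year period, the function name and the hard-coded logic are assumptions made for the example only; the point is that the rule knows nothing its programmer did not encode, so a change in the case law simply never reaches the output.

```python
from datetime import date

# Purely hypothetical sketch of a rule-based "time-bar" check. The five-year
# period loosely echoes a short prescriptive period, but the figure, the
# function and the whole structure are illustrative assumptions, not any real
# product or legal advice.
PRESCRIPTIVE_PERIOD_YEARS = 5

def is_time_barred(loss_date: date, raised_date: date) -> bool:
    """Return True if the claim appears out of time under the coded rule."""
    # Naive year arithmetic; a real system would need to handle 29 February
    # and, more importantly, decide when the clock actually starts running.
    deadline = loss_date.replace(year=loss_date.year + PRESCRIPTIVE_PERIOD_YEARS)
    return raised_date > deadline

# The rule only knows what its programmer encoded. If a recent decision changes
# when time starts to run (date of knowledge rather than date of loss, say),
# the function keeps returning confident, and now wrong, answers.
print(is_time_barred(date(2012, 6, 1), date(2018, 1, 15)))  # True under the coded rule
```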

If court proceedings are unsuccessful because a solicitor failed to take into account a recent precedent which has subtly, but importantly, changed the law, that solicitor will more than likely be held to account for that omission. But when the merits of a case are assessed by AI, how does a recent change in case law factor into any liability for a decision made by the AI? A different type of learning will need to be undertaken by the system before the correct legal tests are applied.

Even if court proceedings are successful, consider the scenario where the client has incurred costs of, say, £5,000 in successfully pursuing a claim for £500. The rate of recovery of judicial expenses means that the client is still substantially out of pocket. In order to replicate the task of the solicitor, AI must operate in such a way that it considers not only the merits of the claim itself but also the wider economic consequences. It is certainly conceivable that AI will be able to factor in variables such as expenses in litigation, but monitoring the cost/benefit analysis involved in a litigation is an ongoing process. What looks like a worthwhile investment at the outset of a case may look like a very different prospect two years down the line, when substantial costs have already been incurred.
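To make the arithmetic concrete, here is a minimal sketch, again in Python, of the kind of cost/benefit check such a system would have to keep re-running as a case progresses. The flat recovery rate, the prospects figure and the function itself are illustrative assumptions, not a statement of how judicial expenses are actually assessed or recovered.

```python
# Purely illustrative sketch of an ongoing litigation cost/benefit check.
# The 40% expense recovery rate, the 70% prospects of success and the function
# itself are assumptions made for the example only.

def expected_net_position(claim_value: float,
                          costs_to_date: float,
                          estimated_future_costs: float,
                          expense_recovery_rate: float = 0.4,
                          prospects_of_success: float = 0.7) -> float:
    """Crude expected net position (in pounds) if the case runs to a decision."""
    total_costs = costs_to_date + estimated_future_costs
    outcome_if_won = claim_value + expense_recovery_rate * total_costs - total_costs
    outcome_if_lost = -total_costs  # ignores any liability for the other side's expenses
    return (prospects_of_success * outcome_if_won
            + (1 - prospects_of_success) * outcome_if_lost)

# The article's example: a £500 claim with £5,000 of costs already incurred.
print(expected_net_position(claim_value=500, costs_to_date=5000,
                            estimated_future_costs=0))   # deeply negative
# Run at the outset, with no costs yet incurred and modest estimated future
# costs, the same check can look perfectly sensible - which is why the analysis
# has to be repeated throughout the life of the case, not performed once.
```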

It seems very unlikely that the legal profession’s regulators would absolve a solicitor from blame where they have failed to give the standard warnings about the perils of litigation. But where AI is used in processing a claim, are solicitors now also under a duty to advise of the risks of using AI as a basis for informing or making decisions?

Conclusion

Artificial Intelligence tends to polarise opinion, with many seeing it as a panacea and others as a source of deep-rooted concern, or a form of technology that will never really catch on to the extent some claim. The truth more likely lies somewhere in the middle. While AI does not yet have the freedom of thought that one might associate with the novels of Philip K Dick, the extent to which AI, in some form, already plays a part in our everyday lives is striking.

The Privacy and Electronic Communications Regulations 2003 are due to be replaced over the coming year. Technology has developed apace since 2003 and an update is well overdue to reflect modern digital interaction. This aptly demonstrates the long lead-in time that is usually required for the legal system to address new challenges. On that basis a review of the law in relation to AI would be welcome sooner rather than later.

Lynn Richmond is an Associate at BTO Solicitors, Edinburgh


Published: 8 August 2018
