Robots and Law

October 6, 2010

I always find a return from holiday rather difficult. There may be a mountain of things to do but (a) I am not really in the mood, (b) others generally assume that, as you have been away for a fortnight and are refreshed, you can do a fortnight’s catch up in a day and (c) I haven’t played with the Internet much for a while. The road to post-holiday sanity can involve one being led into (i) a bit of displacement activity or (ii) general research (which is not much different from (i)). I was led astray this week by Professor Lilian Edwards (cue Frankie Howerd) and some fascinating reflections on robots and law in her {Pangloss blog: http://blogscript.blogspot.com/}. I strongly recommend that you allow yourself to be led astray there too, although you should not watch the cartoon clip on ‘Don’t Date Robots’ unless you too are in post-holiday depression.

What I know about robotics could be written in the corner of a very small postage stamp, and I could still read it without my glasses. But there is something about the mention of them that excites me. And I am by no means alone. I notice from consulting the website stats that any reference on the SCL site to concepts that we categorise as futuristic gets a lot of attention. Fernando Barrio’s piece about {Autonomous Robots and the Law: http://www.scl.org/site.aspx?i=ed1082} is still frequently referred to – no doubt the fact that the article deals with sex with robots helps increase the numbers. But it’s the whiff of science fiction, not the sex, that attracts the readers (although those old enough to remember {i}Barbarella{/i} will know that the two are by no means mutually exclusive). Andrew Katz’s piece on {Intelligent Agents and Internet Commerce in Ancient Rome: http://www.scl.org/site.aspx?i=ed1107} is still popular too and that has no sexual references that I am aware of (although a {i}peculium{/i} does sound like the sort of thing that would get a laugh when said by Frankie Howerd); it is the coverage of artificial intelligence that draws people in.

What Professor Edwards says about robots and the law really made me think. First, the glimpse of what I thought was the bleeding obvious (an ‘obvious’ point though that had hitherto entirely escaped me): ‘Robots are not subjects of the law nor are they responsible agents in ethics; but the people who make them and use them {i}are{/i}’. It follows from that perception that robots are merely products and that the laws which Professor Edwards goes on to propose are not laws for robots but laws for roboticists.

What I am still musing over is what might be a slightly convenient short-circuit in this approach. I am no longer so sure that the ‘obvious’ is true. Of course, if robots are merely products akin to washing machines then we have plenty of laws in place to deal with them and the roboticists. OK, you might get problems when the robot ‘outlives’ the supplier and designer and then goes wrong, but that happens with washing machines too. However, as is generally acknowledged, what marks out a robot is the capacity to adapt and learn, and one of the frequent characteristics of robot use is interaction with humans at a level beyond that envisaged by even the most ambitious washing machine. My instinct, possibly over-stimulated by science fiction, is that those two elements are enough to make robots genuinely different and that we will need more than product liability to cope. If we only allow the design and production of robots where we can be sure of what they will do in any situation, will we remove a lot of the incentive for progress? If you cannot imagine every situation, and once robots are in ‘the wild’ you cannot, how can you predict every AI-inspired robotic response to it? The product liability type of approach remains immensely tempting and may perhaps be the only rational form of control, but I can see a time when it may not feel at all {i}fair{/i}.

The first of the laws suggested by Professor Edwards also made me think: ‘Robots are multi-use tools. Robots should not be designed solely or primarily to kill, except in the interests of national security.’ Nobody is going to argue with that – not even Kim Il-sung (from his mausoleum) and Kim Jong-il – but one man’s national security is another man’s repression. For some reason, as the 5th of November approaches, I was reminded of the TV adverts telling us to use fireworks wisely.

In any case, moving away from the bigger issues, I do think we will need some level of law specific to robots. I am not suggesting a separate justice system for robots or the like (sorry robots), but I do think that there will come a time when we need more focused legal control – a Robot Commissioner if you will. If we think data protection is worth it then surely robots will need a similar figure who can become immersed in the detail and give useful guidance. Maybe that’s not one for this Parliament though.

{b}Post script{/b}: if you read my last post on the ACS data saga before all the comments were added, you missed the best bits. The SCL member comments telling me why I was wrong are well worth your attention.