Editorial

August 23, 2015

This issue includes a number of articles that arise from, or are inspired by, the SCL Technology Law Futures Conference, which was hosted and sponsored by Herbert Smith Freehills in June. I hope that the next issue of the magazine will carry further material arising from that Conference. I am grateful to all those involved and of course to the contributors and, in particular, to the Conference Chair, Simon Deane-Johns, for their help in easing the transition from conference presentation to magazine article.

That Conference always covers stimulating themes and sparks interesting debates, but I venture to suggest that the issues surrounding artificial intelligence, the leading theme of the Conference, raise more profound worries than any of the previous themes. The subject has certainly tempted me to go beyond the usual areas for my editorial.

The article featured on the cover, entitled ‘Is Luddism the Answer to Keeping Humans at the Heart?’, very broadly reflects Neil Brown’s contribution to the Technology Law Futures Conference, but it is fair to say that the cover picture itself trades on misconceptions about Luddism and does not do justice to Neil’s more subtle parallels between the experience of the Luddites and the stirrings of suspicion about AI and the replacement of trades (and professions) by technologically advanced machines. Indeed, one could argue that, while the Luddites damaged and destroyed a few machines, a more apposite cover would have shown a machine stomping on 19th-century textile workers, for it was they who were destroyed.

Neil’s call for lawyers (among others) to consider the human impact of advanced technology is likely to fall on deaf ears. We are living in times when the financial imperative is more powerful than Caesar (or, perhaps more appositely, Cnut) and the wave of change appears irresistible. Moreover, there is a very sensible argument that the lesson from history is that, as one form of employment is destroyed by technology, another, more remunerative, form arises from it. Unless you steadfastly refused to adapt and stuck to your stocking-frame in the 19th century, there were opportunities for new riches and an improved standard of living (though those who stuck to the old ways are now probably hipster artisans and doing very nicely). We often forget that the horrors of factory life drew people from rural areas to urban ones because, awful though life in the factory might be, it seemed preferable to the hovel and grind that was the lot of many agricultural workers – it wasn’t all bucolic joy. The flexibility of the employment market has been demonstrated much more recently, too, by the astonishing change in the nature and spread of women’s employment.

With that lesson from history, and plenty of other things to worry about, should we find time to worry about AI? My instinct is that we should worry, a lot. The combination of social trends and technological developments could lead us to a dystopian future. AI might just possibly be different from what went before, and one lesson from history is that it does not always repeat itself.

The expectation that the benefits of technology would be spread to enable a life of greater leisure for all and a more rounded, better-informed life for most was a 1960s pipe-dream that, in large measure, has come to pass. Those of you working 60-hour weeks for a firm that expects 24/7 availability may allow yourselves a hollow laugh, but for the vast majority living in the technologically advanced world, things got better and the world got wiser – and technology was key. But we may be swinging away from that.

Technology was by no means the only factor in spreading those improvements. Post-war social solidarity and the influence of the unions were vital too, most obviously in Germany. So the lesson from history that might enable us to deal most effectively with the worrying aspects of AI should be about solidarity. We may need a new definition, because the ASLEF way, though it may be effective in the short term for a very few, is not the long-term route to spreading the benefits of technology widely.

Some of that solidarity could come from law, but that would need a social springboard in place first. The irony is that the Internet and social media provide mechanisms for creating levels of social solidarity that could not previously have been contemplated. Solidarity beyond borders, beyond social class and spanning generations – and it can be educative too, breaking down barriers along the way. I don’t doubt that such solidarity can be perceived as that of the mob, as Luddite history shows – indeed the distinction between the mob and the applied weight of social media brought to bear on the dentist who shot Cecil the Lion is a fine one – ‘fine’ in both senses. But, properly applied, such solidarity, allied to a level of non-artificial intelligence that we must do our best to muster, can save lions, require the redrafting of privacy policies, shame a bucketful of politicians and maybe even lead to a better application of AI and a more equitable distribution of its fruits.