Editorial

February 1, 2017

This issue ranges across tech lawyers' wide areas of
interest. Some articles cover recent developments and cases,
others revisit the sort of contract problems that fill many days, and we also
look at the impact AI might have on the future practice of law.

As regards the latter, we have two
wonderful articles which at first glance take diametrically opposed views of
the importance of AI to legal practice. Rohit Talwar and Alexandra Whittington
offer a vision of a legal profession transformed by 2025 (see p 22) while
Robert Morley claims to be debunking myths (see p 27). When I quietly suggested
that it might be unlikely, say, for Iceland to be ruled by an Algocracy by
2025, Rohit and Alexandra made it clear that their aim is to provoke. From that
perspective, their article is useful in its glorious vision for the potential
of AI, wrapped into the entertaining stories they have created. Robert Morley’s
vision for AI is less glorious and based on current trends rather than a
futuristic outlook. But his ‘debunking’ is far from dismissive. He sees AI as a
limited tool that will never threaten the role of the good lawyer. I highly recommend
reading both and seeing which standpoint appeals to you most. I think that the
division between their views is less of a chasm than one might suppose; it
requires a bit of a juggle, but I find myself agreeing with both.

I am convinced that AI will transform legal
practice. The reality will be less flashy than Rohit and Alexandra envisage,
the speed of change will not match their predictions and I suspect that there
will be many bumps along the road. But, as Joanna Goodman’s book shows (see p
29), we are already seeing applications of AI that make a real difference. How
big a difference and how quickly life will change is pretty well impossible to
predict, but we will certainly see the magazine revisiting that question many
times over the coming years.

The impact of AI was a topic under
discussion at a recent meeting of the SCL Editorial Advisory Board. Trevor
Callaghan of DeepMind shared some of his experience and I have found myself
recalling different aspects of the ensuing discussion on a number of occasions
since. Three unresolved thoughts still have me worried.

That Board meeting discussion wandered at
one point towards driverless cars. I suspect that is partly because it is the most
accessible form of AI, but also because we are all experts on driving, or at
least think we are. Trevor Callaghan had reminded us that one problem with an
Air France air accident was the pilots’ lack of familiarity with the testing
situation that arose. It is not hard to see how those who are used to
driverless vehicles of the most advanced kind will not only take time to adjust
to a sudden call for human intervention but may actually lack the skills they need
to cope. Don’t stand behind me when I am in a hire car because I reverse in
reliance on the warning beeps in my normal car. And I am what might politely be
described as an experienced driver – God help the newbie in a few years.

That situation is frightening enough but we
were also discussing the increasing and mind-blowing accuracy of newly
developing medical apps. It's not hard to imagine a situation where diagnostic
reliance on such apps is almost total – and, judging from my experience of
medical diagnostic skills this week, that time cannot come quickly enough. But
what happens when the system fails? Can we justify the investment in
maintaining human skills that would enable a substitute service? Is it even
possible to develop such skills in a simulation, ie without the potential of
responsibility for error?

Finally, we considered ethics and AI. The
thought which troubles me is whether, with so many current reminders that we live
in bubbles or echo chambers, we are likely to apply ethical constraints that
vast sections of society will regard as deluded. In an article to be published
on the SCL website shortly, Stuart Young from Gowling WLG refers to a survey
finding that people wanted to drive around in a car that protects them and
their passengers rather than one that minimises casualties among other road users. That
might actually help to sell driverless cars – which, on a grand scale, will
save lives. It’s worth a moment’s contemplation though, especially if (as is
likely for years) the driverless car passengers are rich and the ‘other road
users’ are poorer. Tricky.