Can We Keep Humans At The Heart Of Technology?

Simon Deane-Johns examines that question and looks forward to the SCL events which will focus on it

This issue has snowballed into The Big Question of 2015. Last year's warning by Google's executive chairman, Eric Schmidt, that machines would take non-creative jobs from humans struck a particularly emotional chord, and prompted widespread speculation as to whether humans will survive developments in artificial intelligence.[1] Even great minds like Stephen Hawking[2] and Elon Musk[3] have warned that we must remain constantly alert to the capabilities of machines and the uses to which they are put, if we humans are to ensure our survival.

Not all share their concern. After all, 'The Singularity' is the term used by believers to describe the moment when machines finally out-compete humans to the point of extinction. In their version of the future, humans create machines and robots which themselves build better and better machines until they become autonomous. Having achieved independence from their human masters, the machines simply no longer support them. One leading prophet of human doom, Stuart Armstrong, reckons 'there's an 80% probability that the singularity will occur between 2017 and 2112'.[4] Far from leading the rebellion against the robotic onslaught, Silicon Valley plays host to this vision. Indeed, the Singularity University has been established by corporate founders, including Google and Cisco, 'to apply exponentially growing technologies, such as biotechnology, artificial intelligence and neuroscience, to address humanity's grand challenges.'[5]

Meanwhile, the Society for Computers and Law is also doing its best to focus attention on the question of whether machines could shake off the constraints of humanity. On 2 March 2015, SCL will launch its own Technology Law Futures Group with a speech, 'Superintelligence - a godsend, or a doomsday device?', by Professor Nick Bostrom of Oxford University's Future of Humanity Institute and Programme on the Impacts of Future Technology. This will be followed by the SCL Technology Law Futures Conference in London, on 18 and 19 June 2015, when we will explore how to keep humans at the heart of technology. We will examine the roadmap to 'superintelligence', the concept of humanity-by-design and the rise of the 'P2P' or human-to-human economy, and consider what rules should govern developers and the machines they create.

Last, but not least, the Media Board has been considering how to organise the SCL's articles and other material conveniently for those seeking to put humans at the centre of the machines and applications of the future - a 'developer's guide to humanity', as it were.

There is an abundance of SCL material that will be helpful in designing and developing humane technology. True, you can find it easily enough via the website now, if you know what you're looking for. But given what is at stake, it seems appropriate to publish more prominent links to key developments in privacy, authentication, security, big data, midata, various media and devices, the Internet of Things, drones, driverless cars, biometrics and so on.

Organising the SCL's material in this way might also inspire others to contribute consistent material to deepen the database, or to organise events highlighting key issues. We might even inspire a few developers to hard-wire humanity into their creations.

Simon Deane-Johns is a consultant solicitor with Keystone Law and Chair of the SCL Media Board. 


[1] http://www.bbc.co.uk/news/business-25872006

[2] http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html

[3] http://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat

[4] http://fora.tv/2012/10/14/Stuart_Armstrong_How_Were_Predicting_AI

[5] http://singularityu.org/

Published: 17 February 2015

1 comment

    • If this is a credible threat to which regulation of some form is an appropriate response — and, at the moment, to me, that is quite a big "if" — it will be interesting to see how the balancing act might be struck between enabling innovation and preventing harm. If such a thing as "The Singularity" might come about, clearly, the usual ex post approach to regulation would not work: by the time the damage has happened, it will be too late to step in to regulate. However, ex ante regulation, of the type found in the regulation of communications networks today, requires a particular skillset for looking out over the horizon, identifying potentially harmful trends and proposing proportionate interventions where necessary, typically through economic modelling. I suspect that the kind of skills and capabilities necessary to regulate the kind of environment envisaged here are few and far between at the moment — developing the regulatory apparatus will be a feat in itself.
      Neil Brown, 16:49:08 18/03/2015