Technological Revolution and Catastrophe Part 2: can we manage technological revolution now?

In part 1 of this article, Ben Kaplinsky looked at how unregulated advances in technology contributed to the outbreak of WW1. In this second part, he reviews the current regulatory efforts to handle social media, Artificial Intelligence and other technologies and questions whether we may make the same mistakes again.

“Whoever becomes the leader in artificial intelligence will become ruler of the world”

President Putin, June 2017

The landscape of new technologies is vast and complex, while the commercial opportunities arising from cutting-edge technologies are boundless. The most powerful fast-growing companies of recent years are those that have been quick to exploit new technologies such as big data and artificial intelligence (Amazon, Google, eBay and others). In every era in modern times, companies and nations that have successfully exploited ground-breaking technologies have thrived.

We saw in Part 1 of this article that the failure of legislators in Edwardian Europe to control new technologies was an important causative factor in the outbreak of World War One, contributing to a chain of events that led to the deaths of 70 to 80 million people. Have we learned the lessons of history? Do we have sufficient legal control of our own technological revolution of the 2020s? Are we at risk of repeating the technology-induced crises of the 20th century?

In some areas, legislation appears to have anticipated and controlled the risks of important new technologies. For example, our legal system appears to have succeeded in containing the danger that drones pose to aeroplanes. In December 2018 Gatwick Airport was closed after drones were found flying above the airport complex. Mercifully, despite this and other near misses, no airline calamities have been caused by drone collisions, as timely legal changes were implemented as far back as 2012. Likewise, the UK Law Commission is now concluding a painstaking and thorough three-year consultation on the law relating to the use of self-driving cars, prior to devising new laws to make autonomous vehicles safe. In a different sector of technology, human genetic engineering has also been tightly controlled by the prescient Human Fertilisation and Embryology Act, enacted as far back as 1990.

Conversely, domestic and international law-making continues to lag behind other powerful new waves of technological change, leaving a dangerous lawless area of the economy and society where unregulated technology is deployed. Consequently, serious harm has occurred and is likely to continue to occur.

Our out-of-date laws create an environment in which cybercrime increasingly thrives, generating very high profits for cybercriminals with a very low risk of prosecution and punishment. Disastrous ransomware attacks are growing steadily across the world, yet the domestic and international legal framework for cybercrime is, in numerous important respects, deficient. In the UK and America, it is still legal for insurance companies to make ransom payments to cybercriminals, which only serves to fuel this highly lucrative racket. Few politicians have an expert understanding of emerging technologies and, in the UK, the Crown Prosecution Service lacks both in-depth knowledge of cybercrime and sufficient resources to successfully prosecute cybercriminals, instead relying on domestic legislation (the Computer Misuse Act) that is woefully out of date and no longer fit for purpose.

Colonial Pipeline Cyber Attack by "DarkSide"
The Colonial Pipeline transports almost half of the fuel supplies for America’s East Coast, facilitated by highly technical digital and robotic control. In May of this year a Russian cybercrime group called DarkSide hacked into the network and the entire pipeline was temporarily shut down. The victim company paid the hackers $4.4 million to resume operations in the most significant attack on national infrastructure yet.

Social media pervades modern life but, at the time of writing, remains largely unfettered by legal control. In 2017, 14-year-old British girl Molly Russell committed suicide after viewing multiple “how-to” videos on social media. Between 2015 and 2018 it was widely reported that Islamic State used online platforms such as Twitter to recruit an estimated 40,000 foreign members (including many from the UK), and YouTube’s algorithms have been heavily criticised for recommending terrorist content. The important role of Facebook in the 2016 Rohingya Genocide is beyond doubt. The regulation of social media was first proposed by the UK government in 2017. It then took 18 months for the parliamentary white paper to be published, and it appears that the law may not be enacted until 2024, a full seven years from conception to completion: the wheels of parliament turn at a staggeringly slow pace.

Rohingya Genocide (Rakhine State, Myanmar)
[Map: Rakhine State, Myanmar]
In 2018 Facebook agreed with a report that found it had been used as a principal tool to incite genocide against the Rohingya minority in Myanmar. Facebook, with 18 million users in Myanmar, was used to whip up and co-ordinate hatred against the Rohingya, inciting the murders of 6,700 people, including 730 children.
Equally important is the abuse of big data and artificial intelligence during the American presidential election of 2016. The European draft Artificial Intelligence Act will prohibit the type of misuse of personal data deployed by Trump and Cambridge Analytica but, five years on from that scandal, this first legal attempt to control AI has still not been enacted. The failure to control AI in 2016 had the potential to enable Trump to dismantle the sacrosanct constitutional checks and balances that have protected the US democratic process for more than two centuries.

Before I attempt to predict the potential technology-induced crises of the coming years, three general observations on the changing landscape can be made.

Firstly, many companies continue to generate huge profits when deploying new technologies in legal vacuums. This is an ever-present feature of the technological frontier. It happened during the arms race before WW1 and occurred 100 years later as Facebook and Amazon were growing into two of the world’s richest companies. Domestic and international legislators are now, at last, enacting laws to control social media and online marketplaces but in so doing are seeking to close the stable door many years after the Facebook and Amazon horses have bolted.

Secondly, cybercriminals can now transcend national boundaries. Nations hostile to western interests directly fund cybercrime and provide safe havens for cybercriminals. Since the Budapest Convention of 2001, Russia has refused to co-operate in the investigation of cybercriminals. How can we police cybercrime when cybercriminals can operate across national boundaries and locate themselves beyond the reach of the law, with little risk of extradition? It is widely believed that the Russian authorities allow the prolific, highly organised activities of cybercrime organisations such as DarkSide to continue so long as these cybercriminals attack exclusively foreign targets. Some of these attacks appear to be state-sponsored, while others are tolerated, if not actively encouraged.

[Graph: growth of ransomware attacks, 2021]

Thirdly, it may be argued that the basic concepts of western legal systems, designed and refined during the 18th and 19th centuries, lack the speed and agility required to control extremely dynamic new technologies. Many fast-changing technology companies (and cybercrime organisations) have little difficulty out-manoeuvring law-making institutions which are, in comparison, cumbersome, old-fashioned and extremely slow. Does the underlying conceptual design of western legislative processes need to be fully rethought and modernised? How can our legal system speed up so that it can begin to track the pace of change at the technological frontier?

Taking account of developing trends in technological change, and gaps in law and policy, my analysis suggests that there are seven potential national and global crises on our horizon in 2021.

  1. It is a relatively safe bet that there will be continuing, increasingly serious and paralysing incidents of cybercrime, carried out by cybercriminals located in states hostile to international legal norms and strengthened by artificial intelligence, with a growing risk of mass infrastructure shutdowns. Cybercrime will continue to be directly funded and directed by these state governments and carried out by highly profitable cybercrime groups such as DarkSide, which enjoy safe locations beyond the reach of investigating authorities. The risks of cybercrime may soon increase further due to advances in quantum computing which, if the predictions are right, will soon be able to crack much of today’s encryption. The Colonial Pipeline attack in May of this year may well be a harbinger of similar, increasingly serious attacks paralysing national infrastructure in western nations.
  2. The Chinese Communist Party’s use of the Internet of Things and artificial intelligence to create smart cities and social credit systems could become a model of social control replicated by high-tech dictatorships throughout the world, which may combine such systems with automated weaponry. It is unlikely that China will be the only surveillance state of the future. The seductive benefits to government of making more efficient use of public funds, and the need to manage crime, could lead to ever more intrusive surveillance techniques being deployed in democratic states as well. There is already controversy over Serbia’s use of (Chinese-manufactured) surveillance cameras in Belgrade.
  3. New 3-D printing technology may be widely used by criminal gangs to manufacture home-made firearms, seriously undermining Europe’s tight gun controls. This could cause European homicide rates to spiral towards those of countries such as the USA, Brazil and South Africa, or could increase the risk of coups by armed militias.
  4. It is very likely that grave harm caused by abuse of social media will continue. Whether the Online Harms Act, with huge fines that dwarf those of GDPR, will succeed in controlling these risks in the UK remains to be seen.
  5. Following the success of Trump, Cambridge Analytica and Russian interference in subverting the 2016 American presidential election, it is very likely that future democratic elections throughout the world will be similarly subverted by those exploiting artificial intelligence. This year the European Commission unveiled the first-ever legal framework on artificial intelligence, in the form of the Artificial Intelligence Act, which seeks to prohibit this type of abuse. Despite these first attempts, the effective control of AI (with its extra-territorial reach) is unlikely to be achieved soon.
  6. Chinese digital imperialism, while not a global catastrophe, may be viewed as a very harmful change in the world order for western states. TikTok, which uses AI to tailor entertainment to each user’s personality and interests, has, at the time of writing, more than one billion users (many of whom are located in the USA and Europe). There is a clear risk that ByteDance, the Chinese company that owns TikTok, will provide vast quantities of valuable data to the Chinese government, with unforeseeable consequences. India is the first large nation to ban TikTok, a ban made permanent in January of this year.
  7. Finally, at the time of writing, while we are still recovering from the COVID pandemic, it appears at least possible that COVID-19 was caused by a leak during a laboratory experiment in bioterrorism at the Wuhan Institute of Virology. Could it be that viruses such as COVID-19 will be weaponised with devastating effect in years to come, leading to a new era of man-made pandemics?

International legal co-operation is required to tackle each of these risks, but the underlying commercial and political conflicts of interest between western nations and states such as Russia and China mean that simple international legal solutions are unlikely to materialise soon. Russia has, at previous international conventions (notably the Budapest Convention of 2001), firmly refused to combine forces with western states in eradicating risks from new technologies. The Kremlin has certainly not demonstrated a marked desire to work in harmony with western governments and law-makers.

For those who work in the field of emerging technologies, the concepts of prescience and forethought are key. Those who think through and successfully deploy the opportunities of emerging technologies invariably thrive. We saw in Part 1 of this article that a failure of prescience by legislators contributed to the crises of the last century and, while the future is not ours to see, emerging technologies, with all the great opportunities and hazards that they bring, may be carefully observed and successfully anticipated. I hope these two brief articles have shown that upholding the rule of law on the technological frontier is uniquely challenging but also essential for the future well-being of humanity.

Read Part 1


Ben Kaplinsky worked as a trial advocate in the higher courts before re-specialising as an in-house technology lawyer. He is principal technology counsel for FTSE 100 company Kingfisher PLC.

Published: 13 August 2021
