Graham Smith’s Keynote Speech to the SCL Annual Conference 2019 in which he argues that proposed cures for a broken internet risk harming the patient.
We know the internet is broken, because we have been told so for as long as many of us can remember.
It is broken because, as Robert Hannigan, then about to become Director of GCHQ, told us in November 2014: the US tech companies which dominate the web are the “command and control networks of choice for terrorists and criminals”.
It is broken because, as The Times headlined a piece by David Cameron’s former speech writer Clare Foges in August 2015, the Internet giants are the terrorists’ friend.
It is broken because, as then Prime Minister Theresa May said in June 2017, the internet provides the safe space that extremist ideology needs to breed.
It is broken because of hate speech, toxic discourse, online abuse, silenced voices, bullying, suicide sites, trolling, doxxing, fake news, conspiracy theories, grooming, child sexual abuse images, copyright infringement, surveillance capitalism, encryption, revenge porn, because it is not safe for children, because of amplification and filter bubbles, because of addictive behaviour, and much more besides.
The internet must be broken, above all because Sir Tim Berners-Lee has told us so. Or, to be precise, he says the web is broken. The web and the internet are not the same thing. But broken nonetheless.
Sir Tim’s journey to this conclusion reflects a certain strand of tech-utopianism. Technology was held to reflect and encourage the best aspects of human nature.1
As Sir Tim himself said a year ago,
“I thought all I had to do was keep it, just keep it free and open and people will do wonderful things” … “If you’d asked me 10 years ago I would have said humanity is going to do a good job with this… if we connect all these people together, they are such wonderful people they will get along. I was wrong”.2
If Sir Tim really did think that people would do and say only wonderful things on the free and open web, then he was not only mistaken but demonstrating a degree of naivety.
Naivety can, unfortunately, be dangerous. Not just because we are left unprepared for bad actors, but because inflated and inevitably disappointed expectations will surely be followed by the backlash – the techlash – that is now in full flood: the swing to the opposite pole, with the loss of perspective and rationality that accompanies a full blown moral panic.
Is opening up the ability to communicate to the world liberating? Yes. Is that a good thing? Absolutely. Article 19 of the Universal Declaration of Human Rights could have been written specifically for the internet.
Does that mean that everyone will only use it in ways that would befit a vicar’s drawing room or a primary school classroom? That no-one will say things that are offensive, shocking, disturbing or upsetting? Of course not. And the case for protecting freedom of speech does not depend on their refraining from doing so.
Of course, people do do wonderful things on the web. Probably far more than the bad things. Certainly so if we include all the everyday things. Those are the things we never hear about because they are not news. The good that comes from the internet is mostly invisible, submerged under the torrent of outrage that accompanies each new example of someone doing something terrible.
A generation of entrepreneurs started out claiming that their tech was a force for good. A succession of Silicon Valley founders has now queued up to recant their early faith, helping in the process to fuel a lurch from extreme optimism to extreme negativity. If you set out to be seen as a saviour, don’t be surprised if you are cast in the role of demon when the promised heaven fails to appear.
The utopianism has not, however, gone away. Rather than underpinning a belief that the web would usher in an era of universal harmony, it now feeds demands that governments should adopt legislative programmes to achieve the ideal that technology alone could not. Fully one third of Sir Tim’s Contract for the Web is a collection of proposals for legislation that he thinks will help deliver his vision of what he meant the web to be.
Sir Tim would not be the first inventor to wish that people would use his brainchild in the way that he would like to see it used. But the inventor of the quill pen did not get to tell people what to write with it, nor should Gutenberg have expected to tell people what to print.
Legislating in order to achieve a Utopian vision of the web — even one emanating from someone as self-evidently well-intentioned as Sir Tim Berners-Lee — is something that we should treat with as much, if not more, caution than claims that any given technology will be an unalloyed force for good.
It suffers from F. A. Hayek’s Fatal Conceit: that the intelligentsia, being very clever, think they know enough about society to design it in their image — or at least in the image to which they aspire.
Hayek said: “The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.”3
What went for the tech-Utopians goes also for policy and legislative Utopians. It is foolish to think that if only we could design the technology correctly, whether voluntarily or under legal compulsion, user behaviour would magically and universally conform to some ideal notion of state-defined social good.
That is neither an achievable nor a desirable goal. Unachievable, because it treats internet users as mere technology puppets incapable of personal choice or agency. Undesirable, because the role of legislation, all the more so where speech is concerned, is (or should be) to set outer boundaries within which we are free to pursue our own goals and to speak as we wish, whether or not those are goals set by the state.
Legislation that starts from the premise that we should be able to say only what the technology allows us to say, and that technological constraints should be designed-in under compulsion from a state in pursuit of its social goals, sets a disturbing precedent.
Is the internet broken?
Is the internet broken? Certainly it is caught in a perfect storm on many fronts, all of which have contributed to the demonisation of the tech industry. In 2019 an online intermediary can do no right.
Nevertheless, when it comes to online behaviour, we should not forget that it is individual perpetrators who do and say the things that are found so objectionable. Some will argue that particular features of some platforms exacerbate such behaviour. Even then, users are not mere puppets of technology. They are human beings who make personal choices about what to say online.
And there are many kinds of intermediary against whom such accusations cannot be levelled. The most that can be said about them is that they facilitate the ability of individuals to speak to a large audience without the moderating influence of an editor.
Some may regard that as creating an inherent risk, one that justifies imposing proactive moderation duties on the intermediary. I would suggest that it is wrong to regard speech, including public speech, as something that constitutes an inherent risk from a legal perspective. That is an existential challenge to the notion of freedom of expression. Put in the context of duties of care, speech is not a tripping hazard.
As to editorial control, nowhere in Article 19 of the UDHR or even in the comparatively attenuated Art 10 of the ECHR does it say that freedom of speech is conditional upon having an editor blue-pencil your work.
Yet to listen to some of the debate in the last few years, one would think that we individuals are not to be trusted with the power of public speech, that it was a mistake ever to allow anyone to speak or write online without the moderating influence of an editor, and that by hook or by crook the internet genie must be stuffed back in its bottle.
At any rate, when we look at the legislative proposals that are floating around, the focus is, of course, not on the perpetrators but on the intermediaries. We find some common themes.
Cures for the broken internet
This brings us neatly to proposed cures for the broken internet. I will touch on two of these: the Online Harms White Paper and the likely re-opening of the eCommerce Directive.
First, the eCommerce Directive. The tension between the ECD and various legislative proposals both nationally and at EU level has been growing for some years. Much of the tension is around proactive monitoring obligations, or upload filtering. We see that in the Digital Copyright Directive passed in April 2019, we see it in the proposed Terrorist Content Regulation currently under discussion in the EU Trilogue, and we see it in the Online Harms White Paper.
Article 15 of the ECD prohibits a Member State from imposing a general monitoring obligation on conduit, caching or hosting intermediaries. Increasingly, legislation is butting up against Article 15.
Why does this matter? Because back in 1999, with perhaps unusual prescience, the European Commission was alive to the potential for intermediaries to be used by Member States as chokepoints to control the information flowing through them. Gateways could be converted into gatekeepers operating under the direction of government. This was, rightly, seen as an undesirable possibility that should be guarded against – not for the benefit of intermediaries, but for the protection of internet users.
Twenty years later, the very danger that Article 15 was designed to guard against is now with us. The pressure is on the Directive, which is routinely characterised as out of date and in need of revision. It is likely that the Directive will be re-opened as part of the Digital Services Act proposed by the new Commission.
Something that even the ECD did not foresee — presumably because it would have been dismissed as an outlandish suggestion — was that an intermediary might be required to monitor for lawful material. Article 15 prohibits only a general monitoring obligation aimed at detecting illegal information or activity.
Perversely, therefore, when the Online Harms White Paper suggests that proactive monitoring should be required for lawful material, that would apparently not fall foul of Article 15; whereas the same obligation for unlawful material would do so.
The Online Harms White Paper has been heavily discussed and criticised. I do not propose to cover it comprehensively, but to touch on two points: what is meant by harm, and the parallels with offline duties of care.
The scheme of the proposed legislation is that it would create a statutory so-called duty of care, which would apply to a huge variety of sharing, search, messaging and other platforms – anything that allows users to interact with each other online. Everything from Facebook and Twitter to Mumsnet and the John Lewis customer review section.
So-called, because unlike (say) occupiers’ liability or negligence, there is no proposal to create a direct right of action for anyone harmed by a breach of the duty of care. Interpretation, supervision and enforcement would be for a statutory regulator, which would be expected to devise Codes of Practice for various kinds of subject matter. For terrorism and CSAM the Home Secretary would have ultimate sign-off on the Codes of Practice.
I say interpretation, but notoriously the White Paper makes no attempt to define harm. Nor does there appear to be any intention to do so. This is not interpretation as a lawyer would conventionally understand the term. It is, in effect, delegation to the regulator of the power to make decisions about what speech is and is not to be regarded as harmful.
We can usefully recall the oft-quoted words of Lord Justice Hoffmann (as he then was) in R v Central Independent Television:5
“But a freedom which is restricted to what judges think to be responsible or in the public interest is no freedom. Freedom means the right to publish things which government and judges, however well motivated, think should not be published.
It means the right to say things which "right-thinking people" regard as dangerous or irresponsible.”
That is the most famous passage. But, more significantly for present purposes, he went on:
“This freedom is subject only to clearly defined exceptions laid down by common law or statute."
The emphasis on clearly defined exceptions reflects the principles of legality, certainty, foreseeability and the rule of law. It resonates with particular force in relation to laws affecting freedom of expression. There is something terribly wrong with a legislative proposal to govern individual speech that deliberately goes out of its way to be as vague and ambiguous as possible.
Lastly, a look at how the Online Harms White Paper proposals compare with the approach that the law takes to these kinds of issues offline.
For many years the principle of online-offline equivalence has, at least ostensibly, held sway in policy debates. Typically it takes one of two forms:
The House of Lords Communications Committee6 invoked a 'parity principle':
"The same level of protection must be provided online as offline".
The then Culture Secretary Jeremy Wright said, shortly before the White Paper was published,7 that
"A world in which harms offline are controlled but the same harms online aren’t is not sustainable now…".
However, the White Paper in fact represents a significant departure from the principle of online-offline equivalence. The underlying principle now seems to be that the internet is different, and so must be subject to a different legal regime from offline.
That shift may not be spelled out explicitly, but is implicit in the fact that the White Paper proposals are for a duty of care regime that has no comparable offline equivalent.
What might be the offline comparables?
1. Distributors of third party information, such as booksellers, print distributors and the like, have long been considered to be the closest analogue to online intermediaries. They are subject to general laws of obscenity, copyright, defamation and so on, but usually qualified in some way to reflect their role as intermediaries.
But neither the distributor, nor the publisher, nor the author owes a duty of care to readers to prevent them suffering harm from reading the book. Even less is there a regulator standing over their shoulders defining and enforcing such a duty.
2. More recently an analogy has been drawn with operators of public spaces such as theme parks, bars and so on8. Under common law and the Occupiers’ Liability Act they owe a duty of care to visitors in respect of their safety. The proposition is that social media platforms and the like should be treated similarly and be subject to a statutory duty of care to prevent their users being harmed.
However, neither the occupier's duty of care nor its common law equivalent translates into the duty proposed in the White Paper. The existing duty is limited to physical injury, rather than extending to the broad and subjective notions of harm encompassed by the White Paper. It very rarely requires an occupier to prevent one visitor injuring another – precisely the kind of duty contemplated by the White Paper. And it never imposes a duty in relation to what one visitor may say to another.
3. In fact the White Paper most closely resembles the existing regime for broadcast regulation. That raises the separate issue of whether broadcast regulation is an appropriate regime for individual speech. We have been here before – back in the 1990s with the Bangemann Charter, in the early 2000s with the debate on the Communications Act, and in various iterations of the Audiovisual Media Services Directive.
The question this really raises is whether it is right, or desirable, to subject a Twitter user to the same speech restrictions and the same kind of regulatory regime as an audience member on a daytime TV show. That has always been rejected in the past9, in favour of the argument that only clearly defined general law should apply to individual speech. We shall see whether that survives this time around.
If the internet is broken, it is only because we are broken – we have not respected the laws or we have not provided effective means to enforce the laws that exist.
Maybe we should debate whether new laws setting different boundaries for what we can and cannot say online are required. That is a project that is currently being undertaken by the Law Commission.
But in the quest to tame the internet dragon by co-opting intermediaries to the hunt and empowering a regulator to write a new set of rules for online speech, are we in danger of actually breaking the internet? Would it turn out to be an act of epic self-harm?
One thing is certain: we won’t see the damage. The very nature of compelled upload filtering or, to give it its old-fashioned name, censorship, is that we cannot see what never sees the light of day. Nor can we guess at the words that are never written because of the impact of a chilling effect.
The precautionary principle, now so often deployed in favour of the kind of legislation that is proposed to clean up the internet, translates into prior restraint when applied to speech. The traditional presumption against prior restraint is, for good reason, a foundation of protection of freedom of expression.
I fear that we are in grave danger of sacrificing the invisible good of the internet in our quest to suppress the visible bad.
Graham Smith, Of Counsel, Bird & Bird, blogs at Cyberleagle and is author of Internet Law and Regulation (Sweet & Maxwell)
1 Wikipedia, "Technological utopianism", citing as an example Rushkoff, Douglas (2002): "Renaissance Now! Media Ecology and the New Global Narrative". Explorations in Media Ecology. 1 (1).
2 The inventor of the web says the internet is broken — but he has a plan to fix it CNBC, 5 November 2018.
3 F.A. Hayek, The Fatal Conceit: The Errors of Socialism (1988) University of Chicago Press. The concept owes something to a passage in Adam Smith’s The Theory of Moral Sentiments (1759): “The man of system, on the other hand, is apt to be very wise in his own conceit; … He seems to imagine that he can arrange the different members of a great society with as much ease as the hand arranges the different pieces upon a chess-board. He does not consider that the pieces upon the chess-board have no other principle of motion besides that which the hand impresses upon them; but that, in the great chess-board of human society, every single piece has a principle of motion of its own, altogether different from that which the legislature might chuse to impress upon it.”
4 The hosting regime under the Electronic Commerce Directive incentivises takedown by removing the liability shield if a host does not remove or disable access to unlawful material expeditiously upon gaining relevant knowledge. It does not, however, impose a positive obligation to do so. See further, ‘The Electronic Commerce Directive – a phantom demon?’ 30 April 2019, Cyberleagle blog.
5 R v Central Independent Television plc [1994] 3 All ER 641.
6 House of Lords Select Committee on Communications Regulating in a digital world, 2nd Report of Session 2017-19 - published 9 March 2019 - HL Paper 299.
7 "We must make the digital world a safer place", DCMS blog, 21 February 2019.
8 Professor Lorna Woods and William Perrin, "Internet Harm Reduction: A Proposal", Carnegie UK Trust, 30 January 2019 (and related papers).
9 The most recent revision of the Audiovisual Media Services Directive can be regarded as representing the first incursion into this principle.