Do social network providers require (further?) regulation?

May 20, 2019

The House of Lords Select Committee on Communications concluded this month that “regulation of the digital world has not kept pace with our lives”.1 It described the last twenty years as a period of “rapid innovation enabled by light-touch regulation” and recommended as a matter of urgency the extension of regulatory powers, regulatory co-ordination, and the creation of both a new regulator (the Digital Authority) and a joint committee of both Houses of Parliament, the latter to consider matters related to the digital environment.2 This essay outlines that regulatory failure, and assesses the likely benefits of swift regulatory escalation.

At the time of writing, a 29-year-old Australian man was being held by New Zealand police on a charge of murder in relation to the Christchurch mosque shootings of Friday 15th March. Fifty people were shot dead during a killing spree which the perpetrator recorded on video and live-streamed on social networks during the commission of the crime. The video, perhaps inevitably, remained in online circulation despite the efforts of the authorities to remove it.

Horrors such as Christchurch should be treated cautiously: they often lead to thoughtless or reflexive responses on the part of the public and politicians alike. That the social networks of the internet can be so widely and wildly provocative, and that they also appear to escape meaningful legal accountability, is perhaps no coincidence. It is why the current state of affairs has been described as “regulation by outrage”, although in most cases, this crisis-led campaigning and lobbying has produced little more than provider-led initiatives.3

On the day of the shooting, New Zealand’s Office of Film and Literature Classification issued a warning that the footage was likely to be “objectionable” under New Zealand law. Formal classification as “objectionable” followed over the weekend, on the grounds of s3(3)(d) and (e) of the Films, Videos, and Publications Classification Act 1993 (FVPCA 1993). The footage was held to be a harmful publication that promoted, encouraged and justified acts of murder and terrorist violence against an identified group of people. “If you have had a copy,” New Zealand’s Department of Internal Affairs told those within its jurisdiction, “you must now delete it”.4

What the Department did not say was that its classification instantly rendered any New Zealander with the video still in a computer’s memory cache, or in any social media stream, knowingly or not, potentially guilty of a criminal offence under s131 of FVPCA 1993. One can readily assume, given the inherent nature of terrorism, that the footage was calculatedly horrific, which is to say sensational in the most literal sense. One can equally comprehend that, given the human appetite for sensationalism and the dynamics of the internet, the footage must have reached a sizeable audience. It was shown in extract by Sky News Australia and the MailOnline, amongst other corporate outlets, the latter described as the most visited English-language newspaper website in the world.5 Viewing extracts of the footage shown on such websites was now illegal in New Zealand, as was the failure to have adequately wiped one’s hard drive after viewing the footage prior to its classification. A significant proportion of the country’s population was, in effect, presented with a choice: collective self-censorship or criminality.

The New Zealand government can generally be commended for the sensitivity of its response to the murders of 15th March. Its legal response, in making the viewing of the footage or of the alleged murderer’s self-titled “manifesto” a criminal act, can be considered an act of civic responsibility. It does not necessarily follow, however, that it can be considered good law. Most of the FVPCA 1993 predates the popular internet. It is, in bulk, two years older than the first edition of Microsoft’s Internet Explorer. This is the law which has been applied potentially to criminalise an unknown but undoubtedly sizeable number of innocent people; a law originally intended for film producers, publishers, and commercial distributors. Today, those corporate entities have largely been supplanted by the social network providers, who enjoy broad exemptions from the law, which has instead been inverted to criminalise “end users” – the very public the law once served to protect.

This legal disparity between social networks and their users is beyond dispute. In New Zealand, viewing objectionable material without knowing it to be objectionable carries a maximum penalty of $2,000 NZD.6 That would be the fine, upon actual prosecution, for any unwitting New Zealander who viewed the shooting footage in extract on the MailOnline website. Knowingly possessing such material carries a maximum fine of $50,000 NZD or a maximum prison sentence of ten years; knowingly distributing it carries a maximum sentence of 14 years.7 But these penalties apply only to individuals. Under the Act, organisations such as Facebook are subject to a maximum penalty of $200,000 NZD.8 This injustice should be obvious. By its own admission Facebook published the shooting footage 300,000 times, whilst knowing it to be objectionable (it had already or concurrently blocked 1.2m attempted uploads).9 The idea that Facebook’s liability under the Act should be capped at the combined maximum fines of four of its individual users is a nonsense, given Facebook’s financial resources and culpability. Its accounts are impenetrable, but in 2017 Facebook posted annual revenues of some $40bn USD, whilst its last known New Zealand tax payment for that year was only $392,000 NZD.10

On the Sunday following the shooting, New Zealand police announced the arrest of a single 22-year-old man for sharing the shooting video online.11 The facts of the case are unknown at the time of writing, but it appears that, faced with mass public illegality and a global corporation with minimal liability, the New Zealand authorities may have sought to make an example of a single individual. Again, this cannot be good law.

America’s reverence for its First Amendment means it has no equivalent laws on “objectionable material”. Hate speech, to use a vernacular term, is protected as of right unless it is directed to incite imminent lawless action and likely to produce such action.12 Even so, the privileges of the social network providers in that country remain notable. s512(c) of the Digital Millennium Copyright Act 1998 exempts social network providers from liability for copyright infringement in postings by users, and s230 of the Communications Decency Act 1996 exempts providers from liability in most tort claims for “the publication of information provided by another”. Individual US users enjoy none of these immunities.

Indeed, it was s230 of the US Communications Decency Act which provided broad immunity for internet companies, and for the social network providers which were to come, directing liability solely at the individual user rather than at the corporate facilitator.

In the European Union, America’s s230 was echoed in Directive 2000/31/EC, which created the “safe harbour” regime granting providers similar exemptions from liability for the content they published or distributed. In retrospect it seems all but certain that the necessary conditions for these exemptions were never met. Social network providers are not “mere conduits” of the content they host. That content is intricately analysed, dissected, categorised, strategically distributed, partially anonymised, harvested for data and repeatedly sold. This activity is an essential element of their revenue, although social network providers have done much to obscure the fact, preferring to describe their business simply as digital advertising. “Senator, we run ads” was how Facebook CEO Mark Zuckerberg summarised his company’s business activity.13

At the time of writing, the UK remains a member of the EU, and its law reflects the “safe harbour” Directive. However, it is the UK jurisdiction which best illustrates another key regulatory and legal problem regarding social network providers and internet service providers: their compliance with unlawful mass surveillance on the part of their host (and indeed non-host) governments, as revealed by Edward Snowden, whose revelations have been accepted as evidence in UK and EU courts. The regulatory void occupied by social network providers neatly mirrors another black hole in Britain’s legal system: that of anti-terrorism and state security. The social network providers can be understood as part of the state security apparatus, enjoying similar privileges, and shrouded in the same secrecy. The scale of their complicity in data interception and collection is unknown, as is the scale and level of the online surveillance this apparatus currently performs. The courts have declared its methods unlawful on more than one occasion and may well do so again. It is now difficult to keep track of the multiple legal challenges brought against HM Government’s enactment of the Data Retention and Investigatory Powers Act 2014 and its successor, the Investigatory Powers Act 2016.

The relationship between social network providers and governments such as Britain’s seems symbiotic. Although their motivations may differ, the gross and routine intrusions of privacy in which they have conspired suggest something of a quid pro quo. The social network providers have willingly embraced the unsavoury character of the informant-provocateur, enriching themselves through the facilitation and encouragement of illegality. The rule of law is further diminished when the criminal law is applied only rarely and on a highly selective basis, or when the government is unclear, perhaps deliberately so, about what may constitute an offence.

The Metropolitan Police warned Twitter users (via Twitter) in 2014 that viewing footage of the beheading of the American James Foley by a member of ISIS could be a criminal offence. Asked by a solicitor what offence this might be, they were unable to cite any applicable statute.14 The man who murdered Foley was subsequently reported to be Mohammed Emwazi, known to tabloid journalists as ‘Jihadi John’, a West London IT graduate extrajudicially assassinated by drone strike in Syria the following year. “It was the right thing to do,” said David Cameron (via Facebook) of the killing, and by inference of the failure to bring any criminal proceedings. It is difficult to identify anything in this episode that could be said to further the rule of law. It is by no means certain that either of these killings would have occurred but for the unregulated status of social network providers. The war may have been about Syria, but these deaths belonged to a digital battlefield which remains open to all comers.

It may be the political potential of these networks which explains their legal privileges. Consider Britain’s old-fashioned, pre-internet media. Sensible arguments have long been made that it has escaped proper regulation because of its political influence, arguments which have gained traction since the discovery that some of its routine journalistic practices were illegal. Much has been said about the failure to implement the Leveson recommendations, or even to hold the second stage of the Leveson inquiry itself, yet there is at least some regulatory structure in place for the pre-internet media, however dysfunctional it may be. The newspapers bear legal responsibility for their content. British television broadcasters are even under a duty of impartiality and accuracy. In contrast, social network providers are under no such obligations. The recent US presidential election illustrates how invidious this is.

Trump’s electoral success has brought multiple claims, of varying seriousness, that agents of Vladimir Putin influenced the vote, to some unquantified extent, through hacking and other nefarious activities. What has been accepted, however, and revealed by the social network providers themselves, is that their networks are biased both by design and in practice.

American users of Facebook were, at least as late as 2018, able to open the political tab of their personal settings and see where on the political spectrum the provider had placed them, its algorithms having analysed their postings.15 Users’ news feeds would automatically reflect this. In the editorial elements of a user’s Facebook page, centred on the “trending” section, selection was made by individual Facebook employees; that selection was sometimes one of conscious political bias.16 It is the network providers who decide what content is made prominently visible. This process is far from neutral, and it determines their profitability. It is similar in effect to what occurs in old-fashioned newsrooms. The political failure to regulate the legacy media adequately does not inspire optimism that any regulation of the social networks will achieve a positive outcome.

The social networks are often described as “a wild west”.17 This is an unfortunate phrase. The “wild west” was lawless: the lands of the American west, prior to their legal annexation by the United States, were without legal systems, and any pre-annexation approximation of one was illegal in and of itself. In contrast, the social network providers reside in highly developed, and highly regulated, economies where they are exempted from certain legal responsibilities. These providers have achieved enormous concentrations of capital and political influence for precisely this reason.

“I think that it is inevitable that there will need to be some regulation,” Facebook’s CEO admitted last year.18 To hear such an admission from such a person may indicate that the horse has not just left the stable but bolted well beyond the horizon. Yet even if regulation is inevitable, it does not follow that it will be successful. It may amount to little more than operational protocols intended to justify the “safe harbour” exemptions which excuse social network providers from the laws the rest of us live under. This was the case with the European Commission Recommendation of 1st March 2018, with its “fast track procedures for trusted flaggers”.19 The key point of that Recommendation, as the Commission’s Vice-President tweeted shortly afterwards, was that “the EU’s limited liability system… should remain the backbone of an open, fair and neutral internet”.20

Examining the impact of the providers’ behavioural advertising on electoral processes in the wake of the Cambridge Analytica scandal last July, the Information Commissioner said that a “code of practice” would “fix the system”.21 It is difficult to share that faith, for the reasons explained above. The regulations, when issued, may prove supine, or the regulator (or self-regulator) may be unfit for purpose. It would not be the first time that has happened. Besides which, too much secondary legislation is enacted in this jurisdiction already. It is, in any case, a fundamental category error to address the problem of unlawfulness by creating more law.

Unlawfulness does not arise because of an absence, in volume or specificity, of law. It arises because of a failure to apply and enforce the law which already exists. In England, this application – of a millennium-old common law tradition to a modern internet phenomenon such as the social networks – is the true task of the technology lawyer. The alternative is the status quo, a situation where the online publishing industry has convinced lawmakers “that its capacity to distribute harmful material is so vast that it cannot be held responsible for the consequences of its own business model.”22

Footnotes

1 Select Committee on Communications, Regulating in a digital world (HL 2017-2019).

2 Ibid.

3 Mark Bunting, ‘Keeping Consumers Safe Online: Legislating for platform accountability for online content’ (Communications Chambers, July 2018) p16.

4 ‘The Department’s response to the Christchurch terrorism attack video – Background information and FAQs’, Department of Internal Affairs <https://www.dia.govt.nz/Response-to-the-Christchurch-terrorism-attack-video> accessed 24th March 2019.

5 Amanda Meade, ‘Sky New Zealand stops airing Sky News Australia after Christchurch massacre coverage’ The Guardian (20th March 2019) <https://www.theguardian.com/media/2019/mar/16/sky-new-zealand-pulls-sky-news-australia-off-air-over-christchurch-massacre-coverage> accessed 24th March 2019; Omar Oakes, ‘Mirror editor apologises over New Zealand massacre video’, Campaign (15th March 2019) <https://www.campaignlive.co.uk/article/mirror-editor-apologises-new-zealand-massacre-video/1579267> accessed 24th March 2019.

6 s131, Films, Videos, and Publications Classification Act 1993 (NZ).

7 ss131A and 124(2)(a), ibid.

8 s124(2)(b), ibid.

9 ‘Facebook says it removed 1.5 million videos of the New Zealand mosque attack’, Reuters (Reuters Technology News, 17th March 2019) <https://uk.reuters.com/article/us-newzealand-shootout-facebook-video/facebook-says-it-removed-1-5-million-videos-of-the-new-zealand-mosque-attack-idUKKCN1QY05X> accessed 24th March 2019.

10 Nick Perry, ‘New Zealand plans new tax for giants like Google, Facebook’, Star Tribune, 18th February 2019 <http://www.startribune.com/new-zealand-plans-new-tax-for-giants-like-google-facebook/505978252/> accessed 24th March 2019.

11 ‘Update 17: Christchurch terror attack; court appearance’, New Zealand Police <https://www.police.govt.nz/news/release/update-17-christchurch-terror-attack-court-appearance> accessed 24th March 2019. At the time of publication six people had been charged with these offences: <https://www.upi.com/Top_News/World-News/2019/04/15/6-charged-with-sharing-images-of-New-Zealand-mosque-attacks/6361555332234/> accessed 14th May 2019.

12 Brandenburg v Ohio, 395 U.S. 444, 447 (1969).

13 US Senate Judiciary and Commerce Committees Joint Hearing, ‘Facebook, Social Media Privacy, and the Use and Abuse of Data’, 10th April 2018.

14 The solicitor was David Allen Green and these exchanges were described in his Financial Times blog on 21st August 2014, http://blogs.ft.com/david-allen-green/2014/08/ accessed 24th March 2019.

15 Erin Egan and Ashlie Beringer, “It’s Time to Make Our Privacy Tools Easier to Find”, Facebook Newsroom 28th March 2018. Press release.

16 ‘Facebook overhauls Trending feature after bias claims’, BBC (BBC News) <https://www.bbc.co.uk/news/technology-37205029> accessed 24th March 2019.

17 The social networks have been described thus repeatedly by Chris Elmore MP, Chair of the All-Party Parliamentary Group on Social Media and Young People’s Mental Health and Well-Being, for example.

18 US Senate Judiciary and Commerce Committees Joint Hearing, ‘Facebook, Social Media Privacy, and the Use and Abuse of Data’, 10th April 2018.

19 Commission recommendation of 1.3.2018 on measures to effectively tackle illegal content online, Brussels, C(2018) 1177 final, paras 25-27.

20 Andrus Ansip (@Ansip_EU), “EU’s limited liability system in #eCommerce should remain backbone of an open, fair and neutral internet. I do not want Europe to become a ‘big brother’ society in online monitoring #NetNeutrality #platforms #MWC18. My speech.” 9:13AM, 26th February 2018. Tweet.

21 ‘Facebook faces £500,000 fine from UK data watchdog’, BBC (BBC News) <https://www.bbc.co.uk/news/technology-44785151> accessed 24th March 2019.

22 Frank LoMonte, ‘The law that made Facebook what it is today’, 11th April 2018 (The Conversation) <http://theconversation.com/the-law-that-made-facebook-what-it-is-today-93931> accessed 24th March 2019.