Do social network providers require (further?) regulation?

July 23, 2019

The question invites us to consider several sub-questions. First, whether social network providers should be regulated. Second, how they are regulated. Third, whether this regulation works and, if not, what remedies might be available. To answer these questions, I shall concentrate on what I think are the most pressing problems with social networks: child protection, data privacy and political disinformation.

Any time we connect and relate with one another, we are socially networking. Social networks in the physical and Internet worlds take the same form, but their matter differs radically. The matter in the physical world is rarely controlled or owned by anyone else; our interactions in a network belong to us, the actors. We are at a business conference and I ask about your recent work. I own those words and the curiosity and motivations that lie behind them. Likewise, you own your response and all the knowledge and experience that goes into it. Inaction can also be a means of control: not telling people, for instance, about our daily movements, sexual habits and political views. In short, we are agents, and agency is the matter of most social networks in the physical world.

By contrast, the matter of social networks in the Internet world is structure, a structure determined by higher agents. These higher agents create structures of code that make the Internet world look and feel very much like the physical one. We can have the same conversation about your recent work through a direct messenger or a video call, as if standing by the tea and coffee urns at the conference. However, the higher agent has the power to knock down the walls of the conference centre, turn the tea and coffee into wine and mute us all in a flash. It is not unduly hyperbolic to say these people are like gods. They have the power to create, destroy, see, hear, guide and influence the human interaction that takes place in their corner of the Internet world.

If you are on board with the concept of law, you probably believe this power needs to be regulated. All schools of jurisprudence and political thought, even the most minimalist laissez-faire ones, will call for regulation to protect private property. Our property here could be intellectual, such as our blog, music or code, or personal, such as our private information. In 1996, the late John Perry Barlow declared that the Internet world was and should be independent from ‘tyrannical’ nation states and the laws (and mores) of the physical world.1 But then the Internet got physical. Children are killing themselves after indulging in Instagram posts that glorify suicide, our daily movements and Web searches are tracked, and elections and debates are corrupted by disinformation.2 If the most basic duty of nation states is to protect their people, they are duty-bound, as John Locke says, to use the law as a fence to stop us falling into bogs and precipices.3 As for Barlow’s followers, so long as they enjoy the protections, freedoms and rights of the ‘tyrannical’ nation state, they must abide by its laws. By reverting to first principles, it would seem most would answer the question “do social network providers require regulation?” with an emphatic “Yes!” So, we move on to consider whether they need further regulation.

Let us now take an unavoidably brief and selective look at the state of regulation of our three issues: child protection, data privacy and political disinformation. By child protection I mean the ease with which children fall into harm, such as posts glorifying self-harm, pornography, violence and paedophilia. By data privacy I mean the various ways social network providers and their partners take away our agency and freedom to choose with whom and how we share our private lives. By political disinformation I mean the deliberate emission and transmission of falsehoods to pollute political debate.

The picture is one of patchy, ad hoc regulation. No state yet has a regulator dedicated to social network providers. In the US, the First Amendment prohibits the Government from regulating what is said on social networks. There is no legal obligation on their providers to moderate anything said on their networks save for that which is criminal. And there is no comprehensive federal law regulating the collection and use of personal data. The Federal Trade Commission Act prohibits unfair or deceptive practices and is often relied on in data protection actions.4 The Federal Trade Commission has been active in enforcing the Children’s Online Privacy Protection Act.5 There is a myriad of other federal laws regulating data use across various sectors of the economy, but none applies specifically to social networks.6

In the United Kingdom, social networks are subject to little or no regulation.7 Ofcom is the media and communications regulator which ensures television and radio content is legal and not unduly offensive or harmful. The first section of its Code protects children from any such content.8 However, its governing statute has not been amended to cover Internet-based providers, save for the BBC, and this will remain so until Parliament changes the law. With regard to the misuse of personal private information, individual victims are left to sue for misuse of private information or breach of the Data Protection Act 1998 or 2018. They may also have a cause of action for unlawful interference with their Article 8 right to a private life (Human Rights Act 1998). If the data misuse or breach affects many people, the Information Commissioner may get involved; the Commissioner is currently investigating Facebook and Cambridge Analytica for their parts in harvesting, without consent, the profile information of 87 million Facebook users.9 Finally, there is no law against disinformation spread on social networks. By contrast, most traditional and mainstream media organisations are subject to scrutiny by IMPRESS and, to a lesser degree, the Independent Press Standards Organisation. Both apply standards of “good journalism” to media organisations, which includes looking at the accuracy of content.10

The European Union has made some moves towards protecting children, and our agency over our private lives, from social network providers. The General Data Protection Regulation 2016 (GDPR) and the Audiovisual Media Services Directive (AVMSD) harmonised regulation of data protection and online video content respectively.11 The GDPR is perhaps the best solution to our three problems the world has seen. Its ambit is broader than privacy alone, including rights and freedoms related to “social protection, public health and humanitarian purposes”.12

The European Convention on Human Rights (ECHR) has a role to play too. The risks that social media pose to children’s lives could be guarded against by Article 2, the right to life. The type of harm arising from the Cambridge Analytica scandal can be guarded against by Protocol 1, Article 3, the right to “free elections”.13 The AVMSD further protects children against harmful content and advertising. Providers are now required to put in place age-verification systems and parental controls and to act quickly if content is flagged as harmful. Furthermore, it prohibits content that incites hatred based on race, sex, religion or nationality. Vietnam and Turkey have made similar moves towards regulating online on-demand video content.14

On the whole, patchiness prevails in the global effort to regulate social networks. Legislative passivity may explain this patchiness. Technologies have developed quickly in the face of legislators whose average age across the globe was, at the last count, 53.15 But this does not explain everything. Right from the start, legislators actively decided not to hold social network providers liable for the content they host. In the EU, liability was limited by the 2000 European e-Commerce Directive, which exempts “intermediaries” from liability if all they do is “play a neutral, merely technical and passive role towards the hosted content”.16 In the US, Section 230 of the Communications Decency Act 1996 exempts providers of interactive computer services from being treated as the publisher of any information provided by another information content provider.17

Facebook is the apotheosis of this legal delusion – perhaps a huge print of René Magritte’s La Trahison des images hangs in their Menlo Park office. “We are not a media company”, insist Mark Zuckerberg and Sheryl Sandberg,18 despite WIRED magazine pointing out that Facebook is the United States’ largest supplier of news, with all the mod coms you would expect to see at a $50 billion media company (mod coms: moderators and commissioned content).19 Both Twitter and Google have refused to cooperate with the UK Press Recognition Panel’s (PRP) review into disinformation and fake news on the grounds that they are “not publishers”.20 Twitter’s refusal expressly cited section 41 of the Crime and Courts Act 2013 and the e-Commerce Directive.21 Both the Act and the Directive are acts of legislation, positive decisions to limit the liability of social network providers. This is not passivity.

Like weeds around flagstones, our three harms have grown out of the spaces between the laws. But this is the result of activity, not passivity, and must be counteracted with further regulation. There are, as I see it, three options: 1) expand the jurisdiction of current law; 2) create new law; and 3) create a body with the power to judge and punish.

Let us examine the first option. There are laws that would help, but their jurisdiction is limited to traditional media because legislators are not sure whether social network providers are media companies. Once they decide that they are, one way to address the harms is to use existing law. This decision may come soon: as the UK PRP notes, “the boundaries between the press and some social media platforms are dissolving”.22

However, in my view, the network/media question is a red herring. While it would help if we all agreed that 1 + 1 = 2 and called them media companies, we should not view this happy day for taxonomists as the end of our three harms. Compared with traditional media, the technology of social networks/media providers is radically different and fast-changing. If courts and regulators have to play catch-up, jurisprudence could quickly become muddied. Cases involving traditional and social media companies would be easily distinguishable on their facts, meaning the same point of law may have to be considered at least twice. This could effectively lead to two branches of law. Again, the taxonomist is happy, but it could be decades before settled case law gives social media anything like the present level of protection from harm that exists for traditional media: we do not have decades. Furthermore, it could do harm to traditional media. The UK PRP recently pointed to the irony of the campaign by the mainstream press to have social network companies more heavily regulated, warning that “an unintended consequence of this media campaign is that a system of state regulation for social media platforms could be implemented that is subsequently applied to the press.” For these reasons, I do not think expanding the jurisdiction of existing law is the best option.

The second option is to create new law. The most prominent proposal calls for a new “duty of care” on social network providers.23 This treats social networks as public spaces and imposes on their providers a responsibility for reasonably foreseeable harms that arise within them.24 It places them in a new legal category, one that is neither platform nor publisher, so that they “assume legal liability for content identified as harmful after it has been posted by users.”25 This poses difficulties. What is harmful, and who gets to decide? Is it equated with illegality, or something else? Freedom of expression campaigners are concerned about this ambiguity.26 They argue that defining harm beyond what is illegal risks unjustifiably restricting free expression. Others argue that there is little evidence of online harm, or of harm towards children.27 However, a 2012 study published in the American Journal of Public Health found evidence of an increased risk of suicide among vulnerable people who use social media.28

The question of who, or what, decides also arises with the take-down model of Germany’s Network Enforcement Law. This requires social network providers to remove illegal content within 24 hours of notification and is backed by fines of up to €50 million. As Mark Bunting points out, this “belongs to a previous technological era.”29 Content is now blocked by algorithms, not people. The effectiveness of these tools is hard to assess, and there is always the concern that they are too puritanical, removing legitimate content. Content removal, as under the duty of care and take-down models, is sure to help ameliorate our three harms, but it is not a long-term fix; without proper oversight it could easily become ultra-performative censorship. (In the same way, the analogous duty of care in occupiers’ liability means that buildings are now ornately decorated with signage spelling out every possible risk to life.)

The third option is to create a body with the power to judge and punish. There are a few ideas of what this should look like, so I have taken the essence of some and applied it at the international level.30 I have already extolled the virtues of the GDPR, but I accept that it was created by a community whose members are used to obeying community rules and share an understanding of rights and freedoms. As much as I want to, I cannot see the European model working at a global level in the foreseeable future. However, any such oversight body should take the GDPR as its model for a less harmful future. It should engender a language of rights and freedoms and assess risks and harms accordingly.

  • The EU should lead the way in establishing an international oversight body based (notionally) on the rights and freedoms of the ECHR and GDPR.
  • This overseer would have the power to require social network providers to assess, and report back to it and to any national regulators on, the various issues arising from their networks.
  • The overseer should make recommendations for action to network providers, as well as recommendations to state regulators with regard to penalties. Penalties should include blocking connections to, and isolating users in, parts of the world where harms such as the emission of harmful content or data misuse are being committed.

This model places the physical agents, that is, us as represented by the overseer, above the higher Internet agents. It reclaims our agency from the gods of the digital heavens and works towards a sustainable and less harmful future for social networking. Only by using law as a fence can we save ourselves from falling into the bogs and over the precipices strewn around the Internet by the ‘freemen’ of Cyberspace.

Footnotes

1 John Perry Barlow (1996): A Declaration of the Independence of Cyberspace. Available from https://www.eff.org/cyberspace-independence or to listen to at http://www.departmentofrecords.co/dor1.html.

2 Channarong Intahchomphoo (2018): “Social Media and Youth Suicide: A Systematic Review”. Twenty-Sixth European Conference on Information Systems, Portsmouth, UK. Available from http://ecis2018.eu/wp-content/uploads/2018/09/1119-doc.pdf. See also Guardian (2019): Social media urged to take ‘moment to reflect’ after girl’s death. Available from https://www.theguardian.com/media/2019/jan/30/social-media-urged-to-take-moment-to-reflect-after-girls-death.

3 John Locke (1988): “An Essay Concerning the True Original, Extent and End of Civil Government” in Two Treatises of Government. (Ed. Peter Laslett). (Cambridge: Cambridge University Press); p. 279.

4 15 U.S.C. §§41-58.

5 15 U.S.C. §§6501-6506.

6 The Financial Services Modernization Act regulates the use and collection of financial information, 15 U.S.C. §§6801-6827; the Health Insurance Portability and Accountability Act regulates medical information, 42 U.S.C. §1301 et seq.; personal contact details are regulated by the Controlling the Assault of Non-Solicited Pornography and Marketing Act, 15 U.S.C. §§7701-7713 and 18 U.S.C. §1037, and the Telephone Consumer Protection Act, 47 U.S.C. §227 et seq.; and the Electronic Communications Privacy Act, 18 U.S.C. §2510, regulates the interception of electronic communications.

7 Ofcom (2019): Addressing Harmful Content Online. Available from https://www.ofcom.org.uk/__data/assets/pdf_file/0022/120991/Addressing-harmful-online-content.pdf.

8 Ofcom (2019): Broadcasting Code. Available from https://www.ofcom.org.uk/tv-radio-and-on-demand/broadcast-codes/broadcast-code.

9 Information Commissioner (2018): Investigation into the use of data analytics in political campaigns. Available from https://ico.org.uk/media/action-weve-taken/2259371/investigation-into-data-analytics-for-political-purposes-update.pdf.

10 For IMPRESS, see https://impress.press/standards/; and IPSO https://www.ipso.co.uk/editors-code-of-practice/#Accuracy.

11 For the GDPR, see https://gdpr-info.eu; and AVMSD https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:32010L0013.

12 Recital 73 GDPR, see https://gdpr.eu/recital-73-restrictions-of-rights-and-principles/.

13 See Eerke Boiten (2019): Social media doesn’t need new regulations to make the internet safer: GDPR can do the job. Available from https://inforrm.org/2019/02/16/social-media-doesnt-need-new-regulations-to-make-the-internet-safer-gdpr-can-do-the-job-eerke-boiten/. For the ECHR, see https://www.echr.coe.int/Documents/Convention_ENG.pdf.

14 For Vietnam, see https://www.bakermckenzie.com/en/insight/publications/2018/12/views-on-ott-services; and Turkey, see https://s3.amazonaws.com/documents.lexology.com/71a21f85-9db9-4486-b6d3-879101fb52aa.pdf.

15 Global Parliamentary Report (2012): Facts and Figures. Available from http://archive.ipu.org/dem-e/gpr/media/index.htm. For the UK figures, see researchbriefings.files.parliament.uk/documents/CBP-7483/CBP-7483.pdf.

16 Directive 2000/31/EC, Recital 42.

17 See https://www.law.cornell.edu/uscode/text/47/230

18 Tech Crunch (2016): Zuckerberg implies Facebook is a media company, just “not a traditional media company”. Available from https://techcrunch.com/2016/12/21/fbonc/.

19 WIRED (2017): Memo to Facebook: how to tell if you’re a media company. Available from https://www.wired.com/story/memo-to-facebook-how-to-tell-if-youre-a-media-company/. 

20 Press Recognition Panel (2019): Report on the recognition system. Available from https://pressrecognitionpanel.org.uk/recognitionreportfeb19/.

21 Ibid, p.17.

22 Ibid, p.1.

23 This idea is championed by Professor Lorna Woods at the University of Essex and William Perrin, see https://www.carnegieuktrust.org.uk/publications/internet-harm-reduction/. See also, Digital, Culture, Media and Sport Committee (DCMS) (2019): Disinformation and “Fake News”. Available from https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/news/fake-news-report-published-17-19/. See also Tom Watson (2019): Speech on fixing the distorted digital market. Available from https://labour.org.uk/press/tom-watson-speech-fixing-distorted-digital-market/.

24 Mark Bunting (2019): Keeping consumers safe online: regulation of online platforms. Available from https://blogs.lse.ac.uk/mediapolicyproject/2019/02/28/keeping-consumers-safe-online-regulation-of-online-platforms/.

25 DCMS report, p. 10.

26 Index on Censorship (2019): Wider definition of harm can be manipulated to restrict media freedom. Available from https://www.indexoncensorship.org/2019/02/wider-definition-of-harm-can-be-manipulated-to-restrict-media-freedom/.

27 Sky News (2019): We know online harms exist – but this concept has one small weakness. Available from https://news.sky.com/story/sky-views-we-need-to-tackle-online-harms-but-nobody-knows-what-they-are-11630239. See Research4Committees (2018): Child safety online: definition of the problem. Available from https://research4committees.blog/2018/02/07/child-safety-online-definition-of-the-problem/.

28 David Luxton (2012): “Social Media and Suicide: A Public Health Perspective”. American Journal of Public Health 102. Available from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3477910/.

29 Mark Bunting (2019).

30 See Mark Bunting (2019); Eerke Boiten (2019); and Emma Goodman (2019): The DCMS Select Committee’s proposals for social media regulation: would they work? Available from https://blogs.lse.ac.uk/mediapolicyproject/2019/02/21/the-dcms-select-committees-proposals-for-social-media-regulation-would-they-work/.

John-Paul Tettmar-Saleh has recently completed the Bar Professional Training Course at the University of Law, London Bloomsbury. He is also studying for an MA in Intellectual History.