Do social network providers require (further?) regulation? #2

June 25, 2019
Social networks are websites that allow people to communicate and share information on the internet.1 ‘Regulation’ is ‘a rule or directive maintained by an authority’.2 The following analysis revolves around four questions.
(1) How are social network providers currently regulated?
(2) Why would further regulation be desirable?
(3) Why would further regulation be undesirable?
(4) Do social network providers require further regulation?
I will argue that social network providers do not require further regulation. Instead, the most effective way to remedy the problem with social networks – that online content can harm those who use social networking websites (hereafter simply ‘users’) – is to develop existing technologies which identify and remove harmful content without human intervention.
(1) How are social network providers currently regulated?
At present, no external body sets rules governing the content that may be displayed on social networks. This is different from the regulation of television broadcasting, which is achieved in two stages. First, Parliament sets out ‘statutory standards objectives’ specifying what regulation aims to achieve (e.g. protection from ‘harmful and offensive material’).3 Second, ‘Ofcom’ (an external regulatory body created by statute) creates a Broadcasting Code containing specific rules to vindicate Parliament’s objectives (e.g. programmes cannot glamourise ‘violent, dangerous or seriously antisocial behaviour’).4 By contrast, Parliament sets no objectives for the regulation of social network providers and no external body reviews online content.
Instead, social network providers self-regulate. Websites have ‘Rules’5 or ‘Community Standards’,6 laying down substantive rules with which online content must comply. Failure to comply may lead to the content being removed from the website. Most social networks prohibit similar types of content from being uploaded. For example, Tumblr’s prohibition of any ‘content that actively promotes or glorifies self-harm’ is representative of most providers’ policies.7 Similarly, the ‘zero-tolerance’ policy that Twitter has toward content featuring child sexual exploitation is common to all social networks.8 Other commonly prohibited types of content include those displaying sexual or violent activity, or those violating another’s intellectual property rights.9 In addition, most social networks prohibit users from abusing other users. Twitter prohibits ‘behaviour that harasses, intimidates, or uses fear to silence another’s voice’.10 Similarly, Facebook excludes any ‘direct attack on people based on… race, ethnicity, national origin, religious affiliation… [etc]’.11
Violations of these rules are identified in two different ways. Commonly, users are relied upon to ‘report’ or ‘flag’ content which they think violates the rules. Social network employees (called ‘moderators’) will then review the content to determine whether a rule violation has in fact occurred. More recently, social network providers have begun using technology to identify specific forms of prohibited content as it is being uploaded. For example, Microsoft’s ‘PhotoDNA’ can be used to detect child sexual exploitation. Of the 487,363 Twitter accounts suspended for uploading such content between January and June 2018, 97% were flagged automatically by programmes like PhotoDNA.12
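To make the mechanism concrete, the sketch below illustrates the general idea behind hash-matching tools such as PhotoDNA: each upload is fingerprinted and compared against a database of fingerprints of known prohibited images. The hash function, database and function names here are simplified stand-ins invented for illustration; PhotoDNA itself uses a proprietary perceptual hash that tolerates resizing and re-encoding, which an ordinary cryptographic hash does not.

```python
import hashlib

# Simplified stand-in for a database of fingerprints of known prohibited images.
# Real systems use perceptual hashes that survive resizing and re-encoding;
# a cryptographic hash is used here purely for illustration.
KNOWN_PROHIBITED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a fingerprint of the uploaded image."""
    return hashlib.sha256(image_bytes).hexdigest()

def screen_upload(image_bytes: bytes) -> bool:
    """Return True if the upload matches a known prohibited image
    and should be blocked before it is published."""
    return fingerprint(image_bytes) in KNOWN_PROHIBITED_HASHES

# Example: an upload whose fingerprint matches the database is blocked
# automatically, with no human moderator involved.
if screen_upload(b"test"):
    print("Upload blocked and reported for review")
else:
    print("Upload allowed")
```

The point of the sketch is simply that matching against known material requires no human judgement at all, which is why this category of content can be flagged automatically at such high rates.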
Once a rule violation has been identified, various sanctions can be imposed. The appropriate sanction will reflect the user’s history on the website and the severity of the violation. For example, Facebook will ‘warn’ someone for a first violation, but may ‘disable their profile’ if violations continue.13 However, Twitter will respond to any Tweet displaying child sexual exploitation with ‘immediate and permanent suspension’, and ban the individual from creating Twitter accounts in the future.14
So, while no external body regulates social network providers, the websites are subject to self-imposed rules which regulate content that is posted online. Non-compliant content will be ‘flagged’ or ‘reported’ for violating these rules. If a ‘moderator’ establishes that the rules have indeed been violated, the relevant user can be banned from using the website in future.

(2) Why would further regulation be desirable?
One of the main incentives for further regulating social network providers is to reduce the ‘harms’ that websites can inflict on their users. ‘Harmful’ content can come in a number of forms, four of which are set out below.

(i) Sexual content
According to Dr Justin Coulson, exposure to sexual content is ‘empirically proven to have a desensitizing impact on both adults and children’.15 Desensitisation is harmful because it distorts expectations of present or future relationships. Furthermore, some sexually explicit images are shared without the consent of the pictured individual, which is a gross and upsetting invasion of their privacy. In 2018, 0.11–0.13% of Facebook views were of content that displayed adult nudity or sexual activity.16 Greater regulation might further reduce the damage caused by such content.

(ii) Content promoting self-harm
According to an All Party Parliamentary Group report published on 18th March 2019, entitled ‘#NewFilters’, young people use social media to find support for mental health conditions. However, while doing so, they’re at a ‘high risk of unintentional exposure to graphic content’.17 Indeed, the 2017 suicide of fourteen-year-old Molly Russell has been linked to the viewing of self-harm-related content on Instagram.18 Swifter removal of such content would reduce the number of young people whose health is compromised when seeking support for mental health conditions.

(iii) Content inciting terrorism or hatred
A 2018 study by the University of Warwick investigated a possible link between Donald Trump’s Twitter activity and anti-Muslim hate crimes. The study concluded that ‘with the start of Donald Trump’s presidential campaign, social media may have come to play a role in the propagation of hate crimes…’19 Between January and June 2018, Twitter suspended 205,156 separate accounts for violations related to promotion of terrorism.20 Greater regulation might further impede dissemination of content which risks inciting hateful attacks.

(iv) ‘Politically disruptive’ content
It’s been alleged that Twitter was used by a Russian government-linked organisation called the Internet Research Agency in an attempt to influence the results of the 2016 US Presidential Election.21 In 2018, Twitter identified 50,258 automated accounts that were ‘Russian-linked and Tweeting election-related content’, reaching a minimum of 677,775 US citizens during the election period. Twitter described its findings as posing ‘a challenge to democratic societies everywhere’.22
This ‘politically disruptive’ content could be removed if further regulation of social network providers entailed more effective identification of similar automated accounts.
As these four points illustrate, the reduction of harms caused by offensive online content is a weighty incentive for further regulation.

(3) Why would further regulation be undesirable?
Further regulation may impede the free sharing of information by restricting individuals’ ability to express themselves. This is important; the European Convention on Human Rights grants every UK citizen the right to impart opinions and receive information free from public authority interference.23 Free sharing of information is important for the following four reasons, each of which might be threatened by further regulation of social network providers.

(i) Free sharing of information aids campaigning movements
Users may share offensive content online to challenge perceived societal inequalities. For example, the ‘Free the Nipple’ movement uses images of female breasts to criticise the ‘double standards regarding the censorship of female breasts in public’.24 To accommodate such advocacy, Facebook has a ‘nuanced’ policy on adult nudity in recognition of the fact that ‘nudity can be shared for a variety of reasons, including as a form of protest [or] to raise awareness about a cause’.25 If ‘further regulation’ of social networking entailed prohibition of all forms of objectionable online content regardless of context, individuals’ ability to campaign against perceived inequalities might be stifled.

(ii) Free sharing of information aids ‘whistleblowers’
A ‘whistleblower’ is a person who shares information about a person or organisation engaged in unlawful or immoral activity. Social networks provide whistleblowers with a platform on which to disseminate (often confidential) information to a large audience quickly, easily and without financial cost. On 12th March 2019, the European Commission recognised the value of whistleblowers who ‘do the right thing for society’, notwithstanding the legal liability they may incur by breaching confidentiality agreements.26 If ‘further regulation’ restricted users’ ability to post private or copyrighted content, whistleblowers’ ability to draw attention to illicit activities might be curtailed.

(iii) Government regulation of social media risks the suppression of political dissent
State regulation of online content entails a risk that political dissent (or entire historical events) will be removed from public discourse. For example, in China, the phrase ‘Tiananmen Square’ returns no results on internet search engines. As a result, in 2015, only 15% of university students recognised images of the 1989 government crackdown on political dissidents.27 While one might feel that similar censorship in the UK is unlikely, the risk of subtler forms of misinformation undoubtedly arises whenever a government regulates the content that may be posted online.

(iv) Social networks provide evidence of criminality
Law enforcement can request that social network providers produce the personal details of anyone who discloses online that a criminal offence has been committed. Between January and June 2018, Facebook complied with 91% of the 10,325 requests that UK law enforcement made about the holders of 110,325 separate accounts.28 If ‘further regulation’ of social network providers entailed prohibiting all content that displayed unlawful conduct, UK law enforcement would lose a means by which to gather information on suspected crimes.
The risk that free sharing of information might be stifled is an important deterrent against further regulation of social network providers.

(4) Do social network providers require further regulation?
I don’t think that social network providers require further regulation. While it’s true that harmful content abounds online, the appropriate remedy doesn’t necessarily lie in more stringent regulation. In fact, when a user sustains harm by viewing offensive online content, their harm flows not from the lack of a rule to prohibit such content, but from a failure to identify the fact that the content violates the platform’s existing rules. It follows that the amount of harmful content on social networks will only decrease when the measures used to identify rule violations are made more effective.
It seems that the best way to go about this is to further develop the technologies that providers already use to identify rule violations. The volume of online content posted each day makes it highly impractical to expect a human moderator to review every post. For example, 500 million Tweets are sent daily,29 but, as of 2018, Twitter had only 3,920 staff.30
To review this volume of online content effectively, social network providers could either hire more human moderators or develop the technologies already employed to automatically identify offensive content. The latter option has proven to be a highly effective way to identify certain types of rule violations. Of the 3m Facebook posts that incited terrorism between July and September 2018, technologies removed 99.7% without any human involvement.31 YouTube’s 2018 transparency report illustrates that technologies can also effectively review video content. Between July and September 2018, YouTube removed 7.8m videos, 81% of which were first detected by machines; of these, 74.5% had never received a single view.32
Going forward, these technologies might be further developed to automatically identify a greater range of content which potentially violates websites’ rules. On 15th March 2019, Facebook’s technologies failed to recognise livestreamed video footage of terrorist attacks on two New Zealand mosques as potentially ‘Violent and Graphic’, contrary to Facebook’s ‘Community Standards’. If these technologies had been programmed to prohibit first-person footage involving weaponry, the livestream might have been blocked before the terrorist attacks began. Such development of existing technologies appears the most effective way to review the massive volume of online content for compliance with social network providers’ existing rules.
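As a rough illustration of how automated review and human moderation might fit together, the sketch below scores each post with a classifier and either removes it automatically, queues it for a human moderator, or publishes it. The thresholds, names and keyword check are hypothetical stand-ins invented for illustration and do not describe any provider’s actual system.

```python
from dataclasses import dataclass

# Hypothetical thresholds; a real platform would tune these per policy area.
BLOCK_THRESHOLD = 0.95
REVIEW_THRESHOLD = 0.60

@dataclass
class Post:
    post_id: str
    text: str

def violation_score(post: Post) -> float:
    """Placeholder for a trained classifier estimating how likely a post is
    to violate the platform's rules (0.0 to 1.0). A real system would use a
    machine-learning model; this stand-in checks a toy keyword instead."""
    return 0.99 if "prohibited-material" in post.text else 0.05

def triage(post: Post) -> str:
    """Route a post according to the classifier's confidence."""
    score = violation_score(post)
    if score >= BLOCK_THRESHOLD:
        return "remove automatically"       # clear violations: no human needed
    if score >= REVIEW_THRESHOLD:
        return "queue for human moderator"  # borderline cases get human review
    return "publish"                        # presumed compliant

print(triage(Post("1", "holiday photos")))            # publish
print(triage(Post("2", "prohibited-material here")))  # remove automatically
```

The design point is that automation handles the clear-cut bulk of violations, while scarce human moderators are reserved for borderline content; extending the classifier to new categories (such as first-person footage involving weaponry) is the kind of development the paragraph above contemplates.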
Importantly, it’s difficult to conceive of an appropriate manner in which social network providers actually could be further regulated. One way of going about this might be to fine social network providers for failing to remove content which violates the website’s rules. Indeed, in 2017, the German Bundestag passed a law known as ‘NetzDG’. This law exposes social network providers to fines of up to €50m for failing to remove ‘obviously illegal’ online content within 24 hours of it being reported.33 The problem with this is that it gives social network providers an incentive to disallow controversial online content, which risks chilling users’ ability to express themselves freely. David Kaye of the Office of the High Commissioner for Human Rights made this criticism of ‘NetzDG’: that ‘the short deadlines, coupled with… severe penalties, could lead social networks to over-regulate expression… as a precaution to avoid penalties… [which] would interfere with the right to seek, receive and impart information’.34
More importantly, further regulation fails to address the real problem: that social networks can harm users not because regulation is insufficiently stringent, but because it is difficult to identify content which violates websites’ existing rules. On 18th March 2019, the All Party Parliamentary Group’s ‘#NewFilters’ report recommended that the government should establish a ‘duty of care on all social media companies with registered UK users aged 24 and under in the form of a statutory code of conduct, with Ofcom to act as regulator’.35
There are three problems with this suggestion. 
First, imposing a ‘duty of care’ on social network providers will not reduce the amount of harmful content online. Imposing a ‘duty of care’ presumably aims to create an incentive to remove offensive content quickly, so as to avoid incurring a financial penalty for breaching that duty. However, as explained above, such a move risks raising an undesirable presumption in favour of disallowing online content.
Second, it’s difficult to see how a ‘statutory code of conduct’ would protect users from harm any more effectively than websites’ existing ‘Rules’ and ‘Community Standards’. Not only would a statutory code of conduct presumably override all websites’ existing rules – rules which have evolved to best regulate the specific content posted on each unique platform – but doing so would fail to recognise the existing difficulty associated with identifying violations of such rules. 
Third, appointing Ofcom as a regulator would likely decrease the number of ‘moderators’ of online content, making it even harder to identify harmful material. While Ofcom employed 868 staff by the end of 2018,36 Facebook had between 4,500 and 7,500 ‘moderators’.37 It’s difficult to see how appointing Ofcom as regulator would increase the effectiveness of identifying offensive material, given that those 868 staff would presumably be tasked with regulating content posted on several different social networks at once.

Conclusion
I propose that the most effective way to decrease the harm caused by offensive online content is to develop existing technologies which scan text, video and images for prohibited material. The real problem with social networks is that the volume of new content posted online makes it difficult to identify violations of websites’ rules. This problem will not be remedied by introducing more stringent rules, nor by vesting authority to moderate online content in an external regulator. Technologies like Microsoft’s PhotoDNA have proven effective at identifying violations of social networks’ rules and, if developed to recognise a greater range of potential rule violations, seem likely to mitigate harms caused by offensive content to a greater extent than could be achieved by further regulation.
Jordan Briggs is a student at the University of Oxford
————————–
Notes and links
1 https://dictionary.cambridge.org/dictionary/english/social-network
2 https://en.oxforddictionaries.com/definition/regulation
(1) How are social network providers currently regulated?
3 https://www.ofcom.org.uk/__data/assets/pdf_file/0016/132073/Broadcast-Code-Full.pdf
4 ibid
5 e.g. the ‘Twitter Rules’; see note 10
6 e.g. Facebook’s ‘Community Standards’; see note 13
7 https://www.tumblr.com/policy/en/community
8 https://help.twitter.com/en/rules-and-policies/sexual-exploitation-policy
9 cf. notes 8 and 11
10 https://help.twitter.com/en/rules-and-policies/twitter-rules
11 https://www.facebook.com/communitystandards/hate_speech
12 https://transparency.twitter.com/en/twitter-rules-enforcement.html
13 https://www.facebook.com/communitystandards/
14 https://help.twitter.com/en/rules-and-policies/sexual-exploitation-policy
(2) Why would further regulation be desirable?
15 https://ifstudies.org/blog/the-problem-with-exposing-kids-to-sexual-and-violent-content/
16 https://transparency.facebook.com/community-standards-enforcement
17 https://www.rsph.org.uk/uploads/assets/uploaded/23180e2a-e6b8-4e8d-9e3da2a300525c98.pdf
18 https://www.theguardian.com/uk-news/2019/mar/18/molly-russell-death-police-likely-to-access-teenagers-phone-data
19 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3149103
20 https://transparency.twitter.com/en/twitter-rules-enforcement.html
21 https://www.theguardian.com/us-news/2016/dec/16/qa-russian-hackers-vladimir-putin-donald-trump-us-presidential-election
22 https://blog.twitter.com/official/en_us/topics/company/2018/2016-election-update.html
(3) Why would further regulation be undesirable?
23 http://www.un.org/en/universal-declaration-human-rights/
24 https://www.academia.edu/20907784/Free_the_Nipple
25 https://www.facebook.com/communitystandards/adult_nudity_sexual_activity
26 http://europa.eu/rapid/press-release_IP-19-1604_en.htm
27 https://www.theguardian.com/books/booksblog/2015/jul/21/louisa-lim-the-peoples-republic-of-amnesia-tiananmen-revisited-china
28 https://transparency.facebook.com/government-data-requests/country/GB
(4) Do social network providers require further regulation?
29 http://www.internetlivestats.com/twitter-statistics/
30 https://www.statista.com/statistics/272140/employees-of-twitter/
31 https://transparency.facebook.com/community-standards-enforcement
32 https://youtube.googleblog.com/2018/12/faster-removals-and-tackling-comments.html
33 https://transparencyreport.google.com/netzdg/youtube?hl=en
34 https://www.ohchr.org/Documents/Issues/Opinion/Legislation/OL-DEU-1-2017.pdf
35 https://www.rsph.org.uk/our-work/policy/wellbeing/new-filters.html
36 https://www.ofcom.org.uk/__data/assets/pdf_file/0012/115230/annual-report-1718-accessible.pdf
37 https://www.facebook.com/zuck/posts/10103695315624661?notif_t=notify_me&notif_id=1493820261300939