Lords Communications and Digital Select Committee launches new inquiry into freedom of expression online

November 18, 2020

Debates and exchanges of information and content increasingly take place online. The internet has enabled ways of searching, publishing and sharing information with others that were not previously possible. Consequently, the House of Lords Communications and Digital Select Committee has launched a new inquiry into how the right to freedom of expression should be protected online and how it should be balanced with other rights. 

The Committee points out that freedom of expression is a fundamental right protected by Article 10 of the European Convention on Human Rights. It is also protected under common law and in the International Covenant on Civil and Political Rights. Historically, this right has been understood in terms of what is spoken by individuals or what is written or said in the media. Content posted online arguably occupies an ambiguous middle ground between these two. The right to freedom of expression includes people’s ability to freely search for, receive and communicate information, whether this is face-to-face or mediated across time and space. It comes with responsibilities related to avoiding harm to individuals and groups.

It further notes that the founders of Facebook and Twitter have both described their platforms as a digital equivalent of the public square. The US Supreme Court has noted that such websites “can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard.” However, these ‘public squares’ are controlled by private companies, which are free to ban or censor whatever they wish and whose platforms shape both the nature and visibility of communications transmitted across them.

Huge volumes of user-generated content are uploaded to platforms each day, making AI increasingly important in moderation decisions. This raises questions about the effectiveness and possible biases of moderation algorithms, about the diversity of the people who design them, and about whether the algorithms should be subject to external audits. For example, according to the Committee, two US studies found evidence of widespread racial bias, with algorithms more likely to identify posts by African-Americans as hateful. Google has also been found to rank racist and sexist content highly in search results, which may have the effect of excluding and silencing people in online spaces.

In recent years, there have been many high-profile controversies about action taken by platforms. These include Twitter banning Graham Linehan, creator of Father Ted and The IT Crowd; Twitter preventing users from tweeting a story by the New York Post; Facebook banning the famous ‘napalm girl’ photograph from the Vietnam War before reversing its decision; YouTube taking down videos which “go against World Health Organisation recommendations” on Covid-19; and Instagram limiting the visibility of posts by black and plus-size women.

Websites have also been criticised for not doing enough to remove content which breaks the law or community standards. More than 1,200 companies and charities, including Adidas, Unilever and Ford, suspended their advertising on Facebook in July 2020 to press the company to “stop valuing profits over hate, bigotry, racism, antisemitism, and disinformation.” Facebook has since set up an Oversight Board, which will have the final say in its review of ‘highly emblematic’ content moderation decisions on the company’s platforms.

The UK Government aims, through its upcoming Online Harms Bill, to make the UK the safest place in the world to go online. How this legislation should balance responding to harms with protecting freedom of expression is contentious. Other developments include the Government’s plans to establish a Digital Markets Unit to strengthen digital competition regulation, the Law Commission’s consultation on reform of online communications offences, and the growing global debate about whether platforms should be liable for the content they host.

Against this background, the key questions of the inquiry are as follows:

  • Is freedom of expression under threat online? If so, how does this affect individuals differently, and why? Are there differences between exercising freedom of expression online and offline?
  • How should good digital citizenship be promoted? How can education help?
  • Is online user-generated content covered adequately by existing law and, if so, is the law adequately enforced? Should ‘lawful but harmful’ online content also be regulated?
  • Should online platforms be under a legal duty to protect freedom of expression?
  • What model of legal liability for content is most appropriate for online platforms?
  • To what extent should users be allowed anonymity online?
  • How can technology be used to help protect freedom of expression?
  • How do the design and norms of platforms influence freedom of expression? How can platforms create environments that reduce the propensity for online harms?
  • How could the transparency of algorithms used to censor or promote content, and the training and accountability of their creators, be improved? Should regulators play a role?
  • How can content moderation systems be improved? Are users of online platforms sufficiently able to appeal moderation decisions with which they disagree? What role should regulators play?
  • To what extent would strengthening competition regulation of dominant online platforms help to make them more responsive to their users’ views about content and its moderation?
  • Are there examples of successful public policy on freedom of expression online in other countries from which the UK could learn? What scope is there for further international collaboration?

The deadline for written evidence is 15 January 2021.