Sir Henry Brooke Student Essay Prize Highly Commended: The greatest regulatory challenges for the Internet of 2030

This entry to our annual Sir Henry Brooke Student Essay Prize by Karolina Zielinska was highly commended by the judges. This is the final winning entry we will publish.

Our entrants were asked to answer this question: “At the start of the 2010s, two billion people used the Internet, MySpace rivalled Facebook as the most popular social network, iPads did not exist and few people had swapped their trusty Nokias for iPhones. Peer-to-peer networking was seen as an existential threat to copyright industries and net neutrality was not yet the law anywhere, while cloud computing was unknown to the general population. The future was unpredictable. What, where, how, and when will the greatest regulatory challenges for the Internet of 2030 be?”

------------------

Since the first commercial public use of the Internet in 1989,1 the speed of its technological development has made regulating this behemoth a challenging task – indeed, the future of the Internet continues to be unpredictable to a substantial degree. However, the areas likely to attract greater regulatory focus in the years to come can be divined by considering challenges that have recently emerged and are not yet necessarily considered significant.

When considering the ways in which the Internet will have developed by 2030, care must be taken not to stray too far into the realm of science fiction. The “digital divide” has been identified as a key issue that the European Commission aims to tackle further in its next term, focusing on the digital literacy aspect via educational programmes.2

However, economic barriers to Internet usage must also be addressed,3 perhaps most effectively at a national level. If plans to close this accessibility gap are implemented successfully then, as suggested by the Pew report on Digital Life in 2025, it may be safe to predict that information-sharing in future societies will be akin to electricity – widely available, accessible and flowing freely.4 Furthermore, the report states that the Internet of Things and ‘big data’ will make Internet users more conscious of their own behaviour online, particularly with regard to their privacy. Statistics confirm that many users are increasingly apprehensive about both data collection and the uses to which such data is put – over half of tracking cookies are typically blocked or deleted by web browsers.5 This apprehension is beginning to be reflected in legislation, for example the EU ePrivacy Directive, which requires advertisers to obtain consent before placing cookies on users’ devices.6

This essay will consider three key areas of current legal and political debate that imply future regulatory challenges for the Internet. First, the growing negative attitudes towards data collection and how these may be assuaged. Second, recent attempts to step up the filtering of Internet content and the regulation of advertising and antisocial behaviour online, and the repercussions that this might have for future Internet users. Finally, the perceived continuing lack of accountability amongst ‘Big Tech’ firms, which permits them an unsurpassed level of control over fundamental features of the Internet itself, as well as over the rights bestowed upon Internet users.

Data collection

Internet users are increasingly suspicious of the uses to which their online data is put by those collecting it – indeed, per Liebhold, the “trillion dollar economy is built on harvesting people’s personal data”.7 Via the Internet of Things, which is predicted to encompass 21.5 billion devices worldwide by 2025,8 it is increasingly easy for companies to collect vast amounts of user data across seemingly innocuous platforms. This unease has fuelled a desire for greater individual control over one’s own data, which could lead to the overt recognition of data as a new resource that Internet users could pay to protect, or even be paid to give away. This could be facilitated by the use of ‘data pods’ in new platforms such as Solid, created by Berners-Lee, which would consolidate data in one place for ease of access and authorised release.9 However, the basis on which data could constitute a separate asset class has not yet been theorised – is it a commodity, equity, intellectual property, or something else? Regulation of this class will undoubtedly prove difficult if its basis is not conceptually clear. Furthermore, a pay-to-protect model (a potential development considered for Facebook, as admitted by CEO Mark Zuckerberg in his testimony to Congress in 2018)10 could pose a threat to the equality of the Internet: the inevitable trade-off between greater data protection and convenience would mean that only the rich could afford the luxury of greater privacy in their online affairs, as noted in the Digital Life in 2025 report.11

Companies are beginning to respond to consumer privacy concerns, and more transparent platforms such as Sgrouples and Ello may become more popular in the future, although it seems unlikely that they will gain larger market shares unless interventionist steps are taken to move users away from the dominant ‘Big Tech’ companies of the moment. That could be achieved through greater funding for programmes such as the Next Generation Internet Initiative, which aim to make the Internet more “human-centric”, or if broader consensus is garnered in favour of such alternatives. More recently, techno-solutionism has emerged as another contributor to privacy concerns, particularly in a post-COVID-19 era. Widespread testing of new types of technology, such as tracking apps, risks the extraction of data from various, potentially marginalised, groups for profit. Similarly, freedom of information requests made by Privacy International in 2019 revealed that numerous UK local authorities use overt social media monitoring to collect ‘open source’ data on their constituents.12 Whilst the collection of such data must be strictly necessary and proportionate, evidence suggests that this is not sufficiently taken into account at the decision-making stage, leaving authorities open to future judicial review challenges. If left unchallenged, such practices could negatively impact freedom of expression and lead to a climate of increased self-censorship.

Content filtering

Filtering of content, from innocuous yet irritating advertising material to terrorist content, has become a key area of Internet regulation and will continue to be relevant for decades to come. Net neutrality has proved a hot topic in the US and is likely to become an area of greater regulatory interest in the UK and EU. Regulation of Internet speeds is predominantly effected via competition law, although the 2015 attempt to further net neutrality directly in the US13 was met with litigation from broadband providers and ultimately rolled back in June 2018. Comparatively, the UK currently provides for net neutrality mainly via the EU Regulation on Open Internet Access 2015. However, Brexit has given the UK the opportunity to deviate from this provision and permit content-based discrimination for whatever reason (including political or financial ones), should the government wish to legislate further on this issue. Post-Brexit, the UK will likely face similar pressure to that experienced in the US from firms reluctant to implement net neutrality, as exemplified by the number of firms already under investigation for breaching the EU rules.14

Political influencing orchestrated via the Internet has also been a concern fuelling regulatory change for some time, although not all predicted concerns have actually materialised. For example, despite predictions to the contrary, deepfakes have proven not to be the key weapon in the arsenal of misleading online information, owing to the time, data and energy required to create them.

However, the use of collected user data for political ends via targeted advertising will likely be heavily regulated by 2030. After widespread criticism of the role Facebook played in inciting violence against ethnic minorities in Myanmar via the dissemination of propaganda,15 an option to remove political advertisements has been created on the platform in the run-up to the 2020 US election – yet this appears to be an attempt by Facebook to shirk its responsibility to review such advertisements itself.16 This is discouraging. Governments can at best set minimum standards via legislation, so the policies of companies like Facebook are important in determining how far beyond these standards platforms go in practice. Perhaps by 2030 the only way forward will be to raise these minimum standards significantly, in order to encourage companies to engage more with their ethical responsibilities towards platform users.

Alternatively, Hartmann suggests that, in the future, content filtering may well become the responsibility of Internet users themselves, rather than of the companies providing platforms for it.17 Whether this is desirable is another question altogether, but it may well be a more manageable way of tackling the scale of undesirable online content. Indeed, Nick Clegg, Vice-President for Global Affairs and Communications at Facebook, remarked recently that, given the volume of content posted on the platform daily, “rooting out the hate is like looking for a needle in a haystack”.18 Perhaps a realistic future regulatory framework would apportion responsibility for flagging content between platforms and users themselves, to share the workload.

Content filtering is also a safety concern, not just a political one. Negotiations are currently underway within the EU regarding a proposed regulation requiring terrorist material to be taken down within one hour.19 However, the proposal has not yet gone so far as to require constant monitoring of uploads to websites or filters flagging terrorist content. It thus fails to adequately recognise that the issue with such content is that it spreads extremely rapidly: an automatic content filter, or monitoring, is therefore crucial in preventing the spread of this material. However, automated filters are not foolproof – notably, in Tumblr’s attempt to use an automatic filter to eliminate explicit content from the website, a photo detailing the changes to its content policy was, rather ironically, erroneously removed.20 Such systems will therefore generate freedom of expression concerns if they are over-cautious and remove copious amounts of innocuous content by accident. Such censoring places a heavy burden on the individual whose content has been removed: appealing a platform’s decision is not always easy, as decisions are often not notified correctly to the individual, nor are details of available appeals processes made particularly clear.21 Overall, a more measured regulatory approach could involve flagging content initially rather than removing it, followed by secondary manual review, as opposed to current systems, such as Tumblr’s, which are over-broad and may remove vast swathes of material erroneously.

Accountability

The current Internet accountability crisis stems predominantly from what Digital Secretary Jeremy Wright has termed the “era of self-regulation” for online companies.22 In the April 2019 Online Harms White Paper, a new statutory duty of care was proposed for harm caused by content or activity on websites, along with transparency reports on the amount of harmful content on platforms and how it is to be tackled, and possibly a responsibility on platforms to employ dedicated fact-checkers.23 However, commentators including David Barker have suggested that, without legal redress for a specific offence, such a duty of care will be meaningless in practice, as it will not provide an effective deterrent to unscrupulous companies.24

An alternative way forward may be for ‘Big Tech’ companies to be forcibly broken up to reduce their market share and make regulation of their own platforms (voluntary or otherwise) more feasible. This would have the added benefit of reducing the leverage such companies enjoy in current discussions around issues such as data collection and use,25 which could be considered anti-competitive. However, it would be a remarkably interventionist step to take – a more measured alternative, suggested by Feld, would be to create a new digital regulator to oversee ‘digital platforms’, whose unique nature calls for a bespoke regulatory framework.26 Governments could do this by empowering a specific national regulator, as suggested in the Online Harms White Paper proposals (Ofcom would be responsible for regulating the duty of care set out above, with appeals available to the courts). However, true empowerment of any regulator will require the creation of specific offences that it can act to enforce, and there are still few legal routes available to constrain Internet companies when necessary.

Clearly, the current regulatory system is marked by a lack of legal bases for action against companies causing online harm. Many existing offences that can form the basis of action were not intended to apply in a digital context and so are ill-fitting, and the extent of the power afforded to regulators (e.g. Ofcom, as above) is often woefully unclear. For example, “doxxing” could be considered harassment or a potential privacy infringement depending on the circumstances, but there is not yet a specific offence addressing this online phenomenon.27

That being said, plans to introduce new legislation to combat online harms are frequently set back owing to the muddled nature of the existing law, as illustrated by the shelving in 2019 of the plans in Part 3 of the Digital Economy Act 2017 to introduce age verification for access to online adult content.28 Even the GDPR29 is not above criticism, although it has widely been considered a step in the right direction. Edward Snowden has labelled it a “paper tiger”,30 promising more than it delivers. As noted in the European Commission’s review of the Regulation, the GDPR ought to have gone further to fully empower individuals to control their own data, and its applicability to new forms of technology (e.g. contact-tracing apps, or facial recognition) is unclear.31 Furthermore, an expansive approach to exemptions in the GDPR has led to an enforcement gap in practice,32 in some cases making the actual scope of sanctions uncertain until after litigation. Ultimately, any new and innovative regulatory system operating in this context will entail a form of ‘digital constitutionalism’, as coined by Suzor.33 Governments must decide what they want to regulate and what checks and balances will be necessary to protect human rights (above all), and consensus and social pressure must be brought to bear on companies to encourage them to enforce their own limits. Such ideas will hopefully be incorporated into practice relatively soon: proposals for updates to the EU ePrivacy Regulation centre on the concept of ‘legitimate interest’, frequently employing proportionality as the standard for determining whether this concept can override certain individual rights in different contexts.34 Even so, consensus may well be the greater challenge to ‘digital constitutionalism’ in practice – high-profile boycotts and other direct action rarely have a real effect on domineering platforms such as Facebook, as even the top 100 advertisers on the platform account for a mere 6% of its total advertising revenue.35

In conclusion, the greatest regulatory challenges for the Internet of 2030 will likely reflect current areas of emerging challenge – notably maintaining net neutrality and the accountability of online platforms (particularly with regard to how they handle user data). Furthermore, any regulation will likely take place in a new context. The goal of universal Internet access is currently being pursued in the EU, and if it is even partially achieved by 2030, regulation will need to accommodate this development. Similarly, if the Internet of Things progresses to a full “Internet of Everything” (that is, the ability to interact via any device connected to a particular network), this greater inter-connectedness will also have follow-on consequences for data-sharing regulation.


Karolina Zielinska recently graduated with a degree in Law from the University of Cambridge. She is a Middle Temple scholar, and is currently undertaking the Bar Training Course at the Inns of Court College of Advocacy.

-------------------------

Notes & sources

1 Internet Society 1997, ‘Brief History of the Internet’

2 Ursula von der Leyen, ‘Political Guidelines for the next European Commission 2019-2024’

3 Annie Kelly, ‘Digital divide “isolates and endangers” many of UK’s poorest’

4 Janna Anderson and Lee Rainie, ‘Digital Life in 2025’

5 Flashtalking, ‘Cookie Rejection Report’

6 European Parliament and Council Directive 2002/58/EC

7 Matt Blitz, ‘What will the Internet look like in the next 50 years?’

8 Statista, ‘Internet of Things – active connections worldwide 2015-2025’

9 Tim Berners-Lee, ‘The future of the Internet’

10 Bloomberg Government, ‘Transcript of Mark Zuckerberg’s Senate Hearing (Commerce/Judiciary Committees)’

11 Anderson and Rainie (n 4)

12 Privacy International, ‘UK: Stop social media monitoring by local authorities’

13 Federal Communications Commission, FCC-15-24, In Re Protecting and Promoting the Open Internet, 12 March 2015

14 Margi Murphy, ‘Vodafone and Three investigated over net neutrality concerns’

15 Paul Mozur, ‘A Genocide Incited on Facebook, With Posts from Myanmar’s Military’

16 Casey Newton, ‘Turning off political ads on Facebook could have unpredictable consequences’

17 Ivar Hartmann, ‘A new framework for online content moderation’, 2020 Vol 36 CLSR

18 Nick Clegg, ‘Facebook does not benefit from hate’

19 Francois Theron, Briefing to the European Parliament on ‘Terrorist content online: Tackling terrorist propaganda’

20 Stephen McLeod Blythe, ‘Copyright filters and AI fails: lessons from banning porn’, EIPR 2020, 42(2), 119-125

21 United Nations, ‘Report of the Special Rapporteur on the promotion and protection of the right to freedom of expression’, 2018, A/HRC/38/35

22 Department for Digital, Culture, Media & Sport, ‘Press release – UK to introduce world first online safety laws’

23 Department for Digital, Culture, Media & Sport, 2019, Online Harms White Paper, CP 57

24 David Barker, ‘Online harms: the good, the bad and the unclear’

25 Paresh Dave, ‘Google stymies media companies from chipping away at its data dominance’

26 Harold Feld, ‘From the telegraph to Twitter: the case for the digital platform act’, 2020 Vol 36 CLSR

27 CPS Prosecutorial Guidelines on cases involving social media

28 Nicky Morgan, Written Statement on Online Harms, HCWS13

29 General Data Protection Regulation (EU) 2016/679

30 Steve Ranger, ‘GDPR is missing the point, says Edward Snowden’

31 European Commission, Communication on the GDPR, COM(2020) 264 final

32 Privacy International, ‘GDPR – 2 years on’

33 Nicolas Suzor, ‘A constitutional moment: how we might reimagine platform governance’, 2020 Vol 36 CLSR

34 Presidency of the European Council, Progress Report, 29 May 2020

35 Casey Newton, ‘The Facebook boycott advertisers have the right company but the wrong diagnosis’

Published: 29 September 2020
