The Metaverse: Old Problem, New Skins?

July 3, 2023

Introduction

Old problem, new skins? The issue of how best to regulate and protect individuals in dynamic and hostile online environments is not a new one. Indeed, the question of how to deter and punish harmful and offensive communications has plagued humanity since the dawn of speech.

The metaverse – the multi-dimensional, immersive virtual online environment that, amongst other things, seeks to replicate much of our activity in the real world – is the latest space to pose ample risk, reward, and questions about the nature of human interaction.

Pre-metaverse terms and conditions

Seasoned gamers already know that online social spaces, used for the purpose of networking or gaming, pre-date the issues now arising in the metaverse by many years. Massively multiplayer online games (MMOGs) have long had to grapple with the problem of moderating the acts of disruptive players in a virtual 3D space.

At ground zero lie the rules established by the online platform. Service providers require users to enter into a contract – the terms of service – as a condition of participating in an interactive online platform. Under the terms of service, a user will ordinarily be prohibited from committing various illegal or harmful acts on the platform. Where the breaching party’s actions have caused financial loss to the platform, it is also open to the platform to take legal action against that user for breach of contract.

A prominent example is Roblox’s USD 1.6 million lawsuit (filed in November 2021) against the YouTuber Ruben Sim, for persistent breaches of Roblox’s terms and conditions, including racist and homophobic harassment of other players and the dissemination of “false and misleading terrorist threats” which forced Roblox to delay its developers conference in San Francisco in October 2021. The court banned Ruben Sim from the Roblox platform and ordered him to pay USD 150,000.

In terms of self-help measures, players can mute or block offensive players. Platforms provide a reporting system for players to notify moderators of particularly offensive players. Sanctions are applied on a graded scale. Low-level offenders may receive temporary suspensions or have their text and audio chat functions blocked by the moderator. Other benefits, such as player statistics, rewards, and customizations, may be temporarily withdrawn. The most serious and persistent online offenders may have their accounts permanently suspended.
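
By way of illustration only, such a graded sanction ladder can be expressed in a few lines of code. The sketch below is a hypothetical model of the escalation just described – the type names, thresholds and sanctions are assumptions for illustration, not any platform’s actual system.

```typescript
// Purely illustrative: a hypothetical graded sanction ladder.
// Names, thresholds and sanctions are assumptions, not any platform's API.

type Sanction =
  | { kind: "warning" }
  | { kind: "chatBlock"; hours: number }   // text/audio chat disabled
  | { kind: "suspension"; days: number }   // temporary account suspension
  | { kind: "permanentBan" };              // most serious, persistent offenders

// Escalate according to the number of reports upheld against the player.
function nextSanction(upheldReports: number): Sanction {
  if (upheldReports <= 1) return { kind: "warning" };
  if (upheldReports <= 3) return { kind: "chatBlock", hours: 24 };
  if (upheldReports <= 5) return { kind: "suspension", days: 7 };
  return { kind: "permanentBan" };
}

console.log(nextSanction(4)); // { kind: "suspension", days: 7 }
```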

Visions and limits of content moderation in the metaverse

The metaverse is distinguished from other MMOGs by its deeply immersive nature. We engage with the metaverse not simply by staring at a flat screen TV or monitor, but by wearing VR goggles and other haptic technology equipment that seek to stimulate, and indeed simulate, our senses. Such verisimilitude can enhance our user experience in virtual worlds. However, it also creates an increased risk of psychological harm or distress when faced with aggressive, intrusive acts from other avatars within the metaverse.

The metaverse poses a different challenge to online content moderation. Self-help measures have had to adapt: where an avatar can interact freely with other avatars in a virtual space, muting verbal speech or text may not be sufficient. The metaverse requires the option to establish virtual ‘spatial’ boundaries too.

For example, in response to reports by female users of online sexual harassment, Meta Horizon Worlds implemented a number of safety features, including:

  • The option to activate a 4ft radius around one’s avatar, to prevent other avatars from approaching (a simple version of this check is sketched after this list)
  • A function that turns voice chat from nearby strangers into “unintelligible friendly sounds” in order to protect oneself from unwanted speech
  • A protective bubble that users can activate so no one can virtually interact with them
  • ‘Horizon Home’ – a virtual safe space that is accessible only to a user’s added ‘friends’, enabling the user to interact with them away from strangers.
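
To make the idea of a spatial boundary concrete, here is a minimal sketch of how a platform might, in principle, enforce a personal boundary of the kind described above. The function names, the metric conversion of the 4ft radius and the simplified collision model are all assumptions for illustration.

```typescript
// Purely illustrative: enforcing a 'personal boundary' of the kind
// described above. The names, the metric conversion of the 4ft radius
// and the simplified collision model are assumptions for illustration.

interface Vec3 { x: number; y: number; z: number }

const BOUNDARY_RADIUS_METRES = 1.2; // roughly 4ft

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

// Returns true if an avatar may move to `target` without entering another
// avatar's boundary; a real engine would clamp the movement instead.
function movementAllowed(target: Vec3, other: Vec3, boundaryOn: boolean): boolean {
  if (!boundaryOn) return true;
  return distance(target, other) >= BOUNDARY_RADIUS_METRES;
}

console.log(movementAllowed({ x: 0, y: 0, z: 0.5 }, { x: 0, y: 0, z: 0 }, true)); // false
```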

These measures are welcome as a means of empowering users to exercise digital self-defence. However, they are not enough, and it would be inappropriate to place the onus of protecting oneself online solely on the user. After all, encouraging someone to defend themselves or take evasive action if they are physically attacked in a public setting does not absolve the hosting venue of its responsibility to eject and ban the attacker from its premises. Consider, for example, that AltspaceVR has deployed moderators to act as ‘bouncers’ in some of its spaces. Neither does the practice of self-defence absolve law enforcement bodies of their responsibility to investigate criminal wrongdoing and impose sanctions. It is undesirable to leave investigation and sanction to the discretion of online platforms and their users.

Survey of current laws and applicability to virtual environments

Given the limits of user-level moderation, how can our criminal and civil laws sanction harmful conduct in the metaverse? Some of our existing laws are already well placed to deal with online harms.

Harassment

The Protection from Harassment Act 1997 deals with harassment that may take place across physical and online spaces. In particular, section 2A(3)(d) of the Act identifies “monitoring the use by a person of the internet, email or any other form of electronic communication” as an act or omission associated with the offence of stalking under the Act.

Defamation

Defamation laws (under the Defamation Act 2013) apply across print and online media. Indeed, the last 10 years or so have seen a rich body of case law (Monroe v Hopkins [2017] EWHC 433 (QB); Blackledge v Persons Unknown [2021] EWHC 1994 (QB)) that addresses the particularities of defining and protecting one’s reputation in cyberspace, particularly on social media. We should be prepared to engage with similar disputes regarding statements made in the metaverse. One of the key questions will be over identification – what happens where avatar A makes defamatory comments about avatar B, and those comments do not effectively identify the natural person behind avatar B? Does that person lose the right to bring a claim in libel for want of identification, or should they be permitted to sue in respect of their digital identity?

Communication based offences

Section 1 of the Malicious Communications Act 1988 prohibits the sending of indecent, threatening, grossly offensive or false communications to others (including electronic communications), where the purpose of that communication is to cause distress or anxiety to the recipient.

Meanwhile, s.127 of the Communications Act 2003 prohibits the sending of:

  • a grossly offensive, indecent, obscene or menacing message across a public electronic communications network (s.127(1));
  • a false message across such a network, for the purpose of causing annoyance, inconvenience or needless anxiety to another (s.127(2)).

Such offences already apply to a range of abusive messages that may be disseminated across social media, emails and so forth. These offences are easily applicable to communications sent in the metaverse, whether they take place via text, audio, images or gestures made by avatars.

Sexual offences

Other laws may require some creative interpretation by the courts, or even legislative reform.

Should offences such as sexual assault under the Sexual Offences Act 2003 be interpreted to apply to non-consensual, simulated contact between avatars (experienced via haptic technology) – such that an offence of sexual assault can be committed via avatars?

Some may argue that without physical contact, no offence of sexual assault can occur under the 2003 Act as presently drafted. The applicable offences would instead be those under the Malicious Communications Act 1988 and/or the Communications Act 2003 (or even the ‘threatening communications’ offence under the Online Safety Bill), which are applicable to unsolicited sexual communications sent over the metaverse.

However, the communications-based offences are ultimately drafted on the basis that there is no physical or simulated contact between offender and victim. We may argue that these offences do not, in conceptual terms, envisage haptic technologies being used for the non-consensual touching of a person via an avatar, and the psychological harm and distress that may result.

More importantly, the communications-based offences offer no anonymity protection for victims who seek to report instances of virtual sexual assault. This leaves a gap in the law for victims of virtual sexual offences, and it is likely that new or amended legislation will be required to provide such protection (for instance, by amending the Sexual Offences (Amendment) Act 1992).

Given the reported incidents of sexual harassment of female avatars, as well as the current capability of the metaverse to facilitate virtual sexual encounters between avatars, there is a need to answer these difficult questions over framing appropriate boundaries of sexual conduct in virtual spaces, as well as the harm value we place on simulated touching in the metaverse.

Extending moderation from user level to platform level

Regulating the digital space cannot be limited to sanctioning the actions of individuals. Duties and obligations must also be placed on service providers, to codify commitments to online safety and to secure a high level of compliance with online safety provisions.

Online Safety Bill

This is where the UK Online Safety Bill (the “OSB”) comes into play, by requiring providers of user-to-user services to comply with a series of duties, including conducting risk assessments on illegal content, protecting the safety of children online, and protecting users from a variety of online harms, whilst ensuring that content of democratic importance is protected. The OSB will play an extensive role in regulating the metaverse in the UK. Yet it remains politically contested and subject to amendment, given disputes over the extent to which it restricts freedom of expression. The final shape of the OSB remains to be seen.

Digital Services Act

Meanwhile, the EU Digital Services Act (the “DSA”) has been in force since 16 November 2022. The DSA will play an important role in regulating harms across the metaverse, at least for those services which are provided within the EU. Although no service provider will be under a general obligation to monitor information or actively conduct fact-finding on illegal activities, service providers must nonetheless act to remove illegal content once they are notified under the safe harbour scheme (DSA, Article 6). Amongst other things, service providers are required to establish a notice and action mechanism allowing individuals to notify them of illegal content, as well as a complaints-handling system for those who seek to contest takedown decisions (DSA, Articles 16, 20).
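
Article 16 prescribes the minimum elements of a valid notice – in broad terms, a substantiated explanation of why the content is said to be illegal, its exact electronic location, the notifier’s name and email address (subject to limited exceptions), and a good-faith statement. Purely as an illustration, those elements might be modelled as the data shape below; the field names are hypothetical, since the DSA prescribes the substance of a notice rather than any particular schema.

```typescript
// Purely illustrative: the minimum elements of a DSA Article 16 notice,
// paraphrased as a data shape. Field names are hypothetical; the DSA
// prescribes the substance of a notice, not any particular schema.

interface IllegalContentNotice {
  explanation: string;         // substantiated reasons why the content is illegal
  exactLocations: string[];    // exact electronic location(s), e.g. URLs
  notifierName?: string;       // name and email of the notifier; optional only
  notifierEmail?: string;      // in limited cases involving certain offences
  goodFaithStatement: boolean; // confirmation the notice is accurate and complete
  submittedAt: Date;
}

// A notice-and-action pipeline would route each notice to review and,
// where upheld, to removal plus a statement of reasons for the parties.
function acknowledge(notice: IllegalContentNotice): string {
  return `Notice received for ${notice.exactLocations.length} item(s).`;
}

console.log(acknowledge({
  explanation: "Defamatory statement about the claimant",
  exactLocations: ["https://example.com/world/item/123"],
  goodFaithStatement: true,
  submittedAt: new Date(),
}));
```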

Very large online platforms (VLOPs) are subject to additional obligations: they must conduct risk assessments of the systemic risks arising out of the use of their services, and must submit to an independent audit at least annually to assess their compliance with their legal obligations and codes of conduct relating to online safety (DSA, Articles 33-35, 37). The information gleaned from these new reporting requirements will provide interesting reading for professionals working in the online trust and safety space.

Conclusion

There is plenty to be concerned about regarding the policing of online harms in the metaverse. The problem is an old one, but the pace of technological development means that legislatures, platforms and users will need to develop and maintain sustainable solutions for protecting metaverse users, particularly those who are vulnerable. A mix of user-level moderation, contractual remedies, codes of conduct and legislative reform will constitute the key components of the regulatory architecture for the metaverse.

Benson Egwuonwu is an associate in the Commercial Litigation team at DAC Beachcroft LLP and acts in a variety of commercial disputes, including media, defamation, harassment, intellectual property and privacy. He has a particular interest in online content regulation and has advised on takedowns of unlawful content.