Online safety law to be strengthened with new offences and age verification

February 8, 2022

The UK government has announced changes to the draft Online Safety Bill and a response to the Law Commission’s report on communications offences. The Bill has undergone formal scrutiny from two parliamentary select committees, and other committees, such as the Treasury Committee, have also suggested amendments to it. The government intends to make the following amendments before the Bill is introduced to parliament.

Extending the list of priority offences in the Online Safety Bill

The draft Online Safety Bill in its current form already places a duty of care on internet companies which host user-generated content, such as social media and video-sharing platforms, as well as search engines, to limit the spread of illegal content on these services. It requires them to put in place systems and processes to remove illegal content as soon as they become aware of it, and to take additional proactive measures against the most harmful ‘priority’ forms of online illegal content.

To tackle the priority offences proactively, the government says that firms will need to ensure the features, functionalities and algorithms of their services are designed to prevent users from encountering such content and to minimise the length of time it remains available. This could be achieved through automated or human content moderation, banning illegal search terms, spotting suspicious users and having effective systems in place to prevent banned users from opening new accounts.

Ofcom will have various powers, including being able to issue fines of up to 10% of annual worldwide turnover to non-compliant sites or block them from being accessible in the UK.

The priority illegal offences currently set out in the draft Bill are terrorism and child sexual abuse. 

A further list has been developed using the following criteria: (i) the prevalence of such content on regulated services, (ii) the risk of harm being caused to UK users by such content and (iii) the severity of that harm.

The offences will fall in the following categories:

  • Encouraging or assisting suicide
  • Offences relating to sexual images, i.e. revenge pornography and extreme pornography
  • Incitement to, and threats of, violence
  • Hate crime
  • Public order offences – harassment and stalking
  • Drug-related offences
  • Weapons / firearms offences
  • Fraud and financial crime
  • Money laundering
  • Controlling, causing or inciting prostitution for gain
  • Organised immigration offences

The Law Commission’s review of harmful online communications

In 2019 the Law Commission began a review of the law relating to abusive and offensive communications. It considered whether criminal offences in England and Wales sufficiently captured harmful communications online. 

The Commission published its final report in July 2021. The government has published its interim response to the Law Commission’s report and confirmed it will take forward recommendations for three new offences, as set out below:

‘Genuinely threatening’ communications offence, where communications are sent or posted to convey a threat of serious harm

This offence is designed to better capture online threats to rape, kill or inflict physical violence, or to cause people serious financial harm. It addresses a limitation of the existing laws, which capture the ‘menacing’ character of a threatening communication but not genuinely and seriously threatening behaviour.

Harm-based communications offence to capture communications sent to cause harm without a reasonable excuse

This offence will make it easier to prosecute online abusers by abandoning the requirement under the old offences for content to fit within prescribed yet ambiguous categories such as “grossly offensive”, “obscene” or “indecent”. Instead, it is based on the psychological harm, amounting to at least serious distress, intended to be caused to the person who receives the communication, rather than requiring proof that harm was actually caused. The new offence will address the technical limitations of the old offences and ensure that harmful communications posted to a likely audience are captured.

The new offence will consider the context in which the communication was sent. It also aims to better protect people’s right to free expression online. Communications that are offensive but not harmful, and communications sent with no intention to cause harm, such as consensual communication between adults, will not be captured. It will have to be proven in court that a defendant sent a communication without any reasonable excuse and did so intending to cause serious distress or worse, with exemptions for communication which contributes to a matter of public interest.

An offence for when a person sends a communication they know to be false with the intention to cause non-trivial emotional, psychological or physical harm

Although there is an existing offence in the Communications Act that captures knowingly false communications, this new offence raises the current threshold of criminality. It covers false communications deliberately sent to inflict harm, such as hoax bomb threats.

The maximum sentences for each offence will differ: up to two years’ imprisonment for the harm-based offence, up to 51 weeks for the false communications offence and up to five years for the threatening communications offence. By comparison, the maximum sentence was six months under the Communications Act and two years under the Malicious Communications Act.

Age verification for pornographic sites

It has also been announced that a separate provision will be added to the Bill with the aim of preventing children from accessing pornography websites. The legislation will require providers who publish or place pornographic content on their services to prevent children from accessing that content. This will capture commercial providers of pornography as well as sites that allow user-generated content.

The government says that the onus will be on the companies themselves to decide how to comply with their new legal duty. Ofcom may recommend the use of a growing range of age verification technologies available for companies to use that minimise the handling of users’ data. The Bill does not mandate the use of specific solutions, as it needs to be flexible to allow for innovation and the development and use of more effective technology in the future.

Video-on-demand services which fall under Part 4A of the Communications Act will be exempt from the scope of the new provision. These providers are already required under section 368E of the Communications Act to take proportionate measures to ensure children are not normally able to access pornographic content.

The new duty will not capture user-to-user content or search results presented on a search service, as the draft Online Safety Bill already regulates these. Providers of regulated user-to-user services which also carry published (i.e. non user-generated) pornographic content would be subject to both the existing provisions in the draft Bill and the new proposed duty.