The Online Safety (Miscellaneous Amendments) Bill: Singapore’s effort to tackle online harms

January 10, 2023

On 9 November 2022, the Online Safety (Miscellaneous Amendments) Bill was passed in the Singapore Parliament. Call it Singapore efficiency: just four months elapsed from the start of the public consultation to the passing of the Bill, which is expected to take effect in 2023.

For those who are short on reading time, essentially the Bill will:

(a) require “online communication services” (which include social media services) with significant reach or impact to comply with Codes of Practice, and

(b) empower the Infocomm Media Development Authority (IMDA), a statutory board in the Singapore government that regulates the infocomm and media sectors, to issue directions to deal with specified categories of “egregious content” that can be accessed by Singapore users on an online communication service.

“Egregious content” is defined to mean:

  • Content that advocates or instructs on suicide or self-harm
  • Content that advocates or instructs on violence or cruelty to, physical abuse of, or acts of torture or other infliction of serious physical harm, on human beings
  • Content that advocates or instructs on sexual violence or coercion in association with sexual conduct, whether or not involving the commission of a heinous sex crime
  • Content depicting for a sexual purpose or that exploits the nudity of a child or part of a child, in a way that reasonable persons would regard as being offensive, whether or not sexual activity is involved
  • Content that advocates engaging in conduct in a way that (a) obstructs or is likely to obstruct any public health measure carried out in Singapore, or (b) results or is likely to result in a public health risk in Singapore;

  • Content dealing with matters of race or religion in a way that is likely to cause feelings of enmity, hatred, ill will or hostility against, or contempt for or ridicule of, different racial or religious groups in Singapore;
  • Content that advocates or instructs on terrorism; or
  • Any other content prescribed by the regulations as egregious content.

Public consultation from July to August 2022

Singapore’s Ministry of Communications and Information (MCI) conducted a public consultation to seek the public’s views on two proposed Codes of Practice that would require designated social media services with significant reach or impact to have appropriate measures and safeguards to mitigate Singapore-based users’ exposure to harmful online content.

The MCI said that harmful online content was a concern, especially when published on services that reached a wide audience or when targeted at specific groups of users. The MCI also noted that harmful online content could be amplified on social media services: for example, dangerous video challenges could go viral rapidly, propelled by platform algorithms and user interest, leading to injuries and deaths.

The MCI gave two recent examples of harmful online content affecting Singapore users. First, in 2021, a poll on social media asked people to rank local female religious teachers according to their sexual attractiveness, promoting sexual violence and causing immense distress to the individuals involved. Second, a Singaporean man pretended to be a woman from another ethnic group and published multiple racially offensive and insensitive public posts on a social media service, which could have incited religious intolerance and prejudiced Singapore’s racial harmony.

The MCI said that many social media services had made efforts to address harmful content, but these measures varied from service to service, and Singapore’s unique socio-cultural context should be considered. More could be done, especially to protect young users.

In the circumstances, the MCI sought feedback on the two proposed Codes of Practice, whose key requirements are summarised below.

Code of Practice for Online Safety – Key Requirements

Under this Code, designated social media services would be required to have community standards for, and would be expected to reduce users’ exposure to (e.g. by disabling access to), the following categories of content:

  • Sexual content
  • Violent content
  • Self-harm content
  • Cyberbullying content
  • Content endangering public health
  • Content facilitating vice and organised crime

These services would be required to proactively detect and remove the following categories of content:

  • Child sexual exploitation and abuse material
  • Terrorism content

Next, designated social media services would be required to put in place additional safeguards to protect young users, for example stricter community standards, and tools that allowed young users or their parents or guardians to manage and mitigate their exposure to harmful content and unwanted interactions, which could be activated by default. (While the MCI did not define “young users”, the public consultation papers indicated that the term would cover anyone below the age of 18.)

The MCI noted that there might be instances where users come across harmful content despite those safeguards. Therefore, designated social media services should provide an efficient and transparent user reporting and resolution process, to enable users to alert these services to possible harmful content.

Lastly, the Code would require designated social media services to produce annual reports on their content moderation policies and practices, as well as the effectiveness of their measures in improving user safety.

Content Code for Social Media Services

The MCI noted that extremely harmful content, such as content relating to suicide and self-harm, sexual harm, public health, public security, or racial or religious disharmony or intolerance, might remain online despite the measures above. In the circumstances, the Code would empower the IMDA to direct any social media service to disable access to specified harmful content for users in Singapore, or to disallow specified online accounts on the social media service from communicating content and/or interacting with users in Singapore.

Public Consultation Outcome

Based on MCI’s report of the outcome, the responses to the public consultation seemed to be positive. MCI reported that the respondents generally agreed with:

  • The proposal for designated social media services to have systems and processes to reduce exposure to harmful online content for Singapore-based users.
  • The categories of harmful content identified.
  • The importance of having safety features and tools on social media services to allow users to manage their exposure to harmful online content.
  • The proposal for designated social media services to have additional safeguards for young users, such as online safety tools for young users, parents, and guardians.
  • The proposal for social media services to have a user reporting and resolution process that is prominent and easy to use, and where action is taken on user reports in a timely manner.
  • The proposal for social media services to release annual reports on the effectiveness of their content moderation policies and practices to combat harmful content.

Understandably, industry groups were concerned about a one-size-fits-all approach to regulation and about their ability to comply with tight timelines. They suggested:

(a) An outcome-based approach in regulating social media services which, for example, took into account their business models and size in implementing the proposed requirements.

(b) An outcome-based approach when implementing safeguards for young users, and for social media services to be given flexibility to develop tools appropriate to their services’ risk profiles.

(c) That social media services be given some flexibility on the timelines for harmful content to be removed, taking into consideration the severity of the harmful content and resources of the service.

In respect of (a), the MCI agreed on the need for an outcome-based approach, and said that designated social media services would be given some flexibility to develop and implement the most appropriate solutions to tackle harmful online content on their services, taking into account their unique operating models.

In respect of (b), the MCI said that it would work with the industry to study the feasibility of these suggestions. In respect of (c), the MCI said that the timeline requirements for social media services to comply with the directions would take into account the need to mitigate users’ exposure to the spread of egregious content circulating on the services (thus focusing on the impact on users rather than on the service’s resources).

Some respondents highlighted other areas of concern such as harmful advertisements, online gaming, scams, misinformation, and online impersonation. (Scams were a hot topic in early 2022, when scammers impersonated a bank over the festive season and stole SGD 13.7m from around 800 customers.) The MCI said that it would continue studying these areas of concern.

First and Second Readings of the Bill

On 3 October 2022, the Bill was introduced for First Reading in Parliament. The MCI’s press release said that the proposed measures under the Bill were the result of extensive consultations with various stakeholders including parents, youths, community group representatives, academics, and the industry, since June 2022.

On 8 and 9 November, the Bill was debated in Parliament and read a second time. The Minister for Communications and Information opened the second reading by sharing the results of an MCI study conducted that year: 97% of Singaporeans said that they would be comfortable walking around Singapore at night, but almost 80% were concerned about online harms.

The Minister said that Singapore’s approach to enhancing online safety for Singapore users was accretive, considered and calibrated, rather than a “Big Bang” approach through an all-encompassing law. Over the years, the government had introduced targeted laws to deal with specific types of harmful online content and behaviours, including:

  • Falsehoods, which are dealt with under the Protection from Online Falsehoods and Manipulation Act (POFMA).
  • Foreign Interference, which is dealt with under the Foreign Interference (Countermeasures) Act (FICA).
  • Online harassment such as cyberbullying, and doxxing, which are dealt with under the Protection from Harassment Act (POHA).

However, there were still gaps that needed to be addressed, for example content encouraging suicide and self-harm, and videos of reckless acts and dangerous challenges. Children with insufficient capacity or maturity were particularly vulnerable when exposed to inappropriate content and unwanted social interaction online.

The Bill aimed to address these gaps, and instead of prescribing how online content service providers should set up systems and processes, the Bill would specify the outcomes that they ought to achieve.

Further details on the Online Safety (Miscellaneous Amendments) Bill

Almost all of the amendments provided by the Bill will be to the Broadcasting Act 1994, to regulate providers of online communication services. “Online communication services” are essentially electronic services that enable end-users to access or communicate content on the Internet, or deliver content on the Internet to persons. For now, only one type of online communication service will be regulated: social media services.

“Social media service” is defined as an electronic service whose sole or primary purpose is to enable online interaction or linking between two or more end-users (including enabling end-users to share content for social purposes), and which allows end-users to communicate content on the service.

Under the Bill, the IMDA will be empowered to designate other online communication services with significant reach or impact in Singapore, and through Codes of Practice require them to implement measures to keep Singapore users safe. Now that the Bill has been passed, the IMDA will further consult relevant social media services before finalising and issuing the Code of Practice for Online Safety.

Turning to the IMDA’s power to issue directions, the Bill states that if the IMDA is satisfied that egregious content is being provided on an online communication service with a Singapore end-user link, and Singapore end-users of the service can access the egregious content, the IMDA may direct the provider of the service to disable access to the content by Singapore end-users, or to stop the delivery or communication of content to Singapore end-users, within a specified period.

The Minister clarified that the IMDA would not be able to issue directions in respect of private communications, which would remain private, and could only issue directions in respect of certain categories of egregious content relating to user safety.

The definition of “egregious content” was set out at the beginning of this article. Similar to the amended UK Online Safety Bill, Singapore’s Bill covers content that encourages or instructs on suicide or self-harm, and that targets race or religion in an abusive way. Unlike the amended UK Online Safety Bill, Singapore’s Bill:

  • Does not cover content that encourages or instructs on eating disorders or associated behaviours.
  • Does not specifically cover abusive content that targets sex, sexual orientation, disability, or gender reassignment. It is uncertain if these matters can be covered under the category “content that advocates or instructs on violence or cruelty to, physical abuse of, or acts of torture or other infliction of serious physical harm, on human beings”.

One category covered by Singapore’s Bill which does not appear in the UK’s Bill, and which I thought was rather useful to have, is “content that advocates engaging in conduct in a way that (a) obstructs or is likely to obstruct any public health measure carried out in Singapore, or (b) results or is likely to result in a public health risk in Singapore”. The origins of this category are probably Covid-19 vaccine misinformation, and an individual in Singapore who allegedly exhorted parents to visit paediatric vaccination centres to overwhelm on-site medical staff with questions.

If the provider of an online communication service does not comply with the IMDA’s direction, the IMDA may direct the provider of an internet access service which has control over Singapore end-users’ access to the content, to stop the access.

The Bill imposes a duty on providers of online communication services and internet access services to take all reasonably practicable steps to comply with the IMDA’s directions. Non-compliance will amount to a criminal offence. Providers of online communication services may be fined up to SGD 1m. Where the offence is continuing, there will be a further fine of up to SGD 100k for every day or part of a day during which the offence continues. Providers of internet access services may be fined up to SGD 20k for each day or part of a day that they do not comply with the direction, but not exceeding a total fine of SGD 500k (i.e. the cap is reached after 25 days’ non-compliance).

Providers of regulated online communication services also have a duty to take all reasonably practicable steps to comply with applicable Codes of Practice. If they fail to do so, they may be fined up to SGD 1m, or be directed to take steps to remedy the failure. If they are directed to remedy the failure but do not do so, they can be fined up to SGD 1m, plus a further fine of up to SGD 100k for every day or part of a day during which the offence continues.
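For readers who like to see the arithmetic laid out, below is a minimal illustrative sketch of how these maximum penalties accumulate. The SGD figures are taken from the Bill as summarised above; the function names and structure are hypothetical and are not part of the legislation.

```python
# Illustrative sketch only: the SGD figures come from the Bill as
# summarised in this article; the function names are hypothetical.

def max_fine_online_communication_service(days_continuing: int) -> int:
    """Up to SGD 1m up front, plus up to SGD 100k for every day
    or part of a day during which the offence continues."""
    return 1_000_000 + 100_000 * days_continuing

def max_fine_internet_access_service(days_non_compliant: int) -> int:
    """Up to SGD 20k per day or part of a day of non-compliance,
    capped at SGD 500k in total (the cap is reached at 25 days)."""
    return min(20_000 * days_non_compliant, 500_000)

# Example: 30 days of continued non-compliance.
print(max_fine_online_communication_service(30))  # 4000000
print(max_fine_internet_access_service(30))       # 500000 (capped at day 25)
```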

Darren Grayson Chng is our International Associate Editor for Singapore

The views expressed in this article are the author’s personal views only and should not be taken to represent the views or policy position of his employer.