Regulator should have powers to address anonymity and user verification, states Report on Misinformation in the COVID-19 Infodemic
The House of Commons Digital, Culture, Media and Sport Committee has issued its report on Misinformation in the COVID-19 Infodemic. The report states that in February 2020, the World Health Organisation warned that, alongside the outbreak of COVID-19, the world faced an “infodemic”: an overabundance of both accurate and false information that prevented people from accessing authoritative, reliable guidance about the virus. The report sets out evidence on a range of harms, from dangerous hoax treatments to conspiracy theories that led to attacks on 5G engineers.
The UK government is currently developing proposals for online harms legislation that would impose a duty of care on tech companies. Although the Committee says that this is not a silver bullet for addressing harmful content, the legislation is expected to give a new online harms regulator the power to investigate and sanction technology companies. However, the legislation has been delayed. As yet, the government has not produced the final response to its consultation (which closed over a year ago), voluntary interim codes of practice, or a media literacy strategy. In addition, there are concerns that the proposed legislation will not address the harms caused by misinformation and disinformation and will not contain the necessary sanctions for tech companies that fail in their duty of care.
The DCMS Committee has conducted an inquiry into the impact of misinformation about COVID-19, and into the efforts of tech companies and relevant public sector bodies to tackle it. This has presented an opportunity to scrutinise how online harms proposals might work in practice. Whilst tech companies have introduced new ways of tackling misinformation, such as warning labels and tools to correct the record, these innovations have been applied inconsistently, particularly in the case of high-profile accounts. Platform policies have also been too slow to adapt, while automated content moderation at the expense of human review and user reporting has had limited effectiveness. The business models of tech companies themselves contain little incentive to take action against misinformation, while providing opportunities for bad actors to monetise misleading content. The role of algorithms was emphasised at various points during the inquiry. Until well-drafted, robust legislation is brought forward, the public is reliant on the goodwill of tech companies, or on the bad press they attract, to compel them to act.
During the crisis the public have turned to public service broadcasting as their main and most trusted source of information. Beyond broadcasting, public service broadcasters have contributed through fact-checking and media literacy initiatives, and through engagement with tech companies. The government has also acted against misinformation by reforming its Counter Disinformation Unit to co-ordinate its response and by tasking its Rapid Response Unit with refuting seventy pieces of misinformation a week. However, the Committee has raised concerns that the government has been duplicating the efforts of other organisations in this field and could have taken a more active role in resourcing an offline, digital literacy-focused response. The Committee agrees with the CMA that features of the digital advertising market controlled by companies such as Facebook and Google must not undermine the ability of newspapers and others to produce quality content. Tech companies should be elevating authoritative journalistic sources to combat the spread of misinformation.
Finally, the Committee has considered the work of Ofcom, as the government’s current preferred candidate for online harms regulator, as part of its discussion of the online harms proposals. It calls on the government to make a final decision now on the online harms regulator so that the groundwork can be laid for the legislation to come into effect. The government must empower the new regulator to go beyond ensuring that tech companies enforce their own policies, community standards and terms of service. The regulator must ensure that these policies are themselves adequate to address the harms faced by society. It should have the power to standardise these policies across different platforms, ensuring minimum standards under the duty of care. In addition, the regulator should be empowered to hand out significant fines for non-compliance, and there should be criminal sanctions for criminal conduct. The regulator should also have the ability to disrupt the activities of businesses that are not complying, and ultimately to ensure that custodial sentences are available as a sanction where required.