“SafetyTech”: what it is, and key legal issues

September 22, 2020

I’ve seen the term “SafetyTech” crop up quite a bit lately, mostly due to a recent report for the UK government (PDF) (and also an HTML version for those who prefer it), and it is a buzzword which, I have no doubt, we will see used with increasing frequency.

In this article, I outline what it is (and, it seems, what it is not), and comment on some of the key legal issues.

What is “SafetyTech”?

According to the report, there is “no unified or agreed definition of online Safety Technologies or Safety Tech”.

However, the report adopts a working definition of

“technology or solutions to facilitate safer online experiences, and to protect users from harmful content, contact or conduct.” 

It breaks these down into technologies which:

  • Protect users from social harms when using technology and online platforms or services (typically through filtering or controls, or through detection and removal of potentially harmful content); or
  • Provide mechanisms to flag, moderate, or intervene in the event of harmful incidents when using online platforms or services.

In essence, the report’s authors adopt a relatively broad interpretation of the term, covering the range of technologies and services which can be used to keep us safe online — but seemingly with one notable omission (read on).

But SafetyTech goes beyond just what is covered in that one report.

There is also the Online Safety Tech Industry Association, and the trade group for online age verification systems, the Age Verification Providers Association. More broadly still, companies and individuals have been working for years to give users safer online experiences, whether through ad/tracker blocking for browsers, webcam covers, scripts to enable users to automatically delete old posts, two-factor authentication to limit unauthorised account access in the event of compromised passwords, password managers… I could go on and on.

So, while the term “SafetyTech” is relatively new, developing tools and techniques to keep people safe online is not.

The elephant in the room: encryption

One of the key technologies keeping us safe online is encryption.

But there is no mention of encryption in the government’s report.

One possible explanation is that the report examines the UK’s SafetyTech market, and no-one in the UK is working on encryption-based safety tools.

But that’s not true.

  • We have UK-based ISPs — like Andrews & Arnold — which are offering DNS-over-HTTPS and DNS-over-TLS, to protect their users from unwanted snooping on, or interference with, their DNS look-ups (a minimal sketch of such a lookup follows this list). (They also offer an “anti-slamming” setting on accounts, to help users avoid the risk of some miscreant abusing the migration process and moving their Internet access to another provider.)
  • We have UK-based organisations, The Matrix.org Foundation and New Vector Limited, behind the decentralised, end-to-end encrypted by default messaging platform Matrix / Element, protecting their users both from corporate surveillance by a platform operator (or, for example, security lapses giving third parties access to administrative interfaces, and so to private messages) and from unwanted third party snooping.
  • We have UK-based individuals developing valuable online safety tools:

– Alec Muffett (Twitter) develops the Enterprise Onion Toolkit, which helps organisations make their content accessible within the Tor network, offering increased security, privacy and availability to their users. (And if you think Tor is just a “dark” place for bad things, check out our free Tor explainer video.)

– Jules Field (Twitter) develops zend.to, a self-hosted end-to-end encrypted file transfer platform for simple and secure transfer of large files — great, for example, for avoiding sending sensitive personal data via email, helping protect customers from data leaks from insecure transmissions, and for not crashing mailboxes or email servers with unreasonably large attachments.
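To make the role of this sort of technology a little more concrete, here is a minimal sketch of what a DNS-over-HTTPS lookup looks like in Python. It uses Cloudflare’s public JSON DoH endpoint purely as an illustration (it is not tied to any of the providers mentioned above), but the point is the same: the DNS question and answer travel inside an encrypted HTTPS connection, so an on-path observer sees only a TLS session to the resolver, not the names being looked up.

```python
# A minimal DNS-over-HTTPS lookup. The query is carried inside an encrypted
# HTTPS request, so someone watching the network sees a TLS connection to the
# resolver rather than the domain name being resolved.
# Cloudflare's public JSON DoH endpoint is used here purely as an example.

import requests

DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"

def doh_lookup(name: str, record_type: str = "A") -> list:
    """Resolve `name` over HTTPS and return the answer records."""
    response = requests.get(
        DOH_ENDPOINT,
        params={"name": name, "type": record_type},
        headers={"Accept": "application/dns-json"},
        timeout=5,
    )
    response.raise_for_status()
    return [record["data"] for record in response.json().get("Answer", [])]

if __name__ == "__main__":
    print(doh_lookup("example.com"))
```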

Inhibiting or preventing unwanted third parties from reading or even modifying your messages is an obvious example of technology to “protect users from harmful … conduct”, so encryption’s silent omission from the report is surprising.

While it’s commonplace to frown on some applications or implementations of encryption, its role in keeping all Internet users — children wishing to chat with their friends away from the prying eyes of predators, vulnerable adults seeking confidential guidance and counselling, anyone buying or selling online — safe is undeniable.

SafetyTech’s key legal issues

I thought it would be helpful to outline some of the key legal issues which those developing and implementing SafetyTech will need to consider.

It is not, of course, an exhaustive list, and each technology (and, most likely, each implementation) will have its own set of risks and requirements. Thinking these through before getting to the point of deployment will be essential.

Indeed, most, if not all, of the key SafetyTech legal issues I highlight here require consideration as part of the development process: they’re not things that someone can solve once the product is developed as a “morning after” legal bolt-on.

Freedom of expression

Some SafetyTech systems set out to constrain and restrict what conversations people can have online and what information they can impart and receive. For example, “web filtering” or “parental control” technologies might attempt to stop people subject to those controls from accessing particular sites or categories of content, while other technologies might attempt to stop someone from including particular words or phrases in a post or message.
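To illustrate just how blunt an instrument the second kind of control can be, here is a deliberately crude, hypothetical keyword filter in Python. The blocklist and the example messages are invented; the point is that naive substring matching (the classic “Scunthorpe problem”) will happily block perfectly innocent speech, which is precisely the sort of interference with expression discussed below.

```python
# A toy keyword filter, to show how easily naive blocking over-reaches.
# The blocklist is hypothetical and deliberately crude.

BLOCKED_TERMS = {"sex"}

def is_blocked(message: str) -> bool:
    """Return True if any blocked term appears anywhere in the message."""
    text = message.lower()
    return any(term in text for term in BLOCKED_TERMS)

print(is_blocked("Meet me in Essex next week"))   # True: a false positive
print(is_blocked("See you at the cinema later"))  # False
```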

Human rights law recognises the importance of freedom of expression and sets out limited grounds for derogating from it.

This is not a freedom enjoyed only by adults, as Article 13 of the UN Convention on the Rights of the Child states:

“The child shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of the child’s choice.”

Private companies implementing technologies which interfere with this fundamental right may have an easier ride than public sector organisations (such as schools), or governments attempting to put restrictions into legislation but, nevertheless, there will be issues to consider here.

For example:

  • Decisions around website classification which result in the wrongful blocking of a site could lead to liability, given the significant impact an incorrect blocking decision can have on the affected site.
  • ISPs deploying this kind of technology will be mindful of the interplay with the EU’s “net neutrality” / “open Internet” framework. Although it doesn’t look as if the relevant regulator in the UK, Ofcom, is unduly concerned, BEREC’s recent guidance indicates that ISPs must offer an unfiltered connection, probably by default.

Interception (and avoiding criminal offences)

Organisations which deploy technologies which entail accessing the content of a user’s non-broadcast communications while in the course of transmission will need to stay the right side of the criminal offence of unlawful interception.

For example, interfering with a user’s DNS queries, and monitoring the content of their communications as they pass across a network, are likely to trigger the interception regime.

The UK’s governing framework for this, the Investigatory Powers Act 2016, contains some helpful provisions (which I was involved in drafting!) but it is not the easiest law to navigate.

Data protection and ePrivacy

No list of potential legal risks facing online technologies would be complete without reference to data protection and privacy issues but, in the context of SafetyTech, the issues are likely to be numerous and complex.

Whether it’s profiling users via machine learning, the use of “big data”, monitoring children’s communications, surveilling their web browsing, forcing less secure connections, impersonating others (e.g. for a man-in-the-middle attack on their communications) or attempting to verify age or identity, organisations will need to ensure that what they are doing meets (at a bare minimum) the legal requirements for the processing of personal data.

The additional rules in the context of electronic communications, such as those relating to the storage of, or access to, information on users’ devices (perhaps running local scripts to attempt to detect unwanted or prohibited activity), could be relevant in at least some cases.

This gets all the more interesting in the context of processing of personal data of child users of online SafetyTech services, from being sufficiently transparent about what processing is going on, to carrying out a suitably robust data protection impact assessment and more.

For systems which entail reporting to law enforcement agencies, the issues are likely to be even more pronounced, demanding greater analysis and scrutiny — getting data protection correct when it comes to voluntary co-operation with law enforcement is not a trivial matter.

Authorised access and avoiding computer misuse

Some SafetyTech is likely to entail access to, or use of, third party platforms — for example, accessing content made available online, perhaps through scraping or perhaps through an API, to create datasets against which analysis or matching can be carried out.

SafetyTech providers will need to ensure that their access is in accordance with the law, which might entail negotiating (and, presumably, paying for) access to those third party systems.
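As a small, purely technical illustration (and emphatically not the legal test for authorised access, which will turn on contract terms, the computer misuse framework and any applicable IP rights), a scraper can at least check a site’s robots.txt before fetching anything. The user agent string below is a hypothetical one.

```python
# A basic courtesy check before scraping: consult the site's robots.txt.
# Honouring robots.txt is not, by itself, what makes access lawful; it simply
# avoids fetching pages the operator has asked automated clients not to touch.

from urllib import robotparser
from urllib.parse import urlparse

def may_fetch(url: str, user_agent: str = "ExampleSafetyBot") -> bool:
    """Return True if robots.txt allows `user_agent` to fetch `url`."""
    parts = urlparse(url)
    parser = robotparser.RobotFileParser()
    parser.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    parser.read()
    return parser.can_fetch(user_agent, url)

if __name__ == "__main__":
    print(may_fetch("https://example.com/some/page"))
```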

Similarly, SafetyTech which makes changes to users’ devices, or interferes with their operation, will need to be assessed from the perspective of the computer misuse framework to avoid the potential commission of criminal offences.

Copyright (and other rights) infringement

Developers of SafetyTech which entails the collection of content which is the subject of intellectual property rights — for example, copyright — will need to consider if they can take advantage of statutory exceptions, or else how they can obtain appropriate licensing for the rights they would otherwise be infringing.

Obvious examples of this type of SafetyTech include those scraping social media sites or building up vast databases of images for use in classifiers. If you’re copying a user’s photographs, on what basis are you doing so?

Contracts and challenges with customers

Ultimately, the aim of most companies involved in SafetyTech is to make money. And to make money, they need customers.

Indeed, the report on SafetyTech suggests that the government (and the broader public sector) should be required to contribute to the coffers of SafetyTech companies, with a recommendation to:

“Update public procurement guidelines, to ensure that public sector organisations are making sufficient use of Safety Tech to protect their own systems and members of the public.”

With customers come commercial deals, and contract terms, and warranties, and indemnities, and service level agreements, and disputes to resolve, and consumer information and cancellation requirements: all those things which turn a technology into a business.

Advertising standards

According to the self-regulatory framework for advertising in the UK, advertising must be legal, decent, honest and truthful. These principles are underpinned by a substantial self-regulatory code of practice which sets out the basis on which the Advertising Standards Authority will assess complaints.

Adverts which assert that buying or using a particular service or technology will make a user safe online must be evidence-based and must not over-play the degree of protection on offer. Any limitations in the protection should be made clear.

Although the ASA is itself self-regulatory and has no formal powers, Trading Standards holds the “backstop” powers under laws relating to misleading advertising. The Competition and Markets Authority also has new powers to seek “online interface orders” to curtail the spread of harmful advertising or problematic products.

The “Online Harms White Paper”

If you operate a service online for users in the UK, there’s a strong chance that you will have come across the UK government’s “Online Harms White Paper”.

The underlying assertion is that some companies are not doing enough to protect users, and the direction of travel appears to be the imposition of a (from a human rights law point of view, rather questionable) “duty of care”.

The debate around this is ongoing, and it is not law at the moment, but it seems likely that SafetyTech will form at least part of what is to come.

Danger through SafetyTech: mitigating risks of SafetyTech itself

Aside from the more legally-focused thoughts above, an angle which I have not seen explored in much detail is the danger which SafetyTech itself could create, if not designed and implemented with care.

Since the positioning of SafetyTech is typically (in my experience) an attempt to redress or mitigate harms caused by careless tech companies, it’s important that risks associated with SafetyTech itself are identified and mitigated.

For example, every few months, some bright spark comes up with the “novel” idea that online abuse could be curtailed by the imposition of a “real names only” requirement or a requirement that users must verify their identity before they can post.

In my experience, such proponents rarely consider the implications that this kind of requirement might have on, say, victims of domestic abuse, for whom anonymous or pseudonymous online communication — access to support groups, for example — might be vital. Nor do they consider the impact on those who do not have a form of government-issued identification, which is likely to lead to a further digital divide between the “haves” and “have nots”.

Surveillance-centric SafetyTech, designed with the intention of enabling parents to oversee and control which websites their child is visiting, could also enable coercive or controlling behaviour.

Consider, for example, the risks to a not-out gay teenager in a homophobic family environment if their private browsing was not, in fact, private. Or if it was deployed by one spouse against another. Or if it was deployed by a repressive regime against its citizens to look for dissent or what it considers to be sedition.

What about unintended bias in filtering resulting from the use of a model trained on biased data or implemented through a set of biased expectations or rules?
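One simple check a vendor could run, sketched below with entirely invented numbers, is to compare false-positive rates (harmless content wrongly blocked) across different groups of users or styles of language. If one group’s posts are wrongly blocked far more often than another’s, the filter is not treating its users equally, whatever the headline accuracy figure says.

```python
# A sketch of a bias check for a content filter: compare the rate at which
# harmless items are wrongly blocked (the false-positive rate) across groups.
# The moderation outcomes below are invented purely to show the calculation.

def false_positive_rate(decisions):
    """`decisions` is a list of (was_blocked, was_actually_harmful) pairs."""
    harmless = [blocked for blocked, harmful in decisions if not harmful]
    return sum(harmless) / len(harmless) if harmless else 0.0

group_a = [(True, False), (False, False), (False, False), (False, False)]
group_b = [(True, False), (True, False), (False, False), (False, False)]

print("Group A false-positive rate:", false_positive_rate(group_a))  # 0.25
print("Group B false-positive rate:", false_positive_rate(group_b))  # 0.50
```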

Presumably, purveyors of SafetyTech will be leading by example and conducting detailed and independent fundamental rights assessments (e.g. as part of data protection impact assessments, where the rights which need to be considered stretch beyond mere data protection or privacy rights) and reviews of the potential safety issues arising from the existence or deployment of their services.

I very much hope that they will be publishing these, both so that others can learn about “best practice” in undertaking this kind of analysis and to facilitate independent scrutiny.
