The legal and reputational risks in the end use of technology: what are they and how can we mitigate them?

Sarah Ellington discusses potential legal and reputational risks in the end use of technology, drawing on examples from court cases and non-judicial complaints, and provides some practical pointers on how to mitigate such risks.

Recent decades have seen a dramatically accelerating pace in the development and adoption of new technologies such as big data, facial/voice recognition systems, machine learning, artificial intelligence, and satellite and drone technologies. We have also seen a rapid rise in the understanding of human rights, and of the need for strong governance systems within multinational corporations - particularly since the UN Guiding Principles on Business and Human Rights were unanimously endorsed by the UN Human Rights Council in 2011.

Coupled with the rapidly evolving nature of technological development, the significant spread of end users and the limitations of existing legislation, this can leave tech companies exposed to significant legal and reputational risks, in particular where their tech is used in unintended ways, or by unintended recipients. Recent court cases and non-judicial complaints can help tech companies better understand these risks and point to some ways to potentially mitigate them.

What is technology?

It might seem like a strange question to pose, especially to the sophisticated reader of Computers & Law magazine, but sometimes it is instructive to go back to basics. The Collins English Dictionary defines technology as "[referring] to methods, systems and devices which are the result of scientific knowledge being used for practical purposes". The end practical purpose for which technology is used is therefore integral to, and arguably inseparable from, the technology itself. This provides the underlying premise on which a technology company may be found liable for the end use of its product, even where it claims that the end use is entirely independent of the technology company itself.

What are the most prominent areas of risk? 

On examining published cases, a number of themes emerge, highlighting key areas of human rights risk associated with the ever-increasing proliferation of technologies and the exponential rise in the number of end users. The below is a summary of some of the main cases touching on these themes, across both court litigation and non-judicial complaint mechanisms - specifically National Contact Points, which consider alleged failures to comply with the OECD Guidelines for Multinational Enterprises, and the UN complaints mechanism, which links into special mandates and working groups set up by the Office of the High Commissioner for Human Rights.

1. Court litigation 

Under the UN Guiding Principles on Business and Human Rights, states are considered to be under a duty to protect human rights, including through the provision of judicial remedy. Many international covenants endorsed by states also include a duty on states to ensure that those seeking remedy are able to have their rights determined by competent judicial, administrative or legislative authorities. Courts, therefore, provide a key tool to determine the extent of rights and provide remedy to those affected. However, cases which touch on human rights issues are not always brought by rights-holders themselves: sometimes companies are called upon to enforce contractual and/or intellectual property rights in order to prevent or stop abuses occurring. Some key themes in such cases relevant to the tech sector are set out below.

(1) Surveillance. Whilst surveillance technology is often designed to help improve public safety and national security, such technology can also be misused to enable human rights breaches. In one French case, two NGOs filed a criminal complaint asking the French authorities to investigate the alleged involvement of a French tech company in supplying surveillance equipment to the Syrian government. The case was dismissed in December 2020 on the basis that there was an insufficient causal link between the surveillance equipment and the human rights abuses.

In another two French cases,1 a defendant company sold its surveillance technology to the Gaddafi government in Libya and to the Egyptian government. NGOs filed two criminal complaints. In June 2021, the company and four of its executives were charged with "complicity in acts of torture", a judicial outcome that was legally and reputationally damaging to both the company and the individual executives.

(2) Spyware. Spyware is a type of malicious software which can access personal data on your devices without your consent and relay it to other parties, potentially violating individuals' rights to privacy and freedom of expression.

In 2019, the owner of a messenger application filed a lawsuit in the US against an Israeli spyware vendor2 which had hacked the company's servers to plant spyware on 1,400 user devices worldwide, targeting journalists, lawyers, religious leaders and political dissidents. The messenger application's owner claimed that: (1) the hacking of users' data violated the federal Computer Fraud and Abuse Act as well as the California Comprehensive Computer Data Access and Fraud Act; (2) the spyware company had breached its contracts with the plaintiff; and (3) the spyware company had committed "wrongful trespass" on the plaintiff's property. In November 2021, the US appeals court rejected the defendant's claim of protection under sovereign immunity laws, allowing the lawsuit to proceed.3 The case is still ongoing.

(3) Data harvesting. When talking about data harvesting or data collection, it is impossible to avoid the very recent English case brought by Mr Lloyd against a tech company.4 Mr Lloyd, the former executive director of Which? and a consumer rights activist, pursued a representative claim, on behalf both of himself and other affected users, against the company for collecting, without consent, iPhone users' browser-generated information (which constituted personal data under the Data Protection Act 1998), thereby breaching its obligations under the DPA 1998, and sought damages for the losses suffered by these users. Even though the Supreme Court unanimously dismissed his claim, the case itself lasted three years in the English courts, and the tech company remained in the spotlight in terms of data collection during and after that period.

Across the pond, nine users of online applications filed a similar class action complaint under federal and state law in a US court against the same tech company,5 which provided services to these online applications. The claims were based on different causes of action because of (i) the different platforms used for the alleged data collection (the English case concerned users' browsers, whereas the US case concerned third party online applications); and (ii) different legal bases (the English claim was based on data breaches under the UK's DPA 1998, whilst the US claim alleged invasion of privacy under US federal and state law). In this data collection case, the plaintiffs advanced six claims for relief, under (i) section 2511(1)(a) of the Wiretap Act; (ii) sections 631 and 632 of the California Invasion of Privacy Act; (iii) the California Computer Data Access and Fraud Act; (iv) California's common law right against intrusion upon seclusion; (v) California's constitutional proscription of invasion of privacy; and (vi) California's Unfair Competition Law. The tech company moved to dismiss the complaint on the bases that (i) the alleged "secret" software did not exist; and (ii) it had obtained all necessary consents from consumers and from the developers of these online applications.

In May 2021, the US court held that the consumers did not give consent to the tech company collecting their data via third party applications when the "Web & App Activity" setting was turned off. However, the developers did give consent (when they signed the agreements before using the tech company's platform to build and maintain the applications), and the tech company did not deploy any "secret" software for the alleged data collection. In conclusion, the defendant's motion to dismiss was granted with respect to the plaintiffs' claims for relief under (i), (ii) (s632 of CIPA) and (vi), and denied under (ii) (s631 of CIPA), (iii), (iv) and (v). The plaintiffs then amended their complaint for the defendant to respond. In January 2022, the court granted the defendant's motion to dismiss the claim under (ii) (s631 of CIPA). The plaintiffs' case remains ongoing, with two Californian common law privacy claims and a CDAFA claim past the pleading stage.

(4) Discrimination. Recent investigations by campaign groups have suggested that social media platforms may be producing discriminatory content due to the algorithms or machine learning they employ. In the US, a few social media companies have recently been flooded with civil rights lawsuits over alleged racial, gender and/or age discrimination caused by their targeted advertising.

(5) Workers' rights in the gig economy. The rapidly expanding gig economy, utilising high profile digital platforms, has arguably eroded the traditional concept of employment, using unconventional working arrangements such as casual, temporary, freelance, on-demand, or "gig" work, to increase productivity and profit.6 The companies developing these digital platforms or apps have often argued that they are simply technology companies, connecting users with service providers, without taking any responsibility for the delivery of the services themselves, or for the individuals who provide them. There have been headline cases in many jurisdictions - especially in the UK, US and some EU countries which have stronger labour law protections - finding that service providers are workers (due to the control exercised through the terms and conditions necessary to provide services through the apps), rather than contractors. This is part of a wider recognition that companies enabling the provision of services through technology cannot simply label themselves as technology companies, thereby distancing themselves from the real world impacts of those technologies.

2. NCP complaints

National Contact Points are bodies set up by the governments of OECD member states to promote the effectiveness of the OECD Guidelines for Multinational Enterprises and to seek to resolve issues arising from the alleged non-observance of the Guidelines. NCP complaints in the technology and telecommunications sector have centred around a few key themes:

(1) Dissemination of personal information. An NGO filed complaints with the UK NCP against six telecommunications companies, alleging that these companies facilitated mass interception of internet and telephone traffic by granting the UK Government access to their networks for a surveillance programme. However, the UK NCP rejected the complaints on the basis that the activities were in accordance with the law.

(2) Facilitating military action. A UK civil society organisation filed a complaint with the UK NCP against a telecoms company, alleging that the company had contributed to gross human rights violations by providing communications infrastructure from a US military base in the UK to Camp Lemonnier in Djibouti. The UK NCP rejected the first complaint, filed in 2013, on the basis that the CSO had not substantiated a link between the communications services and the impact of the US drone operations. Based on new evidence, the organisation filed a second complaint with the NCP in August 2014, then a third related complaint in October 2014. In January 2015, the UK NCP rejected these two complaints on the basis that the organisation could not substantiate its allegations and had not offered any new direct knowledge of the company's link to the impacts, but had relied on new information from generally available sources.

(3) Contributing to internet censorship. An NGO filed a complaint with the Italian NCP against an Italian telecoms equipment company, alleging that the advanced technologies offered by the Italian company to an Iranian telecoms company risked contributing to internet censorship and the suppression of fundamental freedoms and human rights in Iran. The Italian NCP rejected the complaint on the grounds that information submitted by the Italian company showed the project was of a narrower scope than that initially announced; that the Italian company had taken adequate steps to disable interception of the telecommunication data; and that the Memorandum of Understanding between the two companies was yet to be finalised as a contract, suggesting "that the current business relationship cannot be assessed as an actual or potential breach of the guidelines". The complaint nonetheless prompted the Italian company to reassess its commercial relationship with the Iranian company and whether it was willing to accept the associated legal and reputational risks.

(4) Use of tech platforms to incite violence. On 9 December 2021, 16 individuals (assisted by an NGO) filed a complaint with the Irish NCP against a tech company. The complainants alleged that the military and others used the tech company's social media platform to incite violence, which resulted in numerous human rights violations suffered by a minority group in Myanmar in 2017. The complainants alleged that the tech company was in breach of the Guidelines because it: (1) failed to conduct adequate due diligence for its business operations in Myanmar; (2) contributed to human rights violations in 2017 through both its acts and omissions; (3) did not have a policy commitment to respect internationally recognised human rights as at 2017, and has since issued one that is not compliant with the Guidelines; and (4) failed to provide a remedy despite contributing to the human rights violations. The filing to the Irish NCP coincides with court filings. On 6 December 2021, a number of individuals from the minority group, based in the US, filed a class action lawsuit against the tech company in California for the role the platform played in violence committed against the minority group since the company's entry into the telecommunications market in Myanmar. A letter before action making similar allegations has also been sent in England.

3. UN complaints 

The United Nations Human Rights Council has appointed a number of Special Procedures mandate holders - made up of special rapporteurs, independent experts or working groups. The Special Procedures mandate holders play a role in investigating complaints made to them and seeking to obtain resolution of issues. In recent years, the majority of UN complaints against tech companies have been made in connection with breaches of the rights to freedom of opinion and expression, the right to privacy, and harmful dissemination of information.

(1) Withholding access/restricting freedom of expression. The aforementioned Israeli spyware company has also faced UN complaints over its alleged interference with the right to freedom of opinion and expression and the right to privacy. Another example is the alleged abuse of the right to freedom of expression and information by a social media platform in relation to its operations in Myanmar.7

(2) Harmful dissemination of information. There have also been complaints alleging that some social media companies have been facilitating the dissemination of extremism and terrorism, as well as spreading misinformation during the Covid-19 pandemic.

Who is bringing claims?

As can be seen from the above cases, NGOs, individuals and other tech companies are often the parties bringing these claims/complaints, either against tech companies or against end users misusing technologies. NGOs often publish the progress of their cases on their official websites and invite coverage by the national and international press, thereby further exposing companies to reputational risks as well as actual or potential legal risks. There are also an increasing number of group/representative actions against tech companies, especially where the rights of a large population of consumers have been harmed.

What remedies do claimants usually seek?

Remedies sought are, of course, dependent on the nature of the claims made as well as the forum of the dispute – for example, National Contact Points are only able to issue recommendations while the UN complaints procedure focusses on encouraging dialogue and finding an agreed solution. Some common examples of remedies sought are set out below. More wide-ranging and often forward-looking remedies are also achieved through settlements between the relevant parties.

1. Compensatory and punitive damages. Damages are the most commonly sought remedy in these claims, alongside other remedies.

2. (Prohibitory) injunction preventing further action. In the spyware case, the company whose system was hacked by the spyware sought a permanent injunction blocking that spyware company from attempting to access its computer system.

3. (Mandatory) injunction requiring specific action. In a German case,8 the claimant asked the court to order an online search engine to stop showing a specific search result which could easily be understood to imply that the claimant was an incurable sex offender. The court held that this amounted to an infringement of his general right of personality and a breach of the duty of care, and therefore granted the injunction.

4. Criminal sanctions. As can be seen from the above-mentioned case in which the French company sold surveillance technology to the Gaddafi and Egyptian governments, complicity in human rights breaches can bring criminal sanctions.

5. Destruction of data gathered. In a US case,9 a tech company obtained clients' faceprints without their consent. As these individuals were survivors of domestic violence and sexual assault, and members of other vulnerable communities, they had particular reasons to fear a loss of privacy, anonymity and security. The plaintiffs sought orders requiring the defendant tech company to (1) destroy all faceprints they gathered; and (2) stop capturing new faceprints without consent under the state's legislation.

6. Other remedies. Other remedies have also been sought, such as declaration as to applicability of statute; user damages (compensation for a right to control use of property); and unjust enrichment (disgorgement of profits).

Mitigating risks

So, what should technology companies do to mitigate risks? Best practice under relevant soft law, or voluntary standards, including the UN Guiding Principles on Business and Human Rights (which also forms the basis for a number of the recent European regulatory initiatives on due diligence) includes:

  • Conducting due diligence to understand the risks which your business operations and the operations of those you do business with may pose to the human rights of third parties;
  • Undertaking action to prevent or mitigate the potential for those risks to occur;
  • Providing access to remedy where human rights are breached and your business has caused or contributed to the breaches, and/or exercising leverage over third parties who you have a business relationship with if those third parties have caused or contributed to the breaches;
  • Reporting on the actions undertaken under the above.

Undertaking these actions is designed to stop human rights abuses occurring and to ensure that, if they do occur, those affected have access to effective remedy, thereby also mitigating legal and reputational risks to the business concerned.

Red flags for liability

Even where a business is not ready to undertake a full due diligence process as described above, it should be mindful of red flags for potential liability. These may include:

1. Obviously unlawful materials. If materials shared on a platform are obviously unlawful (e.g. sexually offensive or extremely violent) and have been viewed many times, it is difficult for the operator of such a platform to argue that it had no knowledge of those materials.

2. Tech customised and implemented for a specific purpose/jurisdiction. In the French surveillance cases, there were obvious red flags for criminal liability arising from the combination of the nature of the security monitoring technology and its implementation in specific jurisdictions (i.e. Libya and Egypt).

3. Breach of statutory rights. Statutory liability will likely arise if a court finds a breach of statutory rights. The best examples are data breaches under data protection law and other breaches of consumer protection legislation.

4. Gravity of harm. The greater the harm caused to claimants, the greater the reputational and legal risks.

5. Lack of knowledge and training. If a tech company's management and employees have no/limited understanding that technologies can cause harm and expose companies/individuals to civil and/or criminal liabilities, they will be less well equipped to deal with any issues as they arise.

Tech specific mitigating actions

Having discussed the court cases and non-judicial complaints in relation to the end use of technology, how might tech companies mitigate these risks?

1. Carry out due diligence and prevent harms. It is important for tech companies to carry out due diligence to understand how their technologies will be used, who the end users will be and who will be targeted, so that the technology is not used to facilitate or contribute to harm. If the company becomes aware of misuse of its technology, inaction may still expose the company and its executives to legal and reputational risks. In these situations, action may need to be taken to prevent harm (or to remedy breaches and mitigate any ongoing risks). If the company is unsure how to react to a sudden political crisis, specialist advice may be sought before making any decisions.

a. Selecting trusted intermediaries. Where possible, tech companies should use trusted intermediaries between themselves and end users, with the aim of ensuring that the technology is only used by intended groups and in an intended way.

b. Obtain users' consents wherever necessary and when required by local data protection or privacy law (in the US). Even though audiences in the EU or UK may be bored of discussions of the GDPR, it is still vital to emphasise the importance of data protection law, since online data is becoming one of the most valuable assets for many companies and legislative scrutiny is likely to tighten in the foreseeable future. Since the GDPR came into force in May 2018, it has come to be considered one of the most developed legal areas for human rights protections.

2. Utilising contractual provisions. In IT contracts, the tech company could seek to put in place indemnities or other contractual provisions to ensure that third party contractors are obliged to protect against the tech falling into the wrong hands or being misused, and will be equally "on the hook" if any issues arise - thereby creating contractual leverage.

3. Building in 'expiry' unless end users are approved. For end users over whom tech companies have little or no control, access to the technology could be limited to a short period only, in order to limit exposure to legal or reputational risks arising from harms caused by unapproved end users. Further or alternative protection may be provided by contractual terms requiring updates/continued provision of information for due diligence, without which the service may be withdrawn.
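As an illustration only, such a time-limited access scheme might be sketched in code as follows. This is a minimal sketch, not a description of any real product: the `AccessGrant` structure, the 90-day review period and the due-diligence "review passed" flag are all hypothetical assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class AccessGrant:
    """Hypothetical time-limited grant: access lapses automatically
    unless the end user passes a fresh due-diligence review."""
    user_id: str
    expires_at: datetime

    def is_valid(self, now: Optional[datetime] = None) -> bool:
        # Access is only valid before the built-in expiry date.
        now = now or datetime.now(timezone.utc)
        return now < self.expires_at

def issue_grant(user_id: str, review_period_days: int = 90) -> AccessGrant:
    # Grant access for a short, fixed window only.
    return AccessGrant(
        user_id=user_id,
        expires_at=datetime.now(timezone.utc) + timedelta(days=review_period_days),
    )

def renew_after_review(grant: AccessGrant, review_passed: bool) -> Optional[AccessGrant]:
    # Renew only if the periodic due-diligence review was passed;
    # otherwise the grant lapses and the service is withdrawn.
    return issue_grant(grant.user_id) if review_passed else None
```

The design point is simply that the default is withdrawal: unless the provider takes a positive step (a passed review and a renewal), access lapses on its own, rather than continuing indefinitely for an end user who can no longer be vetted.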

4. Access to remedy. Tech companies should consider facilitating access to remedy before resorting to judicial or non-judicial proceedings. Examples include dispute boards, the purpose of which is to save time and costs on disputes, ensure the services are delivered on time and on budget, and help maintain relationships. However, some dispute boards have been criticised for failing to offer effective remedy. The UN Guiding Principles on Business and Human Rights set out a number of "effectiveness criteria" which can act as a guide when setting up a non-judicial mechanism for access to remedy.

5. Exercising property rights. If there is a security breach by an end user, the company that suffers the breach could, as in the spyware case:

a. use property rights to protect itself and ultimately its consumers' data where there is no existing contract governing the relationship between the property owner and the end user. The property owner might be able to obtain an injunction against the use of the technology in that specific way (i.e. a way which is not authorised or licensed); or

b. enforce the contractual obligations/liabilities if there is a valid contract between them. A security breach could fall within one of the specified circumstances under the breach clauses entitling the tech company to withdraw access to a product. In this way, tech companies may use property rights and contractual rights to protect the human rights of individuals.

______

Notes & sources

[1] NGOs v. Amesys; NGOs v. Nexa Technologies (ex Amesys)

[2] US District Court, Northern District of California, case no. 3:19-cv-07123-JSC, Document 1

[3] US Court of Appeals for the Ninth Circuit, case no. 20-16408

[4] [2021] UKSC 50

[5] US District Court, Northern District of California, case no. 3:20-cv-04688-RS, Document 1

[6] https://media.business-humanrights.org/media/documents/files/documents/ES__CLA_AB_2019_ENGL_FINAL_22_Mar.pdf

[7] Link to the news: https://www.bbc.co.uk/news/technology-43385677

[8] Higher Regional Court Cologne, file no: 15 U 56/17

[9] Circuit Court of Cook County, Illinois, case no. 20 CH 4353

______


Sarah Ellington is a recognised expert on disputes relating to environmental, social and human rights issues.

-----------

This article is also published in the Spring 2022 issue of Computers & Law which has been guest edited by the SCL Sustainability and ESG Group


Published: 10 May 2022
