Coran Darling and Hayley Milner survey what regulators in the UK, EU and US are doing to safeguard human rights in the new AI era.
The TV turns on, without prompt, and switches to the news. The company you work for has decided to implement a new AI system, rendering your role and those of many of your colleagues redundant. You ask your virtual assistant to bring up your social media and attempt to post your distaste at the decision. For some reason, the post does not send: the AI moderator has deemed your opinion contrary to its content guidelines. A new notification flashes on screen, indicating that your private messages contain details contrary to the government’s ‘suggested morals’, and your account is swiftly shut down. You sit, staring at the black, empty screen, and wonder when AI became so pervasive in everyday life.
Until relatively recently, the thought of widespread use of complex and sophisticated AI belonged to a distant future, reserved for films in which the protagonist fights a growing technocratic society. While the use of these complex technologies has so far been a promising and encouraging development that can be harnessed for the good of society, the growing potential for AI to be used for less altruistic ends is apparent. Several initiatives, including a recent open letter signed by a number of prominent technology experts seeking a pause on certain AI experiments, have subsequently been put into action to quell the potential for AI to harm a person’s human rights, democracy, and the rule of law more generally.
This article examines several significant concerns modern AI raises for wider society and the approaches regulators have taken in their mitigation.
“A robot may not injure a human being or, through inaction, allow a human being to come to harm”
Asimov’s collection of technological allegories, I, Robot, applies three core laws that robots must follow throughout their lifecycle. The most frequently cited is the first: that harm may not come to a person through the behaviour of their artificial counterpart. In each of the stories, we quickly discover that attempting to control AI is far more complex than anticipated. The same is true in reality: many of the AI programmes currently circulating (whether generative AI or predictive algorithms) have the potential, through normal intended function, to harm or impact the rights and lives of citizens.
The proliferation of advanced technologies, such as those enabling real-time facial recognition and surveillance, has undoubtedly strengthened the ability of law enforcement and intelligence agencies to carry out their responsibilities in protecting the public. Yet, while a useful asset in good hands, it cannot be escaped that misuse, or use by malicious parties, may result in the infringement of a person’s right to privacy.
Over the past few years, the use of advanced technology by state entities across the world to identify parties involved in protests and events of social unrest has led many to raise concerns about these newfound capabilities. In some instances, biometric data has successfully identified participants, allowing their information to be passed to authorities seeking to limit the expression of certain ideologies.
The potential to subdue and manipulate public behaviour using these technologies, under the guise of “Big Brother is watching you”, has been recognised by several stakeholders and is already finding its way into regulators’ drafting. In the EU, for example, the EU AI Act is set to prohibit the use of real-time biometric identification in publicly accessible spaces outside of very strict parameters, and only to the extent necessary. Whether this approach will be adopted in other jurisdictions remains to be seen.
Speech and expression:
AI also has the potential to inhibit a person’s ability to express themselves. An increasingly common example, as seen above, is the use of AI-powered content-moderation mechanisms on social media platforms. These programmes detect content that the creator or website host deems inappropriate. Offending posts are then prohibited or restricted, or the user seeking to express their thoughts is banned.
When used appropriately, these moderators can maintain decorum in the digital Wild West. Algorithms can detect hateful and offensive speech, prohibit spam, and identify and ban persistent internet trolls. The same mechanisms, however, can equally be used to censor expression that is contrary to the opinions, morals, or beliefs of those who implement them. Posts containing politically sensitive speech, for example, could be identified by state agencies and removed the moment the user clicks ‘share’. Should the technology be used in such a manner, the public may find themselves exposed to expression heavily weighted towards the ‘preferred’ type, or may have their voice silenced before they are able to share their opinion.
The ability of AI to overtly alter our behaviour is not the only risk it poses. Through technologies such as behavioural trackers, AI also has the potential to alter a person’s behaviour through suggestive, subliminal practices. Often this takes the form of ‘personalisation’ programmes, where an algorithm reviews data inputs from a user’s search history, purchase history, and similar data points, and recommends ‘preferred’ services, products, or information. A prime example is the ‘for you’ recommendations often present on social media and online retail websites.
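By way of illustration only, a ‘for you’ feed of this kind can be sketched as a simple content-based loop. The catalogue, categories, and helper below are hypothetical, not any platform’s actual system; real recommenders use far richer signals and learned models.

```python
# Minimal sketch of a 'for you' personalisation loop (hypothetical data).
from collections import Counter

# Hypothetical catalogue mapping items to categories.
CATALOGUE = {
    "running shoes": "fitness",
    "yoga mat": "fitness",
    "espresso maker": "kitchen",
    "chef's knife": "kitchen",
    "noise-cancelling headphones": "electronics",
}

def recommend(browsing_history, n=2):
    """Recommend unseen items from the categories the user views most often."""
    category_counts = Counter(CATALOGUE[item] for item in browsing_history)
    ranked = sorted(
        (item for item in CATALOGUE if item not in browsing_history),
        key=lambda item: -category_counts[CATALOGUE[item]],
    )
    return ranked[:n]

history = ["running shoes", "yoga mat", "espresso maker"]
print(recommend(history))  # the kitchen item outranks electronics
```

The point of the sketch is that the loop only ever reinforces what the user has already looked at: nothing in it asks whether the inferred ‘preference’ is one the user actually wants amplified.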
Often, however, these offered ‘preferences’ do not actually align with a person’s interests. A person searching for images of people lost at sea does not (usually) want to be lost at sea. Nor does a recovering alcoholic want to be offered alcohol. Unfortunately, this risk is not simply theoretical: misuse of recommendation algorithms has been implicated in the deaths of individuals in the past.
It is therefore no surprise that regulators are keen to monitor the use of these types of algorithms to ensure that manipulative behaviour cannot occur. The EU, for example, has included in its proposed regulation a prohibition on AI systems that manipulate persons through subliminal techniques or exploit vulnerable classes. The future success of this prohibition remains to be seen, and questions have been raised as to what exactly falls within the scope of such techniques: advertisements and search recommendations, technically speaking, also have the potential to manipulate behaviours and selections based on what they prioritise showing users.
Bias and discrimination:
One of the most frequently flagged concerns about AI is its ability to perpetuate bias and discrimination in its outcomes. It is therefore understandable that this is one of the main thorns in the side of organisations wishing to garner trust in their AI systems. Biased AI can be all it takes to prevent a person from being admitted to their chosen university, from gaining access to credit or loans, or from being accepted into employment.
A study carried out by the Council of Europe in 2018 found that this concern is not without reason, and that there was increasing evidence of gender, ethnicity, disability, and sexual orientation discrimination in the application of several algorithms. This was furthered by the EU Fundamental Rights Agency, whose own work indicated that AI has the potential to reinforce discriminatory practices if not adequately trained and monitored.
These forms of bias and discrimination often, though not always, emerge from improper training and learning techniques. For example, if an algorithm is trained primarily on data from a specific gender, its outputs are unlikely to reflect the outcomes that would follow from a more representative spread of data. The issue may also occur where data is purposefully entered in a way that, on paper, reflects total equality, into an algorithm dealing with situations where real-world data would not reflect complete equality. Inaccurately representing the context of real-life data in such cases may lead to undesired outcomes or, in some instances, further discriminatory behaviour.
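The mechanism can be shown with a deliberately simplified sketch. All scores, groups, and the threshold rule below are hypothetical: a decision cut-off is fitted to one group’s data and then applied to everyone, producing starkly different approval rates.

```python
# Illustrative only: hypothetical loan-approval scores for two groups.
# Group A dominates the training data; the decision threshold is fitted
# to A's score distribution and then applied to all applicants.

def fit_threshold(scores, approval_rate=0.5):
    """Pick the cut-off that approves the top `approval_rate` share of training scores."""
    ranked = sorted(scores, reverse=True)
    return ranked[int(len(ranked) * approval_rate) - 1]

# Hypothetical data: group B's scores sit on a shifted scale, reflecting
# how the data was collected rather than actual creditworthiness.
group_a = [620, 640, 660, 680, 700, 720, 740, 760, 780, 800]
group_b = [520, 540, 560, 580, 600, 620, 640, 660, 680, 700]

threshold = fit_threshold(group_a)  # fitted on group A only

approved_a = sum(s >= threshold for s in group_a) / len(group_a)
approved_b = sum(s >= threshold for s in group_b) / len(group_b)

print(f"threshold={threshold}: group A approval {approved_a:.0%}, group B approval {approved_b:.0%}")
# threshold=720: group A approval 50%, group B approval 0%
```

Nothing in the code is ‘malicious’; the disparity arises entirely from an unrepresentative training set, which is why audits of training data, rather than of code alone, feature so heavily in the regulatory proposals discussed below.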
In many instances, regulators may be able to rely on existing anti-discrimination and equality legislation to, at the very least, prohibit the use of algorithms that exhibit biased outcomes or discriminatory behaviour. This legislation does not, however, appear to go far enough where bias or discrimination is the result of error or inaccurate data. Here, regulators may instead look to existing product safety and consumer protection regulations, but in many cases these laws also fall short of protecting the public in the context of AI. In response, we are beginning to see the emergence of AI-specific regimes that may allow for redress where bias or discrimination comes to light.
A notable example is the revised EU AI liability regime, which allows parties to seek redress where damage (such as that caused by a discriminatory hiring algorithm) occurs at the hands of AI. It is unclear at this stage whether these regimes go far enough; we may only fully recognise the gaps in regulatory systems when cases of biased or discriminatory behaviour in the context of AI lead to litigation.
Shutting the barn door after the horse has bolted: Is intervention too little and/or too late?
Several regulators have sought to protect these rights from infringement by AI, whether malicious or accidental.
The UK Government has, since the release of its National AI Strategy in September 2021, tasked itself with making Britain a global AI superpower within ten years. As part of this strategy, on 29 March 2023, the Government released a whitepaper detailing its approach to the regulation of AI within the UK. The whitepaper recognises the potential for wider societal benefit from AI, while acknowledging that regulation is required to ensure that development and implementation proceed in an efficient and safe manner. Particular consideration is therefore given to human rights. The UK approach distributes the regulation of AI across the existing UK regulators, and the whitepaper requires that those regulators consider several principles in their approach.
These principles include: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
It is hoped that regulators will be able to utilise these principles, alongside existing legislation, such as the Human Rights Act 1998, the Equality Act 2010, and the current data protection regime, to protect the rights of the UK public. The Government has noted that, as parliamentary time permits, further, more specific AI regulation may come to fruition; however, the scope and specifics of any such regulation remain unclear.
In Europe, the Council of Europe released a revised Zero Draft Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law (“AICHR”), which is due to be finalised and readied for enforcement in November 2023. Its purpose is to ensure that AI systems are fully compliant with human rights, provide appropriate respect for democracy and its mechanics, and observe the rule of law, whether the relevant use of AI is carried out by public bodies or by private organisations acting on their behalf. It does so by enshrining many of the principles already present in the European Convention on Human Rights. For example, Article 12 of the AICHR enshrines the principle of equality and non-discrimination by ensuring that the design and deployment of AI systems respect the equality of genders and ethnicities and the rights of vulnerable persons.
The AICHR does not, however, provide any material methods of ensuring that these principles are upheld. While it specifies that systems allowing redress for infringements of these rights must be established, the method of doing so is not prescribed; instead, the provisions rely on domestic regulation to uphold these rights. It is likely that these gaps will be plugged to some extent by the impending EU AI Act, which proposes several tangible obligations on organisations (both public and private), together with mechanisms of redress, that will become domestically applicable on enforcement. This is expected to work alongside the developing EU AI liability regime, which accounts for situations where a person has suffered damage at the hands of an AI system.
In the US, in October 2022, the White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights. Much like the recent UK approach to the regulation of AI, the Blueprint sets out an initial framework for how government entities, technology companies, and the public can work to ensure AI is implemented in a safe and accountable manner. So far, the US position has been to avoid direct intervention, leaving regulation at state or federal level to others. Instead, the Blueprint sets out several principles to guide organisations in managing and regulating AI in a way that upholds the rights of citizens.
These principles include (in a similar form to the UK approach): safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.
As this is merely a framework of principles, the Blueprint does not offer any tangible protections without the direct intervention of state and federal regulators enacting legislation, or the self-regulation of technology organisations and stakeholders. While this allows for regulatory flexibility, it is unclear to what extent these principles will be followed or whether sufficient protections will be implemented.
While regulators are waking up to the need for legislation to protect the rights of their citizens, the mechanics of democracy, and the wider rule of law, it remains unclear to what extent they have succeeded in their goal. We continue to see rapid advancements in technology that far outpace the speed at which the regulatory engine can work. As indicated above, many of these approaches are soft and subject to change. There is therefore no current indication as to whether they have gone too far or not far enough, or whether they have caught on to the need for change too late in the day. What remains the case across all regulatory approaches is that regulators have made it clear AI is here to stay, and we are only beginning to see the extent to which the technology is set to revolutionise the way we live, work, and do business.
Coran Darling is an associate at DLA Piper where he is a core member of the firm’s working group for AI and the firm’s AI and health tech teams. He is a member of the Data Ethics Group of the Alan Turing Institute, a member of the European Commission’s AI Alliance, and part of the OECD’s network of experts for AI and working groups for risk and accountability, and AI incidents.
Hayley Milner is a Trainee Solicitor at DLA Piper with experience in finance and technology transactions, having joined from the banking industry. She is an active member of DLA Piper’s Working Group on AI.