Actualising AI Ethics through Algorithmic Accountability

June 16, 2022

Where should a company start with AI ethics?

Where should companies start when they wish to build ethical artificial intelligence (AI) technologies? How can the concept of AI ethics be translated into and embedded within AI technologies? The answers may well lie in the understanding and execution of algorithmic accountability.

Algorithmic accountability aims to examine the ‘soul’ or ethical constitution of intelligent computer programmes. Accountability mechanisms set the foundation for AI to be tested and scrutinised against ethics-based criteria. An algorithm’s performance on such tests determines how AI measures up to ethical standards and, based on the outcomes, whether it can be considered ethical at all. This article looks at AI ethics through the lens of algorithmic accountability, as a means of giving the term tangible meaning.

Understanding the fundamentals 

The Cambridge Dictionary defines AI as “the use of computer programmes that have some of the qualities of the human mind, such as the ability to understand language, recognize pictures and learn from experience”. John McCarthy, one of the founders of the field of AI, defined AI as “the science and engineering of making intelligent machines, especially intelligent computer programmes”.1 Computer programmes, therefore, are considered intelligent only when capable of displaying traits associated with human intelligence. Without demonstration of these underlying traits, computer programmes have no metric(s) of intelligence to be evaluated against; hence, the question of their intelligence either does not arise or is moot.

AI ethics “is a set of values, principles and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies”.2 Here, the term ‘AI ethics’ bolts ‘ethics’ onto intelligent computer programmes: just as AI must pass muster on ‘intelligence’, its accountability to ethical criteria (criteria associated with human beings) is critical for it to qualify as ‘ethical’.

Now let’s examine an oft-used definition of ‘accountability’: “a relationship between an actor and a forum, in which the actor has an obligation to explain and to justify his or her conduct, the forum can pose questions and pass judgement, and the actor may face consequences”.3 This offers a two-pronged insight into the mechanics of accountability: first, actors are answerable to a forum and may be held liable for their behaviour; and second, that forum is empowered to query the conduct of actors and enforce punitive action.4 In this analysis, actors can be likened to AI developers and corporations, while courts of law or regulatory agencies are the fora.

So is algorithmic accountability synonymous with AI ethics? I suggest it is and propose a definition along the lines below:

“Algorithmic accountability is a set of quantifiable metrics which can be evidenced through the incorporation of ethics principles in the design, build and implementation of the algorithm, and adherence to such principles in both algorithmic decision-making and the derived outputs.”

This fresh perspective drifts away from a liability-focussed characterisation that hinges on “assigning responsibility when algorithmic decision-making results in discriminatory and inequitable outcomes”.5 The very existence of AI ethics rests on the applied realisation of algorithmic accountability mechanisms. Ethics without accountability is pure fiction, a phantom notion. Algorithmic accountability is the mechanism that ensures the integration of ethics into AI.

Applying ethics through algorithmic accountability

Let’s examine three practical illustrations where AI ethics principles can be implemented through algorithmic accountability mechanisms.

Ethics principle(s)              Algorithmic accountability mechanism
Equality and social inclusion    Transparency
Trustworthiness                  Claims-verification
Fairness and absence of bias     Data accountability

Transparency: promotes equality and social inclusion

Transparent reporting mechanisms are critical in high-stakes decision-making, given the burgeoning use of AI solutions in industry, government and society. Such mechanisms entail establishing the characteristics and series of steps that trace and track the development of an AI solution, ensuring ethics are woven into its design, development and build.6 Transparency and consistent reporting are the primary apparatus through which customers, regulators and society can peek into AI solutions, make inquiries of developer corporations and hold them to account where AI violates legal and ethical principles.

A practical example of the transparency mechanism is the use of FactSheets:7 “a collection of information about how an AI model was developed and deployed”. In IBM’s Model Asset eXchange (MAX), there is a FactSheet for Mortgage Evaluator, an AI solution that predicts mortgage approval.8

The FactSheet template requires information on training data, inputs and outputs, performance metrics and whether a user will receive an explanation of the model’s decision-making process. As internal and external stakeholders review the FactSheet, the transparency principle is enacted through response formulation, the effort internal stakeholders make to provide accurate responses, and the addressing of any third-party queries. Further, the exercise of completing a FactSheet is itself likely to produce a cultural shift in users and developers, making them more willing to move towards fulfilment of the transparency principle. A properly completed FactSheet can be considered documented evidence that the AI system has satisfied the ethical requirements it set out to embed, whether from the start of its lifecycle or during later iterations.
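As a minimal sketch, a FactSheet can be thought of as a structured record that travels with the model it documents. The field names and values below are illustrative assumptions, not IBM’s published FactSheets schema:

```python
# Illustrative FactSheet record. Field names and values are assumptions
# for demonstration; this is not IBM's official FactSheets schema.
from dataclasses import dataclass


@dataclass
class FactSheet:
    model_name: str
    intended_use: str
    training_data: str                     # provenance of the training dataset
    inputs: list[str]                      # features the model consumes
    outputs: list[str]                     # predictions the model emits
    performance_metrics: dict[str, float]
    explains_decisions: bool               # will users receive an explanation?


mortgage_factsheet = FactSheet(
    model_name="Mortgage Evaluator (illustrative)",
    intended_use="Predict whether a mortgage application will be approved",
    training_data="Public home-mortgage application records",
    inputs=["income", "loan_amount", "property_type"],
    outputs=["approval_probability"],
    performance_metrics={"accuracy": 0.91, "auc": 0.88},
    explains_decisions=True,
)
```

Each field corresponds to a question a reviewer can put to the developer, which is what makes the record auditable rather than merely descriptive.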

The FactSheet for Mortgage Evaluator reveals bias-detection measures prohibiting the use of an applicant’s race, ethnicity and gender in mortgage-related decisions. Application monitoring on fairness, explainability and adversarial robustness metrics, along with worked-out computations, is included in the solution’s policy document. Principles of equality and social inclusion are thus built in and made available for transparent review.
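A hedged sketch of how such measures might look in code: protected attributes are withheld from the model but retained separately for auditing, and group approval rates are compared against the common 80% disparate-impact rule of thumb. The field names and threshold are assumptions for illustration, not terms taken from the FactSheet itself:

```python
# Illustrative bias-detection sketch. Protected attributes are excluded
# from model inputs but kept separately so approval rates can be audited.
# The 0.8 threshold is a common rule of thumb, not a FactSheet requirement.

PROTECTED = {"race", "ethnicity", "gender"}


def strip_protected(application: dict) -> dict:
    """Remove prohibited attributes before the model sees the application."""
    return {k: v for k, v in application.items() if k not in PROTECTED}


def disparate_impact(approvals: list[tuple[str, bool]]) -> float:
    """Ratio of the lowest group approval rate to the highest."""
    by_group: dict[str, list[bool]] = {}
    for group, approved in approvals:
        by_group.setdefault(group, []).append(approved)
    rates = [sum(v) / len(v) for v in by_group.values()]
    return min(rates) / max(rates)


# Flag the model for review if the ratio falls below 0.8.
sample = [("group_a", True), ("group_a", True),
          ("group_b", True), ("group_b", False)]
print(disparate_impact(sample))  # 0.5 -> would be flagged
```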

Claims-verification: explains black-box models and imparts trustworthiness

When faced with an opaque algorithm, the following explanation methods may be used to establish accountability: (i) processes to explain black-box models; (ii) techniques to understand the outcomes of such models; (iii) means and ways to inspect them; and (iv) approaches to design transparent systems. The ethical principle of trust is upheld through a “degree of justification for emitted choices”9 that can be shared with individuals seeking an explanation of – and assurance on – the system’s accuracy and reasonableness. When properly drafted, the explanations generated can foster human trust and lay the groundwork for AI to be held liable for discriminatory or unfair biases revealed to model evaluators or lay assessors.

The Information Commissioner’s Office (ICO) and The Alan Turing Institute10 highlight two subcategories for explaining AI decisions:

  • “Process-based explanations” that cover the design, build and deployment stages of an AI solution and provide details on its governance; and
  • “Outcome-based explanations” that examine a specific decision and expose its detailed workings; a minimal sketch of such an explanation follows below.
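For illustration, an outcome-based explanation for a simple linear scoring model might report each feature’s contribution to one specific decision. The weights, feature names and threshold below are assumptions, and real systems use far richer interpretability techniques:

```python
# Illustrative outcome-based explanation for a linear scoring model:
# report each feature's contribution to one specific decision.
# Weights, features and threshold are assumptions for demonstration.

WEIGHTS = {"income": 0.4, "loan_amount": -0.3, "credit_history": 0.5}


def explain_decision(application: dict[str, float], threshold: float = 0.5) -> str:
    contributions = {f: WEIGHTS[f] * application[f] for f in WEIGHTS}
    score = sum(contributions.values())
    verdict = "approved" if score >= threshold else "declined"
    lines = [f"Decision: {verdict} (score {score:.2f}, threshold {threshold})"]
    # List features from most to least influential on this decision.
    for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        lines.append(f"  {feature}: contributed {value:+.2f}")
    return "\n".join(lines)


print(explain_decision({"income": 1.2, "loan_amount": 0.9, "credit_history": 0.8}))
```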

Articles 13 and 14 of Regulation (EU) 2016/679, the General Data Protection Regulation (GDPR),11 set out the information to be provided to data subjects. These Articles are supported by Recital 60,12 which mandates that data subjects be notified of the processing of their personal data, along with the reasons for such processing; together they establish the GDPR’s principles of fair and transparent processing. Trust is further fostered through Article 22, under which an individual must not be subjected to a decision based solely on automated processing, including profiling, where such processing produces legal effects or similarly significantly affects the individual.13

The voluntary character of ethics principles and their translation into tangible activity call for the implementation of institutional, software and hardware mechanisms, including ‘red teaming’ exercises (where a ‘red team’ attempts to find flaws and vulnerabilities in systems by adopting an attacker’s mindset and practices), AI incident sharing and privacy-preserving machine learning, to name a few.14 These mechanisms provide the foundation for evidencing responsible AI and permit the verification of claims on AI development. If we define verifiable statements as “falsifiable statements for which evidence and arguments can be brought to bear on the likelihood of those claims being true”, then independent third-party auditors could play a vital role in assessing a developer’s claims about fairness, privacy and security. Publicised audit results will lead to verified claims and increased confidence in both the audit process and the AI solution itself.
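A minimal sketch of what claims-verification might look like in practice: an independent auditor recomputes a developer’s published metric on held-out data and checks whether the two values agree. The metric name, values and tolerance are assumptions for demonstration:

```python
# Illustrative third-party verification of a falsifiable claim: the
# auditor recomputes a published metric on independent data and compares
# it with the developer's stated value. Names and numbers are assumptions.

def verify_claim(claimed: float, measured: float, tolerance: float = 0.02) -> bool:
    """A claim is verified if the independent measurement agrees within tolerance."""
    return abs(claimed - measured) <= tolerance


developer_claim = {"metric": "disparate_impact", "value": 0.92}
audit_measurement = 0.88  # recomputed by the auditor on held-out data

if verify_claim(developer_claim["value"], audit_measurement):
    print("Claim verified.")
else:
    print(f"Claim NOT verified: claimed {developer_claim['value']}, "
          f"measured {audit_measurement}.")
```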

AI developers and companies that lead the creation of AI solutions must be held accountable for the conduct and social impact of their AI. Without accountability there can be no recognition or apportionment of liability.  

Data accountability: engineers bias-detection into AI development and promotes fair and unbiased decision-making

Financial decision-making is a key area where AI can perpetuate unfair bias against people of a certain race, place of residence and/or gender. Therefore, the importance of holding AI systems and their developers accountable for automated decisions should be vociferously articulated. 

In September 2020, Canada’s Office of the Superintendent of Financial Institutions (OSFI) highlighted the challenge posed to model risk management by “continuously evolving models and the use of AI in validation”.15 Recognising this gap in accountability, Canada’s Digital Charter Implementation Act (DCIA) has proposed accountability requirements to be achieved through the implementation of data protection law and the constructive dismantling of data traceability into data lineage.

Components of the DCIA’s strategy include data provenance, manner of collection, organisation, treatment, data flows and mapping throughout an organisation’s systems, audit records of interactions, algorithmic implications and ongoing data accuracy. The deployment of such meticulous scrutiny measures – tracking the trajectory of data from its source through to output-creation – results in the ethical principle of fairness being engineered “into the core of applied analytics applications”. This accountability mechanism makes it feasible for AI developers and corporations to be held responsible for the fair use and protection of individuals’ data, as their assurances of fair and unbiased decisions can be evaluated and challenged.
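As a minimal sketch of such a data-lineage audit trail (the record structure below is an assumption in the spirit of the components listed above, not a schema prescribed by the DCIA):

```python
# Illustrative data-lineage audit trail. The record structure is an
# assumption in the spirit of the DCIA components described above,
# not a schema prescribed by the Act.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class LineageEvent:
    dataset: str
    step: str    # e.g. "collected", "cleaned", "scored"
    actor: str   # the system or person performing the step
    detail: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


trail = [
    LineageEvent("loan_apps", "collected", "intake-service",
                 "source: branch submissions"),
    LineageEvent("loan_apps", "cleaned", "etl-job-14",
                 "dropped rows with missing income"),
    LineageEvent("loan_apps", "scored", "mortgage-model-v2",
                 "batch scoring run"),
]

# The trail lets an auditor trace each datum from source to output.
for event in trail:
    print(f"{event.timestamp:%Y-%m-%d %H:%M} | {event.dataset} | "
          f"{event.step} by {event.actor}: {event.detail}")
```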

Without the algorithmic accountability measures set by the OSFI and the DCIA, fairness and absence of bias would remain either unaccomplished or unknown. The data protection and traceability mechanisms set the stage for AI corporations to evidence the unbiased nature of their algorithms and their fair and legitimate data processing, and for third-party evaluation of such claims.

Conclusion

Algorithmic accountability is the quantifiable practice that oversees the incorporation of principled behaviours in AI and their translation into ethical decision-making and outputs. As seen through the examples above, AI ethics is a term best represented through accountability mechanisms, which are essential if a regulator or court of law is to query and enforce punitive action against the creators of AI technologies that violate ethics principles. Algorithmic accountability is not only the critical substrate on which AI ethics rests, but also a practical route through which mankind’s aspirations for ethical AI can be realised.

——

References

[1] McCarthy, J. (1997). What is Artificial Intelligence? Available at: http://www-formal.stanford.edu/jmc/whatisai/whatisai.html

[2] Leslie, D. (2019). Understanding Artificial Intelligence Ethics and Safety: A Guide for the Responsible Design and Implementation of AI Systems in the Public Sector. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3403301

[3] Moss, E., Watkins, E., Singh, R., Elish, M. C., & Metcalf, J. (2021). Assembling Accountability: Algorithmic Impact Assessment for the Public Interest. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3877437

[4] Ada Lovelace Institute, AI Now Institute and Open Government Partnership. (2021). Algorithmic Accountability for the Public Sector. Available at: https://www.opengovpartnership.org/documents/algorithmic-accountability-public-sector/

[5] Donovan, J. M., Caplan, R., Matthews, J. N., & Hanson, L. (2018). Algorithmic Accountability: A Primer. Data & Society Research Institute.

[6] Richards, J., Piorkowski, D., Hind, M., Houde, S., & Mojsilovic, A. (2020). A Methodology for Creating AI FactSheets. arXiv preprint arXiv:2006.13796.

[7] Arnold, M., Piorkowski, D., Reimer, D., Richards, J., Tsay, J., Varshney, K. R., Bellamy, R. K. E., Hind, M., Houde, S., Mehta, S., Mojsilovic, A., Nair, R., Ramamurthy, K. N., & Olteanu, A. (2019). FactSheets: Increasing trust in AI services through supplier’s declarations of conformity. IBM Journal of Research and Development, 63(4/5), 6:1–13. https://doi.org/10.1147/jrd.2019.2942288

[8] AI FactSheets 360. (n.d.). Retrieved December 3, 2021, from https://aifs360.mybluemix.net/examples/hmda

[9] Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining Explanations: An Overview of Interpretability of Machine Learning. 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), pp. 80–89. https://doi.org/10.1109/DSAA.2018.00018

[10] Information Commissioner’s Office (ICO) and The Alan Turing Institute. (2020). Explaining Decisions Made with AI. https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/explaining-decisions-made-with-artificial-intelligence/ (accessed 28 May 2020).

[11] Chapter 3 GDPR. Rights of the Data Subject. Available online: https://gdpr-info.eu/chapter-3/ (accessed on 11 March 2022).

[12] Recital 60 GDPR. Information Obligation. Available online: https://gdpr-info.eu/recitals/no-60/ (accessed on 11 March 2022).

[13] Article 22 GDPR. Automated individual decision-making, including profiling. Available online: https://gdpr-info.eu/art-22-gdpr/ (accessed on 11 March 2022).

[14] Brundage, M., Avin, S., Wang, J., Belfield, H., Krueger, G., Hadfield, G., & Anderljung, M. (2020). Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims, p. 1. arXiv preprint arXiv:2004.07213.

[15] The Institute of Electrical and Electronics Engineers (IEEE). (2021). IEEE Finance Playbook Version 1.0 – Trusted Data and Artificial Intelligence Systems (AIS) for Financial Services, pp. 65–68. https://t.ly/G0mX

——-


Anjum (Anj) Merchant is a solicitor with experience in artificial intelligence (AI) technologies, AI ethics, privacy and data protection. Anj is also pursuing a Master’s degree in Artificial Intelligence, Ethics and Society at the University of Cambridge in collaboration with the Leverhulme Centre for the Future of Intelligence (LCFI).