Investigating Artificial Intelligence: disputes, compliance and explainability

December 2, 2020

An AI system may need to be investigated as part of a dispute, for compliance or to aid explanation. The investigation of any system starts with the definition of the issues to be resolved. This article seeks to assist those charged with defining what may be investigated and illustrates the kinds of issue such an investigation may address.

The Question(s)

The academic genesis of Artificial Intelligence (AI) may be traced back to the 1950s. Its mainstream business application has taken off more recently with the growth of big data and the availability of cheap computing power and large datasets.

Whilst it is natural to focus on the protection of personal data and bias when considering the investigation of AI, these are by no means the only considerations. Product liability, intellectual property rights and ethics also arise. Article 22 of the GDPR contains explicit obligations relating to automated decision-making, adherence to which may need to be demonstrated. The ethical behaviour of self-guided vehicles has been the subject of much debate.1 The first disputes concerning automated decisions have arisen. The regulatory environment is developing,2 as are methods of investigation and explanation. Clear, feasible questions are a good place to start when framing an investigation. A good investigation will address the points at issue in the case efficiently.

Explanation of AI

A concept of fairness has emerged that is based upon the assumption that people are more likely to accept a decision if they understand its rationale. This has influenced the GDPR and much regulation besides.3 There may yet be a need for the audit of AI systems and decisions. A complainant may object to a decision, as in the case of Uber drivers,4 or question an explanation. That case, currently before the Dutch courts, raises questions of whether the decision was based on legitimately held data, whether the drivers committed fraud and whether the decision was automated (amongst others).

The explanation of some AI is relatively straightforward; other AI is obscure in the extreme. Some approaches render the system as a decision tree that may be followed by a careful reader when plied with sufficient caffeine. Others use pixel colour to illustrate clusters in large data sets (e.g. genomes). Further development may be expected.


Current work on AI investigation features two classes of approach that may be used separately or in combination:5

Model-centred – This looks inside the model to understand and explain its inner workings. It may include an examination of the process used to construct and test the model. It works best where the model is relatively simple and a human may comprehend the explanation. Reviewing the process of creation may be appropriate where a dispute concerns fitness for purpose. The product may include a simplified model for the purposes of description.

Data subject-centred – This is a post-hoc approach in which the model is fed data to examine its treatment of individual data subjects or decisions. This is suitable for more complex and less intuitive data (e.g. video, image) that cannot otherwise easily be understood. It can be used to investigate discrimination and individual decisions.
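The data subject-centred approach can be illustrated with a minimal sketch: an opaque model is exposed only through a scoring function, and the investigator varies one attribute of a data subject while holding the rest constant, observing where the decision flips. The model, its weights and the feature names below are hypothetical stand-ins, not any real system.

```python
# A minimal sketch of a post-hoc, data subject-centred probe.
# predict() stands in for an opaque model under investigation.

def predict(applicant: dict) -> str:
    """Hypothetical opaque scoring model: approve above a threshold."""
    score = 0.4 * applicant["income"] / 1000 + 3 * applicant["years_employed"]
    return "approve" if score >= 20 else "decline"

def probe(applicant: dict, feature: str, values) -> dict:
    """Vary one feature, hold the rest constant, record each decision."""
    results = {}
    for v in values:
        variant = dict(applicant, **{feature: v})  # copy with one change
        results[v] = predict(variant)
    return results

subject = {"income": 25000, "years_employed": 2}
outcomes = probe(subject, "years_employed", [0, 2, 4, 6])
# The decision flips between 2 and 4 years, locating a decision boundary
# for this individual without any access to the model's internals.
```

Probes of this kind underpin counterfactual explanations: they answer "what would have had to differ for the decision to change?" for a specific data subject.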

This article follows, at a high level, the flow of the process of creating an AI model and system. It is intended to aid the reader in establishing the aims, scope and method to adopt in their investigation, and what they may reasonably expect it to reveal. Not all elements covered here will be required in every case. It concentrates on machine learning as a commonly deployed subset of AI, recognising that other methods would be appropriate for other classes, such as knowledge representation and reasoning.

Requirements and Strategy

Any corporate initiative should be preceded by the definition of the objective and selection of an approach to achieve it. Should the question being investigated incorporate elements of fitness for purpose, the requirements and the feasibility of the strategy will be of interest.6 

The realisation of the strategy may require the development of competence accessible to the organisation. Actions could involve the allocation of existing staff, their development, the hiring of contractors and drawing upon suppliers, all of which require funding and time. Planned and actual resourcing and capability may become an issue, as may the feasibility of the proposed business model, the rate of progress and expectations of return.

Leading writers emphasise the importance of an organisation’s governance and control of its AI initiatives.7 Experimentation has risks and should deliver learning; it needs to be directed and controlled. Some applications of AI have reputational implications for the organisation that may motivate investigation and thus fall within the scope of interest.

The incorporation of AI within a business is often achieved through a series of modest steps: it does not have to be a major undertaking. An organisation must determine what to implement first, how to do so and how rapidly. Each step may both deliver immediate benefit and develop competence. If the business process concerned is already in place, the approach to transition from the old process to the new must maintain customer service throughout. Major changes have their own demands, for which well-developed frameworks are available.

Obtaining Data

If the AI algorithm is the engine, the fuel is data. AI can demand astonishing quantities.

There is currently much concern in the press about the ethical application of IT, with the informed consent of data subjects featuring widely. The investigator may be required to examine the data used, its treatment and its flow. Data residence, security, architecture, compliance and other issues may arise. Where personal or sensitive data are concerned, a data protection impact assessment (DPIA) may be required; it is probably wise even where the jurisdiction does not require it. The implementation may or may not be consistent with such an assessment, and conditions may change over time.

The availability of data affects what can be achieved with AI. The choice of algorithm and the reliability of findings will be influenced by the availability and quality of the data. It is common to find that data quality is poor and that remedial work on source systems and business processes is desirable to improve it. The data may need to be cleansed before use. The systems architecture and data flows can have a marked effect on the speed of processing and system performance.

AI does not require perfect data. One of its greatest strengths is the ability to plug gaps, for example in recommender systems. In such a case, the algorithm selected must be appropriate to the data and the business issue being addressed. Data quality is likely to affect system performance.
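The gap-plugging point can be made concrete with a toy sketch in the spirit of a recommender system: a missing rating is estimated by blending the user's average with the item's average. The ratings matrix and the simple mean-blend rule are illustrative assumptions, not a production technique.

```python
# A minimal sketch of gap-filling, recommender-style.
# None marks a missing rating; the data are hypothetical.
ratings = {
    "alice": {"film_a": 5, "film_b": 3, "film_c": None},
    "bob":   {"film_a": 4, "film_b": 2, "film_c": 2},
    "carol": {"film_a": 5, "film_b": 4, "film_c": 4},
}

def estimate(user: str, item: str) -> float:
    """Estimate a missing rating from the user's and the item's means."""
    user_scores = [v for v in ratings[user].values() if v is not None]
    item_scores = [r[item] for r in ratings.values() if r[item] is not None]
    user_mean = sum(user_scores) / len(user_scores)
    item_mean = sum(item_scores) / len(item_scores)
    return (user_mean + item_mean) / 2  # simple blend of the two means

filled = estimate("alice", "film_c")  # a plausible value for the gap
```

Real recommender systems use far richer methods (e.g. matrix factorisation), but the principle is the same: the algorithm must suit the data and the business issue, and imperfect data degrades rather than prevents operation.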

Once obtained, the data set is commonly segmented into training, validation and test sets. There is good practice and bad in data management for AI that can affect the quality of results. The data will commonly need to be prepared for processing: expect to hear of normalisation, dimension reduction and other statistical treatment designed to aid later processing. Some actions are quick, some are easy, some are productive. This is a prime area for iterative experimentation. Sourcing substantially more data can be very time-consuming and expensive. The investigator may have access to project information that indicates whether this was done well. Both the originating team and the investigator will seek to analyse the data to work out what can and cannot be done with it.
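The preparation steps just described can be sketched in a few lines: min-max normalisation followed by a shuffled split into training, validation and test sets. The 60/20/20 proportions, the fixed seed and the toy data are illustrative assumptions; real projects choose these to suit the data volume and the model.

```python
# A sketch of common data preparation: normalisation and a
# train/validation/test split with a fixed seed for reproducibility.
import random

def min_max_normalise(values):
    """Rescale values linearly onto the range [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def split(rows, train=0.6, validation=0.2, seed=42):
    """Shuffle, then cut into train, validation and test partitions."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n = len(rows)
    a, b = int(n * train), int(n * (train + validation))
    return rows[:a], rows[a:b], rows[b:]

data = list(range(100))
train_set, val_set, test_set = split(min_max_normalise(data))
```

An investigator reviewing such code would check that the test set was genuinely held out (never used for training or tuning) and that the seed and proportions were recorded, since both affect the reproducibility of reported results.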


Bias may be defined as a misalignment between the collected data and the population of interest. Instances are widely cited of facial recognition trained on the faces of a prevalent ethnic group failing to identify those of a minority. These failures are commonly associated with bias in the training data. Some may appear egregious after implementation, yet bias can be remarkably hard to identify at the time of creation. The latest advice is to convene a diverse group to identify potential sources and then to search for them systematically, both in the selection of data sources and in post-hoc analysis of decisions. “Error analysis” is a technique used in the development of AI models for this purpose. This is a critical area for organisational learning. If an organisation is open to the possibility that it has treated a particular minority badly, it may react with sincere thanks and a commitment to restitution when the problem is discovered. It helps if the discovery is made before the damage is too great.8 This is an instance where risk cannot be avoided, so it had best be managed and governed. Should a dispute follow, the “reasonableness” standard applied by a court may differ from that applied on social media. Both can be expensive.

Bias may have roots in optimisation objectives, data, or both. Algorithms are inherently amoral. The summer 2020 A-level grading issue may be characterised as the awards being optimised for consistency and integrity. That approach lost the political argument, which highlighted a minority of pupils in schools whose performance had greatly improved in ways not reflected in their history; those pupils were claimed to have been treated unfairly. Mathematics and fairness do not always understand each other. The examination of the treatment of clusters of cases is used to detect bias in decisions.
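The examination of clusters of cases can be sketched simply: group logged decisions, compare each group's error rate against ground truth, and flag disparities. The decision records below are hypothetical; a real investigation would use the system's decision logs and verified outcomes, and apply statistical tests before drawing conclusions.

```python
# A sketch of post-hoc bias detection: compare error rates across
# groups of decisions. Records are (group, predicted, actual) triples.
from collections import defaultdict

def error_rates_by_group(records):
    """Return the fraction of wrong decisions for each group."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

decisions = [
    ("A", "approve", "approve"), ("A", "decline", "decline"),
    ("A", "approve", "approve"), ("A", "decline", "approve"),
    ("B", "decline", "approve"), ("B", "decline", "approve"),
    ("B", "approve", "approve"), ("B", "decline", "decline"),
]
rates = error_rates_by_group(decisions)
# Group B suffers twice the error rate of group A in this toy data -
# the kind of disparity that would prompt further investigation.
```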

AI does not follow human rules, which sometimes produces unexpected results. One researcher found that a system asked to identify images containing horses weighted a copyright marker more heavily than the image itself.9 Another system, used to discriminate between huskies and wolves, relied on the presence of snow in the images rather than finer canine points.10 An investigator would be able to detect such inappropriate results through error analysis as part of the development process.

The Maths

AI is mathematically based. The business problem must at some stage be expressed as a function with an optimisation objective. In some cases the formulation runs to pages of Greek; in others it is made through pull-down selections in a toolset or by writing code. Where no such formulation can be made, the problem is infeasible for AI, and managerial disappointment is likely to follow.
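What "expressed as a function with an optimisation objective" means can be shown in miniature: a business question ("what does size imply for price?") becomes a loss function, minimised by gradient descent. The data points, learning rate and iteration count are illustrative assumptions.

```python
# A sketch of formulating a problem as an optimisation objective:
# fit price = w * size by minimising mean squared error.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # toy (size, price) pairs

def loss(w: float) -> float:
    """Mean squared error of the prediction w * size."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0
for _ in range(200):
    # Gradient of the loss with respect to w, stepped against.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad
# w converges to roughly 2: the slope that best explains the data.
```

Every machine-learning system contains some version of this loop, however elaborate; an investigator asking "what, precisely, was being optimised?" is asking for the real-world equivalent of `loss` above.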

If the maths contains an error, everything that follows will be unreliable. Most AI maths is a variation on relatively few themes. The capability of a delivery team is enhanced if at least one member understands the maths, which is at the level expected of an engineering or similar graduate. The maths is then deployed in computer code that should also be understood. Code libraries are normally relied upon for elements and are not commonly inspected internally. The quality of the maths and of its computer implementation may be examined if the performance or reliability of the system is in question.

Seemingly small changes can have a disproportionate effect. In developing a system, the author experimented with two mathematical expressions of the same function, one vectorised, the other not. The vectorised form accelerated learning by a factor estimated to be in the hundreds.11 Such differences are likely to affect the operational suitability of applications.
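The vectorisation contrast can be sketched as follows: the same dot product (the core of most learning steps) written as an element-by-element Python loop and as a single vectorised library call. This assumes NumPy is available; the two forms give the same answer, but the vectorised call runs in optimised native code, which is where the large speedups observed in practice come from.

```python
# A sketch of the same computation in looped and vectorised form.
import numpy as np

def dot_looped(w, x):
    """One interpreted Python operation per element."""
    total = 0.0
    for i in range(len(w)):
        total += w[i] * x[i]
    return total

def dot_vectorised(w, x):
    """A single call into optimised native code."""
    return float(np.dot(w, x))

rng = np.random.default_rng(0)
w, x = rng.random(10_000), rng.random(10_000)
# Identical result, radically different cost at scale.
assert abs(dot_looped(w, x) - dot_vectorised(w, x)) < 1e-6
```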

AI Architecture

Some AI problems are decomposed into a series of steps before each is addressed: for example, identifying the areas of text within an image before the text itself is captured. Data may need to be obtained through integration with other systems. The architecture may be reviewed much as that of any other IT system, principally through an examination of design documents.

The selection of algorithm is a particularly important step; the application of a method that is inappropriate to the business need or the data is a major impediment. Well-developed libraries are available to draw upon, together with development languages and frameworks. Some propose prototyping in one environment before taking the lessons regarding an effective approach to another for implementation in production at scale. Leading approaches increasingly emphasise the importance of well-established software engineering practices in AI design and development. Such techniques improve the integrity and speed of development by assisting the developer in the re-use of established, tested code. They also accelerate the production of documentation, an area that is often neglected, impeding later maintenance.

AI can place extraordinary demands upon computing and network power. Depending on the demands upon the system, there can be considerations of engineering quality, consistency of decision (sensitivity), design integrity and coding quality. The delivery team may mention hyper-parameter tuning, or learning curves and convergence if they are being kind. The volumes of data can be material, as can their rate of change. The system may need careful design to accommodate the movement of data between the components of the system whilst keeping up with real-time demands. Well-framed design testing will identify performance bottlenecks early in development. A well-designed system will address both functional aspects (predictive accuracy, descriptive accuracy) and non-functional ones (data security, processing time, network capacity and others). Legal requirements such as data residence can influence AI architecture.


Implementation

Implementation involves technical and operational steps. The technical may include systems integration, data loading, training, validation, testing, error analysis, sensitivity analysis and data set management. The operational may include deployment to support staff, to operators and to external users. An organisation that has been used to managing large numbers of low-cost staff (e.g. contact centres) may face a high degree of change when implementing automation and AI. This should be factored into the time allowed. There are few freshly minted data science PhDs available from the leading universities, and they command high salaries. Some existing staff may be found to have the intellect, base education and attitude to build new capabilities within the organisation. Their development will take time, and their retention may demand further change.


To answer a question meaningfully, one must first frame it. The legal team considering how to plead an unhappy AI application will face the confusion of fact and law seen in other cases, together with rapidly developing practice. The first significant AI cases are starting to wend their way through the courts. As AI application becomes more widespread, the incidence of those unhappy with its performance is likely to increase. It is hoped that this brief review of what may have occurred can be used to frame instructions for investigation and thus focus on the essentials of the case. It is rare in a troubled implementation that there is a single issue; rather, several interact. The art of the expert is to bring an appropriate depth and breadth of analysis to clarify the issues for the court.


William Hooper acts as an expert witness in IT and outsourcing disputes and as a consultant in service delivery. He is a member of the Society for Computers and Law and a director of Oareborough Consulting. He may be reached on +44 7909 958274.


Notes & sources

1. Thornton, Pan, Erlien, Gerdes. Incorporating Ethical Considerations into Automated Vehicle Control. IEEE Transactions on Intelligent Transportation Systems Vol 18 No 6 June 2017. 
2. The European Parliament is considering further regulation.
3. Heike Felzmann, Eduard Fosch Villaronga, Christoph Lutz and Aurelia Tamò-Larrieux, “Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns”, Big Data and Society, Vol 6 Issue 1, 27 June 2019.
5. Murdoch, W. James; Singh, Chandan; Kumbier, Karl; Abbasi-Asl, Reza; Yu, Bin (2019-01-14). “Interpretable machine learning: definitions, methods, and applications”. Proceedings of the National Academy of Sciences of the United States of America. 116 (44): 22071–22080. arXiv:1901.04592
Edwards, L; Veale, M, “Slave to the Algorithm? Why a right to an explanation is probably not the remedy you are looking for”, Duke Law & Technology Review, Vol 16 No 1.
7. “Expanding AI’s Impact With Organizational Learning”, MIT Sloan Management Review, October 2020.