Ensuring that responsible humans make good AI

February 3, 2021

We are seeing accelerating expansion in the range and capabilities of machine aids for human decision making and of products and services embodying artificial intelligence and machine learning (AI/ML). 

AI/ML is already delivering significant societal benefits including improvements in convenience and quality of life, productivity, efficiency, environmental monitoring and management, and capacity to develop new and innovative products and services.

The common feature of ‘automation applications’ – the term we’ll use in this article to cover the diverse products and services that automate decision making, whether or not they deploy ‘true’ AI or ML – is the combination of data, algorithms and algorithmic methods, code and humans to derive insights and other outputs.

Humans are, or should be, the heart of automation applications: the heart in the Tin Man. 

Humans decide which data sets are discoverable, linked and analysed. 

Humans determine whether data, algorithms, algorithmic methods, code and people are reliable and safe enough to produce appropriate outputs that lead to appropriate outcomes for humans and the environment. Humans decide what data is brought together, and whether to deploy technical, operational and legal controls, safeguards and guard rails. Humans, directly or indirectly, consciously or otherwise, determine what is appropriate. Humans determine whether to care, or not care, and to act, or not act.

Automation applications will only be safe, reliable and socially beneficial if the decisions that humans make are properly informed and responsible. Entities that design and entities that deploy automation applications need to implement good governance, to ensure that the right humans with the right skills and experience are brought together and empowered to make or influence the right (properly informed and responsible) decisions at the right time about design, deployment and context of use of those applications.

Diversity matters. Often, the wisdom to make good decisions will not be found in a single human head. Each individual involved in governance of automation applications brings a different perspective. Diverse teams can discern the evolving and fluid social constructs to apply in making decisions about whether and how automation applications are designed, deployed and used.

Context matters. Humans determine whether, how and in which particular contexts an automation application is used. Automation applications will be safe, reliable and socially beneficial only if appropriately used within the range of contexts for which they were designed. Humans who actually care about ethical, fair and socially responsible outcomes should work out what ‘safe outputs’ and ‘safe outcomes’ are. If these humans are not the same humans who make use of automation applications, they need to exercise responsibility in ensuring that users, and overseers of users, know which uses are safe, and which are not.

Automation applications cannot be made safe for every potential use. There are reasonable constraints on how safe an application can be made to be. These constraints include cost, specialization as to intended ‘safe’ use, and the reasonably anticipated knowledge and other characteristics of users of those applications. Surgical scalpels need to be sharp, but need not have safety features to be ready and safe for use by skilled surgeons. Our toasters don’t need to carry big labels telling us not to insert knives. We know that Ferraris are not suitable for bush-bashing, but we can happily drive a Jeep on a racetrack. Tort law, including the judge-made law of negligence, and product safety statutes have developed to address the nuances of expected uses and users of particular products. However, automation applications do create some new challenges, partly because they are new and therefore present unfamiliar risks or carry risks of harms that are not readily foreseen. Often, we humans are not properly informed about, and cannot work out, the capabilities and limitations of particular automation applications. Appropriate deployment and use of automation applications requires good humans to carefully consider the range of capabilities of possible users, and the diversity of possible uses.

Words may matter, but only if the right people read and take heed of them at the right time. Words readily fail to matter when incentives outweigh social responsibility. Over the last few years many of us have spent a lot of time, and expended many words in many languages, refining ethical principles and frameworks for AI/ML. Socially responsible humans now have plenty of statements of principles, checklists and frameworks to structure their thinking about development and deployment of automation applications. Sensible statements and frameworks can structure thinking, enabling the right humans, empowered by the right entities, to bring the right thinking to the right table at the right time. Structured thinking reduces the risk that important factors are overlooked. Something as simple as the right checklist in the right hands can improve surgical outcomes, as Atul Gawande explains in The Checklist Manifesto. Oversight and review mechanisms can provide assurance that structured thinking reliably happens, or that an alert is raised whenever it does not, and that when (as often will happen) something unexpected occurs during testing or deployment, any adverse effects are promptly addressed.
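By way of illustration only: a checklist becomes verifiable when it is machine-readable, so that a skipped item raises an alert instead of passing silently. The minimal Python sketch below assumes hypothetical checklist questions, role names and an application name; it is a sketch of the idea, not a prescribed implementation.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChecklistItem:
    question: str            # the structured prompt a reviewer must address
    owner: str               # role responsible for answering it
    completed: bool = False

@dataclass
class PreDeploymentReview:
    application: str
    items: list[ChecklistItem] = field(default_factory=list)
    signed_off_on: date | None = None

    def outstanding(self) -> list[ChecklistItem]:
        return [i for i in self.items if not i.completed]

    def sign_off(self) -> None:
        # Raise an alert rather than silently deploying past skipped items.
        missing = self.outstanding()
        if missing:
            raise RuntimeError(f"{len(missing)} checklist item(s) incomplete")
        self.signed_off_on = date.today()

review = PreDeploymentReview(
    application="loan-triage-model-v2",   # hypothetical application name
    items=[
        ChecklistItem("Is the training data fit for this context of use?", "data steward"),
        ChecklistItem("Have foreseeable misuses been assessed?", "risk officer"),
        ChecklistItem("Are escalation paths for adverse outputs defined?", "product owner"),
    ],
)
review.items[0].completed = True
try:
    review.sign_off()
except RuntimeError as alert:
    print(alert)   # "2 checklist item(s) incomplete" -- the alert fires
```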

Formal governance ensures that relevant entities – and their regulators – know where ‘the buck should stop’, with and within each entity involved in the supply and deployment chain for automation applications and associated data ecosystems. 

And often, legal requirements for good governance within entities and their management of data ecosystems are the best defence against unsafe deployment and use of automation applications. The law cannot anticipate and directly address by legislated prohibition the full range of unsafe uses of automation applications. However, the law can anticipate and address what good humans should do to identify and mitigate reasonably foreseeable risks, and to manage residual risks, in the use of automation applications. We already have examples we can draw on from other analogous, multifaceted problems. Consider environmental impact assessment, which covers air, water, noise, impact on other living organisms, visual amenity, and externalities including impact on transport infrastructure. Environment protection laws require developers of major projects to conduct structured, multifaceted environmental impact assessments. Laws requiring the conduct and publication of environmental impact assessments provide a good analogy for development of laws governing automation applications. We don’t expect developers of industrial estates merely to comply with statements of ethical principles as to good environmental practice. What is different about automation applications? Yes, we still have to settle the threshold at which a multifaceted impact assessment of an automation application must be conducted, and settle who should do what (as between the developer, the deployer and the user). These are important matters of detail for us to sort out: not reasons to conclude the task is too hard.

Responsible governance has never been more important. However, this relatively new usage of the word ‘governance’ has already acquired negative connotations in some business sectors. Some executives see the creation of new legal requirements for formal governance of development and deployment of automation applications as an exercise of form over substance. Those executives query why it is not enough to simply empower developers to think sensibly about the possible uses and misuses of their outputs. Many executives see governance as a regulatory compliance function: a cost of doing business, staffed by people who are a cost centre, not revenue enablers. Some executives even view governance personnel as content to be a cost centre, predisposed to say ‘no’ and unconcerned by the friction and delay they create in finalizing time-critical business decisions.

Good governance of development and deployment of automation applications should be none of these things. 

Governance professionals need to help other executives and stakeholders designing, developing and deploying automation applications to understand what good governance looks like, as implemented across a diversity of data ecosystems and analytics environments, and by diverse business and government entities.

Governance needs to move from buzzword, and from tick-the-box compliance checklist, to reliable and verifiable everyday practice for product design and development across entities big and small.

Good governance requires consideration not only of who makes the particular decisions that should be made, but also of oversight, accountability and allocation of responsibility to individuals, to ensure that such decisions are well informed and reliable. It requires plaudits when things are objectively and demonstrably assessed to go right, and people who care about consequences when things go wrong. Good governance therefore requires consideration of how entities are structured and how humans work together at both the project level and the entity level. It also requires appropriate internal and external monitoring, oversight and review.

Good governance needs to be closely aligned and compatible with entity and project risk management methodologies and processes as may already be in use by businesses, such as the ‘three lines of defence’ entity risk governance model as promoted by the Chartered Institute of Internal Auditors. If data and analytics governance is not integrated into everyday practice for product design and development, it will not reliably assure good outcomes.
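As a sketch of how that integration might look in practice (the role names and risk wordings here are hypothetical assumptions, not prescriptions), a risk register aligned to a three-lines model can record a named owner at each line, so that gaps in accountability are detectable before, rather than after, something goes wrong:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One risk-register row mapped to the 'three lines of defence'."""
    risk: str
    first_line: str    # operational management: owns and manages the risk
    second_line: str   # risk/compliance function: oversees and challenges
    third_line: str    # internal audit: provides independent assurance

def unassigned(register: list[RiskEntry]) -> list[str]:
    # Flag entries where any line of defence lacks a named owner.
    return [r.risk for r in register
            if not all((r.first_line, r.second_line, r.third_line))]

register = [
    RiskEntry(
        risk="model reused outside its designed context",
        first_line="ML product team",          # hypothetical role names
        second_line="data governance office",
        third_line="internal audit",
    ),
    RiskEntry(
        risk="training data drifts from deployment population",
        first_line="data engineering team",
        second_line="",                        # no reviewer assigned yet
        third_line="internal audit",
    ),
]
print(unassigned(register))  # -> ['training data drifts from deployment population']
```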

Assessment of the benefits and risks of a particular automation application needs to take account of the range and possible magnitude of harms to humans, entities or the environment that might be occasioned by inappropriate use of it. An outcome of good governance should be minimization of the risks and harms that can reasonably be anticipated. The range and magnitude of risks and harms is highly specific to the context of deployment and use. This is one reason why good governance of automation applications is a nascent field, where good practice is not yet well developed or subject to broadly accepted industry standards.

Good governance is required regardless of whether the input is personal data about identifiable individuals, proprietary or business data or data about the living or physical environment.

Good governance is required regardless of whether the outputs or outcomes are controlled by businesses, government agencies, not-for-profits and other civil society organizations, law enforcement agencies or intelligence organizations.

It is a truth universally acknowledged that poor data undermines the reliability of automation applications. It is increasingly recognized that code embodying algorithms may also be discriminatory or produce otherwise illegal or unreliable outputs. Much has been written in recent years about each of these problems and how to assess whether they will arise. Surprisingly little has been written about how to embed governance within data analytics environments and the entities that operate them, how to ensure that the full range of concerns is anticipated and addressed, and how to responsibly manage any residual risks that remain after appropriate mitigation.

Attention has focused upon the what and the when, but not the how.

Good governance of automation applications should become an enduring source of differentiation and competitive advantage for entities ready to demonstrate that it has been systematically embedded and reliably adopted. Viewed through the long-term lens of sustainable business value, good governance usually makes good business sense. Many boardrooms and C-suites understand this already. However, many businesses are not yet sure how to make good data and analytics governance real.

How do we ensure good governance of automation applications?

Data standardization does not of itself assure provenance, reliability and quality of data as an input for automation applications. Processes for assessment of ‘data quality’ are more mature than processes for assessment of data analytics environments. However, good practice for assurance of data quality remains largely focused upon data discoverability and readiness for ingestion, not the suitability of data for use to create outputs that effect particular outcomes.
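One way to picture that gap: a dataset can pass conventional ingestion-readiness checks yet remain unsuitable for a particular context of use. The sketch below, in Python with pandas, contrasts the two kinds of check; the column names, thresholds and the notion of ‘deployment groups’ are illustrative assumptions rather than any accepted standard.

```python
import pandas as pd

def ingestion_ready(df: pd.DataFrame) -> bool:
    # Conventional 'data quality': completeness and uniqueness checks
    # oriented to discoverability and readiness for ingestion.
    return df.notna().mean().min() > 0.95 and not df["record_id"].duplicated().any()

def fit_for_purpose(df: pd.DataFrame, group_col: str,
                    deployment_groups: set[str],
                    min_rows_per_group: int = 500) -> list[str]:
    # The further question: does the data adequately cover the contexts
    # in which the application will actually be used? Threshold is illustrative.
    counts = df[group_col].value_counts()
    return [f"under-represented: {g!r} ({int(counts.get(g, 0))} rows)"
            for g in sorted(deployment_groups)
            if counts.get(g, 0) < min_rows_per_group]

# A dataset can pass ingestion checks yet still be unsuitable for a
# deployment context it barely covers.
df = pd.DataFrame({
    "record_id": range(2040),
    "region": ["metro"] * 2000 + ["regional"] * 40,
    "outcome": [1, 0] * 1020,
})
assert ingestion_ready(df)
print(fit_for_purpose(df, "region", {"metro", "regional"}))
# -> ["under-represented: 'regional' (40 rows)"]
```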

It is often hard to know what is going on. Volumes have been written about the problems of explainability of ‘black box’ ML applications. However, for anyone trying to determine industry best practice there is a bigger problem. Many data analytics methods and processes are, in and of themselves, sensitive business information that can only be legally protected by retaining their character as trade secrets (confidential information). Businesses quite reasonably don’t want to expose their proprietary processes and algorithms to their competitors. Exposure of how particular algorithms work may also enable them to be gamed to the disadvantage of the business. As a result, there is likely to continue to be limited transparency as to developing good industry practice in practical operational data science.

As to outputs and outcomes, there is continuing debate about the appropriate range of practical, operational controls, safeguards and guard rails to ensure that outputs from automation applications will be safe, reliable and socially beneficial for use in particular contexts.

Whenever entities are expected to meet legal obligations or evolving standards of good behavior, four organizational requirements need to be addressed.

First, senior management and boards should understand what is expected of the entity and accordingly of their management and oversight, particularly around strategy and risks.

Second, there must be designation and empowerment of appropriately skilled individuals who can give effect to those obligations and standards. Responsibility, skills, incentives, sanctions, escalation criteria and reporting lines must be properly aligned. Controls, safeguards, guard rails and properly approved processes and procedures must be reliably embedded within an entity. Professed accountability of an entity, without corresponding allocations of responsibility to specified individuals, often will not lead to real accountability at all. If consequences are assessed by decision makers as an externality for themselves, or for the entity, the likelihood of an irresponsible decision substantially increases.

Third, those individuals who are given responsibilities for identifying and mitigating AI/ML risks should be empowered with methodologies and tools that enable them to reliably and verifiably fulfill those responsibilities.

Fourth, data and analytics governance must be properly integrated into everyday business practices and project management methodologies.

These organizational requirements apply to all entities, regardless of industry sector, size or level of maturity in information technology. The ways of giving effect to these organizational requirements must, however, differ entity by entity.

Entities electing to develop or use automation applications have a wide range of capabilities and available resources. A start-up will need to implement these requirements in a different way than a large organization that has highly systematized business and project processes. Even large organizations will need to evolve. Many do not have high levels of maturity in functions and processes for data and analytics, particularly (but not only) in relation to non-conventional sources of data and data that is not clearly regulated personal information.

Implementation of good governance of automation applications often will require up-skilling of particular individuals within the entity, new allocations of responsibilities and reporting lines and other changes to technology and entity governance. Just as evolving standards for privacy by design and security by design have changed requirements for information governance, implementation of good governance of automation applications requires understanding by entities of the need for effective change management.

Good governance of automation applications should seldom be a matter for specialists alone. Often good governance can only be practical if the organizational requirements are addressed through multiple levels, roles and functions within an entity.

The pace of development, implementation and modification of automation applications is fast. Governance frameworks to ensure reliably good automation applications, and tools and methodologies for assessing and mitigating risks of harms, are still developing. Each entity needs to ensure that its organizational arrangements catch up and keep pace with the entity’s AI/ML capabilities and its changing risk profile.

There are emerging opportunities for entities to differentiate themselves from competitors through good governance. Observe how some digital platform giants are now advertising their commitment to privacy-protective control of the ecosystems of consumer data that they curate or enable.

With new legal requirements and rising stakeholder expectations about transparency in design and use of automation applications, it will become easier to identify the organizations that are laggards, recalcitrant in their governance practices. Some organizations may only react to public relations crises, simply address existing legal obligations, or otherwise resist change. These organizations should, and will, be left behind.

Which entities will shine? And how quickly will there be naming and shaming of those entities that remain parked under a cloud? Time, and comparative market value of entities, will tell.


Peter Leonard, Principal, Data Synergies and Professor of Practice (IT Systems and Management and Business Law), UNSW Business School

*This article has been adapted from a piece first published by the International Association of Privacy Professionals in Dec 2020