AI Ethics principles are crucial and now developers need to walk the talk

August 12, 2020

The initial rush to produce AI-powered applications has generated amazing automated systems that leverage the latest in Machine Learning (ML) and Natural Language Processing (NLP). The downside, as we all know, is that an inadvertent undercurrent of embedded biases and discriminatory outcomes is now, disturbingly, coming to the surface.

In short, the ambitious aims of AI for Social Good are inexorably being tainted by systems that might contain hidden deficiencies and carry out deplorable acts of adverse algorithmic decision-making.

Countering this is the new trend of AI Ethics, which calls for AI developers and those deploying AI applications to understand how the AI is designed, built, tested, and deployed, doing so by embracing ethical, regulatory, and policy covenants of a socially mindful nature.

As I have written elsewhere, AI Ethics principles need to get to the top of the list for everyone involved in AI, as well as for those impacted by AI applications. All stakeholders must be aware of AI Ethics so that both the suppliers of AI systems and those using them ensure that ethical principles and practices are fully interleaved into their construction and delivery.

For example, a recent study by the OECD entitled “Examples of AI National Policies” reviewed the AI principles that have been crafted by each of the G20 nations. By and large, the stated principles are considered guidelines and recommendations, rather than serving as definitive laws or regulations.

In the United States, last year a presidential executive order established an effort known as the American AI Initiative and embarked upon identifying and promulgating a national strategy on AI. A subsequent Year One annual report in 2020 indicated how the federal government is seeking to invest heavily in AI R&D, along with stipulating regulatory guidance intended to remove barriers to AI innovation and embracing trustworthy AI for governmental services and missions.

The importance of the topic is further demonstrated by the launch of a new journal called AI and Ethics, co-led by Editors-in-Chief John MacIntyre, Pro Vice Chancellor at the University of Sunderland, and Larry Medsker, Research Professor at The George Washington University. A key focus will be on the moral-ethical questions and concerns that arise from basic research and the design and use of practical applications of AI, along with the need for greater public understanding of AI, including the benefits and risks AI developments may bring.

A recent research paper in Nature has tried to make some sense of these diverse efforts. It examined over eighty sets of AI Ethics principles issued by a myriad of national and international public and private sector entities and identified the most common factors underlying AI Ethics tenets (listed from most frequently cited to least):

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity  

Plainly an arduous task is ahead to ensure that these AI Ethics principles are adopted in tangible and measurable ways. The talk must become the walk, rather than a set of empty promises that are ultimately unrealised, leaving us with damaging AI applications.

So what can be done? I suggest the following need to happen to make AI Ethics effective:

  • Establish appropriate legal regulations. One approach to giving teeth to AI Ethics principles entails enacting the stated guidelines into sensible regulations that have the force of law. This has to be done mindfully so that there is no inadvertent regulatory overreach that stymies the growth of beneficial AI-powered apps and unintentionally thwarts innovation.
  • Bona fide lawsuits over AI Ethics observance. AI-enabled apps that fail to abide by AI Ethics guidelines are likely to falter and ultimately cause harm or injury to end-users. This in turn will undoubtedly spark lawsuits, and if those prevail, they will serve as a bellwether that spurs AI app makers to be more observant of AI Ethics principles. A worrisome aspect will be the possibility of nuisance lawsuits that drain AI startups needlessly and are not brought over bona fide concerns about AI Ethics assurance.
  • Update methodologies for AI development. Many existing AI development methodologies that provide step-by-step system building and testing instructions are woefully lacking in embracing AI Ethics principles. By adding AI Ethics awareness into these methodologies, AI developers will be more likely to infuse aspects such as transparency, fairness, privacy, and the like into their budding apps (a sketch of one such automated fairness check appears after this list).
  • Showcase real-world AI Ethics examples and case studies. A valuable way to close the gap between abstract guidelines and real-world implementation involves showcasing fruitful examples and case studies that provide tangible tips and insights. Doing so will herald those that have adopted AI Ethics, while presumably also illustrating disastrous results that violated AI Ethics principles.
  • Availability of automated tools for AI Ethics inclusion. AI building tools that aid in AI Ethics adherence are expected to gradually come into the marketplace. As those tools emerge, it will be crucial that companies developing AI apps opt to obtain them and put those capabilities into active use. It is important to keep in mind that such tools are not a silver bullet and will still require careful adoption by AI software developers.
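
To make the methodology and tooling points concrete, below is a minimal, illustrative sketch of the kind of automated fairness check that a development methodology could fold into its testing step. It is written in plain Python; the function name, the sample loan-approval data, and the 0.2 threshold are all hypothetical choices for illustration, and the demographic parity gap shown here is just one of many possible fairness measures, not a definitive standard.

    # Illustrative sketch only: a simple demographic parity check that a
    # development methodology could fold into its automated testing step.
    # The function name, sample data, and threshold are hypothetical.

    def demographic_parity_gap(predictions, groups):
        """Return the largest difference in favourable-outcome rates across groups.

        predictions: iterable of 0/1 model decisions (1 = favourable outcome)
        groups: iterable of group labels, aligned with predictions
        """
        counts = {}
        for pred, group in zip(predictions, groups):
            favourable, total = counts.get(group, (0, 0))
            counts[group] = (favourable + pred, total + 1)
        rates = [favourable / total for favourable, total in counts.values()]
        return max(rates) - min(rates)

    if __name__ == "__main__":
        # Hypothetical loan-approval decisions for two applicant groups.
        preds  = [1, 1, 0, 1, 0, 0, 1, 0]
        groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

        gap = demographic_parity_gap(preds, groups)
        print(f"Demographic parity gap: {gap:.2f}")

        # A team might fail the build when the gap exceeds an agreed threshold;
        # choosing that threshold is a policy decision, not a technical one.
        THRESHOLD = 0.2  # hypothetical value
        if gap > THRESHOLD:
            print("Fairness check failed: review the model before release.")

The design point is that such a check runs automatically, like any other unit test, so a fairness regression surfaces during development rather than after deployment.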

Each of these approaches will take time to play out. Meanwhile, AI app developers ought to take seriously the value of abiding by AI Ethics guidelines and encourage their organisations to infuse such principles into their everyday efforts.

Dr. Lance Eliot is our Associate Editor for Computers & Law covering the USA. He is a globally recognised expert on AI & Law whose columns have amassed over 3 million views in Forbes and AI Trends. He also serves as the Chief AI Scientist for Techbrium and is a Stanford Fellow at the Stanford Centre for Legal Informatics in Stanford, California, USA. This is the first of what we hope will be regular articles explaining how AI technology is evolving to handle the problems of ethical and legal decision making.