Simon Deane-Johns summarises some key takeaways from the recent SCL Ireland event on AI held on 17th September
Digital recordings cannot be trusted. Artificial intelligence can be used to substitute different words, with seamlessly synchronised lip movements. Professor Barry O’Sullivan demonstrated this before our very eyes in his Overview of Artificial Intelligence in Dublin last Tuesday.
AI concepts are easy, at least as Barry explains them; it’s the delivery that’s complex.
The difference between traditional computer programming and machine learning is that a traditional program applies rules written by a human to data, while a machine learning system infers the rules from examples in the data.
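That contrast can be put in a few lines of code. This is a hypothetical toy example (the task, function names and threshold are all illustrative, not from the talk): a hand-written rule versus the same rule “learned” from labelled examples.

```python
# Toy task: decide whether a temperature counts as "hot".

# Traditional programming: a human writes the rule explicitly.
def is_hot_rule(temp_c):
    return temp_c > 25  # threshold chosen by the programmer

# Machine learning (in miniature): the rule is inferred from
# labelled examples instead of being hand-written.
def learn_threshold(examples):
    """examples: list of (temperature, labelled_hot) pairs."""
    hot = [t for t, label in examples if label]
    cold = [t for t, label in examples if not label]
    # Split midway between the warmest "not hot" and coolest "hot" example.
    return (max(cold) + min(hot)) / 2

data = [(15, False), (20, False), (24, False), (28, True), (31, True)]
threshold = learn_threshold(data)  # 26.0 for this data

def is_hot_learned(temp_c):
    return temp_c > threshold
```

The learned version behaves like the hand-written one, but its rule came from the data, which is also why (as Barry notes below) its quality depends entirely on the quantity and quality of that data.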
Use-cases for AI can be defined in terms of:
“@BarryOsullivan discussing #ArtificialIntelligence, bias in AI systems and the concerns around digital identities. Fantastic event and insights organised by @computersandlaw #Dublin @LexTechIreland @LemanSolicitors” — Karl Manweiler (@KarlManweiler), 17 September 2019
Apparent feats of artificial intelligence are usually over-hyped, and the hype ignores the vast cost in electricity compared to the human brain (reportedly some $50m in electricity to beat a human at Go, against the 7 watts consumed by the human’s brain).
AI is also very “brittle” and unable to cope with any scenario in which it hasn’t been ‘trained’: it is very dependent on the quantity, quality and availability of data.
Computers, and artificial intelligence itself, have no understanding of the world: AI cannot answer causal or counterfactual questions.
No artificial intelligence is 100% accurate, which raises two questions: how inaccurate is it, and what are the consequences of that inaccuracy?
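A headline accuracy figure can also hide exactly the errors that matter. A hypothetical illustration (the fraud scenario and numbers are mine, not from the talk): on an imbalanced dataset, a model that misses every important case can still look 99% accurate.

```python
# 1,000 transactions, of which only 10 are actually fraudulent.
labels = [True] * 10 + [False] * 990       # ground truth: True = fraud

# A (useless) model that always predicts "no fraud".
predictions = [False] * 1000

correct = sum(p == l for p, l in zip(predictions, labels))
accuracy = correct / len(labels)           # 0.99 -- looks excellent

# Yet every fraudulent transaction is a false negative.
false_negatives = sum(l and not p for p, l in zip(predictions, labels))
```

Here `accuracy` is 0.99 while `false_negatives` is 10 out of 10, which is why the *consequences* of inaccuracy, not just the rate, have to be asked about.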
Even more problematic is the lack of (what I call) explainability: “if a neural network wants to turn right, no one can explain why.” Barry agreed that this makes the use of AI acceptable in cases where errors wash out over time (say, two reinsurers using it to more efficiently assess and set off their liability for claims, on the basis that who pays more or less will even out), but not in situations where a false negative or false positive means a person loses their life, their freedom, compensation that is actually due to them, or some other fundamental right.
Bias is also a huge problem. We are often unclear about what we mean by bias – there are many different types. Bias tends to be inherent in the data sets used to train AI, and it is considered mathematically impossible to remove both selection bias (accidentally working with a specific subset of a population instead of the whole, making the sample unrepresentative) and prediction bias (false negatives/positives). You might be tempted to correct prediction bias by adding a calibration layer that adjusts the mean prediction by a certain percentage, but that only fixes the symptoms, not the cause, and makes the system dependent on the prediction bias and the calibration layer remaining aligned over time.
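The calibration point can be seen numerically. A minimal sketch, with an entirely made-up model whose real flaw is noise that grows with the true value: a constant calibration offset fixes the *mean* prediction, but leaves the underlying errors untouched.

```python
import random

random.seed(0)

# Hypothetical ground truth and a flawed "model": systematically 10 units
# low, plus noise that grows with the true value (the real defect that
# calibration never addresses).
truth = [random.uniform(0, 100) for _ in range(1000)]
predictions = [t - 10 + random.gauss(0, t * 0.2) for t in truth]

# Calibration layer: shift every prediction by the observed mean bias.
mean_bias = sum(t - p for t, p in zip(truth, predictions)) / len(truth)
calibrated = [p + mean_bias for p in predictions]

# The mean error is now ~0 ...
mean_error = sum(t - c for t, c in zip(truth, calibrated)) / len(truth)

# ... but the spread of individual errors is exactly what it was before:
# the symptom (biased mean) is gone, the cause (heteroscedastic noise) is not.
spread = max(abs(t - c) for t, c in zip(truth, calibrated))
```

And if the model’s underlying bias drifts while the fixed offset does not, the “corrected” system quietly becomes wrong again, which is the dependency the paragraph above warns about.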
Lawyers need to be engaged in AI development/deployment
Barry is very concerned that, while certain policy bodies are led by lawyers, a lot of AI is actually being developed and deployed without the involvement of any legal expertise. The various shortcomings in AI explained above mean this has to change if we are to develop and use AI responsibly.
Legal issues include:
5. The European Commission is (“suddenly”) considering regulation along the lines of the “Ethics Guidelines for Trustworthy AI” which contain 7 “requirements”: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) environmental and societal well-being and (7) accountability. If explainability were mandated as part of “transparency”, for example, then no AI could meet the requirement. Instead, Barry recommends a regulatory requirement for certification of AI so that the shortcomings of each AI are known and appropriate decisions can then be made about whether and, if so, how it may be deployed.
6. AI can be used for good: Global Pulse is a UN initiative to discover and ‘mainstream’ applications of big data and AI for development and humanitarian action (UNGlobalPulse.org)
Marcus, G. and Davis, E., “Rebooting AI: Building Artificial Intelligence We Can Trust” (Random House, 2019)