The ICO seeks contributions on the development of an auditing framework for AI.
Simon McDougall, Executive Director for Technology Policy and Innovation at the ICO, has written a blog post inviting contributions on the development of an auditing framework for AI.
He points out that new and innovative applications of AI permeate many aspects of modern life, such as healthcare and recruitment. However, these applications also carry risks, which is why AI is one of the ICO's three strategic priorities. The aim of the framework is to set out a solid methodology for auditing AI applications, ensuring that they are transparent and fair, and that the necessary measures to assess and manage the data protection risks they pose are in place.
The framework will also inform future guidance for organisations, supporting the continued and innovative use of AI within the law and complementing existing resources, including the ICO's Big Data and AI report.
Any feedback will be used to inform a formal consultation paper, which is expected to be published by January 2020. The final AI auditing framework and the associated guidance for organisations are due for publication by spring 2020.