Chris Kemp introduces the concepts of AI governance and sets out a straightforward roadmap for private sector organisations looking to implement it.[1]
Introduction
A structured approach to IT-related governance, which began with Open-source Software (“OSS”) governance some 30 years ago, has become widely accepted in private sector organisations. Broadly, there are three pieces to this type of governance: (1) a statement of strategy and high-level principles; (2) a statement of policy to implement the principles; and (3) the nuts and bolts of processes to anchor the policy into the organisation’s operations.
Structured IT governance received a further boost in the data protection era as organisations operationalised GDPR compliance. The core techniques of privacy compliance (such as data mapping, impact assessments and transparency disclosures) are directly relevant in the AI governance context.
Strategy and principles
The starting point for AI governance is strategy (where do we want to get to?) and principles (how do we want to get there?).
Many private and public sector bodies have published AI principles, ethics frameworks and similar documents. This can make it difficult to know where to begin. We put forward two prominent examples that can be used as a starting point: (1) the European Commission High-level Expert Group on AI (“AI HLEG”)’s Ethics Guidelines for Trustworthy AI and (2) the US National Institute of Standards and Technology (“NIST”)’s AI Risk Management Framework.
AI HLEG’s Ethics Guidelines for Trustworthy AI (2019).[2] The AI HLEG identifies seven ethical AI principles: (1) human agency & oversight; (2) technical robustness & safety; (3) privacy & data governance; (4) transparency; (5) diversity, non-discrimination & fairness; (6) societal & environmental wellbeing; and (7) accountability. While the AI HLEG’s principles predate the EU AI Act, the regulation cites them approvingly, noting that “all stakeholders… are encouraged to take into account, as appropriate, the ethical principles for the development of voluntary best practices and standards” (Recital 27).
NIST AI Risk Management Framework (“RMF”) (2023).[3] The more recent NIST AI RMF identifies seven “characteristics of trustworthy AI” and offers “guidance for addressing them”. The characteristics are that AI systems should be: (1) safe; (2) secure & resilient; (3) explainable & interpretable; (4) privacy-enhanced; (5) fair with harmful bias managed; (6) accountable & transparent; and (7) valid & reliable. The AI RMF also offers practical steps to implement AI governance, in particular through the Playbook and generative AI Profile, to which we return below.
AI principles like the two examples above tend not to include compliance with law as a standalone principle. That is because legal compliance is a separate aspect of an organisation’s responsible use of AI: it is assumed by the principles, which focus on broader ethical objectives. For example, the AI HLEG treats “lawful” use of AI and use in accordance with “ethical principles” as separate components of “trustworthy AI”. Organisations may wish to follow suit and draw the same distinction between legal compliance and AI principles.
Implementing the principles with standards
Technical standards have emerged as an important part of implementing AI governance. Standards offer organisations a framework to implement their chosen AI strategy and principles. Third-party certification against a standard is a powerful way for organisations to demonstrate credibility and leadership in the way they use AI. This section introduces two emerging topics in AI standards: (1) the ISO/IEC 42001:2023 “AI Management System” standard; and (2) the EU AI Act and technical standards.
ISO/IEC 42001:2023 – the “AI Management System” standard.[4] The ISO/IEC 42001:2023 standard has been the centrepiece of the International Organization for Standardization (“ISO”)’s efforts to introduce technical standards for AI. Its purpose is to “provide requirements for establishing, maintaining and continually improving an AI management system within the context of an organisation”. The “management system” concept is key. ISO maintains a range of Management Systems Standards (“MSS”), each of which follows a common “high-level structure”. Another well-known MSS is ISO/IEC 27001:2022, which covers information security management systems.
As with other MSS, the high-level structure of ISO/IEC 42001:2023 comprises a set of operational topic areas that together provide an end-to-end framework for AI governance: (1) context of the organisation; (2) leadership; (3) planning; (4) support; (5) operation; (6) performance evaluation; and (7) improvement. The aim of this common structure is to help organisations implement multiple MSS according to their requirements. A company that has already implemented an ISO/IEC 27001:2022 information security management system should find it easier to fold an AI management system under ISO/IEC 42001:2023 into its existing policies and procedures.
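By way of illustration only, the sketch below (in Python) shows one way an organisation might run a simple gap analysis across the shared MSS clause areas, reusing existing ISO/IEC 27001:2022 material where possible and flagging where new AI-specific material is needed. The clause names follow the common structure listed above; the coverage flags and suggested dispositions are hypothetical assumptions for the purposes of the example, not an assessment of any real organisation.

```python
# Illustrative sketch only: a simple gap analysis across the shared MSS
# high-level structure. Clause areas follow the common structure listed
# above; the coverage flags below are hypothetical examples.

MSS_CLAUSE_AREAS = [
    "Context of the organisation",
    "Leadership",
    "Planning",
    "Support",
    "Operation",
    "Performance evaluation",
    "Improvement",
]

# Hypothetical status: clause areas an existing ISO/IEC 27001:2022
# implementation already covers, versus areas needing AI-specific work
# for ISO/IEC 42001:2023 (e.g. AI risk criteria, AI impact assessments).
covered_by_existing_isms = {
    "Context of the organisation": True,
    "Leadership": True,
    "Planning": False,
    "Support": True,
    "Operation": False,
    "Performance evaluation": True,
    "Improvement": True,
}

for area in MSS_CLAUSE_AREAS:
    action = (
        "reuse/extend existing ISMS material"
        if covered_by_existing_isms[area]
        else "draft new AI-specific material"
    )
    print(f"{area}: {action}")
```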
As above, certification by a third-party certification body is an effective way for an organisation to demonstrate a responsible approach to AI. A high-profile example is Microsoft, which in March 2025 announced certification under ISO/IEC 42001:2023 for its Microsoft 365 Copilot and Microsoft 365 Copilot Chat services.[5] However, third-party certification is not essential and an organisation can implement and comply with the requirements of an MSS without also being independently certified.
The EU AI Act and standards. Technical standards are also an important part of the EU AI Act’s regulatory approach, particularly for high-risk AI systems, where compliance with published harmonised standards gives rise to a presumption of conformity with the AI Act’s rules.
The AI Act sets out a framework for the development of technical standards at Article 40. As at the date of this article, the Commission has requested the European Committee for Standardisation (“CEN”) and the European Committee for Electrotechnical Standardisation (“CENELEC”) (two of the European Standardisation Organisations (“ESOs”)) to develop standards relating to high-risk AI systems. These standards are expected to be finalised before the AI Act’s requirements for high-risk AI systems start to apply in August 2026.
It looks likely that there will be differences between the European standards and the requirements of international standards, ISO/IEC 42001:2023 in particular. The ISO standard focuses on organisational AI risk management. Front and centre of the European approach is an aim to “minimise risks to the health and safety and fundamental rights of persons”.[6] A 2024 briefing document published by the European Commission’s Joint Research Centre noted “there are fundamental differences between managing risk to organisational objectives and addressing possible risks of AI systems to individuals” and that “this will require consideration of various aspects not covered in existing ISO/IEC work”.[7]
A practical point is that the European standards are aimed principally at high-risk AI systems, which are expected to account for a small percentage of overall AI applications.[8] In this context, international approaches like ISO/IEC 42001:2023 and the NIST AI RMF (on which more below) will still offer a relevant and helpful AI governance framework for organisations with an EU presence.
Anchoring AI governance into the organisation’s operations
The NIST AI RMF and its accompanying Playbook and generative AI Profile are useful because they offer suggested actions that organisations can use to implement AI governance at a “nuts and bolts” level. The approach is flexible: organisations can choose whichever suggestions they consider appropriate for their circumstances.
The AI RMF follows a hierarchical structure with four top-level functions: Govern, Map, Measure and Manage. Below those functions sit categories and sub-categories. As an example of the AI RMF’s practical approach, for the first Govern category (policies, processes and procedures) the Playbook suggests maintaining policies for training organisational staff and connecting AI governance to existing organisational governance and risk controls.
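As an illustration of how this hierarchy might be anchored into day-to-day operations, the sketch below models a simple governance register of functions, categories and suggested actions. The function/category/action structure follows the RMF hierarchy described above, but the category name and action wording paraphrase this article rather than quoting NIST, and the status values are hypothetical.

```python
# A minimal sketch of how an organisation might record AI RMF functions,
# categories and Playbook-style suggested actions in a governance register.
# Wording below paraphrases the article; it is not quoted from NIST.
from dataclasses import dataclass, field

@dataclass
class Category:
    name: str
    suggested_actions: list[str] = field(default_factory=list)
    status: str = "not started"  # hypothetical: "not started" / "in progress" / "done"

@dataclass
class Function:
    name: str  # one of: Govern, Map, Measure, Manage
    categories: list[Category] = field(default_factory=list)

register = [
    Function(
        name="Govern",
        categories=[
            Category(
                name="Policies, processes and procedures",
                suggested_actions=[
                    "Maintain policies for training organisational staff",
                    "Connect AI governance to existing governance and risk controls",
                ],
                status="in progress",
            ),
        ],
    ),
    Function(name="Map"),
    Function(name="Measure"),
    Function(name="Manage"),
]

# Print a simple status report from the register.
for fn in register:
    for cat in fn.categories:
        print(f"{fn.name} / {cat.name}: {cat.status}")
        for action in cat.suggested_actions:
            print(f"  - {action}")
```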
The AI RMF’s generative AI Profile applies the AI RMF’s functions, categories and sub-categories to use cases involving generative AI. The Profile identifies specific risks posed by generative AI systems and pairs them with detailed suggested actions. It offers a helpful starting point for organisations looking to implement governance for generative AI tools.
Conclusion: how to get started
This article has broken AI governance down into three key steps and fleshed them out with a deliberately narrow selection of source material. The aim is to provide food for thought to organisations getting started with AI governance. For Step 1 – the statement of strategy and high-level principles – we have suggested organisations look to the AI ethics principles and trustworthy AI characteristics published by the EU’s AI HLEG and NIST in its AI RMF. For Step 2 – a statement of policy to implement the principles – we have pointed readers towards the ISO/IEC 42001:2023 standard for AI management systems. For Step 3 – the nuts and bolts to anchor AI governance into organisational processes – we have discussed NIST’s AI RMF and its supporting Playbook and generative AI Profile.

Chris Kemp, Partner at Kemp IT Law LLP
[1] A White Paper will soon be available from Kemp IT Law. To receive a copy when it is published, contact info@kempitlaw.com.
[2] https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
[3] https://www.nist.gov/itl/ai-risk-management-framework
[4] https://www.iso.org/standard/42001
[5] Microsoft, Microsoft 365 Copilot Achieves ISO/IEC 42001:2023 Certification (25 March 2025) <https://tinyurl.com/4w7b2m3a>.
[6] Commission Implementing Decision C(2023) 3215 of 22 May 2023, Annex II, para. 1 <https://tinyurl.com/4bcjxav4>.
[7] European Commission Joint Research Centre, Harmonised Standards for the European AI Act, pp. 3 and 4 <https://tinyurl.com/2s3e4v22>.
[8] The European Commission’s AI Act Impact Assessment from 2021 noted, at p. 69, that “only 5% to 15% of all AI applications are estimated to constitute a high risk” <https://tinyurl.com/yxx2hukw>.