Mark Hendry explains how good AI governance is now a cornerstone of cybersecurity
AI is already embedded into our day-to-day personal and professional lives. As adoption expands, so do questions about how to govern its use within and across organisations. With the horse having already bolted, will attempts at governance be meaningless, after-the-fact endeavours for the sake of compliance, or might they have real impact and value?
It does seem as though the conversation around governance is shifting. Once viewed primarily as a compliance burden, or even as being unnecessary, AI governance is increasingly recognised as a strategic capability – one that enables innovation, builds resilience, and earns trust.
This article explores how firms can reframe governance not as a burden but as a source of competitive advantage, with application not only in risk management but also in unlocking the most powerful use and value cases within their own firms. It also introduces the concept of AI observability, a vital but often overlooked component of responsible AI adoption, and outlines practical steps for embedding governance and observability into organisational culture and strategy.
The Business Case for Strong AI Governance
AI systems are powerful, but as most individuals and companies will know by now (from personal experience if not from reading the news), they are far from infallible. Without proper oversight, they can introduce bias, cybersecurity risks, and data protection and privacy issues, or produce outcomes that are opaque and unaccountable, all with myriad potential legal consequences. Governance, done in a balanced and pragmatic way, provides the guardrails that empower organisations to innovate with AI in a safe and optimised manner.
But beyond risk mitigation, governance offers tangible strategic benefits:
- Regulatory alignment: With the UK’s outcome-based regulatory model and the EU AI Act coming into force, firms that proactively govern AI are better positioned to meet evolving expectations.
- Operational resilience: Governance frameworks help firms detect and respond to model drift, data degradation, and other systemic risks.
- Stakeholder confidence: Investors, clients, and regulators increasingly expect transparency and accountability in AI use.
- Innovation enablement: Clear governance allows firms to experiment with AI while maintaining control and oversight.
In short, AI governance is not just about avoiding harm; it’s about unlocking value responsibly.
AI Observability: The Missing Link in Governance
While governance sets the rules, observability provides the insight. AI observability refers to the ability to monitor, understand, and explain how AI systems behave in real time. It’s a dynamic capability that complements governance by enabling continuous oversight and adaptation.
Key benefits of observability include:
- Model performance tracking: Detecting when models deviate from expected behaviour or degrade over time.
- Explainability: Understanding why a model made a particular decision — essential for regulatory compliance and user trust.
- Auditability: Providing evidence for internal reviews or external investigations.
- Intervention: Quickly identifying and resolving issues in production AI systems, from clamping down on poor user behaviour and providing training through to full incident response.
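To make the first of these capabilities concrete, here is a minimal sketch of model performance tracking. The `DriftMonitor` class, its thresholds, and the window-based approach are illustrative assumptions, not a reference to any particular observability product: it simply flags drift when rolling accuracy falls a set margin below the accuracy recorded at deployment.

```python
from collections import deque


class DriftMonitor:
    """Illustrative sketch: flag model drift when rolling accuracy
    falls a set tolerance below the baseline seen at deployment."""

    def __init__(self, baseline_accuracy: float, window: int = 100,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        # Rolling record of recent outcomes: 1 = correct, 0 = incorrect.
        self.outcomes = deque(maxlen=window)

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def drifted(self) -> bool:
        if not self.outcomes:
            return False
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance


# Simulate a model deployed at 92% accuracy that later degrades.
monitor = DriftMonitor(baseline_accuracy=0.92, window=50, tolerance=0.05)
for _ in range(50):
    monitor.record(correct=True)
for _ in range(20):
    monitor.record(correct=False)
print(monitor.drifted())  # → True: rolling accuracy has dropped below tolerance
```

Production observability platforms do far more (input distribution checks, explainability traces, alert routing), but even this simple pattern turns "is the model still behaving?" from an annual audit question into a continuous signal.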
Observability transforms governance from a sometimes difficult practice underpinned by static checklists into a real-time capability that evolves to truly support an organisation's use of AI and other innovative technologies.
When Governance Goes Wrong: A Real-World Example
Consider the case of a UK-headquartered multinational client that embarked on an AI strategy led primarily by its IT function. While the firm did attempt to engage its staff via internal surveys and some working groups, the responses and involvement were less than hoped and lacked meaningful detail. Hardly surprising, when the firm's colleagues are typically stretched delivering on commercial growth initiatives. As a result, the strategy was largely speculative, based on the tools being marketed to IT professionals at the time rather than on actual colleague-driven use cases or operational needs.
AI Governance at the firm? Monthly meetings involving a handful of individuals who were more motivated by being seen to be involved in an AI initiative than by a desire to drive meaningful oversight. No real consideration or drive toward observability tools and no real understanding of how AI was being used across the firm – the strategy was an exercise in guesswork.
The consequences were predictable. Risky and even malicious prompts went undetected. In some cases, LLMs trained on internal datasets produced outputs containing personal data and other commercially and legally sensitive information, which were accessed by users who should have been denied access. The firm had no mechanism to identify underuse, misuse, or emerging risks.
Turning the Strategy Around
Recognising the gaps, the firm took decisive steps to rebuild its AI governance framework. Observability became a central tenet. By collecting real-time data on how LLMs were being used across different practice areas, the firm was able to:
- Identify risky prompts and flag malicious behaviour, using reference frameworks such as the NIST AI Risk Management Framework, OECD AI Principles, and ISO/IEC 42001.
- Detect which teams and offices were underutilising AI and share successful use cases to encourage adoption.
- Intervene where necessary, both to clamp down on inappropriate use and to support teams with training and guidance.
- Configure LLMs with tighter controls to prevent recurrence of newly identified data leakage incidents.
This shift from speculative strategy to data-driven governance not only reduced risk; it unlocked new opportunities for efficiency, collaboration, and innovation.
In sectors like financial services, healthcare, and legal tech, firms whose AI governance and strategy are underpinned by good observability capabilities are seeing higher returns in trust, efficiency, and market differentiation. This includes firms that can demonstrate that they have functioning:
- Observability platforms: Technology that provides real-time insight into model and user behaviour.
- Cross-functional governance committees: Bringing together legal, compliance, tech, and business leaders to oversee AI use.
- AI usage registers: Tracking all AI tools used across the organisation, including third-party and shadow AI systems.
- Model audit protocols: Regular reviews of AI models for fairness, transparency, and performance.
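An AI usage register need not be complicated to be useful. The sketch below shows one possible shape for such a register, assuming a simple in-memory structure; the tool names, fields, and vendors are hypothetical, chosen only to illustrate how unapproved "shadow AI" surfaces naturally once every tool in use is recorded.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AIToolRecord:
    """One entry in a hypothetical AI usage register."""
    name: str
    vendor: str
    owner: str              # accountable business owner
    approved: bool          # False captures shadow AI awaiting review
    data_categories: list = field(default_factory=list)
    registered_on: date = field(default_factory=date.today)


class AIUsageRegister:
    """Tracks every AI tool in use, approved or not."""

    def __init__(self):
        self._records = []

    def add(self, record: AIToolRecord) -> None:
        self._records.append(record)

    def shadow_ai(self) -> list:
        """Tools known to be in use but not yet approved."""
        return [r for r in self._records if not r.approved]


register = AIUsageRegister()
register.add(AIToolRecord("ContractDraftGPT", "ExampleVendor", "Legal Ops",
                          approved=True, data_categories=["client contracts"]))
register.add(AIToolRecord("BrowserSummariser", "Unknown", "Unassigned",
                          approved=False))
print([r.name for r in register.shadow_ai()])  # → ['BrowserSummariser']
```

In practice a register like this would live in a governed system of record rather than code, but the design point stands: once shadow AI is visible as data, it can be reviewed, approved, or retired rather than discovered during an incident.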
Embedding Governance and Observability into Strategy and Culture
To realise the full value of AI governance, firms must go beyond frameworks and checklists. They must embed governance and observability into their strategy, operations, and culture.
Here are five practical steps:
1. Leadership alignment
Governance must be championed at the highest levels. Boards and executives should treat AI oversight as a strategic priority, not just a compliance issue.
2. Cross-functional collaboration
Legal, compliance, risk, technology, and business units must co-own governance. Siloed approaches are ineffective in managing complex AI systems.
3. Tooling and infrastructure
Invest in observability platforms and model monitoring tools that provide real-time insights and support explainability.
4. Policy and culture
Create clear policies that define acceptable AI use, including shadow AI. Foster a culture of responsible innovation through training and awareness.
5. Continuous improvement
Governance and observability must evolve with the technology. Regular reviews, scenario testing, and stakeholder feedback are essential.
Conclusion
AI governance isn't just about ticking boxes or avoiding trouble. Done well, it is the difference between burdensome bolt-on compliance and real value and benefits realisation. But to do AI governance really well, it must be underpinned by real-time observability; without that, you will struggle to see what is happening, respond quickly, and steer AI use in the right direction.

Mark Hendry is Partner at S&W where he leads their Digital Resilience and Compliance team. He advises clients on all aspects of “Digital Trust” encompassing Technology, Data, Cyber, Privacy and AI.