Shanthini Satyendra separates the wood from the trees on what matters to leaders in companies seeking to adopt AI, drawing out the pivotal role lawyers can play in this respect. The article includes insights she has curated from leaders in AI, including General Counsel and C-suite members, who, together with Shanthini, form an expert panel on this topic at the SCL AI Conference 2025.
“80% of boards have no process to audit use of AI in their firms and said they did not know what questions to ask. Over 86% of businesses use AI without the board being aware of it.”
AI in the Boardroom: The essential questions for your next board meeting, Institute of Directors (IoD).
These results were based on survey data gathered by the IoD in 2022, before GPT and generative AI reached mainstream board-level awareness. It is a sobering read nonetheless, as it shows the steep learning curve that many boards will have faced since then, given generative AI’s unprecedented impact after ChatGPT announced itself to the world in November 2022. So, fast forward to 2025: what are boards facing when it comes to AI? And how can lawyers help?
This article will look at exactly that. It is relevant not just to Partners and General Counsel, but to all lawyers advising on AI, in-house and in private practice, as the board’s concerns cascade down to shape the legal input that needs to be prioritised at every level.
What are the AI challenges for Boards in 2025?
The IoD’s follow-up report, “AI Governance in the Boardroom”, published in June 2025, had three core findings, summarised below.
- Increasing adoption of AI, but governance gaps remain – in particular, a quarter of directors are concerned about the lack of an internal AI policy, strategy or data governance framework in their organisation.
- Benefits of AI are recognised, but scepticism persists – benefits such as productivity gains and the use of AI for data analysis are mentioned, but overhyped claims are a significant concern, with a tension between enthusiasm for gains and concerns about reliability and implementation.
- Skills gaps, lack of trust in AI outcomes and security risks are the biggest inhibitors to AI adoption – the report states that skills gaps exist at every level, including board and management level. One can see correlations between this and the first two findings. For instance, poor AI literacy makes it harder to assess the risks and benefits of AI, in turn increasing the risk of hype (finding 2). Likewise, the factors that make boards sceptical in finding 2 (such as concerns about reliability that signal a lack of trust) inhibit AI adoption, as decision makers will be slow to green-light AI projects if they are sceptical about the outcomes.
So, what needs to be done?
The IoD report concludes with a clear call to action, saying that “AI adoption and governance must be treated as a strategic boardroom issue, not a purely technical matter”.
This conclusion, and the need for lawyers to play their role in making it a reality, are strongly supported by Lord Clement-Jones CBE, Co-Chair of the All-Party Parliamentary Group on Artificial Intelligence. When invited to comment on this article, he was clear that: “Leadership on AI strategy and governance needs to come from the top of the company and that’s where lawyers need to ensure they position their relationships and their advice.”
The tenor of the IoD findings is also supported by recent Harvard and MIT analysis:
- Harvard Business Review: companies that take a strategic approach to building robust infrastructure and internal capabilities in AI will emerge as winners. AI’s impact will be significant, but a long adoption curve is to be expected, not a quick win (‘The AI Revolution Won’t Happen Overnight’, June 2025).
- MIT: 95% of Generative AI pilots are failing to deliver measurable business impact or return on investment, with only about 5% driving rapid revenue growth and successful enterprise integration (‘The GenAI Divide: State of AI in Business in 2025’).
The MIT report triggered widespread reaction recently, initially sending the share prices of AI firms down, with many surprised by its findings. It is posited, however, that this is a “predictable surprise”, as the findings chime with Amara’s law: we overestimate the impact of technology in the short run and underestimate its effect in the long run. We have seen this with technology after technology, including the internet, which took nearly a decade to “take off”.
As Jas Narang, Chief Data, AI & Controls and Transformation Officer of Santander UK, put it to me: “We need to believe and invest but with eyes wide open”. He is laser-focussed on benefits, the technology being a means to that end, and unequivocal on the need for robust success criteria one can measure – whether that be cost reductions, revenue uplift, customer experience improvement or risk mitigation. What you measure, happens.
This will resonate with those at the coal face advising on and implementing AI. They will know that adopting AI to drive benefits means getting the change management, benefits definition and foundational work like legal compliance and governance right. This work takes time, but ultimately acts as an accelerator when it comes to driving benefits. Indeed, those prone to over-estimate AI’s benefits tend to under-estimate the need for this foundational work. So, my view is that the findings set out in these reports are not cause for disheartenment or surprise, but simply a timely cue to focus on the less headline-grabbing aspects needed to drive benefits – issues that lawyers are well placed to help with if, as indicated below, they embrace the full repertoire of what is needed and do not simply apply the law.
How can lawyers help Boards with AI?
First, a word on the lawyer’s role. It is often seen as being to mitigate or avoid risks. Whilst that does add value, it is posited that focusing on navigating risk to drive business benefit adds most value when advising Boards – shaping what matters strategically, rather than playing defence. This is key, as the AI landscape is fast evolving with few template approaches that work. Indeed, when it comes to AI, one person’s risk can be another’s opportunity. Real value is added by those able to navigate the risks by applying first principles to aid AI adoption and drive benefits where no ready-made solutions exist. So, what might good look like? This is considered below in the context of the first two IoD findings. What follows is necessarily illustrative, not exhaustive, given the many contexts one encounters.
- Addressing absent or insufficient AI policies and governance. When shaping a way forward, lawyers who can spot the root causes can help unlock issues more quickly. Typically, two triggers underlie the above issues. First, an organisational lack of clarity as to which function owns policies and governance – Technology? Legal? Data? Second, even where there is clarity over the functions, there is the question of how to frame governance and policies. Lawyers who see this and act can pre-empt issues arising, rather than just dealing with the fallout by mitigating or avoiding risks later, for instance by:
- Getting AI governance, AI policies and foundational issues on the business and Board agenda to help unlock ownership – by working with other leaders to co-ordinate this, but also by identifying the necessary next steps. For example, if no AI Steering Group exists, seek a mandate to set one up, including the functions that will be key to foundational work on AI (e.g. the technology, compliance and data functions). If one exists but has lost impetus, seek a mandate to change it. The GC and legal need not shoulder the full responsibility but can still drive this – governance and legal compliance being a team sport, not an individual endeavour. No template exists as to which function(s) should lead, albeit the data, technology and legal functions are pivotal to success, or indeed as to how formal or informal their intervention on AI should be. But given the speed with which AI technology is developing, the approach adopted must be agile if it is to keep up with the pace of change. For firms new to AI, particularly small to medium-sized firms, Will Scrimshaw, who was General Counsel at Benevolent AI, advocates “An informal regular update to the Board on all AI-related topics, rather than getting bogged down in a matrix of committees and reporting structures from the start”. He notes that steercos and committees may well have merit down the line for such firms, once the Board is more on top of the AI issues they are facing. As stated, there is no one size that fits all scenarios and, for larger corporates, particularly in regulated sectors, formal AI committees can be most effective from the start given their scale and the expectations of regulators. In each case, what is essential for any lawyer driving this is strong, rounded AI literacy (across the business, technical and ethical/legal aspects of AI), excellent legal knowledge and skills being table stakes. A track record of leadership and of getting things done is key too.
- Helping with how AI governance and AI policies are framed. Lawyers are well placed to work with the business to identify what needs to be at the heart of AI governance or policies, using their forensic legal skills, not just knowledge, to tailor these to the business’s focus areas. They can also promote options that make the next step easier, such as uplifting existing policies/governance to cover AI rather than starting from scratch; advise where the EU AI Act or other regulations or non-binding approaches (e.g. ISO 42001) might help; and guide on specialist areas like intellectual property law and AI as warranted. Being output-minded means looking not just at the rules, but also at how to incentivise compliance. The latter, often under-rated, may include gates in AI projects hinged on governance criteria being met, performance reviews of those involved linked to compliance with controls, and oversight by AI-accountable board members. Nicolette Henfrey, Executive Vice-President, General Counsel and Company Secretary, IHG Hotels and Resorts, concurs, emphasising the need for legal teams to raise awareness of the internal governance processes being put in place and of the potential downsides of AI tools. This is key to managing AI use in an organisation, given the pace of change in AI tools and how accessible AI is. Governance and potential downsides can sometimes be perceived by business teams as dampeners. It is posited they are quite the opposite – accelerators – when delivered right. Don’t be coy in saying this upfront in awareness training; it can help set the right tone for what follows. Indeed, all too often AI projects get delayed or derailed because of issues that, in hindsight, were quite simple to side-step if teams had only had the awareness (and acted on it). Lawyers have a key role, as Nicolette Henfrey alludes to, in helping teams perfect that side-step.
- The lawyer’s role on Board concerns about hyped AI benefits and promised outcomes. Starting with root causes, hyped AI benefits often arise due to: 1. the absence of an effective, transparent framework for assessing AI benefits and risks; and/or 2. those seeking or giving approval for AI use cases not having sufficient AI literacy. Lawyers can add value as follows:
- Garnering board support to create a strong framework for assessing the benefits and risks of AI. This can be mandated under the governance and AI policies referred to above. No single template exists and many formulations work, but one of the simplest is a 2 × 2 matrix with business benefits plotted on one axis and risks on the other, with lawyers and other functions working together to define how the results on each axis are calculated. Approached correctly, the matrix has rigour whilst being intuitive to use. For instance, use cases with high benefit and low risk can be identified early and given the green light. Equally, low-benefit, high-risk use cases can be spotted early and put to one side. The outcome will only be as good as the work and detail underpinning it.
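To make the mechanics of such a matrix concrete, the triage it enables can be sketched in a few lines of code. This is a purely illustrative model, not a prescribed method: the 1–10 scoring scale, the threshold of 5 and the quadrant labels are assumptions for illustration, and in practice each axis would be defined jointly by legal and the other functions involved.

```python
# Illustrative sketch of a 2 x 2 benefit/risk triage for AI use cases.
# Assumptions (not from any standard): scores run 1-10 and a score
# above 5 counts as "high" on that axis. Real definitions would be
# agreed cross-functionally under the governance framework.

def triage(benefit: int, risk: int, threshold: int = 5) -> str:
    """Place a use case in one of the four quadrants of the matrix."""
    high_benefit = benefit > threshold
    high_risk = risk > threshold
    if high_benefit and not high_risk:
        return "green-light early"       # high benefit, low risk
    if not high_benefit and high_risk:
        return "put to one side"         # low benefit, high risk
    if high_benefit and high_risk:
        return "needs deeper review"     # worth pursuing, with controls
    return "deprioritise"                # low benefit, low risk

# Hypothetical example: a document-summarisation use case scored by
# the business (benefit) and by legal/compliance (risk).
print(triage(benefit=8, risk=3))  # -> green-light early
```

The point of the sketch is that the logic is trivial; the value lies in the rigour of the scoring that feeds it, which is exactly the work the functions must do together.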
- Lawyers are well placed to press for AI literacy training, anchoring training around governance and issues specific to the business so that teams engage. Start with the board, as an initial level-setting exercise is often needed to get them up to speed on how AI applies to the underlying business of the company, often in ways they had not appreciated. The Board will also need to contextualise the kinds of AI usage on which they should have a view in relation to risk appetite. This is a view Will Scrimshaw holds, noting that “The Board often needs to be brought along on that journey so that they can assess risks/opportunities through the right lens/framework.” Training on the job is key too and to be encouraged. For instance, those creating a risk/benefit AI matrix as above, at every level in the business, will increase their knowledge about AI and come to understand the vantage points of the other functions involved. Many Boards are acutely aware of the need for AI literacy, as flagged by the IoD report, so they will tend to support initiatives to improve it. AI literacy is covered here, with further valuable insights on the topic. Investing in AI literacy is not a one-time event but a continuing project, given the pace of change in AI.
- Lawyers can be pivotal in helping businesses avoid hype by using their analytical skills to help define AI strategy and prioritise use cases. Some lawyers may demur, arguing that this is not within their remit. Chris Martin, Associate General Counsel at Boston Consulting Group (BCG), is not one of them, stating there is “A real opportunity for General Counsels to take a proactive leadership role in shaping AI strategy, moving beyond traditional advisory functions.” He views legal leaders as uniquely positioned to advise on AI strategy from a holistic, end-to-end perspective, as they have the benefit of advising across the entire business – seeing the wide-angle view whilst also being close to the detail. That puts them in pole position to make the vital connection between regulatory requirements, ethics, operational practices, reputational issues and the actual roll-out of AI across the company’s markets – in turn, helping businesses shift the odds of success in their favour. That is crucial work, given what we saw earlier about how elusive success can be when it comes to AI adoption.
In conclusion, GCs and lawyers can add significant value: working with boards and leaders to set the tone from the top, and providing ongoing, proactive strategic legal input on AI that helps shape the path to AI adoption. This will also complement the more day-to-day legal advice on AI provided by legal teams. The best time to do so is before AI adoption matures in the business and the issues the IoD flagged have manifested. The next best time is now.

Shanthini Satyendra is a Fellow of the Society for Computers & Law (SCL), Vice-Chair of the SCL AI Committee, and CEO and Founder of Manisain. Her practice focusses on the smart adoption of AI – where the law intersects with business – with AI expertise gained both as a lawyer and in a business role scaling AI. She formerly headed the digital legal team at a major bank, where she was a founder member of its executive-level AI working groups and chaired its AI Accelerator Group, with 15+ years as a lawyer leading high-stakes (8-figure) digital projects navigating the risks many AI projects encounter today. She is a regular speaker on AI and the law at events such as the FT Innovative Lawyers Summit 2025 and a guest lecturer at Said Business School, University of Oxford and King’s College London. She is accredited under Said Business School’s AI, Executive Leadership and Ethics and Regulations programmes and as a Leading IT Lawyer by SCL.
This article is also available in the special AI issue of Computers & Law, which is available to download here.