AI Data Leaks & Shadow AI: The Legal Minefield Facing UK Organisations in 2025

June 18, 2025

Camilo Artiga-Purcell, General Counsel at Kiteworks, identifies some of the ever-increasing risks and potential consequences of rushing to use AI in legal practice

Picture a partner at a leading UK law firm, racing to finalise a high-stakes merger. With a deadline looming, they turn to a free online AI tool, uploading sensitive deal documents for rapid analysis. The tool delivers, and the work is completed on time. Months later, a rival firm using the same AI platform receives uncannily precise insights about the merger’s structure in an AI-generated response. An investigation reveals that the original documents were incorporated into the AI’s training data, inadvertently exposing confidential strategies. The fallout is swift: a regulatory probe, eroded client trust, and a legal battle over compromised attorney-client privilege.

This scenario is not a hypothetical – it reflects a growing crisis across UK organisations. Legal departments and businesses are embracing artificial intelligence at an unprecedented rate, driven by its promise of efficiency in tasks like contract drafting and legal research. Yet, a survey of 300 corporate legal departments found that 81% are using unapproved AI tools without data controls, creating a legal and compliance minefield. For UK organisations, governed by the UK GDPR and facing emerging AI regulations, the risks are acute. Without action, legal teams face breaches of confidentiality, multimillion-pound fines, and reputational damage. This article explores the scale of this problem, its legal implications, and practical steps to safeguard sensitive data while leveraging AI responsibly.

Scale of the Problem

The adoption of AI in UK legal departments is surging, with tools promising to streamline contract reviews, legal research, and document analysis. However, this enthusiasm has birthed a dangerous trend known as “Shadow AI,” where employees use personal or unapproved AI tools for work tasks without oversight. According to a recent survey, 83% of in-house counsel use AI tools not provided by their organisations, and 47% operate without any governance policies. The Stanford AI Index Report highlights a 56% rise in AI-related incidents globally, with data leaks a primary concern. In the UK, 57% of organisations admit they cannot track sensitive data exchanges involving AI, amplifying the risk of breaches.

We recently surveyed 461 organisations on this issue, across a range of industries, and the results reinforce these concerns with alarming specificity. Only 17% have automated controls with data loss prevention capabilities to block unauthorised AI access, and the legal sector fares even worse, with just 15% implementing technical controls – the lowest of any industry surveyed. Perhaps most troubling for UK law firms, 38% of legal organisations admit that over 16% of data sent to AI tools contains private or sensitive information, with 23% reporting that more than 30% of their AI-processed data is private.

The UK’s regulatory landscape heightens these challenges. The UK GDPR, aligned with the EU’s GDPR, imposes stringent obligations on data processing, storage, and cross-border transfers, with fines up to £17.5 million or 4% of global annual turnover for violations. The proposed UK AI Bill signals increased scrutiny of AI governance, while existing regulations like the Network and Information Systems (NIS2) Directive demand robust cybersecurity. For legal departments, a single employee uploading client data to an unapproved AI tool can expose privileged communications, trade secrets, or merger strategies to servers in unknown jurisdictions, undermining the foundations of legal practice.

Legal and Compliance Risks

The legal and compliance risks of ungoverned AI use are profound for UK organisations. Data protection violations top the list. The UK GDPR requires organisations to establish a lawful basis for processing personal data, adhere to data minimisation principles, and ensure security by design. When lawyers upload client data to consumer AI tools like ChatGPT or Claude, they relinquish control over that information. The data may be processed via third-party APIs, stored on servers in multiple jurisdictions, or used to train AI models, all potentially breaching UK GDPR requirements. Such violations can trigger severe penalties and lasting reputational harm.

Confidentiality and privilege concerns are equally grave. Attorney-client privilege, a bedrock of legal practice, can be waived when communications are shared with third-party AI providers. Consider a UK litigation team that uploads privileged strategies to an AI tool, only for opposing counsel to argue successfully that privilege has been lost, rendering years of communications discoverable. Trade secrets and intellectual property face similar risks, as AI platforms may inadvertently expose proprietary information through model outputs or data breaches, violating confidentiality agreements.

Regulatory compliance failures add further complexity. The NIS2 Directive mandates robust cybersecurity controls, while the Financial Conduct Authority (FCA) requires strict data governance for financial services firms. The Solicitors Regulation Authority (SRA) imposes ethical obligations under Rule 2.1, requiring solicitors to maintain competence in the technologies they use. Failure to understand AI risks can lead to disciplinary action, as seen in recent SRA investigations into tech mismanagement, where firms faced fines and reputational damage for inadequate data security. As AI regulations evolve, legal departments that fail to govern AI use risk becoming targets for enforcement actions. The lesson is stark: attorney-client privilege can be lost with a single upload.

How AI Data Leaks Occur

AI data leaks stem from a mix of technical vulnerabilities and human error. When lawyers upload documents to consumer AI tools, the data may be used to train the AI model, stored indefinitely on external servers, or shared with third-party APIs without transparency. These platforms, not designed for the rigorous security needs of legal work, make it nearly impossible to retrieve or delete data once uploaded – a risk dubbed the “irrevocability problem.” This is particularly alarming for legal departments handling privileged or sensitive information.

Common scenarios include lawyers using AI for contract drafting, legal research, or document analysis under tight deadlines. A junior associate might paste a draft settlement agreement into an unapproved AI tool to refine its language, unaware that the data is now stored on a server abroad. Similarly, a senior lawyer might use AI to summarise merger documents, not realising that the tool’s outputs could later reveal confidential strategies to client competitors, targets, or potential buyers. These actions, driven by the need for efficiency, create vulnerabilities that can lead to data leaks, regulatory violations, loss of privilege, or loss of bona fide competitive advantage.

The survey cited above confirms that these scenarios reflect current industry realities. Despite the legal profession’s heightened awareness of data risks – 31% of legal firms cite data leaks as their top AI concern, the highest of any sector – this awareness hasn’t translated into action: 15% of legal organisations operate with no formal AI data policies whatsoever, while 70% rely solely on human-dependent controls like training sessions and warning emails. This creates what the report calls an “awareness-action gap,” where firms recognise the danger but fail to implement the technical safeguards necessary to prevent catastrophic breaches.

Real-World Scenarios

The dangers of AI data leaks become clear when we imagine what could go wrong. Picture the scenario from our opening: a legal team uploads confidential merger documents to an AI tool for analysis. The platform uses those documents to train its model, and suddenly, sensitive deal information surfaces elsewhere, triggering expensive disputes and destroying client relationships.

Consider another possibility: a UK company’s legal department runs personal data through an unauthorised AI tool. The result? A full GDPR investigation, hefty fines, and damaging headlines that tarnish the firm’s reputation. Perhaps most alarming is this scenario: a litigation team uploads privileged attorney-client communications to an AI platform. When opposing counsel discovers this, they successfully argue that privilege has been waived. The entire case strategy unravels, and what should have been protected conversations become fair game in court.

These aren’t just theoretical risks; they represent the very real consequences that await organisations operating without proper AI governance. Each scenario shows how quickly a simple upload can turn into a professional catastrophe.

Building a Compliant AI Framework

To mitigate these risks, UK legal departments must establish a robust AI governance framework tailored to their needs. The foundation is a clear governance structure. Comprehensive AI usage policies should outline acceptable tools, data handling protocols, and consequences for non-compliance, addressing confidentiality, privilege, and data security. Regular risk assessments are vital to identify vulnerabilities, while a formal approval process ensures only secure, compliant AI platforms are used.

Technical controls are critical. Data classification systems should identify sensitive information before it is processed by AI tools. Access controls, such as role-based permissions and monitoring, can prevent unauthorised use of consumer AI platforms. An approved list of enterprise-grade AI tools, designed with legal and compliance requirements in mind, ensures efficiency without sacrificing security. These tools must integrate with existing cybersecurity infrastructure and incorporate data loss prevention measures to protect sensitive information.
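To make the idea of a classification check concrete, the sketch below shows, in Python, one way such a gate might look. It is a minimal illustration only: the pattern names, regular expressions, and labels are hypothetical stand-ins, and a real deployment would rely on the organisation's own classification taxonomy and a vetted data loss prevention engine rather than a handful of regexes.

```python
import re
from dataclasses import dataclass

# Illustrative patterns only; a real deployment would use the organisation's
# own classification taxonomy and a properly vetted DLP engine.
SENSITIVE_PATTERNS = {
    "uk_national_insurance": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "privilege_marker": re.compile(
        r"\b(privileged|attorney-client|legal professional privilege)\b", re.I
    ),
}

@dataclass
class ClassificationResult:
    label: str     # e.g. "public" or "confidential"
    matches: dict  # pattern name -> number of hits

def classify(text: str) -> ClassificationResult:
    """Scan text for sensitive markers before it is sent to any AI tool."""
    matches = {name: len(p.findall(text)) for name, p in SENSITIVE_PATTERNS.items()}
    label = "confidential" if any(matches.values()) else "public"
    return ClassificationResult(label, {k: v for k, v in matches.items() if v})

def may_send_to_ai(text: str) -> bool:
    """Simple policy: only text classified as 'public' may leave the firm."""
    return classify(text).label == "public"

if __name__ == "__main__":
    draft = "Settlement terms are privileged; contact j.smith@example.com."
    print(classify(draft))        # confidential, with the matched markers listed
    print(may_send_to_ai(draft))  # False -> the upload should be blocked
```

Even a gate this simple forces the question "may this data leave the firm?" to be answered before an upload happens, rather than after a breach has occurred.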

Training and awareness underpin effective governance. Mandatory training for all legal staff, from partners to associates, should cover the technical and legal risks of AI, including UK GDPR obligations and SRA requirements. Regular updates on emerging threats, such as new data breach tactics or regulatory changes, keep teams informed. Clear reporting mechanisms for AI-related incidents foster transparency and enable swift responses to potential breaches, minimising damage.

Practical Recommendations for Legal Teams

Legal teams must act swiftly to address AI data risks, with immediate, medium-term, and long-term strategies. In the short term, conducting a Shadow AI audit is essential to uncover unapproved tool usage. This involves surveying staff to identify all AI tools in use, assessing the data being processed, and documenting potential exposures. This could be backed up by technical solutions, such as an “AI Gateway” to help enforce these policies by automatically detecting and blocking sensitive client data from reaching unauthorised AI platforms, providing real-time protection while policies are developed. Emergency controls, such as blocking access to consumer AI platforms and providing approved alternatives, can halt further risks. Clear communication ensures staff understand the urgency and comply with new protocols.
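To illustrate how an AI gateway of this kind might enforce such a policy, the Python sketch below combines an allow-list of approved AI endpoints with a deliberately crude content scan before any prompt leaves the organisation. The host name and the sensitivity patterns are hypothetical; a production gateway would sit at the network or proxy layer and call a proper data loss prevention engine rather than a single regular expression.

```python
import re
from urllib.parse import urlparse

# Hypothetical allow-list of AI endpoints the firm has vetted and contracted with.
APPROVED_AI_HOSTS = {"ai.approved-vendor.example"}

# Rough stand-in for a real DLP engine: flag obvious privilege markers.
SENSITIVE = re.compile(r"\b(privileged|confidential|without prejudice)\b", re.I)

class BlockedRequest(Exception):
    """Raised when the gateway refuses to forward an outbound AI request."""

def gateway_check(url: str, payload: str) -> None:
    """Enforce two controls before a prompt leaves the organisation:
    the destination must be approved, and the payload must pass the scan."""
    host = urlparse(url).hostname or ""
    if host not in APPROVED_AI_HOSTS:
        raise BlockedRequest(f"'{host}' is not an approved AI platform")
    if SENSITIVE.search(payload):
        raise BlockedRequest("payload appears to contain privileged or confidential content")

if __name__ == "__main__":
    try:
        gateway_check("https://chat.example.com/api", "Summarise this privileged merger memo")
    except BlockedRequest as exc:
        print(f"Blocked: {exc}")  # the request never reaches the unapproved tool
```

The design point is that the decision is made centrally and automatically: the individual lawyer never has to remember the policy for the control to apply.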

In the medium term, comprehensive AI policies should align with UK GDPR, SRA, and FCA requirements. Again, technical controls, not just documentation, can be used to apply sensitive data classification, access controls, and audit trails, regardless of which AI tool employees attempt to use. Vendor vetting procedures are crucial, ensuring AI providers meet stringent security and compliance standards, with contracts that protect client data and include audit rights. An AI-specific incident response plan prepares teams to act decisively in case of a breach, minimising regulatory and reputational fallout.
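Audit trails can be prototyped just as simply. The sketch below, again illustrative and using an assumed file location and field names, records one line per AI interaction while storing only a hash of the submitted content, so the log itself never duplicates client data; in practice these events would feed a SIEM or an immutable log store rather than a local file.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical location; real deployments would write to a SIEM or immutable store.
AUDIT_LOG = Path("ai_usage_audit.jsonl")

def record_ai_use(user: str, tool: str, payload: str, decision: str) -> None:
    """Append one JSON line per AI interaction. Only a SHA-256 hash of the
    payload is stored, so the audit trail never duplicates client data."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "payload_sha256": hashlib.sha256(payload.encode("utf-8")).hexdigest(),
        "decision": decision,  # e.g. "allowed" or "blocked"
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")

# Example: record that a contract-review prompt was allowed through the gateway.
record_ai_use("a.associate", "approved-contract-ai", "Review clause 14.2 ...", "allowed")
```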

For the long term, investing in enterprise-grade AI solutions designed for legal work, such as the AI Gateway described above, is vital. Annual policy reviews keep governance measures aligned with evolving technology and regulations, embedding AI governance into the broader compliance strategy to maintain client trust while leveraging AI’s benefits.

Future Outlook and Conclusion

The UK’s regulatory landscape is evolving rapidly, with the proposed AI Bill, UK GDPR, and NIS2 Directive signalling heightened scrutiny of AI governance. Legal departments that fail to act risk becoming cautionary tales, facing fines, client loss, and reputational damage. Conversely, those that implement robust governance can gain a competitive edge, demonstrating to clients their commitment to security and compliance while harnessing AI’s efficiency.

The urgency of addressing AI data leaks is undeniable. Legal teams must act now to audit AI usage, implement controls, and educate staff. By balancing innovation with risk management, UK organisations can protect sensitive data, uphold client trust, and navigate a complex regulatory landscape. The legal profession is built on trust and diligence. In the AI era, these principles demand proactive governance to ensure technology serves as a tool for progress, not a source of peril.

Camilo Artiga-Purcell serves as General Counsel at Kiteworks, where he leads legal strategy and governance initiatives for secure content communications and collaboration. With extensive experience in data privacy, cybersecurity, and emerging technology law, he advises organizations on managing AI-related risks while maintaining competitive advantage.