The (Speculative) Rise of AI Agents in Legal

June 3, 2025

Amanda Chaboryk, Chris Cartmell, and Stephanie Baker, of PwC, raise some early considerations about the seemingly inevitable spread of agentic AI into legal departments

Introduction
Artificial Intelligence (AI) has played an increasingly prominent role within legal since as early as the 1990s, when legal databases began incorporating natural language processing to optimise search and retrieval. The 2020s have seen the further expansion of natural language processing and the advent of generative AI. Since the release of GPT-4 in March 2023, generative AI and advanced large language models (LLMs) have been a key focus for legal professionals. GenAI represents a new era for the profession, as LLMs can generate nuanced responses to complex legal questions and much more. Key uses within legal at present include intelligent summarisation of complex materials, context-aware drafting of legal documents, and precise research that delivers relevant authorities in seconds. More advanced and tailored use cases involve fine-tuned large language models, trained on carefully curated datasets to perform specific workflows with greater accuracy and relevance. Building on this foundation, the next frontier is the rise of AI agents – sophisticated software entities that integrate perception, cognition, and action to interact with their environment and accomplish tasks. Powered by LLMs, these agents can autonomously or semi-autonomously execute multi-step legal processes, interact with users, and navigate digital environments to complete end-to-end workflows.

Demystifying AI Agents
A common question is how the functionality of generative AI differs from that of AI agents, and what their underlying relationship looks like. In short, unlike traditional rule-based systems, AI agents can reason through problems, dynamically access relevant datasets and tools, and autonomously take purposeful actions to achieve defined goals. While generative AI tools like ChatGPT excel at producing human-like text, answering questions, and assisting with tasks such as drafting or summarising, they conventionally operate in a single interaction and rely on users to guide each step. AI agents go a step further. They are built not solely to generate responses, but to act – autonomously completing tasks with minimal input. Unlike traditional software that follows rigid, predefined instructions, AI agents have the ability to:

➜ adjust their behaviour in response to changing circumstances
➜ improve performance through ongoing interaction
➜ make decisions independently, within defined parameters
➜ tackle complex tasks with minimal human supervision

It’s helpful to think of AI agents as goal-orientated assistants that can plan, reason, interact with tools and systems, and take action on your behalf. For example, in the legal sector, rather than prompting a generative AI tool to draft a response and then manually searching for a relevant precedent, an AI agent could autonomously search a legal database, retrieve the appropriate case law or contract clause, assess its relevance, and generate a tailored draft—all within a single workflow. This same principle applies across other industries: in software engineering, agents can accelerate coding, conduct reviews, and suggest improvements; in sales, agents qualify leads, manage global communications, and schedule meetings; in research, agents gather critical insights, synthesise information, and produce strategic reports. AI agents are already being used to manage calls, coordinate workflows, automate finance tasks, and personalise digital outreach. Unlike traditional tools, they don’t just assist—they act. By initiating and completing tasks across systems, AI agents behave more like digital team members than static software, marking a shift from tools you use to assistants you delegate to.
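To make this concrete, the sketch below shows a minimal agent loop in Python. It is purely illustrative: the search_case_law and draft_response tools, and the llm_plan function standing in for an LLM planning call, are hypothetical placeholders rather than any particular vendor’s API.

# A minimal illustrative agent loop: plan -> act -> observe, repeated until done.
# Every function here is a hypothetical stand-in, not a real product API.

def search_case_law(query: str) -> str:
    """Hypothetical tool: query a legal database for candidate authorities."""
    return f"Authorities matching '{query}'"

def draft_response(context: str) -> str:
    """Hypothetical tool: generate a tailored draft from retrieved material."""
    return f"Draft grounded in: {context}"

TOOLS = {"search_case_law": search_case_law, "draft_response": draft_response}

def llm_plan(goal: str, history: list[str]) -> tuple[str, str]:
    """Stand-in for an LLM planning call that picks the next tool and its input.
    A real agent would prompt an LLM with the goal, the tool descriptions and
    the observations so far; here the two-step plan is hard-coded."""
    if not history:
        return "search_case_law", goal
    if len(history) == 1:
        return "draft_response", history[-1]
    return "done", ""

def run_agent(goal: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):  # cap steps so the agent cannot loop forever
        tool_name, tool_input = llm_plan(goal, history)
        if tool_name == "done":
            break
        observation = TOOLS[tool_name](tool_input)  # act, then observe
        history.append(observation)
    return history[-1] if history else ""

print(run_agent("limitation periods for breach of contract claims"))

The key difference from a single generative AI call is the loop: each action the agent takes depends on what it observed from the previous one, which is what allows it to complete a workflow rather than a single response.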

AI Legal Use Case
A practical and generally low-risk use case for AI agents within legal is tracking spend and reconciliation in the context of large-scale litigation. Consider a scenario where a legal team is managing a high-stakes dispute involving substantial external legal spend, including fees from external law firms, King’s Counsel (KC), and expert witnesses. An AI agent could be deployed to autonomously extract spend data from the organisation’s financial management systems, such as SAP or Oracle, and cross-reference it against third-party invoices received from counsel and experts. The agent would identify and align key data points, such as billed hours, rates, expense categories and engagement periods, and then flag anomalies such as duplicate charges, inconsistencies in rates, or out-of-scope line items. It could also assess whether the incurred costs remain within the agreed litigation budget or cost cap, highlighting any overspend or deviation from financial expectations in real time. Because the underlying data – from finance platforms and structured invoice formats – is typically well-governed and auditable, the outputs of the AI agent are relatively easy for legal operations or finance teams to verify. This makes it a safe, efficient and highly actionable starting point for introducing AI agents into legal workflows, freeing teams from manual reconciliations and allowing for more strategic oversight of legal spend.
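For illustration, a simplified sketch of the reconciliation step appears below, assuming the spend data and invoice line items have already been exported as structured records. The field names, rate cap and budget figures are invented for the example and do not reflect any real SAP or Oracle schema.

# Illustrative reconciliation of recorded spend against invoice line items.
# Field names, caps and amounts are assumptions, not a real finance schema.
from collections import Counter

ledger = [  # spend as recorded in the finance system
    {"invoice_no": "INV-001", "matter_id": "M-42", "amount": 12_000.00},
    {"invoice_no": "INV-002", "matter_id": "M-42", "amount": 8_500.00},
]
invoices = [  # line items received from counsel and experts
    {"invoice_no": "INV-001", "hours": 30, "rate": 400.00},
    {"invoice_no": "INV-002", "hours": 20, "rate": 425.00},
]
AGREED_RATE_CAP = 410.00  # assumed negotiated hourly rate cap
BUDGET_CAP = 25_000.00    # assumed litigation cost cap for the matter

flags = []

# 1. Duplicate charges: the same invoice number recorded more than once.
counts = Counter(row["invoice_no"] for row in ledger)
flags += [f"duplicate charge: {no}" for no, n in counts.items() if n > 1]

# 2. Rate inconsistencies: line items billed above the agreed rate cap.
flags += [
    f"rate above cap on {li['invoice_no']}: {li['rate']:.2f}"
    for li in invoices
    if li["rate"] > AGREED_RATE_CAP
]

# 3. Budget check: total recorded spend against the agreed cost cap.
total_spend = sum(row["amount"] for row in ledger)
if total_spend > BUDGET_CAP:
    flags.append(f"overspend: {total_spend:.2f} exceeds cap {BUDGET_CAP:.2f}")

for flag in flags:
    print(flag)  # in practice, routed to legal operations or finance for review

Because every flag traces back to a specific ledger row or invoice line, a reviewer can verify the agent’s output quickly, which is precisely what makes this a low-risk starting point.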

Agentic AI
Agentic AI represents a more advanced and autonomous evolution of AI – where systems don’t merely carry out predefined tasks but can independently initiate, prioritise, and pursue complex goals. Much like how the transformer architecture enabled GenAI to shift from narrow outputs to multi-purpose capability, agentic AI builds on this by introducing autonomy and coordination across tasks. It marks a shift from single-task execution to proactive, multi-step orchestration. Rather than relying on a single AI agent, agentic AI typically involves a network of agents, each responsible for a specific part of a broader process, all coordinated by an intelligent orchestrator. Think of the orchestrator as the lead Partner managing a large corporate reorganisation. While the corporate team often takes the lead, success hinges on the seamless coordination of multiple specialised workstreams – such as data privacy, employment, tax, and regulatory – each handled by dedicated legal experts. The orchestrator in an agentic AI system operates in much the same way: it understands the overarching goal, breaks it down into discrete tasks, assigns them to the appropriate AI agents, monitors progress, and dynamically adjusts the plan in response to new developments – just as a Partner might respond to a change in deal structure, stakeholder feedback, or late-stage due diligence issues. Whereas traditional AI agents operate more like junior associates following explicit instructions, agentic AI systems more closely resemble a well-coordinated legal team, capable of reasoning through what needs to be done, adapting priorities, and executing end-to-end legal workflows with minimal supervision. This evolution opens the door to automating sophisticated legal processes that previously required ongoing human coordination, oversight, and strategic input.
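The orchestration pattern can be sketched in a few lines of Python. The specialist ‘agents’ below are plain functions and the orchestrator’s plan is hard-coded; in a real agentic system each would be an LLM-backed agent and the plan would be generated dynamically. All names are illustrative.

# Illustrative orchestrator: break a goal into workstreams, dispatch each to a
# specialist agent, and collect the results. All names are hypothetical.

def data_privacy_agent(task: str) -> str:
    return f"[privacy] reviewed: {task}"

def employment_agent(task: str) -> str:
    return f"[employment] reviewed: {task}"

def tax_agent(task: str) -> str:
    return f"[tax] reviewed: {task}"

AGENTS = {
    "data_privacy": data_privacy_agent,
    "employment": employment_agent,
    "tax": tax_agent,
}

def decompose(goal: str) -> dict[str, str]:
    """Stand-in for the orchestrator's planning step. A real orchestrator
    would use an LLM to break the goal into workstreams; here it is fixed."""
    return {
        "data_privacy": f"employee data transfers in: {goal}",
        "employment": f"workforce implications of: {goal}",
        "tax": f"tax structuring of: {goal}",
    }

def orchestrate(goal: str) -> dict[str, str]:
    results = {}
    for workstream, task in decompose(goal).items():
        results[workstream] = AGENTS[workstream](task)  # dispatch to specialist
    # A fuller orchestrator would monitor progress and re-plan here if any
    # workstream surfaced an issue, much as a lead Partner adjusts the plan.
    return results

for stream, outcome in orchestrate("corporate reorganisation of a UK group").items():
    print(stream, "->", outcome)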

Is legal ready?

Realistically, is Big Law ready for AI agents? While the potential is immense, the path to realising it is far from simple. AI agents are not ‘plug-and-play’ tools; they must be purpose-built to interact with firm-specific systems, workflows and, of course, data. Deploying AI agents demands well-structured, clean data, along with non-trivial engineering and integration work to enable agents to perform tasks autonomously across different platforms. In theory, law firms already possess some valuable structured data assets – contract management systems with clause-level tagging, deal bibles, litigation databases with matter-specific data, and timekeeping systems with uniform phase and task codes. These sources could serve as the foundation for enabling AI agents to perform tasks such as automated document retrieval, clause comparison, or triggering workflows. However, in practice, data within law firms is often fragmented, inconsistently maintained, and heavily dependent on manual processes. This presents a significant obstacle to AI agents, whose effectiveness hinges on access to high-quality, machine-readable data. Unlike in other industries, where roles such as data stewards, information architects, and data governance specialists are commonplace, law firms have yet to widely adopt such functions. Without dedicated resources to oversee data quality, structure and accessibility, firms may struggle to operationalise AI agents beyond narrow, controlled environments. As a result, for many firms, the juice may not yet be worth the squeeze. However, for forward-looking firms that are investing in digital transformation, embedding data governance, and treating know-how as a knowledge asset, AI agents offer a compelling opportunity to move beyond point solutions towards more advanced, workflow-integrated legal delivery.

Legal Risks
Along with the above hurdles to implementation, the deployment of AI agents introduces significant legal and regulatory considerations to an even greater extent than traditional generative AI tools. The autonomy of AI agents, combined with reduced human oversight, means that it is increasingly important to consider safeguards and strategies to reduce legal risks. This includes the development of robust security measures designed to prevent exploitation and ensure safe, predictable interactions.

Ensuring data privacy remains a key concern, particularly considering the varied, stringent privacy and data protection laws around the world. AI agents are likely to be granted access to a wider array of systems and databases, thereby increasing the risk of potential breaches of data protection obligations. Users of AI agents must be vigilant in maintaining strict access controls to minimise the likelihood of unauthorised access to information, including personal data. Similarly, access to multiple systems also heightens the risk of cybersecurity attacks. Continuous monitoring of systems is essential, along with regular updates and patches to address emerging threats and vulnerabilities. Comprehensive incident response plans should be established and tested regularly, and include clearly defined obligations to ensure accountability and swift action in the event of a cybersecurity incident.

From a contracting perspective, using AI agents presents additional risks, given they may act as agents on behalf of the user. This creates additional liabilities, particularly where AI agents are procured from third-party providers. It is imperative to address these risks early in the negotiation process, as contractual measures can help mitigate associated liabilities. For example, contracts could include indemnification clauses to cover scenarios where an AI agent operates outside the scope of its defined role and responsibilities.

Regulation in the AI space continues to present a complex and evolving challenge in relation to AI agents. One major difficulty lies in how AI agents, and more broadly (as above) agentic workflows, fit within existing regulatory frameworks. For example, under instruments like the EU AI Act, the term “AI system” is vital in determining whether a technology falls within scope. AI agents often consist of multiple components, only some of which may be AI-enabled, raising a fundamental question: should regulation focus solely on the AI-enabled steps, or consider the entire multi-step agentic workflow? This distinction matters because in many cases risk does not sit neatly within the AI component alone. Instead, it can accumulate across the broader workflow, including the non-AI steps, leading to what is referred to as the ‘snowball’ effect of risk. If regulators define the boundary too narrowly, they may miss critical points of failure. If they define it too broadly, it could introduce legal ambiguity and overreach, particularly in how responsibilities are assigned across interconnected systems. Defining what constitutes an ‘AI system’ and interpreting terms like ‘autonomy’ and ‘adaptiveness’ will likely become contentious issues as regulation seeks to catch up with the real-world use of AI agents.

For legal teams and in-house teams looking to adopt AI agents, these grey areas underscore the importance of not only focusing on technical and data readiness, but also anticipating future compliance needs. Legal teams will need to consider how to track, explain and audit agentic behaviour across entire workflows, especially when outputs rely on sequences of both AI and non-AI components. In this environment, proactive data governance, clear documentation and a cautious approach to deployment will be key to balancing innovation with accountability.

Balancing excitement with caution
The potential of AI agents is undeniably exciting, and the opportunities they present are vast. They can optimise the way legal teams operate by automating routine, administration-heavy tasks, allowing lawyers to focus on more complex work that requires human judgement, empathy, and collaboration. However, optimism needs a touch of caution. Technology is only as good as the data behind it; without clean, structured, and accessible data, agents won’t perform as promised. For most teams, this will be the greatest hurdle – and the biggest opportunity. The wise approach is to begin with small, low-risk areas where the challenges are tangible and the data is reliable. This strategy enables teams to learn, adapt, and gradually scale up to more ambitious applications. It’s not about chasing hype – it’s about addressing real problems, one task at a time. It is also vital to remain mindful of ethical implications and biases in AI, ensuring transparency, accountability, and fairness to maintain trust in legal practice. By balancing excitement with caution, legal teams can harness the true potential of AI agents whilst mitigating risk and ensuring responsible use.

Remember, with great power comes great responsibility – and a lot of paperwork!


Amanda Chaboryk is the Head of Legal Data and Systems within Operate at PricewaterhouseCoopers (PwC) in London, where she leads the operational delivery of managed legal programmes. Within her role she is also responsible for supporting clients and colleagues in navigating emerging technologies, such as GenAI.

Chris Cartmell is a solicitor and leads the Legal Digital & Data team at PwC UK. He advises on cyber security, digital regulations and data protection, and is a member of PwC’s EMEA Privacy Senior Leaders Team. Chris is qualified in England & Wales and Hong Kong.

Stephanie Baker is an Australian-qualified solicitor within the Legal Digital & Data team at PwC UK. She specialises in data protection and wider regulatory compliance, with a keen interest in emerging digital regulation.