Notes From the Jagged Frontier: Agentic AI, Coding and the Law

October 16, 2025

James Phoenix, a litigator, shares insights from a six-month sojourn into agentic GenAI software development.

Imagine, if you will, a trainee, but a rather unusual one: their output needs careful review and they never show up to meetings, but they are indefatigable; they don’t get ill and they are ready to work on any task for you, day or night. Now imagine they have an encyclopaedic (if sometimes outdated) knowledge of the law of every major jurisdiction, across specialisms and sectors.

Oh, and while we are at it, let’s imagine this trainee produces work product faster than you ever could, free of typos; if they were physically typing, the friction would melt the keyboard.

How would one supervise such a trainee? Well, obviously, you’d have a window into their thought processes; you would watch them plan out and prepare work product in real time, and you could intervene to correct them at any point, without precipitating burn-out.

Then, if their thought process is satisfactory, you can just leave them to work independently, safe in the knowledge that they will come back to you in minutes or at most hours with work product that (with appropriate instructions) is frequently 80-100% correct, including a clear audit trail. 

Finally, this trainee’s “salary” is somewhere between $20 and $200 per month, and you can have as many as you can afford, working non-stop in parallel or even directing each other.

In a legal context that may sound far-fetched, but if we shift domains it is the daily reality, right now, of how software developers are working with the latest agentic GenAI coding tools like Claude Code.

I’m a litigator with no programming qualifications or training, but I’ve been using Claude Code daily to develop a GenAI tool. The tool uses Linklaters’ in-house GenAI chatbot Laila to review and amend thousands of fee-earner narratives in minutes for common issues, with users able to quickly vet Laila’s suggestions. Results from the summer pilot indicate the tool can cut the time lawyers spend on non-billable WIP review by more than 60%. I built the tool using Laila and other “normal” chatbots, demonstrated it at last year’s SCL AI Conference and spent the last six months on an internal secondment to get it enterprise-ready for its successful firm-wide launch on 1 September.

Now, of course, the code I produce with Claude Code is audited by experienced software engineers and tested before it goes live. However, the consensus has been that, while Claude Code’s output is certainly not perfect, it frequently produces code in minutes that needs only a polish to be ready for deployment.

By all rights this shouldn’t work. My risk-averse litigator’s brain knows I’ve no business moonlighting as a programmer for Linklaters without any software engineering qualifications. Yet Claude Code is such a productivity step-change that I can sensibly operate as junior developer, product owner and target audience. Honestly? It feels like finding a cheat-code.

So, we’ve ticked off the “computers” bit of this SCL article, but what about the “law” bit? Well, while Claude Code is focused on code and OpenAI’s recently released agent tool is focused on general desktop tasks, both are harbingers of what we can likely expect from agentic legal GenAI applications in the near future.

Rather than sequential exchanges with chatbots, users will increasingly be architects and orchestrators, instructing GenAI agents capable of autonomously planning and fulfilling an increasingly broad range of complex tasks to create human-equivalent output in minutes. Current “deep research” functionality is one early example of that workflow.

So why aren’t we seeing truly agentic legal-specific GenAI tools yet? Firstly, coding languages and key programming libraries are widely accessible (to people and GenAI models) and natively standardised. The same cannot currently be said of the case law, legislation, precedents and other (often confidential and/or privileged) documents such agents would need to take on key legal tasks.

That matters because “tool-calling” is a key functionality of agentic GenAI; the ability to use “tools” like file search and amendment, running a web-search or executing commands enables solutions like Claude Code to actually “do things”, in contrast to traditional prompt/response chatbots. Such tool usage requires interoperability between different programs and services. In a software development context such agentic interoperability is increasingly well-advanced; in a legal context it is a rarity for different programs, databases and services to be able to efficiently “speak” to each other. This appears unlikely to change in the short-term, particularly given key legal-specific knowledge platforms are advancing their own GenAI solutions.
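To make the tool-calling idea concrete for non-developers, here is a deliberately toy sketch of the loop described above: the model requests a tool, the surrounding harness (not the model) executes it, and the result is fed back so the agent can take its next step. All names, tools and data here are hypothetical illustrations, not the workings of Claude Code or any real product.

```python
# Toy illustration of an agentic "tool-calling" loop.
# The tools and requests below are invented for demonstration only.

# The harness exposes a small catalogue of tools the model may invoke.
TOOLS = {
    "search_files": lambda query: [
        f for f in ["clause_7.txt", "schedule_2.txt"] if query in f
    ],
    "read_file": lambda name: f"(contents of {name})",
}

def run_agent(tool_requests):
    """Execute each requested tool call and log the result.

    In a real agentic system the model emits one request at a time,
    inspects the result, and decides its next step; here the requests
    are pre-scripted to keep the example self-contained.
    """
    transcript = []
    for tool_name, argument in tool_requests:
        result = TOOLS[tool_name](argument)  # the harness runs the tool
        transcript.append((tool_name, argument, result))
    return transcript

# A traditional chatbot could only *describe* these steps;
# an agent actually performs them:
steps = [("search_files", "clause"), ("read_file", "clause_7.txt")]
log = run_agent(steps)
```

The point of the sketch is the division of labour: the model decides *what* to do, while the harness does the doing, and that handover is exactly where legal systems currently lack the interoperability the article describes.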

Such fragmentation is a structural blocker to effective agentic legal AI; no matter how smart the model, if it can’t access the tools and information a human can then its impact will be circumscribed.

Another key distinction is that code, unlike law, is empirically testable: a given function either compiles and behaves as expected under test, or it fails, and it fails fast. By contrast a given clause or section of legislation can sometimes take years of work and millions of pounds to “debug” through the court systems. That is not to say debugging programs is trivial, but rather that the basic unit of code is significantly more amenable to agentic automated review and empirical testing than any “basic unit” of law.
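By way of a toy example of that testability (the function and figures are invented for illustration, not drawn from any real tool or rate): a few lines of code can be checked automatically, in milliseconds, on every change, whereas a contractual clause has no equivalent automated oracle.

```python
# Hypothetical example: a code unit can be verified empirically,
# and an agent (or a CI pipeline) can re-run such checks constantly.

def late_payment_interest(principal, days_late, annual_rate=0.08):
    """Simple interest accrued on a late payment (illustrative only)."""
    return round(principal * annual_rate * days_late / 365, 2)

# These checks pass or fail instantly - the "debugging" takes
# milliseconds, not years in the court system:
assert late_payment_interest(10_000, 0) == 0.0
assert late_payment_interest(10_000, 365) == 800.0
```

Whether the *clause* such a function mirrors is enforceable remains a question only lawyers, and ultimately courts, can answer; the code itself, though, is amenable to exactly the automated review the article describes.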

Finally, companies like Anthropic and OpenAI have clear incentives to “bootstrap” GenAI progress by having increasingly autonomous agentic models engage in recursive self-improvement to beget further model improvements – an ouroboros-like feedback loop considered critical to attaining artificial general intelligence. The major model developers have no such incentive to develop legal-specific agentic AI.

All that being said, we have already seen growing adoption of legal-specific GenAI tools, like Legora or Harvey, that trail and build upon the leading general-purpose GenAI models. Given the extraordinary growth of Claude Code in particular (with a reported user-base increase of 300% from May to July) and the associated potential productivity gains, it is likely to be only a matter of time before increasingly agentic legal AI solutions are deployed.

There is already a great deal of energy, money and thought being expended, both in the legal sector and more generally, to integrate current GenAI tools that fundamentally operate on the same prompt/response paradigm as models like GPT-4, which was released over two years ago. Even if frontier AI research were to stop today, it would likely still take years to fully exploit current model capabilities, let alone their imminent agentic capabilities.

The adoption S-curve for current GenAI tools already raises acute or even existential questions for the business of law. Can the billable hour survive? What does GenAI mean for the role, experience of, and demand for, trainees and junior associates, and future talent pipelines in turn? How will highly capable and widely available GenAI legal tools shift the balance of work between in-house lawyers and private practice? Buy or build?

I don’t have the answers to those questions; I doubt there are definitive answers at this stage. However, looking back from the “jagged frontier” of truly agentic coding GenAI tools I feel pretty confident in saying this: as a sector, law is barely getting started moving up that AI adoption S-curve, and it’s not levelling off any time soon.

As for that unusual trainee? They’re on their way; they’re just doing a stint of programming work experience before starting their training contract.

James Phoenix is a Managing Associate in Litigation, Arbitration and Investigations, specialising in investigations, commercial litigation and GenAI. He is also a commercial litigator with experience in first instance, Competition Appeal Tribunal, Court of Appeal and Supreme Court proceedings.

This article is also available in the special AI issue of Computers & Law, which is available to download here.