AI: opening the door to justice

August 16, 2023

The use of artificial intelligence in the law has become a subject of intense discussion. In early July, Richard Susskind, President of the Society for Computers and Law, provided his thoughts, including: “Lawyers and the media would do well to ask more often what generative AI means for access to justice and for the client community generally.”

Access to justice in the UK remains inconsistent or, for some, elusive. While many barriers are related to money, such as the high cost of living, legal fees, and the scarcity of legal aid, there are wider hurdles affecting citizens not just in terms of the justice system but in their everyday interactions, such as understanding consumer rights or welfare benefits.

AI provides an unprecedented opportunity for today’s legal profession to improve accessibility to justice. But, if we don’t act, there is a real risk that inequality could grow, making justice harder to reach for many.

Overview of proposed solution

A solution is available right here and now which could transform access to justice. By combining the power of AI with the integrity of UK law, we can create a national legal ecosystem that not only respects but actively promotes the rights and interests of all its citizens.

The proposed solution isn’t accessing ChatGPT 4, Google Bard or another advanced AI language model through a web browser: it would use a customised large language model within an environment where data such as case law, codes of practice and guidance have been uploaded and embedded. The LLM is developed with the citizen, not the legal professional, as the predominant user.

Picture a scenario where the citizen asks questions in natural language and the LLM answers. The output may not always be perfect, but it’s certainly good enough, and as Susskind says: “The pandemic has helped us recognise that good enough is frequently good enough and certainly better than nothing” (Tomorrow’s Lawyers (3rd ed, 2023), 30).

Access to justice and open domain content

At the heart of any legal system lies the principle of ensuring access to justice so that all citizens, irrespective of their social or economic status, can protect their rights and seek redress. It’s a cornerstone of democratic societies, vital for maintaining social order and harmony. Yet, despite best intentions, real-world hurdles underscore the necessity for innovation to strengthen the system’s capacity to serve all citizens equitably.

When we think about information technology, we often focus on the “technology” element, such as devices, software and the internet. These technologies are the tangible manifestations that have revolutionised our world and day-to-day lives.

The “information” element of IT can be overlooked. Digital technologies are essentially tools for creating, storing, manipulating and transmitting information. Without information, these technologies would be useless.

We have a wealth of public domain legal resources such as codes of practice, case law and government guidance containing information that can answer many, if not most, legal questions. Much of this information has been painstakingly crafted and produced with citizen collaboration, and paid for through taxes, but various challenges can restrict its full understanding and use. Some essential resources are trapped behind paywalls, and legal jargon can intimidate and confuse those without a legal background.

AI has the potential to greatly improve the accessibility and comprehension of UK law. Technologies such as natural language processing can analyse and simplify legal texts, breaking down complex language to offer clear insights into legal matters. AI tools can convert legal terms into everyday language, making it simpler for people to understand their rights, responsibilities, and legal position. Moreover, these tools can provide personalised help for people navigating the maze of law, delivering advice specific to their situation and allowing non-technical queries to be interpreted.

Legal profession’s concerns

Many legal professionals have expressed reservations about the integration of AI into the legal sector.

The term “hallucinations” refers to instances where AI generates output that seems coherent but is incorrect. One notable instance involved New York attorney Steven A Schwartz, whose use of ChatGPT for legal research led him to cite six fabricated cases generated by the AI in a legal brief. Facing potential sanctions, Schwartz confessed he wasn’t familiar with the workings of the chatbot. He was fined $5,000. His case highlights the importance of understanding how AI works and the possible risks involved, particularly in the context of legal research.

Data privacy is another critical concern. The application of AI in law can involve processing sensitive and confidential data. Despite strict data protection protocols, there are concerns over potential data breaches or misuse of information by big tech.

Proposed solution

The proposed solution is not about deploying AI models like ChatGPT 4 directly, but about using their underlying technology in a customised LLM. This is a controlled environment that contains limited data. “Limited” does not refer to size: it means a defined dataset, as opposed to the vast, general-purpose body of text that informs ChatGPT 4.

Let’s think first about how LLMs work. Essentially, they are prediction machines: they take a sequence of words and predict the next word. They do not (and, many would argue, should not) act like humans. By predicting one word after another, they generate full sentences and paragraphs. These models are trained on large amounts of text data, learning the patterns and structures of language. Public domain legal data such as case law, codes of practice and government guidance can be used to train the customised LLM.
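
To make this concrete, here is a toy sketch in Python – not a real LLM, just an illustration of the prediction principle. It “trains” on a two-sentence corpus by counting which word follows which, then generates text one word at a time.

```python
# Toy illustration of next-word prediction (not a real LLM).
# "Training" = counting which word follows which in a tiny corpus;
# generation = repeatedly picking the most likely next word.
from collections import Counter, defaultdict

corpus = (
    "a consumer has the right to reject faulty goods . "
    "a consumer has the right to reject faulty services ."
).split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1          # learn the pattern "prev -> nxt"

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` during training."""
    return bigrams[word].most_common(1)[0][0]

# Generate one word at a time until the sentence ends.
word, sentence = "a", ["a"]
while word != ".":
    word = predict_next(word)
    sentence.append(word)

print(" ".join(sentence))  # a consumer has the right to reject faulty goods .
```

A real LLM does the same thing with a neural network trained over billions of words, which is why the scale and content of the training data matter so much.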

Next, an application programming interface (“API”) is used to interact with the model, using backend prompts and “temperature settings”.

Backend prompts are essentially instructions given to the model to guide its responses. You can think of them as whispering in the model’s ear to ask it to generate text in a certain way. The customised LLM uses the prompt as a starting point. Prompts have huge potential to enhance access to justice: you can create a persona optimised to be a helpful expert who is skilled at providing accessible advice and asking questions that help the user understand their legal standing. Prompts can also be used to answer in the user’s choice of language.
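
As an illustration, a backend prompt is simply a hidden message sent to the model ahead of the user’s question. This minimal sketch uses OpenAI’s Python library (the ChatCompletion interface current at the time of writing) and assumes an API key is configured; the persona wording is illustrative, not a production prompt.

```python
import openai  # pip install openai; assumes OPENAI_API_KEY is set

# The backend ("system") prompt is invisible to the citizen: it fixes the
# model's persona and behaviour before the user's question ever arrives.
BACKEND_PROMPT = (
    "You are a helpful expert skilled at providing accessible legal "
    "guidance for UK citizens. Answer in plain English, ask follow-up "
    "questions that help the user understand their legal standing, and "
    "reply in the language the user writes in."
)

def answer(user_question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": BACKEND_PROMPT},  # hidden instruction
            {"role": "user", "content": user_question},     # the citizen's query
        ],
    )
    return response["choices"][0]["message"]["content"]

print(answer("My landlord has kept my deposit. What are my rights?"))
```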

Temperature settings, which control the randomness of the AI model’s responses, are another key factor in the quality of the customised model. A “high” temperature makes responses more random, while a “low” temperature makes them more deterministic or focused. By adjusting these settings, we can make the AI tool’s responses more relevant and reliable.
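
Continuing the sketch above, temperature is a single parameter on the same API call; the values shown here are illustrative.

```python
import openai  # as in the sketch above

QUESTION = [{"role": "user", "content": "Summarise my right to a refund."}]

# Low temperature: focused, repeatable answers, suited to legal guidance.
focused = openai.ChatCompletion.create(
    model="gpt-4", messages=QUESTION, temperature=0.2,
)

# High temperature: more varied answers, useful for brainstorming but
# risky for advice that must be reliable.
varied = openai.ChatCompletion.create(
    model="gpt-4", messages=QUESTION, temperature=1.2,
)
```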

A customised LLM can funnel queries through indices to pertinent data so that, for example, a consumer question is mapped to consumer rights data and a question about welfare benefits is channelled through to the appropriate guidance.
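
A production system would typically route queries by embedding similarity; this toy keyword router just shows the funnelling principle, and the categories and keywords are invented for the example.

```python
# Toy query router: map a question to the index of legal materials most
# likely to answer it. Categories and keywords are illustrative only.
ROUTES = {
    "consumer rights":  ["refund", "faulty", "retailer", "warranty", "goods"],
    "welfare benefits": ["benefit", "universal credit", "allowance", "pip"],
    "employment":       ["dismissal", "redundancy", "employer", "discrimination"],
}

def route(question: str) -> str:
    """Pick the data index whose keywords best match the question."""
    q = question.lower()
    scores = {index: sum(kw in q for kw in kws) for index, kws in ROUTES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general guidance"

print(route("The retailer refuses a refund for faulty goods"))  # consumer rights
print(route("Am I eligible for universal credit?"))             # welfare benefits
```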

The model can also be hosted securely (for example on Microsoft Azure or Amazon Web Services), which helps mitigate data privacy concerns. It could be hosted on trusted websites such as gov.uk or mygov.scot and signposted as a resource for access to justice.

A customised LLM is not as costly as you might think. The LLM framework is open source (free), and the API uses a token-count charging model, where around 1,500 words costs $0.09. The highest cost is hosting the database: as an example, one hosting service charges $79 per month to embed approximately 2,000 pdf pages. There is no token charge for analysis of data.
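
To put rough numbers on that, here is a back-of-the-envelope costing using the figures quoted above; the query length and monthly volume are assumptions for illustration only.

```python
# Back-of-the-envelope costing from the figures quoted above.
COST_PER_1500_WORDS = 0.09   # USD, API token charge quoted above
HOSTING_PER_MONTH = 79.00    # USD, example hosting for ~2,000 pdf pages

words_per_query = 600        # question + answer combined (assumption)
queries_per_month = 10_000   # illustrative usage level (assumption)

api_cost = queries_per_month * words_per_query / 1500 * COST_PER_1500_WORDS
total = api_cost + HOSTING_PER_MONTH
print(f"API: ${api_cost:,.2f}/month; with hosting: ${total:,.2f}/month")
# API: $360.00/month; with hosting: $439.00/month
```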

The contrast between the use of general LLMs such as ChatGPT 4 and a customised version using an API such as OpenAI’s gpt-4 can be seen in the table below.

General LLM (e.g. ChatGPT 4) vs customised LLM

Description
  General LLM: A highly intelligent and versatile model capable of writing about nearly any topic.
  Customised LLM: A specialised model that has been trained on a specific topic or type of data.

Strengths
  General LLM: Extremely versatile and capable of generating coherent and credible text from a wide variety of prompts.
  Customised LLM: Because it has been “taught” certain information, it can write about a specific subject with more nuanced knowledge.

Weaknesses
  General LLM: While it can generate text on a wide array of topics, it may lack specific, nuanced knowledge on specialised topics.
  Customised LLM: While it has specialised knowledge, it might not be as effective outside its trained domain.

Best used for
  General LLM: General purposes where a wide array of topics needs to be covered, such as answering varied questions, creating diverse content or general conversation.
  Customised LLM: Situations where specific knowledge is required, such as answering complex questions about a specific subject.

Backend prompts
  General LLM: Can use backend prompts to guide its responses and adjust to different styles or topics, but end users may be unaware of this ability, and competency in prompt engineering varies.
  Customised LLM: Can use backend prompts specific to its trained domain, allowing more specialised responses; developers can use these prompts to fine-tune the model’s behaviour.

Temperature setting
  General LLM: End users may be unaware of the ability to adjust the temperature to control the randomness of responses.
  Customised LLM: The developer can control the temperature based on the specific needs of the application.

Hosting
  General LLM: Hosted by the owner of the model, e.g. on OpenAI’s servers.
  Customised LLM: Can be hosted securely on trusted servers, giving developers more control over data security and privacy.

Quality testing
  General LLM: Testing and quality control are primarily managed by the AI provider.
  Customised LLM: Developers can conduct their own quality testing, ensuring the model’s performance meets their specific needs and standards.

Data processing control
  General LLM: Primarily in the hands of the AI provider.
  Customised LLM: Developers have greater control over data processing, with more flexibility to handle data in a way that aligns with their requirements.

Hallucinations
  General LLM: Can generate “hallucinations”: plausible-sounding but inaccurate or nonsensical statements.
  Customised LLM: A reduced tendency to hallucinate due to specialised training, though not entirely immune.

Cost
  General LLM: Free versions available; users can pay for a premium tier (currently $20 a month for ChatGPT 4).
  Customised LLM: Variable; depends on token usage and size of data.

Use case example

I input identical queries into ChatGPT 4 and a customised LLM trained on the Equality & Human Rights Commission’s Code of Practice on Employment and Code of Practice on Services, Public Functions and Associations (a total of 577 pdf pages). While generally helpful and empathetic, the ChatGPT 4 response is unclear on jurisdiction, refers to outdated legislation and does not address legal standing. The customised and trained model (also empathetic) is more specific on these points and broadly able to meet the needs of the citizen.

Risks – action or inaction

All innovation carries risks, and in this context the most important include ethical considerations, data privacy issues, and the risk of relying too heavily on AI for legal advice.

We must seek assurance that the developers of and contributors to such systems act in good faith and serve the best interests of users. It’s crucial to ensure that AI doesn’t perpetuate biases or unfair practices, and that it respects the principle of equality under the law.

On the data privacy front, processing of user data must be restricted solely to what is needed to improve quality. Stringent data privacy and security measures could be implemented to prevent unauthorised access or misuse of user data. Users should be informed about how their data will be used and protected, to support transparency and build trust.

There’s also the issue of risk perception. AI access to justice tools should provide enough information to empower users to make informed decisions about their use. Users need to understand that while AI can provide guidance and insights, it doesn’t replace the expertise of a human lawyer.

However, there’s another risk that often goes unmentioned: the risk of inaction. If we don’t seize the opportunity to leverage AI for enhancing access to justice, others might. Large tech corporations and established legal service providers could use AI to further profit, potentially leading to more inequality in access to justice. There are wider issues here about law as a commodity, and varying levels of comfort about global legal tech monopolising what is truly our native law.

If we don’t act, there could be a future where AI-driven legal services are primarily available to those who can afford them. The AI value add-on could increase costs for all legal service users, including those in the not-for-profit sector who already struggle with rising costs and lower income. By proactively deploying access to justice AI in a responsible, ethical and user-focused way, the legal profession could help avoid the worst-case scenario where the justice gap increases.

AI as a complementary tool

We don’t know yet what the impact of newer generation AI will be on the legal profession. Given that the profession is centred around knowledge, knowledge is data, and data is the main fuel for AI, the impact could be transformative. At this early stage, perhaps we could consider AI as a form of triage for legal advice. AI can deal with simpler, more common legal queries, providing immediate answers and guidance. More complex cases, or cases requiring human judgment and expertise, can then be escalated to human lawyers.
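
Purely as a sketch of the idea, the triage rule could start very simply; the escalation signals below are invented for the example, and a real system would use the model itself to assess complexity.

```python
# Toy triage rule: answer routine queries automatically, escalate
# anything with warning signs to a human lawyer. Signals are illustrative.
ESCALATION_SIGNALS = ["court", "deadline", "criminal", "eviction", "appeal"]

def triage(question: str) -> str:
    q = question.lower()
    if any(signal in q for signal in ESCALATION_SIGNALS):
        return "refer to a human lawyer"
    return "answer with the customised LLM"

print(triage("What is the deadline to appeal my eviction?"))  # refer to a human lawyer
print(triage("Can I return a faulty kettle?"))                # answer with the customised LLM
```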

This triage approach doesn’t just make legal advice more accessible: it could also free up time for lawyers to focus on more nuanced aspects of their work. This could be especially valuable for law centres or other non-profit legal services, allowing them to assist more clients and deal with more complex cases.

Call to action

AI has the potential to reshape the perception of the legal profession by making services more approachable and understandable, bridging the gap between the legal system and the public. We have a unique opportunity to boost access to justice.

The UK has the potential to be a leader in this technological transformation, leveraging AI and its rich legal data to enhance the legal system. This not only upholds the principles of law and justice but also brings remarkable benefits in accessibility, efficiency, and fairness.

These efforts could include:

  1. Technological audit and evaluation: establish a baseline – what tools are already being used; members’ use cases; a rapid review of international solutions to identify opportunities for growth and enhancement.
  2. Establish clear ethical guidelines: outline how AI can and cannot be used, addressing bias, data privacy, accountability, mitigation of user risk.
  3. Launch a pilot project: trial small scale pilot projects to address specific areas of access to justice, e.g. an AI assistant that advises users whether they are eligible for legal aid.
  4. Training and education: educate practitioners about the benefits, limitations, and ethical considerations of AI to improve understanding and support alignment with access to justice.
  5. Use results of the pilot to inform next steps. This could include exploring the possibility of obtaining government technology (GovTech) funding. GovTech refers to the application of innovative technology solutions designed to improve the efficiency, effectiveness and accessibility of public services.
  6. Consider whether the technological elements of professional codes remain relevant. For example, the second Law Society of Scotland Standard of Service states on Diligence: “With the increasing advancement of technology, it is expected that the solicitor will regularly look at ways in which technology can support client service. By way of example, this may include client reporting systems, file and data management systems and use of knowledge management systems.” These examples do not represent the increasing advancement of technology. Could the inclusion of technical skills as a competency support the profession in this transition? Compare the American Bar Association re Maintaining Competence: “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”

The final words must go to Richard and Daniel Susskind, in The Future of the Professions (updated edition, 2022), 412: “We now have the means to share expertise much more widely across our world. We should also have the will.”
