16th May 2024

Artificial Intelligence: The impact on the future of litigation

By Phillip Green

The growing adoption of semi-autonomous software, commonly known as Artificial Intelligence (AI), to support workplace processes is still a divisive topic. For its advocates, AI could be as impactful as the search engine was in the 1990s, and has the potential to completely transform the landscape of how we all work, generating greater efficiency and productivity, and simplifying the most onerous of tasks.

But for all the enthusiasm for what the AI revolution could bring, there are very real concerns about the risks that could come with it. Nowhere is this truer than in litigation, where the stakes are high and the consequences could be significant if AI is used in the wrong way.

Five focus areas of AI in litigation

Artificial Intelligence is an umbrella term describing computer systems, programmes and code capable of performing tasks that would usually require human intelligence to complete. What does and does not fall within the remit of AI is hotly contested. The AI Judicial Guidance, published in December 2023 and produced by a cross-jurisdictional judicial group, defines five key areas of focus from a litigation perspective:

  1. Generative AI – software capable of generating new content, such as new text, images or computer code. It can also include AI which is designed to take actions without human intervention.
  2. AI Chatbots – computer programmes designed to mimic or simulate human-like conversations using generative AI.
  3. Large Language Models (LLMs) – programmes designed to draw upon vast data sources to predict the most appropriate words to use in a sentence, giving them the ability to understand and respond to queries in a more realistic and human way.
  4. Machine Learning – AI systems which use data and algorithms to imitate how humans learn, improving their accuracy through feedback with every answer given.
  5. Technology Assisted Review (TAR) – AI tools used to assist with the review of large quantities of documents, and to search and identify potentially relevant documents for manual review (often used as part of the disclosure process).

How is Artificial Intelligence being used by lawyers?

Law firms have been relatively early adopters of AI tools, particularly for research and disclosure tasks. AI tools focused on technology assisted review have been available for several years and are an increasingly common part of disclosure review.

However, the prospect of relying on AI for research purposes is not without its risks. New York attorney Steven Schwartz found himself in a great deal of trouble in 2023 when he relied on ChatGPT to research precedents for a personal injury claim against an airline. Six of the seven cases he cited to the Court had been completely fabricated by the AI system. There are fears that it is only a matter of time before something similar happens in the UK.

The UK AI Judicial Guidelines state:

“All legal representatives are responsible for the material they put before the court/tribunal and have a professional obligation to ensure it is accurate and appropriate. Provided AI is used responsibly, there is no reason why a legal representative ought to refer to its use, but this is dependent upon context.

Until the legal profession becomes familiar with these new technologies, however, it may be necessary at times to remind individual lawyers of their obligations and confirm that they have independently verified the accuracy of any research or case citations that have been generated with the assistance of an AI chatbot.”

The use of AI by Clients, Unrepresented Litigants and Litigants in Person

AI tools are also increasingly being used by individuals seeking legal advice, whether to research the law before instructing lawyers or to ‘verify’ the advice given by their legal representatives. Some unrepresented litigants are using AI tools in place of legal representation altogether.

The latter is the more worrying trend, as it bypasses the expert guidance of a qualified legal specialist who would otherwise support and manage a lay person through the litigation process. This leaves individuals in a potentially vulnerable position when navigating litigation, which can be complex.

This includes lay litigants who believe they can run their own cases without the assistance of lawyers, supported purely by AI. The AI Judicial Guidelines make the following observations:

“AI chatbots are now being used by unrepresented litigants. They may be the only source of advice or assistance some litigants receive. Litigants rarely have the skills independently to verify legal information provided by AI chatbots and may not be aware that they are prone to error. If it appears an AI chatbot may have been used to prepare submissions or other documents, it is appropriate to inquire about this, and ask what checks for accuracy have been undertaken (if any).”

The use of AI by the Courts

In contrast to law firms and lay clients, the Courts have been slower and more restrained in incorporating AI tools into their work. However, this cautious approach looks set to change, particularly in relation to Court administration: His Majesty’s Courts and Tribunals Service (HMCTS) is reportedly funding a £1.3 billion reform programme aimed at modernising the Courts by digitising existing paper-based services and centralising administration processes. AI is expected to play a crucial role in this modernisation, with the aim of improving efficiency whilst simultaneously lowering operating costs.

The Judiciary has also generally taken a more conservative stance to the use of AI in aiding its work, which is perhaps less surprising given the crucial role Judges have in upholding the rule of law.

There are some exceptions. Sir Geoffrey Vos, the present Master of the Rolls, has been a keen advocate of the careful adoption of AI tools to assist the Judiciary. In a speech to the Law Society’s Dispute Resolution Conference in September 2023, Lord Justice Birss went further, admitting to having used ChatGPT when preparing a judgment:

“I thought I would try it. I asked ChatGPT ‘can you give me a summary of this area of law?’, and it gave me a paragraph. I know what the answer is because I was about to write a paragraph that said that, but it did it for me and I put it in my judgment. It’s there and it’s jolly useful”.

It should be said that most Judges are more cautious when it comes to using AI tools, but the examples above do show that change is perhaps on the horizon.

The AI Judicial Guidelines give the Judiciary the following advice:

“Judicial office holders are personally responsible for material which is produced in their name.

Judges are not generally obliged to describe the research or preparatory work which may have been done in order to produce a judgment. Provided these guidelines are appropriately followed, there is no reason why generative AI could not be a potentially useful secondary tool.

If clerks, judicial assistants, or other staff are using AI tools in the course of their work for you, you should discuss it with them to ensure they are using such tools appropriately and taking steps to mitigate any risks.”

Artificial Intelligence is here to stay, and its use will only grow. It is vital that this use is governed, especially within litigation. The AI Judicial Guidance confirms that the 2023 guidelines are only the first step in a planned series of work, which will be kept under review as the technology continues to develop. The group behind the guidance also intends to publish a Frequently Asked Questions document to support it, informed in part by questions submitted through a survey by the judiciary in both courts and tribunals.

Quick tips for using AI tools

  • Understand the AI tool you are using – know its advantages as well as its limitations.
  • Be wary of the data you are inputting into the AI tool – particularly in relation to sensitive information, client confidentiality and privacy.
  • Be alert to the risks of errors, fabrications, or even of bias within the AI programme.
  • Do not become over-reliant on the tool you are using. Ensure that there is a process for verifying the information provided, and take responsibility for your use of AI tools.

Hamlins has expertise in a variety of different litigation areas. We seek to obtain the best outcome possible for every client, no matter how big or small the issue may be. If you would like to find out more about how Hamlins can help you, please don’t hesitate to get in touch.
