
Updated Guidance on AI for Judicial Office Holders


Updated guidance has been published by the Courts and Tribunals Judiciary to assist judicial office holders in relation to the use of AI. The core purpose of the guidance is to ensure the proper use of AI by or on behalf of the judiciary, in order to uphold the judiciary’s core duty to safeguard the integrity of the administration of justice.

The full guidance can be found here. Whilst it is guidance, it is expected to be instructive as to how the courts view AI systems and their use more generally, or at least to provide a potential starting point. This is the second update to the guidance, reflecting the courts' continued monitoring of (potential or actual) AI use in the judicial system.

We summarise the key points below.

Understand AI and its applications

Prior to using any AI tool, judicial office holders are required to have a basic understanding of its capabilities, functions and limitations. Like searching the internet, AI tools can help confirm known information but are unreliable for undertaking deep research, analysis or finding new and verifiable information.

Public AI chatbots do not have access to authoritative databases and generate text based on patterns in their training data, which means outputs are predictions of likely text rather than necessarily accurate answers. Responses depend on prompt quality and the underlying datasets, which may contain outdated, biased, or incorrect data. Most LLM products are also trained on, or constrained by, material that is publicly available on the internet, with nuances and limitations as to how data is interpreted and processed.

Uphold confidentiality and privacy 

The guidance provides a strict prohibition on inputting private or confidential information into AI models. The assumption is that any information inputted into a public AI system should be regarded as having lost confidentiality. The guidance also requires that judicial office holders:

  1. disable AI chat history to prevent data being used to ‘train’ the AI model;
  2. refuse any app permissions that grant access to mobile device data when using AI platforms on smartphones. 

To the extent that the use of AI risks the sharing of confidential or personal data, office holders are required to notify their leadership judge and the Judicial Office, and to report any disclosure of personal data as a data incident using the official form.

Ensure accountability and accuracy

The guidance emphasises the need to validate any output provided by an AI tool prior to its use, given the risk that the information is inaccurate, incomplete, misleading, or outdated.

AI tools may “hallucinate”: there are infamous examples of LLMs inventing fictitious cases, citations and quotes, or referring to legislation and legal texts that do not exist. Outputs of AI models may also provide incorrect or misleading interpretations of the law or its application, and may make factual errors.

Be aware of bias

AI tools based on LLMs generate responses from training data, which means any output may reflect errors or biases present in that data, even if alignment strategies attempt to reduce them. Users should always be aware of this risk and take steps to verify and correct inaccuracies before relying on AI-generated information.

Take Responsibility 

Judicial office holders are personally responsible for any material produced in their name, and judges must always read the underlying documents themselves. The guidance emphasises that AI tools have a role in assisting with, but never replacing, judicial work.

Be aware that court/tribunal users may have used AI tools

AI tools have long been used in legal practice, for instance technology assisted review (TAR) and continuous active learning (CAL) in electronic disclosure, and are common in everyday applications like search engines, social media, and predictive text.

Lawyers remain fully responsible for the accuracy of any AI-generated material submitted to courts or tribunals. Whilst the current framework on the responsible use of AI does not require its use to be disclosed, the guidance requires that any AI-generated information be independently verified.

The guidance acknowledges the prevalence of unrepresented litigants relying on AI systems to prepare materials submitted to a judicial process, often without verification or validation. It places responsibility on judicial office holders to make appropriate inquiries as to whether LLMs were used in the preparation of documents and to remind litigants of their responsibility for that output.

Appropriate use cases of AI

Finally, the guidance provides examples of appropriate ‘use cases’ for AI by judicial office holders:

  1. AI tools offer practical support for routine legal tasks, such as summarising large volumes of text, drafting presentations, and handling administrative work like email management, meeting transcription, and memo preparation. These applications can improve efficiency, provided outputs are carefully reviewed for accuracy.
  2. However, caution is essential when using AI for substantive legal work. Current AI systems are unreliable for deeper legal research or analysis, as they may generate inaccurate or false information. 

If you would like to discuss how current or future regulations impact what you do with AI, please contact Brian Wong, Tom Whittaker, Lucy Pegler, Martin Cook, Liz Smith or any other member of our Technology team. For the latest on AI law and regulation, see our blog and newsletter.

This article was written by Zac Bourne and Tia Leader.