
Use of AI in preparing court documents – consultation


The Civil Justice Council working group has published a consultation on the use of AI in preparing court documents. The consultation closes on 14 April 2026.

The consultation summarises its proposals as:

in specific circumstances, and depending on what use of AI has been made, legal representatives involved in the preparation of court documents should be required to make a declaration relating to its use. … Essentially, [circumstances] in which AI has been used to generate evidence on which the court is being asked to rely. Other uses of AI, such as administrative uses for transcription, spell checking, assistance and the like do not require a declaration.

In this article, we summarise the key points from the consultation, including proposals related to witness statements, experts, and disclosure.

The consultation

The CJC's functions include keeping the civil justice system under review, and considering how to make the civil justice system more accessible, fair, and efficient. Its working group includes senior judiciary and representatives from the Law Society and Bar Council.

The purpose of the consultation paper is to consider whether procedural rules are needed to govern the use of AI by legal representatives in the preparation of court documents. It recognises and briefly identifies international examples reflecting differing approaches, ranging from jurisdictions with no upfront statement requirements and no prohibitions (Singapore) to more restrictive regimes.

Court documents are those created for the purpose of being provided to the court. These are primarily statements of case, skeleton arguments, witness statements, expert reports, and summaries.

The consultation is concerned with legal representatives and experts in the preparation of court documents. The paper notes that legal representatives are subject to regulatory obligations, as explained in Ayinde v Haringey [2025] EWHC 1383 (Admin) (see our summary here). It is not concerned with the use of AI by litigants in person or judges, with administrative tasks, or with the deliberate placing of false evidence before the court. Further, there are wider topics outside the working group's remit, such as AI use by litigants in person, which it suggests may be better considered separately.

The proposals seek to maintain a balance between “ensuring that the latest technology can be used to maximum advantage in the civil justice system in order to enhance access to justice by improving efficiency and reducing costs; while at the same time maintaining confidence in the rule of law”.

What is AI?

The definition proposed is specific to the consultation. The paper considers:

  • concern should be with “AI systems which are said to be able to perform tasks requiring ‘intelligence’, such as reasoning, problem solving and learning, particularly so-called generative AI, which generates text, images or videos based on inputs or prompts from users”;
  • it should not be concerned with “the use of AI or advanced technology which merely corrects spelling or grammar, provides transcription, operates as accessibility software, or assists with formatting and otherwise does not generate substantive content” (i.e. administrative uses);
  • there is an open question about legal research software using AI, including when the results of such use may find their way into skeleton arguments. Working group members hold differing views on this.

Proposals

The paper proposes:

Statements of case and skeletons

Provided the statement of case or skeleton bears the name of the legal representative taking professional responsibility for it, there is no need for any (further) rules relating to statements of case or skeletons produced with the assistance of AI.

An alternative would be to require a specific declaration to make clear if the legal representative has used AI in the preparation of the statement of case.

Disclosure

The paper does not propose introducing a requirement that disclosure lists/statements include a section addressing the extent to which AI tools/software have been used. However, it does recognise that the court will need to grapple with how parties seek to use AI, in particular generative AI, in disclosure.

Witness statements

The proposals differ depending on whether AI is used for statements in trial or otherwise, reflecting the different procedural rules that apply and whether a legal representative currently has to take professional responsibility for the preparation of the document.

For statements not for trial, the paper considers that, provided the document bears the name of the legal representative (or their firm) taking professional responsibility for its preparation, then there is no need for any further rules.

For statements for trial, in summary, these should be in the witness's own words and any legal representative assisting should not have asked leading questions. The paper proposes a rule requiring a declaration that AI has not been used to generate the content of such a statement (including by way of altering, embellishing, strengthening, diluting or rephrasing the witness’s evidence); this would be consistent with the aims of the Practice Direction and reinforce the importance of witness statements being in the witness’s own words.

For witness statements requiring translation, there is an open question about whether additional rules are required, for example, relating to the translator's responsibilities and the use of publicly available tools.

Experts

The paper notes there are circumstances where an expert may seek to rely on AI, such as for research, where it is difficult to see why disclosure should be required. However, the paper also notes the risk of erroneous evidence being put before the court.

The paper proposes a requirement that the expert explain what use of AI has been made, other than for transcription (or other administrative uses), and identify the AI tools used.

AI tool

An open question relevant throughout is whether the specific AI tool used should be identified, to help verify that the task has been carried out by a recognised and trusted AI model. This transparency may be beneficial, but it could also have downsides, such as implying that responsibility for the content rests with the AI tool rather than the legal representative.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Griffiths or any other member of our Technology team. For the latest on AI law and regulation, see our blog and newsletter.
