
How does the UK judiciary perceive AI for judicial work?


This is the topic of a recent academic qualitative study of “judicial perceptions of how the integration of AI into judicial systems might transform the way judges and legal professionals work”. It was based on a focus group of 12 UK judges: 5 members of the UK Supreme Court, 1 of the Court of Appeal, 5 of the High Court, and 1 of the County Court. The paper does not state this expressly, but the participants appear to have been drawn from a range of civil, family and criminal courts.

Whilst not representative of the judiciary as a whole, or of those courts, the study provides some insight into issues that judges may be considering.

Further, the aim of the paper is to take:

initial steps toward understanding the opportunities, user expectations and requirements for integrating AI into the administration of law in the UK

The purpose is not to look at the role of AI within other parts of litigation, such as how solicitors and barristers use AI (on which, see the recent case of Ayinde v Haringey).

Here we summarise the key findings, opportunities, concerns and considerations.

Findings

The focus group identified 'several areas where the human factor is critical and this should not be ignored when developing AI tools'.

  • Justice is rooted in human decision making and reasoning
    • The focus group considered those to be human capabilities, not those of AI. 
    • Further, judges in the higher courts appeared to want to write judgments themselves and did not expect AI to play a substantive role in judicial writing. However, judges in lower courts with a high workload saw potential for AI to play some role in improving efficiency.
  • The human component in justice holds value. For example, to provide emotional and psychological closure, and a sense of ‘dignity’. The human role may be particularly important in certain types of cases, such as those involving children.
  • The value of the human component is case-dependent and role-dependent. For example, the focus group expressed some caution about using AI to set out ‘facts’ in lower courts, but saw a greater role for AI on that front in the appeal courts, where the facts have already been established.
  • Decision making is collaborative with multiple points of view. The focus group thought that AI is unable to replace that social side of judicial work, for example, during judicial deliberation.

AI opportunities

The paper found that there ‘was a general sense of opportunity regarding the use of AI for various tasks within the judiciary, particularly to improve efficiency and access to justice’.

Several potential benefits were identified, including:

  • increased consistency, efficiency, access to justice;
  • improving information;
  • reducing bias, cost, and tedious work.

Several use cases were also identified:

  • drafting initial judgments or background sections;
  • helping with proofreading;
  • fully resolving some types of small cases;
  • supporting the identification of potential settlement amounts;
  • analysing documents more efficiently.

Judges also identified the potential for AI to enable tasks that might not otherwise be performed. Notably, some judges 'saw exciting potential for AI to “interrogate” recently digitalized legal data and provide insights into the overall state of the judicial system' - something we have done by analysing tens of thousands of High Court claims to identify trends in Public Sector litigation (see here).

Concerns and considerations

Understandably, 'many concerns and considerations were expressed that would need to be addressed before wide adoption of AI in judicial work':

  • Reliability is currently insufficient for legal information. Examples include case citation hallucinations, incorrect summarisations, errors in transcripts, and references to non-UK (mainly US) law.
  • Use of language is precise and critical in the work of judges - for example, understanding what others say and then expressing it in a judge's own words.
  • Privacy is a concern, albeit not specific to AI.
  • AI may develop misconceptions from uninformed user behaviour. For example, where an AI system is trained on previous questions and answers which may be wrong.
  • AI bias needs to be understood.
  • AI could lead to de-skilling. For example, over-reliance on AI could prevent junior judges from developing key skills.

The authors finish by identifying potential future work to continue to inform how, when, where, and if AI could be used to support the judiciary, recognising the multiple opportunities and also concerns expressed.

The paper is:

Interacting with AI at Work: Perceptions and Opportunities from the UK Judiciary | Proceedings of the 4th Annual Symposium on Human-Computer Interaction for Work

ACM Reference Format: Erin Solovey, Brian Flanagan, and Daniel Chen. 2025. Interacting with AI at Work: Perceptions and Opportunities from the UK Judiciary. In CHIWORK ’25: Proceedings of the 4th Annual Symposium on Human-Computer Interaction for Work (CHIWORK ’25), June 23–25, 2025, Amsterdam, Netherlands. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3729176.3729192

If you would like to discuss how current or future regulations impact what you do with AI, please contact Brian Wong, Tom Whittaker, Lucy Pegler, Martin Cook, Liz Smith or any other member of our Technology team. For the latest on AI law and regulation, see our blog and newsletter.
