Law, Ethics and AI: Judicial Empathy in the Age of Artificial Intelligence
On 17 July 2025, a roundtable titled “Law, Ethics and AI: Can, or Should, AI be Trained for Judicial Empathy?” was held at the International Dispute Resolution Centre in London. The event brought together leading figures from the judiciary, academia, legal practice, and technology to examine the evolving role of artificial intelligence in the justice system.
Can AI Be Empathetic?
A central theme of the discussion was whether AI can, or should, be trained for judicial empathy. While participants agreed that AI cannot genuinely experience empathy, there was consensus that it may become increasingly capable of recognising and responding to human context. Nonetheless, moral understanding, compassion, and discretion were deemed inherently human responsibilities, and participants concluded that AI should serve as a supportive tool in judicial processes, not as a decision-maker. Trust, rather than empathy, was identified as the critical factor for AI adoption in justice.
Transparency and Accountability
Transparency and accountability emerged as essential principles for any AI system designed to support judicial empathy. The roundtable stressed the importance of systems being transparent, auditable, and compliant with governance and data standards. Avoiding opaque or “black box” decision-making was considered vital. Public trust, explainability, and accountability were highlighted as important prerequisites for the legitimate use of AI in the justice system.
Current Applications
In terms of practical application, the roundtable highlighted how AI is already being used in the justice system for tasks such as document review, data extraction, and summarising written submissions. Participants agreed that while these functions can improve efficiency without compromising fairness, substantive legal judgments and credibility assessments must remain the exclusive domain of human judges. A recent judicial example was cited: VP Evans & Ors v The Commissioners for HMRC [2025] UKFTT 01112 (TC) (see our summary here). This case illustrated the use of AI under judicial supervision, reinforcing the importance of transparency and the value of limited, well-defined applications of AI.
Governance and Standards
The conversation also addressed the need for robust governance frameworks to ensure ethical deployment of AI. Standards-based models, such as ISO 42001, were cited as examples. Collaboration among regulators, industry stakeholders, and professional bodies was deemed crucial for designing trustworthy AI systems.
A Cautious Approach
The roundtable recommended a cautious approach to AI integration. Participants advocated gradual implementation, beginning with low-risk applications such as technical legal tasks. They maintained that complex legal decisions should continue to be made by humans, and that institutions must prepare for the societal impacts of AI through education and strong regulatory frameworks.
The roundtable concluded that while AI offers transformative potential for improving efficiency and consistency in the justice system, it cannot replace the human values of empathy, wisdom, and discretion. Trust and accountability, grounded in transparency and appropriate governance, are, in their view, essential for public acceptance. The UK was encouraged to take a leading role in responsible AI adoption, thereby strengthening both justice and institutional legitimacy.
For further information on AI regulation and incident preparedness, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Griffiths, Kerry Berchem or any other member of Burges Salmon’s Technology team.