
Experts and AI – guidance from the Academy of Experts


The Academy of Experts has published guidance to assist expert witnesses on the use of artificial intelligence (“AI”) in their work. The Academy is the professional society and accrediting body for expert witnesses of all disciplines. It is independently run by experts for experts and those using them. The guidance is intended to apply to expert witnesses providing independent expert evidence in legal proceedings.

The guidance highlights the following key points:

  • Experts need to recognise when they are using AI: This will be obvious in some cases, but not always. For example, some data analytics and e-discovery tools are increasingly using AI in a manner that is not always apparent to users. If in doubt about whether a technology uses AI, experts should undertake further due diligence to establish this.
  • Experts cannot divest responsibility or evade duties when using AI: AI is not a substitute for expert opinion. Experts need to use AI cautiously and exercise appropriate oversight of any AI-generated outputs they rely on.
  • Experts should be familiar with the risks of using AI: Risks include inaccuracies, hallucinations, confidentiality, privilege, bias, data protection, intellectual property concerns, and regulatory compliance.
  • Experts should consider whether using AI impacts their duties: Experts need to ensure that they can and do continue to comply with their duties at all times (e.g. professional duties under the Civil Procedure Rules of England & Wales, and arbitral guidelines).
  • Prior to using AI, experts should conduct a number of checks: These include ensuring experts are permitted to use AI at all; determining the purpose of the AI use; considering the risk levels of AI use (categorised into prohibited, high-risk, or low-risk); selecting the appropriate AI tool and understanding how it works; liaising appropriately with instructing lawyers; and documenting key decisions taken in relation to the use of AI.
  • During the use of AI, experts should implement safeguards: These include adequate human oversight and avoiding over-reliance on AI to ensure accuracy; documenting how broadly the AI tool was used and how the output was used (in sufficient detail to enable a court to understand why the expert took each step); considering any team members’ use of AI; and considering disclosing the use of AI to the court and/or opposing counsel.
  • Experts should keep up to speed with AI: Given the evolving nature of AI, experts should be aware of legal and regulatory developments, emerging risks, and new technologies.

The guidance also contains a non-exhaustive checklist and examples of AI activities with corresponding risk levels to assist expert witnesses. Given the wide-ranging use cases of AI technologies (from word editing software and e-discovery platforms to GenAI and analytical tools), experts need to be prudent at all times to ensure they can provide independent evidence in line with their duties.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Griffiths or any other member of our Technology team. For the latest on AI law and regulation, see our blog and newsletter.

This article was written by Anusha Kasture and Tom Whittaker.
