Disclosure and AI – use of ChatGPT in evidence about keywords

In early 2024 the First-tier Tribunal considered, amongst other things, whether evidence generated by ChatGPT about potential keywords for searching documents could be used to show that the keywords actually used were too narrow.
The Tribunal's key points on the weight to be given to the ChatGPT evidence were:
"Firstly, we must assess the weight that we give to the ChatGPT evidence. We place little weight upon that evidence because there is no evidence before us as to the sources the AI tool considers when finalising its response nor is the methodology used by the AI tool explained. If comparisons are drawn to expert evidence, an expert would be required to explain their expertise, the sources that they rely upon and the methodology that they applied before weight was given to such expert evidence. In the circumstances we give little weight to the ChatGPT evidence that searches should have been conducted in the form set out within that evidence."
Parties need to consider carefully (amongst other things) their legal obligations to search and, consequently, how they search for potentially relevant documents. Sometimes evidence is required to demonstrate that a search strategy and methodology was appropriate, or to make the case that it was not. This case is a useful illustration of the caution with which courts will approach evidence generated by AI systems, but also, as the Tribunal implied, of the possibility that in the right circumstances such evidence could carry some weight.
The case is Oakley v Information Commissioner [2024] UKFTT 315 (GRC) (18 April 2024).
For more information about the law, technology and practice of disclosure, contact Tom Whittaker or David Hine.