AI and privilege – when could use of AI mean confidentiality is lost?
The Upper Tribunal (Immigration and Asylum Chamber) in Munir v Secretary of State for the Home Department (AI hallucinations; supervision; Hamid) [2026] UKUT 81 (IAC), in the context of two cases about alleged use of AI by legal representatives, made notable comments about how use of AI systems could result in a loss of confidentiality and, consequently, of any claim to privilege.
What happened in Munir?
Munir consolidates two cases.
In the first case, grounds of judicial review lodged with the claim form in the Upper Tribunal contained multiple inaccuracies, including citations to non‑existent cases. Notably, in response to these allegations, the law firm stated that the grounds had been drafted by a “part‑time trainee lawyer”.
The second case involved grounds of appeal to the Upper Tribunal that cited case law unrelated to the subject of the appeal. The individual who drafted the grounds, a Level 3 accredited adviser, initially denied using AI in preparing the documentation, but later stated that he “cannot dismiss the fact that the case was an AI creation as there is no other explanation”.
AI and professional conduct
The Tribunal was clear that legal professionals are obliged to ensure that documentation presented to the court is accurate. Failure to do so is a breach of professional obligation and a waste of judicial resources.
In addition, legal professionals must supervise their juniors and are responsible for the work they produce. The judge emphasised that:
In our judgement, a supervisor who fails to ensure that the work of a more junior fee-earner does not contain false cases or citations is likely to be more culpable than a lawyer who fails to ensure that his own work is free from such "hallucinations”
The courts recognise the risks posed by AI and have taken steps to reduce the potential impact on the justice system. For example, certain court forms now require legal professionals to confirm, by way of a statement of truth, that any authority cited (a) exists, (b) can be located using the citation provided, and (c) supports the proposition of law for which it is cited. The Law Society has also issued guidance on Generative AI for solicitors and firms to help practitioners understand the technology and its risks.
Despite this, the Upper Tribunal has seen a significant increase in the citation of fictitious authorities in 2025 (and we have previously written about some of these cases).
Judicial commentary on AI and confidentiality
Although the decision in Munir is primarily about supervision and the obligations owed to the court, the decision also remarks on the risks of AI use for confidentiality:
Uploading confidential documents into an open-source AI tool, such as ChatGPT, is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege, and any such conduct might itself warrant referral to the SRA and should, in any event, be referred to the Information Commissioner’s Office.
Notably, the judgment does not refer to the Updated Guidance on AI for Judicial Office Holders. That guidance states:
Do not enter any information into a public AI chatbot that is not already in the public domain. Do not enter information which is private or confidential. Any information that you input into a public AI chatbot should be seen as being published to all the world.
This indicates that the court's starting position may be that any use of (what we refer to as) a public AI chatbot means that any confidentiality in the inputs is lost. This matters because confidentiality (or the loss or lack of it) is relevant to whether privilege can be claimed at all, as seen in recent US cases on AI and privilege.
Further, the differences in language, such as between 'public AI chatbot' and 'open-source AI', and in explanations of what happens to data, illustrate that terminology and understanding can vary. Consequently, care needs to be taken when explaining and referring to specific AI systems and how they operate, to ensure clarity.
If you want to know more about how AI may affect your organisation, please contact Tom Whittaker or your usual contact within Burges Salmon.
This article was written by Tom Whittaker and Beata Kolodziej.