
AI and legal privilege: no confidentiality or privilege where third-party commercial AI system used


On 10 February 2026, the U.S. District Court for the Southern District of New York ruled that documents created by a client using a commercial generative AI tool and sent to their lawyer were not covered by attorney-client privilege or work product privilege. 

At the time of writing, the government's (the plaintiff's) arguments (here) and the transcript of the hearing (here) are available. We expect the court to hand down an order, and potentially a judgment, which may shed further light on the court's approach and on what is relevant more broadly. In the meantime, these materials are insightful for the arguments a party may make against a claim to privilege over AI-generated documents and how a court may respond.

First, we highlight two key lessons from the case for litigation in England and Wales, before turning to the specific arguments run.

What can we learn from this case?

Although the case took place in the US where laws on privilege are different to those in England and Wales, it demonstrates possible issues for litigation in England and Wales.

Confidentiality

Concerns were raised in the US case about confidentiality: the AI provider's terms permitted disclosure to government authorities, and it was questioned whether the AI system's user had any expectation of 'privacy' in their inputs.

Organisations must carefully assess AI systems during onboarding - and any systems staff can already access - including any rights the AI provider has to share inputs with external parties, such as regulators or governments.

In England and Wales, confidentiality is at the heart of privilege. There has been no ruling in England to date on whether inputting to, or generating content with, an AI system affects whether or not the content is confidential. Creative legal arguments may develop in the future as to what confidentiality means, but it is interesting to note that the Guidance for Judicial Office Holders in England and Wales states: 

Do not enter any information into a public AI chatbot that is not already in the public domain. Do not enter information which is private or confidential. Any information that you input into a public AI chatbot should be seen as being published to all the world. The current publicly available AI chatbots remember every question that you ask them, as well as any other information you put into them. That information is then available to be used to respond to queries from other users. As a result, anything you type into it could become publicly known

The Guidance does not define ‘public AI chatbot’ and is only guidance, not law, but it might reflect how judges approach issues of confidentiality in the context of legal professional privilege.

AI documents produced without lawyer involvement

Another point in the US case was that the AI documents were not produced by the Defendant's lawyers; an AI system would not be considered a lawyer. If litigation is not in contemplation, then for legal advice privilege to apply in England and Wales there must be a communication between a client (which is restricted to the individuals within an organisation tasked with seeking or receiving legal advice) and a lawyer.

The term 'lawyer' has been strictly defined within England and Wales. Judicial decisions have consistently declined to broaden this to include other professionals, such as accountants, who provide legal advice. See R (Prudential PLC) v Special Commissioner of Income Tax. At present, it appears unlikely that the English Courts will alter the definition of "lawyer" in relation to legal professional privilege and any change will need to come from Parliament. Organisations should therefore address this risk in their internal AI Use Policies and inform employees accordingly.

Organisations procuring and using AI should consider how to manage the practical risks. For a short overview of privilege in England & Wales, and the risks of using AI and key mitigations, please visit our guide: AI and Privilege Overview

What happened in the case?

During a search of the Defendant’s property, electronic devices containing documents generated using Anthropic’s AI tool Claude were seized. The Defendant claimed privilege over the documents as “[a]rtificial intelligence generated analysis conveying facts to counsel for [the] purpose of obtaining legal advice.”

The government’s arguments were:

“First, the AI Documents fail every element of the attorney-client privilege. They are not communications between a client and attorney—the AI tool is plainly not an attorney, and no attorney was involved when he created the documents. They were not made for the purpose of obtaining legal advice—the AI platform’s terms of service expressly disclaim any attorney-client relationship and state that the tool does not provide legal advice. And they are not confidential—the defendant voluntarily shared his queries with the AI tool, and the AI responses were generated from a third-party commercial platform whose privacy policy permits disclosure to governmental authorities.

Second, the defendant cannot retroactively cloak unprivileged documents with privilege by later transmitting them to counsel. Well-settled law holds that preexisting, non-privileged materials do not become privileged merely because a client eventually shares them with an attorney.

Third, the work product doctrine does not protect these materials. Defense counsel has represented that the defendant created the AI Documents on his own initiative—not at counsel’s behest or direction. The doctrine shields materials prepared by or for a party’s attorney or representative; it does not protect a layperson’s independent internet research.”

According to the transcript, the judge said: “I'm not seeing remotely any basis for any claim of attorney-client privilege.” Whilst the Defence argued that the AI documents ‘incorporated information’ conveyed from the Defendant's lawyers to the Defendant, the judge noted that the Defendant “…disclosed it to a third-party, in effect, AI, which had an express provision that what was submitted was not confidential.” Further, the judge asked: “Isn't it also true that the AI tool that Mr. Heppner used expressly provided that users have no expectation of privacy in their inputs?”

Also of note, according to reports, Defence counsel argued that if prosecutors try to use the AI-generated information at trial, it could give rise to a "witness-advocate conflict", since his law firm would become a witness in such a scenario. According to the reports, the judge acknowledged the point; we can anticipate this developing in due course.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Griffiths or any other member of our Technology team. For the latest on AI law and regulation, see our blog and newsletter.

This article was written by Tom Whittaker, Beata Kolodziej and Stacie Bourton.
