
AI and legal privilege: no confidentiality or privilege where third-party commercial AI system used


This article was updated on 25 February 2026 in light of the published court opinion

On 10 February 2026, the U.S. District Court for the Southern District of New York ruled that documents created by a client using a commercially available generative AI tool - Anthropic's Claude - during a criminal investigation, and then sent to their lawyer, were not covered by attorney-client privilege or the work product doctrine.

At the time of writing, you can access the government's arguments (here) and the judge's written opinion (here).

First, we pick out two key points that practitioners in England and Wales can learn from the case before turning to the specific arguments run.

What can we learn from this case?

AI and the Law are just beginning to intersect.  Although the case took place in the US where laws on privilege are different to those in England and Wales, it demonstrates possible issues for litigation in England and Wales.

Confidentiality

Concerns were raised in the US case about confidentiality given the Defendant's use of a third-party AI platform. The platform's written privacy policy provided that the AI developer could collect data on both users' ‘inputs’ and the tool's ‘outputs’ for training the tool, and allowed disclosure to government authorities. This raised the question of whether the AI system's user had any expectation of ‘privacy’ in their inputs.

Organisations must carefully assess AI systems during onboarding - and any AI systems which can otherwise be accessed - including how input and output information may be used in future and any rights the AI provider has to share that information with external parties, including regulators or governments.

In England and Wales, confidentiality is at the heart of privilege. There has been no ruling in England to date on whether inputting to, or generating content with, an AI system affects whether or not the content is confidential. Creative legal arguments may develop in the future as to what confidentiality means, but it is interesting to note that the Guidance for Judicial Office Holders in England and Wales states: 

Do not enter any information into a public AI chatbot that is not already in the public domain. Do not enter information which is private or confidential. Any information that you input into a public AI chatbot should be seen as being published to all the world. The current publicly available AI chatbots remember every question that you ask them, as well as any other information you put into them. That information is then available to be used to respond to queries from other users. As a result, anything you type into it could become publicly known

The Guidance does not define ‘public AI chatbot’ and is only guidance, not law, but it might reflect how judges approach issues of confidentiality in the context of legal professional privilege.

AI documents produced without lawyer involvement

The Judge in the US case concluded ‘the AI documents lack at least two, if not all three, elements of the client-attorney privilege’. Claude was not a lawyer and the judge noted ‘that alone disposes of the accused’s claim of privilege.’ If litigation is not in contemplation, then for legal advice privilege to apply in England and Wales there needs to be a communication between a client (which is restricted to individuals within an organisation tasked with seeking or receiving legal advice) and a lawyer.

The term 'lawyer' has been strictly defined within England and Wales. Judicial decisions have consistently declined to broaden this to include other professionals, such as accountants, who provide legal advice. See R (Prudential PLC) v Special Commissioner of Income Tax. At present, it appears unlikely that the English Courts will alter the definition of "lawyer" in relation to legal professional privilege and any change will need to come from Parliament. Organisations should therefore address this risk in their internal AI Use Policies and inform employees accordingly.

The Judge in the US case accepted that the Defendant (Heppner) did not communicate with Claude for the purposes of obtaining legal advice, but acknowledged that, had counsel directed the accused to use Claude, Claude might arguably be said to have functioned in a ‘manner akin to a highly trained professional who may act as a lawyer’s agent within the protection of the attorney-client privilege.’ Any substantial change in England and Wales would need to come from Parliament, but this is an area to watch: similar arguments may develop along the lines of those run for non-qualified personnel operating under a lawyer's supervision.

But, as Heppner's counsel also conceded, Heppner did not do so at the suggestion or direction of counsel … (noting that counsel “did not direct [Heppner] to run Claude searches”). Had counsel directed Heppner to use Claude, Claude might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer's agent within the protection of the attorney-client privilege… But because Heppner communicated with Claude of his own volition, what matters for the attorney-client privilege is whether Heppner intended to obtain legal advice from Claude, not whether he later shared Claude's outputs with counsel. [emphasis in the original]

Organisations procuring and using AI should consider how to manage the practical risks. For a short overview of privilege in England & Wales, and the risks of using AI and key mitigations, please visit our guide: AI and Privilege Overview

What happened in the case?

During a search of the Defendant’s property, numerous documents and electronic devices were seized. Among the seized materials were documents that memorialised communications the accused had with the AI platform, Claude. The documents were prepared after the Defendant had received a grand jury subpoena and after it was clear from discussions with the Government that the accused was the target of an investigation. They were prepared independently, without any suggestion from counsel. These documents outlined defence strategy: what the accused might argue with respect to the facts, and the law under which it was anticipated the government might bring charges.

The Defendant claimed privilege over the documents, arguing that they had been created for the purpose of speaking with counsel and had subsequently been shared with counsel.

Attorney-Client Privilege

In relation to the application of the attorney-client privilege, the government’s arguments were:

“First, the AI Documents fail every element of the attorney-client privilege. They are not communications between a client and attorney—the AI tool is plainly not an attorney, and no attorney was involved when he created the documents. They were not made for the purpose of obtaining legal advice—the AI platform’s terms of service expressly disclaim any attorney-client relationship and state that the tool does not provide legal advice. And they are not confidential— the defendant voluntarily shared his queries with the AI tool, and the AI responses were generated from a third-party commercial platform whose privacy policy permits disclosure to governmental authorities.”

“Second, the defendant cannot retroactively cloak unprivileged documents with privilege by later transmitting them to counsel. Well-settled law holds that preexisting, non-privileged materials do not become privileged merely because a client eventually shares them with an attorney.”

The judge agreed. He found that the documents were not communications between the Defendant and his counsel, since the AI tool cannot be regarded as a lawyer. The Court acknowledged the argument, raised by some commentators, that a user’s AI inputs are more akin to the use of other software than to communications. However, the judge emphasised that all recognised forms of privilege require “a trusting human relationship”.

For attorney-client privilege to apply, communications must be made for the purpose of obtaining legal advice. This was not the case here as the Defendant’s use of the AI tool was not at the suggestion or direction of counsel.  This conclusion was not affected by the fact that the communications were intended to be, and eventually were, shared with counsel. 

In any event, the judge found that the information in the documents was not confidential, as it was communicated to a third-party AI platform whose written policy provides that Anthropic uses customers’ “inputs” and Claude’s “outputs” to train the tool and reserves the right to disclose such data to third parties, such as governmental regulatory authorities. As such, the Defendant could not have a “reasonable expectation of confidentiality in his communications”.

Work Product doctrine

In relation to the work product doctrine, the government argued:

“Third, the work product doctrine does not protect these materials. Defense counsel has represented that the defendant created the AI Documents on his own initiative—not at counsel’s behest or direction. The doctrine shields materials prepared by or for a party’s attorney or representative; it does not protect a layperson’s independent internet research”

Again, the court agreed. The doctrine applies to materials in the possession of a client only where they reflect the thought processes of the client’s counsel. Here, the Defendant prepared the documents on his own initiative, not as his counsel’s agent, and they did not reflect counsel's strategy.

Also of note: according to reports, Defence counsel argued that if prosecutors tried to use the AI-generated information at trial, it could give rise to a "witness-advocate conflict", since his law firm would become a witness in that scenario. According to the reports, the judge acknowledged the point; we can anticipate this issue developing in due course.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Griffiths or any other member of our Technology team. For the latest on AI law and regulation, see our blog and newsletter.

This article was written by Tom Whittaker, Beata Kolodziej and Stacie Bourton.
