Generative AI, Confidentiality, and Privilege

Thursday, April 2, 2026

Written by: Guy Jeffress

In United States v. Heppner, 2026 U.S. Dist. LEXIS 32697 (S.D.N.Y. Feb. 17, 2026), Judge Jed Rakoff of the Southern District of New York ruled that documents generated through a consumer version of Anthropic’s Claude AI tool were not protected by the attorney-client privilege or the work-product doctrine under the circumstances presented.

The decision is described as the first to address privilege and work-product claims arising from a non-lawyer’s use of a consumer-grade, non-enterprise AI tool for legal research, including the consequences of inputting privileged information into such a tool. Although the ruling is from a federal court in New York, the risk it highlights is operational rather than geographic: if personnel in any organization (including Virginia-based organizations) use public AI tools to analyze legal exposure or litigation strategy, they may create materials that adversaries are able to obtain.

After receiving a grand jury subpoena and retaining counsel, the defendant used a non-enterprise, consumer version of Claude to research legal issues related to the government’s investigation. Without counsel’s direction or involvement, the defendant input information he had learned from his attorneys into the AI tool, generated reports outlining defense strategy, and later shared those materials with his lawyers. The government moved for a ruling that the AI-generated materials were not protected by either attorney-client privilege or work product, and the court granted that motion.

Attorney-client privilege generally requires three elements: (i) a communication between client and attorney, (ii) an intention that the communication be kept confidential, and (iii) a purpose of obtaining or providing legal advice. In Heppner, the court concluded the AI communications failed those requirements on the facts presented.

The court reasoned that the Claude AI tool is not an attorney, and that communicating with a tool, even about legal issues, does not by itself create the kind of attorney-client communication the privilege is designed to protect.

The court found no reasonable expectation of confidentiality because the defendant shared the content with a publicly available, third-party platform. A key factor in the analysis was the platform’s own terms of use and privacy policy, which indicated that the tool collects user inputs and outputs, may use data for training, and may disclose data to governmental authorities and in litigation. In practical terms, the court treated use of this consumer tool as tantamount to disclosure to a third party, undermining any reasonable expectation that the communications were confidential.

The court noted that counsel did not instruct the defendant to use the AI tool and that non-privileged communications do not become privileged simply because they are later shared with an attorney.

“Documents generated through a consumer version of an AI tool were not protected by the attorney-client privilege or the work-product doctrine.”

The court also held that the work-product doctrine did not apply because the defendant conducted the AI research independently, not at counsel’s direction; materials a client generates on his own initiative, without counsel’s involvement, may fall outside the doctrine’s protection.

The decision suggests several practical steps for litigators and for organizations:

  • Determine whether clients have “discussed” legal or risk matters with any AI tool.
  • Consider adding inquiries about AI usage and chat histories to deposition questions and discovery requests.
  • Require privilege-sensitive decisions about AI use to be made by people who understand how the use of AI may waive the attorney-client privilege or work-product protection.
  • Recognize that using consumer-grade AI tools to analyze legal exposure, assess complaints or risk, research regulatory issues, or prepare for litigation may generate documents and information that adversaries can obtain in discovery.
  • Confirm whether consumer-grade AI tools are permitted in the work environment, and limit them to appropriate uses (consider prohibiting any use involving privileged or confidential information).
  • Make sure employees, agents, and clients understand that running confidential or trade-secret information through a public AI tool creates a confidentiality risk and may waive legal privileges.
  • Assume that anything entered into a public AI tool may be discoverable.
  • Remember that feeding legal analysis or correspondence with counsel into an open AI system can waive privilege and confidentiality protections.
  • Consider restricting the input of privileged, confidential, or investigation-related information into consumer AI tools absent clear permission and internal protocols.

For Virginia-based professionals and organizations, the practical takeaway is to treat the decision as an impetus to revisit policies and practices governing the use of AI tools. In particular, identify whether, where, and to what extent consumer AI tools are being used in work processes (especially in workflows involving legal matters) and assess whether those uses could undermine the attorney-client privilege or the work-product doctrine and thereby create discoverable materials.

Generative AI may be new, but longstanding doctrines of privilege and work product still turn on confidentiality and the role of counsel in creating protected materials. Organizations should reassess AI use to mitigate litigation and regulatory risk, with particular attention to tool selection, training, and guardrails around privileged or sensitive information.

This blog post is not intended to provide legal advice or substitute for the advice of legal counsel with respect to specific facts and situations.