Judge Jed Rakoff’s recent opinion in United States v. Heppner has generated quite a bit of discussion among litigators for its conclusion, apparently a first, that divulging information to a consumer-grade generative artificial intelligence tool prevented assertion of attorney-client privilege over both the inputs to, and outputs of, the tool.
The ruling may not come as a surprise to lawyers, but it could be news to their clients.
Heppner is a criminal case, and the documents at issue were uncovered during the execution of a search warrant – i.e., not via pretrial discovery. Even so, the court’s ruling is a cautionary tale for clients who increasingly are using artificial intelligence to conduct legal research even after they’ve engaged counsel.
Attorney-Client Privilege Applied to AI
The court’s discussion of attorney-client privilege in the context of artificial intelligence is not surprising or novel. For several years now, the American Bar Association and numerous state bar regulators have urged lawyers to be extremely careful when sharing client confidential information with generative AI tools. The implications for both professional ethics and attorney-client privilege have been raised and discussed at length.
In Formal Opinion 512, published July 29, 2024, the American Bar Association’s Standing Committee on Ethics and Professional Responsibility identified several ethical obligations triggered when a lawyer uses a generative AI tool in a client representation.
The first is the ethical duty of competence and the related duty of technology competence. ABA Model Rule 1.1 requires lawyers to understand “the benefits and risks associated” with the technologies they deploy. Regarding AI, the ABA wrote that a lawyer need not become an AI expert but must acquire a reasonable understanding of how the specific tool functions — and must stay current as the technology evolves.
The second is the ethical duty to protect client confidential information. ABA Model Rule 1.6 requires lawyers to make “reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation.” The ABA concluded that self-learning GAI tools — tools that train on user inputs — pose a direct threat to this ethical obligation.
Earlier, on Nov. 16, 2023, the State Bar of California published Practical Guidance For the Use of Generative Artificial Intelligence in the Practice of Law. In that document, bar officials cautioned that lawyers “must not input any confidential information of the client into any generative AI solution that lacks adequate confidentiality and security protections.”
And this blog published an extensive list of other state bar regulator guidance on the ethical use of artificial intelligence in 2023 and again in 2024.
Privilege Misunderstood by Client, Perhaps
It’s fair to say that the dangers of using generative AI tools in law practice are well-known to lawyers in 2026.
But those dangers are not so well-known to clients.
In the Heppner case, the defendant, after receiving a grand jury subpoena and engaging defense counsel, used the consumer version of Anthropic’s Claude AI tool to prepare reports outlining his defense strategy and potential legal arguments. Federal agents subsequently seized the resulting documents — 31 in total — during a search of his residence. Heppner’s lawyers attempted to assert attorney-client privilege over those documents, but it was too late. That horse was already out of the barn.
Judge Rakoff identified three reasons why attorney-client privilege couldn’t be asserted over the seized documents:
- The documents were communications between the defendant and the Claude tool. Claude is not an attorney. Judge Rakoff remarked that Claude is closer to a word processor than to an attorney who owes any sort of fiduciary duty to a client.
- The documents contained communications that were not confidential. Claude’s privacy policy warned that user-submitted information could be shared with “third parties,” including the government.
- Although the defendant may have communicated with Claude with the intention of sharing its output with his attorney, he didn’t, in fact, have the requisite intention of seeking legal advice from Claude. The defendant used Claude on his own, without his lawyer’s knowledge or encouragement.
“The communications between Heppner and Claude were not privileged at the time they took place,” the court wrote. “Moreover, even assuming that Heppner intended to share these communications with his counsel and eventually did so, it is black-letter law that non-privileged communications are not somehow alchemically changed into privileged ones upon being shared with counsel.”
Similar considerations led the court to reject the defendant’s contention that the documents were protected attorney work product.
The real takeaway from Heppner may very well be that clients using consumer-grade artificial intelligence tools to gain insights into their cases are unwittingly creating evidence that could be used against them in criminal or civil litigation. Remember: Heppner involved documents that contained both the defendant’s prompts to Claude and Claude’s replies. If relevant, both the prompts and the replies could be sought via document requests, interrogatories and deposition questioning in pretrial discovery in a civil case.
Without the protection of attorney-client privilege, the release of this information could be highly damaging to the client’s interests. Careful litigators might want to consider warning clients about the dangers of artificial intelligence tools as early as possible in the representation – assuming they are not already doing so.