After Heppner: The Shadow AI Trap

Nutter McClennen & Fish LLP

A corporate executive used a free AI tool to assess his legal exposure. Federal agents seized the AI-generated documents. Defense counsel claimed privilege. The court ordered production. In United States v. Heppner, No. 25 Cr. 503 (S.D.N.Y. Feb. 17, 2026), Judge Rakoff rejected claims of attorney-client privilege and work-product protection. The ruling puts companies and law firms on notice: AI use without appropriate terms and attorney supervision can defeat privilege and work-product claims.

But Heppner’s implications reach beyond privilege doctrine. The court’s reasoning exposes an independent ethics risk for lawyers who enter client information into consumer AI tools. The case also confirms that AI prompts and outputs may become discoverable electronically stored information (ESI), whether generated on consumer or enterprise platforms.

The practical response starts with limiting legal and client matters to approved enterprise platforms and restricting consumer AI for those uses. It continues with training: every employee who handles legal or client information must understand that AI is not a lawyer. Enterprise terms may help preserve confidentiality. They do not create an attorney-client relationship. And the response requires more than policy alone. A written prohibition on consumer AI is a sticky note on an unlocked door. It does not prevent entry.

Heppner Decision

Bradley Heppner, a financial-services CEO charged with securities fraud and related offenses, used the free consumer version of Anthropic’s Claude to prepare documents analyzing his legal exposure. He acted on his own, without his attorneys’ direction. Some of his prompts reflected information he had learned from counsel.

On attorney-client privilege, Judge Rakoff rejected the claim for three reasons.

  • Claude is not an attorney. All recognized privileges, Judge Rakoff wrote, require a “trusting human relationship” with “a licensed professional who owes fiduciary duties and is subject to discipline.” No such relationship exists between an AI user and a platform.
  • Heppner lacked a reasonable expectation of confidentiality. Anthropic’s consumer terms reserved rights to collect prompts and outputs, use them for training, and disclose them to third parties.
  • Heppner did not use Claude to obtain legal advice from an attorney. Claude’s own terms also disclaim any provision of legal advice.

On work product, Judge Rakoff found the materials were neither prepared at counsel’s direction nor reflective of counsel’s strategy. Defense counsel conceded at oral argument that Heppner prepared the documents “of his own volition” and that they “affect[ed]” but did not “reflect” defense strategy. The court rejected work-product protection, reasoning that Second Circuit precedent has “repeatedly stressed” that the doctrine protects lawyers’ mental processes.

Privilege Risk

Heppner rejected privilege claims for AI-generated materials. The doctrine was not new. The facts were. Businesses and firms that use AI should take notice.

Attorney-client privilege requires a confidential communication with an attorney for the purpose of obtaining legal advice. Consumer AI can compromise confidentiality at the threshold: consumer terms often permit the provider to retain prompts and outputs, use them for training, and disclose data to third parties. And although enterprise AI, under the right contractual terms, may better support confidentiality, privilege still requires an attorney and a legal-advice purpose. AI is not an attorney. An employee who uses a commercial chatbot to analyze legal exposure is not communicating with counsel, regardless of the platform’s terms. No fiduciary duty runs from a chatbot to its user. No licensing body governs its conduct. Contract terms cannot convert a chatbot into counsel.

The court noted a qualification. Judge Rakoff explained that, had counsel directed Heppner to use Claude, the tool might have functioned as counsel’s agent. The ruling suggests that counsel-directed use of AI, through a platform with proper confidentiality terms, could fortify a privilege claim.

Work-Product Risk

Work-product protection rests on its own foundation. The doctrine shields materials created in reasonable anticipation of litigation. Confidentiality is not a core element, though disclosure to an adversary can waive the protection. Even so, unsupervised AI use may give opposing counsel a basis to challenge work-product claims.

In Heppner, the court, applying Second Circuit precedent, rejected the defendant’s work-product claim because the materials were not prepared at counsel’s behest and did not reflect counsel’s strategy.

Warner v. Gilbarco, Inc., No. 2:24-cv-12333 (E.D. Mich. Feb. 10, 2026), went the other way. The court held that a pro se litigant’s AI-assisted litigation materials qualified as work product. It then rejected the argument that disclosure to consumer ChatGPT waived the protection, reasoning that waiver requires disclosure to an adversary or in a way likely to reach an adversary. The court added that AI tools are “tools, not persons.”

These decisions expose a fault line: some courts require attorney direction or involvement before work-product protection attaches;[1] others do not.[2] The takeaway: discoverability will turn on the facts and the governing standard.

Ethics Risk

For lawyers, entering client information into a consumer AI tool can create an ethics problem even if privilege or work-product protection applies to the underlying material.

Heppner was not an ethics case. But the facts that helped defeat privilege there—including consumer terms authorizing training on prompts, human review, and third-party disclosure—raise the same concerns Rule 1.6 addresses. ABA Model Rule 1.6(c) requires lawyers to make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client. The duty covers information relating to the representation, regardless of source. Consumer AI terms may be difficult to reconcile with that standard.

Some jurisdictions, including Massachusetts and New York, define confidential information more narrowly than the ABA Model Rules.[3] But even under those narrower definitions, the risk remains: case facts, litigation strategy, and deal terms fall within the duty’s scope.

From Risk to Response

The risks are distinct, but the response is integrated. It spans contracts, technology, training, and enforcement.

  • Review your enterprise AI agreements. Confirm that they bar training on customer data, restrict access, and impose confidentiality commitments. Look for carve-outs for anonymized data, safety review, or analytics that could weaken a confidentiality argument. The analytical framework for enterprise AI and privilege tracks what courts have recognized for cloud storage, hosted email, and e-discovery vendors. The technology is new. The diligence is not.
  • Designate approved enterprise AI tools. Identify platforms the organization has vetted and approved. Make them easy to access and use. The harder the approved tool is to access, the more likely employees are to bypass the prohibition and use the consumer alternative. Ease of access is part of compliance.
  • Use technical controls to restrict consumer AI. Work with IT to deploy network-level controls. Restrict consumer AI domains on company networks and managed devices. Use a cloud access security broker to detect unsanctioned AI endpoints. Deploy data-loss-prevention tools that scan for sensitive data sent to AI platforms, including through browser inputs and clipboard activity. No single control is airtight; layered defenses reduce exposure. The goal is to make unauthorized use difficult, not impossible. Even well-designed controls depend on employee discipline and monitoring.[4] An illustrative sketch of a simple pre-send check appears after this list.
  • Monitor for compliance. Technical controls have limits. Employees using personal devices or off-network connections can bypass corporate gateways. Active monitoring, audit logging, and periodic compliance reviews address what perimeter controls miss.
  • Train your people. Training should make clear that AI is not a substitute for legal advice from counsel. Every employee who handles legal or client information should understand the line between enterprise and consumer AI and why that line matters. Employees should understand that what they type into an AI tool may later be produced in discovery.
  • Ask about prior AI use. At the outset of any legal, regulatory, or compliance matter, determine whether anyone used AI tools to analyze or discuss the issue. Prior consumer-AI use may have compromised privilege before the matter reached counsel.
  • Update litigation-hold and preservation protocols. Companies should assume that AI interactions can create ESI subject to preservation obligations and discovery requests. Litigation-hold notices and retention policies should account for AI-generated content.
  • Define expectations and enforce them. Technical controls reduce the problem. They do not eliminate it. Clear expectations and accountability for unauthorized consumer AI use, consistent with company policy, address what technology cannot.
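
By way of illustration only, the sketch below shows the kind of pre-send check a data-loss-prevention gateway or managed browser extension might apply before a prompt reaches a consumer AI endpoint. It is a simplified Python example, not a recommendation of any product or configuration: the blocked domains, the sensitive-content patterns, and the function name are hypothetical placeholders, and real deployments rely on commercial DLP and cloud access security broker tooling rather than hand-rolled scripts.

    # Illustrative only: a simplified DLP-style check an outbound gateway or
    # managed browser extension might run before text reaches a consumer AI
    # endpoint. The domain list and patterns below are hypothetical examples.
    import re
    from urllib.parse import urlparse

    # Hypothetical list of consumer AI domains the organization has not approved.
    BLOCKED_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

    # Hypothetical patterns suggesting privileged or client-confidential content.
    SENSITIVE_PATTERNS = [
        re.compile(r"attorney[- ]client", re.IGNORECASE),
        re.compile(r"privileged\s+and\s+confidential", re.IGNORECASE),
        re.compile(r"\bmatter\s+no\.?\s*\d{4,}\b", re.IGNORECASE),  # internal matter numbers
    ]

    def evaluate_outbound_prompt(destination_url: str, prompt_text: str) -> str:
        """Return 'block', 'flag', or 'allow' for an outbound AI prompt."""
        host = urlparse(destination_url).hostname or ""
        to_consumer_ai = any(host == d or host.endswith("." + d) for d in BLOCKED_AI_DOMAINS)
        hits = [p.pattern for p in SENSITIVE_PATTERNS if p.search(prompt_text)]

        if to_consumer_ai and hits:
            return "block"   # sensitive content headed to an unapproved consumer AI tool
        if to_consumer_ai or hits:
            return "flag"    # route to security review and write to the audit log
        return "allow"

    if __name__ == "__main__":
        print(evaluate_outbound_prompt(
            "https://claude.ai/chat",
            "Privileged and confidential: analyze our exposure in Matter No. 20481."))
        # prints "block"

Even a check this simple illustrates why layered defenses matter: it catches the careless paste into a blocked site on a managed device, but not the employee who switches to a personal phone off the corporate network, which is why the monitoring, training, and enforcement items above remain necessary.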

Conclusion

AI use can fall outside the legal protections many users assume exist. That gap is where the risk lies. Companies and firms that close it, through sound contracts, usable enterprise tools, technical controls, and disciplined training, will be better positioned when privilege is tested and records are demanded. Those that wait may confront the consequences in discovery, in a disciplinary proceeding, or both.

Paul Ayoub, a Nutter partner, prompted this article with questions about privilege. Michael Leard, a Nutter partner; Meredith Lawrence, Nutter’s General Counsel; and Charlie Wise, Nutter’s Chief Information Officer, reviewed the draft and sharpened it.

________________________

[1] See United States v. Heppner, No. 25 Cr. 503, slip op. at 9-11 (S.D.N.Y. Feb. 17, 2026) (collecting cases).

[2] See, e.g., Blattman v. Scaramellino, 891 F.3d 1, 5 (1st Cir. 2018); Shih v. Petal Card, Inc., 565 F. Supp. 3d 557, 574 (S.D.N.Y. 2021); Goff v. Harrah’s Operating Co., 240 F.R.D. 659, 660-61 (D. Nev. 2007).

[3] Massachusetts and New York limit the duty to information protected by privilege, information likely to be embarrassing or detrimental to the client, or information the lawyer has agreed to keep confidential. See Mass. R. Prof. C. 1.6; N.Y. R. Prof. C. 1.6(a).

[4] For companies whose competitive advantage depends on proprietary data, the calculus goes further and may require on-premises infrastructure, a challenge beyond the scope of this article.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Nutter McClennen & Fish LLP
