The U.S. District Court for the Southern District of New York recently signaled an important warning for corporate and individual litigants, both potential and actual, who intend to use generative artificial intelligence (“AI”) tools in connection with legal matters: communications with publicly available AI platforms may not be protected by the attorney-client privilege or the work-product immunity.
In its decision, the court held that materials created through a defendant’s use of a public AI platform while preparing for a criminal investigation were discoverable because they lacked the hallmarks of privilege. The case arose from a securities fraud prosecution in which, after receiving a grand jury subpoena, the defendant used Claude, a public generative AI platform, to analyze facts, outline potential defenses, and prepare reports anticipating charges. Following execution of a search warrant, the government seized documents reflecting those AI communications, and the defendant asserted privilege and immunity on the grounds that the materials incorporated information from counsel, were intended to facilitate legal advice, and were later shared with counsel. The court rejected each of those arguments.
The court held that the AI communications were protected by neither the attorney-client privilege nor the work-product immunity. It found no attorney-client privilege because (1) the communications were with an AI platform rather than a lawyer; (2) there was no reasonable expectation of confidentiality, given the platform’s terms permitting Claude to collect and disclose AI searches and results; and (3) the communications were not made for the purpose of obtaining legal advice, because the defendant used the tool independently and the platform disclaims providing legal advice. The court further concluded that the materials were not attorney work product because they were not prepared by counsel or at counsel’s direction and did not reflect counsel’s mental impressions or strategy; the fact that the defendant later shared the materials with counsel did not create privilege where none previously existed.
In addressing the confidentiality prong of privilege, the court emphasized that Claude’s public system – which uses inputs and outputs to train its model and may share data with third parties as permitted by its terms – eliminated any reasonable expectation of confidentiality. This distinction highlights the importance of using closed enterprise AI systems when handling sensitive information, as enterprise systems are built for a single organization and offer greater control over privacy and security. Put simply, public AI is a shared service open to many users, while enterprise AI is a private system tailored to one organization.
The decision underscores significant privilege risks associated with using public AI tools in connection with investigations, disputes, or other legal matters, and signals that courts are likely to apply traditional privilege principles strictly even as technology evolves. Users should avoid inputting confidential or legally sensitive information into public AI platforms, assume that AI communications may be discoverable, consult counsel before using AI tools in connection with pending or anticipated litigation, and consider implementing internal policies or usage guidelines governing AI use.
However, courts are not uniformly concluding that no privilege or protection can extend to AI-related communications. For example, in Warner v. Gilbarco, Inc., a federal court in the Eastern District of Michigan denied a motion to compel the plaintiff to produce all documents concerning the plaintiff’s use of an AI tool in connection with the lawsuit, holding that the information was protected by the work-product doctrine and that the use of ChatGPT did not waive that protection because waiver requires disclosure to an adversary or a likelihood that the material will reach one. The court characterized the request as an improper attempt to obtain a party’s or attorney’s internal mental impressions and drafting process.
The key takeaway is that while generative AI can be a powerful tool, using public platforms to analyze legal issues may risk waiver of privilege protections, and parties should treat AI prompts with the same caution as communications with any third party. Even when using closed enterprise AI systems, clients would be wise to check with counsel about the safest way to use AI so as not to risk disclosing the client’s searches and results to an adversary in litigation.