GenAI Enters the Litigation Landscape
Generative AI tools like OpenAI’s ChatGPT and Anthropic’s Claude are transforming workplace productivity. However, their use creates new categories of electronically stored information (ESI) – namely prompts (user inputs) and generated outputs – that can become subject to discovery in litigation. This raises pressing questions for in-house counsel: Are AI interactions discoverable, and, if so, how can you mitigate the attendant legal risks?
In short, yes – AI prompts and outputs can be discoverable if they are relevant to a party’s claims or defenses. Companies using GenAI should assume these records may be subject to discovery and plan accordingly. No special “AI privilege” shields these communications. As discussed below, courts will apply familiar principles of relevance, proportionality and privilege to generative AI data, even as unique challenges emerge around preservation and confidentiality. It's also important to consider how generative AI companies like OpenAI and Anthropic treat and store user data in their free vs. enterprise offerings.
Relevance and Duty to Preserve GenAI Interactions
Relevance remains the threshold. Under FRCP 26(b), information is discoverable if it is relevant to a party’s claim or defense and proportional to the needs of the case. The introduction of AI does not alter this basic inquiry. If an employee’s use of ChatGPT or Claude has any tendency to make a disputed fact more or less probable, those AI interactions could be relevant evidence. For example, if an employee asked a GenAI tool to edit a draft contract section later alleged to be misleading, the content of the prompt and the AI’s suggested edits might shed light on the employee’s knowledge or intent. In a trade secret case, a prompt containing proprietary data could itself be evidence of a disclosure that threatens the information’s trade secret status. Conversely, in many situations, AI “chats” will be tangential. Courts are unlikely to compel preservation or production of AI interactions that have no bearing on the issues in suit.
Analogies to traditional internet activity. Courts have tackled analogous questions when litigants seek web search or browser history. If the search or browser history is probative of a claim or defense, there is a good chance it will be deemed discoverable. For example, if, in a discrimination lawsuit, an employer argued it fired the plaintiff due to unauthorized personal use of work computers, the plaintiff might respond that this was a pretextual reason for a discriminatory firing. In that situation, a court would likely find the employer had a duty to preserve and produce the plaintiff’s and other employees’ internet usage logs because the justification for termination, i.e., personal use of work computers, put those logs directly at issue in the case. Similarly, if a party’s use of generative AI is put at issue in a lawsuit, the corresponding ESI will likely need to be produced.
When does the duty to preserve attach and how far does it extend? The duty to preserve relevant evidence is triggered when litigation is reasonably anticipated or pending. At that point, a company must take reasonable steps to preserve ESI in its “possession, custody, or control.” Notably, inputs and AI-generated outputs may reside outside typical enterprise systems – often on a vendor’s cloud servers. If employees use personal or free AI accounts, those records might not be automatically retained by corporate IT, and certainly not if an employee is using a program like ChatGPT for work purposes on a personal computer. Nevertheless, if the content is relevant and the organization can access or direct the preservation of that data, it is obligated to act. Once a company is on notice that GenAI interactions are relevant to a dispute, it must not allow chat histories or related data to be deleted (for instance, by a routine auto-delete setting or an employee failing to save a chat). Reasonable steps may include instructing employees to download or screenshot relevant AI conversations, disabling auto-delete features or engaging the AI provider to ensure data isn’t wiped.
Proportionality and practical limits. Even if relevant, discovery must also be proportional pursuant to FRCP 26(b)(1). The 2015 Advisory Committee notes explicitly recognize that it is often impossible to preserve all ESI. The volume and ephemeral nature of AI interactions could make comprehensive preservation impractical. Courts will weigh whether the likely benefit of the AI-derived evidence justifies the burden and cost of preserving and producing it. If an AI tool was used pervasively in a business process central to the case, broad discovery may be warranted. Otherwise, companies can argue that fishing expeditions into AI chat logs fail proportionality, especially if those logs are hard to collect or contain mostly irrelevant or personal content.
Takeaways: In-house counsel should consider including AI interactions in litigation hold notices when appropriate. Identify key custodians who have used AI for work on relevant matters. Coordinate with IT and the AI vendor on how to export or secure those records. If certain AI data is not retained or accessible (for instance, free chatbot sessions that weren’t saved), document that and be prepared to explain it. At the same time, avoid over-preserving trivial or personal AI use. Early dialogue with opposing counsel is wise to set reasonable parameters – for example, agreeing that only specific custodians or date ranges of AI usage will be collected. And this advice goes both ways – when issuing discovery demands, consider whether generative AI inputs and outputs should be included.
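On the export point: the consumer version of ChatGPT can generate a ZIP archive of a user’s entire chat history (currently via Settings → Data Controls → Export data). Below is a minimal sketch of how IT might triage such an export against a hold window. It assumes the archive contains a conversations.json file with “title” and “create_time” fields per chat – consistent with recent exports, but verify against the actual archive before relying on it.

```python
# Minimal sketch: triage a ChatGPT data export against a legal hold window.
# Assumption: the export ZIP contains conversations.json, where each chat
# has "title" and "create_time" (Unix timestamp) fields. Verify the format
# of the actual archive before using this in a real preservation effort.
import json
import zipfile
from datetime import datetime, timezone

HOLD_START = datetime(2024, 1, 1, tzinfo=timezone.utc)   # hypothetical hold window
HOLD_END = datetime(2024, 12, 31, tzinfo=timezone.utc)

def chats_in_hold_window(export_zip_path: str) -> list[dict]:
    """Return chats whose creation date falls inside the hold window."""
    with zipfile.ZipFile(export_zip_path) as archive:
        conversations = json.loads(archive.read("conversations.json"))
    preserved = []
    for chat in conversations:
        created = datetime.fromtimestamp(chat.get("create_time") or 0, tz=timezone.utc)
        if HOLD_START <= created <= HOLD_END:
            preserved.append({"title": chat.get("title"), "created": created.isoformat()})
    return preserved

if __name__ == "__main__":
    for chat in chats_in_hold_window("employee_chatgpt_export.zip"):
        print(chat["created"], "-", chat["title"])
```

A triage script like this identifies which chats fall within the hold; the underlying export itself should still be preserved intact as the evidentiary record.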
Privilege and Confidentiality Pitfalls
No attorney–client privilege for AI. Communications with a GenAI tool are not privileged in themselves. OpenAI’s own Sam Altman acknowledged as much in a podcast interview. The attorney–client privilege protects confidential communications between a client and a licensed attorney made for the purpose of obtaining legal advice. A chatbot is not an attorney, and prompts or responses from AI are not legal advice from counsel. Even if an AI-generated answer sounds like legal analysis, it lacks the human attorney involvement required for privilege. Some commentators have mused about an “AI privilege” for users’ interactions with these tools, but no U.S. court has recognized such a doctrine. Unless and until that changes – and change is unlikely in the short term – companies should assume that GenAI prompts and outputs will be treated as ordinary business communications: discoverable if relevant and not otherwise protected.
Risks of waiving privilege. More subtle is how GenAI use could jeopardize existing privileges. The confidentiality of attorney–client communications can be lost if those communications are shared, even inadvertently, with a third party – which arguably could include an AI provider. For example, if an employee takes a confidential email from in-house counsel and pastes it into ChatGPT to “rewrite in simpler language,” that act could be deemed a disclosure to an outside party, potentially waiving privilege – though a court finding waiver on those facts is probably unlikely. The more likely outcome is that the AI’s output text would be deemed not privileged, a result just as damaging as the input being treated as a disclosure to a third party.
Work product doctrine considerations. In-house and outside counsel are using generative AI programs – that’s simply a fact. The work product doctrine (FRCP 26(b)(3)) shields materials prepared by or for attorneys in anticipation of litigation. A significant question is whether AI-generated material prepared at an attorney’s direction might be work product. Suppose outside counsel uses ChatGPT to help generate a draft due diligence report during litigation – the drafts could be considered work product if they reveal counsel’s thought process. However, disclosing work product to an AI service could amount to sharing with a potential adversary unless the service is effectively acting as the firm’s agent. In-house counsel should treat AI like any third-party vendor: ensure there is at least an argument the AI tool is a “necessary aid” to the legal representation (akin to a translator or e-discovery contractor) so its involvement doesn’t waive work product protection. This argument is strongest for enterprise AI tools under confidentiality agreements, and weakest for free AI services where the provider might use or reveal the data.
Bottom line: Until clearer law emerges, exercise caution and control when using GenAI in any context touching privileged matters. Avoid inputting verbatim privileged communications or attorney mental impressions into AI unless you are using a vetted enterprise system with robust privacy protections. If AI outputs are used to help form legal advice, document the attorney’s role and do not simply forward raw AI text to clients without review. And anticipate that adversaries may seek discovery of AI-related materials – be ready to justify any withholding on privilege or work product grounds, or to rapidly review and redact those materials if ordered produced.
GenAI Provider Terms: ChatGPT (Free vs. Enterprise)
When managing GenAI-related risk, both generally and with an eye toward potential future litigation, it is critical to understand how your AI provider’s terms handle data retention, training use of data and privacy/security controls. By way of example, OpenAI offers different guarantees for ChatGPT depending on whether you use its free consumer service or its paid enterprise solutions.
OpenAI (ChatGPT) – Free Version: By default, user prompts and ChatGPT responses on the free plan are logged and used to improve the models. With regard to training, unless a user proactively opts out, OpenAI will analyze those conversations and may incorporate them into future training. Users can opt out through the Data Controls section of ChatGPT’s settings.
Training is one thing, however, and data retention is another. If you’ve used ChatGPT, you’re probably aware that you can see your previous chats in the left sidebar. Those chats remain there indefinitely unless you delete them. After you affirmatively delete a chat, it is removed from your account immediately, but it remains on OpenAI’s servers for up to 30 days unless the chat has already been de-identified and disassociated from you, or OpenAI has to retain it longer for security or legal reasons. The same applies to files you upload to ChatGPT. In Temporary Chat mode, chats are automatically deleted from OpenAI’s servers within 30 days.
Thus, an employee using the free ChatGPT program at work could have their prompts stored on OpenAI’s servers potentially forever unless they manually delete them. Even deleted chats would likely persist for up to 30 days in backups – and longer if needed for legal reasons, such as a court order or preservation request. In fact, in the New York Times AI copyright case in the S.D.N.Y., Magistrate Judge Ona T. Wang issued a May 13, 2025, preservation order directing OpenAI to “preserve and segregate all output log data that would otherwise be deleted” on a going-forward basis; the Court later clarified that ChatGPT Enterprise is excluded. OpenAI says it has placed all covered consumer ChatGPT and API content under legal hold and is appealing the order. This is critical because it means even chats that users manually delete are being maintained on OpenAI’s servers rather than being automatically purged after 30 days. As such, those chats are theoretically subject to subpoena.
The simple fact is that employees are using ChatGPT, and it is unlikely they are being diligent about deleting their chat history. That creates an enormous volume of potentially discoverable ESI and the legal risk that follows. This is a major blind spot in the corporate world right now, and in-house counsel should be proactive in mitigating that risk through corporate governance, training and even IT-based controls.
OpenAI – Enterprise and API: OpenAI’s enterprise offerings flip these policies to prioritize privacy. For ChatGPT Enterprise and the OpenAI API (application programming interface), OpenAI does not use customer-provided prompts or outputs to train its models by default. Unless you explicitly opt in to share feedback, OpenAI will not feed your content into model improvement. Data retention is customizable: enterprise admins can set how long chat data is retained, with options even for no retention (i.e., data deleted immediately after processing) or short retention windows. This is a massive advantage over the free ChatGPT program and a powerful risk-mitigation tool for in-house counsel.
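To illustrate the API side of this, here is a minimal sketch using OpenAI’s official Python SDK. The model choice and prompt are placeholders; the point is that, by default, API traffic is not used for training, and the optional store flag shown below governs whether the completion is additionally retained for OpenAI’s dashboard and evals features.

```python
# Minimal sketch using OpenAI's official Python SDK (pip install openai).
# API inputs and outputs are not used for model training by default; the
# optional "store" flag (set explicitly here) controls whether the completion
# is additionally retained for OpenAI's dashboard/evals features.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",   # placeholder model choice
    store=False,      # explicit: do not retain this completion for evals/distillation
    messages=[
        {"role": "user", "content": "Summarize the key terms of this indemnification clause..."},
    ],
)
print(response.choices[0].message.content)
```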
Much like with the free version of ChatGPT, any conversations deleted by the user or by retention policy will be expunged from OpenAI’s systems within 30 days (absent legal requirements to hold them). Additionally, ChatGPT Enterprise provides encryption in transit and at rest, along with strict access controls limiting even OpenAI’s internal access to your data. OpenAI maintains SOC 2 Type II compliance and offers signed DPAs (data processing addenda) to meet GDPR and other privacy law requirements. In short, ChatGPT Enterprise is designed to address business confidentiality: data isn’t used to train models, and the organization retains far more control over its AI interactions.
Why these differences matter for litigation: If a dispute arises over what a particular employee or team asked a generative AI program and the employee was using the free version, it’s entirely possible those chats are still available on the employee’s account. Questions then arise as to whether those chats are obtainable through party discovery or whether a subpoena to the generative AI company would be required. If the employee was using the Enterprise version of ChatGPT, the company will have had much more control over how long the data relating to that chat was retained. Further, enterprise admins could potentially enforce a legal hold on their AI workspace (e.g., by exporting relevant chats or using an admin API to capture them). And remember that you will have to move quickly if you’re the plaintiff – OpenAI removes manually deleted chats from its servers within 30 days. Expedited discovery, or at the very least specific litigation hold communications, should be strongly considered.
Finally, data security and auditability are far superior in enterprise tools. In-house counsel can be more confident that any AI-generated work product remains in company hands and that there are logs of access. For example, OpenAI’s enterprise plans allow audit logging of the prompts an employee makes. This not only aids internal compliance; it also means that, in discovery, you can search those logs for keywords rather than relying on individual recollection of “I think I asked ChatGPT about X.” With consumer tools, unless the individual saved each chat, it may be difficult to reconstruct what was asked weeks or months later if the history is gone.
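To make the keyword-search point concrete, here is a minimal sketch of that kind of triage. The JSONL layout and field names below are hypothetical stand-ins for whatever schema your enterprise export or compliance tooling actually produces.

```python
# Minimal sketch: keyword triage over exported AI usage logs.
# The JSONL layout ({"user": ..., "timestamp": ..., "prompt": ...}) is a
# hypothetical stand-in; adapt the field names to your provider's real schema.
import json

KEYWORDS = {"project falcon", "source code", "acquisition"}  # hypothetical terms

def search_prompt_logs(path: str) -> list[dict]:
    """Return log records whose prompt text contains any watched keyword."""
    hits = []
    with open(path, encoding="utf-8") as log_file:
        for line in log_file:
            record = json.loads(line)
            prompt = record.get("prompt", "").lower()
            if any(keyword in prompt for keyword in KEYWORDS):
                hits.append(record)
    return hits

for hit in search_prompt_logs("enterprise_prompt_logs.jsonl"):
    print(hit["timestamp"], hit["user"])
```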
Outside of the copyright litigations against AI companies themselves, there has not yet been a significant body of case law on motions to compel or motions for protective orders relating to generative AI evidence – but rest assured, it’s coming.
In summary: Upgrading to enterprise-level GenAI solutions is a key risk mitigator. It gives companies ownership of their AI data, avoids unwittingly sharing sensitive information for AI training, and provides the administrative controls needed for compliance and discovery. General counsel should update policies to strongly prefer or require enterprise accounts for any business AI use, and to prohibit using personal/free accounts for work-related prompts, especially those containing proprietary or sensitive information.
Practical Takeaways
- Educate and issue guidelines: Ensure employees know that prompts and outputs from AI tools may be corporate records that could be disclosed in litigation. Implement a policy on approved AI use (e.g., only via enterprise accounts) and on not inputting sensitive data into public AI. This not only protects confidentiality but also makes preservation feasible if litigation hits.
- Include AI in litigation holds: When a dispute arises, ask custodians if they used tools like ChatGPT/Claude related to the matter. If yes, suspend any auto-deletion of those chats. For enterprise AI, coordinate with the provider (or use admin tools) to preserve relevant conversations. For personal/free accounts, consider asking employees to export their chat history (or at least not delete anything) and possibly get consent to access their account if needed.
- Scope discovery appropriately: Be ready to negotiate with opposing counsel on the scope of AI-related discovery. Overbroad requests (“all employees’ ChatGPT use in the last two years”) should be resisted as disproportionate. Focus on specific individuals and time frames where AI use is concretely tied to the claims. Leverage the proportionality factors – burden, cost, privacy – to limit irrelevant fishing expeditions.
- Leverage provider tools: Companies like OpenAI offer enterprise customers compliance features (APIs, admin consoles) to search and retrieve AI interactions. This can significantly streamline e-discovery efforts; a hedged sketch of what such an export might look like follows below.
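As a purely illustrative sketch of that last bullet, the snippet below captures a workspace’s conversations through a generic compliance-style REST endpoint and writes them to a preservation file. The base URL, endpoint path, authentication scheme and response shape are all assumptions, not OpenAI’s actual API; consult your provider’s admin/compliance documentation for the real interface.

```python
# Hypothetical sketch of capturing workspace conversations for a legal hold
# via an enterprise compliance-style API. The base URL, endpoint path, auth
# scheme and response shape are invented for illustration only; consult your
# provider's actual admin/compliance API documentation.
import json
import os

import requests  # pip install requests

API_KEY = os.environ["COMPLIANCE_API_KEY"]            # hypothetical admin credential
BASE_URL = "https://api.example-ai-provider.com/v1"   # placeholder, not a real endpoint

def export_workspace_conversations(workspace_id: str, out_path: str) -> int:
    """Fetch a workspace's conversations and write them to a preservation file."""
    response = requests.get(
        f"{BASE_URL}/compliance/workspaces/{workspace_id}/conversations",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    conversations = response.json()
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(conversations, f, indent=2)
    return len(conversations)

count = export_workspace_conversations("ws_hypothetical_123", "legal_hold_export.json")
print(f"Preserved {count} conversations under legal hold")
```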