A recent federal court decision offers an important warning for anyone facing divorce, custody, or other family law issues: conversations with public AI chatbots are not protected the way discussions with your lawyer are. In United States v. Heppner, decided on February 17, 2026, the court held that materials a defendant created using a publicly available AI tool were not shielded by attorney-client privilege or the work product doctrine. Although the case arose in a criminal context, the principles apply broadly and have clear implications for family law matters. The bottom line is simple and important: if you use AI to think through your case without your lawyer’s direction, what you type may be discoverable by the other side and potentially used against you in court.
What happened in Heppner?
The case involved Bradley Heppner, who was charged with federal offenses, including securities and wire fraud. After hiring lawyers, he independently turned to a public AI chatbot to help analyze his situation. He entered information he learned from his attorneys, asked the AI questions about legal strategy, and generated documents laying out possible approaches. He then shared those AI-generated materials with his legal team.
Investigators later seized the documents during a search. Heppner argued they were protected by the attorney-client privilege, which protects confidential communication between a lawyer and client, or by the work product doctrine, which protects materials prepared for litigation. The court rejected both arguments and allowed the government to review everything he had typed into the AI chatbot and the resulting outputs.
This factual setup is not unusual in family law. People under intense stress often look for immediate help and may experiment with AI to organize thoughts or test arguments. Heppner shows why that can be risky when done outside counsel’s guidance.
Why weren’t the documents protected?
The court’s reasoning was straightforward and highlights several concepts that matter in divorce and custody disputes.
- An AI chatbot is not a lawyer. There was no attorney-client relationship with the AI, which means no privilege. Privilege covers only communications between you and your lawyer (or your lawyer's agents) made in confidence for the purpose of obtaining legal advice. Software, no matter how sophisticated, does not fall into that category.
- There was no reasonable expectation of confidentiality. Many public AI platforms explain in their privacy policies that user inputs may be collected, used to improve the service, or shared under certain circumstances. By entering sensitive information into such a tool, the user effectively discloses it to a third party. In legal terms, that disclosure defeats the confidentiality required for privilege to apply.
- The work was not directed by counsel. The work product doctrine can protect materials prepared at a lawyer's direction in anticipation of litigation. Because Heppner acted on his own rather than at his lawyer's instruction, the court found the doctrine did not apply. The distinction between independently created materials and those prepared as part of counsel's strategy was pivotal.
- Giving non-privileged materials to your lawyer does not retroactively make them privileged. The court emphasized that documents created outside the privileged relationship do not become protected simply because you later share them with your lawyer.
The practical takeaway is clear: if you independently use a public AI tool to analyze your case or your lawyer’s advice, you risk creating materials that the opposing party can request and review.
What does this mean for your divorce or custody case?
The implications for family law are significant because these cases often involve highly personal facts, evolving strategy, and digital discovery. The following scenarios illustrate common risks and one narrow circumstance where protection may be stronger.
Hypothetical #1: The Custody Strategy Session
Sarah is preparing for a custody hearing and worries about how a judge will view older allegations of her ex’s substance use. Late at night, she asks an AI: “My lawyer said the incident from two years ago may not be admissible. How can I address that? What arguments should I make?” The AI generates several pages of strategy, which she emails to her lawyer the next morning.
The risk here is twofold. By relaying what her lawyer told her to a public AI platform, Sarah likely waived the confidentiality of that specific legal advice. And because her lawyer did not direct her to create the document, it likely is not protected as work product. In contentious cases, opposing counsel often seeks texts, emails, and files from personal devices; Sarah’s AI-created strategy memo could be discoverable.
Hypothetical #2: The Hidden Assets Investigation
David suspects his spouse is concealing marital assets in a closely held business. He enters detailed financial information, some of it shared in confidence by his attorney, into AI to “map the money.”
The danger is that David may expose confidential strategy and sensitive financial data to a third-party platform. If that data is retained or later disclosed, the other side could gain insight into his legal team’s approach. Even if the content never becomes public, the mere act of disclosure can undermine privilege.
Hypothetical #3: The Domestic Violence Safety Plan
Maria is planning to leave an abusive relationship. She uses an AI to outline her exit plan, including her spouse’s behavior patterns, the assets she intends to take, and her lawyer’s guidance about a protective order.
This scenario raises acute safety concerns. If her spouse gains access to her devices or if the AI provider’s data is compromised or disclosed, her safety plan could be exposed. Beyond privilege, digital security and timing are critical in domestic violence situations. Sensitive planning should occur through secure, attorney-managed channels.
Hypothetical #4: When AI Use Might Be Protected
Tom’s divorce lawyer asks him to use a specific AI tool to organize a financial timeline and gives precise instructions on what to input. Tom prepares the document at his lawyer’s request for use in the case.
This looks closer to protected work product because Tom acted at his lawyer’s direction in anticipation of litigation, potentially as the lawyer’s agent. Even so, there remains a separate and serious question: whether the AI platform’s terms and data practices erode confidentiality.
Direction from a lawyer helps, but it does not cure the risk posed by a public tool’s privacy policy. The consistent lesson across these scenarios: who directs the work and where it happens both matter. Attorney guidance can strengthen protection, but public AI platforms can still undermine confidentiality.
Practical advice for anyone in a family law matter
Talk to your lawyer before using AI for anything related to your case. This is not anti-technology advice; it is about protecting your rights. Your lawyer can tell you whether AI makes sense for a task, what to avoid inputting, and how to handle any outputs.
Treat public AI tools as if what you type could someday be read in court. Because of how many platforms handle user data, that is a real possibility. Avoid entering sensitive facts, legal strategy, or anything your lawyer has told you in confidence.
Do not paste your lawyer’s advice into an AI. Sharing confidential legal advice with a third party risks waiving privilege. If you need a second explanation, ask your lawyer directly.
If AI will be part of your case, ask about attorney- or law firm-managed options. Some law firms use tools with contractual safeguards and stricter confidentiality terms. These solutions are safer than consumer apps used at home and are better aligned with privilege and work product protection.
If you are a legal practitioner reading this, set expectations early. A clear instruction ("do not use AI to discuss your case unless we advise you to") can prevent avoidable privilege problems and safety risks.
The Bigger Picture
Clients are adopting powerful new technologies faster than the law can fully account for them. Heppner underscores that the legal system still relies on long-standing rules about confidentiality, third-party disclosures, and attorney direction. Those rules apply just as much to AI as they do to emails, texts, and cloud storage.
For families and individuals navigating divorce, custody, support, or domestic safety, the message is encouraging as well as cautionary. The risk is real, but it is also manageable. Keep sensitive communications within the lawyer-client relationship, involve your lawyer before you use AI for case-related tasks, and rely on secure, lawyer-approved tools when technology can help. That approach harnesses the benefits of innovation without compromising your legal protections or personal safety.