There Is No Attorney-Client Privilege Between You and ChatGPT
By Kerrie Spencer
- AI hallucinates legal citations — and courts have dismissed real cases because of it.
- Personal injury law is hyper-local; AI doesn’t know your jurisdiction, your judge, or your deadline.
- Anything you type into a chatbot about your case can be subpoenaed and used against you.
- A generic AI demand letter tells the insurance company you don’t know what your case is worth.
Your clients are talking to AI about their cases. They're typing injury details, facts of their accidents, what they told their doctors, what they're afraid of, what the other side might argue. They're doing it before they call you. They're doing it the night before their deposition. They're doing it because it feels private, because the interface is calm and reassuring, and because they don't know any better.
They should know better, and as their lawyer, you should advise them to treat chatbots the same way they treat social media: assume everything they type is public and could destroy their case.
A landmark ruling handed down on February 10, 2026, answered what Judge Jed Rakoff of the Southern District of New York called "a question of first impression nationwide." The answer is exactly as bad as you'd expect.
The Heppner Decision: Everything Your Clients Are Getting Wrong in One Case
Bradley Heppner, the former CEO of a Dallas financial services firm, was facing a federal securities and wire fraud investigation. He had engaged defense counsel. He knew a grand jury subpoena had been issued. And somewhere in that stretch of high-stakes legal limbo between the subpoena and his indictment, he opened an AI tool and started typing.
Over the following weeks, Heppner used Anthropic's Claude — a publicly available consumer version — to generate approximately 31 documents. Defense strategy. Anticipated charges. Responses to the government's likely arguments. He was essentially building his own case file, with an AI as his thinking partner. Then he handed those documents to his lawyers.
When federal agents arrested Heppner in November 2025 and executed a search warrant on his home, they seized his devices. The AI-generated documents were on them. Defense counsel asserted privilege. The government moved to strip it.
Judge Rakoff agreed with the government. The documents were protected neither by attorney-client privilege nor by the work product doctrine. Heppner's claim failed the three-part privilege test on every element:
- The AI is not a lawyer.
- There was no attorney-client relationship between Heppner and the platform.
- The communications were not confidential: Claude's terms of service explicitly stated that user inputs and outputs could be collected, used for model training, and disclosed to third parties, including government authorities.
- And because Heppner had initiated the conversations on his own, without direction from counsel, work product protection did not attach either.
Perhaps the most alarming piece of the ruling was what it said about waiver. Heppner's prompts contained information his attorneys had shared with him. Judge Rakoff indicated that feeding privileged attorney communications into a third-party AI platform may constitute a waiver of privilege over those original communications themselves. Not just the AI documents. The underlying legal advice.
Your client doesn't have to be facing federal fraud charges for this to matter. The reasoning in Heppner extends to civil litigation, regulatory investigations, workplace matters, and personal injury claims. Anywhere a client discusses their legal situation with a public AI tool, they may be handing the other side a roadmap.
The Privilege Problem Was Already Building Before Heppner
Heppner crystallized a principle courts had been circling. Judge Oetken, also in the Southern District of New York, had already ruled in the OpenAI copyright litigation that some 20 million ChatGPT conversation logs were likely subject to compelled production — finding that users hold a "diminished privacy interest" in their AI conversations. That ruling covered users broadly, not just litigants.
What makes this genuinely dangerous for personal injury practices is the client behavior that drives it. Injury victims are not typically sophisticated about privilege doctrine. They are stressed, in pain, and trying to understand a process that is unfamiliar to them. AI tools feel helpful in that moment. They explain things in plain language. They don't charge by the hour. They're available at 2 a.m. when anxiety peaks.
Adam Greene, a partner and personal injury attorney at the Steinberg Law Firm in Charleston, South Carolina, said, "Every day, I handle worker injury cases, and their complexity cannot be overstated. If you are using AI for legal advice, proceed with great caution. There is no attorney-client privilege, meaning everything you type in is discoverable by the other side, including insurance companies and their legal counsel. Not everything is accurate, and the platforms will even make up information when they have no reliable source to answer a prompt."
So clients type things. They describe how the accident happened. They speculate about what they might have done differently. They ask what the insurance company is going to argue. They input medical records for summaries. They ask what their case is worth. Every one of those inputs is a potential disclosure to a third-party platform whose terms of service reserve the right to use, train on, and disclose what users share.
And if that case ends up in litigation, those conversations are discoverable.
What the AI Tells Them Doesn't Help Either
There's a separate but related problem on the output side. AI tools hallucinate. Confidently. Repeatedly.
The Mata v. Avianca case in the Southern District of New York in 2023 became the first major public example of what happens when AI-generated legal content gets tested in court. A plaintiff's attorney submitted a brief with six case citations. All six were fabricated by ChatGPT. The attorney had not verified a single one. Opposing counsel flagged the fake cases, the court held hearings, sanctions were issued, and the suit was dismissed.
Smith v. Farwell (Suffolk Superior Court, Massachusetts, 2024) followed the same pattern — three separate pleadings, all containing AI-fabricated citations, filed by an attorney who admitted the work had been done by interns using AI and that he'd never reviewed it before filing.
A California appellate court fined two firms a combined $31,000. A federal Walmart personal injury case resulted in three lawyers being fined and one removed from the matter entirely. In Colorado, a state supreme court investigation found that an attorney knew, before filing, that his ChatGPT-drafted motion contained fabrications, and he filed it anyway. He accepted a 90-day suspension.
A 2024 Cornell University study found hallucinations in nearly six out of ten AI-generated legal responses.
The point is not that AI is unreliable as a general matter. The point is that injury victims receiving AI-generated guidance on their claims are operating on a foundation that is wrong almost as often as it is right — without any mechanism to know the difference. They don't know which two of their five takeaways are fabricated. Neither, often, does the AI.
What This Means for Your Practice Right Now
Attorneys who handle injury cases day in and day out recognize something insurance adjusters figured out a long time ago: the side with better information wins. In Plain English recently published an article titled "Why AI Should Not Be Your Personal Injury Lawyer." Paul Greenberg, a Chicago car accident lawyer at Briskman Briskman & Greenberg, told the publication, "When you come to them [insurance companies] armed with AI-generated information, they know it. A generic demand letter signals that you don't understand the full value of your claim, and they can use that to their advantage to minimize payout."
Besides the risk of a lower payout or case dismissal, the Heppner ruling carries a specific recommendation for attorneys, stated plainly in the post-decision commentary from practitioners who reviewed the opinion: add AI disclosure language to your engagement letters.
Tell clients explicitly, in writing, before they sign anything, that communications with public AI tools are not privileged. That anything they type into ChatGPT, Claude, Gemini, or any similar consumer platform about their case may be discoverable. That they should not discuss facts, strategy, medical records, or legal theories with any AI tool without first asking you whether it is safe to do so.
That is not an overreaction to a fringe scenario. It is the direct implication of a federal ruling issued this year, in the Southern District of New York, by one of the most respected judges on the federal bench.
Beyond the engagement letter, there is a client intake conversation that needs to happen. Clients who have already used AI to research their accident, draft communications to the insurance company, or organize their thinking about the case need to tell you what they shared and where. That information shapes discovery risk and strategy. Finding out about it through opposing counsel's requests is not a good outcome.
The Bigger Picture for Law Firm Leaders
The AI privilege problem is not going away. As these tools become more embedded in daily life, the gap between how clients perceive them (private, helpful, trustworthy) and what courts have said about them (discoverable, not confidential, no attorney-client relationship) will generate more cases like Heppner. The legal community should expect adversaries to begin routinely requesting AI prompt and output logs in discovery. Judge Rakoff's opinion essentially handed litigators a new line item for their discovery checklists.
Law firm leaders who address this proactively — through updated intake processes, client education protocols, and engagement letter language — are protecting their clients. They are also protecting the integrity of the privilege itself, which is only as strong as the habits of the people it covers.