Are AI Tools Practicing Law? Courts Are Starting to Weigh In
Tuesday, April 7, 2026

In February, the internet was abuzz with commentary[1] regarding a decision of the United States District Court for the Southern District of New York that treated a criminal defendant’s “chat” with a popular AI as a waiver of confidentiality, subjecting such conversations to use by the prosecution. See United States v. Heppner, 2026 U.S. Dist. LEXIS 32697. Coincidentally, that very day another U.S. District Court decision went what might seem to be the other way, though in a case where the party was acting as her own attorney. See Warner v. Gilbarco, Inc., 2026 U.S. Dist. LEXIS 27355.

When it comes to AI, a lot can happen in the blink of an eye, and by March the courts had been pulled into yet another novel AI controversy. Nippon Life Insurance, a corporate defendant, settled with Graciela Dela Torre, a former employee, only to find itself wallpapered with pro se pleadings filed by Dela Torre seeking to set the settlement and dismissal aside. Nippon is now a plaintiff, suing OpenAI and alleging that ChatGPT prepared, falsely verified the validity of, and encouraged the filing of pleadings Nippon claims to have spent $300,000 defending. See Nippon Life Insurance Company of America v. OpenAI Foundation and OpenAI Group PBC, No. 1:26-cv-02448 (N.D. Ill. Mar. 4, 2026).

We all know that under Fed. R. Civ. P. 11 and its state equivalents one can seek sanctions from an attorney of record, as opposed to the client, where counsel knew or should have known that the pleadings could not be filed in good faith. Nippon asserts that its former employee went back to the counsel who handled her claim “expressing her belief that the terms of the settlement Agreement resulted from potential errors or omissions of important facts and documentation. Dela Torre further expressed her desire to challenge or reopen the settlement due to those perceived errors and omissions.” Complaint at ¶ 48. Her counsel allegedly rejected their former client’s allegations of error or omission and explained to her the import of a release and dismissal with prejudice. Id. at ¶ 49. Like any enterprising pro se litigant, Dela Torre appealed to a higher authority—ChatGPT, which agreed with her characterization of her former attorney’s response as “gaslighting” and happily ground out pleadings, complete with fictitious citations, to reopen the case.

A review of this trilogy is insightful as to the complexities that lie ahead.

Chats Between a Represented Party and ChatGPT Aren’t Protected

On its face, Judge Rakoff’s decision in Heppner seems rather obvious. Clients frequently ask their buddy Claude (or ChatGPT) to help analyze their case so they can “explain” to their lawyer what the attorney “needs” to consider in connection with the representation. A client who does so with one of the popular large language models that train on users’ prompts does not have the benefit of attorney-client or work product privileges over the communications to and from the AI. United States v. Heppner, 2026 U.S. Dist. LEXIS 32697. The court’s logic is compelling. The client certainly isn’t talking to a lawyer, and a client who gives copies of their own, non-confidential chats with a lay person to their lawyer after the fact can’t shield the original communication with the bot, foolishly kept in the client’s possession, if those files are obtained lawfully via search warrant at the time of arrest.

The defendant knew he was a target of a federal investigation, and the fact that he claimed to be doing this expressly for the purpose of preparing communications to counsel didn’t make his exchanges with someone other than his lawyer attorney work product, or subject to attorney-client privilege. Applying some human, as opposed to artificial, intelligence, one knows that if a party reviewed notes about what they planned to discuss with counsel with a buddy over drinks, and left copies of those notes with their pal, a subpoena for those notes is not likely to be stopped by a claim of privilege. Heppner treats the chats like conversations with a human. Id. at 7. Hence, the government could have at it with the electronic records of those conversations seized at the time of the defendant’s arrest.

But What About When the Party is Representing Themselves?

Interestingly, Warner v. Gilbarco, Inc. takes what could be viewed as the opposite tack, although that decision turns at least in part on the fact that Warner was pro se. The court noted that for a pro se litigant, such chats are their work product and their mental impressions, all formed in anticipation of litigation. Warner v. Gilbarco, Inc., 2026 U.S. Dist. LEXIS 27355. Whereas Heppner treated the chat as if it were a disclosure to a person, Warner looked at whether the pro se litigant’s storage of work product in a manner not reasonably likely to reach her adversary constituted a work product waiver. Since the opponent had no real access to the work product, the court viewed the disclosure to a large language model as administrative and of no consequence to the adverse party’s waiver argument: to the extent Defendants argued that Plaintiff waived work-product protection by using ChatGPT, the waiver would have had to be a disclosure to an adversary, or one made in a way likely to put the material in an adversary’s hands. Warner v. Gilbarco, Inc., 2026 U.S. Dist. LEXIS 27355, at 12.

Two Takeaways

First, what is a real lawyer with human intelligence to do?

One might ask who is to blame for the foolishness of spilling your guts about conduct that is the subject of a criminal investigation or ongoing civil litigation to an unsecured large language model. The most obvious answer is the client, but protecting clients from their own mistakes falls within counsel’s job description as well.

The time has come to recognize that even sophisticated clients don’t get that ChatGPT is not a closed system, and even if they do, they don’t connect that fact with its implications—sharing information with consumer-level AI is like giving your buddy a copy of what you are sending to your lawyer. Don’t do it.

All of which suggests that prudent counsel would include a simple AI warning among the documents every client executes in connection with retention. We started telling clients years ago to keep their litigation matters out of social media. Now it’s time to tackle the new online fixation. At retention, clients should acknowledge in writing something akin to:

“Consumer large language models (LLMs) like ChatGPT, Grok, and Claude are frequently trained in part on the data that users supply. We use only secure, confidential, closed AI systems. If you, as the client, make the mistake of sharing case-related information with any of the popular consumer AIs, or of using those systems to prepare or review our communications, you risk confidential information leaking out over the internet, or worse, the other side may get hold of your discussions with the AI and use them against you.”

Second, this genie is not going back in the bottle. The legal system is going to have to make and enforce judgments as it confronts questions about what AI can and cannot do. If AI is a virtual person, what do we do with ones that don’t have the sense to respect their own limitations, or abide by a law that imposes strict limits on the conduct of humans? If a virtual person is not subject to law, is an entity that markets one without reasonable constraints against its known misuse liable when it acts in a way that would be illegal if it were human, and causes foreseeable harm?

This is not a First Amendment issue akin to the freedom from liability that is afforded a platform designed to support public discourse.[2] This is not a plea to lock away law books from the public so that lawyers can gatekeep the practice of law.[3] This is about the effectiveness of disclaimers as a tool to isolate a commercial entity from liability, when the entity knows those disclosures are ineffective, and that it is selling a product that is being used in a manner that violates the law and causes harm.

Nippon pleaded it well:

“A programmer may be held liable for tortious interference with a contract when they knowingly design, market, and support software intendent [sic] to facilitate unlawful conduct. See MDY Industries, LLC v. Blizzard Entertainment, Inc., 629 F.3d 928 (9th Cir. 2010). Intent may be inferred where the developer has actual knowledge of user’s violations and takes affirmative steps that materially contribute to those violations.” Complaint at ¶17.

Tesla encountered liability for misuse of its “Full Self-Driving” and “Autopilot” software,[4] the very terms of which contradict its warnings that its cars need supervision. See California Department of Motor Vehicles, Case No. 21-02188 (“In the Matter of the Accusation Against: TESLA INC., dba TESLA MOTORS INC., a Vehicle Manufacturer”). Tesla knew or should have known that its warnings were ineffective,[5] and has since stepped back from those terms.[6]

OpenAI and Anthropic are aware that their tools can be used to review and prepare legal documents. Claude’s legal plugin purports to be “for commercial counsel, product counsel, privacy/compliance, and litigation support teams,”[7] and its use is promoted to automate contract review, prepare legal briefings, and generate redline suggestions for clause negotiation. So, I downloaded and installed it—without ever being asked to produce a law license. Sure, it comes with a disclaimer: “All outputs should be reviewed by licensed attorneys.”

Can the large language models convince a jury that they credibly rely on such disclaimers, particularly when phrased as suggestions? Do they want to have to produce even a week’s worth of the pleadings ChatGPT has prepared for lay litigants?

One thing I feel safe in saying—OpenAI is not going to let ChatGPT represent itself.

Conclusion

Taken together, Heppner, Warner, and Nippon underscore that courts are now confronting AI not as a curiosity but as a force reshaping privilege, work product, and even the boundaries of what it means to practice law. They also highlight the growing tension between how developers market these tools and how consumers use them, despite disclaimers about the risks. As Judge Rakoff observed, “the implications of AI for the law are only beginning to be explored”[8]—a point these cases vividly illustrate as judges, lawyers, and technologists are pushed to define responsibility and guardrails in a landscape where disclaimers alone no longer feel sufficient.

[1] See, e.g., CLIENT ALERT: Southern District of New York Judge Rakoff Rules that Defendant Accused of Fraud Cannot Assert Privilege Over AI-Generated Documents, Sher Tremonte; A.I. Documents Deemed Not Privileged, EDRM – Electronic Discovery Reference Model, JDSupra; AI Docs Sent By Exec To Attys Not Privileged, Judge Says, Law360; SDNY Rules AI-Generated Documents Are Not Protected by Privilege, Debevoise Data Blog; Motion for a Ruling that Documents the Defendant Generated Through an Artificial Intelligence Tool Are Not Privileged at 7, United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 6, 2026), Dkt. No. 22; Court Declines Privilege Protection for Client-Generated AI Documents, Leech Tishman: Legal Services; Conversations with AI Not Protected By Attorney-Client Privilege; US v. Heppner (SDNY): AI-Generated Documents Aren’t Privileged, r/Lawyertalk; Your AI Conversations Are Not Privileged, Falcon Rappaport & Berkman LLP; https://x.com/TheValueist/status/2022040868618920001; When AI Isn’t Privileged: SDNY Rules Generative AI Documents Not Protected, McGuireWoods LLP, JDSupra.

[2] Social media platforms such as Meta, TikTok, Facebook, and YouTube claim protection under 47 U.S.C. § 230. See, e.g., M.P. v. Meta Platforms Inc., 127 F.4th 516 (2025); Doe v. Facebook, Inc., 142 S. Ct. 1087 (2022); Anderson v. TikTok, Inc., 116 F.4th 180 (2024); Gonzalez v. Google LLC, 598 U.S. 617 (2023).

[3] See, e.g., Henderson v. Crosby, 883 So. 2d 847, 851 (2004).

[4] See, e.g., Benavides v. Tesla, Inc., 2026 U.S. Dist. LEXIS 34587.

[5] In 2024, the NHTSA determined that insufficient controls in Tesla’s Autopilot system could lead to foreseeable driver disengagement and avoidable crashes. https://static.nhtsa.gov/odi/inv/2022/INCR-EA22002-14496.pdf

[6] California DMV confirms that Tesla agreed to stop using the term “Autopilot” in marketing its cars: https://www.dmv.ca.gov/portal/news-and-media/tesla-takes-corrective-action-to-avoid-dmv-suspension/.

[7] https://claude.com/plugins/legal.

[8] Heppner, 2026 U.S. Dist. LEXIS 32697, at 1.
