Courts Grapple with AI's Role in Legal Proceedings

Rulings on AI-generated documents and pleadings raise questions about privilege, work product, and the practice of law

Apr. 7, 2026 at 1:53pm

As AI tools like ChatGPT become more prevalent in legal proceedings, courts are grappling with how to apply existing laws and principles to these new technologies. (NYC Today)

Recent court decisions have highlighted the complex legal issues surrounding the use of AI tools like ChatGPT in legal proceedings. In one case, a criminal defendant's conversations with ChatGPT were deemed not protected by attorney-client privilege, while in another, a pro se litigant's use of ChatGPT was viewed as protected work product. Meanwhile, an insurance company has sued OpenAI, alleging that ChatGPT prepared and encouraged the filing of improper pleadings. These cases illustrate the challenges courts face as they work to define the boundaries of AI's role in the legal system.

Why it matters

As AI tools become more prevalent, courts are being forced to grapple with how they impact fundamental legal concepts like privilege, work product, and the unauthorized practice of law. These rulings will have far-reaching implications for how attorneys and their clients can (or cannot) use AI in their legal work, and could lead to liability for AI developers whose tools are misused.

The details

In the Heppner case, the court held that a criminal defendant's conversations with ChatGPT were not protected by attorney-client privilege, viewing them as akin to sharing information with a non-lawyer. In contrast, the court in the Warner case found that a pro se litigant's use of ChatGPT was protected work product, since the opposing party did not have access to those communications. Meanwhile, the Nippon complaint alleges that OpenAI is liable for ChatGPT preparing and encouraging the filing of improper pleadings, which the insurance company claims cost it $300,000 to defend.

  • In February 2026, the Heppner and Warner decisions were issued on the same day.
  • In March 2026, the Nippon lawsuit against OpenAI was filed.

The players

United States v. Heppner

A criminal case in the U.S. District Court for the Southern District of New York that treated a defendant's conversations with ChatGPT as not protected by attorney-client privilege.

Warner v. Gilbarco, Inc.

A civil case in a U.S. District Court that found a pro se litigant's use of ChatGPT was protected work product.

Nippon Life Insurance Company of America

An insurance company that is suing OpenAI, alleging that ChatGPT prepared and encouraged the filing of improper pleadings that cost Nippon $300,000 to defend.

OpenAI

The artificial intelligence company that developed the ChatGPT language model, which is at the center of the Nippon lawsuit.

What they’re saying

“A programmer may be held liable for tortious interference with a contract when they knowingly design, market, and support software intendent [sic] to facilitate unlawful conduct.”

— Nippon Life Insurance Company of America, Plaintiff

What’s next

The judge in the Nippon case will need to determine whether OpenAI can be held liable for pleadings that ChatGPT allegedly prepared and encouraged the plaintiff's adversary to file.

The takeaway

These cases illustrate the complex legal issues surrounding the use of AI tools like ChatGPT in legal proceedings. Courts are grappling with how to apply existing legal principles around privilege, work product, and the unauthorized practice of law to these new technologies, raising questions about the responsibility of AI developers and the limitations of disclaimers.