Lawyers Warn Clients: AI Chats Could Be Used Against You in Court

Federal judge's ruling that AI conversations are not protected by attorney-client privilege sparks warnings from US lawyers.

Apr. 15, 2026 at 5:10pm

[Image: illustration of AI server infrastructure] As AI chatbots become more prevalent, the legal risks of using them for sensitive communications are becoming increasingly clear. (Los Angeles Today)

As more people turn to AI chatbots like ChatGPT and Claude for advice, US lawyers are cautioning clients that their conversations with these AI systems could be demanded as evidence in criminal or civil cases. This comes after a federal judge ruled that a former CEO's chats with Anthropic's chatbot Claude were not protected by attorney-client privilege and had to be handed over to prosecutors.

Why it matters

The ruling highlights the legal risks of using AI chatbots, even when seeking advice related to legal matters. Lawyers are now advising clients to be extremely careful about what they discuss with AI systems, as those conversations may not be considered confidential and could potentially be used against them in court.

The details

Lawyers at major US law firms have been issuing advisories warning clients to proceed with caution when using AI chatbots. They suggest choosing AI platforms carefully, phrasing prompts to make a lawyer's involvement explicit, and generally treating AI as a tool rather than a confidant. The case that sparked these warnings involved former GWG Holdings CEO Bradley Heppner, who used Anthropic's Claude chatbot to prepare reports for his lawyers. A judge ruled that Heppner had to turn over those AI-generated documents to prosecutors, holding that attorney-client privilege does not extend to chatbots.

  • In February 2026, a federal judge in New York ruled that a former CEO had to hand over documents generated by Anthropic's chatbot Claude.
  • In March 2023, the law firm Sher Tremonte included a clause in a client contract stating that disclosing privileged communications to an AI platform could waive attorney-client privilege.

The players

Bradley Heppner

The former chair of bankrupt financial services company GWG Holdings and founder of alternative asset firm Beneficient. He was charged with securities and wire fraud, and used Anthropic's Claude chatbot to prepare reports for his lawyers.

Anthropic

The artificial intelligence company that created the chatbot Claude, which was used by Heppner and is the subject of the court ruling.

OpenAI

The artificial intelligence company that created the chatbot ChatGPT, which a different court ruled could be considered part of a person's 'work-product' in a lawsuit.

Jed Rakoff

The Manhattan-based U.S. District Judge who ruled that Heppner had to hand over the documents generated by Anthropic's chatbot Claude.

Anthony Patti

The U.S. Magistrate Judge in Michigan who ruled that a woman representing herself in a lawsuit did not have to hand over her chats with OpenAI's ChatGPT.


What they’re saying

“We are telling our clients: You should proceed with caution here.”

— Alexandria Gutiérrez Swette, Lawyer, Kobre & Kim

“No attorney-client relationship exists 'or could exist, between an AI user and a platform such as Claude'.”

— Jed Rakoff, U.S. District Judge

“ChatGPT and other generative AI programs 'are tools, not persons'.”

— Anthony Patti, U.S. Magistrate Judge

What’s next

Courts are expected to issue more rulings to clarify when AI chats can be used as evidence, as the use of AI in legal contexts continues to raise new challenges.

The takeaway

Until courts draw clearer lines, conversations with AI chatbots should be treated as potentially discoverable. They are not shielded by attorney-client privilege and could be compelled as evidence, even when the underlying matter is legal in nature.