Federal Court Rules Consumer AI Use Can Waive Legal Privilege

Judge Jed S. Rakoff of the Southern District of New York finds that a defendant's use of the AI chatbot Claude did not qualify for attorney-client privilege or work-product protection.

Published on Mar. 4, 2026

In a first-of-its-kind decision, a federal judge ruled that a criminal defendant's written exchanges with the publicly available AI chatbot Claude were not protected by attorney-client privilege or the work-product doctrine, even though the defendant used the AI to draft materials related to his legal defense. The court emphasized that AI platforms like Claude are not attorneys and are not bound by an attorney's confidentiality obligations, so communications with them cannot be considered privileged.

Why it matters

This ruling provides important guidance on how traditional legal privilege rules apply when clients use consumer AI tools to assist with legal matters. It suggests that if a client enters privileged or sensitive information into a public AI platform, that act may waive confidentiality, and the information could potentially be accessed by third parties, including the government.

The details

The defendant, who was charged with securities and wire fraud, argued that documents he generated using the AI chatbot Claude were protected by attorney-client privilege and the work-product doctrine. Judge Rakoff rejected both arguments. Communications with an AI system like Claude do not meet the standard for privilege, he found, because the AI is not an attorney and the exchanges are not confidential: the platform's privacy policy permits disclosure to third parties. The court also held that the defendant's independent use of the AI to prepare materials, without direction from counsel, did not qualify for work-product protection, which shields a lawyer's mental impressions and strategy, not a client's independent brainstorming.

  • On February 17, 2026, Judge Jed S. Rakoff of the Southern District of New York issued the ruling in the case of United States v. Heppner, No. 25-cr-503 (JSR).

The players

Judge Jed S. Rakoff

A federal judge in the Southern District of New York who issued the ruling in the case.

Anthropic

The company that developed the AI chatbot Claude, which was at the center of the court case.

United States

The prosecution in the criminal case against the defendant.

Heppner

The criminal defendant who used the AI chatbot Claude to assist with his legal defense.


What they’re saying

“Communications with an AI system—even one used to generate content for an attorney—do not meet this standard because AI platforms are not attorneys or agents of attorneys.”

— Judge Jed S. Rakoff (jdsupra.com)

“The court emphasized that Claude expressly disclaims providing legal advice and is not a professional subject to fiduciary duties or professional discipline.”

— Commentary on the ruling (jdsupra.com)

What’s next

The defendant may appeal the ruling. An appellate decision could set binding precedent for how courts treat privilege claims over the use of consumer AI tools in legal matters.

The takeaway

This case highlights the risks of using consumer AI tools in legal matters: the court found that entering sensitive information into a public AI platform can waive attorney-client privilege and work-product protection, even when the intent was to prepare for conversations with a lawyer. Attorneys and clients should exercise caution when incorporating consumer AI into legal work.