Lawyer Cited Fake Cases Generated by ChatGPT in Legal Filings
Courts across multiple jurisdictions have flagged AI-fabricated case citations in legal filings, leading to sanctions and reputational damage for lawyers.
Published on Mar. 10, 2026
In June 2023, a federal judge in Manhattan faced an unusual problem when two lawyers submitted a legal brief citing six court cases that did not actually exist. The cases were completely fabricated by ChatGPT, complete with realistic citations, plausible holdings, and convincing legal reasoning. When confronted, one of the lawyers testified that he had asked ChatGPT whether the cases were real, and the AI had said yes. This was not an isolated incident, as courts across multiple jurisdictions have flagged similar issues with AI-generated case citations in legal filings.
Why it matters
The use of AI-generated content in legal filings raises serious concerns about the integrity of the judicial process. Fabricated case citations that appear legitimate can lead to sanctions, mandatory ethics disclosures, and reputational damage for lawyers. This trend also highlights the broader challenge of distinguishing AI-generated content from genuine information, which has implications across fields like medicine, academia, and finance.
The details
The Mata v. Avianca case in New York is one of the most widely cited examples, but similar issues have been reported in courts across multiple jurisdictions. In some instances, opposing counsel caught the fake citations, while in others, judges discovered the fabricated cases during their review. The common thread is that the AI-generated citations were specific enough, with real-sounding case names, realistic court citations, and plausible legal reasoning, that they passed the drafting lawyer's review.
- In June 2023, Judge P. Kevin Castel of the Southern District of New York sanctioned attorneys Steven Schwartz and Peter LoDuca $5,000 after they submitted a brief citing six nonexistent cases generated by ChatGPT.
- The Mata v. Avianca case is one of the most widely cited examples of this problem, but similar incidents have since surfaced in other jurisdictions.
The players
Mata v. Avianca
The 2023 federal case in the Southern District of New York in which lawyers filed a brief citing six nonexistent cases fabricated by ChatGPT; it became one of the first widely reported examples of AI-generated fabrications in legal proceedings.
Judge P. Kevin Castel
The federal judge in Manhattan who identified the fabricated citations, held a sanctions hearing, and fined the attorneys involved.
What they’re saying
“We must not let individuals continue to damage the integrity of the judicial process through the use of AI-generated content.”
— Robert Jenkins, San Francisco resident (San Francisco Chronicle)
What’s next
Courts and legal professionals are likely to increase scrutiny of AI-generated content in legal filings. Some federal judges have already issued standing orders requiring lawyers to disclose AI use or certify that every citation has been verified against an authoritative source, and bar associations may follow with formal guidance.
The takeaway
The use of AI-generated content in legal filings highlights the broader challenge of distinguishing fabricated information from genuine data, which has implications across various industries. Addressing this issue will require a combination of technological solutions, such as multi-model verification, and changes to professional practices and incentives.
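One practical technological step, sketched below purely as an illustration (this is not any court's or firm's actual tooling), is to mechanically extract every reporter-style citation from a draft brief so each one can be checked against a real legal database before filing. The regex here covers only a handful of common federal reporters; the sample text reuses citations reported in the Avianca filings.

```python
import re

# Match reporter citations such as "925 F.3d 1339" or "516 U.S. 217":
# a volume number, a reporter abbreviation, and a first-page number.
CITATION_RE = re.compile(
    r"\b(\d{1,4})\s+"                                   # volume
    r"(U\.S\.|S\. Ct\.|F\.[234]d|F\. Supp\. [23]d)\s+"  # common reporters only
    r"(\d{1,4})\b"                                      # first page
)

def extract_citations(text: str) -> list[str]:
    """Return every reporter citation found in the text, in order."""
    return [" ".join(m.groups()) for m in CITATION_RE.finditer(text)]

brief = (
    "See Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019); "
    "cf. Zicherman v. Korean Air Lines, 516 U.S. 217 (1996)."
)
for cite in extract_citations(brief):
    # Each extracted citation still needs verification against an
    # authoritative database; extraction alone proves nothing.
    print(cite)
```

A checklist like this catches nothing by itself; its value is that it turns "did you verify the cases?" into a mechanical, auditable step rather than relying on a drafting lawyer's impression that the citations look real.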