AI-Generated X-Rays Fool Radiologists, Raising Fraud Risks
Study finds medical professionals struggle to distinguish real scans from AI fakes
Mar. 27, 2026 at 2:25pm
A new study led by researchers at Mount Sinai's Icahn School of Medicine has found that radiologists struggle to reliably detect AI-generated X-ray images, even when they are aware the dataset contains fakes. The findings raise concerns about the potential for fraudulent use of this technology, including in medical litigation and cybersecurity attacks on hospital networks.
Why it matters
As generative AI technology advances, the ability to create convincing forgeries of medical images poses serious risks. AI-generated X-rays that are indistinguishable from real ones could be used to falsely claim injuries, manipulate diagnoses, or cause widespread disruption in healthcare settings. This vulnerability threatens patient safety, doctor-patient trust, and the integrity of medical evidence.
The details
Researchers tested 17 radiologists from six countries on their ability to distinguish real X-rays from AI-generated fakes. In one test using images created with ChatGPT-4o, radiologists were 75% accurate when told the set contained AI fakes, but only 41% accurate when not told. In a second test with RoentGen, a diffusion model specialized for chest X-rays, radiologists scored 62–78% accuracy. The study also found that even advanced models like ChatGPT-4o struggled to reliably detect the AI-generated X-rays. (A sketch of how such accuracy figures are computed follows below.)
- The study was published on March 27, 2026.
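Those percentages are reader-study accuracies: the share of images each radiologist correctly labeled as real or synthetic. The study's raw responses are not reproduced here, so the sketch below uses invented data purely to illustrate how such a score, and its uncertainty relative to the 50% coin-flip baseline, would be computed.

```python
# Illustrative only: shows the kind of accuracy computation a reader study
# like this one relies on. The response data below are invented.
from math import sqrt

def accuracy(labels, guesses):
    """Fraction of images whose real/fake label the reader called correctly."""
    correct = sum(l == g for l, g in zip(labels, guesses))
    return correct / len(labels)

def wilson_interval(p_hat, n, z=1.96):
    """95% Wilson score interval for a proportion, to show how far a
    reader's score sits from the 50% coin-flip baseline."""
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical reader: 60 images, half real ("R"), half AI-generated ("F").
labels  = ["R", "F"] * 30
guesses = ["R", "F"] * 22 + ["F", "R"] * 8   # 44 correct, 16 wrong

p = accuracy(labels, guesses)
lo, hi = wilson_interval(p, len(labels))
print(f"accuracy = {p:.0%}, 95% CI [{lo:.0%}, {hi:.0%}]")
```

With only a few dozen images per reader, such intervals are wide, which is worth bearing in mind when comparing figures like 75% versus 41%.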
The players
Mickael Tordjman
The lead author of the study, an MD and post-doctoral fellow at the Icahn School of Medicine at Mount Sinai.
Mount Sinai's Icahn School of Medicine
The academic medical center where the research was conducted.
ChatGPT-4o
The large language model used to generate some of the AI-created X-ray images in the study.
RoentGen
A specialized diffusion model trained to generate believable chest radiographs, used in the second part of the study.
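For context on what a text-to-image diffusion model involves in practice: RoentGen is described by its authors as a Stable Diffusion variant fine-tuned on chest X-rays paired with radiology reports, so producing a synthetic scan takes little more than a text prompt. Below is a minimal sketch using the Hugging Face diffusers library, assuming a locally available diffusers-compatible checkpoint; the path, prompt, and sampling settings are placeholders, not the model's actual distribution.

```python
# A minimal sketch of how a text-to-image diffusion model such as RoentGen
# produces a synthetic chest radiograph. The checkpoint path below is a
# placeholder: the real RoentGen weights are distributed by its authors and
# may not be packaged exactly like this.
import torch
from diffusers import StableDiffusionPipeline

MODEL_PATH = "path/to/roentgen-checkpoint"  # hypothetical location

pipe = StableDiffusionPipeline.from_pretrained(MODEL_PATH, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Radiology-report-style text conditions the image the model samples.
prompt = "Chest X-ray, PA view, small right-sided pleural effusion"
image = pipe(prompt, num_inference_steps=50, guidance_scale=4.0).images[0]
image.save("synthetic_cxr.png")
```

The point of the sketch is the workflow's simplicity: conditioning text in, believable radiograph out, with no imaging hardware involved.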
What they’re saying
“Our study demonstrates that these deepfake X-rays are realistic enough to deceive radiologists, the most highly trained medical image specialists, even when they were aware that AI-generated images were present.”
— Mickael Tordjman, Lead author of the study
“Believable AI-generated medical imagery creates a high-stakes vulnerability for fraudulent litigation if, for example, a fabricated fracture could be indistinguishable from a real one.”
— Mickael Tordjman, Lead author of the study
“There is also a significant cybersecurity risk if hackers were to gain access to a hospital's network and inject synthetic images to manipulate patient diagnoses or cause widespread clinical chaos.”
— Mickael Tordjman, Lead author of the study
What’s next
Tordjman said future work will focus on developing educational datasets and detection tools to help radiologists and other medical professionals better identify AI-generated medical imagery.
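Tordjman does not specify an architecture for those detection tools, but one common shape for the problem is a standard image classifier fine-tuned on labeled real and synthetic scans. The PyTorch sketch below is illustrative only; the dataset layout, model choice, and hyperparameters are all assumptions.

```python
# One plausible shape for the "detection tools" Tordjman describes: a binary
# classifier fine-tuned to flag synthetic radiographs. Everything here is
# illustrative; the study does not publish a detector, and the dataset
# folders named below are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects train/real/*.png and train/synthetic/*.png (hypothetical layout).
data = datasets.ImageFolder("train", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # real vs. synthetic
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```

Detectors trained this way can latch onto artifacts of the specific generator they saw during training, which is one reason education for human readers remains part of the planned work.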
The takeaway
This study highlights the growing threat of AI-powered forgeries in the medical field, which could undermine trust in diagnoses, medical evidence, and the healthcare system as a whole. Addressing this vulnerability will require ongoing vigilance, advanced detection methods, and education for medical professionals.