AI in Medicine Needs Less Hype, More Honesty
A Yale professor argues that total transparency is the only way AI can become a trustworthy tool for physicians and patients.
Published on Mar. 4, 2026
A Yale School of Medicine professor argues that while AI is already helping in areas like radiology and drug discovery, the public conversation around AI in healthcare often treats it as an all-knowing oracle rather than a powerful tool with clear limitations. The danger is that when reality falls short of the hype, trust in both science and medicine erodes. The author calls for more honest communication about AI's capabilities and limitations to build public trust.
Why it matters
AI tools in medicine risk following the same path as past biomedical advances that were overhyped and then faded when real-world complexities emerged. When AI systems reproduce social biases or make opaque decisions, they can cause real harm to patients, especially those from underrepresented groups. Maintaining public trust in science and medicine requires transparent communication about AI's strengths and weaknesses.
The details
AI models are only as reliable as the data they are trained on, and when that data underrepresents certain groups, predictions become less accurate and less safe for those populations. AI systems have also been shown to perpetuate social biases, as with an Amazon recruiting tool that favored resumes containing male-associated terms. Many AI systems are 'black boxes' that cannot explain their reasoning, a serious problem in medicine, where clear explanations are crucial. Studies have likewise found large gaps between AI's benchmark performance and its real-world reliability in providing medical advice.
- In December 2025, the FDA qualified its first AI tool to help pathologists analyze biopsy images in clinical trials for liver disease treatments.
The players
María Rodríguez Martínez
An associate professor of biomedical informatics and data science at Yale School of Medicine and a Public Voices Fellow with The OpEd Project in partnership with Yale University.
Amazon
A company that developed an AI recruiting tool that was found to favor resumes with male-associated terms and penalize those mentioning 'women's' activities or women's colleges.
COMPAS
A risk-assessment algorithm (Correctional Offender Management Profiling for Alternative Sanctions) widely adopted to estimate recidivism risk and guide bail and sentencing decisions; it was found to falsely label Black defendants as high-risk nearly twice as often as White defendants.
What they’re saying
“The danger is not that the technology is useless — far from it. AI is already helping radiologists flag suspicious scans and accelerating drug discovery. The danger is that our public conversation treats AI as an all-knowing oracle, rather than a powerful tool with clear limitations.”
— María Rodríguez Martínez, Associate Professor (medscape.com)
The takeaway
To build public trust in AI's use in medicine, scientists, journalists, and tech companies must communicate with honesty about AI's capabilities and limitations, emphasizing what remains unknown, what assumptions were made, and who might be left out. Overselling AI's promise while downplaying its flaws will only erode trust when reality falls short of the hype.