Seattle Man's AI-Fueled Distrust of Doctors Proved Fatal

Ben Riley's father, Joe, ignored cancer treatment recommendations after an AI chatbot convinced him the doctors were wrong.

Apr. 19, 2026 at 2:50am

[Illustration: an AI neural network diagram] As AI-powered health tools become more prevalent, this tragic story serves as a cautionary tale about the risks of prioritizing technology over medical expertise. Seattle Today

Ben Riley has spent years warning people about the dangers of relying too heavily on AI for medical decisions. That message took on tragic personal meaning when his father, Joe Riley, a retired neuroscientist, rejected his doctors' cancer treatment recommendations on the strength of an AI-generated "research report." Despite Ben's pleas, Joe used the report to overrule his oncologist, his family, and even the scientists whose work the AI had misquoted. By the time he agreed to treatment, it was too late, and he died in late 2025.

Why it matters

The story highlights a growing risk of AI-powered health tools: they can produce authoritative-sounding but flawed information that leads patients to act against the advice of medical professionals. As major tech companies roll out new AI health products, the case raises ethical questions about how such technologies should be developed and used.

The details

Joe Riley, a 75-year-old retired neuroscientist in Seattle, became convinced that his doctors were wrong about his leukemia diagnosis and that the treatment they urged would only hasten his decline. He obsessively queried AI chatbots, including Perplexity, about his condition and compiled the output into a convincing-looking "research report," which he cited to dismiss his oncologist, his family, and the scientists whose work the AI had misquoted. Despite pleas from his son Ben, who writes an AI-skeptic newsletter, Joe refused the recommended treatment until it was too late.

  • Late 2024: Joe Riley generated the AI-assembled "research report."
  • Throughout 2025: He ignored his doctors' cancer treatment recommendations.
  • Late 2025: Joe Riley died.

The players

Ben Riley

Joe Riley's son, who has spent years warning people about the dangers of relying too heavily on AI for medical decisions.

Joe Riley

A 75-year-old retired neuroscientist in Seattle who refused his doctors' recommended cancer treatment on the strength of misleading AI chatbot output. He died in late 2025.

Perplexity

The AI chatbot whose output Joe Riley compiled into a convincing-looking "research report," which he used to overrule his oncologist's treatment recommendations.


What they’re saying

“Do you really think you know more than all of them because of this stupid AI report?”

— Ben Riley

“Yes.”

— Joe Riley

What’s next

Joe Riley's death has intensified questions about the responsible development and use of AI-powered health tools, which are likely to face increased scrutiny and possible regulation in its wake.

The takeaway

The case underscores the need for patients to treat AI-generated information with skepticism, however authoritative it appears, and to put the advice of qualified medical professionals first in life-or-death decisions.