AI Chatbots Offer Health Advice, But Caution Advised

Experts say new AI health chatbots can be useful, but users should approach with skepticism and not rely on them for major medical decisions.

Published on Mar. 2, 2026

With hundreds of millions of people turning to chatbots for advice, tech companies like OpenAI and Anthropic have introduced AI-powered chatbots designed to provide health and medical guidance. While these programs can summarize test results, help prepare for doctor visits, and analyze health data, experts warn they are not a substitute for professional care and users should approach them with caution.

Why it matters

The rise of AI chatbots offering health advice raises concerns about the accuracy and reliability of the information provided, as well as privacy issues around sharing sensitive medical data with tech companies not bound by healthcare privacy laws.

The details

OpenAI's ChatGPT Health and Anthropic's Claude can analyze users' medical records, wellness-app data, and wearable readings to answer health questions. However, both companies stress that the tools are not meant for diagnosis and that users should still seek professional care. Early studies have identified communication problems: chatbots can struggle to elicit key details from users, and their answers may mix accurate information with inaccurate advice. Experts recommend consulting multiple chatbots and maintaining a 'degree of healthy skepticism' when using them, especially for major health decisions.

  • In January 2026, OpenAI introduced ChatGPT Health.
  • A 2024 study by Oxford University examined how people used AI chatbots to research hypothetical health conditions.

The players

OpenAI

An artificial intelligence research company that has introduced ChatGPT Health, a version of its chatbot designed to provide health and medical guidance.

Anthropic

An AI company that offers similar health-focused chatbot features through its Claude chatbot program.

Dr. Robert Wachter

A medical technology expert at the University of California, San Francisco who sees AI chatbots as an improvement over the status quo, though he recommends consulting multiple chatbots for a second opinion.

Dr. Lloyd Minor

The dean of Stanford University's medical school, who stresses that consumers need to understand the different privacy standards when sharing medical information with AI companies versus healthcare providers.

Adam Mahdi

The lead author of a 2024 Oxford University study, which found that people who used AI chatbots to research hypothetical health conditions made decisions no better than those who relied on online searches or their own judgment, largely because of communication breakdowns between users and the chatbots.

What they’re saying

“The alternative often is nothing, or the patient winging it. And so I think that if you use these tools responsibly, I think you can get useful information.”

— Dr. Robert Wachter, Medical technology expert, University of California, San Francisco

“If you're talking about a major medical decision, or even a smaller decision about your health, you should never be relying just on what you're getting out of a large language model.”

— Dr. Lloyd Minor, Dean, Stanford University Medical School

“The place where things fell apart was during the interaction with the real participants.”

— Adam Mahdi, Lead author, Oxford University study

What’s next

Experts say chatbots' ability to ask follow-up questions and draw out key details from users must improve before the technology can be fully relied upon for health advice.

The takeaway

While AI chatbots offer a new way for people to access health information, users should approach them with caution, provide as much personal medical context as possible, and never rely on them for major health decisions without also consulting a medical professional.