Experts Debate Risks and Rewards of AI in Medicine

Liability, bias, and the blurring of "wellness vs. medicine" pose challenges as AI transforms healthcare

Published on Feb. 15, 2026

Experts at the Davos Imagination in Action event discussed the promise and pitfalls of using AI in healthcare. While AI has the potential to streamline processes, improve efficiency, and save lives, concerns remain around liability, bias, the ambiguous line between wellness and medicine, and the risk of "jagged intelligence," in which AI models produce plausible but subtly flawed outputs. Panelists emphasized the need for AI systems to show more "epistemic humility" and recognize when they are uncertain, rather than making decisions with unwarranted confidence. The story also explores how regulations can sometimes hinder the real-world application of AI-powered health insights, as when Whoop was denied emergency use authorization to alert users to potential COVID-19 cases.

Why it matters

As AI becomes more prevalent in healthcare, it is crucial to address the ethical and practical challenges to ensure the technology is deployed responsibly and equitably. Unresolved issues around liability, bias, and the blurring of wellness and medical domains could undermine public trust and lead to harmful outcomes if not properly managed.

The details

The panel at the Davos Imagination in Action event featured experts discussing key issues with AI in medicine. Emily Capodilupo of Whoop noted the difficulty of clearly defining "health data" as distinct from other types of data, since the lines are becoming increasingly blurred. Vivek Natarajan of Google DeepMind emphasized the need to focus on "agency and empowerment" for healthcare workers and patients, rather than framing AI as a replacement. Anurang Revri of Stanford Healthcare suggested AI could make healthcare workflows more intelligent by automating administrative tasks so clinicians can focus on clinical care. Shalabh Gupta, CEO of Unicycive, discussed how AI is improving drug development by leveraging larger datasets and helping match patients to the right clinical trials. Natarajan, however, also warned of "jagged intelligence," in which AI models produce plausible but subtly flawed outputs that are difficult to verify.

  • The Davos Imagination in Action event took place in January 2026.
  • Whoop originally created its wearable device to help elite athletes, but found during the pandemic that the device could detect COVID-19 cases before symptom onset.

The players

Emily Capodilupo

Co-founder and Chief Data Officer at Whoop, a wearable device company.

Vivek Natarajan

AI researcher at Google DeepMind.

Anurang Revri

Chief Enterprise Architect at Stanford Healthcare.

Shalabh Gupta

CEO of Unicycive, a pharmaceutical company.

Ami Bhatt

Physician who moderated the panel discussion at the Davos Imagination in Action event.


What they’re saying

“I think one of the big challenges that's going to be interesting to navigate is people think that when we talk about intelligent health or what health data is that you can define that, put it in a neat box and clearly say what's health and what's medicine versus what isn't. And these are increasingly getting incredibly blurred.”

— Emily Capodilupo, Co-founder and Chief Data Officer, Whoop (Davos Imagination in Action)

“I feel that discourse is generally not helpful, because I think what we should be striving for, and we are hopefully striving for, is agency and empowerment. When we talk about replacement narratives, whether that's for healthcare workers or maybe even doctors, I mean, that's not the goal over here. The goal is agency and empowerment for people, and people who are providing care for them.”

— Vivek Natarajan, AI Researcher, Google DeepMind (Davos Imagination in Action)

“Healthcare is a very workflow-driven system. There are regulated steps of doing things, there are medical best practices. So intelligent medicine, in my mind, would be to actually make those workflows intelligent, actually. So we can split the task, which is the physicians, the clinicians, can focus on the clinical tasks, and all of the toil can be delegated down to the agents.”

— Anurang Revri, Chief Enterprise Architect, Stanford Healthcare (Davos Imagination in Action)

“Sometimes when you inspect the outputs from the model, they look absolutely plausible, like it's in some ways shockingly good. But then, when you spend enough time digging into the details, there can be subtle errors, and there can be hallucinations, and I can say that as the pace of AI progresses, the difficulty of verifying and catching these hallucinations also is increasing.”

— Vivek Natarajan, AI Researcher, Google DeepMind (Davos Imagination in Action)

“I think what we need is less of intelligence, more of epistemic humility from these models, knowing when they don't know, because when we reach that phase, then the model will be able to say that 'okay, this is a complex case.'”

— Vivek Natarajan, AI Researcher, Google DeepMind (Davos Imagination in Action)

What’s next

Experts at the Davos event emphasized the need for ongoing collaboration between the healthcare and AI communities to address the challenges of deploying AI responsibly in medicine. As the technology continues to advance, policymakers and regulators will also play a crucial role in establishing appropriate frameworks to ensure patient safety and equity.

The takeaway

While AI holds immense potential to transform healthcare, making it more efficient and accessible, there are significant ethical and practical hurdles that must be overcome. Addressing issues of liability, bias, the blurring of wellness and medical domains, and the risks of "jagged intelligence" will be critical to realizing the full benefits of AI in medicine without compromising patient care and public trust.