Students Favor AI Chatbots Until Identity Revealed

Study finds nursing students rate AI chatbot responses highly, but show bias when they know the source

Apr. 8, 2026 at 2:14am

Illustration: An abstract, geometric image in soft, muted colors depicting the relationship between students, professors, and AI chatbots in higher education, where trust and bias play a crucial role. (Cincinnati Today)

A study led by University of Cincinnati professor Joshua Lambert examined how nursing students evaluated responses from an AI chatbot, a professor, and a graduate assistant. The students rated the chatbot's responses as the most helpful and satisfying, but showed bias against the chatbot when they suspected a response came from it, suggesting a lack of trust in AI technology.

Why it matters

The study highlights the potential benefits of using AI chatbots in higher education, but also the challenges of building trust and acceptance among students. As AI becomes more prevalent, understanding student perceptions and biases will be crucial for effectively integrating the technology into teaching and learning.

The details

In Lambert's study, seven doctoral nursing students submitted statistical questions related to their capstone projects. Each student then received blinded responses from a professor, a graduate assistant, and an AI chatbot, and rated each response on helpfulness, satisfaction, and likelihood of use. The students rated the chatbot's responses highest on helpfulness and satisfaction. However, when asked to guess the source of each response, the students consistently attributed their lowest-rated responses to the chatbot, suggesting a bias against the AI technology.

  • The study was published in the Journal of Nursing Education in 2026.

The players

Joshua Lambert

An associate professor and biostatistician in the University of Cincinnati College of Nursing who led the study.

Robyn Stamm

A DNP-prepared associate professor of clinical nursing at the University of Cincinnati College of Nursing and co-author of the study.

Shannon White

A DNP-prepared assistant professor in the doctor of nursing practice program at the University of Cincinnati College of Nursing and co-author of the study.

Melanie Kroger-Jarvis

A DNP-prepared associate dean for graduate clinical learning programs at the University of Cincinnati College of Nursing and co-author of the study.

Bailey Martin

A postdoctoral research fellow, who holds a PhD, at the University of Colorado Anschutz Medical Campus and co-author of the study.


What they’re saying

“Students first gave us their questions and then we gave them three responses back in a blinded and randomized fashion so students were unaware which response came from either the professor, graduate assistant or chatbot.”

— Joshua Lambert, Associate Professor

“Students preferred the large language model (LLM) chatbot's responses when blinded yet demonstrated a bias against it when the source was suspected. This bias is likely rooted in a lack of trust, and trust may influence AI adoption by both students and professors.”

— Joshua Lambert, Associate Professor

What’s next

Researchers suggest that larger studies, replicated in multiple sites with additional qualitative and quantitative data, are needed to thoroughly evaluate AI chatbot tools in nursing education and advising.

The takeaway

When judged blindly, the chatbot's answers won on their merits; once students suspected a machine was behind them, their ratings fell. The finding suggests that the biggest obstacle to AI in the classroom may not be the quality of the technology but the trust students place in it.