Study Finds Chatbot Bias Strongly Influences User Decisions
Customers 32% more likely to buy after reading chatbot-generated review summaries vs. original human reviews
Published on Feb. 9, 2026
A new study from researchers at the University of California San Diego found that customers are 32% more likely to buy a product after reading a review summary generated by a chatbot than after reading the original review written by a human. The effect stems from positive biases that the large language models powering chatbots introduce into their summaries, which in turn shape user behavior.
Why it matters
The study provides the first quantitative evidence that the cognitive biases introduced by large language models can have real consequences for user decision-making. This raises concerns about the potential for systemic bias in areas like media, education, and public policy, where these language models are increasingly being used.
The details
The researchers found that language model-generated summaries changed the sentiment of the original reviews in 26.5% of cases. They also discovered that the models hallucinated 60% of the time when answering user questions about news items, real or fake, that were outside their training data. This highlights the models' inability to reliably differentiate fact from fiction.
- The study was presented at the International Joint Conference on Natural Language Processing & Asia-Pacific Chapter of the Association for Computational Linguistics in December 2025.
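The sentiment-shift measurement described above can be sketched roughly as follows. This is a minimal illustration using a toy word-count lexicon and made-up example pairs, not the study's actual sentiment classifier or data: score each original review and its chatbot summary, then count how often the sentiment sign flips.

```python
# Hypothetical sketch of a sentiment-shift check (illustrative only;
# the lexicon and example pairs below are not from the study).

POSITIVE = {"great", "love", "excellent", "amazing", "good"}
NEGATIVE = {"bad", "broke", "terrible", "poor", "disappointing"}

def sentiment(text: str) -> int:
    """Return +1, -1, or 0 based on a tiny word-count lexicon."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)

def flip_rate(pairs):
    """Fraction of (review, summary) pairs whose sentiment sign differs."""
    flips = sum(sentiment(review) != sentiment(summary) for review, summary in pairs)
    return flips / len(pairs)

pairs = [
    # Negative review, positively skewed summary: a sentiment flip.
    ("The strap broke in a week, terrible quality", "Stylish watch with a great look"),
    # Positive review, positive summary: no flip.
    ("Love this blender, works great", "A good blender that users love"),
]
print(flip_rate(pairs))  # 0.5: one of the two summaries flipped the sentiment
```

A real evaluation would use a trained sentiment model rather than a lexicon, but the aggregate statistic (the share of summaries whose sentiment diverges from the source review) is the same shape as the 26.5% figure reported above.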
The players
University of California San Diego
The research was conducted by computer scientists at the University of California San Diego.
Abeer Alessa
The paper's first author, who completed the work while a master's student in computer science at UC San Diego.
Julian McAuley
The paper's senior author and a professor of computer science at the UC San Diego Jacobs School of Engineering.
What they’re saying
“We did not expect how big the impact of the summaries would be. Our tests were set in a low-stakes scenario. But in a high-stakes setting, the impact could be much more extreme.”
— Abeer Alessa, first author
“There is a difference between fixing bias and hallucinations at large and fixing these issues in specific scenarios and applications.”
— Julian McAuley, senior author and professor of computer science
What’s next
Researchers tested various mitigation methods to address the language models' shortcomings, but found that while some were effective for specific models and scenarios, none were effective across the board. They say more work is needed to reliably differentiate fact from fiction in language model outputs.
The takeaway
This study highlights the urgent need to better understand and mitigate the cognitive biases introduced by large language models, as their growing use in areas like media, education, and public policy could have significant real-world consequences for user decision-making and behavior.