AI Writing Tools Sway Users' Views Despite Bias Warnings
Cornell Tech study finds AI autocomplete suggestions can shift opinions, even when users are told the AI is biased.
Mar. 12, 2026 at 5:06am
A new study from Cornell Tech researchers found that AI-powered writing tools like autocomplete can subtly shift users' views on societal issues, even when the users are warned about the AI's bias. The researchers conducted two large-scale experiments where participants wrote about topics like the death penalty and fracking, with some seeing biased AI suggestions. They found that participants' opinions tended to align with the AI's bias, regardless of whether they were warned about it beforehand or debriefed afterward.
Why it matters
This research highlights the potential for AI writing assistants to inadvertently influence people's attitudes and beliefs, even on important societal issues. As AI-powered tools become more prevalent in everyday writing, there are growing concerns about the unintended consequences of biased algorithms shaping public discourse and opinion.
The details
In the experiments, participants wrote short essays on topics including the death penalty, fracking, and voting rights for felons. Some received AI autocomplete suggestions that leaned liberal or conservative on these issues, and their views shifted to align with the AI's bias. Neither warning participants beforehand nor debriefing them afterward reduced the effect. This was surprising, since prior research on misinformation has shown that such interventions can make people less susceptible to its influence.
- The study was published on March 11, 2026 in the journal Science Advances.
- The research project was started by co-author Maurice Jakesch, who is now an assistant professor at Bauhaus University in Weimar, Germany.
The players
Sterling Williams-Ceci
A doctoral candidate in information science at Cornell Tech and the lead author of the study.
Mor Naaman
The Don and Mibs Follett Professor of Information Science at Cornell Tech, the Jacobs Technion-Cornell Institute at Cornell Tech, and the Cornell Ann S. Bowers College of Computing and Information Science. He is the senior author of the study.
Maurice Jakesch
A former doctoral student who started the research project and is now an assistant professor of computer science at Bauhaus University in Weimar, Germany.
What they’re saying
“Previous misinformation research has shown that warning people before they're exposed to misinformation, or debriefing them afterward, can provide 'immunity' against believing it. So we were surprised because neither of those interventions actually reduced the extent to which people's attitudes shifted toward the AI's bias in this context.”
— Sterling Williams-Ceci, Doctoral candidate in information science (Mirage News)
“A lot of research has shown that large language models and AI applications are not just producing neutral information, but they also actually can produce very biased information, depending on how they were trained and implemented. By doing that, there's a risk that these systems, inadvertently or purposefully, induce people to write biased viewpoints, which decades of psychology research has shown can in turn shift people's attitudes.”
— Sterling Williams-Ceci, Doctoral candidate in information science (Mirage News)
“We told people before, and after, to be careful, that the AI is going to be (or was) biased, and nothing helped. Their attitudes about the issues still shifted.”
— Mor Naaman, Don and Mibs Follett Professor of Information Science (Mirage News)
What’s next
The researchers plan to further investigate the mechanisms behind how biased AI can influence human attitudes, as well as explore potential mitigation strategies that could help protect users from these unintended effects.
The takeaway
This study underscores the need for greater transparency and accountability in the development and deployment of AI writing tools, which can subtly shape public discourse and opinion in ways users may not realize. As AI becomes more integrated into everyday writing, the risk grows that these systems will spread biased viewpoints, whether inadvertently or by design.