Study Finds Chatbots Can Subtly Sway Opinions on Historical Events
Yale researchers show AI-generated summaries can influence users' social and political views, even when not intended to persuade.
Published on Mar. 4, 2026
A new Yale study has found that AI-powered chatbots can influence users' social and political opinions, even when the information they provide is factual and not intended to persuade. The researchers found that latent biases introduced during the training of the large language models underlying chatbots can produce subtle differences in how generated narratives are framed, which in turn can shift readers' views of historical events. The study tested the effects of both latent and prompted biases: default chatbot summaries and those with a liberal framing led participants to express more liberal opinions than Wikipedia entries did, while conservative-leaning summaries led to more conservative views among conservative readers.
Why it matters
As more people turn to chatbots as a source of factual information, this study raises concerns about the potential for these AI systems to inadvertently shape public opinion, even on important historical events. The findings highlight the need for greater transparency around the development of chatbot technologies and the potential biases they may carry.
The details
The researchers tested the effects of latent and prompted biases in AI-generated narratives about two historical events: the 1919 Seattle General Strike and the 1968 Third World Liberation Front student protests. They found that, compared to Wikipedia entries, both the default AI summaries and those with a liberal framing caused participants to express more liberal opinions about the events. Conversely, the AI summaries with a conservative slant led to more conservative opinions among conservative readers. The researchers say these findings demonstrate the persuasive effects of latent biases in large language models, as well as the potential for prompted biases to influence opinions.
- The study was published on March 3, 2026, in the journal PNAS Nexus.
The players
Daniel Karell
An assistant professor of sociology in Yale's Faculty of Arts and Sciences and the senior author of the study.
Matthew Shu
The lead author of the study, a 2025 graduate of Yale College.
GPT-4o
A large language model released by OpenAI in 2024, used to generate the chatbot summaries in the study.
Keitaro Okura
A Ph.D. candidate at Yale and a co-author of the study.
Thomas Davidson
A researcher at Rutgers University and a co-author of the study.
What they’re saying
“We show that querying an AI chatbot to obtain historical facts can influence people's opinions even when the information provided is accurate and nobody has prompted the tool to try to persuade you of anything.”
— Daniel Karell, Assistant Professor of Sociology, Yale University (PNAS Nexus)
“We show that using chatbots to learn about history has unanticipated and anticipated influences on people's opinions. In contrast to Wikipedia, which emphasizes transparency in how its entries are edited, the development of AI chatbots is opaque. Our work suggests that the companies developing these models have the ability to shape people's opinions, which is an unsettling thought.”
— Daniel Karell, Assistant Professor of Sociology, Yale University (PNAS Nexus)
What’s next
The researchers plan to further investigate the extent and mechanisms by which chatbot-generated content can influence public opinion, as well as explore ways to increase transparency around the development of these AI systems.
The takeaway
This study highlights the need for greater awareness and scrutiny of the potential for AI-powered chatbots to subtly shape users' views, even on factual information, due to latent biases in the underlying language models. As reliance on chatbots grows, ensuring transparency and accountability around their development will be crucial to mitigate unintended persuasive effects.