New Report Reveals Bias in Everyday AI Systems
Study finds AI chatbots and tools often lean in particular ideological directions, raising concerns about their influence on public opinion.
Apr. 13, 2026 at 8:25am
Glowing digital infrastructure illuminates the unseen biases embedded in the AI tools that guide our daily decisions.

A new report from the America First Policy Institute (AFPI) has found that many popular AI systems, including chatbots like Google's Gemini and OpenAI's ChatGPT, exhibit consistent ideological biases that can subtly influence how users perceive political issues, social topics, and news sources. The report highlights how these biases, combined with AI's persuasive power, raise serious concerns about the technology's role in shaping public opinion, especially among younger users. To address these risks, the report calls for greater transparency from tech companies about how their AI systems are designed and tested.
Why it matters
As AI becomes more integrated into everyday life, helping people search for information, complete tasks, and make decisions, the revelation that these systems often exhibit ideological biases is troubling. These biases can quietly shape how users understand the world around them, potentially influencing their beliefs and opinions on important issues without their knowledge or consent. This raises concerns about the role of AI in a healthy democracy and the need for greater oversight and accountability.
The details
The report from AFPI found that AI systems across the industry tend to lean in a center-left ideological direction, with one chatbot, for example, identifying multiple Republican senators as violating its hate speech policies while naming no Democrats. This pattern was not isolated to a single chatbot or tool, but appeared to be widespread. Researchers argue that the combination of AI's persuasive power and its ideological biases could have a significant influence on public opinion, especially among younger users who may trust these systems as objective.
- In 2024, Fox News Digital evaluated several leading AI chatbots, including Google's Gemini, OpenAI's ChatGPT, and Microsoft's Copilot, to assess potential racial bias.
- The AFPI report was released in April 2026.
The players
America First Policy Institute (AFPI)
A nonprofit organization that conducts research and advocates for conservative policies.
Matthew Burtell
Senior Policy Analyst for AI and Emerging Technology at AFPI, who led the research behind the report.
Google's Gemini
An AI chatbot developed by Google that was found to identify multiple Republican senators as violating its hate speech policies while naming no Democrats.
OpenAI's ChatGPT
An AI chatbot developed by OpenAI that has faced criticism from some researchers who argue its responses on political and cultural issues can skew in a particular ideological direction.
Microsoft's Copilot
An AI tool developed by Microsoft that has drawn scrutiny for how it frames controversial topics and limits certain viewpoints.
What they’re saying
“What we found was a general ideological bias, not just in a particular model, but across the spectrum.”
— Matthew Burtell, Senior Policy Analyst for AI and Emerging Technology, AFPI
“AI is persuasive and it also leans left. So if you combine these two things, it may certainly have an influence on people's beliefs about different policies.”
— Matthew Burtell, Senior Policy Analyst for AI and Emerging Technology, AFPI
What’s next
The report calls for greater transparency from tech companies about how their AI systems are designed, what values they prioritize, how they are tested for bias and safety, and what incidents occur after deployment. The goal is to give the public enough information to evaluate these systems critically.
The takeaway
This report highlights the concerning reality that the AI systems we use every day, from search engines to chatbots, often exhibit ideological biases that can subtly influence how we perceive the world around us. As AI becomes more integrated into our daily lives, transparency and accountability in how these technologies are developed become paramount to ensuring they do not undermine the foundations of a healthy democracy.