Google's AI Summaries Raise Accuracy Concerns
Investigation finds one in 10 Google queries result in at least one inaccurate AI-generated summary
Apr. 10, 2026 at 6:26pm
The intricate digital infrastructure powering Google's AI summaries is a double-edged sword, as concerns over accuracy and reliability come to light.

A recent investigation has revealed that Google's AI-powered summaries, designed to provide concise overviews of search results, are not always accurate. The report found that one in 10 Google queries resulted in at least one summary with incorrect information, raising concerns about the reliability of AI-generated content. While Google has made improvements to its AI models over time, the analysis also uncovered a decline in the quality of source links used to support the summaries.
Why it matters
The inaccuracies in Google's AI summaries have broader implications, including potential disruptions to traditional media and the job market, as well as questions about the quality of information being presented to users. This issue highlights the need for robust testing, fact-checking, and human oversight to ensure the integrity of AI-generated content.
The details
The investigation, conducted by The New York Times and AI startup Oumi, found that one in 10 Google queries resulted in at least one summary with incorrect information. At Google's scale, that suggests tens of millions of search results every hour could be misleading or false (see the rough estimate after the figures below). Even when the information in a summary was correct, the sources cited did not always support its claims. The analysis also revealed a paradox: overall accuracy improved across Google's AI models over time, while the reliability of the cited sources declined.
- In October 2025, Gemini 2 had an accuracy rate of around 85%.
- In February 2026, Gemini 3 improved the accuracy rate to 91%.
- However, the same analysis also revealed a degradation in summary sourcing with Gemini 3, with erroneous source links appearing more than 56% of the time in 2026.
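To put the one-in-10 figure in rough context, the sketch below works out the hourly numbers. The query volume used here is an assumption (Google is commonly reported to handle on the order of 100,000 searches per second); it is not a figure from the investigation.

```python
# Back-of-the-envelope estimate of erroneous AI summaries per hour.
# Assumption: ~100,000 Google searches per second (a commonly cited
# public figure, not a number from the investigation).
queries_per_second = 100_000
error_rate = 1 / 10  # one in 10 queries yields at least one incorrect summary

erroneous_per_hour = queries_per_second * 3600 * error_rate
print(f"~{erroneous_per_hour / 1e6:.0f} million potentially erroneous summaries per hour")
# -> ~36 million, consistent with the "tens of millions every hour" claim
```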
The players
Google
The tech giant that developed the AI-powered summaries feature.
The New York Times
The media outlet that collaborated with Oumi on the investigation.
Oumi
The AI startup that worked with The New York Times on the investigation.
BBC journalist
A journalist who deliberately created a misleading article, which Google's AI summary bot then repeated within 24 hours, highlighting the vulnerability of AI systems to manipulation.
What’s next
As Google works to address the concerns raised by the investigation, the company must take a more proactive approach to improving the accuracy and reliability of its AI-generated summaries. This may involve investing in robust testing and fact-checking mechanisms to ensure the integrity of the information presented to users.
The takeaway
The inaccuracies in Google's AI summaries highlight the importance of balancing innovation and responsibility when it comes to AI-generated content. While the technology has the potential to revolutionize the way we access information, it is crucial that Google and other tech companies prioritize accuracy, transparency, and accountability to build trust with their users and ensure that AI systems are a force for good in the digital age.

