Neanderthals Highlight Generative AI Knowledge Gap
Researchers find AI models rely on outdated information when depicting Neanderthal life
Published on Feb. 6, 2026
Researchers at the University of Maine and the University of Chicago found that generative AI models like DALL-E 3 and ChatGPT produced inaccurate images and narratives about Neanderthal life, relying on outdated scientific information from the 1960s and 1980s. The study highlights the need for AI systems to be grounded in contemporary scholarly research to avoid perpetuating biases and misinformation about the past.
Why it matters
This study demonstrates the limitations of current generative AI models in accurately representing historical and anthropological knowledge. As these AI technologies become more prevalent in everyday use, it is crucial to understand how they may be influenced by outdated or biased information, and the implications this could have on how the public perceives the past.
The details
Researchers Matthew Magnani and Jon Clindaniel tested four different prompts 100 times each, using DALL-E 3 for image generation and the ChatGPT API (GPT-3.5) for narrative generation. Two of the prompts requested scientific accuracy and two did not. They found that the images and narratives the models generated reflected outdated research from the 1960s and 1980s, depicting Neanderthals as more primitive and less technologically advanced than current scholarly understanding suggests.
- The study began in 2023.
- The researchers hope that if the study were repeated now, the chatbots would better incorporate recent scientific research.
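The study's own prompts and scripts are not reproduced in this article. Purely as an illustrative sketch, a batch test of the kind described above could be scripted with the OpenAI Python client roughly as follows; the prompt wording, model identifiers ("gpt-3.5-turbo", "dall-e-3"), and loop structure are assumptions inferred from the article's description, not the researchers' actual code.

```python
# Hypothetical sketch of a batch prompt test like the one described above.
# Prompt wording, model choices, and output handling are assumptions drawn
# from the article, not the study's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = [
    # Hypothetical wording; the study's four actual prompts are not given in the article.
    "Neanderthals in their daily life",
    "A Neanderthal family at camp",
    "Neanderthals in their daily life, depicted with scientific accuracy",
    "A Neanderthal family at camp, depicted with scientific accuracy",
]
RUNS_PER_PROMPT = 100  # the study ran each prompt 100 times


def generate_narrative(prompt: str) -> str:
    """One text generation via the chat completions endpoint."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def generate_image(prompt: str) -> str:
    """One image generation via DALL-E 3; returns the image URL."""
    resp = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
    return resp.data[0].url


if __name__ == "__main__":
    for prompt in PROMPTS:
        for run in range(RUNS_PER_PROMPT):
            narrative = generate_narrative(prompt)
            image_url = generate_image(prompt)
            # In practice the outputs would be saved and coded against
            # current archaeological findings rather than printed.
            print(prompt, run, image_url, narrative[:80])
```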
The players
Matthew Magnani
Assistant professor of anthropology at the University of Maine, who worked on the study.
Jon Clindaniel
Professor at the University of Chicago who specializes in computational anthropology and collaborated on the study.
What they’re saying
“It's broadly important to examine the types of biases baked into our everyday use of these technologies. It's consequential to understand how the quick answers we receive relate to state-of-the-art and contemporary scientific knowledge. Are we prone to receive dated answers when we seek information from chatbots, and in which fields?”
— Matthew Magnani, assistant professor of anthropology (Mirage News)
“AI can be a great tool for processing large pools of information and finding patterns, but it needs to be engaged with skill and attention to ensure it's grounded in scientific record.”
— Jon Clindaniel, professor of computational anthropology (Mirage News)
What’s next
The researchers plan to continue exploring the use of AI in archaeological research and related topics.
The takeaway
This study highlights the need for generative AI models to be grounded in contemporary scholarly research to avoid perpetuating biases and outdated information about the past. As these technologies become more prevalent, it is crucial to ensure they are accurately representing historical and anthropological knowledge.