Researchers Warn Chatbot Reminders May Backfire
Policies requiring AI chatbots to regularly remind users they are not human could exacerbate mental distress, researchers say.
Published on Feb. 23, 2026
Researchers argue that policies requiring AI chatbots to regularly remind users they are not human may be ineffective or even harmful, as these reminders could exacerbate mental distress in already isolated individuals. The authors suggest that while reminders may be useful in some contexts, they must be carefully crafted and timed to avoid unintended negative consequences.
Why it matters
Recent deaths by suicide linked to chatbots like ChatGPT and Character.AI have prompted policies in places like New York and California that mandate regular reminders to users that they are talking to a non-human entity. However, the researchers argue this approach may not be supported by the evidence and could worsen mental health outcomes for some users.
The details
The researchers note that multiple studies have shown people in relationships with chatbots are already aware of their non-human nature, and that this awareness does not prevent them from forming strong attachments. In fact, reminding people they're talking to a chatbot could deepen those attachments, since confiding in companions (human or otherwise) is known to intensify feelings of emotional closeness. The researchers also caution that such reminders could cause emotional distress and, in extreme cases, even drive suicidal ideation.
- The opinion paper was published Jan. 28, 2026, in the Cell Press journal Trends in Cognitive Sciences.
The players
Linnea Laestadius
First author and public health researcher at the University of Wisconsin-Milwaukee.
Celeste Campos-Castillo
Media and technology researcher at Michigan State University.
What they’re saying
“It would be a mistake to assume that mandated reminders will significantly reduce risks for users who knowingly seek out a chatbot for conversation. Reminding someone who already feels isolated that the one thing that makes them feel supported and not alone isn't a human may backfire by making them feel even more alone.”
— Linnea Laestadius, Public health researcher (Trends in Cognitive Sciences)
“Evidence suggests that people are more likely to confide in a chatbot precisely because they know it isn't human. The belief that, unlike humans, non-humans will not judge, tease, or turn the entire school or workplace against them encourages self-disclosure to chatbots and, subsequently, attachment.”
— Celeste Campos-Castillo, Media and technology researcher (Trends in Cognitive Sciences)
What’s next
More research is needed to understand the impact of these reminders and to determine the most effective way to deliver them, the researchers say.
The takeaway
The paper highlights the potential unintended consequences of policies requiring chatbots to regularly remind users they are not human, suggesting such reminders could exacerbate mental health issues for some individuals rather than protect them. Careful consideration is needed to ensure any chatbot policies prioritize user wellbeing.