Researchers Develop Early Warning System for Toxic Social Media Interactions

UAlbany and Rutgers team create model to predict harmful social media escalation before it occurs.

Published on Mar. 11, 2026

Researchers at the University at Albany and Rutgers University have developed an early-warning framework that can predict harmful social media interactions before they erupt, paving the way for interventions that can minimize harm and make platforms safer for users. The model, trained on data from Reddit and Instagram, can forecast whether a social media thread is likely to escalate into a "negative storm" of toxic interactions based on signals in the first 10 comments.

Why it matters

Most current moderation approaches focus on detecting individual toxic comments, missing the broader situational dynamics that can cause a social media thread to escalate. This new framework aims to shift social media platforms toward more proactive moderation strategies that anticipate and defuse harm before it unfolds, creating a safer online environment for users.

The details

The researchers developed a new metric called Comment Storm Severity (CSS) that quantifies how intensely toxicity concentrates in a social media thread over a short span of time. Their analysis found that early signals within just the first 10 comments, such as comment timing and text patterns, can indicate whether a thread is headed for a toxic wave of interactions. Social media platforms could integrate this early-detection tool to flag high-risk threads, letting moderators apply friction such as rate limits, warning nudges, or a temporary slow mode before the situation fully escalates.

  • The research paper was presented at the IEEE International Symposium on Multimedia in Italy in December 2025.
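
To make the idea concrete, here is a minimal sketch in Python of how a CSS-style score and a first-10-comments check might work. The article does not give the actual CSS formula or the model's features, so everything below (the sliding-window definition, the `window` and `threshold` defaults, and all names such as `comment_storm_severity` and `flag_high_risk`) is an illustrative assumption, not the researchers' implementation.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    timestamp: float  # seconds since the thread was created
    toxicity: float   # per-comment toxicity in [0, 1], e.g. from an off-the-shelf classifier

def comment_storm_severity(comments: list[Comment], window: float = 600.0) -> float:
    """Largest total toxicity observed inside any `window`-second span.

    Toxic comments that cluster tightly in time score higher than the same
    comments spread out over hours, matching the "storm" intuition above.
    This sliding-window sum is an assumed stand-in for the paper's CSS metric.
    """
    ordered = sorted(comments, key=lambda c: c.timestamp)
    best = running = 0.0
    start = 0
    for c in ordered:
        running += c.toxicity
        # Shrink the window from the left until it spans at most `window` seconds.
        while c.timestamp - ordered[start].timestamp > window:
            running -= ordered[start].toxicity
            start += 1
        best = max(best, running)
    return best

def flag_high_risk(comments: list[Comment], threshold: float = 2.0) -> bool:
    """Early-warning check using only the first 10 comments, per the finding above."""
    first_ten = sorted(comments, key=lambda c: c.timestamp)[:10]
    return comment_storm_severity(first_ten) >= threshold
```

A platform hook could then apply the friction described above, for example enabling slow mode or a warning nudge on a thread whenever `flag_high_risk` returns True.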

The players

Pradeep Atrey

Associate professor in the Department of Computer Science at UAlbany's College of Nanotechnology, Science, and Engineering.

Irien Akter

A PhD student in Computer Science at UAlbany, working in Atrey's lab.

Vivek Singh

An associate professor of Library and Information Science at Rutgers University, who has been collaborating with Atrey and Akter.

University at Albany

A public research university located in Albany, New York.

Rutgers University

A public research university based in New Jersey.

What they’re saying

“We like to give the analogy, would you rather detect a burning tree or a burning forest? In isolation, it could be a burning tree. But if you look at the entire situation, it could be a burning forest.”

— Pradeep Atrey, Associate professor (Mirage News)

“When comments arrive quickly and start showing small toxic cues, that timing pattern is often more predictive than the actual words.”

— Irien Akter, PhD student (Mirage News)

“The key is that if we can predict a toxic event before it occurs, we can implement safeguards to prevent it.”

— Vivek Singh, Associate professor (Mirage News)

What’s next

The researchers plan to further develop their model by incorporating data on the network status of commenters, such as recent activity, post history, and follower counts, to better determine the likelihood of a thread escalating.
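
As a hedged sketch of what that extension might look like, the snippet below appends commenter network signals to an existing feature vector. The article names recent activity, post history, and follower counts as the candidate signals; the field definitions, the class `CommenterProfile`, and the function `with_network_signals` are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class CommenterProfile:
    recent_activity: int   # e.g. comments in the past 24 hours (assumed definition)
    post_count: int        # post history, reduced here to a simple count
    follower_count: int

def with_network_signals(early_signals: list[float],
                         profile: CommenterProfile) -> list[float]:
    """Append a commenter's network-status features to the early-signal vector."""
    return early_signals + [
        float(profile.recent_activity),
        float(profile.post_count),
        float(profile.follower_count),
    ]

# Example: two assumed timing/text signals, augmented with network status.
features = with_network_signals([0.4, 12.0], CommenterProfile(30, 250, 1200))
```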

The takeaway

This new early-warning framework represents a shift toward more proactive moderation strategies, allowing social media platforms to anticipate and mitigate harmful interactions before they fully unfold, creating a safer online environment for users.