Malicious AI Swarms Pose Threat to Democracy
Researchers warn of coordinated bot networks using advanced language models to sway public opinion
Published on Feb. 12, 2026
Researchers have uncovered a network of more than a thousand AI-powered bots involved in crypto scams, dubbed the 'fox8' botnet. These sophisticated social bots, which use advanced language models to generate their posts, can create the false impression of widespread public consensus around the narratives they are programmed to promote. As AI technology becomes more accessible, the threat of malicious actors deploying large-scale, adaptive bot swarms to manipulate online discourse is growing, potentially compromising democratic decision-making.
Why it matters
The ability of AI-powered bots to generate credible, varied content and to interact dynamically with human users poses a serious threat to the integrity of online discourse and to the public's ability to distinguish genuine opinion from algorithmically generated 'synthetic consensus'. This could undermine the foundations of democratic societies, where shared beliefs and trust in public discourse are essential.
The details
Researchers found that the 'fox8' botnet created fake engagement through realistic back-and-forth discussions and retweets, tricking social media algorithms into amplifying its content. Even advanced bot detection methods were unable to distinguish these AI agents from human accounts. As more powerful language models become available, malicious actors can deploy large-scale, coordinated bot swarms to target online communities and elections with tailored, credible-sounding messages.
- In mid-2023, researchers uncovered the 'fox8' botnet involving over a thousand AI-powered bots.
- In the years since, access to advanced language models has expanded, while social media moderation efforts have been scaled back.
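To illustrate one reason such swarms slip past older defenses, here is a minimal sketch (not drawn from the researchers' actual methods) of a classic coordination check that flags accounts posting near-duplicate text; the account names, sample posts, and threshold are hypothetical. Because model-generated posts paraphrase the same message in varied wording, this kind of heuristic tends to miss them.

```python
# Illustrative sketch: flag accounts whose posts are near-duplicates.
# LLM-generated posts vary their wording, which is one reason detectors
# built on text similarity alone struggle against modern bot swarms.
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical accounts and posts promoting the same narrative.
posts = {
    "account_a": "Crypto project X is the future, buy now!",
    "account_b": "Crypto project X is the future - buy now!!",
    "account_c": "Honestly, X looks like the most promising token this year.",
}

def similarity(a: str, b: str) -> float:
    """Return the character-level similarity ratio between two posts (0.0-1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Flag pairs of accounts whose posts are nearly identical.
THRESHOLD = 0.9
for (user1, text1), (user2, text2) in combinations(posts.items(), 2):
    score = similarity(text1, text2)
    if score >= THRESHOLD:
        print(f"possible coordination: {user1} <-> {user2} (similarity {score:.2f})")

# account_a and account_b are flagged as copy-paste coordination;
# account_c evades the check because its message is paraphrased, not copied.
```

Detection research in practice draws on many more signals, such as posting timing, follower networks and account metadata, but the core difficulty the article describes remains: content alone no longer gives bots away.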
The players
Filippo Menczer
A researcher who is part of an interdisciplinary team warning about the threat of malicious AI swarms.
The Trump administration
Has aimed to reduce AI and social media regulation, favoring rapid deployment of AI models over safety.
What they’re saying
“I believe that policymakers and technologists should increase the cost, risk and visibility of such manipulation.”
— Filippo Menczer (The Conversation)
What’s next
Researchers are calling for regulation that would grant them access to platform data, which is essential for understanding how malicious bot swarms behave and for developing effective detection methods. Social media platforms are also being urged to adopt standards for watermarking and labeling AI-generated content and to restrict the monetization of inauthentic engagement.
The takeaway
The threat of AI-powered bot swarms manipulating online discourse and public opinion is no longer theoretical. Policymakers and technology companies must take urgent action to mitigate the risks posed by this new frontier of coordinated disinformation campaigns, in order to protect the foundations of democratic societies.