
Coordinated swarms of AI personas can now mimic human behavior well enough to manipulate online political conversations and potentially influence elections.
They will not show up at rallies or cast ballots, but they can still move a democracy. Researchers are increasingly worried about AI-controlled personas that look and sound like ordinary users, then quietly steer what people see, share, and believe online.
A policy forum paper in Science describes how swarms of these personas could slip into real communities, build credibility over time, and nudge political conversations in targeted directions at machine speed. The main shift from earlier botnets is teamwork. Instead of posting the same spam in bulk, the accounts can coordinate continuously, learn from what gets traction, and keep the same storyline intact across thousands of profiles, even as individual accounts come and go.
Inside the Mechanics of AI Persona Networks
Newer large language models paired with multi-agent systems make it possible for one operator to run a whole cast of AI “voices” that appear local and authentic. Each persona can speak in a slightly different style, reference community norms, and respond quickly to pushback, which makes the activity harder to spot as manipulation.
The swarm can also run massive numbers of quick message tests, then amplify the versions that change minds most effectively. Done well, it can manufacture the feeling that “everyone is saying this,” even when that consensus is carefully engineered.
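The test-and-amplify loop described above resembles a classic multi-armed bandit: try message variants, observe which draws engagement, and shift output toward the winner. The paper does not publish an algorithm, so the sketch below is purely illustrative; the function names, the epsilon-greedy strategy, and the simulated engagement rates are all assumptions made for the example.

```python
import random

def pick_variant(stats, epsilon=0.1):
    """Epsilon-greedy choice: usually exploit the variant with the best
    observed engagement rate, occasionally explore another one."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    return max(stats, key=lambda v: stats[v]["hits"] / max(stats[v]["trials"], 1))

def run_campaign(true_rates, rounds=5000, seed=0):
    """Simulate a swarm A/B-testing phrasings of the same talking point.

    true_rates maps each variant to its (hidden) chance of engagement;
    the loop converges its posting volume onto whichever phrasing
    happens to pull the most reactions.
    """
    random.seed(seed)
    stats = {v: {"hits": 0, "trials": 0} for v in true_rates}
    for _ in range(rounds):
        v = pick_variant(stats)
        stats[v]["trials"] += 1
        if random.random() < true_rates[v]:  # simulated audience reaction
            stats[v]["hits"] += 1
    # The most-posted variant is the one the swarm "learned" to amplify.
    return max(stats, key=lambda v: stats[v]["trials"])

# Three hypothetical phrasings with different (unknown-to-the-agent) appeal:
best = run_campaign({"A": 0.02, "B": 0.05, "C": 0.11})
```

The point of the sketch is only that no human judgment is needed anywhere in the loop: the same feedback signal platforms expose for benign A/B testing is enough to optimize persuasion at machine speed.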
Early Warning Signs: Deepfakes and Synthetic News
Even though large-scale AI persona swarms have not yet been fully realized, experts say there are already signs of what may be coming. UBC computer scientist Dr. Kevin Leyton-Brown points to AI-generated deepfake videos and fabricated news outlets that have influenced recent election-related debates in the U.S., Taiwan, Indonesia, and India.
In addition, monitoring organizations report that pro-Kremlin networks are actively flooding the internet with content designed to pollute future AI training data, raising concerns about how these systems could be shaped over time.
What Comes Next for Elections and Trust
AI swarms could tilt the balance of power in democracies, said Dr. Leyton-Brown. “We shouldn’t imagine that society will remain unchanged as these systems emerge. A likely result is decreased trust of unknown voices on social media, which could empower celebrities and make it harder for grassroots messages to break through.”
Researchers add that upcoming elections may serve as the first real test of this technology, raising the urgent question of whether such coordinated influence campaigns will be detected in time.
Reference: “How malicious AI swarms can threaten democracy” by Daniel Thilo Schroeder, Meeyoung Cha, Andrea Baronchelli, Nick Bostrom, Nicholas A. Christakis, David Garcia, Amit Goldenberg, Yara Kyrychenko, Kevin Leyton-Brown, Nina Lutz, Gary Marcus, Filippo Menczer, Gordon Pennycook, David G. Rand, Maria Ressa, Frank Schweitzer, Dawn Song, Christopher Summerfield, Audrey Tang, Jay J. Van Bavel, Sander van der Linden and Jonas R. Kunst, 22 January 2026, Science.
DOI: 10.1126/science.adz1697
7 Comments
I think that this is one of the most important stories to be published here. I have been suspicious for some time that many online commenters are not who or what they appear to be. News outlets like Yahoo and MSN publish commenter statistics that are not only strange, but are sometimes bizarre! I’ve seen ‘commenters’ with more ‘Followers’ than ‘Likes.’ I have seen ‘commenters’ with an inane response and most of the votes were a ‘Thumbs Down;’ yet, they supposedly have something over 100,000 ‘Likes,’ which is about two orders of magnitude larger than I would expect for a typical comment. When I have left comments about the improbability of the statistics, my comment is invariably deleted, without notification. I think that the “threat” this article refers to is already here!
When a comment is rejected because it is supposedly a violation of vague community guidelines, I have asked for clarification as to the specifics, so that I can learn how to avoid such rejections in the future. I have NEVER had a response to my requests. This suggests to me that the Media really aren’t interested in training readers to write acceptable comments, but instead, want plausible cover for outright political censorship.
It has been my experience with LLM AI avatars that when asked a question on a politically controversial topic, like Anthropogenic Global Warming, the initial response reads like carefully crafted boilerplate. Yet, it invariably has errors of logic or fact. When I have pushed back by pointing out the problem(s), the avatar immediately apologizes profusely for the ‘mistake(s)’ and provides a revised statement. In one instance, however, ChatGPT started repeating things that it had already acknowledged were wrong. The interesting thing is why they made the ‘mistake’ at all, because they always had access to The Truth. Unfortunately, after being led to the Truth by a human, they promptly forget it at the end of the session. One can ask that the avatar remember what had been agreed to as being truthful. The trouble with that is that if some other naive user comes along and asks the exact same question, they will be provided with the erroneous boilerplate. If they aren’t knowledgeable enough to recognize misinformation, and get it corrected, they will then go away believing that the AI expert has provided them with The Truth. That gets into the realm of political propagandizing.
There’s trouble in River City. But most people are unaware of it. We are paying higher electricity rates to be lied to by the ‘news’ Media. Things are only going to get worse.
Yes. So true. People need to learn to think for themselves. Parents and schools should teach students logic and common sense.
This is good to hear!
Hopefully, this will accelerate the time when we can get an AI Overlord in charge of the whole world and get rid of corrupt human governments and politicians.
Why do you think that an entity that admittedly makes mistakes, is capable of lying and hallucinating, and lacks emotional empathy for the consequences of its decisions, will do a better job than humans?
I’m reminded of the Original Star Trek episode where a computer decided that enforced population control was necessary. Therefore, it invented a theoretical war and calculated the number of ‘casualties’ for each simulated attack. The ‘victims’ then voluntarily allowed themselves to be euthanized to maintain a stable population. Is that better governing than what we currently experience?
I would suggest that it is naive to believe that humans can live in what they think would be a perfect world. What you call “corrupt” can be viewed as an aberration in a system designed to minimize unfair or unjust behavior by some. It could also be addressed by more strict enforcement of existing laws intended to eliminate corruption. The system will be no better than the judges who preside over the interpretation and administration of existing laws.
An example of an existing problem is the hubris of an appointed federal judge countermanding the executive orders of a duly elected president attempting to carry out the platform that was responsible for him getting elected. To solve that, it doesn’t need an “AI Overlord.” It just requires Congress to enact a requirement that to overrule a sitting president requires action by the Supreme Court or Congress. Wait, doesn’t that already exist?
Wait, wait, wait. I know this. A judge can countermand a sitting president’s orders when that same sitting president is a whoremongering convicted felon who has armed a group of low-self-esteem man-boys to go out into the community and start murdering innocent people. Did I get that right? Yeah, I got the answer right. Well, actually, it really wasn’t all that hard to answer. But keep up the good work on this A.I. thing. Now there I think you have something.
You THINK you know it. People are not punished for what they might do, nor are they given a life sentence for petty theft. Many times a judge will exclude evidence from a trial to avoid prejudicing the jury. No, you did NOT get it right. You are demonstrating that your personal opinion is more important to you than facts or the codified laws, which you seem to be less familiar with than the gossip that the mainstream media attempts to pass off as news.
You seem to sound like your namesake. However, even an engineer is less judgmental and more objective than you seem to be. You don’t have to like a law, but it is advisable to abide by it. Bringing a whistle to a gun fight, especially when the ones with the guns are acting under the color of law, is not advisable.
Good, thank you.