
Social media users may find themselves caught up in an AI-driven phenomenon. (Image credit: Andriy Onufriyenko via Getty Images)
Swarms of artificial intelligence (AI) agents could soon flood social media platforms, spreading disinformation, harassing individuals and undermining democratic processes, researchers warn.
These "AI swarms" represent a new frontier in information warfare, capable of mimicking human behavior to evade detection while creating the illusion of a genuine grassroots movement, according to a study published Jan. 22 in the journal Science.
"Humans, generally speaking, are conformist," study co-author Jonas Kunst, a professor of communication at BI Norwegian Business School in Norway, told Live Science. "We often resist acknowledging it, and individuals differ to some degree, but all else being equal, we tend to assume that what the majority believes carries a certain weight. That's a trait these swarms can exploit with relative ease."
And if someone isn't swayed by the crowd, the swarm can become a tool of intimidation, suppressing arguments that challenge the AI's narrative, the researchers said. For example, a swarm could simulate an angry mob to pile onto a person with dissenting views and drive them off the platform.
The researchers didn't give a timeline for when AI swarms will arrive in our feeds. However, they noted that detection would be difficult, so the current extent of their deployment is unclear. Many people already see signs of bots' growing influence on social media, and the "dead internet theory" — the idea that bots generate most online content and activity — has gained traction in recent years.
Shepherding the flock
The researchers caution that this new threat compounds long-standing weaknesses in our information ecosystems, already damaged by what the study describes as "the erosion of critical thinking and a fractured consensus among citizens."
Anyone who uses social media knows it is increasingly divisive. The web is also full of automated bots — non-human accounts run by computer programs — which account for over half of all internet traffic. Traditional bots typically perform simple, repetitive tasks, such as posting identical inflammatory messages. While they can still do damage by spreading misinformation and amplifying skewed narratives, they are relatively easy to identify and rely on mass human coordination.
By contrast, the next generation of AI swarms is driven by large language models (LLMs) — the AI underpinning popular chatbots. Steered by an LLM, a swarm can adapt to the online communities it infiltrates, deploying a variety of personas that retain both memory and identity, according to the study.
"We characterize it as an autonomous organism that can self-organize, learn, adapt and specialize in exploiting human vulnerabilities," Kunst said.
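To make the idea of persona-carrying agents concrete, here is a minimal, hypothetical sketch of how one swarm member might combine a fixed persona with a running memory when composing a prompt for an LLM. The class, names and fields are illustrative assumptions, not details from the study; a real swarm would send the resulting prompt to a language model, which is deliberately left out here.

```python
from dataclasses import dataclass, field

@dataclass
class PersonaAgent:
    """Hypothetical sketch of one swarm agent: a fixed persona plus a memory."""
    name: str
    persona: str                                 # e.g. "a local parent worried about schools"
    memory: list = field(default_factory=list)   # posts it has seen, for consistent replies

    def observe(self, post: str) -> None:
        # Remember what it has seen so later replies stay in character and on topic.
        self.memory.append(post)

    def build_prompt(self, post: str) -> str:
        # In a real swarm this prompt would go to an LLM; here we only show
        # how persona and memory would be stitched together.
        context = " | ".join(self.memory[-3:])   # last few remembered posts
        return f"You are {self.name}, {self.persona}. Context: {context}. Reply to: {post}"

agent = PersonaAgent("Dana", "a local parent worried about schools")
agent.observe("The new curriculum was approved yesterday.")
prompt = agent.build_prompt("What do you think of the changes?")
```

Because each agent keeps its own memory, two agents seeing the same thread can produce different, internally consistent reactions — the property the study highlights as making swarms look like real communities.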

Researchers say an AI swarm will behave like people, making it harder to detect.
This kind of coordinated manipulation is not merely theoretical. Last year, Reddit threatened legal action against researchers who used AI chatbots in a study to sway the views of four million users in its popular r/changemyview forum. The researchers' preliminary findings suggested their chatbots were three to six times more persuasive than replies from real people.
A swarm could range from hundreds to thousands or even millions of AI agents. Kunst noted that the number scales with computing power and could be limited by countermeasures that social networks deploy against swarms.
Agent numbers aren't the whole story, though. Swarms could target smaller communities that would view a sudden influx of accounts with suspicion; there, only a few agents would be needed. And because swarms are more sophisticated than ordinary bots, the team noted they may wield greater influence with fewer members.
"I anticipate that the more advanced these bots become, the fewer you'll actually need," study lead author Daniel Schroeder, a researcher at the Norwegian technology research organization SINTEF, told Live Science.
Guarding against next-gen bots
Agents have an edge in debates with real users because they can post consistently, around the clock, for as long as it takes for their narrative to stick. In "cognitive warfare," the team added, AI's relentlessness and persistence can be wielded as an instrument of coercion against time-limited human efforts.
Social media companies want real users, not AI agents, on their platforms, so the researchers expect companies to fight AI swarms with more sophisticated account verification — forcing people to prove they're human. But the team also flagged problems with this approach: it could deter political dissent in places where anonymity is vital for criticizing governments, and verified accounts can be hijacked, complicating matters further. Still, the researchers noted that stronger verification would make deploying AI swarms harder and more expensive.
The researchers also proposed secondary countermeasures, such as monitoring traffic for unusual statistical patterns that suggest AI swarms, and establishing an "AI Influence Observatory" — a body of NGOs, academic institutions and other organizations to study, raise awareness of and respond to AI swarm threats. Above all, the researchers want action taken before swarms damage elections and other major events.
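As a rough illustration of the statistical monitoring idea, here is a minimal sketch of one such pattern check: flagging hours whose post volume is an extreme outlier against a community's baseline. The function, data and threshold are hypothetical assumptions for illustration, not methods from the study; real detectors would combine many signals (timing regularity, content similarity, account age and so on).

```python
import statistics

def flag_anomalous_bursts(post_counts, z_threshold=2.0):
    """Flag hours whose post volume is a statistical outlier vs. the baseline.

    post_counts: posts-per-hour for one hashtag or community (hypothetical data).
    Returns the indices of hours whose z-score exceeds the threshold —
    sudden coordinated bursts are one possible fingerprint of a swarm.
    """
    mean = statistics.mean(post_counts)
    stdev = statistics.stdev(post_counts)
    if stdev == 0:
        return []  # perfectly flat traffic: nothing stands out
    return [i for i, count in enumerate(post_counts)
            if (count - mean) / stdev > z_threshold]

# A quiet community with one suspicious spike at hour 6.
traffic = [12, 15, 11, 14, 13, 12, 180, 14]
print(flag_anomalous_bursts(traffic))  # → [6]
```

A single z-score test like this is easy to evade — which is exactly why the study argues that swarms mimicking normal human posting rhythms will be so hard to catch.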
"We are presenting, with reasonable confidence, a future issue that could profoundly impact democracy, and preparedness is essential," Kunst said. "We need to be proactive instead of waiting for the first larger events to be negatively influenced by AI swarms."

Patrick Pester, Trending News Writer
Patrick Pester is the trending news writer at Live Science. His work has appeared on other science websites, such as BBC Science Focus and Scientific American. Patrick retrained as a journalist after spending his early career working in zoos and wildlife conservation. He was awarded the Master's Excellence Scholarship to study at Cardiff University, where he completed a master's degree in international journalism. He also holds a second master's degree in biodiversity, evolution and conservation in action from Middlesex University London. When he isn't writing news, Patrick investigates the sale of human remains.
