Malicious social media bots tried, but failed, to diminish NATO during its 2018 exercise
Daniel Tkacik
Jul 10, 2019
In late October 2018, some 50,000 troops from air, land, and sea forces convened in Norway, bringing with them thousands of aircraft, naval vessels, and armored vehicles. NATO was holding its largest exercise in decades to test its prowess in defending against potential adversaries.
Fake news trolls weren’t having it.
A new study by Carnegie Mellon University researchers illustrates how bots spread fake news on Twitter during NATO's 2018 Trident Juncture Exercise in an attempt to make NATO appear incompetent. The study is being presented this week at the 2019 SBP-BRiMS conference in Washington, D.C.
“Our evidence shows that bot accounts linked to Russia were spreading fake narratives about the exercise,” says Kathleen Carley, a professor in the Institute for Software Research (ISR), director of the Computational Analysis of Social and Organizational Systems (CASOS) Center, and a co-author on the study. “The fake stories made it seem as though NATO was going to attack Russia, or that the military exercises were resulting in civilian casualties, among other things.”
Bots even took advantage of one of the exercise's mishaps, when the Norwegian warship Helge Ingstad collided with an oil tanker. The crew members were rescued with only a few injuries, but bot accounts mocked Trident Juncture by spreading the hashtag #TridentPuncture on Twitter.
Luckily for NATO, the Russian bots fell short of making a real impact.
“The bots’ overall impact on public opinion was minimal,” says Joshua Uyheng, a Ph.D. student in the ISR and a co-author on the study. “Our study shows that not all disinformation works.”
It would be inaccurate, though, to say the bots had no influence at all; one reason the disinformation campaign failed, the researchers suggest, is that a large number of bot accounts were amplifying pro-NATO content.
“We know bots were promoting pro-NATO messages, but our study could not directly connect them to NATO itself,” says Uyheng.
The exercise may have been so niche that those following it were already pro-NATO, Uyheng says. It also may not have been as pressing a matter of public interest as, say, the U.S. midterm elections happening around the same time, so the Russian bots' false narratives may have fallen on deaf ears.
In the study, the researchers analyzed the Twitter conversation surrounding NATO's exercise by examining more than 200,000 tweets collected over a 23-day period encompassing the exercise. Each tweet included metadata on its corresponding user account and information on interactions with other users and tweets, such as retweets and mentions.
Using a CMU-developed tool aptly named "Bothunter," the team estimated the presence of between 10,000 and 25,000 bots, corresponding to 12 to 30 percent of the total users in their dataset.
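The reported figures fit together as simple proportions. A minimal sketch of that arithmetic, assuming a total user count of roughly 83,000 (a hypothetical value inferred here from the reported ranges, not a number stated in the study):

```python
# Hedged sketch: check that the reported bot counts and percentages are
# mutually consistent. The total-user figure is an assumption, chosen so
# that 10,000-25,000 bots maps onto roughly 12-30 percent.

def bot_share(num_bots: int, total_users: int) -> float:
    """Return the fraction of accounts estimated to be bots."""
    return num_bots / total_users

total_users = 83_000  # assumed; not reported directly in the study

low = bot_share(10_000, total_users)
high = bot_share(25_000, total_users)
print(f"Estimated bot share: {low:.0%} to {high:.0%}")
```

Under that assumption, the low and high bot counts land on roughly 12 and 30 percent, matching the range the researchers report.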