Q&A with Kathleen Carley
The spread of coronavirus disinformation
Daniel Tkacik
Mar 26, 2020
Amid the global coronavirus pandemic, disinformation about the situation has been spreading at lightning speed on social media. In the words of Kathleen Carley, a CyLab faculty member and a professor in the School of Computer Science’s Institute for Software Research, “This is dangerous.” Her research group has been closely monitoring the situation and sharing its findings on a regular basis.
You’ve studied disinformation campaigns around all kinds of events, ranging from elections to natural disasters. What are you seeing now during this coronavirus pandemic?
In a preliminary study, we looked at Twitter posts about the coronavirus, named SARS-CoV-2, and the disease it causes, COVID-19, between January 29 and March 4—over 67 million tweets from around 12 million users. This period began just over a week after the US announced its first cases.
What we've seen so far is that disinformation about the virus falls into at least three categories. First, there are a lot of stories containing inaccurate information about cures or preventative measures, such as claims that drinking bleach will protect you or that coating your body in sesame oil helps. Second, there are a number of stories about the nature of the virus, such as the claim that children cannot get the virus, which is not true. Lastly, there are a number of stories about weaponization or bio-engineering of the virus.
What role have social media bots played in this, and where are these bots coming from?
It's too early to say where the bots are coming from, but we're finding that 40 percent of the discussion around the coronavirus and COVID-19 is coming from bots. Of the users engaging in conversation about the virus, around 22 percent are bots.
A big issue is that these bots are very influential—the network around them is configured such that they have a lot of listeners. Forty-two percent of the top 50 influential mentioners are bots, 82 percent of the top 50 influential re-tweeters are bots, and 62 percent of the top 1,000 re-tweeters are bots. This means that, similar to the virus itself, disinformation about the virus is spreading quickly—only much, much faster.
How does the disinformation you're seeing now compare with disinformation in the past around, say, elections or natural disasters?
First off, there's just a lot more disinformation around this topic than you'd see during an election. The categories it falls into are also different. Around an election, a lot of disinformation concerns the voting process itself: where to register, where to vote, and so on. Or it's about the candidates, intended either to defame them or to boost their image.
In the current situation, the disinformation is less personalized. A huge fraction of these stories suggests fake cures and fake preventative measures. As in a natural disaster scenario, you're also seeing stories about fake emergency measures. For example, at one point disinformation was spreading that New York City had been locked down under martial law. That was not true, and it still isn't. In that respect, it's more like a natural disaster than a political event, but you're still seeing a lot more disinformation during this pandemic. And unlike during elections, a lot of social media companies are trying to combat disinformation around the coronavirus right now.
Is your advice for users any different now than it has been in the past?
It's really similar, and it’s extremely important given how dangerous some of this disinformation is. If you see disinformation, call it out, because some of it is deadly. If you see satire or a joke about the virus or COVID-19, don't share it; a person who reads it might not realize that it's satire.
The danger around this disinformation is somewhat novel. During elections, sharing disinformation poses a threat to democracy. That is very bad. In this situation, if you follow the guidelines in some of this disinformation, you could actually harm yourself.
What are you focusing on moving forward?
In addition to continuing to monitor disinformation activity on Twitter, we are also interested in looking at YouTube and Reddit communities—the types of disinformation being shared there, and how much of it is being amplified by bot accounts. We hope to have some results about those networks in the coming weeks.
Carley’s ongoing research is being conducted in the Center for Informed Democracy and Social Cybersecurity (IDeaS) and the Center for Computational Analysis and Organizational Systems (CASOS) at Carnegie Mellon University.