Taking back the conversation: detecting malicious social media bots

Social media bots contribute nearly one-quarter of all tweets on Twitter, with malicious bots swaying the conversation in whatever direction their creators want. CyLab’s Kathleen Carley is working towards shutting them down.

Daniel Tkacik

Aug 24, 2018

CyLab researcher Kathleen Carley studies how social media bots sway public discourse. Source: CyLab

What content is real, and what content isn’t? Thanks to malicious social media bots – automated accounts on social media networks that aim to sway conversations in whatever direction their creators want – answers to these questions are murky at best.

"We’re trying to spot bots because they’re basically warping the information environment," says CyLab faculty Kathleen Carley, a professor in the Institute for Software Research (ISR). "Bots are affecting everything from what you buy to who you vote for to what you think about a movie, and we’d like the information environment to be a much cleaner place where there can be open and free dialogue and discussion."

Carley doesn’t try to sugar-coat the reality that social media users are living in.

"Bots are controlling the way you think and who you interact with more than you’d ever realize," she says.

Using machine learning to find and study bots

Carley and her research group, whose funding comes from a variety of sources including the Office of Naval Research, have used machine learning and network analysis techniques to analyze droves of social media content on Twitter, identifying bots and then studying their behavior.
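The article doesn't detail the group's models, but a minimal sketch of what feature-based bot detection can look like is below: train a classifier on per-account behavioral features and predict bot versus human. The feature choices, thresholds, and synthetic data are all illustrative assumptions, not Carley's actual pipeline.

```python
# Hypothetical sketch, not Carley's actual pipeline: a feature-based
# bot-vs-human classifier trained on synthetic per-account features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Illustrative features per account: tweets/day, followers-to-following
# ratio, and fraction of tweets that are retweets. Real detectors use
# many more signals, including network structure and posting cadence.
humans = np.column_stack([
    rng.normal(5, 2, n),       # modest posting rate
    rng.normal(1.0, 0.5, n),   # roughly balanced follower ratio
    rng.normal(0.3, 0.1, n),   # some retweeting
])
bots = np.column_stack([
    rng.normal(50, 15, n),     # very high posting rate
    rng.normal(0.1, 0.05, n),  # follow many, followed by few
    rng.normal(0.8, 0.1, n),   # mostly amplification via retweets
])
X = np.vstack([humans, bots])
y = np.array([0] * n + [1] * n)  # 0 = human, 1 = bot

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

On real data, the hard part is usually labeling: ground truth for "is this account a bot?" typically comes from manual annotation or lists of suspended accounts, and network features such as who retweets whom often matter more than any single account statistic.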

"Usually, if an issue being discussed is at all politically-polarizing, there will be a pro- and a con- sub-group of people that are talking back and forth about that particular issues," Carley says. "The bots are on both sides of that issue, and they’re actually engaged in various kinds of information campaigns to make the discussion on both sides more extreme.  This can increase group polarization."

These bots, Carley explains, spread rumors, slander, spam, and malware, and they manipulate trending topics and who is viewed as influential.

"Although there are relatively few bots compared to real human accounts, their impact is so large that it can affect statistical properties of that social media platform," Carley says. "For example, when Twitter got rid of a whole bunch of bots a few months ago, the so-called popularity of all the top tweeters went down by almost 30% in most cases. That so-called popularity existed, in part, because of bots."


Bait and switch… and its effects

To take control of a conversation around a particular issue, Carley explains, bots can employ the old "bait and switch" technique.

First, the bots will start following and mentioning social media users that are discussing a particular polarizing issue, creating the appearance of a group. The bots continue to mention each other, creating the appearance of widespread interest and causing Twitter to recommend similar messages to users. Then, once a large enough community has been built, the bots send their "new" message, typically more controversial than the original topic.

"Once the new controversial message goes out, users in that community assume others feel like this, as it's coming from the same community they are engaging in," Carley says. "This builds an apparent consensus."

While it plays out over the digital channels of social media, this manipulation of public opinion has been shown to translate into real-world violence. For example, Carley says that bots played a big role in the wave of demonstrations and civil unrest in Ukraine in 2013, known as Euromaidan.

"We believe bot activity led to an increased level of violence around the Euromaidan issue," she says." And as soon as Euromaidan happened, many bots shut themselves down. They did not wait to be shut down."

But not all bots are malicious, Carley says, so shutting them all down isn't the primary goal. For example, some bots can be used to promote events or help people advertise goods.

"The issue then becomes a balance between facilitating advertising while inhibiting the crowd manipulation resulting from that advertisement," Carley says.

While most of Carley's work has focused on Twitter and GitHub, bots on other social media platforms behave similarly at a high level; that is, they exploit the platform's rules for prioritizing information, along with other features, to get their messages out.

"Bots in other social media platforms may also be exploiting social cognition and cognitive biases," she says. "However, to what extent and exactly how this is done is a point of current research."

Advice to social media users

While the future of social media is uncertain, Carley offers some advice for current users of social media:

  • If it seems like everyone is agreeing and the discussion is becoming more virulent, consider dropping out of the discussion. This may be a sign that you are in an echo-chamber that is being exploited by bots.
  • Don't sign up for systems that reserve the right to send messages from your account that you did not write.
  • Watch out for obvious bots. If you see a set of accounts that have highly similar names, or that send far more messages than other accounts, or where the gender of the name does not match the gender of the image, or that follow many accounts but are rarely followed – then the associated account or accounts are probably bots (a minimal screening sketch follows this list). However, bots are getting more sophisticated, and simple bots with these telltale characteristics may not exist in a few years.
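As a rough illustration, the red flags in that last bullet can be expressed as simple screening rules. The field names and thresholds below are hypothetical assumptions for the sketch, not a vetted detector.

```python
# Hypothetical sketch of the simple red flags listed above; field names
# and thresholds are illustrative assumptions, not a vetted detector.
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    tweets_per_day: float
    followers: int
    following: int

def looks_like_simple_bot(acct: Account,
                          max_rate: float = 72.0,   # assumed cutoff: a tweet every 20 minutes, around the clock
                          min_ratio: float = 0.05) -> bool:
    """Flag accounts that post far more than typical users, or that
    follow many accounts while being followed by almost none."""
    too_chatty = acct.tweets_per_day > max_rate
    lopsided = acct.following > 0 and acct.followers / acct.following < min_ratio
    return too_chatty or lopsided

def similar_name_groups(accounts: list[Account], prefix_len: int = 8) -> dict[str, list[str]]:
    """Group account names sharing a long common prefix, e.g.
    'patriot_88231' and 'patriot_88232', a hallmark of bot batches."""
    groups: dict[str, list[str]] = {}
    for a in accounts:
        groups.setdefault(a.name[:prefix_len].lower(), []).append(a.name)
    return {k: v for k, v in groups.items() if len(v) > 1}

accounts = [
    Account("patriot_88231", 150, 12, 4900),
    Account("patriot_88232", 140, 9, 5100),
    Account("jane_doe", 6, 310, 290),
]
print([a.name for a in accounts if looks_like_simple_bot(a)])
print(similar_name_groups(accounts))
```

The name-versus-profile-image gender mismatch from the list needs image data and is omitted here; and, as Carley notes, these surface cues are already fading as bots grow more sophisticated.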