On Twitter, some of the loudest Trump and Clinton supporters aren’t even human. According to a new study, approximately 20% of content related to the American election on Twitter is published by bots. These bots are created for partisan purposes—advocates for either Clinton or Trump set them up to automatically post content on their behalf, and to fool humans into thinking they’re real people at the same time.
The authors of the study, the University of Southern California’s Alessandro Bessi and Emilio Ferrara, published their findings in the journal First Monday. Using algorithms to detect activity tied to social media bots, they found that nearly one-fifth of the conversation surrounding the U.S. election on Twitter showed signs of being created by bots. Content posted by the bots ranges from simple Trump or Clinton advocacy posts to sophisticated trolling efforts designed to infuriate or drown out particular segments of the Twitter audience.
One of the biggest questions, however, is just who creates these bots. Ferrara and Bessi didn’t attempt to answer that question. According to them, “Although our analysis unveiled the current state of the political debate and agenda pushed by the bots, it is impossible to determine who operates such bots. State- and non-state actors, local and foreign governments, political parties, private organizations, and even single individuals with adequate resources could obtain the operational capabilities and technical tools to deploy armies of social bots and affect the directions of online political conversation.”
These bots are designed to fool ordinary Twitter users into thinking they’re real human beings. The account creators typically steal a stranger’s profile picture from the internet, give the account the name of a real person, and then write a fake biography. After that, the bots begin to post new tweets, retweet, like other people’s tweets, and follow or unfollow strangers. The goal of all that activity? To influence voter opinions one way or another.
The University of Southern California created this helpful video summing up their research:
Bessi and Ferrara say there are several tell-tale signs that an obsessive Clinton or Trump supporter isn’t actually human. The first is an overwhelmingly high post rate of over 100 tweets a day, especially when posts are made at times when Twitter users in the United States are typically asleep. Other factors, such as a feed consisting mostly of retweets, a recently created account, or obsessive posting about a single topic, can also suggest an account is a bot.
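The heuristics above can be sketched as a simple scoring function. This is a hypothetical illustration, not the authors’ actual detection algorithm; the field names and thresholds here are assumptions chosen to mirror the signals the article describes.

```python
def looks_like_bot(account):
    """Count how many of the article's tell-tale bot signals an account shows.

    `account` is a plain dict of illustrative, hypothetical fields.
    """
    signals = 0
    # An overwhelmingly high post rate: over 100 tweets a day.
    if account["tweets_per_day"] > 100:
        signals += 1
    # Heavy posting while U.S. users are typically asleep
    # (share of posts made overnight, as a fraction of all posts).
    if account["overnight_post_share"] > 0.5:
        signals += 1
    # A feed consisting mostly of retweets rather than original tweets.
    if account["retweet_ratio"] > 0.9:
        signals += 1
    # A recently created account.
    if account["account_age_days"] < 30:
        signals += 1
    # Obsessive posting about a single topic.
    if account["distinct_topics"] <= 1:
        signals += 1
    # Flag the account only when several signals co-occur, since any
    # one of them alone can also describe an enthusiastic human.
    return signals >= 3


suspect = {
    "tweets_per_day": 250,
    "overnight_post_share": 0.7,
    "retweet_ratio": 0.95,
    "account_age_days": 12,
    "distinct_topics": 1,
}
print(looks_like_bot(suspect))  # True
```

Requiring several signals to co-occur reflects the article’s framing: no single behavior proves an account is automated, but the combination is suspicious.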
Reached by email, Ferrara added: “We were surprised by the amount of content, 1-in-5, that the bots are producing, rather than just the sheer number of bots. This was far beyond our expectation. But most importantly, we are surprised by how much the bots are being retweeted, at the same rate as humans!”
Ferrara’s bot-detecting tool, BotOrNot, is also freely available to the public.