The Computational Propaganda Project

Oxford Internet Institute, University of Oxford

Spammers, Scammers, and Trolls: Political Bot Manipulation

(This originally appeared in the Open Technology Institute at the New America Foundation’s “Data and Discrimination: Collected Essays”)

Social bots, bits of code that generate content and mimic real social media users, are nothing new. Since the launch of Friendster and MySpace, social networking platforms have regularly featured fake accounts. Savvy programmers, spammers, and promoters use these automated profiles to generate clicks (“like” stuff and sell stuff), pad follower lists (fake popularity), and collect information (sort, borrow, and steal data). According to news reports, Facebook hosts more than 83 million illegitimate accounts, and Twitter approximately 20 million.

Loops of Manipulation and Misdirection

What is relatively new and on the rise is the cunning use of social bots by politicians, astroturf activists, and ideological extremists. These “political” bots and the messages they produce represent a new form of discriminatory computational propaganda. Via targeted spamming and other tactics, political bots drown out oppositional voices, demobilize activists, and promote the status quo.

For example, in South Korea, public servants in the cyber warfare unit of the Defense Ministry used bots to propagate messages in favor of President Park Geun-hye and the Saenuri Party, including some that attacked political rivals. Though the precise impact of these messages is difficult to discern, their existence has heightened concern throughout the country: President Park won the election by a margin of a million votes.

In Syria, intelligence officials have also used bot followers both to bolster government credibility and to stymie opposition amid civil unrest. Meanwhile, the militant group Islamic State of Iraq and Syria (ISIS) uses bots to trick Twitter, giving the world the impression that it has a large social media following when in fact it “ghost-tweets” its messages. Because these messages come from otherwise ordinary, legitimately functioning Twitter accounts, Twitter filters aimed at curbing socially mediated hate speech fail to detect them.

Political bots have the capacity to produce an unending loop of manipulation and misdirection. Any political group can buy and deploy a bot as easily as an individual zealot: as a recent New York Times article put it, friends and influence are cheap and for sale online. New developments in social bot technology allow fake accounts to operate in complex, multifaceted ways. These pieces of software not only search for and collect data from social media sites (by scraping them for individual and group identifiers), they also use that data to manipulate, censor, and isolate specific populations.

The online popularity and influence of politicians is also tainted by the presence of bots. A recent Politico piece reported that bots have “infiltrated nearly every politically linked account from the White House to Congress to the 2016 campaign trail.”

Finally, people’s ability to make cross-cultural connections via social media is also at stake. What happens when a religious or cultural group is drowned out by a bot-led barrage of hateful messages that seem to come from other, oppositional groups?

Activists vs. Bots

While few responses to bot-generated social media attacks exist, some innovators are beginning to experiment with alternate uses of bot technologies in a political context. A technologist at the Electronic Frontier Foundation created Block Together, which allows users to collect lists of blocked accounts, including bots that pollute social media feeds, and share those lists with their social networks. “Watchdog” bots aid transparency efforts by monitoring Wikipedia edits coming from government IP addresses. An anti-abortion bot gained international attention when Internet users turned its own design against it, forcing it to tweet Rick Astley lyrics (instead of anti-abortion messages).

In the last three years, social bots have become a driving force for political manipulation and data-driven deceit on social media sites. Trolling-the-trollers may become a way to promote social causes or combat automated political manipulation.

Looking Forward

The type of computational propaganda wrought by bots represents one of the most significant developments in social media. Bot software will continue to evolve, and the presence of artificial intelligence on social media platforms will grow. Because of this, it is essential to build understanding of how governments use and interact with security firms and hackers who program and deploy bots for political use. The opinions of these bot builders, and of those who track and disable bots, will be crucial to combating new types of data-driven discrimination.

By Samuel Woolley

29th September 2014
