The Computational Propaganda Project

Oxford Internet Institute, University of Oxford

The Most Important Lesson From the Dust-Up Over Trump’s Fake Twitter Followers

Project members Tim Hwang and Sam Woolley have a new article in Slate discussing bots that follow political candidates.

Let’s be clear: Coordinated campaigns of misinformation and manipulation on social media are absolutely real and are becoming an increasingly prominent component of the online media landscape. A variety of state and nonstate actors are flexing their muscles on these platforms to achieve a range of propaganda ends around the world. Swarms of bots have been used to disrupt dissident activists in places like Turkey, Mexico, and Syria, and dedicated Russian psy-ops and cyberattacks certainly played a role in the 2016 U.S. election. This is a real threat, and one that bears much closer scrutiny by society as a whole.

But, at the same time, this week’s story reveals another key truth about these emerging threats and the social media platforms on which they find success: The opacity of platforms like Twitter, and their continued unwillingness to provide critical data to journalists and researchers, makes it even more difficult to determine where campaigns of misinformation are emerging and who is behind them.

While the sudden boost of these fake accounts is suspicious, their actual origins and purpose are a matter of conjecture. We know that the follower count for a given account has changed, we have a list of those new followers, and we have a rough sense of how those accounts behave, which suggests whether they might be fake identities or bots.

But that is all the information we have, and all we are likely to get without a leak from a builder or the help of a platform. There is no place to download data about these accounts or quickly find any information about, say, the IP addresses or other registration details associated with them. We also can’t effectively compare them in a real, quantitative way against other campaigns of misinformation that we’ve seen in the past. This limits our ability to connect this particular situation to other things that we’ve seen before or that may be occurring on Twitter or other social media platforms at the same time.

Read the full article here.


Tags: algorithms, Automation, Bots, Samuel C. Woolley, Tim Hwang, US

Robert Gorwa • 2nd June 2017

