The Computational Propaganda Project

Oxford Internet Institute, University of Oxford

Event: The US Election and Disinformation @ IFTF

Director of Research Samuel Woolley gave a talk on November 11, 2016 at an event sponsored by the National Democratic Institute and the US State Department. The theme of the event, which was held at the Institute for the Future, was the role of disinformation during the US Election.

Dan Swislow of NDI wrote the following article on Democracy Works about “the Distributed Denial of Democracy” as a precursor to the event.

Social media and the Internet had a drastic effect on yesterday’s surprise election results in the United States, driving the spread of information—and misinformation—at times bringing voters together and, perhaps more often, pushing them apart. As the spotlight shifts off of the U.S. in the aftermath of November 8, it’s important to recognize that this is not a uniquely American trend. More than half of Internet users now report using social media as a source of news, according to a study across 26 countries, and more than one quarter call it their main news source. In developing countries where reliable news sources are more limited, those numbers may be even higher.

As reliance on social media and the Internet for news and information rises exponentially, political discourse is also rapidly moving online. A free and open Internet, where citizens can engage in fair dialogue and access accurate information, is thus critical to modern democracy and human rights.

Many scholars have written about recent trends of shrinking space for civil society around the world. In most countries, the ability to engage in free and meaningful speech online is under threat as well. As authoritarian governments face increased pressure from an “Internet public” or “Twitter revolutions,” many are incorporating new online repression tactics to erode democratic dialogue and broader support for democracy around the globe. 

Through “troll farms” of professional online provocateurs, automated bots pumping out thousands of comments, or a “Web Brigade” of crowdsourced online abuse, authoritarian regimes are engaged in a long-term, well-resourced effort to undermine citizens’ democratic rights online, polluting democratic discourse with hate speech and disinformation. Like distributed denial-of-service (DDoS) attacks, these distributed denial-of-democracy (DDoD) attacks (#ddodattack) reduce the utility of the Internet and social media for genuine democratic discourse—but do so far more insidiously.

While significant attention has been paid to the dynamics of online discourse in the context of yesterday’s election in the United States, the challenges faced elsewhere are perhaps even greater. In the absence of an effective global response, DDoD attacks have been increasingly successful.

Anti-democratic trolling is a rampant global problem

Some have framed DDoD attacks as primarily a Russian problem, but Freedom House’s 2015 Freedom on the Net report noted paid trolling operations by governments in every region of the world. Azerbaijan, Bahrain, Ecuador, Ethiopia, and Turkey, among others, have deployed online armies of thousands of trolls to attack government critics and independent media and to disrupt hashtags and social media fora.

Many DDoD efforts focus on polluting the airwaves, making dialogue on crucial political issues and controversies impossible. A study from researchers at Harvard University estimated that the Chinese government paid for 488 million bogus social media posts in 2015, the majority aimed at distracting the Chinese public from sensitive online discussions. Similar tactics have been employed in Saudi Arabia. Other governments have focused on manipulating online discussion with fake social media accounts and bots. It has been reported that President Rodrigo Duterte’s new government in the Philippines has utilized hundreds of online operatives and “sock puppet” accounts to manufacture viral stories that have reached millions of people. Other governments have used fake Twitter accounts to artificially inflate their popularity online, like the Maduro government in Venezuela and Hun Sen’s regime in Cambodia.

Examples aren’t limited to authoritarian regimes. In Mexico, civic activists have denounced the use of bots that drown out the voices of government critics and human rights defenders by overwhelming hashtags with spam or promoting fake hashtags in their place; they have dubbed these bots “Peñabots,” after the country’s president. Trolling for political purposes has taken place throughout Latin America, often in the context of elections, as described in a wide-ranging Bloomberg article detailing the work of a now-jailed hacker whose teams claimed to have worked on behalf of campaigns and governments in more than 10 countries. In other contexts, such as Myanmar, trolling operations may be undertaken by only part of the state to advance its own perspectives and interests. Officials in the South Korean military’s cyberwarfare unit were famously indicted in 2013 for meddling in the domestic election on behalf of the sitting government.

It is, however, Russia that is the most noted offender. A 2015 report in the New York Times Magazine noted that many of the thousands of paid trolls filling unmarked government office buildings in Russia reported quotas requiring them to produce dozens of blog posts and hundreds of comments each day. This is unlike the propaganda wars of the Cold War, when governments flooded the airwaves touting the successes of their respective regimes. As Interpreter Magazine noted in its report, “The Menace of Unreality,” Russia’s goal is not to convince people of a truth, but to erode faith in institutions, sow falsehoods, undermine critical thinking, encourage conspiracy theories, promote social discord, and turn the freedom of information against liberal democracies.

Perhaps the greatest battleground is in countries such as Ukraine, where the Russian government is deploying significant resources to skew online discourse and undermine the legitimacy of democratic institutions by spreading conspiracy and fear. The effects are being felt across Europe, in countries including Latvia and Finland. Examples surfaced even during this year’s Euro Cup soccer tournament, when Russian media manufactured false reports of Europeans provoking violence to support a narrative of disorder. Russian disinformation also spread to the United States during the 2016 election season, a story that has been widely covered.

Some anti-democratic actors have equated trolling and disinformation campaigns with democracy assistance work undertaken by civil society organizations—and organizations such as the National Democratic Institute (NDI)—around the world. Those comparisons must be rejected. Efforts to enhance the capacity of political parties to engage in a competitive process, to ensure free and fair elections, and to strengthen the openness and accountability of governments aim to help guarantee every citizen’s essential human right to participate in government. DDoD attacks aim for precisely the opposite.

The challenges will only become greater over time. Governments have already begun to develop artificial intelligence and algorithms that can write and distribute 21st century disinformation at an alarming rate.

Working together to develop an effective response

To date, the response to DDoD attacks has been inadequate. Some countries have sought to combat the volume of material created by troll armies with volume of their own, so-called armies of “elves.” These initiatives, like one undertaken in Ukraine, have been met with skepticism. While the standard response to bad speech in open societies is more speech, under such circumstances attempting to shout down opponents with a countervailing flood sometimes serves only to make it more difficult for citizens to use online platforms for genuine civil dialogue. At the same time, simply countering disinformation with credible, truthful information is often ineffective. A seminal study by the RAND Corporation looking at the psychological implications of disinformation campaigns noted, “don’t expect to counter the firehose of falsehood with the squirt gun of truth.” Indeed, some research has shown that conspiracy theories, rumors and alternative media sources are shared three times more often than mainstream sources on Facebook.

More importantly, while civil society in individual countries is often aware of the deluge of trolling, it lacks the tools for discrediting and fighting back against DDoD attacks. Governments and social media companies have developed policy frameworks to address online hate speech, cyberbullying, harassment and extremism. However, while some actors have begun important conversations—like Twitter’s recent meetings with Congressional staff—few have developed a holistic, robust response to authoritarian efforts to deny citizens their democratic rights. And while many helpful tools, best practices and research are being developed, they are not widely deployed.

Technology companies have begun to explore possible solutions to DDoD attacks, though they have focused mostly on parallel problems, which may nonetheless offer meaningful lessons. Jigsaw has recently launched Conversation AI, meant to automatically detect instances of abusive and harassing speech. Facebook has deployed a counter-speech program to fight extremism, awarding ad credits to users who push back against violent or extremist behavior. Platforms like NextDoor have already adopted changes to their user interface to reduce discrimination. Sites like Wikipedia and Reddit, which rely on active moderation by volunteer users, might offer models for transforming online spaces into places where voices are held accountable.

Media outlets and investigative journalists offer another potential resource for developing strategies to help fight back against DDoD attacks. An effort by Global Voices identified a network of more than 20,000 Russian trolls on Twitter. Buzzfeed has launched a campaign to debunk false news stories as part of the First Draft Coalition, an important project to improve standards around the use of content from social media. The Guardian conducted an intensive study into harassment trends on its website, finding that women writers were substantially more likely to be attacked (the implications of gender in addressing the challenge of anti-democratic trolling are immense and merit substantial discussion). Some creative startups are developing verification mechanisms for investigative journalism. Even ‘grey-hat’ actors are involved. Anonymous has engaged in an online war to expose, hack and take down ISIL websites and social media accounts.

Significant academic and government-led efforts are underway as well. The University of Washington, Central European University and Oxford University have hosted an extensive project on the impact of political bots. The Truthy Project at Indiana University offers a study of the diffusion of information online. StopFake, started by faculty of the Mohyla School of Journalism, is dedicated to stopping the spread of false stories in Ukraine. The EU’s East Strategic Communication Task Force monitors Russian disinformation efforts as part of its Disinformation Review, and the Center for European Policy Analysis has launched an Information Warfare Initiative to shine a light on disinformation in Central and Eastern Europe. These initiatives are just the tip of the iceberg as the situation becomes increasingly salient.

Among the key recommendations offered by the RAND Corporation’s report on the “Firehose of Falsehood” is to: “find ways to help put raincoats on those at whom the firehose of falsehood is being directed.” Creating and protecting safe platforms on the web for genuine political discourse will require collaboration among a host of actors. Governments, technology companies, media outlets, the academic community and organizations around the world must come together to develop policies and practices to aid civil society and citizens in addressing this problem, and build norms and standards for democratic governments to support an open Internet.

Protecting the ability of citizens to participate online is a key human rights issue for the 21st century.

Editor’s Note: On November 11, the U.S. State Department, NDI and the Institute for the Future will co-host a roundtable in Silicon Valley on the challenges posed by disinformation operations in the face of the rise of bots and artificial intelligence. This is part of the U.S. State Department’s Innovation Forum and is also one of a series of roundtables that NDI is supporting to better understand the impact of technology on the future of democracy.


Samuel Woolley • 1st December 2016

