The Computational Propaganda Project

Oxford Internet Institute, University of Oxford

Hiring: Post-Doctoral Researcher & Technical Development Officer

We’re hiring a post-doctoral fellow to support our bot-detection efforts as part of our new BOTFIND project. The primary task will be to build tools for detecting politically motivated, automated content production on social media platforms. The position suits candidates who have recently completed a doctorate in a relevant field, and requires someone…

Spreading fake news becomes standard practice for governments across the world

The project’s research on government-sponsored social media manipulation was covered in the Washington Post. These propaganda efforts exploit every social media platform — Facebook, Twitter, Instagram and beyond — and rely on human users and computerized “bots” that can dramatically amplify the power of disinformation campaigns by automating the process of preparing and delivering posts…

Please Prove You’re Not a Robot

The project’s research was featured in a New York Times column written by Tim Wu. Robots posing as people have become a menace. For popular Broadway shows (need we say “Hamilton”?), it is actually bots, not humans, who do much and maybe most of the ticket buying. Shows sell out immediately, and the middlemen (quite…

Government ‘Cyber Troops’ Manipulate Facebook, Twitter, Study Says

The project’s recent paper on government efforts to manipulate public opinion online was covered by Bloomberg. Governments around the world are enlisting “cyber troops” who manipulate Facebook, Twitter and other social media outlets to steer public opinion, spread misinformation and undermine critics, according to a new report from the University of Oxford. Adding to growing evidence…

Troops, Trolls and Troublemakers: A Global Inventory of Organized Social Media Manipulation

Cyber troops are government, military or political party teams committed to manipulating public opinion over social media. In this working paper, we report on specific organizations created, often with public money, to help define and manage what is in the best interest of the public. We compare such organizations across 28 countries, and inventory them…

How Bots Win Friends and Influence People

Project PI Phil Howard was interviewed for a short article in IEEE Spectrum. Every now and then sociologist Phil Howard writes messages to social media accounts accusing them of being bots. It’s like a Turing test of the state of online political propaganda. “Once in a while a human will come out and say, ‘I’m not a…

Die Stimmungskanonen

Our project research was featured in a story published in the Süddeutsche Zeitung. The University of Oxford has for some time run the Computational Propaganda Research Project, which notes, with regard to Germany and the Bundestag election among other cases, that although all parties represented in the Bundestag renounce botnets and vow not to deploy them, Angela…

Computational Propaganda Worldwide: Executive Summary

We’re very excited to announce the launch of our case study series on computational propaganda in nine different countries. Find the executive summary, written by Sam Woolley and Phil Howard, here. The Computational Propaganda Research Project at the Oxford Internet Institute, University of Oxford, has researched the use of social media for public opinion manipulation. The…

Phil Howard on Danish Television

Project PI Phil Howard appeared on DR, the Danish public broadcaster, to discuss some of the project’s latest research. The full episode is available here [interview begins in the second half of the programme].

Germany’s anti-fake news lab yields mixed results

The project’s research was mentioned in a POLITICO story on Germany’s ‘fake-news’ regulation. A new report by the University of Oxford investigated what German fact-checkers may be up against, finding that “the majority of the misinformation pages identified were politically right, and xenophobic, nationalist, pro-Pegida, pro-AfD and Islamophobic content was common.” While in the U.S. the debate has frequently…
