The Computational Propaganda Project

Oxford Internet Institute, University of Oxford

Russian operatives used Twitter and Facebook to target veterans and military personnel, study says

The project’s latest memo on junk news and social media operations against veterans was covered in the Washington Post. The researchers also tracked information on several military-themed websites and used the traffic to these sites — along with the Twitter data — to determine which Facebook accounts promoted similar content on publicly available pages. That yielded…


Facebook has so much more to tell us

Phil Howard and Bence Kollanyi wrote an opinion article for the Washington Post discussing how Facebook could share more data about political advertising and targeting with the public. Facebook and Twitter have taken the important step of handing over thousands of ads to Congress that were bought and circulated by Russian strategists to influence our…


Spreading fake news becomes standard practice for governments across the world

The project’s research on government-sponsored social media manipulation was covered in the Washington Post. These propaganda efforts exploit every social media platform — Facebook, Twitter, Instagram and beyond — and rely on human users and computerized “bots” that can dramatically amplify the power of disinformation campaigns by automating the process of preparing and delivering posts….


Pro-Putin bots are dominating Russian political talk on Twitter

Our recent case study series was covered in The Washington Post: Bots airing pro-Kremlin views have flooded the Russian-language portion of the social media platform Twitter, in what researchers from the Oxford Internet Institute say is an effort to scuttle political discussion and opposition coordination in Russia. In a new study of “political bots” on…


Facebook could tell us how Russia interfered in our elections. Why won’t it?

Team members Phil Howard and Robert Gorwa wrote an op-ed for the Washington Post calling on Facebook to share important data on potential Russian interference in the 2016 US election, and touching on the importance of studying not just ‘fake news’ but also fake accounts and other false amplifiers. Read the full piece in the Washington Post.

As a conservative Twitter user sleeps, his account is hard at work

Our project’s work was covered in the Washington Post. CHICAGO — Daniel John Sobieski, 68, climbed the stairs in his modest brick home and settled into a worn leather chair for another busy day of tweeting. But he needn’t have bothered. As one of the nation’s most prolific conservative voices on Twitter, he already had…


One in four debate tweets comes from a bot. Here’s how to spot them.

The project’s research into the US presidential election was featured in the Washington Post. Philip Howard has a fancy name for partisan election bots. He calls them “computational propaganda” — and lately, he sees them a lot. On Oct. 14, the Oxford University professor released a paper dissecting bot activity after the first American presidential…


More than 26 million people have changed their Facebook picture to a rainbow flag. Here’s why that matters.

Phil Howard was featured in a Washington Post story on digital activism. “Profile picture campaigns are effective in showing the friends and family in your social network that you have some affinity for a political candidate or cause,” said Philip Howard, a sociologist at the University of Washington and the director of the Digital Activism…
