Our project work was covered by the OECD.
The phenomenon of junk news and its dissemination over social media platforms has transformed (some say destroyed) political debate. The combination of automation and propaganda, known as computational propaganda, can shape public opinion. The trouble is, how can we tell the difference between fake facts and real facts, and, indeed, real fakes?
This is the question Samantha Bradshaw and her colleagues from the Computational Propaganda Project at the University of Oxford set out to answer when they analysed the distribution of junk news, including fake news, computational propaganda, and ideologically extreme, hyper-partisan and conspiratorial content, on the social media platform Twitter during the 2016 US presidential campaign in Michigan. As their findings show, junk news was shared to the same extent as professional, fact-checked news.
In this pioneering quantitative research on junk news, Ms Bradshaw and her colleagues studied Twitter conversations happening in Michigan, a swing state in US presidential elections, between 1 and 11 November 2016. The research team wanted to find out what people were sharing as political information and news. They collected tweets containing website addresses (URLs) and classified them into three categories: professional news outlets (both major and minor sites); professional political content (from political parties, experts, think tanks and government); and other political news, which included junk news along with further sub-categories such as WikiLeaks and country-related links, notably from Russia. The team found that professional news content and junk news were shared in a one-to-one ratio, meaning the amount of junk news shared on Twitter was the same as that of professional news.
See the full article in the OECD Yearbook.