The team’s research on the IRA and political polarization in the United States has been widely covered, including in the following:
A report prepared for the Senate that provides the most sweeping analysis yet of Russia’s disinformation campaign around the 2016 election found the operation used every major social media platform to deliver words, images and videos tailored to voters’ interests to help elect President Trump — and worked even harder to support him while in office…
The first report — by Oxford University’s Computational Propaganda Project and Graphika, a network analysis firm — offers new details of how Russians working at the Internet Research Agency, which U.S. officials have charged with criminal offenses for interfering in the 2016 campaign, sliced Americans into key interest groups for targeted messaging. These efforts shifted over time, peaking at key political moments, such as presidential debates or party conventions, the report found.
The New York Times: Five takeaways from new reports on Russia’s social media operations
The Senate Intelligence Committee released on Monday two new reports that it commissioned about the Russian campaigns on Facebook, Instagram, Twitter and other social media platforms during the 2016 election and beyond. The reports, by teams led by experts at the cybersecurity company New Knowledge and Oxford University, fill out a portrait of the impressive operations by the Internet Research Agency in St. Petersburg.
Together, they essentially burnish the résumé of Yevgeny V. Prigozhin, a loyal associate of President Vladimir V. Putin’s, who owns the Internet Research Agency and its multiple corporate siblings.
The research details a vast campaign spearheaded by the Internet Research Agency (IRA) – a Russian company that has been described by the United States Intelligence Community as a troll farm with ties to the Russian government.
The report says Russia had a particular focus on targeting conservatives with posts on immigration, race and gun rights.
There were also efforts to undermine the voting power of left-leaning African-American citizens by spreading misinformation about the electoral process.
But neither ads nor automated “bots” were as effective as unpaid posts hand-crafted by human agents pretending to be Americans. Such posts were more likely to be shared and commented on, and they rose in volume during key dates in U.S. politics such as during the presidential debates in 2016 or after the Obama administration’s post-election announcement that it would investigate Russian hacking.
“These personalized messages exposed U.S. users to a wide range of disinformation and junk news linked to on external websites, including content designed to elicit outrage and cynicism,” says the report by Oxford researchers, who worked with social media analysis firm Graphika.
The New York Times: Facebook, Twitter and YouTube Withheld Russia Data, Reports Say
The reports noted how the incomplete information had caused problems. Because Google did not provide any data on how many times Russian-created videos were watched or shared on YouTube, researchers were forced to search for the videos through alternative websites to re-create how widely the propaganda had been shared.
Google’s data from 2014 to 2018 was “remarkably scarce,” the report from researchers at Oxford and Graphika said.
A new report prepared for the Senate Intelligence Committee reveals that the Russians, in their bid to boost President Trump, have been more fixated than previously understood on trying to dampen African American political engagement.
Researchers at Oxford University’s Computational Propaganda Project and Graphika, a network analysis firm, spent seven months analyzing millions of social media posts that major technology firms turned over to congressional investigators. Their goal was to understand the inner workings of the Internet Research Agency, which the U.S. government has charged with criminal offenses for interfering in the 2016 election.
It turns out that African Americans were targeted with more Facebook ads than any other group, including conservatives.
A report prepared for the Senate Select Committee on Intelligence (SSCI) due to be released later this week concludes that the activities of Russia’s Internet Research Agency (IRA) leading up to and following the 2016 US presidential election were crafted to specifically help the Republican Party and Donald Trump. The activities encouraged those most likely to support Trump to get out to vote while actively trying to spread confusion and discourage voting among those most likely to oppose him. The report, based on research by Oxford University’s Computational Propaganda Project and Graphika Inc., warns that social media platforms have become a “computational tool for social control, manipulated by canny political consultants, and available to politicians in democracies and dictatorships alike.”
The New York Times: What we now know about Russian disinformation
The Russian disinformation operations that affected the 2016 United States presidential election are by no means over. Indeed, as two new reports produced for the Senate Intelligence Committee make clear, Russian interference through social media — contrary to the suggestion of many prominent tech executives — is a chronic, widespread and identifiable condition that we must now aggressively manage…
Regardless of what any tech executives may have said, the data indicate that this was not a small-scale problem fixable by tweaking a platform’s advertising purchase policy. Rather, it was a cross-platform attack that made use of numerous features on each social network and that spanned the entire social ecosystem.
Russian interference in the 2016 United States presidential election on social media was more widespread than previously thought and included attempts to divide Americans by race and extreme ideology, according to reports by private experts released on Monday by US senators from both parties.
The Russian government’s Internet Research Agency, based in St Petersburg, Russia, tried to manipulate US politics, said the reports, one by social media analysts New Knowledge and the other by an Oxford University team working with analytical firm Graphika.
Russia’s most inflammatory social media misinformation posts weren’t paid advertisements, according to a new report commissioned by the Senate Select Committee on Intelligence.
Accounts run by the Russian-backed Internet Research Agency (IRA) — at the center of Russia’s online efforts to interfere in U.S. presidential and congressional elections — saw far more traction with organic social media posts that purported to come from average American citizens, researchers said.
The report sheds light on the scale and scope of Russian social media campaigns, which have long been discussed in terms of ad spend and promoted posts. Facebook and Twitter first disclosed Russian-bought ads last fall, revealing posts paid for in rubles and ratcheting up the number of users who saw the advertisements in the months that followed.
The Oxford researchers found black Americans were also targeted with more advertisements on Facebook and Instagram than any other group. More than 1,000 different advertisements were directed at Facebook users interested in African American issues, and reached almost 16 million people.
The material was intended to inflame anger about the skewed rates of poverty, incarceration and the use of force by police among black Americans to “divert their political energy away from established political institutions”, the report said, adding that similar content was pushed by the Russians on Twitter and YouTube.
While it has been widely alleged that Russia, working through the Internet Research Agency, a Russian company based in Saint Petersburg, used Facebook and Twitter to influence the election, little had previously been known of its alleged use of other platforms, including YouTube, Instagram, Google+, Tumblr, and Pinterest. The report also examines email accounts run by Google’s Gmail, Yahoo, and Microsoft’s Hotmail.
“What is clear is that all of the messaging clearly sought to benefit the Republican Party – and specifically Donald Trump,” the report says.
The Oxford report details how Russians broke down their messages to different groups, including discouraging black voters from going to the polls and stoking anger on the right.
“These campaigns pushed a message that the best way to advance the cause of the African-American community was to boycott the election and focus on other issues instead,” the researchers wrote.
At the same time, “Messaging to conservative and right-wing voters sought to do three things: repeat patriotic and anti-immigrant slogans; elicit outrage with posts about liberal appeasement of ‘others’ at the expense of US citizens, and encourage them to vote for Trump.”
ComProp Director Phil Howard, as quoted in the Global News Podcast:
There’s an interesting sequencing here. Twitter was used first to test messages. It was used as early as 2009—that’s one of the big surprises for us in doing this research. By 2014-15, Facebook was the primary platform for misinformation. And since the election, Instagram has actually become the primary platform for Russian-origin misinformation.
The top post featuring Clinton came a month before the election, the researchers found — a soup of conspiracy theories alleging that she would win because of voter fraud and alluding to an armed uprising. It received 102,253 engagements, which can be anything from likes and shares to comments.
“This newly released data demonstrates how aggressively Russia sought to divide Americans by race, religion and ideology, and how the IRA actively worked to erode trust in our democratic institutions,” said Senate Select Committee on Intelligence Chairman Richard Burr (R-N.C.).
The New York Times: Russian Trolls Came for Instagram, Too
In total, posts from Instagram accounts linked to the I.R.A. received nearly 185 million likes during the two-year period reviewed by the researchers, and about four million comments, according to the researchers. This activity accelerated in 2017, as Facebook’s stepped-up security measures after the election pushed the Russians to social media sites where they could troll more freely.
“On Instagram, I.R.A. activities did not cease after the 2016 election but became substantially more vigorous,” read one of the reports, which was written by Oxford University researchers along with Graphika, a company that maps social network activity.
If you’ve only checked into this narrative occasionally during the last couple of years, the ComProp report is a great way to get a bird’s-eye view of the whole thing, with no “we take this very seriously” palaver interrupting the facts.
If you’ve been following the story closely, the value of the report is mostly in deriving specifics and some new statistics from the data, which Oxford researchers were provided some seven months ago for analysis. The numbers, predictably, all seem to be a bit higher or more damning than those provided by the companies themselves in their voluntary reports and carefully practiced testimony.
The reports, however, found that a majority of the Kremlin content “focused on societally divisive issues, most notably race.” The IRA’s campaign, Oxford and Graphika wrote, sought to convince African-American voters to “boycott” the elections and turn away from political institutions “by preying on anger with structural inequalities … including police violence, poverty, and disproportionate levels of incarceration.”
The New York Times: Russian 2016 Influence Operation Targeted African-Americans on Social Media
The second report was written by the Computational Propaganda Project at Oxford University along with Graphika, a company that specializes in analyzing social media. The Washington Post first reported on the Oxford report on Sunday…
Both reports stress that the Internet Research Agency created social media accounts under fake names on virtually every available platform. A major goal was to support Donald J. Trump, first against his Republican rivals in the presidential race, then in the general election, and as president since his inauguration.
During the period studied by the report’s authors, IRA posts on Instagram garnered more than twice as many engagements (such as likes or comments) as IRA posts on Facebook – 187m on Instagram vs 77m on Facebook – despite the fact that Facebook offers many more ways for users to interact with content, and Instagram has no native “sharing” button to promote virality.
And as public awareness of inauthentic behavior on Facebook and Twitter increased in 2017, the IRA increased its activity on Instagram. In the six months following the US presidential election, the IRA’s activity on Facebook, Twitter and YouTube climbed (by between 45% and 84%), while activity on Instagram soared (by 238%), according to the second analysis, by researchers at the University of Oxford.
While ads purchased on Facebook were targeted at U.S. populations, no data was provided to identify the location of users who engaged with targeted ads or organic posts.
“We don’t know who they are because Facebook didn’t provide this kind of data,” Dr. Mimie Liotsiou, a postdoctoral researcher from Oxford Internet Institute and one of the study’s authors, told Yahoo Finance. “That would be really useful to have. We did request richer data. There are ways to make that information possible, of course, without exposing any particular user’s name.”
The New York Times: Yes, Russian trolls helped elect Trump
Russian propaganda, one of the reports found, had about 187 million engagements on Instagram, reaching at least 20 million users, and 76.5 million engagements on Facebook, reaching 126 million people. Approximately 1.4 million people, the report said, engaged with tweets associated with the Internet Research Agency. “The organic Facebook posts reveal a nuanced and deep knowledge of American culture, media and influencers in each community the I.R.A. targeted,” it said.
There is no way to quantify exactly what this barrage of disinformation and manipulation did to American politics. But it should be obvious that what happens online influences our perceptions of, and behavior in, the offline world.