Russia's manipulation of Twitter was far vaster than believed

A cybersecurity firm analyzed a massive data set Twitter released in October 2018 on nearly 3,900 accounts and 10 million tweets.

Russia's infamous troll farm conducted a campaign on Twitter before the 2016 elections that was larger, more coordinated and more effective than previously known, research from cybersecurity firm Symantec out Wednesday concluded.

The Internet Research Agency campaign may not only have had more sway — reaching large numbers of real users — than previously thought, it also demonstrated ample patience and might have generated income for some of the phony accounts, Symantec found.

The research analyzed a massive data set that Twitter released in October 2018 on nearly 3,900 suspended accounts and 10 million tweets. It found that the average lag between account creation and first tweet was 177 days, and that the most retweeted account garnered 6 million retweets, fewer than 2,000 of which came from within the IRA-linked network of accounts.

The huge delay between the creation of an account and the initial tweet points to a lot of patient preparation, and the retweets indicate that a lot of unaffiliated Twitter users were amplifying the IRA's message.

For some lawmakers, revelations about the broader scope of Russian disinformation on Twitter are a stark reminder that the U.S. government has precious little time to safeguard the 2020 election from foreign interference.

"In terms of the sophistication, there is a group of us who are looking at what we can do to protect ourselves in 2020," said Rep. Elissa Slotkin (D-Mich.), a former CIA analyst who urged her fellow lawmakers to adopt the sweeping House bill that includes provisions to strengthen security safeguards at the polls, although it does not specifically deal with social media disinformation.

"[T]here's a whole half of [special counsel Robert] Mueller report that's just about straight old-fashioned Russian information warfare. We've been educating ourselves so that we can push forward legislation to fill some of those holes," she said, suggesting that lawmakers have a way to go in finding a legislative fix to the online disinformation problem.

For Congress, a major roadblock is just understanding how tech companies police their platforms — and what they can do to prevent a repeat of 2016.

"Our response to bad content on social media is generally to say it should be taken down. That's because that's something we as politicians who are not technical experts understand," said Rep. Tom Malinowski (D-N.J.), former assistant secretary of State for Democracy, Human Rights and Labor.

"What we've had a harder time grasping is the virality machine that's built into the social networks through the algorithms that determine what news we see on a daily basis," he said.

Lawmakers in the U.S. and abroad have increased pressure on social media companies to do more to stop the spread of disinformation, and companies such as Twitter and Facebook have responded by updating their policies and removing troll accounts and false information.

But ridding the platforms of false information is a tough challenge both for lawmakers and social media platforms, Malinowski acknowledged.

"It's difficult because it would mean challenging the business models of these companies," he said.

While most of the accounts Symantec reviewed were automated, many also showed signs of manual intervention, such as slight wording changes in an apparent bid to dodge detection.

"While this propaganda campaign has often been referred to as the work of trolls, the release of the dataset makes it obvious that it was far more than that," the company wrote. "It was planned months in advance and the operators had the resources to create and manage a vast disinformation network."

Some accounts also appeared to generate revenue via URL shorteners, with one account even earning as much as $1 million, although those were apparently rogue accounts operating outside the IRA's main mission.

In a subsequent tweet on Wednesday, Symantec said the dollar amount was just a maximum estimate of earnings. The company removed that figure from the initial report while it said it would "investigate some additional data."

The research also found that the accounts played to partisans on the left and right even more than previously believed, and that most of them were fakes pretending to be regional news outlets, while a smaller subset amplified those messages.

"The campaign directed propaganda at both sides of the liberal/conservative political divide in the U.S., in particular the more disaffected elements of both camps," Symantec found.

And the company warned in the closing message of its study: "The sheer scale and impact of this propaganda campaign is obviously of deep concern to voters in all countries, who may fear a repeat of what happened in the lead-up to the U.S. presidential election in 2016."

In response to the Symantec research, a Twitter spokesperson said the company's "singular focus is to improve the health of the public conversation on our platform, and protecting the integrity of elections is an important aspect of that mission."

Additionally, the spokesperson said, Twitter has made "significant strides since 2016 to counter manipulation of our service, which includes our releases of additional data in October and January related to previously disclosed activities to enable further independent academic research and investigation."

European officials and lawmakers are equally worried about the effect of social media disinformation on the political process.

“There are plenty of reports of such disinformation in the run-up to the recent European elections, in particular activity by bots and fake accounts," said Julian King, EU commissioner for security. "We shouldn’t accept this as the new normal."

The Symantec report comes as European officials evaluate the impact of interference attempts on the European Parliament election held at the end of May.

"This confirms analysis that social media disinformation of the scale seen in 2016 requires some long work," said Fabrice Pothier, chief strategy officer at Rasmussen Global and senior adviser to the Transatlantic Commission on Election Integrity.

Pothier said an earlier analysis of Russian interference, released by Microsoft in February, showed Russian "operatives were seeking to ‘harvest’ personal data of Republican-leaning voters in order to further build their database of individuals to target in future operations."

"The fact that disinformation operations support individuals/narratives from both sides of the aisle is not new," said Pothier. But he said the new findings point to the "troubling reality that we lack a full picture of what is really happening in the social media sphere because the main platforms are not fully transparent."

EU lawmakers pushed hard for social networks to provide transparency reports on what they were seeing in their data analysis. But many questions remain, as platforms like Google and Facebook are hesitant to fully open up to researchers and authorities seeking to study the problem.

A group of European lawmakers, including 17 members of the European Parliament, endorsed a call to launch a parliamentary inquiry into interference in the 2019 election.

Experts following the vote have pointed out that the EU election campaign showed the growing sophistication of social media influence networks, with greater emphasis on promoting local content and on amplifying real Twitter users who generate their own, often divisive, political content.

"Creating something out of nothing is really hard," said Ben Nimmo, a disinformation expert at the Atlantic Council. "It's a lot easier to amplify existing content."

Despite wide-ranging investigations by a number of campaigning groups, as well as social media companies, no one has yet been able to categorically link any Russian-backed groups to interference with the EU elections.

One expert said much of the information released by Symantec wasn't new, but that it underlined how those behind disinformation campaigns are engaged in continuous, persistent attempts to distort public opinion.