What global elections have taught Silicon Valley about misinformation

The swift action Twitter and Facebook took to stifle an unverified New York Post article and the crushing political pressure that forced them to reverse course illustrate a key reality: With just weeks until the U.S. election, tech giants have yet to sort out their misinformation problems.

That challenge hasn't been confined to the U.S., either. Silicon Valley's social media stalwarts have faced misinformation woes in elections all around the globe since 2016, prompting them to revamp their content moderation policies or invent entirely new ones in response to emerging threats and political demands.

“It’s always an election year on Twitter — we are a global service and our decisions reflect that,” Twitter’s vice president of public policy for the Americas, Jessica Herrera-Flanigan, told POLITICO this summer. “We take the learnings from every recent election around the world and use them to improve our election integrity work.”

The New York Post incident elicited partisan howling last week, which led the companies to rethink how they handle content tied to hacked materials. Ultimately, they changed policies that had been put in place to avoid a repeat of 2016, when emails that were stolen and leaked as part of a Russian interference campaign rocked the race.

Other rules are also in flux. Earlier this month, Facebook announced a moratorium on all political advertising in the period after Election Day, despite CEO Mark Zuckerberg's previous pledge not to make any further election-related policy changes. Google imposed a narrower post-election ban on its advertising platforms as well.

And this week, Twitter revamped how users across the globe retweet a post, prompting them to add their own commentary or insight as a way to mitigate the mindless, single-click spread of election-related misinformation.

The social media sites have had to constantly introduce new policies and tweak existing ones to account for emerging threats from domestic and foreign operatives. It’s an iterative process that the companies say is re-evaluated after every major election around the world. Those elections, spanning countries from Brazil to Nigeria to India, as well as the European Union and the U.K., have each offered lessons that are now being applied as voters cast ballots in the U.S.

“The ugly American thing to say is that all of those have been attempts to get election systems right in order to not get 2020 wrong here in the United States,” said Graham Brookie, director of the Atlantic Council’s Digital Forensics Lab, which studies misinformation around the world.

“2020 is this kind of Super Bowl moment for how we are able to collectively deal with disinformation,” Brookie added.

But elections outside the U.S. have been no less complicated — nor have they come with less scrutiny — for the tech companies. They aim to keep policies simple and consistent, but ultimately have to navigate unique rules and traditions around political speech and activity in each location where they operate. That's led to some hiccups.

In Europe, for instance, Facebook declared in April 2019 that political advertisers must register in each country where they run ads, effectively cutting off the European Parliament and political parties with members across the continent. The policy was meant to curb foreign interference, but it didn't suit a bloc composed of more than two dozen countries. Facebook amended it weeks later.

Direct messaging misinformation

Voters in Brazil and India were deluged with misinformation in 2018 and 2019 via WhatsApp. The Facebook-owned messaging app, which is more popular outside the U.S., became a breeding ground for fake news and images that spread rapidly through direct messages and private groups.

Facebook's crackdown was complicated by the app’s encryption technology, which prevents even the company from seeing what messages users share. That prompted Facebook to substantially reduce the number of people or groups to which a message can be forwarded, slowing the spread of misinformation by making mass forwarding more cumbersome. At the time, the change led to a 25 percent decrease in message forwarding globally, the company said.

Facebook has now applied similar restrictions to its Messenger platform in the U.S. and around the globe. In September, the company announced it would only allow users to forward a message to five recipients at a time to provide “yet another layer of protection by limiting the spread of viral misinformation or harmful content.”

Political ad limitations

Google and Facebook have repeatedly amended their rules for political advertisers in the lead-up to Election Day 2020, most recently putting limitations on the ads that can be published in the days after the vote, when a winner might still be uncertain.

Google previously curtailed the degree to which advertisers can target their messages to specific audiences, and Twitter did away with its smaller political advertising business entirely last year. Each of those changes has drawn criticism from both political parties and created new hurdles for their vast networks of digital consultants.

Before making such moves, the companies first tinkered with political ad policies abroad. In January 2019, for instance, Facebook temporarily stopped allowing advertisers outside of Nigeria to buy ads related to the country’s election, as a way to tamp down on the kind of foreign meddling seen in the 2016 U.S. campaign and others. That policy was subsequently expanded to elections in Ukraine, Thailand and Indonesia, among other places.

Political ad transparency

In 2016, Russian operatives bought political ads on social media platforms, sometimes paying for them in rubles, without tech companies batting an eye. The outcry after the fact prompted Facebook and Google in 2018 to impose new rules requiring advertisers to prove their identity and location. They also created a public database of political ads with information about their buyers.

The companies have since extended similar authentication requirements and public disclosures to much of the globe, with Brazil, India and the European Union among the first brought on board. Indeed, rules created for the 2018 U.S. midterm elections have served as a global model.

And both companies have added to those rules ahead of the 2020 election, requiring additional disclosures and further proof of identity in an effort to cut down on political advertisers exploiting loopholes to shield their real identities or funders.

Anti-misinformation policies have not always been applied at the same time around the world, leaving some countries waiting for new rules or transparency measures to take effect. In Europe, for instance, there was grousing that Google and Facebook made information about political ads available there months later than in the U.S.

Expanded fact checking

Facebook first started working with third-party fact checkers in the weeks following the 2016 presidential election as the volume of fake news on the social network came into focus. Today, Facebook, Twitter and Google all engage in some degree of fact checking and apply labels to potentially misleading posts — a practice that has brought political headaches and accusations of bias.

Facebook’s move to curb the spread of the New York Post story until its independent fact checkers could determine its accuracy fits a playbook it has deployed ahead of elections in other parts of the world.

Facebook has expanded its fact-checking efforts in countries like Ireland, Portugal, Greece and Australia ahead of elections there. Partnering with fact checkers and journalists during elections was a practice Google and Facebook piloted in France in 2017 amid fallout from the U.S. election months prior.

Google started appending "information panels" to election-related YouTube videos in India and Brazil last year to point visitors to additional facts from credible sources. Those information panels were brought to the U.S. in April for videos related to the coronavirus pandemic and the election, and were later expanded to Germany and the U.K.

“Over the last few years, we’ve significantly increased our investments in the systems and processes that enable us to effectively remove violative videos, raise up authoritative content and reduce the spread of borderline content,” YouTube spokesperson Ivy Choi said in a statement. “We’ve developed this solution to be scalable globally, and apply it to elections around the world, including the 2020 U.S. election.”