Fight Against Coronavirus Misinformation Shows What Big Tech Can Do When It Really Tries

Big tech companies are being confronted with the swift spread of online misinformation about the coronavirus—from dangerous health advice to racist conspiracies to scammy products—and the industry has launched what looks like all-out war to fight it. It’s a high-stakes test case for defense operations at companies including Amazon, Facebook, Google, and Twitter, and experts say their efforts appear more aggressive than any previous crackdown on false and misleading information.

The push shows how much the platforms can do when they pull out all the stops, according to scholars who study the subject—going far beyond their efforts leading up to the 2016 election, when political misinformation became a prominent issue, and in the years since. But it also reveals some inherent limitations to fighting bad information, even with Big Tech’s vast resources.

“They’ve definitely been more aggressive in responding to the coronavirus crisis than they have been in going after political misinformation,” says Paul Barrett, a New York University professor who studies online misinformation.

The companies are up against a buffet of misleading and potentially dangerous information, such as hoaxes alleging that the Chinese government or the pharmaceutical industry cooked up the coronavirus in a lab, false claims that it’s a stolen bioweapon, and a widely promoted “cure” that the FDA has likened to drinking bleach.

The response may be more forceful than ever before, but these aren’t some newly developed break-in-case-of-emergency superweapons, experts say. “All of a sudden, they’re doing some things that are actually quite effective. And they’re not magical, either—they didn’t require years and years of research,” says Jevin West, director of the Center for an Informed Public at the University of Washington.

The tech platforms’ weapons in the fight fall into three main categories: promoting good information, demoting bad information, and keeping misinformation from appearing in the first place.

The first method—highlighting the best information—is widespread. Search for “coronavirus” on Google, and you’ll get the latest stories from trusted news sources, followed by links to the World Health Organization and the Centers for Disease Control and Prevention—all with bright red badging. Those links are followed by page after page of authoritative information from public health organizations. The same search on YouTube, which is owned by Google, brings up reliable news clips. Several companies, including Amazon, Facebook, Google, and Twitter, are giving away advertising slots to trusted organizations and displaying prominent links on the social networks’ home pages or atop virus-related search results.

The second tactic is to take direct aim at misinformation after it’s posted. Facebook, for example, solicits feedback from dozens of outside fact-checking organizations, like PolitiFact, the Associated Press, and Reuters, which can label claims in “public, newsworthy posts” as false. When they do, Facebook attaches a message to the disputed post—the label on one recent post said, “The primary claims in the information are factually inaccurate”—and keeps it from spreading widely on newsfeeds and in groups. Facebook says it’s also removing some of the most egregious posts wholesale, and Amazon has told multiple news outlets that it took down more than a million products that had false claims about preventing, treating, or curing coronavirus and COVID-19, the disease it causes.

Finally, several companies have banned advertisements that disseminate bad information or try to make a quick buck off the virus. Facebook, for example, no longer allows any advertisements for face masks—protective wear that’s vital for health workers but that the CDC does not recommend for most other people—and Google and Twitter have announced policies against ads capitalizing on the crisis. Google says it has blocked “thousands of ads” related to the epidemic over the past month and a half.

West and other misinformation researchers are hard at work testing the effectiveness of the companies’ interventions. The results aren’t in yet, but anecdotally, West says the simple banners at the top of virus-related searches and free ads from the likes of the WHO and CDC are “by far the most effective thing we’ve seen.”

The Limits of Misinformation Defenses

The companies’ efforts, while beyond what they’ve done in years past, are nowhere near shutting down the online coronavirus “infodemic.” Misinformation still thrives on these sites. In one example, Consumer Reports’ Ryan Felton reported on fraudulent virus-related products on Amazon and found that some were still available even after the site’s purge. Others have reported that price gouging remains rampant on the platform, and that pernicious lies continue to circulate on Facebook and Twitter.

“I don’t think we’ve ever seen the social media world come together on an issue like this—and yet still it’s falling short,” says UW’s West.

That’s in part because the platforms’ misinformation defenses have never been tested by a crisis this fast-moving and this big. Election-related skullduggery tends to center on one country or region at a time; other health- or science-related misinformation operates at a constant hum rather than inundating the internet all at once in the span of a few months. “There’s always been health misinformation on Facebook,” says Renee DiResta, research manager at the Stanford Internet Observatory. “But now the entire world is posting about the same thing.”

Even in an all-hands moment like this one, some efforts are controversial. For instance, Barrett says he supports removing “provably false content”—especially when health and safety are at stake. But takedowns can also backfire, DiResta says. “That then creates the perception that the information is being censored, and there’s a little bit of concern that that creates or feeds a conspiracy that the platform is trying to prevent you from knowing the truth.”

In interviews with the press, Facebook CEO Mark Zuckerberg has promised that better artificial intelligence tools under development could help the company keep pace with the sheer scale of the misinformation problem. But an automated solution that works across languages and at scale is unlikely to arrive anytime soon, experts say. For now, Facebook uses AI to surface claims that need a closer look and pass them along to fact-checkers, who are often overwhelmed. “This is not something AI does well,” West says. “There’s too much context and too many ways to subvert and adapt to the system.”
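To picture the division of labor West describes, here is a minimal, purely hypothetical sketch in Python: an automated first pass scores incoming posts against a list of known debunked claims and queues the matches for human fact-checkers. Everything in it, including the phrase list, the scoring function, and the queue, is an illustrative stand-in rather than anything Facebook has described; production systems rely on trained multilingual models, not keyword matching.

```python
# A deliberately crude stand-in for AI-assisted triage: flag posts
# that echo known debunked claims and queue them for human review,
# highest score first. The phrase list and scoring are hypothetical;
# real systems use trained models, not keyword matching.
from dataclasses import dataclass
from typing import List

KNOWN_DEBUNKED_PHRASES = [
    "drinking bleach cures",
    "stolen bioweapon",
    "cooked up in a lab",
]

@dataclass
class Post:
    post_id: str
    text: str

def triage_score(post: Post) -> float:
    """Fraction of known debunked phrases appearing in the post text."""
    text = post.text.lower()
    hits = sum(phrase in text for phrase in KNOWN_DEBUNKED_PHRASES)
    return hits / len(KNOWN_DEBUNKED_PHRASES)

def build_review_queue(posts: List[Post]) -> List[Post]:
    """Surface flagged posts for human fact-checkers, highest score first."""
    flagged = [p for p in posts if triage_score(p) > 0]
    return sorted(flagged, key=triage_score, reverse=True)

if __name__ == "__main__":
    queue = build_review_queue([
        Post("a", "Good news: drinking bleach cures the virus!"),
        Post("b", "Wash your hands and avoid crowds."),
    ])
    for post in queue:
        print(post.post_id, round(triage_score(post), 2))
```

Even in this toy version, the pipeline’s output is a queue that humans must empty one item at a time, which is exactly the bottleneck the fact-checkers face.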

A Google spokesman contacted by CR pointed to Google-owned YouTube’s work to stanch misinformation as a sign of the company’s progress in this area. “In 2019 alone, we launched over 30 different changes to reduce recommendations of borderline content and harmful misinformation, including climate change misinformation and other types of conspiracy videos,” said Farshad Shadloo. “Thanks to this change, watch time this type of content gets from nonsubscribed recommendations has dropped by over 70 percent in the U.S.”

Facebook and Twitter did not respond to CR’s requests for comment on the issue.

The companies haven’t exhausted all their options. But there’s likely a ceiling to their ability to keep bad information away from their users, especially during a sudden global crisis.

“They could do more—but they can’t do everything,” says Justin Brookman, CR’s advocacy director for consumer privacy and technology. “They can’t solve for human nature; they can’t police that racist or confusing or crazy email forward from Grandma.”

Individuals can also use the “SIFT technique” to investigate questionable content. The acronym stands for Stop, Investigate the source, Find better coverage, and Trace claims, quotes, and media to the original context. Developed by Mike Caulfield, a digital information literacy expert at Washington State University, the method can help readers separate reliable information from sketchy posts online.

Extending the Coronavirus Playbook

Experts tell CR they hope the tech platforms transfer their vigor for battling coronavirus misinformation to the many other flavors of falsehood and trickery that live on their sites. If the coronavirus playbook were reapplied broadly, “you would see all the tech companies take a hard line on untrue content,” says Melissa Ryan, CEO of CARD Strategies, a consulting firm that researches misinformation. “They would be able to do it without political consideration.”

But the companies have been reluctant to give the same treatment to political issues, or even some scientific ones that have taken on political overtones. Consider climate change, another matter that affects public health and on which the scientific community has reached broad consensus. You won’t see any blaring warnings or curated fact-checks when searching for climate information on these platforms.

“Political influence questions definitely play a part” in the companies’ less rigorous policing of climate misinformation, says Sam Gregory, a global misinformation expert at the nonprofit Witness.

And lax policies extend further into the political world, where even bald falsehoods, if uttered by a politician or in a political ad, are often insulated from platforms’ rules barring certain types of misinformation.

“There’s an understandable reluctance to appear to be clamping down on one or the other side in the political sphere, and I think that probably helps explain the distinction—why there’s a difference between what we’re seeing in recent weeks with the health crisis as opposed to what we see generally in the political realm,” says NYU’s Barrett.



