‘The Internet Is a Crime Scene’

As law enforcement continues its nationwide manhunt for violent pro-Trump extremists involved in last week’s deadly insurrection at the U.S. Capitol, one of America’s top experts in disinformation is here to remind you that if you want to understand how we got here, you need to look beyond Donald Trump and Washington.

“The internet is a crime scene,” says Joan Donovan, research director at Harvard’s Shorenstein Center on Media, Politics and Public Policy. “We’re collectively witnessing the aftermath of probably one of the biggest lies ever told in terms of the amount of people it reached and the effects that it had.”

The internet makes radicalization easy, and in the insurrection, Donovan sees a textbook case of how a conspiracy theory—fueled by an unending buffet of disinformation served up by algorithms—can grow online and spawn a community that commits the kind of real-world violence that took place at the Capitol on January 6. But the seeds of insurrection were planted much earlier.

“The way in which we operate online has a lot to do with good faith. We have good faith that people are posting things that are true,” says Donovan. “Our algorithms have good faith that a thing that says it’s news is news. That’s the kind of openness of the system that we built. And now we’ve realized that you only get a few good years with that, because bad people with bad intentions figure out how to use that system to their own ends.”

In the months leading up to last week’s insurrection, pro-Trump conspiracy theories repeatedly found purchase among the president’s supporters. Some posited that the coronavirus pandemic was a hoax to hurt Trump. Or that mail-in ballots were a fraud being perpetrated to hurt Trump. Or that last summer’s protests against the police killings of Black Americans were secretly organized and funded by liberal billionaire George Soros to hurt Trump. And on and on.

“It was repetitive,” says Donovan. “Over and over and over again, it had the same form, and the target didn’t change.”

For true believers, that drumbeat of misinformation deepened conviction and “drew together a bunch of people who saw Trump as an embattled leader facing the biggest theft of the century,” says Donovan.

Where does America go from here? How should we think about the internet when it so easily radicalizes people? And is there any way to use the internet to deradicalize them? For answers to all of that, POLITICO Magazine spoke with Donovan this week. A condensed transcript of that conversation follows, edited for length and clarity.

We are a week removed from the violent insurrection at the U.S. Capitol, which was largely the result of misinformation and disinformation—about both the outcome of the election and things like the QAnon conspiracy theory. You study and track this stuff. If you pull back, what is the state of misinformation right now?

The internet is a crime scene. It was cacophonous in terms of building this community of people who went to D.C., and now we see the cleanup: all of that online ephemera being removed—for good reason. But while that cleanup happens, the DOJ and FBI and journalists and researchers and civil society are doing everything they can to collect data.

We’re collectively witnessing the aftermath of probably one of the biggest lies ever told in terms of the amount of people it reached and the effects that it had: the claims about election fraud, which then led to the event at the Capitol. What we specifically have to reckon with about January 6 are the moments that led us there. Along the way, everybody knew that [pro-Trump attorneys] Rudy Giuliani and Sidney Powell and Lin Wood were practicing law in bad faith. They were doing everything they could online to make it seem as if they had legitimate court cases and that if not for the “horrible” courts and judges, we would have had a different outcome in the election. The repetition shows me there was a plan and that it was coordinated. There’s much more culpability to be spread around.

You’ve described the insurrection as the result of “networked conspiracism.” What do you mean by that?

When we use the word “networked,” we’re trying to address the scale by which people are communicating with each other about a particular topic, the way in which technology brings people and ideas together into the same space, and then the durability of those ideas in those communities.

Conspiracies usually take the form of rumors that travel through communities. For instance, Brandi Collins-Dexter, a research fellow with our team [at Harvard], has talked about conspiracies in the Black community that were somewhat protective—the idea that there was something bad in the water in Flint, which circulated before people had evidence that it was real. Conspiracies can actually fulfill a function for communities, warning them about very powerful political and moneyed interests that may be harming them.

But in the case of QAnon, a networked conspiracy really drew together a bunch of people who saw Trump as an embattled leader facing the biggest theft of the century: the theft of the election. In the lead-up to that, you had iterations of different conspiracies circulating through different communities online—saying that the pandemic was a hoax to bring down Trump, and the vaccine was a hoax to bring down Trump, and mail-in ballots were a hoax to bring down Trump, and [Anthony] Fauci was a hoax to bring down Trump, and so on. It was repetitive. Over and over and over again, it had the same form, and the target didn’t change.

Facebook and Twitter and YouTube did take some action to try to delete some QAnon content, especially as it morphed into a more militarized social movement during the “reopen” protests. But the damage had already been done.

Before 2020, major tech platforms rarely intervened to quell falsehoods or conspiracy-minded content. That changed notably when it came to Covid-19 and election misinformation. Why do you think it took so long, and has what they’ve done been successful?

There’s no financial or political incentive to look for the evidence [of misinformation and conspiracies being spread]. It took these companies years to address the nature of the problem. There were numerous instances in which platform companies should have taken action. Several very public scandals about misinformation had to happen—around Brexit, around the Russian Internet Research Agency.

After Charlottesville, we saw a moment of exception, when platform companies decided they were going to take off [certain users and content] in what later became their policies on dangerous groups and dangerous individuals. Downstream, that’s where you see the removal of the Proud Boys and Alex Jones.

And that leads me to think about the failure in this moment. The way media manipulators and disinformers operate is that they tend to leapfrog across different platforms. It’s to their benefit to have all of the platforms at their disposal. But even if they can only use one or two of them effectively, they can still get that information to circulate on the other platforms.

The lack of guardrails that transcend the corporations that provide these services really led to this moment. And unfortunately, we’ve gotten to this situation because disinformation is an industry.

Describe that. What does the disinformation industry look like at this moment?

It’s good money. And you can use it to wage an insurrection. I’m thinking here about someone like Steve Bannon, who has always seen media as a war of position and is very effective at making sure that the disinformation campaigns that he designs stay in a steady drumbeat in the more mainstream media ecosystem for months. It’s no surprise, of course, that Bannon was behind some of the pieces of disinformation related to Dominion Voting Systems, as well as probably one of the biggest scientific scams, or cloaked-science operations, that we’ve seen in a long time: He was responsible for flying a postdoctoral researcher from Hong Kong to the United States and then helping her write and publish this thing called the “Yan Report,” which alleges that Covid-19 was a Chinese bioweapon, and in which the author claims she worked in a lab developing it. That staging takes an incredible amount of money. So, Bannon is working with an exiled Chinese billionaire [Guo Wengui] who runs G News [a pro-Trump and anti-Chinese Communist Party online outlet] and shares an interest in destabilizing any and all foreign policy. Bannon really shows that there’s money to be made, as well as political gains.

When I think about this in terms of my research, I think about the true costs of misinformation. The openness and the scale, which used to be the virtues of platforms and the internet, have now been thoroughly appropriated and weaponized by disinformation-campaign operators.

There’s a concept I’ve heard you refer to when talking about media manipulation: “trading up the chain.” Can you describe how that works?

Yeah, so that’s actually from a book by Ryan Holiday called Trust Me, I’m Lying. He used to design viral media campaigns, and he noticed that if low-level bloggers covered, say, a movie he was marketing, it would quickly “trade up the chain,” because people at online news organizations and on cable news looked to those low-level blogs for interesting stories. In his book, he talks about posing as outraged people who had seen a certain billboard or movie trailer or controversial ad, which made it seem like even more of a story.

The idea is simple: You create outrage about something, and then you try to get journalists to pay attention to it. We see that tactic happen often on the internet. On anonymous forums and message boards, it’s almost a game, where if there is a breaking news event, they will try to plant disinformation—misidentifying a mass shooter is one of the “favorites” in this world—and then try to get journalists to report the wrong thing.

We probably saw the most interesting case study of that mechanism when an attempt at it failed with the Hunter Biden laptop story. [Trump allies] were going to plant it in the New York Post, which had very low standards of editorial verification, and it was very clear to everyone that there was something amiss—be it the acquisition of the laptop, or the way in which it found its way to Rudy Giuliani. Nobody knew what other ephemera was on the computer, like recipes or grocery lists, just that there were crimes and sex tapes and drugs.

In that instance, trying to “trade up the chain” backfired. Everybody who works in [the disinformation-debunking sphere] knows that disinformers consistently go back to the same tactics. We were all ready for a hack-and-leak operation. That particular tactic really wasn’t going to work the way that they had hoped.

You describe this cycle of media manipulation where people with power—whether elected officials or those with resources—spread disinformation about, for instance, the election being rigged. And then they point to the fact that their supporters believe that disinformation as evidence that this is something “real” people are worried about and needs to be taken seriously. How do you get out of that cycle? It seems like it’s a downward spiral.

It does, and it’s been feeling like that for several years now. At this stage, I think even calling it “disinformation” is doing a disservice, because, as a researcher, I haven’t seen anything on this scale before, and it touches every kind of media that we have.

Our media ecosystem is incredibly fragile and broken. The way in which we operate online has a lot to do with good faith. We have good faith that people are posting things that are true. We have good faith that people are communicating openly and honestly with one another. Our algorithms have good faith that a thing that says it’s news is news. That’s the kind of openness of the system that we built. And now we’ve realized that you only get a few good years with that, because bad people with bad intentions figure out how to use that system to their own ends. And they pay money to do it, profit off it and face no consequences for their actions.

It’s telling how many interviews I’ve done this week that center around platform companies taking away Trump’s [social media] toys, as if that’s the worst thing you could do to the leader of the U.S. government. And that’s because we have very low faith in institutional accountability. We have very low faith in our governance structure to be able to remedy the poison that Trump brought to the Capitol. And that’s what I worry about when I think about how we fix this, or how we move beyond it, or how we have peace again.

I place responsibility for what happened with our gatekeepers. That is, the politicians who were allowed to get away with this, who fell in line behind Trump in the lead-up to this and made it seem like there was going to be another outcome on January 6 if the crowd intervened. I place responsibility with them.

I don’t want to say that it’s as if some groups of people have been mindlessly deluded into this, or that somehow technology performed hypnosis. These are people who believe the system is woefully broken—which a lot of us can agree with. But they’ve also chosen who they’re going to believe. And in that case, they’re not operating on facts; they’re operating on belief. And they believe Trump was chosen by God, and that he called upon them to go to the Capitol and save him.

And if you listen, honestly, it’s not deep. It’s so shallow. It’s so in-your-face. It’s right there when you look at the Proud Boys the night before [the insurrection] chanting, “F--k Antifa” and “1776.” You’ve just got to believe that that’s what they believe. You can’t use your imagination and assume, “Well, they must not really believe that” or “They must not know.”

If the internet can radicalize people, can we also deradicalize people online?

There’s a mass phenomenon in which you can bring people into a worldview and have them see it your way. When we study this, we look for signs along the way that a YouTuber or podcaster is trying to radicalize people. How do they talk about gender? Are they talking about women “denying you” their bodies, rather than women being thinking beings—and, hey, you’re not that cool? What are they saying about people of color? Are they telling you that Black people are stealing things from you, that they’re not doing as much in this world as you are, that they are not as deserving of college scholarships? How are they talking about immigrants? Are they saying immigrants are siphoning off of the goodwill of white Americans and that they’re better off in another country where they can’t get the benefits of living in America?

As you look at these “red pills,” you realize that they almost always come in a package. And the hardest one is actually the oldest: “It’s the Jews.” But very rarely do we hear a radicalized podcaster start with the Jews, because we have a society where people hear that and right away it raises flags: “Did I just hear you say ... ?” So, they start [more subtly] and move towards the “Jewish question.” They pose this idea that actually everyone you voted for has nothing to do with the governance of the country, and that it’s really this small group of people controlling the government, controlling the media, controlling entertainment. And you can see it laid out on some of the anonymous message boards, the memes that just freely circulate: “Here are all the Jewish people that work at CNN.”

If you’re living in those worlds where you see that stuff day-in and day-out, you’re no longer shocked by seeing swastikas. You’re no longer shocked by hearing that some woman was harassed and assaulted. You’re almost excited by it. When you see people of color assaulted by police, you get excited about that. That, to you, is redemption for a world that has wronged you.

When I think about deradicalizing someone or bringing someone back from the brink, that is a process of love and understanding and community-building. I don’t think it’s necessarily just like, “Oh, somewhere along the way, this individual must have been broken, and then this filled that void.” We’re all broken in some way. All of us have had something that we’ve had to overcome. It’s really about those moments when someone could have led you out of that.

But we failed in the design of the technology that connects people. [Instead of getting led out,] people get endless streams of Nazi propaganda under the guise of “history,” or endless streams of podcasts that are taking in-the-moment news about Jeffrey Epstein and the horrible things that he did, and tying that back to, you know, “Jewish people” or whatever. Technology gives you easy access to these things. It’s all laid out for you.

When we do this research in our lab, we have an adage: Let the algorithms do the work. If I want to discover white-supremacist content, I find a little bit of it and then let the algorithms do the work. I click every recommendation. I follow every suggested follow. And within an hour, I have a corpus of data to work with.
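[The “let the algorithms do the work” approach Donovan describes is, in effect, snowball sampling over a platform’s recommendation graph. Below is a minimal sketch of that collection loop, assuming a hypothetical get_recommendations() function in place of whatever real API or scraped recommendation panel a study would actually use.]

```python
from collections import deque

def get_recommendations(item_id):
    """Hypothetical stand-in for a platform's 'recommended next' links.
    A real study would replace this with API calls or scraped panels."""
    return []  # placeholder so the sketch runs without network access

def snowball_sample(seed_ids, max_items=1000, max_depth=3):
    """Breadth-first crawl of recommendation links from a few seed items:
    start with a little content, follow every recommendation, keep the corpus."""
    corpus = {}  # item_id -> depth at which it was first reached
    queue = deque((seed, 0) for seed in seed_ids)
    while queue and len(corpus) < max_items:
        item_id, depth = queue.popleft()
        if item_id in corpus or depth > max_depth:
            continue
        corpus[item_id] = depth
        for rec in get_recommendations(item_id):
            if rec not in corpus:
                queue.append((rec, depth + 1))
    return corpus

# Usage: seed with a handful of manually identified items, then let the
# recommendation system surface the rest of the corpus.
sample = snowball_sample(["seed_item_a", "seed_item_b"])
print(f"Collected {len(sample)} items")
```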

Until the moment that is no longer true, we’re going to keep doing this research and we’re going to keep banging on the doors of platform companies saying that something’s really broken, and we’ve got to fix it.