Expert dissects Tuesday's Supreme Court arguments in case that seeks to hold Google accountable for extremist content

Supreme Court justices signaled Tuesday that they will proceed with caution in the first of two cases they’re hearing this week that will help determine how internet companies handle speech and content moderation.

On Tuesday the justices heard oral arguments in Gonzalez v. Google, a lawsuit filed by the family of an American woman, Nohemi Gonzalez, who was killed in a 2015 Islamic State group attack in Paris. The family alleges that Google, which owns YouTube, is liable in her death because its algorithms promoted extremist content to people likely to be susceptible to it.

The case asks whether Section 230 of the Communications Decency Act protects an internet platform’s automated recommendations. Google argues that it has immunity under the statute because ISIS, not YouTube, created the video content.

“Section 230 is the foundational law of the modern internet. It was devised in 1996 to encourage the development of internet platforms that would facilitate speech,” Anupam Chander, a Georgetown University law professor, told Yahoo News. The text of the 1996 statute at the heart of Tuesday’s hearing doesn’t specifically address algorithmic recommendations, because their use was not widespread when the law was enacted.

“Everybody is trying their best to figure out how ... a pre-algorithm statute applies in a post-algorithm world,” Justice Elena Kagan said during Tuesday’s hearing. “You know, these are not like the nine greatest experts on the internet,” she later quipped, referring to the high court justices.

For nearly three hours, the justices asked attorneys representing the Gonzalez family, Google and the Department of Justice how the court could draw a line without setting in motion unintended consequences for how users experience the modern internet.

“I think the justices are very much aware that this ruling will have huge implications, and so they are approaching it with great seriousness,” Chander said.

Yahoo News spoke with Chander following Tuesday’s hearing for his perspective on the proceedings. Some responses have been edited for length and clarity.

Yahoo News: What were some of the key arguments made Tuesday by the plaintiff’s attorney, Eric Schnapper?

Chander: The plaintiff is arguing that the anti-terrorism statute allows for aiding-and-abetting liability, which is very broad, and that YouTube’s actions in inadvertently permitting material that might support terrorism should make it liable under that statute for the horrible terrorism the plaintiffs faced.

The plaintiffs claim that Section 230 doesn’t protect Google.

Did the justices seem receptive to that argument?

I think there was real confusion about what the plaintiff was actually arguing. The plaintiff said that when YouTube displays thumbnails of videos on the right side of the screen, that is content Google itself makes, and therefore Google is liable for those thumbnails. That argument did not persuade the justices in the least.

Justices Kagan and [Brett] Kavanaugh both mentioned that narrowing down Section 230 should be left up to Congress, but can’t the court also do that?

The Supreme Court certainly can decide they are not in a position to rewrite Section 230 in a way they would prefer. They may think that Section 230 is bad policy. But that’s not their job. Their job is to interpret the statute and leave to Congress what internet policy should be, with respect to that particular statute at least. So I think it’s entirely possible that the court says, “This is what Section 230 says; we may not be happy about it in all instances, but we aren’t here to rewrite the statute in the way we would have written it had we been members of Congress.”

At one point, Justice [Amy Coney] Barrett questioned the lawyer for the plaintiff about his interpretation that Section 230 extends beyond internet companies to regular users. What do you think her reading of that issue was?

That argument comes from my brief, so I was pleased to see that she read it. As she pointed out, if you say Section 230 doesn’t cover recommendations, then it’s hard to make sense of what it means to immunize users, because users don’t host the material, they only recommend other people’s content. So she was trying to figure out what user immunity would look like from the plaintiff’s view. And she did not get an answer to that question.

U.S. Deputy Solicitor General Malcolm Stewart also presented arguments on behalf of the Justice Department. What is the government’s argument in this case and were there any notable highlights?

They immediately distanced themselves from the plaintiff’s thumbnail argument. Their central argument revolves around a hypothetical algorithm that promotes discrimination in employment: if an algorithm is doing discriminatory things, for example, showing some job listings to one race and not to another, Section 230 shouldn’t immunize that behavior.

They’re trying to say that Section 230 leaves some room for plaintiffs to challenge a platform’s own conduct, and I think there’s a genuine question about exactly which platform conduct falls within the scope of Section 230 and which falls outside it.

What were some of the key arguments made by Google’s attorney, Lisa Blatt?

Everything on the internet depends on these algorithms, because there’s so much material on the internet; you couldn’t expect a search engine to produce random results. We expect the internet to serve us information. If a platform is liable for its own conduct in the act of recommending something, then Section 230 immunity becomes almost meaningless, because all these platforms rely on some kind of automated filtering system.

Craigslist filed an amicus brief on Google’s side saying, in effect, “We’re an online marketplace, and we are constantly in the business of organizing what people sell to each other and how we display it, so we would be held liable for how that’s organized.”

And so Lisa Blatt is saying that this is a central core part of how the internet operates today, and narrowing Section 230 immunity would have serious ramifications for the internet.

How did the justices respond to Google’s argument?

Justice [Ketanji Brown] Jackson asked some important questions about the broad immunity in Section 230, asking whether it was really intended to be that broad. I think she offered some important critiques of Google’s position in the case.

I think all the justices asked hard questions of all the parties. This was a case with serious engagement on the issues from all the justices, and it will be a very interesting opinion. Ultimately, I think Google will win this case. But the question is how it wins: what standard the court uses to reach that result.

The Supreme Court will hear a related case, Twitter v. Taamneh, on Wednesday. Relatives of Nawras Alassaf, who was killed in a 2017 ISIS-related attack in Turkey, allege that ISIS used Twitter, Google and Facebook to recruit members and that the tech companies didn’t do enough to curb their extremist users and should be held legally responsible. How do you think oral arguments from today’s hearing will affect that case?

I think that Twitter was probably reasonably happy with this argument, because on the substantive support-for-terrorism claim, the justices seemed skeptical that there was liability here, on a very similar set of facts. But it’s a different statute. Whenever there was discussion of aiding and abetting under the common law, the justices said the standard for aiding and abetting is pretty high, but they also noted that it is the common law, and therefore subject to change and to determination by each of the 50 states.

So that variability is a risk for these companies. But in the Twitter case, only one statute is at issue, the Anti-Terrorism Act, which takes a very broad approach to support for terrorism, too broad in my view. I think it’s going to be an interesting argument; it’s just that the connection between what Twitter or YouTube did in these cases and the actual terrorism that was grievously inflicted is relatively attenuated.