The Supreme Court is about to hear 2 major cases that could transform the internet

This week the U.S. Supreme Court is set to hear two major cases involving big technology companies, and key aspects of the internet will be on the line when it issues its rulings later this year.

First up, on Feb. 21, is Gonzalez v. Google. The underlying lawsuit was filed by the family of an American woman, Nohemi Gonzalez, who was killed in a 2015 ISIS attack in Paris. The family alleges that Google, which owns YouTube, is liable because its algorithms promoted extremist content to people likely to be susceptible to it.

This case looks at whether Section 230 of the Communications Decency Act provides protection for an internet platform’s automated recommendations. Google argues that it has immunity under this statute because ISIS, not YouTube, was the creator of the videos.

“Section 230 is the foundational law of the modern internet. It was devised in 1996 to encourage the development of internet platforms that would facilitate speech,” Anupam Chander, a Georgetown University law professor, told Yahoo News. “Section 230 said the internet speakers who say harmful things are to be held responsible for what they said, but the platforms that might carry that kind of harmful speech will not be liable for that speech.”

In a related case, the Supreme Court will hear oral arguments on Feb. 22 in Twitter v. Taamneh. Relatives of Nawras Alassaf, who was killed in a 2017 ISIS-related attack in Istanbul, argue that Twitter, Google and Facebook should be held legally responsible, alleging that ISIS used the platforms to recruit members, and that the tech companies didn’t do enough to curb their extremist users.

Twitter argues that the lower court ruling created a "statute of impossible breadth," saying that businesses like banks and rental car companies could also be held liable if a judge found that they could have done more to keep terrorists from using their services.

This case concerns Section 2333 of the Anti-Terrorism Act and whether websites can be held liable for violence connected to their platforms.

“We've got these two cases, and they're going in different directions and have two different statutes,” Chander said. To explain the arguments on both sides of the cases and the implications they could ultimately have for a user's internet experience, Chander broke them down for Yahoo News. Some answers have been edited for length and clarity.

Yahoo News: Are these types of cases typical for the U.S. Supreme Court to hear?

Anupam Chander: These are cases of statutory interpretation. The Supreme Court is the ultimate arbiter of what a federal statute means, so in that sense they are typical. But this is the first time the Supreme Court has looked at this particular statute, Section 230 of the Communications Decency Act.

Let’s take a closer look at the Gonzalez v. Google case. What do the plaintiffs argue?

This is the case that's gotten more attention, and I think deservedly so. What the plaintiffs argue is that automated recommendation services are the act of Twitter or Google or Facebook, and they are distinct from the bad acts of the speakers themselves. So Section 230 should not cover the recommendation in these cases, even if it covers the publication of that material.

The case arises out of tragic terrorist attacks in Paris, and the family of one of the victims sued the internet platform, in this case YouTube, for what they say was recommending videos to individuals who might be susceptible to them, thereby radicalizing them.

What does Google argue?

Google, of course, has policies against terrorist content. Its automated and human moderation systems are not perfect, so it's possible that some terrorist content was recommended accidentally by its systems. But Google says the connection between its accidental acts and these events is very tenuous. And at that level, if this connection is enough, a lot of humanity's failures will be laid at the feet of internet platforms.

The legal defense in this context, on Section 230, is that it covers internet publishers, and one of the key things that publishers do is recommend: They filter, choose, decide and prioritize what to publish and what not to publish. The act of sorting among various items is a core act of publishers, and that is exactly what Section 230 protects internet companies from being held liable for.

What is the question at issue in the Twitter v. Taamneh case?

The argument is that Twitter allowed terrorist promotion materials on its services, even though this is clearly against Twitter's policies. There, the question turns on a federal statute, the Anti-Terrorism Act. The legal question in the case is whether Twitter can be held liable for aiding and abetting terrorism through these very indirect acts.

What does Twitter argue?

The core facts of the two cases are very similar, so Twitter makes a similar argument: that the statute, Section 2333 of the Anti-Terrorism Act, should not be read to allow liability for such a tenuous connection between the speech platform's activity and the actual terrorism.

If the cases are similar, why isn’t Section 230 of the Communications Decency Act a factor in the Twitter case?

The lower courts have not yet considered whether Section 230 can protect Twitter in this case, though, as you can see, the facts are very similar to the Google case. It seems likely that if Google wins, Twitter would also find protection under Section 230 in the Taamneh case.

These cases are similar, but could the Supreme Court rule differently in each?

Yes. That's possible because the statutory interpretation question is different in each case. So it is possible for Google to win and Twitter to lose, or vice versa.

If the Supreme Court rules against Google and Twitter, what are the broader implications?

There are nearly 50 briefs on Google's side in this case, an immense outpouring of support for one party. Sites like Reddit, Wikipedia and Craigslist have all submitted briefs in support of Google, arguing that they too rely on automated systems and recommendations that they worry could now expose them to liability.

How would a user’s internet experience change?

Internet companies would be far more nervous about hosting controversial speech, because they could face lawsuits if their systems accidentally recommend it.

Some people would see this as a good thing: it would lead companies to take down more speech online. But all sides of the political spectrum engage in sometimes controversial speech, and if Google loses, internet platforms will be more likely to remove that speech out of liability concerns.

What is your overall concern for the outcome of these two cases?

My general worry is that we don't want internet companies policing our speech simply out of liability concerns. We do want them to enforce their community guidelines, but if they act because they fear lawsuits, that will lead to a lot of speech being suppressed online.

Lawsuits only come in one direction, for not taking content down, so internet companies have an incentive to take down controversial speech.

But they won't be sued for not carrying something. You get the occasional lawsuit where someone says, "You took down my content; you shouldn't have," but those lawsuits are going to fail.