Supreme Court for first time casts doubt on Section 230, the legal shield for Big Tech

WASHINGTON, DC - OCTOBER 06: The Supreme Court of the United States on Thursday, Oct. 6, 2022 in Washington, DC. (Kent Nishimura / Los Angeles Times)

Internet giants such as Google, Facebook, YouTube and Twitter owe much of their success to a legal shield erected by Congress in 1996.

Known as Section 230, it has been called the rule that launched Big Tech. Though it drew little attention at the time, the law is now seen as a pillar of the wide-open global internet we know today.

While newspapers and TV stations can be held liable for any false and malicious content they publish or broadcast, internet platforms are treated differently under Section 230.

Congress passed the special free-speech rule to protect the new world of online communication. It said: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

Law professor and author Jeff Kosseff called Section 230 “the 26 words that created the internet” because it allowed websites to develop freely as platforms for the words, photos and videos of others.

And it went unchallenged in the Supreme Court — until now.

This week, the justices will hear two cases that may finally pierce that legal shield and dramatically alter the rules of the game for the internet.

And they are expected to consider a third case later this year involving the 1st Amendment rights of internet companies amid state efforts to regulate them.

The case to be heard on Tuesday began with a California family's suit against Google and YouTube for allegedly aiding and abetting an act of international terrorism. Their daughter Nohemi Gonzalez was killed in Paris in November 2015 when Islamic State terrorists fired into a restaurant where the 23-year-old student was dining with two friends. It was part of an ISIS rampage in the city that killed 129 people.

Their lawsuit alleged that Google, which owns YouTube, had "knowingly permitted ISIS to post hundreds of radicalizing videos inciting violence and recruiting potential supporters to join the ISIS forces." Further, they alleged that YouTube "affirmatively recommended ISIS videos to users.”

At issue on Tuesday is only the second of those claims. Can YouTube be sued over the algorithms it created to direct users to similar content — in this case allegedly directing potential terrorists to other ISIS videos? Or does Section 230 protect the company against such claims?

More than four dozen tech firms, internet scholars and free-speech advocates have filed friend-of-the-court briefs arguing that the internet companies should not be held liable for using computer programs that direct users to content they might find interesting.

"Recommendation algorithms are what make it possible to find the needles in humanity’s largest haystack," said Washington attorney Lisa S. Blatt, representing Google and YouTube. She warned that opening the door to lawsuits over algorithms "risks upending the modern internet."

A federal judge had dismissed the family's suit based on Section 230, and a divided 9th Circuit Court of Appeals affirmed that decision in 2021.

Until this term, the Supreme Court had refused to hear appeals involving the law. On several occasions, however, Justice Clarence Thomas called for "paring back the sweeping immunity courts have read into Section 230," particularly in cases where websites knew they were posting dangerous lies or criminal schemes.

Some prominent liberals, including Judges Marsha Berzon and Ronald Gould on the 9th Circuit Court, have also called for paring back the scope of Section 230.

They have been joined by advocates — both liberal and conservative — who portray the internet as a cesspool of disinformation and hate speech, a home for stalkers and fraudsters and a contributor to teen suicides and mass shootings. Critics also say social media companies get rich and keep viewers online by amplifying the most extreme claims and the angriest voices.

Google and other tech firms were surprised in October when the high court voted for the first time to hear a direct challenge to Section 230 and decide whether websites such as YouTube can be sued for their use of algorithms and targeted recommendations.

Their alarm grew in December when the Biden administration took the side of the plaintiffs in Gonzalez vs. Google and said YouTube could be sued for algorithms that "recommend" more videos to viewers.

Justice Department attorneys said the 9th Circuit Court made a mistake by throwing out the claim, and they argued for a new understanding of Section 230. They agreed that websites are shielded from liability for displaying content provided by others, including ISIS videos, but said the sites were not shielded for "their own conduct" in recommending further videos for viewing.

"When YouTube presents a user with a video she did not ask to see, it implicitly tells the user that she will be interested in that content based on the video and account information and characteristics," they wrote in their filing.

Many experts in internet law said they were puzzled by the Supreme Court's decision to take up the case and troubled by what it might mean.

"The internet needs curation. We need to be able to find what we're looking for," said Eric Goldman, a law professor at Santa Clara University. If websites cannot sort content based on algorithms, he said, "it would not be a functional internet."

Blatt, Google’s attorney, said, "YouTube does not 'recommend' videos in the sense of endorsing them, any more than Google Search endorses search results. YouTube displays videos that may be most relevant to users."

On Wednesday, the court will hear a related case, this one focused only on whether Facebook, Google and Twitter may be sued for allegedly aiding international terrorists.

Congress in 2016 expanded the Antiterrorism Act to authorize lawsuits by victims or their survivors against anyone who "knowingly provided substantial assistance" to a person who committed an act of international terrorism.

The U.S. family of a Jordanian citizen who was killed in an ISIS attack on the Reina nightclub in Istanbul in 2017 sued Facebook, Twitter and YouTube, accusing them of aiding and abetting the murders. They said ISIS openly maintained accounts on all three social media platforms and used them to recruit members.

The 9th Circuit cleared this claim to proceed, but the Justice Department and the social media firms said that was a mistake. They said the suit should be tossed out because the plaintiffs could not show that the internet platforms provided "substantial assistance" to the terrorist who carried out the mass shooting.

It's not entirely clear why the court agreed to hear the second case, Twitter vs. Taamneh, but the justices may have decided they faced two questions: Can a social media site be sued for aiding terrorists? And if so, can it be held liable for directing viewers to ISIS videos?

It's unclear whether the justices will split along the usual ideological lines when it comes to the Section 230 debate, which has liberals and conservatives on both sides.

Still pending before the court is an even larger question: Can the states regulate the internet and penalize social media companies for what they post on or remove from their sites?

That clash began on a sharply partisan note. Republican leaders in Texas and Florida adopted laws two years ago that authorized fines and damage claims against Facebook, Twitter and other large social media sites if they "censor" or discriminate against conservatives. Upon signing the measure, Florida Gov. Ron DeSantis said the law was intended as "protection against the Silicon Valley elites."

Before the laws could take effect, they were challenged and put on hold on 1st Amendment free-speech grounds, not under Section 230.

The justices are almost certain to grant review of one or both laws because two federal appellate judges, both appointed by President Trump, divided on a major constitutional question.

Judge Kevin Newsom of the 11th Circuit Court in Atlanta blocked most of the Florida law from taking effect. The 1st Amendment "constrains government actors and protects private actors," he said. Social media sites are private companies, and "put simply, with minor exceptions, the government can't tell a private person or entity what to say or how to say it."

Shortly afterward, Judge Andrew Oldham of the 5th Circuit Court in New Orleans upheld the Texas law because the state sought to protect the free speech rights of Texans. A former counsel to Texas Gov. Greg Abbott and law clerk to Justice Samuel A. Alito Jr., Oldham said it is a "rather odd inversion of the 1st Amendment" to say the social media platforms have a "right to muzzle speech. ... We reject the idea that corporations have a freewheeling 1st Amendment right to censor what people say."

Last month, the Supreme Court asked the Justice Department to weigh in on the issue, and that will put off the cases until the fall.

If, as expected, the U.S. solicitor general's office submits its view on the issue by June, the justices are likely to schedule one or both cases for a hearing in the fall.

This story originally appeared in Los Angeles Times.