The Supreme Court Is Reconsidering Its Entire Approach to the Internet. Uh-Oh.



The Supreme Court has been reluctant to resolve disputes concerning the scope of online free speech rights. Six years ago, in Packingham v. North Carolina, a decision Justice Anthony Kennedy described as “one of the first cases the Court has taken to address the relationship between the First Amendment and the modern Internet,” the court cautioned against making broad pronouncements. “While we now may be coming to the realization that the Cyber Age is a revolution of historic proportions,” Kennedy wrote, “we cannot appreciate yet its full dimensions and vast potential to alter how we think, express ourselves, and define who we want to be.” He warned that “courts must be conscious that what they say today might be obsolete tomorrow.” Despite those warnings, the court is poised this term to intervene in digital speech conflicts in a momentous way.

By the end of next June, the Supreme Court will decide whether state laws compelling social media platforms to carry content they would otherwise exclude violate the platforms’ free speech rights, whether federal agencies and officials have unlawfully coerced platforms into removing posts the government views as harmful, and whether government officials who operate social media pages or sites can exclude constituents’ speech from those pages based on its content. Deciding any one of these disputes would have significant effects on internet speech. In combination, the court’s decisions could fundamentally alter how social media platforms operate; when and how governments communicate with platforms to address public health, terrorism, and other harms; and whether public officials can exclude constituents from websites they use for official business.

Consider state laws, enacted in Florida and Texas, that prohibit large social media platforms like Facebook and YouTube from deplatforming speakers or removing speech based on the message or idea communicated. These laws compel large social media platforms to carry hateful, derogatory, and other harmful speech the platforms’ own terms of service disallow. They force companies to publish speech that undermines or harms their online communities. The laws also threaten to bury the platforms in private lawsuits by speakers who claim the law grants them a right to speak on the platform. In short, they run roughshod over the platforms’ First Amendment rights to exercise editorial control over content published on their sites.

Even if the platforms have demonstrated some bias in their content policies or in how they enforce their terms of service, the states’ proposed fix is much worse than the disease. If the court concludes that social media platforms—unlike, say, newspapers—do not possess editorial or similar rights, then state and federal governments can effectively determine what speech must be allowed on the platforms. Platforms would be compelled to carry the speech of white supremacists, antisemites, and terrorists, because to exclude such speech would be to discriminate based on content in violation of the state laws.

Murthy v. Missouri, another case the court will decide this term, addresses a different form of governmental control over internet speech. The issue in Murthy is whether the government, through various means of “jawboning” or persuasion, unlawfully pressured Meta, Twitter (now X), Google, and YouTube to remove posts and other content that the government viewed as misinformation that harmed public health or undermined electoral integrity. Like the Florida and Texas cases, Murthy will address the extent to which governments can control what is communicated online. But here the means of control is not legislation but behind-the-scenes arm-twisting and threatened legal reprisals by government officials.

Plaintiffs in Murthy allege that the White House, the Office of the Surgeon General, the Centers for Disease Control and Prevention, and the FBI pressured Facebook and other platforms to remove posts criticizing government pandemic policies, supporting the COVID-19 “lab leak” theory, questioning the results of the 2020 presidential election, and promoting the Hunter Biden laptop controversy. For example, a White House official told a platform to take an offending post down “ASAP” and instructed it to “keep an eye out for tweets that fall in this same … genre” so that they could also be removed. The platforms apparently complied, removing posts and sending frequent reports to government agencies about their compliance. The U.S. Court of Appeals for the 5th Circuit sided with the plaintiffs on their First Amendment claim and enjoined certain government agencies and officials from coercing or significantly encouraging platforms to remove content.

Public health and other dangers can be exacerbated through viral online communication. It is imperative that governments be able to communicate openly with social media platforms about online expression that may pose imminent threats to public health and safety. As the 5th Circuit recognized, governments remain free to communicate their views to the public through press briefings, public service and other educational campaigns, and the passage of regulations and laws. However, when they strong-arm or significantly encourage social media platforms to remove speech the government concludes is harmful or simply disagrees with, they cross a First Amendment line. Whether or not such compulsion violates the platforms’ First Amendment rights, it makes the platforms complicit in violating their users’ First Amendment rights.

Finally, earlier this month the court heard arguments in a pair of cases raising the question whether city managers, school board officials, and other officeholders who maintain social media pages or sites can exclude critical or annoying constituent comments. The analysis will likely be intensely fact-specific, turning on the extent to which the officeholders addressed official as opposed to private matters on the sites, whether they set up the sites while they were private citizens, whether any law required or authorized the creation and operation of the sites, and whether any public employees assisted with their operation. In a decision that relied on similar factors, the U.S. Court of Appeals for the 2nd Circuit ruled that part of former President Donald Trump’s Twitter page was a “public forum” that his critics had a First Amendment right to access. After Trump’s reelection bid failed, the Supreme Court dismissed the case as moot and vacated the 2nd Circuit’s decision, but now the issue is back.

As Justice Elena Kagan observed at oral argument, constituents who seek to communicate with or petition officeholders may have limited opportunities to do so in real space. To the extent that public officials increasingly turn to social media to communicate with voters and communities, it is critical that constituents be able to reach them there. As the Supreme Court has recognized, social media platforms are today among the most important places for the exchange of ideas. Public officials who operate private sites certainly have a right to determine what they will post and which comments they will tolerate. But when they use their platforms to conduct official business or perform governmental functions, they are bound by the First Amendment.

To decide all these cases, the court will have to enter a complex thicket of legal doctrines and principles concerning compelled speech, the distinction between government persuasion and coercion, and the contours of the “state action” and public forum doctrines. Its guiding principle ought to be to limit governmental control over internet speech—regardless of the form that control takes. Governments and public officials should not be allowed to compel platforms to carry content, coerce them to remove it, or block constituent comments from public fora based on viewpoint. As it tackles internet speech issues in this era, the court should heed Kennedy’s admonition in Packingham: “The nature of a revolution in thought can be that, in its early stages, even its participants may be unaware of it. And when awareness comes, they still may be unable to know or foresee where its changes lead.” So too here.