What I Learned on My Quest to Fix America’s Social Media Problem

One of the humbling experiences of starting a company in a new industry is that sometimes you don’t know the industry you’re in. In the case of NewsGuard, which I co-founded with fellow journalism veteran Steven Brill three years ago to help people protect themselves from the misinformation being fed to them on digital platforms, it took a group of Stanford academics to tell us what we were doing.

When political scientist Francis Fukuyama and his team of researchers last year began studying the dominance of the platforms Facebook, Google/YouTube and Twitter, they at first thought this was a problem for antitrust laws and regulations to solve. But in the end they concluded that being big was not what made these platforms bad. Instead, it’s the algorithms powering these platforms that cause “social harms, including loss of privacy and monopolization and manipulation of attention, and political harms, including threats to democratic discourse and deliberation and, ultimately, to democratic choice in the electoral process.”

The root of the issue is how the platform algorithms maximize usage. The more engagement and more time spent by users, regardless of the truth or harm of the content being consumed and shared, the more advertising revenue. Their algorithms learned that misinformation grabs people’s attention best, which is why their recommendation engines send people down rabbit holes of Covid-19 hoaxes, QAnon conspiracies and divisive Russian disinformation operations. This algorithmic amplification is the subject of a Senate Judiciary Committee hearing this week that will grill Facebook, YouTube and Twitter executives.

To counter this, Fukuyama and his colleagues proposed something called “middleware” as a reform to Section 230 of the Communications Decency Act of 1996 — the law that immunizes digital platforms from the duties of care under the common law that hold other industries responsible for the foreseeable harms they cause. (Unlike newspapers and broadcasters, internet companies can’t be sued for falsehoods and misinformation they publish and promote.) They envisioned “middleware” as third-party software that would verify the importance and accuracy of all kinds of information presented on social media and by search engines, and that would affect news feeds and search results accordingly. Fukuyama and his colleagues argued that the platforms should be required to give their users the option to filter their content with middleware — a rule that, unlike some other proposed reforms to Section 230, would avoid government control over content.

“A competitive layer of new companies with transparent algorithms would step in and take over the editorial gateway functions currently filled by dominant technology platforms whose algorithms are opaque,” Fukuyama and his colleagues wrote.
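Fukuyama’s group describes this middleware layer only conceptually. As a rough way to picture the mechanics, here is a minimal, purely illustrative Python sketch of a filter that consults third-party trust ratings before a feed reaches the user; the field names, the sample ratings, the 60-point threshold and the re-ranking rule are assumptions made for illustration, not anything specified by the Stanford proposal or by NewsGuard.

```python
# Purely illustrative sketch of "middleware": a third-party layer that sits
# between a platform's feed and the user, attaches independent trust labels
# and pushes low-trust sources down the feed. The field names, sample ratings
# and the 60-point "green" threshold are assumptions, not a real product.

from dataclasses import dataclass

@dataclass
class FeedItem:
    headline: str
    source_domain: str
    engagement_score: float  # the platform's own opaque ranking signal

# Stand-in for ratings a middleware provider would license to the platform.
TRUST_RATINGS = {"example-news.com": 92.5, "hoax-site.example": 17.0}

def apply_middleware(feed, ratings, green_threshold=60.0):
    """Label each item green, red or unrated, and sink red-rated sources."""
    labeled = []
    for item in feed:
        score = ratings.get(item.source_domain)
        if score is None:
            label = "unrated"
        else:
            label = "green" if score >= green_threshold else "red"
        labeled.append((item, label))
    # Non-red items keep their engagement order; red-rated items drop to the bottom.
    return sorted(labeled, key=lambda p: (p[1] == "red", -p[0].engagement_score))
```

The design point is the one Fukuyama and his colleagues make: the labeling and re-ranking logic lives in a competitive layer with transparent criteria, outside the platform’s opaque algorithm.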

Until we saw Fukuyama’s work, we had never thought to describe NewsGuard as “middleware.” We had developed a journalistic solution to a technology problem by giving people information to decide whether a source they encounter online is generally trustworthy. We have rated the news and information sources that account for 95 percent of engagement in the U.S., Britain, Germany, France and Italy. NewsGuard analysts apply nine basic, apolitical criteria of universal journalistic practice, such as whether these sources disclose ownership or publish corrections. Each source gets a weighted score between 0 and 100, a red or green rating and a Nutrition Label that details the nature of the site. Consumers can subscribe to a browser extension or mobile version, though they more commonly get access through companies and other entities that license the ratings and labels and provide them to people in their networks.
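As a back-of-the-envelope illustration of the weighted score described above, nine pass/fail criteria each carrying a point value can be summed into a 0–100 total that maps to red or green. The sketch below is hypothetical: apart from the two criteria named above, the criterion names are invented, and the point values and the 60-point cutoff are placeholders, not NewsGuard’s actual methodology.

```python
# Hypothetical weights for nine pass/fail criteria summing to a 0-100 scale.
# The point values and the 60-point cutoff are placeholders for illustration,
# not NewsGuard's published methodology.

CRITERIA_POINTS = {
    "does_not_repeatedly_publish_false_content": 22,
    "gathers_and_presents_information_responsibly": 18,
    "regularly_corrects_errors": 12,
    "distinguishes_news_from_opinion": 12,
    "avoids_deceptive_headlines": 10,
    "discloses_ownership_and_financing": 8,
    "clearly_labels_advertising": 7,
    "reveals_who_is_in_charge": 6,
    "names_content_creators": 5,
}
assert sum(CRITERIA_POINTS.values()) == 100  # weights sum to the 0-100 scale

def rate_source(criteria_met: set[str]) -> tuple[int, str]:
    """Sum the points for the criteria a source meets and map the total to a rating."""
    score = sum(pts for name, pts in CRITERIA_POINTS.items() if name in criteria_met)
    return score, ("green" if score >= 60 else "red")
```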

While we hadn’t thought of our ratings as middleware, our licensees do implement our information this way. Microsoft was the first technology company to offer our ratings and labels, integrating them into its mobile Edge browser and providing users of the Edge desktop browser free access. Internet providers such as British Telecom in the U.K., health care systems such as Mt. Sinai in New York and more than 800 public libraries and schools in the U.S. and Europe provide our ratings and labels through access to a browser extension that inserts red or green labels alongside news stories in social media feeds and search results. Research shows that access to these ratings leads to a dramatic decline in people believing or sharing false content and a boost in trust for high-quality news sites. Rating websites is more effective than fact-checking, which only catches up to falsehoods after they have gone viral.

A middleware solution makes sense for the platforms: It’s not surprising that an industry told from birth 25 years ago that it would not be held responsible for its actions would now need help operating responsibly. But based on our experience at NewsGuard, it will take Section 230 reform, or other threats to remove platform immunity, to force Silicon Valley companies to give their users options for safety tools.

It’s striking to compare the willingness of Microsoft — which went through its regulatory challenges a generation ago — to give its users choice, with the walled-off, no-choice approach of Silicon Valley.

Microsoft operates a corporate Defending Democracy Program. Microsoft President Brad Smith wrote a book in 2019 titled “Tools and Weapons” urging his industry to do better. “When your technology changes the world, you bear a responsibility to help address the world that you have helped create,” Smith wrote. “This might seem uncontroversial,” he added, “but not in a sector long focused obsessively on rapid growth, and sometimes on disruption as an end in itself.”

Without violating confidences about our discussions with Silicon Valley platforms, I can say that, in contrast with Microsoft, they are not yet willing to give their users information about the trustworthiness of the sources promoted in their products. There are executives at these companies who privately admit they’d like to do more and would be relieved to rely on accountable third parties with transparent criteria. Twitter CEO Jack Dorsey has even mused, “You can imagine a more market-driven and marketplace approach to algorithms.” But for now, the official line is that the platforms must fully control users’ experiences: Why let users adjust the algorithms with safety tools when those algorithms make these companies among the most valuable in the world? That is why Congress needs to reform Section 230 to force the platforms to open up to middleware.

The platforms would certainly need help if they’re forced to make their products less toxic. Facebook, Google/YouTube and Twitter each keep secret internal ratings of news publishers’ trustworthiness, but they don’t tell publishers how they rank or how the algorithms use the ratings. Whatever human judgments or artificial intelligence the social media companies use to craft their rankings, Russia’s RT, Vladimir Putin’s propaganda arm formerly known as Russia Today, is hugely successful. When RT became the first news channel with one billion views on Google’s YouTube in 2013, a YouTube vice president appeared on RT’s celebratory broadcast to praise RT’s reporting as “authentic” and without “agendas or propaganda.” He was sincere.

NewsGuard gives RT a red rating, explaining in a lengthy label that Putin funds RT in order to spread Kremlin falsehoods and promote divisiveness in democracies. RT is now required to register in the U.S. as a foreign agent.

Without using the term “middleware,” a new British law will take this approach to reforming the platforms. The British government’s “Online Harms” plan would put large social media platforms and search engines that operate in the U.K. under a “new duty of care to make companies take responsibility for the safety of their users” while “at the same time defend freedom of expression.”

The U.K. regulations would require that “companies have appropriate systems and processes in place to tackle harmful content and activity.” The plan said, “Trustworthy content can be clearly marked and users can be provided with tools to manage the content that they see.” The European Commission Code of Practice on Disinformation likewise requires platforms to “empower” their users by detailing the trustworthiness of sources, based on journalistic principles sourced by third parties.

In anticipation of the British regulations, a cottage industry has grown up in the U.K. around safety tools the platforms could offer to meet this new duty of care. The British government published a list of more than 80 “safety tech providers” that Silicon Valley could tap as a layer of protection between the platforms and their consumers. SafetoNet, for example, tracks the keystrokes children and teenagers make online for signs of stress, with the aim of preventing bullying.

Marketing gurus probably wouldn’t propose “middleware” as an exciting term for a new industry, but, still, these tools could revolutionize the way we interact with the Internet for the better. They would help the platforms meet new legal responsibilities to avoid harm — not by trusting the platforms to continue applying their own secret, insufficient algorithms, and not by having government oversee content, but by giving users choices about their experiences online.

We have tried the alternative, and 25 years after Section 230, most people now get their news from social media platforms designed not for accuracy or safety, but to maximize engagement and revenues, even through misinformation and hoaxes. Requiring platforms to provide tools so that people can choose what to trust in their news feeds would finally counter the infodemic of misinformation online.