Facebook’s oversight board blew up in its face

Facebook CEO Mark Zuckerberg looks glumly off to the side while surrounded by members of the press and an American flag.

Facebook’s oversight board seemed like the perfect answer to the social media giant’s moderation headaches: An outside group of respected experts in journalism, misinformation, free speech, and extremism would make the final call on high-profile moderation decisions. That would give the company cover to duck responsibility for controversial cases. Meanwhile, Facebook would be free to ignore the board’s policy recommendations, allowing it to maintain the moderation status quo.

But the board’s inconclusive ruling on former US president Donald Trump’s indefinite Facebook ban on May 5 shows it will not be the scapegoat the company might have hoped for.

The board refused to accept its role as the company’s lightning rod, kicking the decision back to Facebook. It upheld the company’s initial choice to block Trump from posting in the wake of the Jan. 6 insurrection at the US Capitol, but handed responsibility for deciding Trump’s fate back to Zuckerberg and his executive team. “In applying a vague, standardless penalty and then referring this case to the Board to resolve, Facebook seeks to avoid its responsibilities,” the 19-member body wrote in a press release accompanying its ruling. “The Board declines Facebook’s request and insists that Facebook apply and justify a defined penalty.”

Facebook’s history of moderation headaches

First proposed by CEO Mark Zuckerberg in 2018, the quasi-independent Oversight Board was tasked with reviewing Facebook’s most high-stakes moderation decisions and issuing final, binding rulings on whether those decisions should be upheld or overturned. The board can also issue non-binding recommendations about whether Facebook should update its policies when the company asks it to, as it did in the case of Trump’s ejection from the platform.

All of this was meant to defuse the torrent of criticism Facebook has faced for its moderation policies—especially when it comes to world leaders. Prominent politicians and heads of state typically get more leeway to post content that violates Facebook’s rules on misinformation, harassment, and hate speech if the company deems those statements newsworthy. The company argues that the public has a right to know what its political leaders are saying, even in some cases where those statements might be harmful.

“We’ll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what’s acceptable in our society,” Zuckerberg wrote in a Facebook post in the run-up to the 2020 US presidential election.

In many ways, Facebook finds itself in an untenable position because, in the US, decisions about the acceptability of online speech rest largely in the hands of private companies. The First Amendment constrains the government, not private businesses, so American social media firms (like other private companies) have wide latitude to set their own rules about what their users can post on their platforms. The difference, of course, is that these platforms have become vital venues for public discourse. A small group of companies is now making increasingly difficult decisions about how to balance speech rights against the harms of disinformation, harassment, and incitements to violence.

“It exposes the need for further clear action from the federal government,” said Jim Steyer, CEO of the non-profit Common Sense Media, which rates how appropriate books, movies, websites, and other forms of media are for children. “The only [solution is] to have a comprehensive, thoughtful regulatory regime that covers Facebook and other major tech companies, because we need independent democratically accountable oversight of Mark Zuckerberg and Facebook.”

As a rule, the US Constitution bars the government from regulating speech because of “its message, its ideas, its subject matter, or its content,” though the courts have carved out narrow exceptions for incitement to violence, fraud, and obscenity, among others. But Steyer believes Congress and regulatory agencies like the Federal Communications Commission (FCC) should regulate social media platforms as publishers, imposing standards for how they moderate harmful content on their platforms. (There have been several proposals to update a key piece of US law known as Section 230 to do just that, but so far none has come close to being enacted.)

The oversight board strikes back

In the absence of outside regulation, Facebook sought to create its own oversight body that could act as an independent check on its moderation decisions. Of course, the board’s members were chosen by Facebook and its work is funded by Facebook, so from the start there have been questions about how independent the board could really be. The board, however, refused to give Facebook a pass when ruling on Trump’s ban.

It criticized Facebook’s specific decision to block Trump indefinitely, writing that the choice “finds no support in the Community Standards and violates principles of freedom of expression.” It criticized Facebook’s general approach to moderating world leaders, arguing that “considerations of newsworthiness should not take priority when urgent action is needed to prevent significant harm.” And in a final act of defiance, the board tasked Facebook with making its own decision within six months about whether Trump should be permanently banned from the platform.

“Facebook was trying to create a scapegoat for itself,” said Samuel Woolley, an assistant professor of journalism at the University of Texas at Austin who heads propaganda research at the school’s Center for Media Engagement. “And what we see having happened is that Facebook’s oversight board basically said, ‘No, we’re not going to do that. You have to create more systematic policy in order to allow us to make decisions.’ So, effectively they’re forcing Facebook’s hand to take responsibility and do exactly the kind of arbitration [they want to avoid] and make political decisions.”

From one perspective, the confusing, inconclusive episode could be seen as a win for Facebook. The company now gets to kick the can down the road another few months and shroud its decision in another layer of bureaucracy, making it harder to follow what’s going on. Ultimately, Facebook may deflect responsibility back to the oversight board anyway.

But in the short term, the ruling looked bad for Facebook. The company failed to get a neat, binding decision from the review board that would have tied its hands and freed it from blame, and it got a very public drubbing from a panel of experts, who put renewed pressure on the company to craft new, clearer policies governing the behavior of world leaders on its platform.

Warning signs for demagogues

Facebook has traditionally been the most lenient of the social media platforms in terms of moderating politicians’ speech. Twitter has already announced that its decision to ban Trump is permanent, and YouTube, which indicated its Trump ban would remain in effect until the “risk of violence has decreased,” still hasn’t said when it will contemplate letting Trump back onto the platform.

So the fact that Facebook decided to ban Trump indefinitely—and that its oversight board called for even more stringent moderation of world leaders, without giving them a pass because their statements are newsworthy—signals a broader shift in tech companies’ approach to governing politicians’ speech. Indeed, virtually every social media platform strengthened its content moderation policies in the chaotic period surrounding the 2020 US elections and the global coronavirus pandemic.

The shift could have serious ramifications for other prominent politicians who have borrowed from the Trumpian playbook of using social media to spread misinformation. “I believe other world leaders, whether it’s [India’s Narendra] Modi, [Brazil’s Jair] Bolsonaro, [the Philippines’ Rodrigo] Duterte, [or Turkey’s Recep Tayyip] Erdogan, they should be worried,” Woolley said. “No matter how we look at this, their days of being able to use social media as a megaphone or bully pulpit with impunity are numbered.”

Woolley says a lot has changed in the decade he’s been studying social media platforms’ moderation policies. “It would have been unthinkable to me 10 years ago that someone like Trump would have been banned at all,” he said. “We’re making progress. It’s excruciatingly slow progress, but we’re moving in the right direction.”
