Facebook has known for years about a major source of political vitriol and violent content on its platform and done little about it: individual people who use small collections of accounts to broadcast reams of incendiary posts.
Meet SUMAs: small clusters of accounts run by a single person under their real identity, known internally at Facebook as Single User Multiple Accounts. And a significant swath of them spread so many divisive political posts that these accounts have mushroomed into a massive source of the platform’s toxic politics, according to internal company documents and interviews with former employees.
While plenty of SUMAs are harmless, Facebook employees for years have flagged many such accounts as purveyors of dangerous political activity. Yet, the company has failed to crack down on SUMAs in any comprehensive way, the documents show. That’s despite the fact that operating multiple accounts violates Facebook’s community guidelines.
Company research from March 2018 said accounts that could be SUMAs were reaching about 11 million viewers daily, or about 14 percent of the total U.S. political audience. During the week of March 4, 2018, 1.6 million SUMA accounts made political posts that reached U.S. users.
“A large amount of content comes from a small number of individuals,” said Katie Harbath, Facebook’s former director of public policy, in reference to the dangerous political content on the platform.
She argued that SUMAs’ proliferating posts hurt political discourse and said the company has failed to institute rules that could curb the spread of the inflammatory posts.
That’s backed up by disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by the legal counsel of whistleblower Frances Haugen. The redacted versions were reviewed by a consortium of news organizations, including POLITICO.
A Facebook spokesperson said the leaked documents don’t paint a comprehensive picture.
“It’s not a revelation that we study duplicate accounts, and this snapshot of information doesn’t tell the full story,” Facebook’s Joe Osborne said in a statement. “We enforce our community standards regardless of the kind of account that someone is using.”
Yet researchers who study misinformation in social media say the SUMA problem is a prime example of Facebook missing an opportunity to rein in inflammatory content.
“Facebook has completely lost control over the ways in which its platform has sort of pushed content that is not only not credible but also outrageous and at times extremely divisive,” said Ramesh Srinivasan, director of the Center For Global Digital Cultures at UCLA.
The March 2018 research warned that SUMAs artificially promote certain political viewpoints, offering as a case study an account under the name Daisy Nunez, a “likely SUMA” engaged in “unsavory behavior” that the company’s policies didn’t adequately address and couldn’t contain.
The research author said Nunez posted hundreds of links a day — sometimes at a rate of one per minute — amounting to some 1,500 links of “sensational and highly divisive” content each week. She saved links and built “a bank of some of the worst, most divisive content, to reshare later,” the author wrote.
A former Facebook employee who had worked on SUMA issues, and spoke to POLITICO on condition of anonymity to avoid unwanted attention to their current employer, said individuals running SUMAs use their authentic identities across all of the accounts, evading Facebook’s “fake account” policy by not impersonating another individual. Because these accounts weren’t lying about their identities, and some had relatively benign uses, the company was reluctant to crack down on them heavily.
Even so, Facebook staff regularly identify SUMAs by finding groups of accounts that share the same identity markers — the same birthday, the same or a slightly different name.
SUMAs typically use the same email address and same first names across accounts, along with “other data that they recycle and that can be used to fingerprint people,” Haugen told reporters in a briefing.
The anonymous former staffer said some SUMAs are benign, belonging to people who want to have separate personal and business profiles. Internal research from January 2018 viewed by POLITICO noted that they’re a trend among teens who want to keep at least one account more private. But SUMAs start to raise red flags when they post with great frequency.
Accounts that frequently post or comment, even if they do so manually, violate Facebook’s community standards against spamming. Yet SUMAs can easily wield their multiple accounts to avoid running afoul of the rules, simply by switching between profiles, the former Facebook staffer said.
“Duplicate accounts provide an avenue for people who are doing bad behavior just to restart immediately upon being kicked off the platform,” Haugen told reporters.
The company does move to stop people from creating duplicate accounts in the first place, such as by redirecting them to recover their existing profiles.
Harbath and the former employee said Facebook could target SUMAs more aggressively if it chose to — particularly those posting dangerous political rhetoric. The anonymous staffer told POLITICO that the company’s existing algorithms are “pretty good” at detecting SUMAs posting political speech.
Facebook has also chosen to push back against more intensive efforts to remove SUMAs. The mere fact that an account is a SUMA usually isn’t enough to warrant a takedown. Instead, the account would first need to commit at least one or two clear violations of Facebook’s rules — such as posting violent, bullying or harassing content.
“When looking at a lot of these, there was a strong push from other parts of the company that actions needed to be justified and clearly explained as a violation of rules,” Harbath said, adding that they often did not have the “stomach for blunt actions” that could result in a “high number of false positives” — or accounts wrongly taken down.
Facebook did take action against some political SUMAs in October 2018, such as removing the Right Wing News page and other pages run by Brian Kolfage. According to Facebook, the company removed more than 5 billion inauthentic accounts in 2020 before they were flagged, although Facebook didn’t specify how many were SUMAs. The company describes both SUMAs and fake accounts as “inauthentic.”
Message board comments from 2018 show that staffers were torn about Facebook’s approach, with some arguing that since SUMAs represented real people they should be treated leniently despite their violation of Facebook policies on multiple accounts.
“A SUMA account represents the realistic views of a user, just under a pseudonym,” one employee commented in response to the March 2018 research that warned of the dangers of these accounts. “They generally aren’t posting as a drastically different individual or representing views that are not their own in an electorate to which they don’t belong.”
SUMAs make up a large portion of Facebook’s new sign-ups despite the company’s ban on multiple accounts. In a 2021 internal Facebook post titled “Update on the FB unwanted SUMA problem,” one employee wrote that SUMAs accounted for 40 percent to 60 percent of fresh accounts.
The same document warned that Facebook’s AI model that identifies SUMAs both undercounts them and underestimates their effects.
The problem is also evolving. Harbath noted some operators’ growing sophistication in using multiple devices for their accounts.
Facebook also could have business motives for leaving SUMAs mostly alone. Employees and academics who study social media ethics said trying to boot these accounts would likely disrupt sign-ups and use of the site, especially if people are wrongly targeted.
“You want the system to be frictionless, you want it to be easy to create an account, because that’s where the money is,” said Hany Farid, a UC Berkeley professor specializing in misinformation and digital forensics.
It’s unclear if a crackdown would have a significant effect on Facebook’s advertising revenue. The company said it has disclosed to Facebookers, advertisers and investors alike that these accounts exist.
“Nothing in this story changes the estimate of duplicate accounts we disclose in our public filings, which includes new users, or that we provide context on in our ad products, ad interfaces, in our help centers, and in other places,” Facebook’s Osborne said.
Farid was skeptical that Facebook couldn’t parse out these accounts and remove them — arguing that the company tends to downplay or tout its powers depending on whether its executives are being hauled up before Congress or recruiting advertisers.
“You can't, on the one hand, monetize to the tune of hundreds of billions of dollars a year phenomenal amounts of data and personal information, and then on the other side when it comes to mitigating harms, say, ‘Yeah, we don't know how to do this,’” he said.