How social media recommendation algorithms help spread hate

It's getting so bad that even Congress is starting to pay attention.

Last week, the United States Senate played host to a number of social media company VPs during hearings on the potential dangers presented by algorithmic bias and amplification. While that meeting almost immediately broke down into a partisan circus of grandstanding and grievance airing, Democratic senators did manage to focus somewhat on how these recommendation algorithms might contribute to the spread of online misinformation and extremist ideologies. The issues and pitfalls presented by social media algorithms are well known and well documented. So what are we going to do about it?

“I think in order to answer that question, there's something critical that needs to happen: We need more independent researchers being able to analyze platforms and their behavior,” said Dr. Brandie Nonnecke, Director of the CITRIS Policy Lab at UC Berkeley, in an interview. Social media companies “know that they need to be more transparent in what's happening on their platforms, but I'm of the firm belief that, in order for that transparency to be genuine, there needs to be collaboration between the platforms and independent, peer-reviewed, empirical research.”

A feat that may more easily be imagined than realized, unfortunately. “There's a little bit of an issue right now in that space, where platforms are taking an overly broad interpretation of nascent data privacy legislation like the GDPR and the California Consumer Privacy Act, and are essentially not giving independent researchers access to the data under the claim of protecting data privacy and security,” she said.

Even ignoring the fundamental black box issue — in that “it may be impossible to tell how an AI that has internalized massive amounts of data is making its decisions,” as Yavar Bathaee described it in the Harvard Journal of Law & Technology — the inner workings of these algorithms are often treated as business trade secrets.

“AI that relies on machine-learning algorithms, such as deep neural networks, can be as difficult to understand as the human brain,” Bathaee said. “There is no straightforward way to map out the decision-making process of these complex networks of artificial neurons.”

Take the COMPAS case as an example. COMPAS is an algorithm designed to assess a defendant’s risk of reoffending, based on a number of factors and variables relating to their life and criminal history, and its scores are used to inform sentencing decisions. That AI suggested to a Wisconsin court judge that Eric L. Loomis be sent down for six years for “eluding an officer.” Because... reasons. Secret proprietary business reasons. Loomis subsequently sued the state, arguing that the opaque nature of the COMPAS AI’s decision-making process violated his constitutional due process rights, as he could neither review nor challenge how it reached its conclusions. In 2016, the Wisconsin Supreme Court ruled against Loomis, stating that he’d have received the same sentence even in the absence of the AI’s input.

But algorithms recommending Facebook groups can be just as dangerous as algorithms recommending minimum prison sentences — especially when it comes to the spread of extremism infesting modern social media.

“Social media platforms use algorithms that shape what billions of people read, watch and think every day, but we know very little about how these systems operate and how they’re affecting our society,” Sen. Chris Coons (D-Del.) told POLITICO ahead of the hearing. “Increasingly, we’re hearing that these algorithms are amplifying misinformation, feeding political polarization and making us more distracted and isolated.”

While Facebook regularly publicizes its ongoing efforts to remove the postings of hate groups and crack down on their coordination on its platform, even the company’s own internal reporting argues that it has not done nearly enough to stem the tide of extremism on the site.

As Talia Lavin, journalist and author of Culture Warlords, points out, Facebook’s platform has been a boon to hate groups’ recruiting efforts. “In the past, they were limited to paper magazines, distribution at gun shows or conferences where they had to sort of get in physical spaces with people and were limited to avenues of people who were already likely to be interested in their message,” she told Engadget.

Facebook’s recommendation algorithms, on the other hand, have no such limitations — except when actively disabled to prevent untold anarchy from occurring during a contentious presidential election.

“Certainly over the past five years, we've seen this rampant uptick in extremism that I think really has everything to do with social media, and I know algorithms are important,” Lavin said. “But they're not the only driver here.”

Lavin notes the hearing’s testimony from Dr. Joan Donovan, Research Director at Harvard’s Kennedy School of Government, and points to the rapid disappearance of independent local news outlets, combined with the rise of monolithic social media platforms like Facebook, as a contributing factor.

“You have this platform that can and does deliver misinformation to millions on a daily basis, as well as conspiracy theories, as well as extremist rhetoric,” she continued. “It's the sheer scale involved that has so much to do with where we are.”

For examples of this, one need only look at Facebook’s bungled response to Stop the Steal, the online movement that popped up after the election and has been credited with fueling the January 6th insurrection at the US Capitol. As an internal review discovered, the company failed to adequately recognize the threat or take appropriate action in response. Facebook’s guidelines are geared heavily toward spotting inauthentic behavior like spamming and fake accounts, Lavin explained. “They didn't have guidelines in place for the authentic activities of people engaging in extremism and harmful behaviors under their own names.”

“Stop the Steal is a really great example of months and months of escalation from social media spread,” she added. “You had these conspiracy theories spreading, inflaming people, then these sort of precursor events organized in multiple cities where you had violence against passers-by and counter-protesters. You had people showing up to those heavily armed and, over a similar period of time, you had anti-lockdown protests that were also heavily armed. That led to very real cross-pollination of different extremist groups — from anti-vaxxers to white nationalists — showing up and networking with each other.”

Though Congress is largely useless when it comes to technology more modern than a Rolodex, some of its members are determined to at least make the attempt.

UNITED STATES - FEBRUARY 26: Rep. Anna Eshoo, D-Calif., questions Health and Human Services Secretary Alex Azar as he testifies before the House Health Subcommittee of the House Energy and Commerce Committee on the FY2021 HHS budget and oversight of the coronavirus outbreak. (Caroline Brehman via Getty Images)

In late March, a pair of prominent House Democrats, Reps. Anna Eshoo (CA-18) and Tom Malinowski (NJ-7), reintroduced their co-sponsored Protecting Americans from Dangerous Algorithms Act, which would “hold large social media platforms accountable for their algorithmic amplification of harmful, radicalizing content that leads to offline violence.”

“When social media companies amplify extreme and misleading content on their platforms, the consequences can be deadly, as we saw on January 6th. It’s time for Congress to step in and hold these platforms accountable,” Rep. Eshoo said in a press statement. “That’s why I’m proud to partner with Rep. Malinowski to narrowly amend Section 230 of the Communications Decency Act, the law that immunizes tech companies from legal liability associated with user-generated content, so that companies are liable if their algorithms amplify misinformation that leads to offline violence.”

In effect, the Act would hold a social media company liable if its algorithm is used to “amplify or recommend content directly relevant to a case involving interference with civil rights (42 U.S.C. 1985); neglect to prevent interference with civil rights (42 U.S.C. 1986); and in cases involving acts of international terrorism (18 U.S.C. 2333).”

Should this Act make it into law, it could prove a valuable stick with which to motivate recalcitrant social media CEOs, but Dr. Nonnecke insists that more research into how these algorithms function in the real world is necessary before we go back to beating those particular dead horses. It might even help legislators craft more effective tech laws in the future.

“Having transparency and accountability benefits not only the public but, I think, also the platform,” she said. “If there's more research on what's actually happening on their system, that research can be used to inform appropriate legislation and regulation. Platforms don't want to be in a position where there's legislation or regulation proposed at the federal level that completely misses the mark.”

“There's precedent for collaboration like this: Social Science One, the partnership between Facebook and researchers,” Nonnecke continued. “In order for us to address these issues around algorithmic amplification, we need more research, and we need this trusted independent research to better understand what's happening.”