How do you solve a problem like 8chan?

President Donald Trump’s vow Monday to scour “the dark recesses of the internet” came as this weekend’s deadly gun violence provoked ire over fringe online platforms like 8chan, an anonymous message board that hosted a racist manifesto linked to Saturday’s shooting in El Paso, Texas.

But any effort to curb dangerous extremism online will run into a host of obstacles: The Constitution and U.S. laws protect hateful speech, and obscure sites like 8chan are relatively immune to the kinds of political pressure that Washington is increasingly bringing to bear against mainstream platforms like Facebook, Twitter and Google. And those same big tech companies have made only slow progress in removing content supporting terrorist groups like ISIS, despite years of pressure from the Obama administration.

Trump promised to press forward anyway, saying in a televised address, “We must recognize that the internet has provided a dangerous avenue to radicalize disturbed minds and perform demented acts.” He said he was directing the Justice Department to coordinate with government agencies and social media companies “to develop tools that can detect mass shooters before they strike.”

These are some of the reasons the effort won’t be easy:

1) U.S. law offers a safe harbor

The First Amendment protects even racist, misogynistic and other hateful speech. And online sites enjoy broad legal immunity through another statute — Section 230 of the 1996 Communications Decency Act — that has become a major focus of the bipartisan congressional backlash against online sites like Facebook.

Under the law, websites enjoy almost blanket immunity from liability for content their users post. Lawmakers of both parties have questioned whether Section 230 offers too much cover to tech companies that fail to police their platforms — and the 8chan link to the El Paso shooting offers an opening for further attempts to weaken, change or do away with the law.

But industry groups argue that repealing Section 230 would only worsen the proliferation of dangerous material, because the provision is what allows tech platforms, acting in good faith, to take down harmful content without opening themselves up to legal liability.

“Section 230 empowers platforms to stop the spread of vile content from the dark corners of the Internet,” said Carl Szabo, general counsel at NetChoice, an e-commerce trade group representing Facebook, Google and Twitter. “Without Section 230, extreme speech would become more prevalent online — not less.”

Even some groups highly critical of the online industry’s efforts to crack down on extremist content are warning against dialing back the legal safeguards, which they say protect a vast swath of content aimed at countering hate speech.

“I don’t think we should be in a rush to change the law because these horrible things are happening,” said Heidi Beirich, who tracks online extremism for the Southern Poverty Law Center.

2) Fringe sites often escape scrutiny

Lawmakers and activists have all kinds of leverage against companies like Facebook and Google, which employ vast lobbying armies in Washington, sometimes vie for big government contracts and can be swayed by pressure from their shareholders and employees.

But that kind of leverage carries little weight with sites like 8chan.

8chan, whose owner resides in the Philippines, bills itself as the “Darkest Reaches of the Internet” and is widely seen as a haven for unbridled free speech and a breeding ground for domestic terrorism. It’s even more of a free-for-all than its better-known counterpart 4chan, which developed a reputation for racist content.

Numerous reports have linked 8chan to misogynistic material, child pornography and the infamous QAnon conspiracy, which claims Trump is waging a secret war against pedophiles and so-called “deep state” actors.

That fringe quality, though, means it’s harder to get such sites to remove content than it was to get big social media companies to remove videos and posts by Islamic State supporters.

“The difference is that with ISIS you’re mostly dealing with the mainstream sites of the world: YouTube, Google, and Facebook,” said Seamus Hughes, a former top staffer at the National Counterterrorism Center who now serves as deputy director of George Washington University’s Program on Extremism. “But with white nationalism, white supremacy, you’re dealing with fringe websites that aren’t part of the larger ecosystem of content moderation.”

For years, Facebook, Microsoft, Twitter and Google-owned YouTube proclaimed the difficulty of determining what counts as terrorist content and removing it across their platforms. In 2017, under pressure from Washington, the four companies formed the Global Internet Forum to Counter Terrorism to share best practices and work together to combat violent and extremist posts. Pinterest and Dropbox later joined what Hughes calls a “coalition of the willing.”

Only more recently have lawmakers like Rep. Bennie Thompson (D-Miss.), the chairman of the House Homeland Security Committee, turned their attention to sites like 8chan.

“To be honest with you, most of us had never heard of that channel until a few months ago, and then you find out people have been using it for quite a while,” Thompson told POLITICO in May.

3) Even the big sites still provide gateways to radicalism

Experts on white supremacy say that while 8chan might grab headlines, some people — especially young white men — first get a taste of the ideology on popular sites like Twitter and YouTube, despite all the years of pressure for the services to stop fostering hate.

Beirich said tech industry leaders turned a blind eye to white nationalist speech on their sites until the deadly clash between white supremacists and counter-protesters in Charlottesville, Va., in August 2017 that prompted a public reckoning about online hatred.

“Until 2017, for 10 years, we had no idea how many young white men were radicalized into hardcore white nationalism” online, she said. “This is why we’ve been arguing hate groups should come off of these mainstream platforms for years.”

Since then, the companies have taken steps to crack down. Facebook earlier this year expanded its definition of hate speech to include white nationalist and white separatist content. But the move sparked objections from some right-wing commentators, who tied the policy to longstanding allegations that online platforms censor conservative speech. And advocacy groups warned that the policy shift could inadvertently sweep up groups looking to combat online extremism.

“When speech is censored by private parties based on the content of that speech, there's nothing stopping Facebook — or YouTube or Twitter — from using that same power to censor organizations fighting to protect abortion rights or individuals fighting against climate change tomorrow,” American Civil Liberties Union staff attorney Vera Eidelman told POLITICO earlier this year.

Fringe platforms, meanwhile, have pointed to the ongoing presence of hate speech and extremist content on top platforms to deflect criticism. “My question is, why is the focus on 8chan? The El Paso shooter also had accounts on LinkedIn, Facebook, Instagram, and Twitter from my understanding,” read a statement posted on the Twitter account of Gab, a social network known as a hotbed for white nationalist content.

The gunman who killed more than 50 people in March at mosques in Christchurch, New Zealand, livestreamed that massacre on Facebook, the statement noted, “and no one called for Facebook to be shut down.”

4) Racially divisive speech can fail to trigger the necessary alarms

8chan has served repeatedly as the digital home base where suspects in mass shootings have promoted violent or racist views before carrying out their plans. In March, an account believed to belong to the Christchurch gunman posted a manifesto airing white nationalist sentiment to the platform. And in April, an individual who identified himself as the suspect in a deadly shooting near San Diego, Calif., posted racist views on the forum before that attack.

But Shahed Amanullah, a former senior adviser on technology to secretaries of State Hillary Clinton and John Kerry, said both government and industry are less likely to view domestic expressions of racial extremism as problematic than statements supporting international terrorism.

“Back when I was in the State Department, obviously people were very concerned about this behavior coming from Muslim groups, like al Qaeda, but there’s a disconnect: When it’s people who are familiar or closer to home, it doesn’t register as an existential threat, because people are just too familiar with it. There’s terrorism and there’s this thing we don’t think of as terrorism,” said Amanullah.

“But there’s no such thing as ‘domestic terrorism’ anymore because borders don’t mean anything,” he said. And the online platforms, he says, need to take homegrown white supremacist rhetoric seriously: “If you’re truly serious about keeping language off your platform that leads to violence, you need to be true to your word whether it’s Islamic extremists or white nationalists.”

Washington's failure to treat white supremacy as a serious threat can mean that online platforms don’t feel pressure to act, said Hughes, the former counterterrorism official. “Technology companies respond to the threat of regulation, and if that happens with white nationalism, you might see more action,” he said.

5) The real decision-makers — internet infrastructure companies — don’t want the responsibility

The real power brokers at the moment are the internet infrastructure companies that have the power to erase whole sites from the web, at least for a time.

On Monday, the CEO of Cloudflare — a company that helps protect websites from cyberattacks that can render them unreachable — said that he’d made the decision to pull his company’s protective services from 8chan in light of the El Paso attack. “The rationale is simple: they have proven themselves to be lawless and that lawlessness has caused multiple tragic deaths,” wrote Matthew Prince, the CEO.

Some lawmakers have pointed to those sorts of moves as useful progress. Rep. Mike Rogers (R-Ala.) told POLITICO that while fringe sites like 8chan “have yet to demonstrate any willingness to limit content depicting violence, torture, racism and child pornography,” public scrutiny, and the resulting response by services such as Cloudflare, has succeeded in limiting their ability to operate.

“Our efforts should be focused on containing, counter-messaging, and delegitimizing these bastions of hate,” said Rogers, the top Republican on the House Homeland Security Committee.

But Prince, the Cloudflare CEO, says he dislikes wielding that power. “We continue to feel incredibly uncomfortable about playing the role of content arbiter and do not plan to exercise it often,” he wrote. “Cloudflare is not a government. While we've been successful as a company, that does not give us the political legitimacy to make determinations on what content is good and bad. Nor should it.”

Eidelman, the ACLU staff attorney, told POLITICO that “as odious and truly reprehensible as the statements in the manifesto are, I think we have to be careful about a world in which a handful of actors can drive speakers off the web at their sole discretion.”

8chan was offline for parts of Monday but is likely to return to the web once it lines up other service providers.