Senators Introduce Bill to Exempt AI from Section 230 Protections

Senators Richard Blumenthal (left) and Josh Hawley (right)

Two U.S. senators are joining hands across the aisle to try to separate artificial intelligence from the rest of the internet. Senators Josh Hawley and Richard Blumenthal have introduced new legislation that, if passed, would exempt AI from Section 230 protections. The proposed bill would leave AI companies potentially liable for the often incorrect, and potentially defamatory, content that large language models and other generative AI tools produce.

The measure, straightforwardly titled “No Section 230 Immunity for AI Act,” comes as the question of who holds accountability for AI’s creations is being hotly debated—with no clear legal answer yet.

“AI companies should be forced to take responsibility for business decisions as they’re developing products—without any Section 230 legal shield,” said Connecticut Democrat Blumenthal in a press statement. Missouri Republican Hawley added, “We can’t make the same mistakes with generative AI as we did with Big Tech on Section 230...the companies must be held accountable.”

Blumenthal had hinted at his new legislation earlier this week during a Senate Judiciary Committee hearing on the real-world impacts of AI on human rights. During his questioning, Blumenthal claimed social media companies had benefited from an overly broad interpretation of Section 230 and argued it was critical that Congress not repeat that scenario in the emerging generative AI era.

“We need to clarify Section 230 to say it does not apply to AI because if we don’t we’re in for a whole new world of hurt,” Blumenthal said.

The newly proposed legislation itself would amend 230 with the following text:

No Effect on Claims Related to Generative Artificial Intelligence—Nothing in this section...shall be construed to impair or limit any claim in a civil action or charge in a criminal prosecution brought under Federal or State law against the provider of an interactive computer service if the conduct underlying the claim or charge involves the use or provision of generative artificial intelligence by the interactive computer service.

Section 230 of the Communications Decency Act is widely considered the foundational law of the internet. It allows social media sites, search engines, forums, and comment sections to exist without platforms or internet providers being taken to task for the content that their users post. Though flawed, it is in many ways the policy that makes free speech online possible at all.

But in recent years, it’s faced pushback from politicians of both parties—largely on the basis of how it shields big tech companies as misinformation proliferates on the internet. President Biden has repeatedly said he wants the regulation reformed. Republican lawmakers (and some Democrats), meanwhile, have made numerous efforts to scrap the rule entirely. Sen. Hawley, in particular, has come for 230 before. (In this latest attempt, he could be leveraging the open question of AI accountability as a strategy to weaken 230's protective shield overall.)

Despite all the flak Section 230 has been getting, the Supreme Court recently opted to keep the provision intact. In May, SCOTUS dismissed a major case and declined to weigh in on the issue. But just because 230 is still the law of the land, that doesn't mean it necessarily applies to AI. If you ask the people who wrote the original 1996 policy, it doesn't.

Senator Ron Wyden and former SEC chair and congressman Chris Cox, who co-authored Section 230, have argued the provision shouldn't apply to the text, video, or images that programs like OpenAI's ChatGPT or DALL-E spit out. "Section 230 is about protecting users and sites for hosting and organizing users' speech" and "has nothing to do with protecting companies from the consequences of their own actions and products," Wyden said in a statement to Reuters on the topic back in April.

Hawley and Blumenthal's proposed legislation would be one way to solidify an exception for AI in the law. The act could turn out to be entirely superfluous, though, according to Jeff Kosseff, a professor of cybersecurity law at the U.S. Naval Academy and author of a 2019 book on Section 230, The Twenty-Six Words That Created the Internet. "We don't know if this would even be necessary," Kosseff said of the pending bill in a phone call with Gizmodo. He cited Wyden and Cox's position and noted that there has been no legal test to show what stance U.S. courts might take.

Granted, Section 230 only protects online providers and platforms in the case of content that they don’t contribute to directly, Kosseff explained. And there’s an argument to be made that AI companies “contribute” in some way to everything that these programs produce.

Be it through legislation or another route, if Section 230 doesn't apply to AI content, all of it would be left subject to existing defamation laws, Kosseff explained. Earlier this month, a Georgia radio host filed the first-ever libel case against OpenAI over ChatGPT allegedly "hallucinating" and producing false text claiming the radio host had been accused of embezzling money. Others have threatened similar action. None of these claims have yet resulted in a verdict.

Officially opening up AI makers to liability for everything their models produce would likely crush the newly booming industry. When Sam Altman and other OpenAI co-founders called for regulation of artificial intelligence, a formal exclusion from Section 230's protections clearly wasn't what they were after.

In Kosseff's view, though, someone needs to take responsibility for artificial intelligence. "If it's something that people are routinely trusting for information and it's a defamation machine, someone needs to be accountable for that," he said.

However, just as blanket protections under 230 might not make much sense for AI as is, libel laws might not be the correct avenue through which to rein in the rapidly growing tech. For a plaintiff to win a libel case, they generally have to prove that a statement was made with knowledge of its falsity or with reckless disregard for the truth. But can an AI "know" anything? "Actual malice is about state of mind," noted Kosseff. "How do you show that for an AI program?"

Then there’s the interplay between user and tool. AI programs routinely generate untruths and can easily be used to make offensive or potentially damaging content (e.g. sexual deepfakes). Undoubtedly, those recurring problems are at least partially a result of their back-end code and training—determined by the companies or developers peddling the tech. But large language models and image/video generators only produce their output at the behest of the humans typing in the prompt. So who should shoulder the blame when things go wrong?

It's a complicated, unresolved issue set to determine a lot about the future of AI and the internet. There may not be any answers right now, but in Kosseff's view it is, at least, a "really fascinating question."
