Josh Hawley’s solution to concerns about artificial intelligence? Let people sue

Ask Sen. Josh Hawley about tech regulation and the Missouri Republican usually has a ready answer: Let people sue.

It was Hawley’s answer when he first came to Congress in 2019 with the aim of taking on large tech companies. It was his answer during a February Senate Judiciary Committee meeting when experts discussed the need for greater online safety for children. And it’s his answer to how Congress should address the burgeoning field of artificial intelligence.

“I think that giving people the right, that individual right to sue, is a powerful way to check the company’s power,” Hawley said. “I think that is a more effective way, in general, than giving a bunch of administrative agencies authority, who then get immediately captured by the industry.”

Hawley and Sen. Richard Blumenthal, a Connecticut Democrat, co-sponsored a bill this month that would exempt generative artificial intelligence — the type of A.I. that powers popular new tools like ChatGPT — from Section 230, a law that prevents online companies from being sued over content others post to their websites.

The pair’s bill comes amid an increased sense of urgency to create comprehensive regulations on technology, as lawmakers and the public have grown aware of the powerful potential of artificial intelligence, both for innovation and for spreading misinformation. Already, some campaigns have used the technology to craft text for fundraising emails, people have produced images that duped the public and a congressman has delivered an A.I.-written speech on the House floor.

Hawley and Blumenthal chaired a hearing specifically focused on the oversight of A.I. in May. But their first piece of legislation, which is unlikely to make it through Congress on its own, may be more about sending a bipartisan message than crafting substantive policy.

Experts question whether the exemption is even needed: generative AI is a new tool that wouldn’t necessarily fall under the traditional scope of Section 230, a law Hawley has pushed to eliminate for years. The law shields websites and apps from being sued over content posted to their sites by users.

“Josh Hawley has strong incentives to preen and grandstand about Section 230,” said Berin Szóka, a lawyer at TechFreedom, a think tank that specializes in technology policy. “So I would not infer anything at all about the state of the law from the fact that he sees an opportunity to promote himself.”

The widespread use of generative AI is still relatively new. And while people have been able to explore publicly available tools to make images or generate text, legal cases have yet to work their way through the system.

Jeff Kosseff, a cybersecurity law professor at the U.S. Naval Academy, said people’s ability to sue over content generated by AI will largely be shaped by the facts of the case, particularly if the content produced by AI is defamatory.

“I think that we’ll have a lot more clarity in the next few years,” Kosseff said. “Because I think we’re gonna see, some of this AI has really been spewing out some pretty difficult stuff. So I think that it’s only a matter of time before we start to see judges ruling on it.”

Sen. Ron Wyden, an Oregon Democrat who helped write Section 230, said the law would not cover apps like ChatGPT, which allows a user to hold a conversation with an A.I.-powered chatbot, because it creates content rather than just pulling it from existing sources.

The idea of generative A.I. is that it generates, or creates, something. Even when it builds on existing content, what the tool produces in response to a user’s request is something new. Szóka said that true generative AI, the kind that creates content, wouldn’t fall under Section 230, because the law is intended to protect sites that host content created by other people. But if the artificial intelligence were to operate more like a search engine, pulling up information that already exists, then it would be covered by the law.

While Hawley and Blumenthal have found common ground in some areas, like requiring A.I.-generated content to carry a watermark to help people determine whether something is fake, they remain far apart on larger questions, like whether Congress should create an agency focused specifically on technologies like A.I.

Hawley is against administrative agencies — he said they often get “captured,” or overly influenced by the companies they’re supposed to be regulating. Blumenthal, on the other hand, said there is still a need for an agency that’s focused directly on the issue.

“I agree with him, first of all, that agencies are often captured,” Blumenthal said. “That is not a responsive argument against agency regulation, but it’s certainly a strong consideration. And that’s why I have favored private rights of action so that individuals can be, in effect, private attorneys general.”

Hawley and Blumenthal aren’t the only senators looking to pass a bill dealing with tech policy. For years, lawmakers like Sen. Jerry Moran, a Kansas Republican, have been working on data privacy legislation to start to rein in companies enjoying an online Wild West, where there are few rules about how much personal information companies can collect and sell.

Anna Lenhart, a policy fellow at George Washington University’s Institute for Data, Democracy & Politics, said she believes it’s a misconception that lawmakers are just now learning about the complexities of technology. She said a bill sponsored last year, the American Data Privacy and Protection Act, would have been able to handle some of the issues that will arise from generative A.I.

“The issues, the problems, the concerns, the things that we’re worried about with this technology, are not actually new to Congress,” Lenhart said.

Szóka, too, said that there are other, more serious efforts to address tech policy in Congress.

“It’s important to note that this is really just a messaging bill,” Szóka said.

He said the bill can’t pass because of the way it handles a provision in Section 230 that deals with content moderation. While the bill includes one exemption for moderation, Szóka said the majority of case law surrounding content moderation centers on a part of the law that wasn’t exempted.

While there’s bipartisan support for regulating technology, the debate over content moderation has largely fallen along partisan lines. Republicans have criticized sites like Facebook and Twitter, which have suspended or banned political figures over posts about issues like transgender rights or, in the case of former President Donald Trump, the January 6, 2021, attack on the U.S. Capitol.

Kosseff said there are two separate visions for what the internet should look like — whether platforms should have to carry all constitutionally protected speech or whether there should be more moderation.

“I think that’s where a lot of the difficulties come in sort of finding compromise,” Kosseff said. “Because I don’t think there’s a tremendous amount of agreement as to what they want the internet to look like.”

Any stalemate over regulations risks prioritizing innovation and profits for technology companies over consumer protections, while leaving conflicts to be sorted out by the courts.

Lenhart said the problem with letting the courts decide, as Hawley has suggested, is that it takes a long time. She said the same policies that were needed when she was working on Capitol Hill as a staffer — competition reform, comprehensive data protection and transparency — are still needed today, only with some tweaked definitions.

Chris Lewis, the president and CEO of Public Knowledge, a nonprofit that advocates for an open internet, said Congress shouldn’t rush and, in the process, put out policy that isn’t comprehensive. But he added that lawmakers still need to act.

“Taking their time to understand the technology and find that balance is important,” Lewis said. “And I wouldn’t say that if you’re clocking the efforts to regulate AI back to nine months ago or so when generative AI became widely known, I wouldn’t say that we’ve been waiting too long.

“Now, do we need to move quickly? Yes, we do.”