Congress Is Blowing It When It Comes to AI Regulation

Sen. Chuck Schumer convened AI industry leaders and stakeholders on Capitol Hill today for a closed-door summit on how Congress should regulate the emerging technology. The AI Insight Forum brings together “top voices in business, civil rights, defense, research, labor, the arts, all together, in one room,” according to Schumer, ostensibly to inform a future policy framework for the tech.

These voices include big-name tech leaders like OpenAI CEO Sam Altman, Tesla CEO Elon Musk, former Google CEO Eric Schmidt, Meta CEO Mark Zuckerberg, and Microsoft co-founder Bill Gates; labor leaders like the WGA’s Meredith Stiehm and the AFL-CIO’s Elizabeth Shuler; as well as a handful of researchers and user advocates.

However, one thing is sure to stand out when you look at the full list of attendees: The vast majority of them run tech companies. In fact, 13 of the 22 reported attendees are tech CEOs.

The disparity between representatives of private companies and the researchers, labor leaders, and user advocates in attendance is stark. It also underscores one of experts’ biggest concerns about how lawmakers are approaching AI regulation: What happens when the very companies you’re trying to regulate have the loudest voices in the room?

“[The] tone is set by the first forum, and the dominance of industry perspectives in this first panel is a clear signal of whose voices Congress is taking more seriously,” Suresh Venkatasubramanian, director of Brown University’s Center for Tech Responsibility, told The Daily Beast.

Venkatasubramanian, who also co-authored the Blueprint for an AI Bill of Rights published by the White House in October 2022, said that while he was glad Congress reached out to stakeholders, lawmakers are still falling well short of their promise to scrutinize and regulate the emerging technology. The issue, as the guest list makes plain, is that there’s simply not enough input from the AI researchers, ethicists, and advocates who more deeply understand the issues that might affect users today.

In fact, just three of the attendees fall into those categories: Jack Clark, the co-founder of AI safety startup Anthropic; Tristan Harris, the co-founder of the user advocacy non-profit Center for Humane Technology; and Deborah Raji, an AI researcher at the University of California, Berkeley. Raji recently took to X (formerly known as Twitter) to voice her concerns about the lack of researchers and academics in the forum.

“This is unsurprising, but also disappointing,” Venkatasubramanian said. “We don't let the fox design the henhouse. I would hope that Congress takes the perspectives of the people affected by technology seriously, and makes sure that we the people write the rules, and not tech companies.”

“It’s a myth that only the people who build the tech understand it well enough to inform regulation,” Emily Bender, a professor of linguistics at the University of Washington, told The Daily Beast. “In fact, they are among the last people I’d like Congress to be spending time consulting.”

Schumer’s summit is disconcerting in another way, too: Unlike the AI hearings held in May, this meeting was closed to the press and public, a fact that has stirred the ire of AI researchers and policymakers alike. Sen. Elizabeth Warren criticized the meeting both for its closed-door nature and because senators weren’t allowed to question the attendees directly, though they could submit written questions.

“These tech billionaires want to lobby Congress behind closed doors with no questions asked,” Warren told NBC News. “That’s just plain wrong.”

The sentiment is shared across the aisle too, with Sen. Josh Hawley telling the broadcaster that “it’s ridiculous that all these monopolists are all here to tell senators how to shape the regulatory framework so they can make the maximum amount of money.”

Hawley co-authored a bipartisan AI governance framework with Sen. Richard Blumenthal. The proposal, introduced in June, outlines a blueprint for regulation: establishing an AI oversight body, requiring transparency about the development process behind these systems, and requiring tech companies to obtain licenses to develop and deploy AI models.

“It’s a very positive step in terms of Congress starting to seriously consider that AI is going to be an extremely powerful and transformative technology over the coming years,” Daniel Colson, co-founder of the research and advocacy non-profit AI Policy Institute, told The Daily Beast. He added that he was encouraged by both the forum and the Blumenthal-Hawley framework, though he worried that Congress still might not go far enough to properly regulate the technology.

AI experts like Venkatasubramanian say that even the framework leaves a lot to be desired. For one, there’s concern that the licensing proposal will create hurdles that ultimately favor Big Tech companies with the resources and funding needed to obtain licenses. And AI is, by its very nature, an incredibly difficult technology to license.

Compare it to drug regulation: The FDA approves or rejects a drug before it is released to market. That process works “because a drug has a clearly specified purpose,” Venkatasubramanian explained. That’s not the case with AI models, which can be put to a variety of purposes far beyond any single intended use case.

That means a model can be used in harmful ways its creators never intended, and no amount of licensing can prevent that if anyone can use the deployed models. In other words, it’s not really keeping anyone safe.

“The further the system is from direct impact, the harder it will be to talk about licensing as a useful model for regulation,” he added. “[Companies] will actively try to position their products to avoid regulation if that’s the case by claiming that they are not responsible for downstream misuse—as they already do.”

There’s also the fact that much of the discussion surrounding AI, on Capitol Hill and among the public at large, has focused on generative AI: systems that can create content and media like text, images, videos, and audio. These technologies are no doubt impressive, and they are already hurting the jobs of writers and artists, but the focus on them obscures the fact that AI goes much deeper than ChatGPT or Midjourney.

It’s the algorithms that reject Black and Brown applicants for home loans, the ones that suggest harsher jail sentences for people of color, and the automated recruitment systems that disproportionately reject women applicants. AI has harmed and will continue to harm people because of the bias trained into these models, and so far it doesn’t seem like many on Capitol Hill fully grasp that fact.

“Congress needs to resist the urge to focus only on the latest incarnation of AI, and lay out strong protections that are principled and enduring,” Venkatasubramanian said. “For all of us.”
