OpenAI CEO Sam Altman Asks Congress to Regulate AI

Senate Judiciary Subcommittee Hearing On Artificial Intelligence

Sam Altman, CEO and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, on May 16, 2023. Credit - Eric Lee/Bloomberg—Getty Images

OpenAI CEO Sam Altman made an appeal to members of Congress under oath: Regulate artificial intelligence.

Altman, whose company is at the forefront of generative AI technology with its ChatGPT tool, testified in front of the Senate Judiciary Committee for the first time in a Tuesday hearing. And while he said he is ultimately optimistic that innovation will benefit people on a grand scale, Altman echoed his previous assertion that lawmakers should create parameters for AI creators to avoid causing “significant harm to the world.”

“We think it can be a printing press moment,” Altman said. “We have to work together to make it so.”

Joining Altman in testifying before the committee were two other AI experts, professor of Psychology and Neural Science at New York University Gary Marcus and IBM Chief Privacy & Trust Officer Christina Montgomery. The three witnesses supported governance of AI at both federal and global levels, with slightly varied approaches.

“We have built machines that are like bulls in a china shop: Powerful, reckless, and difficult to control,” Marcus said. To address this, he suggested the model of an oversight agency like the Food and Drug Administration, so that creators would have to prove the safety of their AI and show why the benefits outweigh possible harms.

The senators leading the questioning, however, were more skeptical about the rapidly evolving AI industry, likening its potential impact not to the printing press but a few other innovations—most notably, the atomic bomb.

Read more: Pausing AI Developments Isn’t Enough. We Need to Shut it All Down

Sen. Richard Blumenthal (D., Conn.), chair of the group’s subcommittee on Privacy, Technology, and the Law, revealed his wariness of AI when he replied: “Some of us might characterize it more like a bomb in a china shop, not a bull.”

The session lasted nearly three hours, and the senators’ questions touched on a wide range of concerns about AI, from copyright issues to military applications. Here are some key takeaways from the proceedings.

Consensus on the Dangers

This hearing was less combative than many other high-profile exchanges between legislators and tech executives, largely because the witnesses acknowledged the dangers of unfettered growth and usage of a tool like advanced conversational AI, such as OpenAI’s chatbot, ChatGPT. For their part, the senators did not ask some of the thornier questions that experts have posed, including why OpenAI chose to release its AI to the public before fully assessing its safety, and how OpenAI created its current version of GPT-4 in particular.

Early on, Sen. Dick Durbin (D., Ill.) remarked that he could not recall a time when representatives for private sector entities had ever pleaded for regulation.

Altman and the senators alike expressed their fears about how AI could “go quite wrong.”

When Sen. Josh Hawley (R., Mo.) cited research, for example, showing that large language models (LLMs) like ChatGPT could draw from a media diet to accurately predict public opinion, he asked Altman whether bad actors could use that technology to fine-tune responses and manipulate people into changing their opinions on a given topic. Altman said that possibility, which he called “one-on-one interactive disinformation,” was one of his greatest concerns, and that regulation on the topic would be “quite wise.”

Marcus added that the impact on job availability could be unlike disruptions from previous technological advances, and Montgomery was a proponent for regulating AI based on the highest risk uses, such as around elections.

Read more: The AI Arms Race Is On. Start Worrying

When pressed on his worst fear about AI, Altman was frank about the risks of his work.

“My worst fears are that we—the field, the technology, the industry—cause significant harm to the world. I think that can happen in a lot of different ways,” Altman said. He did not elaborate, but warnings from critics range from the spread of misinformation and bias to bringing about the complete destruction of biological life. “I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that,” Altman continued. “We want to work with the government to prevent that from happening.”

Concerns about AI prompted hundreds of the biggest names in tech, including Elon Musk, to sign an open letter in March urging AI labs to pause the training of super-powerful systems for six months due to the risks they pose to “society and humanity.” And earlier this month, Geoffrey Hinton, who has been called the “godfather” of AI, quit his role at Google, saying he regrets his work and warning of the dangers of the technology.

Specific Regulation Recommendations

Altman laid out a general three-point plan for how Congress could regulate AI creators.

First, he supported the creation of a federal agency that can grant licenses to create AI models above a certain threshold of capabilities, and can also revoke those licenses if the models don’t meet safety guidelines set by the government.

The idea was not new to the lawmakers. At least four senators, Democrats and Republicans alike, addressed or supported the idea of creating a new oversight agency during their questioning.

Second, Altman said the government should create safety standards for high-capability AI models (such as barring a model from self-replication) and create specific functionality tests the models have to pass, such as verifying a model’s ability to produce accurate information or ensuring it doesn’t generate dangerous content.

And third, he urged legislators to require independent audits from experts unaffiliated with the creators or the government, to ensure that AI tools operate within the legislative guidelines.

Read more: Why Microsoft’s Satya Nadella Doesn’t Think Now Is the Time to Stop on AI

Marcus and Montgomery both advocated for requiring radical transparency from AI creators, so that users would always know when they were interacting with a chatbot, for example. And Marcus discussed the idea of “nutrition labels,” where creators would explain the components or data sets that went into training their models. Altman, notably, avoided including transparency considerations in his regulation recommendations.

Lawmakers in Europe are further along in regulating AI applications, and the E.U. is deciding whether to classify general-purpose AI technology (on which tools like ChatGPT are based) as “high risk.” Since that would subject the technology to the strictest level of regulation, many big tech companies like Google and Microsoft—OpenAI’s largest investor—have lobbied against such classification, arguing it would stifle innovation.

Avoiding a Similar Social Media Problem

The senators at the hearing affirmed that they intend to learn from their past mistakes with data privacy and misinformation issues on social networks like Facebook and Twitter.

“Congress failed to meet the moment on social media,” Blumenthal said. “Now we have the obligation to do it on AI before the threats and the risks become real.”

Read more: The ‘Don’t Look Up’ Thinking That Could Doom Us With AI

Faced with an unknowable future of AI technology, the nearly dozen legislators at the hearing covered a wide range of issues with their questions. Each highlighted a different area of concern about the impacts of AI.

Sen. Marsha Blackburn (R., Tenn.), for example, asked about compensation for musicians and artists whose work was used to train models that can then create similar works in their styles or voices. Sen. Alex Padilla (D., Calif.) asked about issues of language inclusivity and providing the same technology for people across cultures. Sen. Amy Klobuchar (D., Minn.) asked about protections for local news agencies, and Sen. Lindsey Graham (R., S.C.) asked about how AI could impact military drones and change warfare. Other topics included assessing the risks of an AI industry concentrated in very few corporate powers, and ensuring the safety of children who use the tools.

Altman, Marcus, and Montgomery all expressed readiness to continue working with the government to find answers to those questions, and Blumenthal indicated that this was just the first in a series of committee hearings.

“I sense that there is a willingness to participate here that is genuine and authentic,” he said.