Can new government regulation repack the Pandora’s box of emerging AI tools?

Michelle Budge, Deseret News

At a U.S. Senate committee hearing focused on artificial intelligence issues, lawmakers bemoaned their collective failure to act in time to protect the public from the harms associated with social media platforms and vowed not to make the same mistake when it comes to regulating emerging generative AI tools.

But what exactly those preemptive moves might look like remains murky, even as companies developing advanced digital intelligence platforms say they not only welcome new regulation but believe it’s necessary to prevent potential future cataclysms.

Sen. Richard Blumenthal, D-Conn., who chairs the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law, opened the Tuesday morning proceeding by noting the effort was the first in a planned series of hearings on oversight of artificial intelligence advancements, one “intended to write the rules of AI.”

“Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past,” Blumenthal said.

He then put some of the latest AI-driven advancements on display by playing an audio recording, created with voice cloning software, of a statement written by the generative AI tool ChatGPT. The recording mimicked Blumenthal remarkably well in both sound and content.

But he warned it could just as easily have been used to fake something much more incendiary, and he said that lawmakers now have an opportunity to create regulatory boundaries to limit the potential harms of AI before it’s too late.

“Congress has a choice now,” Blumenthal said. “We had the same choice when we faced social media, we failed to seize that moment. The result is predators on the internet, toxic content, exploiting children, creating dangers for them.

“Congress failed to meet the moment on social media, now we have the obligation to do it on AI before the threats and the risks become real.”

Witnesses before the committee on Tuesday included Sam Altman, the co-founder and CEO of OpenAI, the company behind the ChatGPT chatbot; Gary Marcus, artificial intelligence researcher and NYU professor emeritus; and Christina Montgomery, chief privacy and trust officer for IBM.

OpenAI CEO Sam Altman attends a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on artificial intelligence, Tuesday, May 16, 2023, on Capitol Hill in Washington. | Patrick Semansky, Associated Press

Altman has become something of a de facto figurehead for generative AI tools, thanks to the enormous response to the ChatGPT platform, which went public last November and has since attracted more than 100 million users. ChatGPT answers questions and can produce prompt-driven responses such as poems, stories, research papers and other content that typically read very much as if created by a human, although the platform’s output is notoriously rife with errors.

Since Altman co-founded OpenAI in 2015 with backing from tech billionaire Elon Musk, the effort has evolved from a nonprofit research lab with a safety-focused mission into a business, per The Associated Press. Its other popular AI products include the image-maker DALL-E. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.

Altman readily agreed with committee members that new regulatory frameworks are in order as AI tools developed by his company and others continue to take evolutionary leaps. He also warned that AI, as it continues to advance, has the potential to cause widespread harm.

“My worst fears are that we, the field, the technology, the industry, cause significant harm to the world,” Altman said. “I think that can happen in a lot of different ways. I think if this technology goes wrong, it can go quite wrong and we want to be vocal about that.

“We want to work with the government to prevent that from happening, but we try to be very clear-eyed about what the downside case is and the work we have to do to mitigate that.”

Lawmakers raised several ideas about how new government oversight could be constructed, including extending responsibilities to existing agencies, launching an AI-dedicated regulatory agency in the vein of the FCC or FTC, or even creating a new Cabinet-level position to oversee AI-related issues.

Montgomery advocated a “precision” regulatory approach like the one currently under consideration in the European Union, the AI Act, which would set regulatory boundaries according to four risk levels: unacceptable risk, high risk, limited risk, and minimal or no risk.

There was also discussion of U.S.-based versus international regulatory oversight and how a global AI entity might be formed.

Marcus noted that a widely agreed-upon AI “constitution” could form a starting point for regulation and could apply to either a domestic or an international body tasked with AI oversight.

Marcus also called for third-party, science-based independent auditors and higher transparency requirements for AI developers, noting that it’s impossible to anticipate where future harms could come from without knowing basic information, such as which data sets are being used to train AI models.

“Science has to be a really important part of this,” Marcus said. “We don’t have the tools right now to detect and label misinformation.”

Sen. Peter Welch, D-Vt., said he was concerned about what “bad actors can do and will do if there are no rules of the road” and said AI tools need to be monitored for privacy protections, bias concerns, intellectual property rights, disinformation and potential economic impacts.

“It’s important for Congress to keep up with the speed of technology,” Welch said. “I’ve come to the conclusion that we absolutely have to have an (AI oversight) agency ... with a scope of engagement defined by us.”

Lawmakers and witnesses also touched on an idea proposed in an open letter published in March, titled “Pause Giant AI Experiments,” which was organized by the nonprofit Future of Life Institute and has since been signed by more than 27,000 people, including some notable scientists and AI experts. It calls for at least a six-month pause on the training of “all AI systems more powerful than GPT-4,” the latest model from Altman’s company.

But most committee members discounted the effort, with some noting that enforcing a moratorium on a global scale would likely be functionally impossible.

“The rest of the global scientific community isn’t going to pause,” Blumenthal said. “Sticking our head in the sand is not the answer. Safeguards and protections, yes. But a flat stop sign … I would be very, very worried about.”