How do we avoid an AI-driven extinction event? Unknown, but experts sign ‘global priority’ declaration

Artificial intelligence experts are warning of potential “extinction” events and calling on governments to step up regulations. | Adobe.com

Are emerging artificial intelligence tools destined to evolve into an existential threat at the same level as a potential global nuclear war or unforeseen biological disaster?

That’s the contention of a new, single-sentence missive issued by the nonprofit Center for AI Safety on Tuesday that’s earned the signatures of a wide-ranging group of distinguished scientists, academics and tech developers including Turing Award winners Geoffrey Hinton and Yoshua Bengio, and leaders of the major AI labs, including Sam Altman of OpenAI and Demis Hassabis of Google DeepMind.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement reads.

The Center for AI Safety says the statement, which has accrued hundreds of signatories, has the support of a “historic coalition of AI experts” along with philosophers, ethicists, legal scholars, economists, physicists, political scientists, pandemic scientists, nuclear scientists and climate scientists who believe the risk of extinction from advanced, future AI systems now ranks among the world’s most important problems.

OpenAI CEO Sam Altman speaks at a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on artificial intelligence, Tuesday, May 16, 2023, on Capitol Hill in Washington. | Patrick Semansky, Associated Press

The San Francisco-based Center for AI Safety, an organization founded in 2022 by a group of academics, has a stated mission to “reduce societal-scale risks from artificial intelligence” by equipping “policymakers, business leaders and the broader world with the understanding and tools necessary to manage AI risk.”

In press outreach accompanying the extinction statement, the group referenced concerns raised by J. Robert Oppenheimer, even as the theoretical physicist was overseeing the development of the world’s first operable nuclear weapons at the Los Alamos Laboratory in the 1940s.

“We knew the world would not be the same,” Oppenheimer once recalled. He later called for international coordination to avoid nuclear war, according to the Center for AI Safety.

“We need to be having the conversations that nuclear scientists were having before the creation of the atomic bomb,” said Dan Hendrycks, director of the Center for AI Safety, in a press release.

Hinton and Altman have also recently been sharing their personal concerns about where AI is headed and how to best manage the powerful systems as they become more advanced and, potentially, diabolical.

Altman, the co-founder and CEO of OpenAI, the company behind the ChatGPT chatbot, has become a de facto figurehead for generative AI tools, thanks to the enormous response to ChatGPT since the chatbot launched publicly last November and attracted over 100 million users. ChatGPT answers questions and can produce prompt-driven responses like poems, stories, research papers and other content that typically read very much as if created by a human, although the platform’s output is notoriously rife with errors.

Altman was among a panel of witnesses at a U.S. Senate committee hearing last month that was focused on AI concerns and potential new regulatory efforts.

He readily agreed with members of the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law that new regulatory frameworks were in order as AI tools in development by his company and others continue to take evolutionary leaps and bounds. He also warned that AI has the potential, as it continues to advance, to cause widespread harm.

“My worst fears are that we, the field, the technology, the industry, cause significant harm to the world,” Altman said. “I think that can happen in a lot of different ways. I think if this technology goes wrong, it can go quite wrong and we want to be vocal about that.

“We want to work with the government to prevent that from happening, but we try to be very clear-eyed about what the downside case is and the work we have to do to mitigate that.”

Hinton is a British-Canadian scientist and researcher widely considered the “Godfather of AI” who recently quit his job working on Google’s artificial intelligence program so he could speak more openly about his concerns over the new technology. Hinton said he’s had a change of heart about the potential outcomes of fast-advancing AI after a career focused on developing digital neural networks — designs that mimic how the human brain processes information — which have helped catapult artificial intelligence tools to their current capabilities.

“The problem is, once these things get more intelligent than us it’s not clear we’re going to be able to control it,” Hinton said. “There are very few examples of more intelligent things controlled by less intelligent things.”

In a March interview with CBS News, Hinton was asked if AI has the potential to wipe out humanity.

“It’s not inconceivable,” Hinton said. “That’s all I’ll say.”

Exactly what kind of measures are required to keep AI from sparking, or coordinating, some kind of extinction-level cataclysm, however, remains somewhat nebulous.

The Center for AI Safety’s director said the time to take action on anticipating and mitigating the potential harms of AI systems is now, and that the effort requires global participation.

“Pandemics were not on the public’s radar before COVID-19,” Hendrycks said in a release. “It’s not too early to put guardrails in place and set up institutions so that AI risks don’t catch us off guard.”

“As we grapple with immediate AI risks like malicious use, misinformation, and disempowerment, the AI industry and governments around the world need to also seriously confront the risk that future AIs could pose a threat to human existence,” Hendrycks added. “Mitigating the risk of extinction from AI will require global action. The world has successfully cooperated to mitigate risks related to nuclear war. The same level of effort is needed to address the dangers posed by future AI systems.”