What experts make of OpenAI's efforts to tackle 'superintelligence'

The News

OpenAI, the company behind ChatGPT, announced a new effort Wednesday to get ahead of “superintelligence,” a term for hypothetical AI systems smarter than humans, which the company predicts could arrive within the next decade.

We’ve rounded up experts’ insights on superintelligence and what OpenAI’s new effort hopes to accomplish.

Insights

  • OpenAI has previously voiced concern about the risks of superintelligence, calling for oversight from a regulatory body like the International Atomic Energy Agency. In a recent blog post, OpenAI said that the power of superintelligence “could lead to the disempowerment of humanity or even human extinction.”

  • OpenAI is building a team that aims, within four years, to develop ways to keep future superintelligent systems under control. The company plans to do so using AI itself, essentially training AI systems to evaluate other AI systems (see the sketch after this list), as TechCrunch explained: “It’s OpenAI’s hypothesis that AI can make faster and better alignment research progress than humans can.”

  • Claims that future superintelligent technology poses a threat to humanity distract from the need to regulate current AI systems, Mhairi Aitken, an ethics fellow at The Alan Turing Institute, argues in New Scientist. “The loudest voices shouting about existential risk are coming from Silicon Valley,” she writes, and they aim to divert attention toward the “hypothetical future abilities of AI.”

  • The frequent comparison of AI to nuclear weaponry is overblown, Nir Eisikovits, a professor and director of the Applied Ethics Center at UMass Boston, argues in The Conversation. While AI can be used for harmful purposes and should be regulated accordingly, it is unlikely to enslave humanity or end human life. AI does, however, pose an existential threat to “the way people view themselves,” Eisikovits writes. “It can degrade abilities and experiences that people consider essential to being human.”
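
To make the “AI evaluating AI” idea in the second insight concrete, here is a minimal, purely illustrative Python sketch: one stand-in model proposes candidate answers, a second stand-in “evaluator” model scores them, and the highest-scoring answer becomes the training signal. Every function name and the scoring heuristic below are hypothetical placeholders, not OpenAI’s actual method or API.

```python
# Purely illustrative sketch of one model evaluating another model's
# outputs. All names and logic here are hypothetical stand-ins.

def generate_candidates(prompt: str) -> list[str]:
    """Stand-in for the model being aligned: proposes candidate answers."""
    return [f"answer A to: {prompt}", f"answer B to: {prompt}"]

def evaluator_score(prompt: str, answer: str) -> float:
    """Stand-in for a second, evaluator model that grades each answer.
    Here it is a toy placeholder heuristic, not a real alignment judgment."""
    return float(len(answer) % 7)

def pick_most_aligned(prompt: str) -> str:
    """Select the answer the evaluator model scores highest, producing a
    training signal without a human grading every output."""
    candidates = generate_candidates(prompt)
    return max(candidates, key=lambda a: evaluator_score(prompt, a))

if __name__ == "__main__":
    print(pick_most_aligned("Explain your reasoning step by step."))
```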