Tech experts, researchers write open letter to slow down on AI

[Photo: The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, March 21, 2023, in Boston.]

Tech experts have published an open letter calling on AI labs to “immediately pause” the training of AI systems more powerful than GPT-4 for at least six months. The letter says AI poses “profound risks to society and humanity” and therefore needs to be regulated.

Among those who signed the letter were Elon Musk, Apple co-founder Steve Wozniak, and other tech researchers, professors and developers, including some who work on AI themselves. The document had 1,535 signatures as of 12:10 p.m. MDT on Thursday.


GPT-4 differs from the original ChatGPT in that it can produce content based on both text and image inputs, rather than text alone. The letter’s signatories argue this is as far as AI advancement should go, for now.

The letter describes the potential dangers of AI, warning that it can easily spread misinformation and is approaching a level of intelligence at which it can compete with humans, or even “replace us.” The authors attribute this to companies locked in “an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control.”


To rein in these systems, the letter implores AI developers to take an “AI summer,” during which they should jointly develop shared safety protocols, audited by independent experts. If such a pause cannot be enacted quickly, the letter says, governments should step in and institute a moratorium.

The letter also calls on policymakers to play a role in regulation by establishing capable regulatory authorities dedicated to overseeing AI, developing a certification system, instituting liability for “AI-caused harm” and funding extensive AI safety research.

The letter concedes that not all AI work should stop — just the kind that’s advanced enough to pose a threat to society. Once it’s well managed, AI can offer humanity a “flourishing future,” the authors write.

But in the meantime, it may be wise to take a step back.
