Will AI soon be as smart as — or smarter than — humans?

“The 360” shows you diverse perspectives on the day’s top stories and debates.

Illustration by Shira Inbar for Yahoo News

What’s happening

At an Air Force Academy commencement address earlier this month, President Biden issued his most direct warning to date about the power of artificial intelligence, predicting that the technology could “overtake human thinking” in the not-so-distant future.

“It’s not going to be easy,” Biden said, citing a recent Oval Office meeting with “eight leading scientists in the area of AI.”

“We’ve got a lot to deal with,” he continued. “An incredible opportunity, but a lot [to] deal with.”

To any civilian who has toyed around with OpenAI’s ChatGPT (now powered by GPT-4) — or Microsoft’s Bing, or Google’s Bard — the president’s stark forecast probably sounded more like science fiction than actual science.

Sure, the latest generative AI chatbots are neat, a skeptic might say. They can help you plan a family vacation, rehearse challenging real-life conversations, summarize dense academic papers and “explain fractional reserve banking at a high school level.”

But “overtake human thinking”? That’s a leap.

In recent weeks, however, some of the world’s most prominent AI experts — people who know a lot more about the subject than, say, Biden — have started to sound the alarm about what comes next.

Today, the technology powering ChatGPT is what’s known as a large language model (LLM). Trained to recognize patterns in mind-boggling amounts of text scraped from across the internet, these systems take any sequence of words they’re given and predict which words come next. They’re a cutting-edge example of what’s often called “narrow AI”: a model created to solve a specific problem or provide a particular service. In this case, LLMs are learning how to chat better — but they can’t learn other tasks.
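To make “predict which words come next” concrete, here is a minimal, purely illustrative sketch in Python: a word-counting “bigram” predictor. This is not how ChatGPT actually works internally (GPT-4 relies on a huge transformer neural network trained on billions of words, not a lookup table), and the tiny corpus and function names here are invented for the example.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word in the toy corpus.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    candidates = following.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("the"))   # -> "cat" ("cat" follows "the" twice in the corpus)
print(predict_next("cat"))   # -> "sat" (the first of the tied candidates "sat" and "ate")

An actual LLM performs the same basic task, scoring possible next words given everything that came before, but it uses a neural network that weighs the full context rather than a simple count table.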

Or can they?

For decades, researchers have theorized about a higher form of machine learning known as “artificial general intelligence,” or AGI: software that’s capable of learning any task or subject. Also called “strong AI,” AGI is shorthand for a machine that can do whatever the human brain can do.

In March, a group of Microsoft computer scientists published a 155-page research paper claiming that one of their new experimental AI systems was exhibiting “sparks of artificial general intelligence.” How else (as the New York Times recently paraphrased their conclusion) to explain the way it kept “coming up with humanlike answers and ideas that weren’t programmed into it”?

In April, computer scientist Geoffrey Hinton — a neural network pioneer known as one of the “Godfathers of AI” — quit his job at Google so he could speak freely about the dangers of AGI.

And in May, a group of industry leaders (including Hinton) released a one-sentence statement warning that AGI could represent an existential threat to humanity on par with “pandemics and nuclear war” if we don't ensure that its objectives align with ours.

“The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton told the New York Times. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Each of these doomsaying moments has been controversial, of course. (More on that in a minute.) But together they’ve amplified one of the tech world’s deepest debates: Are machines that can outthink the human brain impossible or inevitable? And could we actually be a lot closer to opening Pandora’s box than most people realize?

Why there’s debate

There are two reasons that concerns about AGI have become more plausible — and pressing — all of a sudden.

The first is the unexpected speed of recent AI advances. “Look at how it was five years ago and how it is now,” Hinton told the New York Times. “Take the difference and propagate it forwards. That’s scary.”

The second is uncertainty. When CNN asked Stuart Russell — a computer science professor at the University of California, Berkeley, and co-author of Artificial Intelligence: A Modern Approach — to explain the inner workings of today’s LLMs, he couldn’t.

“That sounds weird,” Russell admitted, because “I can tell you how to make one.” But “how they work, we don’t know. We don’t know if they know things. We don’t know if they reason; we don’t know if they have their own internal goals that they’ve learned or what they might be.”

And that, in turn, means no one has any real idea where AI goes from here. Many researchers believe that AI will tip over into AGI at some point. Some think AGI won’t arrive for a long time, if ever, and that overhyping it distracts from more immediate issues, like AI-fueled misinformation or job loss. Others suspect that this evolution may already be taking place. And a smaller group fears that it could escalate exponentially. As the New Yorker recently explained, “a computer system [that] can write code — as ChatGPT already can — ... might eventually learn to improve itself over and over again until computing technology reaches what’s known as ‘the singularity’: a point at which it escapes our control.”

“My confidence that this wasn’t coming for quite a while has been shaken by the realization that biological intelligence and digital intelligence are very different, and digital intelligence is probably much better” at certain things, Hinton recently told the Guardian. He then predicted that true AGI is about five to 20 years away.

“I’ve got huge uncertainty at present,” Hinton added. “But I wouldn’t rule out a year or two. And I still wouldn’t rule out 100 years. ... I think people who are confident in this situation are crazy.”

Perspectives

Today’s AI just isn’t agile enough to approximate human intelligence

“AI is making progress — synthetic images look more and more realistic, and speech recognition can often work in noisy environments — but we are still likely decades away from general-purpose, human-level AI that can understand the true meanings of articles and videos or deal with unexpected obstacles and interruptions. The field is stuck on precisely the same challenges that academic scientists (including myself) have been pointing out for years: getting AI to be reliable and getting it to cope with unusual circumstances.” — Gary Marcus, Scientific American

New chatbots are impressive, but they haven’t changed the game

“Superintelligent AIs are in our future. ... Once developers can generalize a learning algorithm and run it at the speed of a computer — an accomplishment that could be a decade away or a century away — we’ll have an incredibly powerful AGI. It will be able to do everything that a human brain can, but without any practical limits on the size of its memory or the speed at which it operates. ... [Regardless,] none of the breakthroughs of the past few months have moved us substantially closer to strong AI. Artificial intelligence still doesn’t control the physical world and can’t establish its own goals.” — Bill Gates, GatesNotes

There’s nothing ‘biological’ brains can do that their digital counterparts won’t be able to replicate (eventually)

“I’m often told that AGI and superintelligence won’t happen because it’s impossible: human-level intelligence is something mysterious that can only exist in brains. Such carbon chauvinism ignores a core insight from the AI revolution: that intelligence is all about information processing, and it doesn’t matter whether the information is processed by carbon atoms in brains or by silicon atoms in computers. AI has been relentlessly overtaking humans on task after task, and I invite carbon chauvinists to stop moving the goal posts and publicly predict which tasks AI will never be able to do.” — Max Tegmark, Time

The biggest — and most dangerous — turning point will come if and when AGI starts to rewrite its own code

“Once AI can improve itself, which may be not more than a few years away, and could in fact already be here now, we have no way of knowing what the AI will do or how we can control it. This is because superintelligent AI (which by definition can surpass humans in a broad range of activities) will — and this is what I worry about the most — be able to run circles around programmers and any other human by manipulating humans to do its will; it will also have the capacity to act in the virtual world through its electronic connections, and to act in the physical world through robot bodies.” — Tamlyn Hunt, Scientific American

Actually, it will be much harder for AGI to trigger ‘the singularity’ than doomers think

“Computer hardware and software are the latest cognitive technologies, and they are powerful aids to innovation, but they can’t generate a technological explosion by themselves. You need people to do that, and the more the better. Giving better hardware and software to one smart individual is helpful, but the real benefits come when everyone has them. Our current technological explosion is a result of billions of people using those cognitive tools. Could A.I. programs take the place of those humans, so that an explosion occurs in the digital realm faster than it does in ours? Possibly, but ... the strategy most likely to succeed would be essentially to duplicate all of human civilization in software, with eight billion human-equivalent A.I.s going about their business. [And] we’re a long way off from being able to create a single human-equivalent A.I., let alone billions of them.” — Ted Chiang, the New Yorker

Maybe AGI is already here — if we think more broadly about what ‘general’ intelligence might mean

“These days my viewpoint is that this is AGI, in that it is a kind of intelligence and it is general — but we have to be a little bit less, you know, hysterical about what AGI means. ... We’re getting this tremendous amount of raw intelligence without it necessarily coming with an ego-viewpoint, goals, or a sense of coherent self. That, to me, is just fascinating.” — Noah Goodman, associate professor of psychology, computer science and linguistics at Stanford University, to Wired

Ultimately, we may never agree on what AGI is — or when we’ve achieved it

“It really is a philosophical question. So, in some ways, it’s a very hard time to be in this field, because we’re a scientific field. ... It’s very unlikely to be a single event where we check it off and say, AGI achieved.” — Sara Hooker, leader of a research lab that focuses on machine learning, to Wired