Science fiction authors and futurists have long speculated about the Singularity: a coming technological event that transforms humanity in ways people can’t even begin to understand.
The term “singularity” has been applied to many different types of developments, from accelerated technological progress to an event that suddenly disrupts the course of human history. But the most common idea of “The Singularity” may be the advent of smarter-than-human AI: machines or robots that learn, reason and grow on their own.
Scary visions of the Terminator or Cylons may spring to mind. But is the Singularity really something to worry about? Is it something that will happen in the foreseeable future? Will the rise of artificial intelligence happen at all?
Luke Muehlhauser and his organization, the Singularity Institute in Berkeley, Calif., offer some possible answers.
“We’re designing machines that are more and more intelligent at doing very specific things. As this progresses, machines will be more intelligent than humans at a greater number of things,” Muehlhauser, the institute's executive director, told TechNewsDaily. “So at some point, it looks like we’ll have machines that are smarter than humans in roughly all domains of activity.”
Muehlhauser doesn’t think that point is very far off. He predicts the Singularity will happen sometime between 10 and 140 years from today, with a likely date of 2060. But, Muehlhauser adds, “Humans are really bad at predicting AI, which is why we have very broad confidence intervals and we have to be very honest about our uncertainty.”
Other scientists are skeptical. Mary Cummings, who studies the intersection of humans and automation as an associate professor at the Massachusetts Institute of Technology, wonders how machines could become more capable than humans given how little people know about their own brains, from memory and intuition to logic and learning. Without understanding the model, how can scientists replicate it?
“I’m a big fan of the mutually supportive look at humans and technology, but this is a huge leap,” Cummings said. “We can manipulate basic electrical impulses, but for the scientific community to say we can completely replicate cognition, that to me is where the Singularity starts to fall apart.”
Muehlhauser argues that a total understanding of the human brain is not necessary to replicate the functionality of humans in machines. Just as a video game system can be emulated using totally new hardware, so can the brain. According to Muehlhauser, there’s no need to know how the video game worked, just what it did.
While experts disagree on whether and when the Singularity will occur, the event by definition would have serious implications for all facets of life. There are endless possible outcomes of the Singularity, and most have to do with what AI optimizes – that is, what it considers its most important goals. Since AI's needs will be different from humans', it is likely to have goals that are at odds with our own, Muehlhauser said.
But there are lines of reasoning that suggest the Singularity could produce artificial intelligence that is friendly and useful to humans. A higher intelligence might have higher moral standards, for example. “The Singularity could enable enormous benefits if it goes well. Really powerful AIs could be like a thousand Einsteins working to cure cancer,” said Muehlhauser.
AI could help humanity avoid other significant dangers, Muehlhauser continued, such as nuclear warfare, malicious nanotechnology or even an asteroid hitting the Earth.
Even skeptics such as Cummings don’t completely rule out the idea of the Singularity occurring. “Is the Singularity a possibility? Sure,” she said, “because everything’s a possibility and all research is worth doing. These are great ideas and people should be encouraged to keep thinking down these lines.”
Copyright 2013 LiveScience, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.