Concerns over smart computers are nothing new, so why are so many experts suddenly wondering whether artificial intelligence will kill us all? Are we really at risk of extinction from robots?
As far-fetched as it may sound, these are reasonable questions, and they’re not the only ones.
Let’s deal with some of them.
Is AI a real threat?
It’s real. Computers are learning faster and faster, and every use gives them more to learn from. Geoffrey Hinton, the “Godfather of AI,” said as much when he resigned from Google recently.
“Look at how it was five years ago and how it is now,” Hinton said, speaking to The New York Times. “Take the difference and propagate it forwards. That’s scary.”
How quickly is AI growing?
Sticking with the 75-year-old Hinton, who now says, in effect, that he regrets his life’s work, computer intelligence is growing exponentially.
Computers, Hinton said, speaking to BBC News, “can learn separately but share their knowledge instantly. So, it’s as if you had 10,000 people and whenever one person learned something, everybody automatically knew it.”
Now, imagine if they all decide that humans are an existential threat to the planet, and therefore to the computers themselves.
How long would it take for computers to shut down the technology that powers global finance or food manufacturing or medicine?
What is AI? Is it like the movies?
That depends on the movie.
The character “Data” from “Star Trek” was certainly a best-case scenario. He always viewed himself (or “it” always viewed “itself”?) as something less than human but strove to achieve humanity as an ideal, going so far as to own a pet and learn classical music.
But that character is the outlier in science fiction.
There was HAL in the 1968 film “2001: A Space Odyssey.” There was “The Terminator” franchise that started in 1984 with Arnold Schwarzenegger.
Will Smith fought a sentient machine in 2004’s “I, Robot.” Four years later, Disney-Pixar gave us “WALL-E.”
All of these examples are quite grim.
What is AI used for?
AI is everywhere. It’s probably in your pocket or your hand as you read these words.
Each time you send a text message, your smartphone predicts the next word you plan to type.
Each time you look for a Netflix show or YouTube video to watch, an algorithm predicts what you might want to watch next.
Each time you use a tool like Google Translate, the service gets a little more capable.
These are examples of AI that rob us of the opportunities for serendipitous discovery that we used to have whenever we walked into a video store to rent a movie or met a stranger on a trip abroad.
And it’s expanding every day.
The search engine Bing now uses a “chatbot,” essentially a computer program that converses with us in our preferred language in place of an actual human.
Such bots are employed by countless companies to reduce the cost of labor.
Where else is AI used?
Students are using programs such as ChatGPT, which launched late last year, to write term papers.
Walmart is using AI to negotiate with suppliers.
And JPMorgan is developing an AI tool to provide financial advice.
How is AI bad?
Great question. Are students who use chatbots to write term papers “cheating”? Maybe.
Or maybe they’re just using a modern-era version of CliffsNotes. Maybe they’re displaying technical acumen that will help them far more in the future than anything they might have memorized in the classroom.
Maybe Walmart is creating savings that can be passed along to the consumer.
Maybe taking human bias out of investment strategy is a good thing.
But does it feel like that in your gut?
Do you really think there’s no value to a student analyzing Shakespeare, which could lead to empathy and understanding of the human condition?
Do you really think the honchos at Walmart will reduce the price of a gallon of milk instead of putting that money in their pockets?
Do you think any small or minority-owned business stands a chance of getting a loan when computers analyze previous decisions to make new decisions?
Because I don’t.
Sounds grim when you put it that way.
Understand, I grew up in Detroit in the ’90s.
The auto industry had collapsed and a lot of the people I grew up around blamed robots for taking their jobs.
Now, imagine the entire nation goes from Detroit in the ’50s and ’60s to Detroit in the ’90s and 2000s, and you’ll get a better sense of my concern.
C’mon, there’s got to be a benefit to AI.
I guess, but does the benefit outweigh the concerns?
Make no mistake, AI can be fun. Computers are creating images of presidents with mullets. And AI has been used to have the rapper Biggie Smalls recite Tupac Shakur lyrics.
See? It’s not all doom and gloom.
If you say so. For me, I’m reminded of the old Jeff Goldblum line from “Jurassic Park”: “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”
An organization called the Center for AI Safety has released a statement warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The statement was signed by some of the bosses associated with Microsoft and Google.
If they’re worried, we should be worried.
That said, you can’t put the toothpaste back in the tube.
AI is here, but lawmakers and executives need to be responsible with how they allow it to be deployed and how they manage the technology for national security.
Don’t forget, at this point, AI is still more of a tool than a self-guided force.
That means we don’t need to worry about bots deciding that humans aren’t good for the planet as much as we need to worry about China, Russia, Iran or North Korea making the same determination about the U.S. and using AI to accomplish their goals.
This article originally appeared on Arizona Republic: Is AI a threat to humanity? There is reason to worry