Why are we so afraid of AI?


Tech companies are more excited than kids on Christmas about AI. Most people aren't so thrilled.

New surveys about public attitudes toward artificial intelligence taught me two things:


First, the more AI becomes a reality, the less confidence we have that AI will be an unqualified win for humanity.

And second, we don't always recognize the pedestrian uses of AI in our lives - including filtering out email spam or recommending new songs - and that may make us overlook both the risks and benefits of the technology.

The bottom line: AI has not won your trust. You want to see proof of its benefits before the technology is used in your hospital room, on the battlefield and on our roads.

This skepticism is healthy. Frankly, you might have more good sense about AI than many of the experts developing this technology.

If tech companies, AI technologists and regulators are listening, you are saying loud and clear that you have nuanced opinions about where AI should and shouldn't be used.

And this AI trust problem won't be helped by unhinged replies from Microsoft's AI chatbot or Tesla's recent overhaul of its AI-powered driver assistance feature because of car crash risks.

Let's dig into the public attitudes about AI and what they might mean for your life.

- - -

AI's image problem

A Monmouth University poll released last week found that only 9 percent of Americans believed that computers with artificial intelligence would do more good than harm to society.

When the same question was asked in a 1987 poll, a higher share of respondents - about one in five - said AI would do more good than harm, Monmouth said.

In other words, people have less unqualified confidence in AI now than they did 35 years ago, when the technology was more science fiction than reality.

A Pew Research Center survey asked people different questions but found similar doubts about AI. Just 15 percent of respondents said they were more excited than concerned about the increasing use of AI in daily life.

(The Pew survey was conducted in December and published last week. Monmouth conducted its poll in late January.)

The biggest share of respondents in both polls said they had mixed views on whether AI would be a plus or a minus.

"It's fantastic that there is public skepticism about AI. There absolutely should be," said Meredith Broussard, an artificial intelligence researcher and professor at New York University.

Broussard said there is no way to design artificial intelligence software that can make inherently human decisions, like grading students' tests or determining the course of medical treatment.

- - -

Where you think AI is a good idea and a bad idea

Most Americans essentially agree with Broussard that AI has a place in our lives, but not for everything.

Monmouth asked people six questions about settings in which AI might be used. Most people said it was a bad idea to use AI for military drones that try to distinguish between enemies and civilians, or for trucks making local deliveries without human drivers. Most respondents said it was a good idea for machines to perform risky jobs such as coal mining.

Attitudes about where AI is right and wrong haven't budged much since Monmouth asked people those questions in 2015.

Alec Tyson, associate director of research with Pew, told me that prior research by his team found that people want to see evidence of tangible benefits before they feel confident in AI for high-stakes settings such as law enforcement or self-driving cars.

Public attitudes can shift, of course. We change our minds all the time. But the irony is that AI is being tested or used in many settings in which people expressed doubts, including self-driving cars and deciding when to administer medicines.

Roman Yampolskiy, an AI specialist at the University of Louisville engineering school, told me he's concerned about how quickly technologists are building computers that are designed to "think" like the human brain and apply knowledge not just in one narrow area, like recommending Netflix movies, but for complex tasks that have tended to require human intelligence.

"We have an arms race between multiple untested technologies. That is my concern," Yampolskiy said. (If you want to feel terrified, I recommend Yampolskiy's research paper on the inability to control advanced AI.)

- - -

AI is everywhere, and we may not know it

Automated product recommendations on sites like Amazon, email spam filters and the software that chats with you on an airline website are examples of AI. The Pew survey found that people didn't necessarily consider all of that stuff to be AI.

And Patrick Murray, director of the Monmouth University Polling Institute, said few of his students said yes when he asked if they use AI on a regular basis. But then he started to list examples including digital assistants such as Amazon's Alexa and Siri from Apple. More students raised their hands.

The term "AI" is a catch-all for everything from relatively uncontroversial technology, such as autocomplete in your web search queries, to the contentious software that promises to predict crime before it happens. Our fears about the latter might be overshadowing our view of the benefits from more mundane AI.

Broussard also said that public skepticism of AI may be influenced by depictions of evil computers from books and movies - like Skynet, the super-intelligent malicious machines in "The Terminator" movies. Broussard said the ways AI can end up eroding your quality of life won't be as dramatic as murderous fictional computers.

"I'm worried about constant surveillance and AI used in policing and people relying on AI-based worker management systems that depend on not giving people biology breaks in factories," Broussard said. "I am not worried about Skynet."

- - -

Elsewhere . . . One tiny win

Twitter said last week that it will stop letting people receive one-time account access codes by text message, unless they pay for its subscription service.

You have options if Twitter's decision affects you.

To remind you, many apps and sites give you the option to add a second step to log in for stronger security. With two-factor authentication, you must have both your account password and some other proof that you are you - like a temporary string of numbers that the app texts to you.

Instead of receiving those codes by text message, you can download and use free apps that generate limited-time codes as an extra security measure.
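For the curious: those apps aren't doing anything mysterious. Most of them implement a published standard called TOTP (RFC 6238), which combines a shared secret with the current time to produce a short code that changes every 30 seconds. Here is a minimal Python sketch of that math, using only the standard library and a test secret from the RFC itself - it is an illustration of the standard, not the code of any particular app:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, now=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    # The shared secret is typically given to you as a base32 string (or QR code).
    key = base64.b32decode(secret_b32, casefold=True)
    # The "moving factor" is the number of 30-second steps since the Unix epoch.
    counter = int((time.time() if now is None else now) // step)
    # HMAC the counter (as an 8-byte big-endian integer) with the secret.
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes based on the last nibble of the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    # Keep the last `digits` decimal digits, zero-padded.
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret: the ASCII bytes "12345678901234567890", base32-encoded.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, now=59))  # the RFC's SHA-1 test vector at T=59 gives "287082"
```

Because both your phone and the website derive the code from the same secret and the same clock, no text message ever needs to be sent - which is exactly why these apps keep working when SMS codes go away.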

You can download Google's two-factor authentication app for iOS or Android; or Authy for iOS or Android from a company called Twilio; or Microsoft's authentication app for iOS or Android.

The Verge has instructions on adding authenticator apps to your Twitter account.
