The Claims That “A.I. Will Kill Us All” Are Sounding Awfully Convenient

This article is from Big Technology, a newsletter by Alex Kantrowitz.

Shortly after ChatGPT’s release last year, a cadre of critics captured headlines and made noise on social media claiming that A.I. would soon kill us. As wondrous as a computer speaking in natural language might be, it could use that intelligence to level the planet. The thinking went mainstream via letters calling for research pauses and 60 Minutes interviews amplifying existential concerns. Leaders like Barack Obama publicly worried about A.I. autonomously hacking the financial system—or worse. And this month, President Joe Biden issued an executive order imposing some restraints on A.I. development.

That was enough for several prominent A.I. researchers to start pushing back hard after watching the so-called A.I. doomers influence the narrative and the field’s future. Andrew Ng, the soft-spoken co-founder of Google Brain, said last week that worries of A.I. destruction had led to a “massively, colossally dumb idea” of requiring licenses for A.I. work. Yann LeCun, a machine-learning pioneer, eviscerated research-pause letter writer Max Tegmark, accusing him of risking “catastrophe” by potentially impeding A.I. progress and exploiting “preposterous” concerns. A new paper earlier this month indicated that large language models can’t do much beyond the data they are trained on, making the doom talk seem overblown. “If ‘emergence’ merely unlocks capabilities represented in pretraining data,” said Princeton professor Arvind Narayanan, “the gravy train will run out soon.”

Worrying about A.I. safety isn’t wrongheaded, but the doomers’ path to notability has insiders raising eyebrows. They may have come to their conclusions in good faith, but companies with plenty to gain by amplifying doomer worries have been instrumental in elevating them. Leaders from OpenAI, Google DeepMind, and Anthropic, for instance, signed a statement putting A.I. extinction risk on the same plane as nuclear war and pandemics. Perhaps these companies—these A.I. companies—are not consciously attempting to block competition. But they surely wouldn’t be that upset if that were a byproduct.

All this alarmism makes politicians feel compelled to do something, leading to proposals for strict government oversight that could restrict A.I. development outside a few firms. Intense government involvement in A.I. research would help big companies, which have compliance departments built for these purposes. But it could be devastating for smaller A.I. startups and open-source developers who don’t have the same luxury.

“There’s a possibility that A.I. doomers could be unintentionally aiding big tech firms,” Garry Tan, CEO of the startup accelerator Y Combinator, told me. “By pushing for heavy regulation based on fear, they give ammunition to those attempting to create a regulatory environment that only the biggest players can afford to navigate, thus cementing their position in the market.”

Ng took it a step further. “There are definitely large tech companies that would rather not have to try to compete with open source [A.I.], so they’re creating fear of A.I. leading to human extinction,” he told the Australian Financial Review.

The A.I. doomers’ worries, meanwhile, feel pretty thin. “I expect an actually smarter and uncaring entity will figure out strategies and technologies that can kill us quickly and reliably—and then kill us,” Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute and one of the more outspoken doomers, told a rapt audience at TED this year. He confessed he didn’t know how or why an A.I. would do it. “It could kill us because it doesn’t want us making other superintelligences to compete with it,” he offered.

After Sam Bankman-Fried ran off with billions while professing to save the world through effective altruism, it’s high time to regard those claiming to improve society while furthering their business aims with relentless skepticism. As the doomer narrative presses on, it threatens to rhyme with a familiar refrain.

Big Tech companies already have a significant lead in the A.I. race via cloud computing services that they lease out to preferred startups in exchange for equity in those companies. Further advantaging them might hamstring the promising open-source A.I. movement—a crucial area of competition—to the point of obsolescence. That’s probably why you’re hearing so much about A.I. destroying the world. And why it should be considered with a healthy degree of caution.