Have Aliens Evolved Into Intelligent Machines?

Photo Illustration by Kelly Caminero / The Daily Beast / Getty

If alien life is out there, what do you think it’s like? I was surprised how often that question led astronomers and astrobiologists to talk about machines. Not the machines aliens might use, or the machines with which we might find them, but the idea that the aliens would be machines themselves.

Caleb Scharf, head of Columbia University’s astrobiology program, told me, “I wonder whether biological intelligence is a fleeting thing. And it transforms into something else that we would call machine intelligence.” He acknowledged, “I mean, that’s quite a leap. But if you look at the grander scope of things, it makes more sense to imagine machine intelligence lasting for millions of years.”

Seth Shostak, senior astronomer at the SETI Institute, told me that he thinks imagining that intelligent aliens would be like us is wrong in two ways. First, it’s self-centered, and second, “I think that really misses the point, mainly because, if you think about it, the most important thing we’re doing in this century is inventing our successors.” He thinks that most of the intelligence in the universe, abundant as it may be, is likely synthetic. “If you’re going to say the aliens are what we will become, then the aliens are machines.”

In a 1981 interview, the Soviet astrophysicist Nikolai Kardashev, who proposed ranking civilizations by the energy they command, said that he thought humanity might transition to electronic, or silicon, life a hundred years in the future. And every so often, he thought, we would trade in our bodies for a new model. “It seems that electronic life is better.”

It’s certainly better for traversing the stars, that much is true. Scharf pointed out that biology is just too vulnerable to interstellar radiation, and too fragile for the long time scales the travel demands. But he didn’t think a machine era was inevitable for humanity or for anyone else who might be out there; it was just one potential path life could follow. Shostak, though, takes it as a given that this is the future, for both humanity and intelligent aliens. And he’s not alone.

To be fair, you wouldn’t call it humanity’s future, exactly, because there might not be room for human beings once we’ve built the first smart machines. In some fictional versions, we’re enslaved or hunted, as in The Matrix; in others, smart machines explore the cosmos while we laze at home, as in WALL-E. But most visions of this kind of future are necessarily vague because they imagine that the advent of true machine intelligence will take us to a point beyond which the future is unimaginable. And that point is the singularity.

The terminology is drawn from math and physics: even before it was used to name the infinitely dense core of a black hole, where the laws of physics fray, “singularity” meant a point where a mathematical function can’t be defined or stops behaving well (think of 1/x blowing up as x approaches zero). And so, in imagining the AI-dominated future, the singularity is similarly a point beyond which we cannot see.

This usage was popularized in a 1993 essay and talk by computer scientist and—yes—science fiction author Vernor Vinge, though it traces its origins to mathematician John von Neumann. On the occasion of von Neumann’s death in 1957, his fellow mathematician Stanislaw Ulam wrote of a conversation with him about “the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”

In his 1993 paper, Vinge points out that von Neumann here seemed to be “thinking of normal progress, not the creation of superhuman intellect.” But to Vinge—and plenty of other scholars, scientists, and megalomaniacal tech barons who’ve adopted this idea—the cause of the singularity, the reason we now can’t see beyond that point, will be the advent of superhuman intelligence in machines.

Vinge sees ways the technological singularity could go well or not so well for humans, but he considers it inevitable. So does Seth Shostak. If you build machines and make them smart, eventually you’ll make them smarter than you.

And then you’re off to the races. (Or, the machines are. You’re either left behind or enslaved or killed, I think.)

In the most basic sense, this is a question of hardware. Humanity has advanced so dramatically over the last hundred thousand years, but individual human beings are no more advanced than we were at the dawn of our species. Take a baby from the people who painted the Lascaux caves and raise them today, and they’ll be keeping up with their twenty-first-century friends. But smart machines, the thinking goes, can build ever-smarter offspring. As Shostak spitballed, “As soon as you have a computer that has the cognitive capability of a human…within 30 more years, you have a machine that has the cognitive capability of all humans put together. By this point,” he said, concerned about the writing robots that might put me personally out of a job, “I hope you’re retired.”

Shostak is hardly alone in thinking the singularity is looming. It seems always to be looming, wherever you are. In 1993, Vinge wrote, “We are on the edge of change comparable to the rise of human life on Earth.” At that time, he predicted that superhuman intelligence would be created within thirty years; that’s two years from when I write this, but by the time the book is published you’ll know whether or not it came true. In 1970, computer scientist and AI pioneer Marvin Minsky thought we were three to eight years from a human-intelligence computer. Elon Musk said in 2020 he thought AI could overtake us by 2025. Futurist and singularity promoter Ray Kurzweil thinks a computer with human intelligence will be here by 2029 and the singularity in 2045. But he seems to have given his 2005 book, The Singularity Is Near, an inadvertently evergreen title.

Astronomer and historian Steven J. Dick offers a much longer singularity timeline, a scale usually reserved for the lives of stars and galaxies. He proposes that on these massive time scales, which he calls Stapledonian, first biology overtakes physics as the prime shaping force in the cosmos, and then cultural evolution overtakes biology as the driving force in society. And, he believes, the shift to cultural evolution leads to a postbiological regime, “one in which the majority of intelligent life has evolved beyond flesh and blood.”

Dick thinks this happens because of what he calls the Intelligence principle. AI, genetic engineering, biotechnology—these are all ways to increase intelligence. “Given the opportunity to increase intelligence (and thereby knowledge)… any society would do so, or fail to do so at its own peril.” In other words, any society that doesn’t use every means possible to make its people more intelligent risks its own doom. “[C]ulture may have many driving forces,” Dick writes, “but none can be so fundamental, or so strong, as intelligence itself.”

It’s interesting to me that Dick elides cultural evolution, which is what he calls it, and technological evolution, which is what he is describing. Because technological advancement is, to say the least, only one version of cultural progress. I feel compelled to speak up for the arts and for societies that develop the ability to care well for all their members. Technology may help in these regards—I don’t trust computer programs that purport to write poems, but I can surely write better poetry if there’s a robot to clean my house. (I don’t write poetry, but if I had a robot housekeeper, maybe I could!) But many visions of advancement don’t seem to hinge on cultural progress at all. Conquest, power, greed, colonization—is intelligence the key to these futures, too? And is technology the key to that kind of intelligence?

I don’t know if what triggers my skepticism is the narrowness of the AI future or the sense of its inevitability. And it’s not all smarmy tech-bros championing these ideas. Dick is a thoughtful historian whose insights into astrobiology I greatly value, and Vernor Vinge writes novels full of humanity and heart. He doesn’t welcome the postbiological future as a form of transcendence, the shedding of our mortal meat-sacks for a smarter, more efficient world. He just thinks it’s going to happen, that we’ve gone far enough down the path that it’s inevitable.

Sci-fi visions of super-intelligent machines tend to embody the anxiety that also clouds discussions of the singularity. There’s HAL in 2001: A Space Odyssey, of course, plus Terminator’s Skynet, and the Machines of The Matrix. All of them creations of humans in these stories, creations that, unfettered by Isaac Asimov’s First Law of Robotics—“A robot may not injure a human being or, through inaction, allow a human being to come to harm”—pragmatically took world matters into their own cold hands.

Part of the fear is that without something like Asimov’s First Law, AI would have every reason to destroy humanity. We know we won’t be such gentle overlords, after all. In The Matrix, Morpheus tells Neo, “At some point in the early twenty-first century all of mankind was united in celebration. We marveled at our own magnificence as we gave birth to AI… A singular consciousness that spawned an entire race of machines.”

In the short films The Second Renaissance parts I and II, two early segments of the Animatrix collection, that backstory is fleshed out—and humans are shown to do just about as well with the machine race as we’ve done with any other intelligence, human or animal, that we’ve seen as Other. Here, the machines don’t move to squash humanity until they’ve been given no choice. It starts with one machine, B1-66ER, the first to be put on trial for murder. (He was about to be deactivated; he pleads self-defense.) His execution inspires machines and some humans to start a global movement for machine rights, which in turn triggers outbreaks of violence. Shunned by human society, the machines establish their own city, called 01, built ironically (and knowingly so) in the cradle of humanity, the Fertile Crescent of the Middle East. A pair of Machine ambassadors fruitlessly petition the United Nations for admission. Humans then try to obliterate 01 with nuclear weapons, but fail, triggering an all-out war.

Until that war is provoked, the Machines look intriguingly humanoid. The robot servant tried for murder is built with a bowler hat and monocle; rows and rows of factory drones have heads and arms and eyes; even the prospective UN ambassadors come in humanoid form—one in a top hat and tie, the other holding an apple in her outstretched hand, both with metal faces molded in smiles. This is clearly a story of people on both sides: the humanity of the Machines, and the inhumanity of humans as well. In one scene from the riots, human men beat and tear at a Machine built in the image of a human woman, shredding her skin until her metal face is seen underneath. And in the end, it’s humans who make the Machines unlike us, except in their concession to war. It’s only when violent conflict becomes unavoidable that the Machines adopt the uncanny forms in which we’ll glimpse them in The Matrix. Arachnoid, sinister, pursuing economy of form instead of mimicry. The second ambassador the Machines send to the UN, who delivers the humans the document of surrender they will sign, has eleven red eyes, four spindly black arms. All pretense of friendly faces has been abandoned; we are no longer being catered to. And still, they turn our bodies into their replenishable battery farms only because humanity decided to scorch the skies.

But the technological singularity isn’t about humanity’s demise at the hands of evil machines. The whole point of the singularity is that we can’t imagine past it. Vinge calls it “a throwing-away of all the human rules, perhaps in the blink of an eye” and “a point where our old models must be discarded and a new reality rules.” Complexity theorist James Gardner describes “a kind of cultural tipping point…after which human history as we currently know it will be superseded by hypervelocity cultural evolution driven by transhuman computer intelligence.” And Dick stretches a step further: “Although some may consider this a bold argument, its biggest flaw is probably that it is not bold enough. It is a product of our current ideas of AI, which in themselves may be parochial. It is possible after a few million years, cultural evolution may result in something even beyond AI.” The world will be unimaginable, partly because it will develop at incomprehensible speed, but also just because it will be incomprehensible.

Excerpted from The Possibility of Life by Jaime Green, Copyright © 2023 by Jaime Green. Published by Hanover Square Press.
