
Most aliens may be artificial intelligence, not life as we know it

Over the years that have passed since Fermi asked his question, dozens of potential solutions to the “paradox” have been suggested

The Fermi paradox takes its name from a 1950 visit by physicist Enrico Fermi to the Los Alamos National Laboratory in New Mexico. One day, as Fermi was walking to lunch with physicist colleagues Emil Konopinski, Edward Teller and Herbert York, one of them mentioned a New Yorker cartoon depicting aliens stealing public trash cans from the streets of New York. While dining later, Fermi suddenly returned to the topic of aliens by asking: “Where is everybody?”

While not everyone agrees on precisely what Fermi was questioning, the “paradox” has generally been interpreted as Fermi expressing his surprise at the absence of any signs of other intelligent civilizations in the Milky Way. Because a simple estimate showed that an advanced civilization could have reached every corner of the galaxy within a time much shorter than the galaxy’s age, the question arose: Why don’t we see them?
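To see why that estimate is so striking, here is a minimal back-of-the-envelope sketch in Python (our own illustration with assumed round numbers, not Fermi’s actual calculation): even probes travelling at 1 percent of light speed, pausing a thousand years at each settled system, would sweep across the galaxy in a tiny fraction of its age.

```python
# Back-of-the-envelope estimate of how long an expanding civilization would need
# to cross the Milky Way (assumed round numbers, purely for illustration).

GALAXY_DIAMETER_LY = 100_000      # rough diameter of the Milky Way, in light-years
GALAXY_AGE_YR = 13e9              # approximate age of the galaxy, in years

probe_speed_fraction_c = 0.01     # probes at 1 percent of light speed (assumed)
hop_distance_ly = 10              # typical distance to the next settled star (assumed)
pause_per_hop_yr = 1_000          # time spent at each stop before launching onward (assumed)

hop_travel_yr = hop_distance_ly / probe_speed_fraction_c    # 1,000 years per hop
hops_needed = GALAXY_DIAMETER_LY / hop_distance_ly          # 10,000 hops across the disk
crossing_time_yr = hops_needed * (hop_travel_yr + pause_per_hop_yr)

print(f"Crossing time: {crossing_time_yr:.1e} years")
print(f"Fraction of the galaxy's age: {crossing_time_yr / GALAXY_AGE_YR:.2%}")
```

With these deliberately sluggish assumptions the crossing time comes out to roughly 20 million years, well under 1 percent of the galaxy’s age, which is the nub of Fermi’s question.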

Over the years that have passed since Fermi asked his question, dozens of potential solutions to the “paradox” have been suggested.

In particular, a few scientists have argued that the absence of alien signals is the result of a “great filter”—an evolutionary bottleneck impenetrable to most life. If true, this great filter is either in our past or in our future. If it’s behind us, then it may have occurred when life spontaneously emerged, for example, or when single-cell organisms transitioned to multicellular ones. Either way, it implies that complex life is rare, and we may even be alone in the Milky Way. If, on the other hand, the great filter is ahead of us, then most advanced civilizations may eventually hit a wall and cease to exist. If so, that too may be humanity’s fate.

Instead, we would like to propose a new way of thinking about the Fermi paradox. It stands to reason that there are chemical and metabolic limits to the size and processing power of organic brains. In fact, we may be close to those limits already. But no such limits constrain electronic computers (still less, perhaps, quantum computers). So, by any definition of “thinking,” the capacity and intensity of organic, human-type brains will eventually be utterly swamped by the cerebrations of artificial intelligence (AI). We may be near the end of Darwinian evolution, whereas the evolution of technologically intelligent beings is only in its infancy.

Few doubt that machines will gradually surpass or enhance more and more of our distinctively human capabilities. The only question is when. Computer scientist Ray Kurzweil and a few other futurists think that AI dominance will arrive in just a few decades. Others envisage centuries. Either way, however, the timescales involved in technological advances span but an instant compared to the evolutionary timescales that have produced humanity. What’s more, the technological timescales are less than a millionth of the vast expanses of cosmic time lying ahead. So the products of future technological evolution could surpass humans intellectually by as much as we surpass a comb jelly.
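To put rough numbers on that “less than a millionth” claim, here is a one-line comparison (illustrative round figures of our own, not values from the article):

```python
# Rough timescale comparison (assumed round numbers, purely for illustration).

technological_history_yr = 1e4   # ~10,000 years of human technological civilization
life_on_earth_yr = 4e9           # ~4 billion years of Darwinian evolution on Earth
cosmic_future_yr = 1e13          # ~10 trillion years of star formation still ahead (assumed)

print(f"Technology vs. past evolution: {technological_history_yr / life_on_earth_yr:.1e}")
print(f"Technology vs. cosmic future:  {technological_history_yr / cosmic_future_yr:.1e}")
```

Even if one quibbles with the exact figures, the second ratio comes out near one billionth, comfortably below the “one millionth” bound quoted above.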

But what about consciousness?

Philosophers and computer scientists debate whether consciousness is a special property associated only with the kind of wet, organic brains possessed by humans, apes and dogs. In other words, might electronic intelligences, even if their capabilities seem superhuman, still lack self-awareness or an inner life? Or perhaps consciousness emerges in any sufficiently complex network?

Some say that this question is irrelevant and semantic—like asking whether submarines swim. We don’t think so. The answer crucially affects how we react to the far-future scenario we’ve sketched: If the machines are what philosophers refer to as “zombies,” we would not accord their experiences the same value as ours, and the posthuman future would seem rather bleak. If, on the other hand, they are conscious, we should surely welcome the prospect of their future hegemony.

Suppose now that there are indeed many other planets on which life began, and that some or most of them followed an evolutionary track somewhat similar to Earth’s. Even then, however, it’s highly unlikely that the key stages in that evolution would be synchronized with those on Earth. If the emergence of intelligence and technology on an exoplanet lags significantly behind what has happened on Earth (either because the planet is younger, or because some “filters” have taken longer to negotiate), then that planet would plainly reveal no evidence of an intelligent species. On the other hand, around a star older than the sun, life could have had a significant head start of a billion years or more.

Organic creatures need a planetary surface environment for the chemical reactions leading to the origin of life to take place, but if posthumans make the transition to fully electronic intelligences, they won’t need liquid water or an atmosphere. They may even prefer zero gravity, especially for building massive artifacts. So it may be in deep space, not on a planetary surface, that nonbiological “brains” may develop powers that humans can’t even imagine.

The history of human technological civilization may span only a few millennia (at most), and it may be only one or two more centuries before humans are overtaken or transcended by inorganic intelligence, which might then persist, continuing to evolve on a faster-than-Darwinian timescale, for billions of years. That is, organic human-level intelligence may be, generically, just a brief phase before the machines take over. If alien intelligence has evolved similarly, we’d be most unlikely to catch it in the brief sliver of time when it was still embodied in organic form. In particular, were we to detect ET, it would far more likely be electronic: the dominant creatures wouldn’t be flesh and blood, and maybe wouldn’t even be located on planets, but on stations in deep space.

The question then becomes whether the fact that electronic civilizations can live for billions of years seriously exacerbates the Fermi paradox. The answer is: not really. While most of us who are puzzled by the Fermi paradox and the absence of alien signs imagine other civilizations as expansionist and aggressive, this is not necessarily the case. The key point is that whereas Darwinian natural selection has, in some sense at least, put a premium on survival of the fittest, posthuman evolution, which will not involve natural selection, need not be aggressive or expansionist at all. These electronic progeny of flesh-and-blood civilizations could last for a billion years, perhaps leading quiet, contemplative lives.

The focus of the search for extraterrestrial intelligence (SETI) so far has been on radio or optical signals, but we should be alert also to evidence for non-natural construction projects, such as a “Dyson sphere,” built to harvest a large fraction of stellar power, and even to the possibility of alien artifacts lurking within our solar system.
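For a sense of the scale involved, here is a quick calculation with standard solar values (the capture fraction and the figure for present-day human power use are our own assumptions):

```python
# Rough power budget of a Dyson sphere around a sun-like star (illustrative sketch).

SOLAR_LUMINOSITY_W = 3.8e26     # total radiative output of the sun, in watts
capture_fraction = 0.5          # fraction of starlight the structure intercepts (assumed)
human_power_use_W = 2e13        # present-day global human power consumption, ~20 TW (assumed)

captured_W = SOLAR_LUMINOSITY_W * capture_fraction
print(f"Power captured: {captured_W:.1e} W")
print(f"Multiple of current human usage: {captured_W / human_power_use_W:.1e}")
```

A structure intercepting even half the sun’s output would command roughly ten trillion times humanity’s present power budget, which gives some idea of why such construction projects might leave detectable traces.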

If SETI were to succeed, we think it unlikely that the signal it observes would be a simple, decodable message. It would more likely be a byproduct (or maybe even an accident or malfunction) of some supercomplex machine far beyond our comprehension. Even if messages were transmitted, we might not recognize them as artificial, because we may not know how to decode them. A veteran radio engineer familiar only with amplitude modulation would have a hard time decoding modern wireless communication. Indeed, compression techniques today aim to make signals as close to noise as possible.
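That last point can be illustrated with a small experiment (a sketch of our own using Python’s standard-library zlib, not anything from the article): measure the byte-level entropy of a redundant message, the same message after compression, and genuinely random bytes.

```python
import collections
import math
import os
import zlib

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte (maximum is 8)."""
    counts = collections.Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# A highly redundant "message": structured and easy to recognize as artificial.
message = ("the quick brown fox jumps over the lazy dog " * 500).encode()

compressed = zlib.compress(message, 9)   # same content, compressed as tightly as zlib allows
noise = os.urandom(len(compressed))      # genuinely random bytes of the same length

print(f"plain text : {byte_entropy(message):.2f} bits/byte")
print(f"compressed : {byte_entropy(compressed):.2f} bits/byte")
print(f"random     : {byte_entropy(noise):.2f} bits/byte")
```

The compressed stream ends up statistically close to the random one, so an observer looking only at a signal’s statistics could easily mistake a highly compressed transmission for noise.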

So to conclude: conjectures about advanced or intelligent life are on far shakier ground than those about simple life. We would argue that this suggests three things about the entities that SETI searches could reveal:

They will not be organic or biological.

They will not remain on the surface of the planet where their biological precursors lived.

We will not be able to fathom their motives or intentions.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

