Artificial intelligence, Armageddon and the Antichrist

Peter Thiel speaks during a conference in Miami, April 7, 2022.
Summary

As technology advances, Peter Thiel asks the same big questions people were asking 2,000 years ago.

An anomaly: In the Factiva database of published news sources, 16,785 articles since 1980 have drawn a connection between artificial intelligence and the apocalypse or Armageddon. Until the investor Peter Thiel began giving talks last month around San Francisco, the Antichrist hardly figured.

Now that has changed, though I doubt Mr. Thiel would get past the door of the copyright office if he tried to claim the linkage as his own. For one thing, humanity seems to have had a genetic predisposition to fear the death of its species, distinct from our fear of our individual deaths, for as long as humanity has existed. Our literature and other traditions tell us we have longed for, but also feared, a secular savior powerful enough to keep these fears at bay.

Causing spasms in the media, Mr. Thiel’s lectures reportedly included passing reference to Greta Thunberg. Who is Ms. Thunberg except she who would save us from climate apocalypse?

Elon Musk’s name apparently came up during a Q&A. Who is Mr. Musk except he who would rescue us from planetary doom?

If doubts about one-worldism surface in Mr. Thiel’s talks, consider words sometimes attributed to a U.S. presidential aide in 1947, which could have been spoken by any highly reputed AI doomer last week: “We were not arguing for a world government; we were arguing for a world that could survive." This came as the Truman administration had just been shouted down by domestic and foreign opposition to a short-lived attempt to hand control of atomic weapons to the United Nations.

Mr. Thiel aptly summons an age-old debate. To simplify, he sees more apocalyptic risk to humanity from those who would stop AI than from those who promote it. In a long essay in the religion journal First Things, he and colleague Sam Wolfe work through literary treatments of the false secular savior from Francis Bacon (1626) to a recent Japanese manga epic. One they might have cited but didn't: 1907's "Lord of the World," by an Anglican-turned-Catholic priest, Robert Hugh Benson.

This early example of dystopian science fiction concerns a senator from Vermont who appears mysteriously on the world stage in the early 21st century, just as a final and apocalyptic war looms between Europe and an Asian empire. To assure universal peace and brotherhood, he orders up a compassionate euthanasia of clingers who refuse to give up their old beliefs and threaten the new religion of man worshiping man.

In overtly religious works, when the Antichrist vouchsafes temporal comfort, it usually comes at the expense of man’s eternal salvation. Now the Antichrist may be a stalking horse for a different question: what it means, in some genetically stable sense, to be human.

Here I have to confess to being dismissive of the most common version of AI doom, in which superintelligent machines do away with humanity. Whatever apocalypse ends up getting us, it will likely be one we didn’t recognize and prepare for, not one we did. In the meantime, if we don’t fully understand what’s going on inside today’s large language models, that’s all the more reason to observe their behavior closely.

The other version of AI doom is more interesting. In one sense, after all, humans are already doomed. We already knew apocalypse lay in our future. The average mammalian species, not to mention the average primate species, lasts about one million to three million years, and most didn’t need an Armageddon-scale trauma to usher them out of the fossil record.

The odds, as of this morning, that humans will be around in 100, 1,000 or 10,000 years aren’t bulletproof. Whatever the risk of AI, it’s quite possible, in present value terms, its risk is dwarfed by the risk to humanity’s longevity of not developing AI. In the next 50,000 years or so, after all, human civilization is going to have to survive an ice age.

The interesting version of the AI apocalypse is one in which humans do away with themselves—decide to become machines. This is the version that led to a famously snippy poolside moment between Mr. Musk and Google founder Larry Page (Mr. Musk opposed becoming machines).

A technologist, libertarian and self-professed Christian, Mr. Thiel makes easy bait for commentators of a certain algorithmic ilk. He's the overempowered tech billionaire using Bible nuttery to advance his deregulatory agenda.

His argument, though, is longstanding: The biggest threat to human longevity would be stopping technology. Stagnation is death. And it may be nearer than we think.

Of course, 2,000 years ago people used different language and symbols to get at the core truths of humanity’s jeopardy-filled existence. I’m not sure there’s anything terribly strange about trying to learn from them.
