FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

OPINION | THE NEW YORK TIMES | March 27, 2023

A.I. Is Being Built by People Who Think It Might Destroy Us

By David Wallace-Wells, Opinion Writer

All of a sudden, it’s as if we’ve come face to face with a new species on our smartphone screens.

Over the past few months, A.I. chatbots have swarmed into the country’s social media feeds and, to some extent, its nightmares, with transcript after transcript triggering a collective, millenarian form of what the 19th-century critic John Ruskin memorably called the pathetic fallacy — the very human tendency to project onto nonhuman beings those features we see as the quintessential attributes of humanity. With ChatGPT and Bing Chat, we aren’t just projecting depth or pathos or inner life as we implore them to “narrate the invasion of Iraq in lyrics suitable for a Disney princess” or “explain to a 4-year-old why King Tut’s death mask was made for a woman”; we’re reading our own existential panic into their responses, seeing them less as robot pets than as so many Frankenstein’s monsters, even when they are simply following our commands.

How menacing are the chatbots? They are still routinely making mistakes so basic that it seems pointlessly mystical to refer to them as “hallucinations,” as machine learning engineers and A.I. theorists alike tend to. (Was the A.I. microdosing or just wrong when it suggested that LeBron James had a pretty good chance of winning the N.B.A. M.V.P. this year?) They are prone to misinformation, and biased in quite conventional if disturbing ways. Some are trained on databases that do not extend to the present day, so that any questions about recent events (such as the run on Silicon Valley Bank) are likely to generate useless or counterproductive answers — making today’s leading “use case” for these tools, as a form of online search, a bit hard to understand.

But A.I. is also exhibiting some plainly disorienting progress, not just on concrete tasks but on unnerving ones: a chatbot hiring a human TaskRabbit to solve a captcha, another writing its own Python code to enable its “escape.” These are not examples of robot autonomy so much as performances of ready-made anxieties — in each case, they were prompted by human observers to test guardrails — and yet they still disquiet, signs that something strange and disruptive is absolutely afoot.

The tech is moving so quickly that it may seem presumptuous to believe that we already know what to make of it all. But many of those who have spent the last decade neck-deep in machine learning believe they do, in fact, know, and that we need to be thinking in quite dire terms. It’s common to hear invocations of the A.I. revolution as an event as significant as the arrival of the internet — but it’s one thing to prepare for a cultural earthquake like the internet and another to be preparing for the equivalent of nuclear war. And it is especially remarkable, given the pervasive utopianism of the internet’s original architects, just how dystopian those ushering in its next phase seem to be about the very new world they believe they are spawning.

“Last time we had rivals in terms of intelligence they were cousins to our species, like Homo neanderthalensis, Homo erectus, Homo floresiensis, Homo denisova and more,” the neuroscientist Erik Hoel wrote in one much-passed-around meditation on the current state of play, with the subtitle “Microsoft’s new A.I. really does herald a global threat.” Hoel went on: “Let’s be real: After a bit of inbreeding we likely murdered the lot.”

More outspoken cries of worry have been echoing across the internet now for months, including from Eliezer Yudkowsky, the godfather of A.I. existentialism, who lately has been taking whatever you’d call the opposite of a victory lap to despair over the progress already made by A.I. and the failure to erect real barriers to its takeoff. We may be on the cusp of significant breakthroughs in A.I. superintelligence, Yudkowsky told one pair of interviewers, but the chances we will get to observe those breakthroughs playing out are slim, “because we’ll all be dead.” His advice, given how implausible he believes a good outcome with A.I. appears to be, is to “go down fighting with dignity.”

Even Sam Altman — the mild-mannered, somewhat normie chief executive of OpenAI, the company behind the most impressive new chatbots — has publicly promised “to operate as though these risks are existential,” and suggested that Yudkowsky might well deserve the Nobel Peace Prize for raising the alarm about the risks. He also recently wrote that “A.I. is going to be the greatest force for economic empowerment and a lot of people getting rich we have ever seen,” and joked in 2015 that “A.I. will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” A year later, in a New Yorker profile, Altman was less ironic about the bleakness of his worldview. “I prep for survival,” he acknowledged — meaning eventualities like a laboratory-designed superbug, nuclear war and an A.I. that attacks us. “My problem is that when my friends get drunk they talk about the ways the world will end,” he said. “I try not to think about it too much, but I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force and a big patch of land in Big Sur I can fly to.”

This may not be a universal view among those working on artificial intelligence, but it also is not an uncommon one. In one much cited 2022 survey, A.I. experts were asked: “What probability do you put on human inability to control future advanced A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median estimate was 10 percent — a one in 10 chance. Half the responses rated the chances higher. In another poll, nearly one-third of those actively working on machine learning said they believed that artificial intelligence would make the world worse. My colleague Ezra Klein recently described these results as mystifying: Why, then, would you choose to work on it?

There are many possible answers to this question, including that ignoring growing risks in any field is a pretty good way to make them worse. Another is that the respondents don’t entirely believe their answer, and are instead articulating how significant they believe A.I. to be by resorting to theological and mythological reference points. But another partial explanation could be that, to some, at least, the apocalyptic possibilities look less like downsides than like a kind of enticement — that those answering survey questions in self-aggrandizing ways may be feeling, beyond the tug of the pathetic fallacy, some mix of existential vanity and an almost wishful form of end-of-days prophecy.

In recent years, this critique of catastrophist thinking has been regularly and conspicuously leveled by complacent centrists and patronizing graybeards against the alarmist fringe of the climate movement — yes, warming was happening, they acknowledged, and yes, it represented a challenge to the world’s collective status quo, but still, all of this hyperbolic talk was, let’s be honest, a bit much. More recently, commentators fretting over the mental health crisis in U.S. teenagers have linked the spikes in despair to catastrophist thinking on the progressive left more broadly.

But look elsewhere on the political spectrum and you can find a similar fatalism, often incubated online, that allows catastrophists to extrapolate and even braid their various fears — about imminent ecosystem collapse and mass extinction in this corner of the internet, or low birthrates and “surplus men” in that one; about worldwide bank runs and the crisis of fiat currency here, and hyperinflation and a global debt crisis there; about mass permanent disability from Covid infection over here and permanent lockdowns and rampant cardiac disaster from vaccination over there.

Some of these fears are better grounded than others — your mileage may vary, as they used to say in the internet’s more sociable age. But it’s clear that catastrophic thinking isn’t some isolated or idiosyncratic phenomenon you can cordon off or excise from the culture. Soft millenarianism has become so universal a grammar that even the high priests of technological progress, the self-appointed architects of our brave new world, can’t manage to escape it.
