A very good read from a respected source!

Editor’s P(doom) is 1 to 99 percent.

1 percent if all AGI systems are mathematically provably contained and controlled before the intelligence explosion occurs… if not…

Geoffrey Hinton: “We’re toast”

THE NEW YORK TIMES. Silicon Valley Confronts a Grim New A.I. Metric.

Where do you fall on the doom scale — is artificial intelligence a threat to humankind? And if so, how high is the risk?

By Kevin Roose

December 6, 2023

Dario Amodei, the chief executive of the A.I. company Anthropic, puts his between 10 and 25 percent. Lina Khan, the chair of the Federal Trade Commission, recently told me she’s at 15 percent. And Emmett Shear, who served as OpenAI’s interim chief executive for about five minutes last month, has said he hovers somewhere between 5 and 50 percent.

I’m talking, of course, about p(doom), the morbid new statistic that is sweeping Silicon Valley.

P(doom) — which is math-speak for “probability of doom” — is the way some artificial intelligence researchers talk about how likely they believe it is that A.I. will kill us all, or create some other cataclysm that threatens human survival. A high p(doom) means you think an A.I. apocalypse is likely, while a low one means you think we’ll probably tough it out.

Once an inside joke among A.I. nerds on online message boards, p(doom) has gone mainstream in recent months, as the A.I. boom sparked by ChatGPT last year has spawned widespread fears about how quickly A.I. is improving.

It’s become a common icebreaker among techies in San Francisco — and an inescapable part of A.I. culture. I’ve been to two tech events this year where a stranger has asked for my p(doom) as casually as if they were asking for directions to the bathroom. “It comes up in almost every dinner conversation,” Aaron Levie, the chief executive of the cloud data platform Box, told me.

P(doom) even played a bit part in last month’s OpenAI drama. After Mr. Shear was appointed as OpenAI’s interim leader, employees began circulating a clip from a recent podcast in which the executive had stated that his p(doom) could be as high as 50 percent. Some employees worried he was a “doomer,” and that he might seek to slow down or limit their work because it was too risky, one person who witnessed the discussions said. (Ultimately, Sam Altman, OpenAI’s ousted chief executive, returned, so it didn’t matter.)

Sci-fi fans have theorized about robot takeovers for years, of course. But after the release of ChatGPT last year, the threat started to seem more real. After all, if A.I. models were winning art prizes and passing the bar exam, how far off could disaster be?

A.I. insiders were also sounding the alarm. Geoffrey Hinton, the prominent A.I. researcher who quit Google this year and began warning about A.I. risks, recently estimated that if A.I. was not strongly regulated, there was a 10 percent chance it would lead to human extinction in the next 30 years. Yoshua Bengio, who along with Mr. Hinton is considered one of the “godfathers of deep learning,” told an interviewer that he thought an A.I. catastrophe was roughly 20 percent likely.

Nobody knows whether A.I. is 10 percent or 20 percent or 85.2 percent likely to kill us, of course. And there are lots of obvious follow-up questions, such as: Would it still count as “doom” if only 50 percent of humans died as a result of A.I.? What if nobody died, but we all ended up jobless and miserable? And how would A.I. take over the world, anyway?

But the point of p(doom) isn’t precision. It’s to roughly assess where someone stands on the utopia-to-dystopia spectrum, and to convey, in vaguely empirical terms, that you’ve thought seriously about A.I. and its potential impact.

The term p(doom) appears to have originated more than a decade ago on LessWrong, an online message board devoted to the Rationalist philosophical movement.

LessWrong’s founder, a self-taught A.I. researcher named Eliezer Yudkowsky, was early to the idea that a rogue A.I. could take over, and wrote about various A.I. disaster scenarios he envisioned. (At the time, A.I. could barely set a kitchen timer, so the risk seemed pretty remote.)

Mr. Yudkowsky, who has since become one of the A.I. world’s best-known doomers, told me that he didn’t originate the term p(doom), although he helped to popularize it. (He also said that his p(doom), if current A.I. trends continue, is “yes.”) The term was later adopted by members of the Effective Altruism movement, who use logical reasoning to arrive at ideas about moral goodness.

My best guess is that the term was coined by Tim Tyler, a Boston-based programmer who used it on LessWrong starting in 2009. In an email exchange, Mr. Tyler said he had been using the term to “refer to the probability of doom without being too specific about the time scale or the definition of ‘doom.’”

For some, talking about your p(doom) is just idle chitchat. But it has also become an important social signal in the debate raging in Silicon Valley between people who think A.I. is moving too fast, and people who think it should move even faster.

Mr. Levie, the chief executive of Box, belongs to the more optimistic camp. He says his p(doom) is very low — not zero, but “about as low as it could be” — and he’s betting that we’ll mitigate the big risks from A.I. and avoid the worst possible outcomes. His worry is not that A.I. will kill us all, but that regulators and lawmakers will seize on scary predictions of doom as a rationale for cracking down on a promising young sector.

“The overreach is probably if it enters critical policy decisions way too early in the development of A.I.,” he said.

Another problem with p(doom) is that it’s not clear what counts as good or bad odds, when the stakes are existential. Are you really an A.I. optimist if you predict, for example, that there is a 15 percent chance that A.I. will kill every human on Earth? (Put another way: if you thought that there was “only” a 15 percent chance that the next plane you boarded would crash and kill everyone on board, would you get on the plane?)
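The plane analogy can be made concrete with a line of arithmetic. A minimal sketch, assuming (hypothetically) that each flight carries an independent 15 percent chance of crashing:

```python
# Illustrative only: the 15 percent figure and the repeated flights
# are hypothetical, taken from the article's plane analogy.
p_crash = 0.15
p_survive_one = 1 - p_crash           # 85 percent chance any single flight lands safely
p_survive_ten = p_survive_one ** 10   # chance of surviving ten such flights in a row

print(p_survive_one)             # 0.85
print(round(p_survive_ten, 3))   # roughly 0.197
```

Under those assumptions, a frequent flyer survives ten trips only about one time in five, which is the intuition behind calling a 15 percent p(doom) anything but optimistic.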

Ajeya Cotra, a senior researcher at Open Philanthropy who studies A.I. risk, has spent a lot of time thinking about p(doom). She thinks it’s potentially useful as a piece of shorthand — her p(doom) is between 20 and 30 percent, for the record — but she also sees its limits. For starters, p(doom) doesn’t take into account that the probability of harm associated with A.I. depends in large part on how we choose to govern it.

“I know some people who have a p(doom) of more than 90 percent, and it’s so high partly because they think companies and governments won’t bother with good safety practices and policy measures,” she told me. “I know others who have a p(doom) of less than 5 percent, and it’s so low partly because they expect that scientists and policymakers will work hard to prevent catastrophic harm before it occurs.”

In other words, you could think of p(doom) as a kind of Rorschach test — a statistic that is supposed to be about A.I., but that ultimately reveals more about how we feel about humans, and our ability to make use of powerful new technology while keeping its risks in check.

So, what’s yours?

Kevin Roose is a Times technology columnist and a host of the podcast “Hard Fork.”