
Actual p(doom) predictions by AI experts.

Whether humanity will survive and co-exist with AI is uncertain.

Uncertain Human Survival is Unacceptable.

p(doom) is the probability of very bad outcomes (e.g. human extinction) as a result of AI.

This most often refers to the likelihood of AI taking over from humanity, but other scenarios can also constitute “doom”: for example, a large portion of the population dying from a novel biological weapon created with AI, societal collapse caused by a large-scale cyber attack, or AI triggering a nuclear war. Note that not everyone uses the same definition when stating their p(doom). Most notably, the time horizon is often left unspecified, which makes the numbers hard to compare, as the sketch below illustrates.
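To see why an unspecified time horizon matters, here is a minimal sketch (not from the source, and assuming a constant annual risk, which real estimates need not follow): the same per-year probability compounds into very different cumulative figures depending on how many years are counted.

```python
# Illustrative only: how a constant annual risk compounds over a time horizon.
# Assumes an independent, identical risk each year -- a simplification, not a
# claim about how any of the experts listed below derived their numbers.
p_annual = 0.01  # hypothetical 1% chance of doom per year

for years in (10, 20, 50, 100):
    # Probability that the event happens at least once within `years` years
    p_cumulative = 1 - (1 - p_annual) ** years
    print(f"{p_annual:.0%}/year over {years:3d} years -> {p_cumulative:.0%} cumulative")
```

Under this toy assumption, a 1% annual risk works out to roughly 18% over 20 years and roughly 63% over 100 years, so two people quoting those two cumulative figures could hold the same underlying belief about the yearly risk.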

  • <0.01% Yann LeCun, one of three godfathers of AI, Chief AI Scientist at Meta (less likely than an asteroid)
  • 10% Vitalik Buterin, Ethereum founder
  • 10% Geoff Hinton, one of three godfathers of AI (wipe out humanity in the next 20 years)
  • 9-19.4% Machine learning researchers (2023 survey; the range depends on question design; median values: 5-10%)
  • 15% Lina Khan, Chair of the FTC
  • 10-20% Elon Musk, CEO of Tesla, SpaceX, X
  • 10-25% Dario Amodei, CEO of Anthropic
  • 20% Yoshua Bengio, one of three godfathers of AI
  • 5-50% Emmett Shear, Co-founder of Twitch, former interim CEO of OpenAI
  • 30% AI Safety Researchers (Mean from 44 AI safety researchers in 2021)
  • 33% Scott Alexander, Popular Internet blogger at Astral Codex Ten
  • 35% Eli Lifland, Top competitive forecaster
  • 40% AI engineers (estimated mean value; survey methodology may be flawed)
  • 40% Joep Meindertsma, Founder of PauseAI (The remaining 60% consists largely of “we can pause”.)
  • 46% Paul Christiano, former alignment lead at OpenAI, founder of ARC
  • 50% Holden Karnofsky, Executive Director of Open Philanthropy
  • 10-90% Jan Leike, Former alignment lead at OpenAI
  • 60% Zvi Mowshowitz, AI researcher
  • 70% Daniel Kokotajlo, Forecaster & former OpenAI researcher
  • >80% Dan Hendrycks, Head of Center for AI Safety
  • >99% Eliezer Yudkowsky, Founder of MIRI
  • 99.999999% Roman Yampolskiy, AI safety scientist

Do something about it

  • However high your p(doom) is, you probably agree that we should not allow AI companies to gamble with our future.
  • Join PauseAI to prevent them from doing so.

Learn more at PauseAI
