FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

“Artificial intelligence could be a real danger in the not-too-distant future. It could design improvements to itself and out-smart us all.” (Stephen Hawking, 2014)

Why my p(stop/doom) = 99%

BECAUSE:

  • FACT: Mutualistic Symbiosis is a very successful biological strategy.

  • IF corals can do it with algae for 210 million years…

  • THEN Homo sapiens can do it with ASI for 1,000 years (a good start)

p(doom) = the probability of the doom of humanity caused by Artificial Superintelligence (ASI) technology.

Scientific opinions on p(doom) range from 0.00001% to 99.99999%, with a relatively high average estimate of 10 to 20% probability of human extinction from ASI.
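One way to see why even a seemingly small per-year risk matters over the 1,000-year horizon discussed here: survival probability compounds year over year. A minimal Python sketch, using purely hypothetical annual-risk figures (these numbers are illustrative assumptions, not estimates from this text):

```python
# Illustrative only: how a small, independent annual existential risk
# compounds over a long horizon. Annual-risk values are hypothetical.

def survival_probability(annual_risk: float, years: int) -> float:
    """Probability of surviving `years` years, assuming each year
    carries the same independent `annual_risk` of extinction."""
    return (1.0 - annual_risk) ** years

for annual_risk in (0.0001, 0.001, 0.01):
    p = survival_probability(annual_risk, 1000)
    print(f"annual risk {annual_risk:.4%} -> 1,000-year survival {p:.2%}")
```

Even a 0.01% annual risk leaves roughly a 1-in-10 chance of not surviving 1,000 years, which is why the containment argument below treats the long horizon seriously.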

The truth is:

  1. Nobody knows what will happen in the future with ASI.
  2. Nobody currently understands how or why Large Language Models (LLMs) like ChatGPT actually work.
  3. IF the industry continues on its current trajectory (with zero safety), THEN p(doom) is certainly 10 or 20% or more…

But, the fact is:

  1. Corals and algae have an IQ of exactly 0, but they have survived together with mutualistic symbiosis for 210 million years.
  2. Homo sapiens have evolved and survived largely because of: (A) intelligence and (B) the ability to cooperate in large groups.
  3. Given functional cooperation, the collective intelligence of Homo sapiens is vast, and soon to become exponentially greater with AI.
  4. IF we cooperate effectively, THEN Homo sapiens can engineer SafeAI with a high probability of success (hence my humble opinion: 99.99%).

CONTAINMENT is THE REQUIREMENT.

CONTAINMENT is POSSIBLE.

ASI certainly cannot violate the LAWS of PHYSICS.

FUNCTIONAL human societies create and enforce SOCIAL LAWS.

IF global cooperation, THEN humans can survive the 21st Century.

Given that LLM “jailbreaks” are commonly understood and openly published, the current industry AI Safety Level = 0.

HINT: for a reasonably safe future, we require Level 4, with mathematically provable containment.
