FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Bestseller: Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

The existential risk: Containment and control of Superintelligent AI is an extreme engineering challenge. The future survival of humanity is at stake.

Control methods for safe containment of Superintelligent AI include:

1. Boxing – restricting the Superintelligent AI's access to the outside world

2. Incentivising – using social or technical rewards for Superintelligent AI

3. Stunting – constraints on the cognitive abilities of the Superintelligent AI

4. Tripwires – detection of activity beyond expected behaviour that automatically shuts down the Superintelligent AI system (see the sketch after this list)
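To make the tripwire idea concrete, here is a minimal Python sketch. It is not from the book; the whitelist EXPECTED_ACTIONS, the function monitored_run, and the action names are all illustrative assumptions. A monitor checks every action the system takes against an expected-behaviour whitelist and halts the system on the first deviation:

# Toy tripwire sketch: all names here are illustrative assumptions,
# not a mechanism described in the book.
EXPECTED_ACTIONS = {"read_input", "compute", "write_output"}  # assumed whitelist

class TripwireTriggered(Exception):
    """Raised when the monitored system acts outside its expected behaviour."""

def execute(action):
    # Stand-in for actually performing the system's action.
    print(f"executing {action}")

def monitored_run(system_steps):
    """Run the system step by step, halting on the first unexpected action."""
    for action in system_steps:
        if action not in EXPECTED_ACTIONS:
            # Tripwire fires: stop the system before it can continue.
            raise TripwireTriggered(f"unexpected action: {action!r}")
        execute(action)

if __name__ == "__main__":
    try:
        monitored_run(["read_input", "compute", "open_network_socket"])
    except TripwireTriggered as err:
        print(f"system shut down: {err}")

In any real deployment the monitor would have to sit outside the system it watches; a tripwire the AI can inspect or modify offers little protection.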

Important Note: Since 2014, AI technology has developed far faster than scientists expected. Current predictions place the arrival of superintelligent machines between 2025 and 2030 (if it has not happened already and we don’t know about it).

Scientific Consensus: Mathematically provable safe deployment of Superintelligent AI is a requirement for the future survival of the human race.

Learn More:

Nick Bostrom Interview by Tim Adams | The Guardian | 12 June 2016

  • On humanity and artificial intelligence: “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.”
  • According to Oxford philosopher Nick Bostrom, superintelligent machines are a greater threat to humanity than climate change.


Figure 2: Qualitative risk categories. The scope of a risk can be personal (affecting only one person), local (affecting some geographical region or a distinct group), global (affecting the entire human population or a large part thereof), trans-generational (affecting humanity for numerous generations), or pan-generational (affecting humanity over all, or almost all, future generations). The severity of a risk can be classified as imperceptible (barely noticeable), endurable (causing significant harm but not completely ruining quality of life), or crushing (causing death or a permanent and drastic reduction of quality of life).

The area marked “X” in figure 2 represents existential risks. This is the category of risks that have (at least) crushing severity and (at least) pan-generational scope. As noted, an existential risk is one that threatens to cause the extinction of Earth-originating intelligent life or the permanent and drastic failure of that life to realize its potential for desirable development. In other words, an existential risk jeopardizes the entire future of humankind.
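As a quick illustration of the two-axis grid just described, the “X” region can be expressed as a predicate over scope and severity. This is a minimal sketch reconstructed from the caption of figure 2; the enum names are our own, not Bostrom's notation:

# Sketch of the figure 2 classification, reconstructed from the caption.
# Enum names and ordering are assumptions for illustration only.
from enum import IntEnum

class Scope(IntEnum):
    PERSONAL = 1
    LOCAL = 2
    GLOBAL = 3
    TRANS_GENERATIONAL = 4
    PAN_GENERATIONAL = 5

class Severity(IntEnum):
    IMPERCEPTIBLE = 1
    ENDURABLE = 2
    CRUSHING = 3

def is_existential(scope: Scope, severity: Severity) -> bool:
    """The 'X' region: (at least) pan-generational scope AND (at least) crushing severity."""
    return scope >= Scope.PAN_GENERATIONAL and severity >= Severity.CRUSHING

# The pan-generational, crushing cell is existential; a merely global
# catastrophe of crushing severity is not, by this definition.
assert is_existential(Scope.PAN_GENERATIONAL, Severity.CRUSHING)
assert not is_existential(Scope.GLOBAL, Severity.CRUSHING)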
