FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT.

TED | What happens when our computers get smarter than we are? | Nick Bostrom | 27 APRIL 2015

TED | How civilization could destroy itself — and 4 ways we could prevent it | Nick Bostrom | 17 Jan 2020

Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

Bostrom categorises four capability control methods for the safe containment of a Superintelligent AI:

1. Boxing – restricting the Superintelligent AI's access to the outside world

2. Incentivising – using social or technical rewards to shape the Superintelligent AI's behaviour

3. Stunting – placing constraints on the cognitive abilities of the Superintelligent AI

4. Tripwires – detection of activity beyond expected behaviour automatically shuts down the Superintelligent AI system
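The tripwire idea above can be sketched in a few lines of code. This is a toy illustration only, not anything from Bostrom's book: all names here (Tripwire, TripwireTriggered, run_agent, ALLOWED_ACTIONS) are hypothetical, and a real tripwire would monitor far subtler signals than an action whitelist.

```python
# Toy sketch of a "tripwire": a monitor watches a system's actions and
# permanently shuts it down the moment any action falls outside the
# expected behavioural envelope.

ALLOWED_ACTIONS = {"read_sensor", "write_log", "idle"}


class TripwireTriggered(Exception):
    """Raised when the monitored system acts outside expected behaviour."""


class Tripwire:
    def __init__(self, allowed_actions):
        self.allowed_actions = allowed_actions
        self.shut_down = False

    def check(self, action):
        # Any unexpected action trips the wire and halts the system.
        if action not in self.allowed_actions:
            self.shut_down = True
            raise TripwireTriggered(f"unexpected action: {action!r}")
        return action


def run_agent(actions, tripwire):
    """Execute actions one by one; stop permanently once the wire is tripped."""
    executed = []
    for action in actions:
        if tripwire.shut_down:
            break
        executed.append(tripwire.check(action))
    return executed


tw = Tripwire(ALLOWED_ACTIONS)
try:
    run_agent(["read_sensor", "write_log", "open_network_socket"], tw)
except TripwireTriggered as err:
    print("system halted:", err)
print("shut down:", tw.shut_down)  # shut down: True
```

The key design point the sketch captures is that the shutdown is one-way: once `shut_down` is set, the agent loop refuses to execute anything further, mirroring the requirement that a tripped tripwire closes the system down rather than merely logging the event.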

Note: Since 2014, AI technology has developed faster than scientists expected. Current predictions place the arrival of superintelligent machines between 2025 and 2030 (if it has not already happened without our knowledge).

Scientific Consensus: Mathematically provably safe deployment of Superintelligent AI is a requirement for the future survival of the human race.