The intelligence explosion: Nick Bostrom on the future of AI
The birth of superintelligence
I think we have this notion of what’s smart and what’s dumb, whereas there is actually a huge amount of space above us, between our level of intelligence and God’s. And once you go a little bit beyond the human level, you get this feedback loop, where the brains doing the AI research will become AIs themselves. Therefore I think there is a significant chance that we’ll have an intelligence explosion, so that within a short period of time we go from something that was only moderately affecting the world to something that completely transforms the world. All the things that we could imagine human intelligence being useful for, which is pretty much everything, artificial intelligence could be useful for as well if it just became more advanced. Whether it’s diseases or pollution or poverty, we would have vastly better tools for dealing with them; if you had superintelligence, you could help develop better clean energy technologies or medicines. So it does look to me like all the plausible paths to a really great future involve the development of machine superintelligence at some point.
Existential risks
There are, I think, existential risks connected with the transition to the machine intelligence era, the most obvious being the possibility of an unaligned superintelligence that then overrides the earth and human civilization with its own value structures. Another big class of failures would be if this technology were used for destructive purposes.