FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

28:50

UnHerd: Do you think we’re ready for the intelligence explosion? Are we properly prepared? How can we mitigate AI risks?

Bostrom: Underprepared, no, no, no. And I don’t think we’ll ever be. I think it’s a little bit like we’re on some plane and we realize there is no pilot, or the pilot has had a heart attack and died, and so now we, the passengers, have got to try to land this thing. And it’s harder because we don’t have a ground control giving us instructions. We see a big instrument panel, and there are a few people in the cockpit looking at all of this. You could look at the fuel gauge: we’ve got some time left in the air, a limited period of time before this arrives, and we’ve got to figure out how to bring this bird in for a safe landing.

UnHerd: How do we do that with such incredible levels of dispute and ideological schism across the world? It feels like, without a kind of central body dealing with the ethics of this, we’re going to very quickly get into a situation where different polar axes of the world are developing this stuff in a kind of space-race-style competition, but with their own ideological messaging and values built into it, which feels like such a clearly existential threat. What would be the first step in the kind of Nick Bostrom global program to mitigate the risk…