FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

“Demis, what’s your P(Doom)?”

“definitely non-zero”

Full Quote: “Yeah, I know that people are fixated with that. Do you know my honest answer to these things? First of all, I actually find a lot of the debate in the social media sphere a little bit ridiculous in this sense. You know, you can find people on both sides of the argument, very eminent people. Just take Hinton versus LeCun, right? I mean, both are Turing Award winners. I know them both very well. Yoshua Bengio, you know, these are some of the top people who originally were in the field. And, you know, the fact that they can be completely in opposite camps, to me, suggests that actually, we don’t know, right? This technology is so transformative; it’s unknown. So I don’t think anyone can precisely put a probability on it. What I do know is that risk is non-zero, right? It’s definitely worth debating, and it’s worth researching really carefully. Because even if that probability turns out to be very small, right, let’s say on the optimist end of the scale, then we still want to be prepared for that. We don’t want to, you know, have to wait till the eve before AGI happens and go, ‘Maybe we should have thought about this a bit harder,’ okay? We should be preparing for that now, right? And trying to ascertain more accurately what the risk really is […], no matter what the probability [of Doom] is, because it’s definitely non-zero, right? So I don’t agree with the people that say there’s nothing to see here. I think that’s ridiculous.”

On the Hardness of AI Safety:

“What is that risk? How would we mitigate it? What risks are we worried about? […] the field should be doubling down on analysis techniques and figuring out an understanding of these systems way ahead of time, because we’re on the kind of cusp of AGI. And that isn’t a lot of time, because if we’re less than a decade away, these problems are very hard research problems. They’re probably harder than, or as hard as, the breakthroughs required to build the systems in the first place. So we need to be working now, yesterday, on those problems […]”

On the absurdity of the “0% chance of risk” position:

“On what basis are they making that assumption, right? Just like in the past, 10, 15 years ago, when we started out. Well, I remember doing my postdoc at MIT, which was the home at the time of traditional AI methods, logic systems, and so on. I won’t name the professors, but some of the big professors there were like, ‘Learning systems, deep learning, reinforcement learning, they’ll never work. 300 years, for sure never work.’ And I was just like, how can you put 0% on something over 300 years? 300 years! Think 300 years back; what happened? What has society done? I mean, that’s just not a scientific statement, to say 0%. We don’t even understand the laws of physics well enough to say things are 0%, let alone technologies. So it’s clearly non-zero. It’s massively transformative, we all agree; hugely monumental impact, hopefully for good. Obviously, that’s why we’re working on it, and I’ve worked my whole life on it. We just talked about that: science, medicine, et cetera, human flourishing. But we’ve got to make sure it goes well. So if it’s non-zero, we should be investigating that empirically and doing everything we can to understand it better, and actually be more precise then, in the future […]”

“Demis, what’s your P(Doom)?”

“Yeah, I know that people are fixated with that. Do you know my honest answer to these things? First of all, I actually find a lot of the debate in the social media sphere a little bit ridiculous, in this sense: you can find people on both sides of the argument, very eminent people. Just take Geoffrey Hinton versus Yann LeCun, right? Both are Turing Award winners, and I know them both very well. Yoshua Bengio, you know, these are some of the top people who originally were in the field. And the fact that they can be in completely opposite camps suggests to me that actually we don’t know, right? This technology is so transformative; it’s unknown. So I don’t think anyone can precisely put a probability on it. I think it’s kind of nonsense to put a precise probability on it. What I do know is that risk is non-zero, right? It’s definitely worth debating, and it’s worth researching really carefully, because even if that probability turns out to be very small, let’s say on the optimist end of the scale, then we still want to be prepared for that. We don’t want to have to wait till the eve before AGI happens and go, ‘You know what, maybe we should have thought about this a bit harder.’ We should be preparing for that now, right? And trying to ascertain more accurately what the risk really is.

What is that risk? How would we mitigate it? What risks are we worried about? Is it self-improvement? Is it controllability? Is it the value systems? Is it the goal specification? All of these things are research questions, and I think they’re empirical questions. It’s unlike a natural science such as chemistry, physics, or biology, where the phenomena you’re studying already exist in nature, so you go out there, study them, and try to take apart and deconstruct what’s going on. With an engineering science, the difference is that you have to create the artifact worthy of study first, and then you can deconstruct it. And only very recently, I would say, do we have AI systems that are even interesting enough to be worthy of study. But we have them now, things like Gemini, AlphaFold, and so on, and we should be doubling down, and we are, obviously, as Google DeepMind. But the field should be doubling down on analysis techniques and figuring out an understanding of these systems way ahead of time, because we’re on the kind of cusp of AGI. And that isn’t a lot of time, because if we’re less than a decade away, these problems are very hard research problems. They’re probably harder than, or as hard as, the breakthroughs required to build the systems in the first place. So we need to be working now, yesterday, on those problems, no matter what the probability is, because it’s definitely non-zero, right? So I don’t agree with the people that say there’s nothing to see here. I think that’s ridiculous. On what basis are they making that assumption, right?

Just like in the past, 10, 15 years ago, when we started out. Well, I remember I was doing my postdoc at MIT, which was the home at the time of traditional AI methods, logic systems, and so on. I won’t name the professors, but some of the big professors there were like, ‘Learning systems, deep learning, reinforcement learning, they’ll never work. 100, 300 years, for sure never work.’ And I was just like, how can you put 0% on something over 300 years? 300 years! Think 300 years back; what happened? What has society done? I mean, that’s just not a scientific statement, to say 0%. We don’t even understand the laws of physics well enough to say things are 0%, let alone technologies. So it’s clearly non-zero. It’s massively transformative, we all agree; hugely monumental impact, hopefully for good. Obviously, that’s why we’re working on it, and I’ve worked my whole life on it. We just talked about that: science, medicine, et cetera, human flourishing. But we’ve got to make sure it goes well. So if it’s non-zero, we should be investigating that empirically and doing everything we can to understand it better, and actually be more precise. Then, in the future, maybe in five years’ time, I would hope to give you a much more precise answer, with evidence to back it up, rather than, you know, slanging matches on Twitter, which I don’t think are very useful, to be honest.”