On p(doom): “I think we haven’t got a clue, so 50% is a good number” – Prof. Geoffrey Hinton

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Introduction

Q: All right. As Bill mentioned, it’s great to have Geoffrey Hinton here. I’ll list your accomplishments briefly ahead of time, even though it feels kind of —

Hinton: Go on, say I don’t need any introduction.

Q: My first bullet point is that it’s trite to say he needs no introduction, so I’m going to give him an introduction.

Hinton: So we’re done. What’s your first question?

Q: You sure? All right. So I’m curious about how your thinking has changed. As Bill was mentioning a moment ago — not as a call-out, but just because I’m curious — I’m going to read some things you said in 2018, partly because I feel like I hear people say these things a lot today, and your tone now on AI broadly, and on the potential risks from AI, seems quite different. Some things I found on your Wikipedia page: “The phrase ‘artificial general intelligence’ carries with it the implication that this sort of single robot is suddenly going to be smarter than you. I don’t think it’s going to be like that. I think more and more of the routine things we do are going to be replaced by AI systems.” And: “AI in the future is going to know a lot about what you’re probably going to want it to do, but it’s not going to replace you.” This seems quite different from what I hear from you now, so I’m curious if you could walk us through the evolution in your thinking over the intervening six years.

Could you walk us through the evolution of your thinking

Hinton: Back then I thought it was going to be a long time before we had things smarter than us. I also thought that as we made things more like the brain, they would get smarter. In my last couple of years at Google I was trying to think of ways you could use analog hardware to reduce the energy requirements of training and serving these large language models. It’s kind of obvious that if you can use analog hardware, and you have a learning algorithm that can make use of all the peculiar quirks of the analog circuitry, you can overcome a lot of the problems of analog. You don’t have to make two different computers behave the same way; they just learn to make use of the hardware they’ve got. That makes them kind of mortal — when that hardware dies, their weights are no good anymore — but it also means you can run at much lower power.

The classic example: if you want to multiply a vector of neural activities by a matrix of weights to get the input to the next layer — which is the central operation, the thing that takes most of the compute — then if you make the neural activities be voltages and you make the weights be conductances, a voltage times a conductance is a charge per unit time, and charges add themselves up. So you’re done: you never had to turn an activity into a 16-bit number and then do 16-squared one-bit operations to do the multiply. It’s very low power, very easy and quick. (For less linear things, it’s harder.) So the question was: could you use analog hardware to get much lower energy?

As I thought more and more about that, various problems became clear. One was that we don’t know how to learn in a system when we don’t know how the system behaves: if you don’t know how the system behaves, you don’t know the forward pass, so you can’t do the backward pass. People have dreamed up ways around that — approximations — and the approximations work well for small things like MNIST, but nobody has ever made a plausible version that doesn’t use backprop work even for things of the size of ImageNet — which is now a small thing; it used to be a huge thing. So that was one problem.
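Here is a minimal sketch in Python of the analog multiply-accumulate Hinton describes (the sizes and values are illustrative assumptions, not from the talk): activities become voltages, weights become conductances, and Ohm’s law plus the summing of currents on a wire do the multiply-accumulate in the physics rather than in logic gates.

```python
import numpy as np

# Analog matrix-vector multiply: make activities voltages and weights
# conductances. Ohm's law gives a current I = G * V through each weight,
# and currents arriving at the same output wire simply add up, so the
# multiply-accumulate happens in the physics, not in logic gates.
rng = np.random.default_rng(0)
n_in, n_out = 4, 3
voltages = rng.uniform(0.0, 1.0, size=n_in)               # neural activities
conductances = rng.uniform(0.0, 1.0, size=(n_out, n_in))  # learned weights

currents = conductances @ voltages   # what the circuit computes, essentially for free

# The digital route: turn each activity into a 16-bit number first, then
# pay roughly 16^2 one-bit operations per multiply.
levels = 2**16 - 1
quantized = np.round(voltages * levels) / levels
digital_result = conductances @ quantized

print(currents)         # analog answer
print(digital_result)   # nearly identical, at far higher energy cost
```

The same physics is why the weights become “mortal”: each real device has slightly different conductances and nonlinearities, so a learning algorithm that exploits them produces weights that only make sense on that one piece of hardware.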
The other problem was that I became aware of what the big advantages of digital computation are. It costs a lot of energy, but you can have lots of different copies of the same model, and those different copies can go and look at different data, and they can all learn something from the different data they look at — and then they can share what they all learned just by averaging their gradients. That’s huge in terms of sharing information between different copies of the same model. If we could share information like that, 10,000 people could go out and study 10,000 different subjects, averaging their weights as they went, and you’d get 10,000 degrees all at once. It doesn’t work like that for us; we can’t communicate information very well at all, as I’m now demonstrating. I produce sentences; you figure out how to change your weights so that you would produce the same sentences. That’s called a university. And it doesn’t work very well — it’s very slow compared with sharing weights.

So they have this huge advantage over us, and it’s because of that that they know much more than us. GPT-4, by my estimate, knows a thousand to ten thousand times as much as any one person; it’s a not-very-good expert at everything. They can do that partly because they can use backpropagation to get the gradient, and partly because they can share between many different copies of the same model. So I became convinced that this is just a better form of computation than what we’ve got. In just a few trillion connections — they won’t actually tell me the number, but it’s going to be just a few trillion — they can hold thousands of times more knowledge than we have in a hundred trillion connections. And they’re solving a different problem from ours: their problem is huge amounts of experience and not many connections, and backprop is very good at squeezing information in. Our problem is the other way around — very little experience and huge numbers of connections — and we probably have a very different kind of learning algorithm.

So I became convinced that making things more like the brain — that era is more or less over. Things are going to get smarter not by making them more like the brain but by exploiting the path we’re already on, which is now specific to digital intelligence. I became convinced that these things are just better than us. The end.
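A toy illustration in Python of the gradient averaging Hinton describes (the model, data, and learning rate are invented for illustration): identical digital copies each learn from different data, then pool everything they learned in one exact step — the sharing trick that analog, hardware-specific weights cannot do.

```python
import numpy as np

# Two identical copies of a linear model y = w . x, trained on different data.
rng = np.random.default_rng(1)
w = rng.normal(size=5)            # shared starting weights
copies = [w.copy(), w.copy()]

def gradient(w, x, y):
    """Gradient of the squared error 0.5*(w.x - y)^2 with respect to w."""
    return (w @ x - y) * x

# Each copy sees its own example (different "experience").
examples = [(rng.normal(size=5), 1.0), (rng.normal(size=5), -1.0)]

grads = [gradient(wc, x, y) for wc, (x, y) in zip(copies, examples)]
avg_grad = np.mean(grads, axis=0)   # share everything learned, exactly

lr = 0.1
copies = [wc - lr * avg_grad for wc in copies]

# The copies stay bit-for-bit identical, and each has absorbed both
# examples: trillions of weights' worth of bandwidth per sharing step.
assert np.allclose(copies[0], copies[1])

# Contrast with the human route Hinton jokes about ("a university"):
# I emit a sentence (a few hundred bits) and you adjust your own weights
# to make it more probable -- vastly lower bandwidth than sharing gradients.
```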
they’ve got and so there’s a completely different path you could go along and that’s the biological path where you make use of weird properties of your particular neurons and your weights are no use to me because my neurons are different and my connectivity is different so that was very exciting but then as I encountered the difficulties of how do you get that to learn and what is the learning algorithm um I began to realize the huge advantages of digital computation even when you’re learning even when you’re not programming me directly and the advantage comes from being able to share different models being able to share so that led me to believe that these things were better now that happened at the same time as chatbots like Palm were coming along that could explain why joke was funny so it was originally that was another big influence I I always had as a Criterion I have no justification for this but my Criterion for when these things get really smart is when they could explain when a joke why a joke was funny that seemed to me a good measure that was my churing test and palm could do it um it couldn’t actually make jokes you may have noticed even gp4 can’t make jokes gp4 because it generates a word at a time you see so if you ask it to tell a joke it starts generating stuff that looks very like the beginning of a joke you know it says a priest in an octopus went into a bar well you know that’s the beginning of a joke right and then it keeps on like that and then it gets to the point where it needs the punch line but like some of my friends it doesn’t think of the punch line before it tries to tell the joke and that’s a disaster so it has this incredibly wimpy punch line but that’s just because it generates one word at a time it doesn’t have to behave like that um so it’s much better at saying why joke’s funny than telling a joke unless it’s just remembering a joke it knows but make creating a joke it’s hard for it anyway my Criterion was can it tell tell you why joke’s funny and it could so it was sort of those two things together the ability the chatbots which is sort of reinforced by playing with GPT 4 and the fact that I finally understood what it is about digital that makes it so much better than analog that made me sort of think that my view is probably these things are going to take over probably we’re um just a passing phase in the evolution intelligence and it’s probably like it is with a dragonfly you know dragonflies are wonderful things if you look at the lava of a dragon fly it looks nothing like a dragonfly it’s this B big bulky thing that lives underwater um and what happens is it gets a lot of energy from the environment and then it turns into soup and out of the soup you build a dragonfly and with the lava I’m curious also uh since having that insight about the problem has your view of the problem changed in the intervening year that you’ve been talking about this it’s changed Has your view of the problem changed a bit I’ve learned a bit more about AI safety so I wasn’t particularly interested in AI safety before that and I still don’t know probably everybody here knows more about AI safety than I do um so I decided my role at the age of 76 was not to do original research on AI safety it was just to counter the message that a bunch of stochastic parrots were putting out that um nothing to worry about here uh this is all just science fiction um and it seemed to me no it’s far from science fiction and one thing I could do is use my reputation to say it’s not actually 
And so that’s what I see as my role, and that’s what I’ll be doing.

Q: Got it. Should we go to audience questions?

Q: Hi. My question is: what do you think about the prospects of using future generations of AI — future generations of today’s AIs — to help us make AI safe, to help do research, if we don’t make any fundamental progress before then? If we just make them smarter.

Hinton: What about training foxes to help you stop the foxes from eating the chickens? That’s what it feels like to me. I mean, already, when I want to know what the regulations on AI are, I ask GPT-4 and it tells me the regulations on AI safety, and there seems to be something a bit dangerous about that. If they’re the best tools we’ve got, that’s what we’re going to have to do — but there’s obviously something suspicious about AI helping you regulate AI.

Q: Yeah. I mean, there’s a general principle that if you want to regulate something, you don’t want the police regulating the police.

Knowledge vs Creativity

Q: So you mentioned having this insight that systems like ChatGPT can pool their knowledge with gradient updates, and so they’re able to know much more than us despite having many fewer connections. To me, and to some other people, that’s in some ways a hopeful update, because it means you can have systems that are very, very knowledgeable but not very creative — systems that, at a certain level of usefulness, sort of have to hew close to what they know. I’m curious what you think about that.

Hinton: So it does mean we can have systems that are very, very knowledgeable — but you then infer from that that they would not be very creative?

Q: What I mean by that is that, at the current moment, it seems like they outstrip us in knowledge more than they outstrip us in our most creative work. At some point they’ll be better than us at both, but if that’s the current balance, it could be hopeful for humanity, in the sense that you could ask them for help, and they could provide that help with their vast knowledge while not being as good at creatively trying to evade our monitoring and countermeasures and things like that.

Hinton: I noticed you said “very creative,” because obviously they can be creative already. You’ve probably read the literature and I haven’t, but I read somewhere that if you take standard tests of creativity, they score at something like the 90th percentile for people. So they’re already creative in that sense — they have mundane creativity. The question is whether they have the real creativity that’s sort of the essence of humanity, and I don’t see why not. Indeed, if you think about squeezing a lot of knowledge into a few connections, the only way you can do that is by noticing analogies between all sorts of different domains. So if you ask GPT-4 why a compost heap is like an atom bomb, it knows — and most people don’t. So it has understood it — and I don’t think it understands at test time; I think it probably understood it while it was learning. It knows about chain reactions; it knows that the hotter the compost heap gets, the faster it generates heat, and that that’s just like an atom bomb. Now, it may have inferred that at test time, in which case it’s good at reasoning, but I think that as it’s learning, in order to squeeze all of human knowledge into not many connections, it is seeing similarities between different bits of human knowledge that no person has ever seen.
which no person’s ever seen so I think it’s got the potential to be extremely creative thanks but I agree it’s not there yet um I’m curious if uh you’re talking How could this all go wrong to someone who’s pretty new to these issues and they ask okay yeah I sort of I you know see some some of the abstract things you’re saying but concretely how could this all go wrong like what’s the actual concrete mechanism via which you know we would be harmed or whatever I’m curious what you say to such a person okay I start by saying um how many examples do you know of a less intelligent thing controlling a more intelligent thing and I only know one example which is a baby controlling a mother and EV Evolution put huge amount of work into that the mother just can’t stand the sound of the baby crying and there’s all sorts of hormones involved in things and this is a man talking about women right sorry but for motherhood there are lots of hormones involved um it’s the whole system is evolved like that um almost always more intelligent things control less intelligent things um so that’s a starting point it’s just not plausible that something much more intelligent would be controlled by something much less intelligent unless you can find a reason why it’s very very different so then you start exploring the reasons why it’s very very different um one reason might be that it has no intentions of its own or desires of its own but as soon as you start making it agentic with the ability to create sub goals it does have things it wants to achieve um they may not seem as urgent as human intentions or something but there’s things it wants to achieve um so I don’t think that’ll block it then the other thing that I think most people not not us lot I think but most people think that well we’ve got subjective experience we’re just different from machines and they very strongly believe that we’re just different from how a machine could ever be and that’s going to create a barrier I don’t believe Ma I think machines can have subjective experience too I’ll give you an example of machine having subjective experience because once you’ve seen the example you you should just agree machines can have strc of experience um take a multimodal chatbot and train it up and put an object it’s got a camera it’s got an arm you put an object in front of it and you say point to the object and it points to the object no problem you then put a prism in front of its lens and you put an object in front of it you say point to the object it points to the object you say no the object’s not there I put a prism in front of your lens your perceptual system’s lying to you um the object actually straight in front of you and the chat boot says oh I see the prison bent the light Ray so the object’s actually straight in front of me but I had the subjective experience it was over there and what the I think it’s then using subjective experience exactly the way we use it so we have a model of how we use words and then we have how we actually use them so this goes back to kind of Oxford Philosophy from a long time ago um and many of us have a a false model of how the words we use work we can use them just fine but we don’t actually know how they work like um what would you say to someone who is like well the you know the you the the baby and mother Is the economy smarter than us example but the economy you know has vast amounts of intelligence and still our tool or some argument along those lines so like they you know they would say oh it’s like you’re 
saying that it’s a more intelligent system controlling uh or sorry a less intelligent system controlling a more intelligent system that’s because you’re misunderstanding that it’s this like tool aggregated across human knowledge like the stock market or something what would you say you mean the stock market is smarter than us you could imagine someone making some argument that it’s like I don’t know well that that that will fit in with it controlling us wouldn’t it I woke up this morning and I look to see what my Google shows were doing it controls me so yeah this is related to the subjective experience thread I’m I’m curious how you think these problems will change as more people come to believe that AIS have subjective experiences or Genuine desires or otherwise deserve they’ll get a more scared because most people for most people and for the general public they think there’s this sort of hard line between there’s machines and there’s things with subjective experience like us and these machines don’t have subject they they may be mimic subjective experience they can’t really have it though and that’s CU we have this model of what subjective experience is that there’s an inner theater and it’s these things in the inner theater is what you really see and that model is Just Junk that model is as silly as God made the world uh a quick followup it it AI rights seems plausible to me that people be will be more scared but also people will maybe be more uh compassionate or think that a deserve rights and things like that yes so one thing I haven’t talked about because I don’t think it’s helpful is to talk about the issue of AI rights um also I kind of eat animals I get other people to kill them but actually they would have killed them anyway you know um but I eat animals because I sort of people are what’s important to me and I don’t think this is a tricky one but suppose they were more intelligent than us are you going to side with them or with people it’s not obvious to me and it’s not obvious to me that it’s wrong tode with people if you think morality is species dependent um so but I think it’s an issue best avoided if you want to talk about the AA safety issue because it now gets you into a whole bunch of other things and you seem flakier and Flake here I’m curious what uh interventions for uh reducing U risk AI interventions from uh AI systems that you’ve been most interested in recently or found most compelling uh I wish I could answer that so for climate change for example um stop burning carbon or maybe capture a lot of carbon but I think that’s an old company plot to distract you um so they can produce it stop burning carbon and in the long run it’ll be okay it’ll take a while so there’s very simple Solutions there it’s all a question of stopping people doing bad things but we know how to fix it here we don’t I mean the equivalent would be stop developing AI I think think that might be a rational decision for Humanity to make at this point but I think there’s no chance they’ll make that decision because of competition between countries and because it’s got so many good uses for atom bombs there really weren’t that many good uses they were mainly for sort of blowing things up although the United States tried as hard as it could so in the 60s they um had a project on Peaceful uses of of nuclear bombs um and was funded and um they used them in Colorado for fracking and it turns out you don’t want to go to that bit of Colorado anymore I know about this because the train goes quite close to 
it there’s no roads anywhere nearby but the train goes quite close and on the train from Chicago to San Francisco there was once a tour guide who was making announcements on the L speaker and she said we’re now sort of 30 Mi west of the place where they use nuclear bonds for freacking but that’s about it for good uses of nuclear maybe diing another Canal or something but um AI is very different from that most of the uses are good uses they’re empowering people they’re you know um everybody can have their own lawyer now Stop the existential threat and it doesn’t cost much I’m not sure that’ll help the legal system but um for healthcare everybody can have their own doctor quite soon and that’s very useful particular if old so that’s why they’re not going to be stopped so we can’t the one way I know to avoid the existential threat is to stop it and I didn’t sign the petition saying slow it down because I don’t think it’s any chance sorry now I should say that doesn’t mean people shouldn’t try so I think the kind of thing Beth is doing is a good try um and it will slow them down for a bit all right great uh yeah so I just wanted to I guess push back a little bit on what you just said by asking the question so uh I think that there’s an unfortunate dynamic in the world right now where a lot of people have this sense of yeah probably we should slow down but I didn’t take any action because there’s no hope of this um if everybody collectively was like we should stop and slow down I think it would happen that everybody would be like yeah this is the ration does everybody include the US defense department I think if everyone did yeah if it included defense department then it then we could do it yeah right uh I think then it would be like freons in the sure yeah yeah right so so my question is uh do you feel like your stance that like this is going to happen and there’s like no stopping it so I’m not going to take any action is uh helping us uh like collectively decide to stop or is it uh like in fact you’re not even interested in any sort of collective action no I am interested in I think we should do anything we can to stop the existential threat and to mitigate all the other threats I think we should do whatever we can but I see myself as a scientist not a politician and so I see my role as just saying how I think things are and in particular what I want to do is convince doubters that there is an existential threat so that people take that seriously and try and do something about it even though I’m fairly pessimistic about whether they’ll be able to yeah on that note um I would be interested in like what things in particular are you Getting the word out kind of up for doing that would be helpful for sort of convincing doubters and generally getting the word out like uh I don’t know if there’s like particular politicians it would be helpful to talk to you for you to talk for to get some particular legislation through or like media appearances or like advising or endorsing particular things all of those things okay so I I mean last year obviously I got involved in a lot of media appearances because I thought it would be helpful um also cuz I like being on TV well one thing I’m doing now there’s a documentary filmmaker trying to make documentaries about the history of AI focusing on various people in the history of AI um probably Yan who’s now crazy um and and but still my friend and um me and some other people and documentaries can have quite a big effect so um if you’re a billionaire you could 
For climate change, Al Gore’s film had a significant effect.

Q: Right — we need the David Attenborough of AI safety.

Hinton: I’m not quite that old yet. He’s a hero of mine.

Policy

Q: On that note, I was kind of curious — we talked a little earlier about some of the policy work going on, and you see yourself as raising awareness around these problems and convincing doubters. But what role do you see for policymakers who are already on board? Where would you want people to go? What types of things would you be hopeful for them to do, if they’re not as doubtful?

Hinton: I think they should make regulations with teeth, and I think they should make regulations that don’t contain a clause saying none of this applies to military uses. Look at the European regulations — and probably, well, I haven’t got my GPT-4 with me so I can’t tell you what the executive order says, but I bet the executive order says it doesn’t apply to military uses too. As soon as you see the clause “this doesn’t apply to military uses,” you know they’re not really serious. They’re happy to regulate the companies, but they don’t want to regulate themselves.

Q: When you say teeth, what kind of teeth do you mean?

Hinton: So, for example, if you were sensible, you’d say that open-sourcing code is very different from giving out the weights. If you open-source the training code, you still need a billion dollars to train a huge model. If you give out the weights, you don’t get any of the normal advantages of open-sourcing — you don’t go in and look at a weight and say, “oh, that one’s wrong.” That’s the advantage of open-sourcing code, and it doesn’t happen with weights. What you get instead is that criminals can fine-tune the model for phishing — which they obviously have done already, because phishing went up 1,200% last year. So I think it would be very nice to say it’s illegal to release the weights of models bigger than a certain size, and if you do, we’re going to prosecute you. That’s the regulation I’d like to see.

Q: Just to follow up on that: you said something in support of Scott Wiener’s SB 1047. Are there other legislative efforts? You travel around a lot speaking publicly — do you go to DC and tell people this? Where do you spend your time?

Hinton: No, I don’t actually travel around much, because I have problems flying. You see, I’m old and I wanted to retire. I left Google because I wanted to retire, not because I wanted to speak publicly — but I thought, this is an opportunity to speak publicly, so I’ll just mention that these things are going to kill us all. And then I was kind of surprised that I was getting email every two minutes. That wasn’t really my intention — maybe I wasn’t very thoughtful — and I thought, well, it’ll blow over and then I can retire. That’s still my intention. So people think I’m a sort of AI safety expert. I’m not an AI safety expert; I just don’t believe that they’re safe.

Q: You just said you’re not an AI safety expert — but in some ways I don’t know that anyone is yet. What questions do we need good answers to before we build systems that are smarter than us? If a government came to you and asked, “Is it safe enough now? Have we actually solved the right problems?” — what are those problems?
Hinton: Well, I think we need a lot more understanding of whether these things will be subject to evolution, for example. If you’re going to get multiple different superintelligences, and evolution kicks in, then I think we’re in real trouble. With a superintelligence, for example, we know that the one that can control more data centers will be able to train faster and will learn more and get smarter. So if a superintelligence ever said, “I’d like there to be more copies of me” — even if it was just so it could get smarter — you’re going to get evolution: the one that’s more aggressive about getting more copies of itself will beat out the others, and that would be very bad news. I’d like some kind of guarantee that evolution isn’t going to kick in, and I’d like to know how you’re going to prevent it. You’re going to have to give it the ability to create subgoals, so then the question is how you prevent it from saying, “well, a very good subgoal is to get more control, because then I can do all the things people want me to do, and I can do them better.” I said this once to a vice president of the European Union — one who specializes in extracting money from Google; well, the money’s just sitting there, right? — and she said, “Yeah, well, why wouldn’t they? We’ve made such a mess of it.”

AI proof

Q: So is there anything specific you could see that would make you feel like the problem is solved? I don’t know — that’s a broader question.

Hinton: There are things that would make me feel bits of the problem were solved — any kind of proof that it won’t do something. I’m very unoptimistic about getting a proof, though, because it’s a neural net, and how it behaves after it’s trained depends on the nature of the training data. You can’t just look at the architecture of the net and the training algorithm to figure out how it will behave; you have to know a lot about the training data to know what’s going to happen. So if there were any kind of proof that it would be safe, that would be great, but I don’t think we’ll ever get one. If you could somehow ensure that it never wanted anything for itself — it never had any ego, it never wanted more copies of itself, it was perfectly happy to be a very intelligent executive assistant for a very dumb CEO, and perfectly happy in that role — that’s what we want, right? There is a good scenario here, which is that we could all have executive assistants far smarter than us, and we could just lounge around telling jokes. But I can’t see how you’re going to get a proof that it’s never going to want to take over.

Open source

Q: So how do you think about concentration of power in that context? What would you say if someone came up to you and said, “If you’re against concentration of power, then what about open source? Why are you opposed to open-sourcing?”

Hinton: I am opposed to open-sourcing the weights, yeah. And it’s a good point, because open-sourcing tends to work against concentration of power — but it also shares the power with all the cybercriminals, and that’s the problem. If you release the weights, it’s very easy to fine-tune: you can fine-tune a model for something like $100,000 when it might have cost a billion dollars to train in the first place, and so the cybercriminals can now do all sorts of things with it. The biggest barrier at present to having a really dangerous thing is that it takes a long time and a lot of money to train one, and releasing the weights gets rid of that barrier. If it wasn’t for that, I’d be in favor of it.
P(doom)

Q: I’m curious about the following. Suppose the trajectory of AI development continues roughly as it has to date, and there are no huge shifts in the safety techniques that are applied — we do RLHF, we do some red-teaming — and we get systems that are able to (I’ll just use this as a benchmark) carry on the AI R&D work themselves from there. What’s your probability that those systems will not, in fact, have our best interests at heart?

Hinton: You want my p(doom)?

Q: I think that’s a sub-question — you might expect doom from other sources too. I’m talking more about something like p(misalignment).

Hinton: Pretty high. I think RLHF is a pile of crap. You design a huge piece of software that has gazillions of bugs in it, and then you say, “what I’m going to do is go through and try to block each one” — put a finger in each hole in the dike. There’s just no way; we know that’s not how you design software. You design it so you have some kind of guarantees. Suppose you have a car that’s rusty and full of little holes, and you want to sell it: what you do is a paint job. That’s what RLHF is — a paint job. It’s not really fixing anything, and because it’s a paint job, it’s very easy to undo. The amazing thing about it, which surprised everybody, is that you don’t need many examples to make the behavior look fairly different. But it’s a paint job, and if you’ve got a rusty old car, a paint job is not the way to fix it. So that’s one belief I have that has some technical content — though not much.

Q: It seems like people can have quite different assessments of the risks presented by these AI systems, and I was wondering whether you have views about what kind of knowledge, empirical data, or demonstrations could be helpful for building consensus about the risk.

Hinton: Let me see if I can repeat the small bits of what you said that I heard: people have very different estimations of the risk, and what kind of empirical data could change that?

Q: Yep.

Hinton: Okay. So the first point is: there are some people, like Yann, who think the risk is about zero, and some people, like Yudkowsky, who think it’s about 99.999%. Both those opinions seem to me completely insane — individually insane, but also insane because there’s a whole bunch of experts, and unless you think you’re just so much smarter than all the others, if you think it’s zero and somebody else thinks it’s 10%, you should at least think it’s something like 1%. So let’s get that out of the way. I actually think the risk of the existential threat is more than 50%, but I don’t say that, because other people think it’s less, and a plausible number that takes into account the opinions of everybody I know is around 10 to 20%. We’ve still got a good chance of surviving it, but we’d better think very hard about how to do that. And I think what we will do, hopefully before they get smarter than us, is get things with general intelligence that are not quite as good as us, and we’ll be able to experiment on them and see what happens — see whether they do try to take control, see whether they do start evolving — while we’re still able to control them, but only just. That’s going to be a very exciting time.
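As a worked version of the deference arithmetic here — with illustrative numbers, and a geometric-mean-of-odds pooling rule that is one common choice rather than anything Hinton specifies — an expert near zero who takes a colleague’s 10% seriously lands at roughly the 1% he mentions:

```python
import math

def pool_geo_mean_odds(probs):
    """Pool probability estimates by taking the geometric mean of their odds."""
    odds = [p / (1.0 - p) for p in probs]
    pooled = math.exp(sum(math.log(o) for o in odds) / len(odds))
    return pooled / (1.0 + pooled)

# "I think it's zero" has to become something small but nonzero to be pooled.
mine, colleague = 0.001, 0.10
print(f"{pool_geo_mean_odds([mine, colleague]):.3f}")  # ~0.010, i.e. about 1%
```

A plain arithmetic average of the two numbers gives about 5% instead; under either rule, respecting the spread of expert opinion rules out staying at zero, which is the point being made.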
Contrast

Q: I have a question myself, quickly. Based on what you were saying up until this point in the conversation, people could be almost surprised at the bit where you said, “oh, but Yudkowsky’s 99.999” — you were sounding pretty pessimistic yourself.

Hinton: Oh no, I think that’s crazy.

Q: So I’m curious where you would draw the contrast, because you’re saying a lot of things people get wrong in the direction of not being worried enough. What would you say to a person who worries like that?

Hinton: I mean, I say it to myself — I tend to be slightly depressive. I think we haven’t got a clue, so 50% is a good number. But other people think we have got some clue, and so I moderate that to 10 to 20%.

Distribution of beliefs

Q: Suppose we build really powerful AIs that could take over if they wanted to. What’s your guess about the distribution of beliefs of people in that kind of reference class? Is this going to be the kind of problem where people are basically taking it appropriately seriously two years in advance, or — you can just answer.

Hinton: Probably not seriously enough. I mean, I think a lot of them actually are taking it seriously, but not seriously enough. I think you’d have to have a big disaster before they take it seriously — and that could happen. You can imagine some rogue AI trying to take over, bringing down the power grid and the water system and so on, and actually not managing it. That would get it taken seriously.

Q: All right. Well, with that, thank you so much for —
