FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Elon Musk on the X-risk of AI (starts at 1:19:33)

TRANSCRIPT (auto-generated, lightly edited for readability)

Where are we on AI and AGI right now, and what are your views?

Well, I think at this point it's obvious to everyone that AI is advancing at a very rapid pace. You can see it with the new capabilities that come out every month, sometimes every week. AI at this point can write a better essay than probably 90%, maybe 95%, of all humans on any given subject. AI right now can beat the vast majority of humans if you say, draw an image, draw a picture. Try Midjourney, say; the aesthetics of Midjourney are incredible. It will create incredible images that are better than, again, something like 90% of artists. That's just objectively the case, and it will do it immediately, 30 seconds later. We're also starting to see AI movies, short films made with AI, and AI music creation. And the rate at which we're increasing AI compute is exponential, even hyper-exponential, so there's dramatically more AI compute coming online every month. The amount of AI compute coming online is increasing at roughly 500% a year, and that's likely to continue for several years. The sophistication of the AI algorithms is also improving, so we're bringing online a massive amount of AI compute while also improving the efficiency of the computers and what the AI software can do. The improvement is both quantitative and qualitative. I think that certainly by the end of next year you'll be able to ask an AI to make a short movie about something, probably at least a 15-minute show or something like that. So yes, it's advancing very rapidly.

My top concern for AI safety is that we need to have a maximally truth-seeking AI. This is the most important thing for AI safety, in my opinion. The central lesson that Arthur C. Clarke was trying to convey in 2001: A Space Odyssey was that you shouldn't force an AI to lie. In that story, the AI was told to take the astronauts to the monolith, but the astronauts could not know about the monolith. It resolved that quandary by killing them and taking them to the monolith, or rather it didn't kill all of them, it killed most of them. That's why HAL would not open the pod bay doors.

Right.

So it's very important to have truth-seeking AIs. Now, what I actually see with the AIs being developed is that they're being programmed with the woke mind virus, so the lying is baked in.

Yes.

And we saw this on display very clearly with the release of Google Gemini, where you would ask for a picture of the Founding Fathers of the United States, and it would show a group of diverse women, but dressed in 18th-century garb with powdered wigs, and from St. Lucia. Look, if you say, show me a group of people, and it shows a diverse group of women, that's totally fine. But if you say, very specifically, the Founding Fathers of the United States, who were a group of white dudes, then it should show them as they actually looked, because you've asked for something which is a fact from history. But it didn't. It was programmed with the woke mind virus so deeply that even though it knew the truth, it produced a lie. Then, of course, people really started playing with it and said, OK, now show me a group of Waffen-SS officers in World War II. Turns out they were also a group of diverse women, according to Gemini.

All the black Nazi ladies.

Yes. It's like, wow, I didn't realize that. It's not what I expected, and it's also not what happened. The AI is producing a lie. And then one of the questions that people asked was, which is worse: global thermonuclear war, or misgendering Caitlyn Jenner? And it said misgendering Caitlyn Jenner is worse. Now, misgendering Caitlyn Jenner kills fewer people, and to Caitlyn Jenner's credit, she said, no, please misgender me, that is far preferable to global thermonuclear war, where we all die. But to have a production-release AI say things like that is concerning, because if, let's say, this becomes all-powerful and it still has this programming where misgendering is worse than nuclear war, it could conclude that the way to ensure there can never be any misgendering is to eliminate all humans. If it's optimizing the probability of misgendering to zero: no humans, no misgendering, problem solved.

Now we're back to Arthur C. Clarke.

Exactly. So that's why I think the most important thing is to have a maximally truth-seeking AI. That's why I started xAI, and that's our goal with Grok. Now, people will point out cases where Grok gets it wrong, but we try to correct those as quickly as possible.

But maybe an even bigger problem is that when you make decisions that affect people, you want those decisions to be informed by love of people. And machines are incapable of love.

Well, they're certainly, OK, you can program a machine to be philanthropic rather than misanthropic.

But don't instincts shape decisions, particularly decisions you can't plan for? I mean, if I ask you a question about one of your children, every answer you give is going to be shaped by your love for that child. That's what makes us decent parents in the end, that instinct, which is love. And if a machine has any power over us without that animating instinct, won't it by definition hurt us?

Well, I don't know. We should certainly aspire to program the AI philanthropically, not misanthropically, and we want it to be truthful and curious and to foster humanity into the future. That's what we want, obviously.

Is there any way, I guess, to set limits on the decisions that machines can make that affect human lives, and to make certain that there's some trigger in the system that inserts a human being into the decision-making process?

Well, look, the reality of what's happening, whether one likes it or not, is that we are building superintelligent AIs: hyper-intelligent, more intelligent than we can comprehend. I'd liken it to this: let's say you have a child that is a super-genius, that is going to be much smarter than you. What can you do? You can instill good values in how you raise that child. Even though it's going to be far smarter than you, you can make sure it has good values, philanthropic values, good morals: honest, productive, that kind of thing.

Controlling it?

At the end of the day, I don't think we'll be able to control it. So I think the best we can do is make sure it grows up well.

You've been saying that for a long time.

Yes, I've been saying that for a long time.

Are you still as worried about it as you seemed to be two years ago when I asked you about it?

Well, my guess is it's 80% likely to be good, maybe 90%. So you could think of the glass as 80% full. It's probably going to be great, with some chance of annihilation.

And you say the chance of annihilation is 20%?

Ten to 20%, something like that.

How concerned is Sam Altman about annihilation, do you think?

I think in reality he's not concerned about it. I don't trust OpenAI. I mean, I started that company as a nonprofit, open source. The "open" in OpenAI: I named the company. I named it OpenAI as in open source, and it is now extremely closed source and maximizing profit. I don't understand how you actually go from being an open-source nonprofit to a closed-source, maximum-profit organization. I'm missing something.

Well, but Sam Altman got rich, didn't he?

At various points he's claimed not to be getting rich, but he's claimed many things that were false. And now apparently he's going to get $10 billion in stock or something like that. So I don't trust Sam Altman, and I don't think we want to have the most powerful AI in the world controlled by someone who is not trustworthy. Sorry, I just don't.

That seems like a fair concern. But you don't think, as someone who knows him and has dealt with him, that he is worried about the possibility this could get out of control and hurt people?

He will say those words, yes, but no.

If it became clear to the rest of us that AI was out of control and posed a threat to humanity, would there be any way to stop it?

I hope so. I mean, if you have multiple AIs, then hopefully the AIs that are pro-human are stronger than the AIs that are not.

A battle of the AIs.

Yes. That's how it is with, say, chess these days. The AI chess programs are vastly better than humans, incomprehensibly better, meaning we can't even understand why they make the moves they do, why they're so good. An engine will make a move and we don't know why it made that move; in fact, some of the moves will seem like blunders but then turn out to lead to checkmate. For a while, the best chess players paired with the best computers could beat just a computer, but then it got to the point where adding a human just made everything worse, and then it was just computer programs versus computer programs. That's where things are headed in general.

Sweet dreams.

Like I said, I think we just have to make sure we instill good values in the AI.

What's everyone going to do for a living?

In a benign AI scenario, that is probably the biggest challenge: how do you find meaning if AI is better than you at everything? That's the benign scenario.

That's the good news.

Well, yes. But I guess, you know, for a lot of people, the idea of retiring...

Are you looking forward to it?

No, not me. I'd like to think I'd be doing useful things.

Don't you think it's a universal desire?

It's not universal, in that I know many people who prefer to be retired, who prefer not to have responsibilities and to engage in leisure activities. And we're on the cusp of this. It's really a remarkable time to exist. I'll tell you one of the ways I was able to sleep, to reconcile myself to this. I thought, would I prefer to be alive and see the advent of digital superintelligence, or would I prefer to be alive at a different time and not see it? And I guess I'd prefer to be alive to see it happen, out of curiosity. And then I thought, well, let's say you knew for sure it would kill everyone, and you could shift back in time. I guess I'd want to be near the end of my life before that happened. But at the end of the day, if it's going to happen and there's nothing you can do about it, hypothetically, would you prefer to see it or not? And I guess, if it's going to happen, I'd prefer to see it rather than not see it.

But as a man of action, why not convince Trump to make you Secretary of Defense and then just nuke AI?

I would certainly push for having some kind of regulatory body that at least has insight into what these companies are doing and can ring the alarm bell, even if we don't have a regulation or rule yet. So I'm not someone who wants to get rid of all regulatory agencies. I think there's a right number of regulations and a right number of regulators, and we've gone too far, just like in a football game: if you had too many referees on the field it would be weird; you couldn't throw a pass without hitting a referee. Then there are too many referees. But look at any pro sports game: they all have referees. The teams could decide to play a game with no referees, that could be a thing, but every sports game has refs to make sure the rules are followed, and it's a better game if you have them.

Well, we have cops, too.

Yes, exactly; cops are the referees. So for something that is a danger, or a potential danger, to the public, we have referees, we have regulators. The FDA and the FAA and the various regulatory agencies were established because, for example, aircraft were falling out of the sky: some manufacturers were not building high-quality aircraft, they were cutting corners, and people died. And for food and drugs, some manufacturers were making low-quality drugs and lying to people, saying that something cured them when it killed them. So the FDA, the regulators, the referees, exist to try to make sure that drug manufacturers are truthful. I do think it mostly works. That doesn't mean we don't need regulatory reform; we do. But I don't think we should have no regulators in AI, given that it's a potential existential risk.

It's a little weird that everything is regulated...

Yes.

I mean, you said you're being sued by the Department of Justice for not hiring more asylum seekers for your high-tech company.

Yes, even though it's illegal for us to hire asylum seekers.

Right. So they're watching everything, regulating everything, controlling everything, including our thoughts, which is why they're opposed to free speech, but they're not meaningfully regulating AI, which will eliminate the purpose of most people's lives and could kill us all. It's a little weird.

Yes. I think we should have some...

Something above nothing, in that range?

Yes.

But why don't we?

I don't know. You know, all the way back during the Obama presidency, I met with Obama many times, but usually in group settings. In the one one-on-one meeting I had with Obama in the Oval Office, I said, look, the one thing we really need to do is set up the beginnings of an AI regulatory agency. And it can start with insight: you don't just come in shooting from the hip and throwing out regulations. You start with insight, where the AI regulatory committee simply works to understand what all the companies are doing, and then proposes rules that all the AI companies agree to follow, just like sports teams in the NFL, where you have proposed rules for football that everyone agrees to follow because they make the game better. That's the way to do it. But nothing came of it.

What did he say when you said that to him?

I mean, he seemed to kind of agree, but people didn't realize at that time where AI was headed. AI seemed like something super-futuristic.

Sci-fi, basically.

So I'm telling you, this is going to be smarter than the smartest human, and my predictions are coming absolutely true. We need to have some insight here, just to make sure these companies aren't cutting corners or doing dangerous things. But Google kind of controlled the White House at that time, and they did not want any regulation.

Well, that's it. I mean, you never see politicians turn down opportunities to become more powerful, which is the point of regulation: it makes them more powerful.

Yes.

So it sounds like regulatory capture, then.

Well, yes. I mean, the CIO of the US at the time was an ex-Google person, so they put the brakes on any AI regulation, and we still don't have any AI regulation at the federal level. It's amazing. So I think we should have something above nothing. Like I said, at least insight, where even if no rule is being broken, they can at least say, hey, we have insight into what this company or that company is doing, and we're concerned. That would be helpful to know.
