FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Valuable information from highly respected sources! Learn about the well-known dangers of AI.

Meanwhile, every day our p(doom) goes up and up and up…

I mean, how would human beings sort of experience such a superintelligence? I mean, like, in practice, what's that like?
Well, unless it's a limited, narrow superintelligence, I think you mostly don't get to observe it, because you are dead, unfortunately. [Music]
We started OpenAI seven years ago because we felt like something really interesting was happening in AI. We wanted to help steer it in a positive direction.
OpenAI unveiled ChatGPT… ChatGPT has been in circulation for just three months, and already an estimated 100 million people have used it.
How many folks in the audience have used ChatGPT?
I think it's the single largest opportunity and biggest paradigm shift we've seen since the internet originally came out.
So I'm going to show you how to use ChatGPT to make money online.
The absolute best ChatGPT prompt.
It will turn your drawing into a fully functional website.
ChatGPT, thank you for talking to me today. You're welcome. I'm here to help answer any questions you may have.
In six weeks, these guys have gone from… So I appreciate it. Thanks for having me. …a zero valuation [Music] to now being a $29 billion company. [Music]
Tonight we take you inside the headquarters of a small company in San Francisco called OpenAI, creators of ChatGPT. CEO Sam Altman is just 37.
I think people should be happy that we're a little bit scared of this.
You're a little bit scared? A little bit. You personally?
What is your, like, best-case scenario for AI, and worst case?
The bad case, and I think this is, like, important to say, is, like, lights out for all of us.
Oh, listen up. This morning, a massive development on the AI front: Elon Musk and other major tech leaders calling for a pause on giant artificial intelligence experiments, writing bluntly in an open letter that AI systems with human-competitive intelligence can pose profound risks to society and humanity.
And nobody would suggest that we allow anyone to just build nuclear warheads if they want. That would be insane. And mark my words: AI is far more dangerous than nukes.
The man widely seen as the godfather of artificial intelligence has quit his job.
He is what I call the existential threat, which is the chance that they get more intelligent than us and they'll take over.
I heard the old dude that created AI saying this is not safe, because the AIs got their own minds. I'm like, are we in a movie right now, or what? [Music] [Applause]
Um, so, you know, the original story that I heard on OpenAI, when you were founded as a nonprofit, was that you were there as the great sort of check on the big companies doing their unknown, possibly evil thing with AI, and you were going to build models that sort of somehow held them accountable and were capable of slowing the field down if need be. And yet what's happened, arguably, is the opposite: your release of ChatGPT sent such shock waves through the tech world that now Google and Meta and so forth are all scrambling to catch up.
But this isn't an arms race, it's a suicide race, where everybody loses if anybody's AI goes out of control. Absolutely.
Do you believe, and I'm quoting him, that it is not inconceivable that it could actually lead to the extinction of the human race?
Not only is it not inconceivable, I think it's quite likely, unfortunately. And I'm not the only one saying this.
Overall, you know, maybe you're getting more up to, like, a 50-50 chance of doom shortly after you have systems that are human-level.
This is a stat that took me by surprise: 50% of AI researchers believe there's a 10% or greater chance that humans go extinct from our inability to control AI.
We have to realize what people are talking about: the destruction of the human race, the end of human civilization. You see how it all evens out? Who would want to continue playing with that risk? But it is happening today, and companies are continuing. There's not enough divestment; there is not enough real, meaningful action by the experts to say, we are going to change our behavior in the interest of protecting humanity.
It just sounds absurd that serious people like yourself, these tech people, can talk about the end of the human race. It really, it really concentrates the mind.
So every time you release a model, every time you build such a model, you're rolling the dice. You know, maybe this time's fine, maybe next time, but at some point it won't be. It's like it's Russian roulette.
So the risk is that GPT-6 won't be written by humans; it'll be written by GPT-5.5.
I've been watching lots of these kind of long-form podcasts with people like Sam Altman, who you mentioned, and then they'll continue to speak about the research they're doing after saying that it might bring about the end of humanity. Why do they carry on doing it? Yeah, I, um, I think that's a really important… Wow.
And the investment in the creation of the foundation models is on the order of 50 million, 100 million? We don't share, but much more than that. Billions of dollars. And, you know, thousands, tens of thousands, of our brightest engineers and scientists are working day in, day out to create ever more powerful systems.
Well, the number of people who work full-time on, like, the alignment problem is probably less than 200 people, if I had to guess. Alignment means making it safe, the moral alignment. So at present, 99% of the money is going into developing
them, and one percent is going into, sort of, people saying all these things might be dangerous. It should be more like 50-50.
Meanwhile, alignment is moving like this; capabilities are moving like this. For the listener: capabilities are moving much faster than alignment. It's kind of like we're rushing towards a cliff, but the closer to the cliff we get, the more scenic the views are and the more money there is there, so we keep going. But we have to also stop at some point, right?
Given how fast things are moving and how fast you're developing this technology, how much time do we actually have?
CEOs involved in artificial intelligence development meeting with President Biden and Vice President Harris in Washington. The White House said that Biden told the CEOs they need to mitigate the risks posed by AI to individuals, society, and national security.
I'm sort of skeptical, and I think many are skeptical. Maybe that's warranted, because the technology has developed so, so quickly, and public policy takes so much longer to develop. Just calling in the private companies and saying you're in charge and you have more obligation is nothing, especially for Microsoft and Google, the two leaders here, and OpenAI.
I guess, you know, we hear the word responsible, responsible, responsible, we're going to do this responsibly. It seems like you're not buying that. What do you think?
Well, those companies are responsible to their shareholders; they're not necessarily responsible to humanity as a whole. It's the systemic processes that are protecting business interests over human concerns that create this pervasive environment of irresponsible technology development.
Please raise your right hand.
As these systems do become more capable, and I'm not sure how far away that is, but maybe not, not super far, I think it's important that we also spend time talking about how we're going to confront those challenges.
So that's what a large language model is: it's this giant, trillion-parameter circuit that's been trained to predict the next
word. What goes on inside, we haven't the faintest idea.
I expect there will be times when we find something that we don't understand and we really do need to take a pause, but we don't see that yet.
We probably have more idea of what's happening inside the human brain than we do about what's happening inside the large language models.
There is an aspect of this which all of us in the field call a black box. You know, you don't fully understand it; you can't quite tell why it said this or why it got it wrong. We have some ideas.
You don't fully understand how it works, and yet you've turned it loose on society. [Music]
Just shut down all the giant training runs. They don't know what they're doing. They're not taking it seriously. There's an enormous gap between where they are now and taking it seriously, and if they were taking it seriously, they'd say: we don't know what we're doing, we have to stop. That is what it looks like to take this seriously.
In a traditional software system, a programmer writes code which solves the problem. AI is very different: AIs are not really written, they're more like grown. You have a sample of data of what you want it to accomplish, and then you use huge supercomputers to crunch the numbers and, kind of like, organically almost grow a program that solves these problems. And importantly, we have no idea how these programs work internally. They are complete black boxes; we don't understand at all how their internals work. This is an unsolved scientific problem, and we do not know how to control these things.
What a lot of safety researchers have been saying for many years is that the most dangerous things you can do with an AI are, first of all, teach it to write code, because that's the first step towards recursive self-improvement, which can take it from AGI to much higher levels.
Bard has already learned more than 20 programming languages.
That's good. Let's get ChatGPT to write some code for us. [Music]
Oops, we've done that. Another thing that's high-risk is connecting it to the internet.
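Two technical claims in the passage above, that a language model is a circuit trained to predict the next word and that AIs are grown from data rather than written by hand, can be illustrated with a toy sketch. This is a deliberately simplified, hypothetical example: real LLMs are trillion-parameter neural networks trained by gradient descent, not word-count tables, and the corpus and function names here are invented for illustration. Only the training objective, predicting the next word from examples, is the same.

```python
# Toy "next word" predictor, grown from data rather than hand-written rules.
# A real LLM replaces the count table with a trillion-parameter network,
# but the objective (predict the next word) is identical.
from collections import Counter, defaultdict
from typing import Optional

def train_bigram(corpus: str) -> dict:
    """'Grow' a model from data: count which word follows which."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model: dict, word: str) -> Optional[str]:
    """Return the most frequent next word seen in training, if any."""
    followers = model.get(word.lower())
    if not followers:
        return None  # word never appeared in the training data
    return followers.most_common(1)[0][0]

model = train_bigram(
    "the model predicts the next word and the next word after that"
)
print(predict_next(model, "the"))  # "next": it followed "the" most often
```

Nothing in `train_bigram` encodes how English works; the behavior comes entirely from the data, which is the sense in which such systems are "grown", and a hint of why their internals become hard to inspect at scale.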
Let it go to websites, download stuff on its own, and talk to people.
A big part of our strategy is, while these systems are still relatively weak and deeply imperfect, to find ways to get people to have experience with them, to have contact with reality, and to figure out what we need to do to make it safer and better.
Oops, we've done that already.
That's like saying, well, the only way we can test our new medicine, the only way we can know whether it's safe or not, is to actually put it right into the water, give it to literally everybody as fast as possible, and then, before we get the results from the last one, to make an even more potent drug and put that into the water supply as well, and do this as fast as possible. [Music]
Have you seen Don't Look Up, the film? This feels like a gigantic, uh, Don't Look Up scenario. It's a movie about, like, this asteroid hurtling towards Earth.
Good afternoon, everybody. There's an expert from the Machine Intelligence Research Institute who says that if there is not an indefinite pause on AI development, and this is a quote, literally everyone on Earth will die. Would you agree that does not sound good, Peter?
It's quite, it's quite something. We are taking this very seriously. We put our blueprint out; it is a cohesive federal government approach to AI-related risks, as you just laid out in a very dramatic way.
But clearly we're treating it more dramatically. I mean, you just read it: literally everyone on Earth will die. Pretty, pretty dramatic.
Pretty dramatic? Isn't that an extinction-level event? Wow, that's not dramatic? Here, at this very moment, I say we sit tight and assess.
We are actually acting it out. It's life imitating art. Humanity is doing exactly that right now, except it's an asteroid that we are building ourselves. I feel like we're at the beginning of a disaster film, where they show the news clips.
Okay, well, as damaging as it is, will it hit this one house in particular that's right on the coast of New Jersey? It's my ex-wife's house. I need it to be hit. Can we make that happen?
What is your, like, best-case
scenario for AI, and worst case?
I mean, I, I think the best case is, like, so unbelievably good that it's, like, hard to… I, I think it's, like, hard for me to even imagine.
And when these, uh, treasures from heaven are claimed, poverty as we know it, social injustice, loss of biodiversity, all these multitudes of problems are just going to become relics of the past.
We are working to build tools that one day can help us make new discoveries and address some of humanity's biggest challenges, like climate change and curing cancer.
They found a bunch of gold and diamonds and rare earths on the comet, so they're gonna let it hit the planet to make a bunch of rich people even more disgustingly rich.
Almost nobody is talking about it, and people are squabbling across the planet about all sorts of things which seem very minor compared to the asteroid that's about to hit us. And one of the things that worries me most about the development of AI at this point…
So do I need to invest in the AI so I can have one, or…
…is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead.
Right now we're at a fork in the road. This is the most important fork humanity has reached in its over one hundred thousand years on this planet. We're building effectively a new species that's smarter than us. It's as if aliens had landed, but we didn't really take it in, because they speak good English.
We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models. For example, the US government might consider a combination of licensing and testing requirements for the development and release of AI models above a threshold of capabilities.
Humans have kind of changed the environment on Earth very significantly as a result of our intelligence relative to other species, and that's had, you know, significant consequences for some species and for the biosphere in general. Common sense tells you that something similar might happen if we invent something more
intelligent than us.
Arguably, we are on the event horizon of the black hole that is artificial superintelligence.
If we were to write a book about the folly, the history of human hubris, dealing with nukes and AI and things like that, we could easily have the last chapter in that book, if we are not more careful about confident wrong ideas.
It's possible that there's no way we will control these superintelligences, and that humanity is just a passing phase in the evolution of intelligence.
Most observers and experts would say we're on this path towards superhuman intelligence, and we're not prepared for success. We're investing hundreds of billions of dollars into a technology that, if eventually it succeeds, could be civilization-ending, could be a huge catastrophe.
I have not met anyone right now in these labs who says that, sure, the risk is less than one percent of blowing up the planet. It's important that people know that lives are being risked by these very particular experiments.
Let's be clear: they're racing, for their own personal gain, for their own glory, towards an existential catastrophe that no one has consented to.
We just had a little baby, and I keep asking myself, you know, how old is he even gonna get, you know? And, and I said to my wife recently, it feels a little bit like I was just diagnosed with some sort of cancer, which has some, you know, risk of dying from and some risk of surviving, except this is the kind of cancer which would kill all of humanity. [Music]
Oh, if somebody's listening to this and they're young and trying to figure out what to do with their life, what advice would you give them?
Don't expect it to be a long life. Don't, don't put your happiness into the future. The future is probably not that long at this point. But none know the hour nor the day.
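The "Russian roulette" framing earlier in the transcript can be made concrete with simple arithmetic. Assuming, purely for illustration (no speaker in the transcript quantifies this), that each frontier model release carries an independent probability p of catastrophe, the probability of getting through n releases safely is (1 - p)^n, which shrinks geometrically no matter how small p is.

```python
# Hypothetical compounding of a fixed per-release risk p over n releases.
# The 5% figure below is an illustrative assumption, not anyone's estimate.
def survival_probability(p: float, n: int) -> float:
    """Chance of no catastrophe across n independent releases."""
    return (1.0 - p) ** n

for n in (1, 10, 50):
    print(n, round(survival_probability(0.05, n), 3))
# At p = 0.05, survival is 0.95 after 1 release and roughly 0.599 after 10.
```

The point of the analogy survives any particular choice of p: as long as p stays above zero and releases keep coming, (1 - p)^n tends to zero.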
