FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Nobody knows for sure when we're going to get artificial general intelligence that can outsmart us across the board. But the tipping point is now, because once we get too close to it, we'll already have lost a bit too much control, I think, as a species to be able to change the direction. Now is a time when we are still in charge as humans. If we get this right, AI can unlock the intelligence inherent in the cosmos and make it work for us.

As long as we humans have worked the Earth, we've wondered about how stuff works, and gradually we figured out a lot about how the external stuff works: the physics, chemistry, and so on. Then we started to figure out that we ourselves are also kinds of machines. We built machines that could do things faster and with more power than us and got the Industrial Revolution, but we could never quite figure out this intelligence stuff.

So artificial intelligence is simply non-biological intelligence. What's intelligence? Intelligence is the ability to accomplish goals; the more difficult the goals are, the higher the intelligence. Traditionally we've had AI that could only accomplish really narrow goals, like playing chess or multiplying numbers. Now we're getting ever more broad intelligence that can do a wide range of tasks, more like a human. Artificial general intelligence means AI that can do all the tasks basically as well as humans, take all our jobs effectively. And then there's artificial superintelligence, which is dramatically beyond human capability; it might be as far beyond our intelligence as human intelligence is beyond, say, a ladybug's.

Early AI researchers totally underestimated how clever evolution had been in designing our brain. You have in your brain about 100 terabytes, and it was only pretty recently that you could put that in a computer. But the exponential growth of our computer technology has totally caught up with that now. And in addition, as the computers got bigger and we got more
data, and we started also realizing that we could invent algorithms which were a little bit brain-inspired, where the machine learns like a child from data, that ushered in this possibility that the machines can now get much smarter than their creators, just like a kid can outsmart their parents, because they can learn.

The early AI was all stuff where we humans had to program in the intelligence. What we call good old-fashioned AI, GOFAI, is when humans who know stuff program instructions for how to do it into a machine, and it sometimes beats humans, like Garry Kasparov at chess, just because it can do more operations per second. The more modern approach, machine learning, is where you don't program in any of the intelligence; you just give it a lot of data to learn from. And it's actually pretty easy to understand how you do it. You build something called an artificial neural network, which is basically like a black box with a whole bunch of knobs on it. You put in some data, and something comes out depending on the settings of these knobs, or parameters as we call them. And then you teach the computer what "good" means: good means when you win the chess game, or good means when it's an accurate translation. And you just tell it, okay, maximize the goodness. So the computer itself will just go twiddling all these knobs now and constantly make the thing better and better. And lo and behold, if you have a lot of data to train on and a lot of knobs, it often gets incredibly good.

A large language model is simply a kind of artificial neural network which is trained on massive amounts of language. GPT-4, for example, has, crudely speaking, read all the text on the internet; that would take me about 10,000 years to do if I didn't pause for sleeping. And it's trained so that "good" just means accuracy at predicting the next word, which seems super lame. But it's turned out that just to be really, really good at predicting the next word, it has to teach itself also all the different languages
on Earth. It has to teach itself to kind of model the humans that are writing the words, to see: are they Republican or Democrat, etc. It's been quite shocking to see how such a simple definition of "good" that the computer can train itself on has given these quite broad, general intelligence capabilities. And once you have that basic architecture, depending on what you train it on, it gets incredibly good at a really wide range of tasks.

I think the media perceive it as if something dramatic has just happened all of a sudden, whereas in fact, as a researcher, what I've seen is just quite steady progress year after year after year. It's just that when it suddenly crosses a threshold where it becomes commercially viable and everybody starts using it, people outside our little research bubble are like, you know, something happened. But it's exactly because of this steady progress that we can feel so confident also predicting that there's going to be a whole bunch more even wilder things happening soon.

Since AGI can, by definition, do all the jobs as well as humans, that includes the job of AI development. So as soon as you get to AGI, you can replace all the human AI researchers like me by AI that can do it way faster and better, and that can shorten the R&D cycle for the next generation from the human time scale of maybe a year to a machine time scale, like a week or an hour. And then if you repeat that many, many, many times over, you get this exponential runaway, which is often called an intelligence explosion, which could leave human intelligence as far behind the machine intelligence as cockroach intelligence is behind human intelligence. Of course, no explosion can go on forever, and an intelligence explosion will ultimately end when you bump up against the laws of physics. But there's been some really nice work on that, showing that the laws of physics limit you only to things that are no more than about a million million million million million times more powerful than
today. So there's a lot of room for intelligence that's just dramatically beyond human intelligence.

I've been concerned about, and excited about, the possibility of advanced AI for as long as I've known about AI, because it's kind of obvious that all the cool stuff and all the horrible stuff that we humans do, we do with intelligence. What's really changed is that timelines have gotten so much shorter. Five years ago, most of my colleagues thought we would have to wait decades to pass the Turing test, to have machines that could talk so well that they could fool us into thinking they were human. And oops, GPT-4 just checked that box. And now a lot of very serious people think we might get AGI next year, two years from now, or at least very soon.

It's not GPT-4 that gives me nightmares; it's what it represents, namely that we're no longer far away from the things that really scare me. When Enrico Fermi managed to build the first nuclear reactor under the Chicago football stadium in the '40s, the big deal wasn't the reactor itself; the big deal was that it made them realize they could build nuclear weapons. And similarly, the fact that we have now been able to master human language with AI makes us realize that we're probably going to be able to master the rest too, pretty soon.

It's not really a leap where there's nothing, nothing, nothing, and then kaboom, we all die. It's a trend that you can already start to see, where we are delegating ever more power to machines. We already have machines today rejecting some job applications, making bail decisions, making important military decisions. And it's so convenient for those in charge, right? Improve the efficiency of this, improve the efficiency of that. You kind of don't see the forest for all the trees: that we're gradually giving up power. And the ultimate source of all the power that we humans have, the reason we are the most powerful species on the planet, is our intelligence. We have more power than tigers not because we have bigger biceps or sharper
claws, but because we're smarter. And it's pretty obvious that if we create new entities that are way smarter than us, there's no guarantee that we're not going to totally lose power to them.

The biggest risk is obviously human extinction: that every single person on the planet is dead. It's not very far-fetched that we would all go extinct if there's a more intelligent set of entities on the planet, because it's happened before, right? We humans have already driven about half of the other species extinct. Similarly, if we lose control over our planet, maybe they'll change the environment so that we can't survive in it anymore, or if we protest too much, they'll view us as pests and decide to get rid of us. It's all about control. The reason why other species that we wiped out, like the woolly mammoth and the Neanderthals, couldn't stop it is because they had lost control over the planet; they were no longer the smartest. And similarly, once we abdicate control, destiny is no longer in our hands.

It doesn't have to be a disaster to be in the presence of more intelligent beings. You and I have both done this: when we were kids, there were Mommy and Daddy, and it worked out for us because their goals were aligned with ours. The big risk is that the more intelligent beings we're creating now might have goals that are not aligned with ours. That's exactly what went wrong for the woolly mammoth, the Neanderthals, and all the other species that we wiped out.

Some people dismiss extinction risk by arguing that when you're more intelligent, you become less power-seeking and generally sort of more nice, whereas I think it's pretty clear from human history that intelligence is not something that makes you morally good or morally bad; it just makes you more capable of accomplishing whatever your goals are. If Hitler had been more intelligent, I actually think we would have been worse off rather than better off as a species. And that means it's not enough to just build smart
machines. You have to actually make sure that you give them goals that are aligned with human flourishing. Just like when you're a parent, you don't want to just teach your child to use powerful tools and weapons; you really need to also make sure that they have good values.

I see two main challenges for a good future with AI. One is figuring out how to make machines actually do what their owners want them to do; that's often called AI safety. You don't want your safe self-driving car to suddenly go off a cliff. The other one is to make sure that society gives the right incentives to all players, so that you don't get some human who wants to take over the world, or wipe out a certain ethnic group, or whatever, to do that with the aid of AI. If we just solve the technical problem, it's not even clear that that's a net positive, because now someone who wants to do horrible things has more ability to do so.

The truth is that we really have no better understanding of how a large language model works inside than we have of how our own brain does what it does. When a large language model convinced a Belgian man to commit suicide, or tried to convince a journalist to leave his wife, it's not because some engineers were like, ha, let's mess with these people and put this feature in. They had no idea that it was going to do those things. So mechanistic interpretability is the nerdy name for this quest to change that: to say, let's look inside the black box and figure out how it's actually working, so we can figure out whether we should trust it, and maybe even change it to make it worthy of our trust.

Some of my colleagues think we're doomed because the West and China somehow are never going to cooperate. I think that's too pessimistic, because, you know, it doesn't matter if you're American or Chinese once you're extinct, and the Chinese government is no more interested in losing control over this technology than the American government is. So they actually have an aligned
interest, now that extinction is something they think about and take seriously. Building ever more powerful AI is not a race that the West can win and China lose, or the other way around. I think the only really realistic outcomes are either that we all get screwed, all humans, or that we all win, in the sense that suddenly Americans are 100 times richer, the Chinese are 100 times richer, and nobody dies of cancer anymore. It's not a zero-sum game in any way. And the real enemy, therefore, is not any one of the countries or any one of the tech companies. The enemy is just this dynamic that makes us fight against each other and fail to collaborate and create a future where we all get way, way, way better off.

Let's remember that many of these challenges we've succeeded in facing before. We've successfully regulated very powerful tech before, in biology. We've successfully managed to find ways of collaborating with others that we were incredibly distrustful of. When we invented money, for example, it enabled people to start doing mutually beneficial transactions with people they didn't trust at all. And even when the US really wasn't getting along with Stalin, they still managed to collaborate enough that they didn't blow each other up. So we can totally do this. And if we do, it's going to be so much more rewarding for all parties than the nuclear arms race, where it was like either poof or nothing. Here, either we go extinct, or we can have this most incredible future, blowing away even the most optimistic fantasies of sci-fi writers.

The space of possible artificial minds is vastly larger than the space of biological minds, so when we make artificial minds, we have a huge freedom. What if we can just amplify our intelligence now to cure all those diseases that take away our loved ones, to eliminate poverty? We humans are actually at this fork in the road where, if we take the wrong turn here, we're going to lose control and probably get wiped out. I actually think the most likely cause of death for you and me
is AI-related, not cancer or any of the old stuff. It's now; it's happening. But this is not a lost battle. The way we win it is to talk calmly about it and do the right things. We know what we need to do already; we just need to get out there and do it.
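The "black box with a whole bunch of knobs" picture of machine learning described earlier can be sketched in a few lines of code. This is a deliberately toy illustration, not how real systems work: real neural networks are trained by gradient descent rather than random twiddling, and the "knobs" below form a simple lookup table rather than an actual network. The tiny corpus and all the names are invented for the example.

```python
import random

# A toy "black box with knobs": a next-character predictor whose parameters
# (the knobs) are tuned by random twiddling, keeping any change that does not
# reduce a score ("goodness"). Everything here is invented for illustration.

CORPUS = "abababababababab"   # toy "language": the next character alternates
VOCAB = sorted(set(CORPUS))   # ['a', 'b']

def predict(params, ch):
    """Pick the candidate next character with the highest knob setting."""
    return max(VOCAB, key=lambda c: params[(ch, c)])

def goodness(params):
    """Fraction of next-character predictions that are correct."""
    pairs = list(zip(CORPUS, CORPUS[1:]))
    return sum(predict(params, a) == b for a, b in pairs) / len(pairs)

def train(steps=500, seed=0):
    rng = random.Random(seed)
    # One knob per (current character, candidate next character) pair.
    params = {(a, b): 0.0 for a in VOCAB for b in VOCAB}
    best = goodness(params)
    for _ in range(steps):
        knob = rng.choice(list(params))      # pick a knob at random
        old = params[knob]
        params[knob] += rng.uniform(-1, 1)   # twiddle it
        new = goodness(params)
        if new >= best:
            best = new                       # keep changes that don't hurt
        else:
            params[knob] = old               # otherwise undo the twiddle
    return params, best

params, score = train()
print(f"goodness after training: {score:.2f}")
```

The point from the talk survives even at this toy scale: nobody programs in the rule that "b follows a"; the machine just twiddles its knobs toward whatever settings maximize the goodness score.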
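The "repeat that many, many times over" argument about shortening R&D cycles can also be made concrete with back-of-the-envelope arithmetic. The numbers below are purely hypothetical, chosen only to show the shape of the dynamic: if each AI generation halves the time needed to build the next one, the total time for arbitrarily many generations is a geometric series that stays finite.

```python
# Hypothetical illustration of recursively shortening R&D cycles: a first
# cycle of 1 year, with each subsequent generation taking half as long.
first_cycle_years = 1.0
halving = 0.5
total = sum(first_cycle_years * halving**k for k in range(20))
print(f"time for 20 generations: {total:.4f} years")
```

Under these made-up assumptions the sum approaches 2 years, twice the length of the first cycle, which is why the cascade looks like an "explosion" on a human time scale even though each step is an ordinary engineering cycle.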
