“It can either be the best thing to ever happen to humanity, or the worst.”
— Max Tegmark
Axios chief technology correspondent Ina Fried speaks to Future of Life Institute president Max Tegmark at an event in Davos.
TRANSCRIPT
Ina Fried: Hi everyone. We're going to have lots of time to mingle, it's going to be great, but if you can keep those conversations to a dull whisper. I'm Ina Fried, chief technology correspondent at Axios. I'm also the author of our daily AI+ newsletter; if you don't get it, it's free, all we ask for is your email, and you can go to newsletters.axios.com. Without further ado, I am joined by our first guest, Future of Life Institute president Max Tegmark. We're going to talk today about our AI future, but when we say that, it sometimes implies that we're talking about a future that's predestined, not something that we as humans have the ability to determine. This could look a lot of different ways, so we're going to get into that. Now, Max, a lot of people first heard about you when you called for a six-month pause on advanced AI development. I've heard you say you knew there wasn't going to be a six-month pause, that that wasn't really the goal; the hope was to get people to pay attention. Are enough of the right people paying attention?

Max Tegmark: Actually, the real goal was just to make it socially safe for people to express their concerns, because a lot of people were nervous about voicing concerns about building smarter-than-human machines in the future, for fear of looking like Luddites. But once they could see that people like Yoshua Bengio, the most cited AI researcher on the planet, were also worried about it, it really did become safe, and I was very happy to see that the public debate kind of exploded after that, and even politicians started talking about it. We got Senate hearings, we got international summits, AI safety institutes and so on. So now at least people are talking the talk; it remains to be seen whether they'll walk the walk.

Ina Fried: I want to set the stage here for where we're at, because it seems like the consensus these days is that what we call AGI, artificial general intelligence, basically computers that are smarter than us, at least in various respects, is coming, certainly within the next decade, probably the next few years. And when we say that, we mean they're going to be able to do the work that we get paid to do today. Paint a picture of what that means. Again, we're going to talk about how we can change how the future looks, but what does it mean for society if computers can do a lot of the work that the folks in this room do?

Max Tegmark: Well, it's pretty obviously going to be either the best thing ever to happen to humanity or the worst. If you can't get paid for doing what you're doing now, if you're basically economically obsolete, then that could either mean that somehow the planet is run in such a way that a society is still created where you don't need to work and you can go fulfill your dreams and do other cool stuff, or it just means some hardcore dystopia from the movies. And that's why I'm so glad you mentioned earlier that we shouldn't just eat popcorn, ask what's going to happen, and wait for the future to happen to us. You are all very influential people; let's think about what future we actually want and make sure that's what happens. And I actually do not think it's a foregone conclusion that we will build AGI, machines that can outsmart us all. Not because we can't; I think it's obvious at this point that our brains are biological computers, that it's physically possible to build much better ones, and that the companies are just a few years away. But rather, we might just not want to. I think it's just so cool to realize that when I ask every single person I talk to here in Davos what they are excited about doing with AI, it's always something that only requires tool AI, that doesn't require AGI. So why are we obsessing over trying to build some kind of digital god that's going to replace us, when we could instead build all these super-powerful AI tools that augment us and empower us?

Ina Fried: Well, I want to push you a little on this idea that there's something that we're going to
be able to build and we're going to choose not to build it, because in human history I don't see a lot of examples of us choosing that. There's a couple you can point to, certainly the Manhattan Project, but I've covered Silicon Valley for 25 years, and I've rarely known them to say: we could build this, but maybe it's not a good idea, so we won't.

Max Tegmark: Let me push back and add some optimism here, actually, because I agree you often hear that humans always build anything that gives power or money, but it's just not true. I mean, I could make so much money if I could just clone you, make a million copies, and rent them out, right? But we don't do human cloning. Not because we can't, but because we think it's dystopian; we don't want to lose control over our species. The only guy who ever did something like it went to jail.

Ina Fried: But those are like the only two examples, right? Human cloning and nuclear proliferation, and even that...

Max Tegmark: There's all sorts of eugenics stuff you could do, editing the human germline; I can give you a longer list. And on this one, I mean, raise your hand if you are excited about getting powerful AI that can help cure diseases and so on. Yeah, okay. Now take them down, and raise your hand if you're really excited about building AI that makes you all economically obsolete and replaces you. Oh, there's a couple; they must own AI companies. But my point is, if the most powerful people on the planet don't want it, and they understand, if we can help them understand, that making this chess move leads to that outcome, then we won't do it.

Ina Fried: I want to get, in a few minutes, into how we get to that world. But for the moment, and I usually save this for the end, I don't give everyone this magic wand, but I'm going to give you my special magic wand: you get to determine the future. How would you build it? What would it do and not do? In an ideal case, paint for us what you would like to see happen.

Max Tegmark: I would go gangbusters on tool AI that solves specific problems, and my definition of a tool is exactly something that we're not going to lose control over. You might like to drive a powerful car, but not an uncontrollable car. So you just have some kind of safety standards, like we have for drugs, for airplanes, for cars, and standard number one is: before you can release this thing, you have to demonstrate to a bunch of nerds (I'm an MIT professor, so I know lots of nerds), you have to convince the nerds, that you're not going to lose control over it. That's all you need to do. All the other cool tool development in AI goes on, the cancer treatments, everything else, but the pursuit of a digital god comes to a screeching halt, at least for the next decade or two. And then we get this future that I'm so fired up about, where we actually use AI to augment and empower ourselves. A lot of that economic disruption still happens if we build these great tools, for sure; it doesn't prevent that. But it's at least our human future, not one where we just throw away the keys to Earth to some stupid alien digital species that doesn't care about us.

Ina Fried: Now I want to bring it back to: okay, how do we get there? Regulation is obviously a big part of how you put guardrails on things. We've seen a little bit of movement: we have the EU AI Act, but almost no real movement in the US; we've had some executive orders, but they're more about how we use AI than about preventing anything. How do we get there? What would good legislation and regulation look like? Is anyone doing it right?

Max Tegmark: Nobody's doing it right at the moment, although China has some AI regulation that's a little tougher than the EU AI Act. But the legislation we need is very simple. You really just need something like an FDA for AI, where companies have to demonstrate they're not going to lose control over it, and you might want to add some other
requirements too, like don't teach terrorists to make bioweapons, whatever. But the hard part is not that; we do this in other industries. The hard part is getting the political will to make it happen. In the past, if you look, for example, at how the Food and Drug Administration in the US came about, it was always triggered by some tragedy, like in that case the thalidomide disaster, where hundreds of thousands of pregnant women gave birth to children without arms or legs, and people said: okay, this is bad, we're going to create the FDA. The problem is that this time we probably don't have time to wait for the big disaster before we treat AI like every other industry. So I think it has to come more from the top leadership just realizing: hey, humans just don't want to lose control over the human future; they want machines working for the humans rather than the other way around; let's put something like this in place quickly. And it could come through an executive order very quickly. I think that's probably the most realistic path in the US.

Ina Fried: And as of, you know, a couple of minutes from now, we will have a new executive. I've heard he's issuing like a hundred executive orders. I suspect none will be the one you're talking about.

Max Tegmark: I think you're right. But, and apologies for being such a cheerful optimist here, it's just my nature, when you listen to Trump himself speak about powerful tech, it's pretty clear that he is bothered by extreme threats from extremely powerful tech. He's talked a lot about the risk of nuclear war, and he was talking pretty recently about the risk of losing control over super-powerful AI. So I think the question is ultimately whether it's going to be his own conscience or moral compass that wins, or the tech lobbyists telling him to look the other way.

Ina Fried: As you point out, who's in Trump's ear matters a lot, and one of the people in his ear a lot is Elon Musk. I've heard you say that could be a good thing. Talk about that.

Max Tegmark: Yeah. If Trump decides that he does not want to entrust the future of the universe to some tech executives in Silicon Valley, then he can obviously get advice from Elon on how to implement this kind of stuff. And David Sacks, the new AI czar, has a tweet where he talks about how he's all for tool AI, but not this new replacement-species kind. So Trump has people around him who could execute this if he wants to, but of course he also has a lot of people around him who will say: nothing to see here; we're going to build a new species, but it's going to be just peace and love and motherhood, nothing to worry about. So we'll see what happens. But I think you can all make a difference too, by making sure that people feel comfortable speaking up about what kind of future we want. The show of hands I thought was pretty overwhelming here, and it's pretty obvious this is not a Democrat-versus-Republican thing. I've asked for shows of hands in many different contexts across the world, and across party lines people want tools that are going to help them; we don't want to be replaced.

Ina Fried: I do want to move for a second away from replacement. Let's say we have this tool AI; as we said, it's going to be a big economic disruptor. How do you account for that? Because it seems like right now, most of the way AI is being adopted is corporations using it to get efficiencies. Eventually they're going to have, I think, a lot more unchecked power over human workers than they have today. The wealth-concentration risks are enormous. Is it just that we need some sort of minimum basic income? What types of things do you envision? If we have super-powerful tools, not superintelligence, but really good tools that can do a lot of human work, how do we build a society around
that?

Max Tegmark: Yeah. If it seems confusing that AI discussions range from avoiding human extinction through loss of control to labor displacement, and that AI seems to come into almost every conversation, it's because intelligence itself is of course what created all aspects of our society in the first place, so it's easy to get a little bewildered by it. But I think ultimately it just comes down to where the power is on this planet. We've seen some societies, I grew up in Sweden, for example, where the powers that be decided to actually use technology to make sure everybody had a good life, and there were other places, like pharaonic Egypt, where all the power belonged to one dude who decided to build really big pyramids. So there's a spectrum we can choose between here. You can probably guess which side of the spectrum I personally prefer, but it's clear that we humans have options. And if we can massively improve productivity and have this abundance of goods and services that machines can help us make, I mean, if we can't even figure out how to share that in a way that everybody gets better off, shame on us, I say.

Ina Fried: Yes, I would agree, and it seems like so far, shame on us, if you look at the benefits of technology thus far. Though correct me if I'm wrong, but I do imagine things that are purely knowledge-based, medical knowledge getting much more broadly diffused, education getting much more broadly diffused; I see a lot of benefits in those areas.

Max Tegmark: Yes, maybe, but the direction we're driving right now, toward this cliff, is also one of increasing power concentration, which I think you're not a huge fan of. It used to be that tech companies were just one cluster; there were oil companies that were very powerful; there were other sectors. Now all the biggest companies in the S&P 500 are tech companies, and it's pretty clear that's never going to reverse; future power concentration will all go into the tech companies. So it's a question we face in all countries and all societies: do we want a bunch of unelected tech executives making ever more impactful decisions for everyone, or should we have a more democratic way of running these things? I think putting some basic safety standards in place, saying that these tech bros can't just do whatever they want, is a good first step, because I find the current situation kind of absurd. If you want to open a sandwich shop in San Francisco, you can't sell your first sandwich to a customer before the health inspectors come to check your kitchen for rats and such. But if you have a company in San Francisco and you've just built superintelligence, you can unleash it without even calling the health inspector or any other inspector. There are just no rules. So step one is: let's get some standards in place, at least.

Ina Fried: All right, well, I think unfortunately we're going to have to leave it there, but we're going to be having a ton of discussions over the next week with a lot of interesting folks from AI, and I think you've given us a great starting point. So everyone, please thank Max Tegmark.

Max Tegmark: Thank you, it was fun.