FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

FRY. Is it definitely possible to contain an AGI though within the sort of walls of an organization?

HASSABIS. Well that’s a whole separate question um I don’t think we know how to do that right now. (31:41)

It has been a few years since Google DeepMind CEO and co-founder, Demis Hassabis, and Professor Hannah Fry caught up. In that time, the world has caught on to artificial intelligence—in a big way. Listen as they discuss the recent explosion of interest in AI, what Demis means when he describes chatbots as ‘unreasonably effective’, and the unexpected emergence of capabilities like conceptual understanding and abstraction in recent generative models. Demis and Hannah also explore the need for rigorous AI safety measures, the importance of responsible AI development, and what he hopes for as we move closer towards artificial general intelligence.

TRANSCRIPT

(31:58) what about intermediary well intermediary I think we have that we have good uh ideas of how to do that so uh you know one would be things like a um secure sandboxing so you test that’s what I’d want to test the agent behaviors in is in a game environment or a version of the internet that’s not quite fully connected right so there’s a lot of uh security work that’s done and known you know in this space in in fintech and other places so we’d probably borrow those ideas and and then build those kinds of systems and that how we would test the early um prototype systems but um that’s we also know that’s not going to be good enough to contain an AGI something that’s potentially smarter than us so I think we got to understand those systems better so that we can design the protocols for an AGI um when that time comes we’ll have better ideas for how to contain that potentially also using AI systems and tools to monitor uh the next versions of the AI system so one the subject of safety because I I know that you a very big big part of the AI safety Summit AT blessy Park in 2023 which is of course hosted by the UK government and and from the outside I think a lot of people just say the word regulation as though it’s just going to come in and fix everything but what is your view on how regulation should be structured well I think it’s great that governments are getting up to speed on it and involved I think that’s one of the good re things about the the recent explosion of interest is that of course governments are paying attention and I think it’s been great UK government specifically who I’ve talked to a lot and us as well they’ve got very smart people in the Civil Service staff that are um are understand the technology now to to a good degree and it’s been great to see the AI safety institutes being set up in the UK and us and I think many other countries are going to follow so I think these are all good precedents and protocols to settle into again before the stakes get really high right so this is a sort of proving stage again as well um and I do think International cooperation is going to be needed ideally around things like regulation and guard rails and deployment Norms so um because AI is a digital technology very much so you know it doesn’t it’s hard to contain it within National boundaries right so if the UK or Europe does something but or even the us but China doesn’t does that really help the world as oppos you know when we start getting closer to AGI not really so I think my my view in it is you’ve got to be uh because the technolog is changing so fast um we’ve got to be very Nimble and and lightf footed with regulation so that it’s easy to adapt it to where the latest technolog is going if you’d regulated AI 5 years ago you’d regulated something completely different to what we see today which is geni and but it might be different again in five years it might be these agent based systems that are the ones that are carry the highest risk so right now I would you know recommend a sort of beef up existing regulations in in domains that already have them Health transport so on I think you know you can update them for AI for an AI world just like they were updated for mobile and internet that’s probably the first thing I do while doing a watching brief on you know and making sure you understand and test this the frontier systems and then as Things become clear and sort of um more clearly obvious then um start regulating around that uh you know maybe in a couple years time would make sense one of the things we’re missing is again the benchmarks the right test for capabilities that what we’d all want to know including the industry in the field is at what point are capabilities posing some sort of big risk and and there’s no answer to that at the moment right beyond what I’ve just said which is Agent based capabilities is probably a next threshold but there’s no agreed upon test for that you know one thing you might imagine is like testing for deception for example as a capability you really don’t want that in the system because then you can’t rely on anything else that it’s reporting right so um that would be my number one uh emerging capability that I think you know would be good to test for but there’s many uh you know ability to achieve certain goals ability to replicate uh and there’s quite a lot of work going on on this now and I think the safety institutes which are basically sort of government uh agencies I think they’re working I think it’ be great for them to do a lot you know to push on that as well as well as the labs of course contributing what we know I wonder in this this picture of the world that you’re that you’re describing what’s the place for institutions in this I mean if we get to the stage where we have AI that’s kind of supporting all scientific research is there still a place for great institutions yeah I think so look well there’s there’s there sort of the stage up to AGI and I think that’s got to be a cooperation between Civil Society Academia government and and the industrial Labs so I think I really believe that’s the only way we’re going to get to the to the to the sort of final stages of this now if you’re asking after AGI happens you know that maybe that is what you’re asking then AGI of course one of the reasons I’ve always wanted to build it is then we can use it to start answering some of the biggest most fundamental questions about the nature reality and physics and all of these things and Consciousness and so on it depends you know what form that takes whether that would be a human uh expert combination with AI I think that will be the case for a while uh in terms of discovering the next Frontier so um like right now these systems can’t come up with their own conjectures or hypotheses they can help you prove something and I think we’ll be able to prove you know gold get gold medals on International maass Olympia things like that but maybe even solve a famous conjecture think we’re that’s within reach now but not they don’t have the ability to come up with ran hypothesis in the first place right uh or gen relativity so that’s really was always my test for um maybe a true artificial general intelligence is it’ll be able to do that or invent go you know and and so we don’t have any systems we don’t really know how even probably you know know how we would design in theory even a system that could do that you know the computer scientist Stuart Russell so he told me that he was a bit worried that once we get to AGI it might be that we all become like the Royal princes of the past you know the ones who never had to ascend the throne or do any work but just got to live this life of unbridled luxury and have no purpose yeah so that’s that is the interesting question is it maybe it’s beyond AGI it’s more like artificial super intelligence or something sometimes people call it ASI but then we should have you know radical abundance and assuming we you know make sure we distribute that you know fairly and equitably then we will be in this position where um you know we’ll have more freedom to choose what to do and um and then meaning will be a big philosophical question and I think um we’ll need philosophers perhaps theologians even to start thinking at Social scientists that they should be thinking about that now what what brings meaning I mean I still think um there’s of course self-actualization and I don’t I think we’ll all just be sitting there meditating but but but but maybe we all be playing computer games I don’t know but is that a bad thing even or or not right who who knows I don’t think the princes of the past came off particularly well no traveling the stars but then there’s also you know extreme sports people do why do they do them I mean uh you know climb Everest all these I mean there’ll be you know but I think it’s going to be very interesting and and that I don’t know but that’s that’s kind of what I was saying earlier about the it’s under appreciated what’s going to happen you know going back to the hype near term versus far term so if you want to call that hype even it’s it’s definitely underhyped I think the amount of transformation that will happen I think I think it will be very good in the limit we’ll cure lots of diseases and or all diseases you know solve our energy problems climate problems um but then the next question comes is is is their meaning so bring us back like slightly closer to to AGR rather than than superintendents I know that your big mission is to to build um artificial intelligence to benefit everybody but how do you make sure that it does benefit everybody how do you include all people’s preferences rather than just the designers yeah I think you’ve got to I think what’s going to have to happen is I mean it’s impossible to include all preferences in one system because by definition people don’t agree right we can see that in unfortunately in the current state of the world countries don’t agree um governments don’t agree um we can’t even get agreement on obvious things like like dealing with the climate uh uh situation so I think it’s that’s very hard what I imagine will happen is that you know we’ll have a set of safe architectures hopefully that um personalized AIS can be built on top of and then everyone will have you know or or different countries will have their own preferences about what they use it for uh what they deploy it for what they you know what can and can’t be done with them but overall and that’s fine that’s for everyone to individually decide or countries to decide themselves just like they do today but as a society we know that um there’s some provably safe things about those architectures right and then you can let them proliferate and and so on so I I I think that we going to kind of got to get through the eye of a needle in a way where um we got to as we get closer to AGI we’ve probably got to cooperate more ideally ideally internationally uh and then get make sure we build agis in a safe architecture way because I’m sure there are unsafe ways and I’m sure there are safe ways of building AGI uh and then once we get through that then we can sort of open the funnel again and everyone can have their own personalized pocket AG if they want what a version of the future okay but then in terms of the safe way to build it I mean are we talking about undesirable behaviors here that might emerge yes undesirable emergent behaviors um uh capabilities that the deception is one example that that you don’t want um value systems you know we got to understand all of these things better what kind of guard whs work uh not circum ventable and there’s two cases to worry about there’s the there’s bad uses by by bad uh individuals or or nations so human misuse and then there’s the AI itself right as it gets closer to AGI doing going off the rails so that and I think you need different solutions for those two problems um and so yeah that’s that’s what we’re going to have to contend with as we get closer to uh uh building these Technologies and also just going back to your benefiting everyone point of course what what I’m you know we’re showing the way with things like Alpha fold and isomorphic I I think we could you know cure most diseases within the next decade or or two if uh AI drug design works and then they could be personalized medicines where it minimizes the side effects on the individual because it’s it’s mapped to the the person’s individual illness and their individual metabolism and so on so these are kind of amazing things um you know clean energy renewable energy sources you know Fusion or better solar power all of these types of things I think they’re all within reach and then that would sort out water access because you could do desalination everywhere so I just feel like um this enormous good is going to come from uh these Technologies um but we have to mitigate the risks too and one way that you said that you would want to mitigate the risks was uh that there would be a moment where you would basically do the scientific version of Avengers assembl yes sure Terren to on exactly bring on down yeah exactly is that still your plan yeah well I think I think I think so I think if we can get the international cooperation you know love there to be a kind of international C basically for AI where you get the top researchers in the world you go look let’s focus on the final few years of this Pro you know AGI project and get it really right and do it scientifically and carefully and thoughtfully at every every step the final sort of steps I still think that would be the best way how do you know when is the time to press the button well that’s that’s the big question because you you can’t do it too early because you would never be able to get the Buy in to do that a lot of people would disagree today people disagree with the risks right you see very famous people saying there’s no risks and then you have people like Jeff Hinton saying there there’s lots of risks and you know I’m I’m I’m in the middle of that I wanted to talk to you a bit more about Neuroscience um how much does it still Inspire what you’re doing because I noticed the other day that Deep Mind had unveiled this computerized rat with a with an artificial brain that that helps to change our understanding of of how the brain controls movement but in the first season of the podcast I remember we talked a lot about how deep mind takes direct inspiration from biological systems is that still the core of your your approach no it’s evolved now because I think we’ve got to a stage now in the last I would say two three years we’ve gone more into an engineering phase uh large scale systems you know massive uh uh training architectures um so I would say that the influence of a of of Neuroscience on that is a little bit less um it may come back in so any time where you need more invention then you want to get as many sources as possible Neuroscience would be one of those sources of of ideas um but when it’s more engineering heavy then um I think that takes a little bit more of a backseat so maybe more applying AI to Neuroscience now like you saw with the virtual rat brain uh and I think we’ll see that as we get closer to AGI using that to understand the brain um I think it’ be one of the coolest use cases for AGI and science I guess this stuff kind of goes through phases of like the engineering CH intervention CH it’s done it’s part it’s you know now and it’s it’s been great and we still obviously keep a close track of it and take any other other ideas too okay um all of the pictures of the future that you’ve painted um are still anchored quite in reality but I know that you’ve said that you really want um AGI to be able to peer into the mysteries of the universe down at the plank scale yes like kind of subatomic Quantum World um do you think that there are things that we have not even yet conceived of MH that that might end up being possible I’m talking wormholes here completely yes I love wormholes to be possible I I think we there is a lot of probably misunderstanding I would say still things we don’t understand about physics and the and the nature of reality and um you know obviously with quantum mechanics and unifying that with you know gravity and all of these things and there’s all these problems with the standard model so I think there’s there’s there and string theory you know I mean I just think this giant gaping holes in physics phys all over the place and if you you know talk to my physics friends about this and there’s a lot of things that don’t fit together um I don’t really like the Multiverse explanation so I think that um it would be great to uh come up with new theories and then test those on massive apparatus perhaps out in space um at these these tiny qu you know the reason I’m obsessed with plank scale things plank time plank space you know is is uh uh because that seems to be the resolution of reality right that in a way the kind of smallest Quant you can break anything into so that feels like the kind of level you want to experiment on if you had powerful um uh apparatus perhaps designed or enabled by having AGI and radical abundance you need both so to be able to afford to build those types of experiments the resolution of reality what a phrase what so is in like the resolution that we’re at at the moment sort of human level is just an approximation yes that’s right and then we know there’s the atomic level where below that the plank level which as far as we know is the smallest resolution one can even talk about things and so that to me would be the resolution one wants to experiment on to really understand what’s going on here I wonder whether you’re also envisioning that there’ll be things that are beyond the limits of human understanding AGI will help us to to uncover that actually we’re just not really capable of understanding and then I sort of wonder if if things are are unexplainable or un understandable are they still falsifiable yeah well look I mean these are great questions I think there will be a potential for an AGI system to understand higher level abstractions than we can so through again through neur going back to Neuroscience we know that you know it’s your prefrontal cortex that does that and there’s sort of up to about six or seven layers of of indirection you know one could take you know this person’s thinking this and I’m thinking this about that person thinking this and so on and then we we sort of lose track but um I think an AI system could have an arbitrarily sort of large prefrontal cortex effectively so you could imagine higher levels of abstraction and patterns that it will be able to see about the universe that we can’t really comprehend or hold in mind at once and then I think the from in terms of explainability point of view the way I think that is a little bit different to other philosophers who’ve thought about this which is like we’ll be like to an closer to an ant and then the AGI right in terms of IQ but I think that’s the way to think of it I think you know it’s it’s we We are touring complete so we’re sort of a you know a full general intelligence as ourselves or be a bit slow because we run on slow machine and we can’t you know infinitely expand our own brains but um we can in theory on given enough time and and memory uh understand uh anything that’s computable and so it I think it will be more like uh you know Gary Kasparov or Magnus Carson playing an amazing chess move I couldn’t have come up with it but they can explain it to me why it’s a good move so I think uh that’s what an AGI system will be able to do you said that deep mind was a 20-year project uh how far through are we are you are you on track I think we’re on track yeah crazily because usually 20e projects stay 20 years away but uh yeah we’re a good way in now and I think we’re 20 years is 2030 for yeah so I think I would the way I say is I wouldn’t be surprised if it comes in the next decade so I think we’re on track that matches what you said last time you haven’t updated your prior exactly amazing yeah deis thank you so much absolute Delight absolute Delight as always so fun to talk as always as well thank you okay

I think there are a few really important things that came out of that conversation especially when you compare it to what Demis was saying last time we spoke to him in 2022 because there there have definitely been a few surprises in the last couple of years the way that these models have demonstrated a genuine conceptual understanding is one this this real world grounding that came in from language and human feedback alone we did not think that that would be enough and then how interesting and useful imperfect AI has been to the everyday person Demis himself there admitted that he had not seen that one coming and that makes me wonder about the other challenges that we don’t yet know how to solve like long-term planning an agency and robust unbreakable safeguards how many of those which we’re going to cover in detail in this podcast by the way are we going to come back to in a couple of years and realize that they were easier than we thought and how many of them are going to be harder and then as for the big predictions that Demis made like cures for most diseases or in 10 or 20 years or or AGI by the end of the decade or how we’re about to enter into an era of abundance I mean they all sound like Demus is being a bit overly optimistic doesn’t it but then again he hasn’t exactly been wrong so far you’ve been listening to Google deep Minds the podcast with me professor Hann fry if you have enjoyed this episode hey why not subscribe we have got plenty more fascinating conversations with the people at The Cutting Edge of AI coming up on topics ranging from how AI is accelerating the pace of scientific discoveries to addressing some of the biggest risks of this technology if you have any feedback or you want to suggest a future guest then do leave us a comment on YouTube until next time

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.