FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Streamed live on 21 Jan 2025. LIVE FROM DAVOS: How AI can revolutionise scientific discovery and shape a brighter future for humanity, with Google DeepMind CEO and co-founder Demis Hassabis.

But that was before his Nobel Prize, so it's the right place to start. What was it like?

Well, it's just been a surreal experience, to be honest. It still doesn't feel quite real now, but it's the realisation of a lifelong dream for me. This is what I got into AI for in the first place: to help advance science, to use it as a kind of ultimate tool to advance science, and to help with human health — things like drug discovery, which of course AlphaFold is very central to.

Do you want to explain for a second what AlphaFold is?

Of course. What we got the Nobel for was a program called AlphaFold, which essentially gives you back the 3D structure of a protein just from its amino acid sequence, its genetic sequence. That's really important because everything in your body relies on proteins, and in your body they fold up into 3D structures, and that's what determines their function. So if you're interested in disease, or in how biology works, or you're trying to design a drug, you need to understand the 3D structure of the protein. AlphaFold was the solution to that problem.

How did you celebrate?

It's actually a funny story, because we heard about it unexpectedly — I think it was October 9th. It was unexpected because it was very fast: sometimes it can take 10, 20, 30, 40 years for the Nobel to be awarded for the breakthrough you actually did, and this was just four years. But I had the perfect celebration for me the next day. Some of my friends were in town — some of the best chess players and best poker players in the world — and one of my childhood friends was hosting a poker-and-chess evening at his house with all these world champions, and I was already supposed to be going. So we had one of the toughest home poker games there is in the world. Magnus Carlsen was there, among others. We have a funny picture — I should post it to X, actually. It was a lot of fun, and no, no one lets anyone win in that game. Absolutely not. It was a special day — my idea of fun.

You mentioned that it usually takes much, much longer for this kind of recognition. Was this to you a recognition of AI as well? Because that's what the other two recipients were recognised for.

Well, it seems like that was the message the Nobel committee was sending — obviously with the physics prize as well, to Geoffrey Hinton and John Hopfield for all the well-deserved, fundamental work they had done, and then our work. It was a kind of coming of age, a recognition that AI is now mature enough to really contribute to solving some of the deeper scientific challenges out there. In terms of the timing: it was a surprise, but as I understand it, the Nobel committees always wait to see not just the intellectual breakthrough, the research breakthrough, but whether it has really had impact. Nobel said in his will that the prize is for the greatest benefit of humankind — that's what the Nobel is awarded for, which I think is a great statement. So they wait for practical impact, to see demonstrably whether it is going to affect society for the good, and in a big way. Normally that takes decades, often, even after the breakthrough. What was great about it — for the AlphaFold team, and for the whole of DeepMind and Google — is that it happened so quickly, because there is already recognition of the impact AlphaFold has had. Just in numbers: over two and a half million researchers around the world use it, and we folded all proteins known to science, around 200 million. It would usually take several years — five on average — to do one experimentally; the rule of thumb is that it takes a PhD student their entire PhD to find the structure of one protein, so 200 million would have taken a billion years of PhD time. And we've given it all to the world for free, open for anyone to use. I call it science at digital speed, which is what I think we're going to see more of. We're used to digital technologies disseminating fast around the world, and I think that can start happening with scientific breakthroughs too, and help with things like medicine and health.

Let's get to that. What breakthroughs are you actually working on? That's what everybody wants to know. You may not want to talk about specific diseases, but what are the areas where you think we will see breakthroughs?

Well, we're continuing to invest in massive ways in two things. One is the fundamental research: continuing to build better models of biology, chemistry, physics, mathematics, the weather — all these types of models, using our general AI systems. That's one track, with things like AlphaFold 3, the new advance on AlphaFold, which can deal with dynamics and interactions. And then we're doing practical things too. We spun out a new company, Isomorphic Labs, to build on AlphaFold and tackle the adjacent technologies you need to do drug discovery. Knowing the 3D structure of a protein involved in some kind of disease is only one part of the puzzle — an important part, but only one part. You need to design the drug compound, the molecule; then you need to make sure it's not toxic, that it's soluble, that it has all the right properties. So you actually need lots of other models adjacent to it, in some ways just as complex as AlphaFold. Put them all together and you can revolutionise the whole drug discovery process, which usually takes an average of five to ten years for one drug. Maybe we could accelerate that by 10x, which would be an incredible revolution in human health.

So when do you think there will be a drug out there produced by Isomorphic — with AI as a core part of it?

Well, we have some great partnerships with Eli Lilly and Novartis, which we announced last year, and we're working on some drug programmes with them — some really tough ones. They gave us some of their hardest targets, ones a lot of their chemists were not able to find good candidates for, and we like those. Then we have our own internal drug programmes too, and we're looking at very big therapeutic areas. We're somewhat agnostic about which area, because these tools are very general and could apply to any type of disease area, but we're looking at oncology, cardiovascular, neurodegeneration — all the big disease areas. I think by the end of this year we'll have our first drugs in the clinic.

By the end of this year? OK, I think that's a scoop — I hope the FT is writing it, not some of the other journalists here.

I know that your main interest is science, but you are now responsible for everything AI at Google. You've had quite a year. Last year, for example, a lot of people would have said Google is still behind — OpenAI is the big story. This year you've had several breakthroughs. What was the most significant one for you?

We had a very productive year last year, and we're trying to build on that momentum. I think all of you saw our end-of-year releases a couple of months ago, where we released a whole suite of new models. Our latest model is Gemini 2.0; the Flash version is the smallest version — super powerful and super efficient for its size, so very good for scaling to billions of users.

It had some very good evaluations.

Yes, it's had really good evaluations, and we're building bigger versions of it, so we'll be releasing those soon. But then we had a lot of other really cool models. We had Veo 2, our state-of-the-art video model — some of you may have seen some of the videos it can produce. It still astounds me how accurately it can model physics. There's a sort of canonical test for these video models, which may sound funny to some of you: can you generate a video of someone chopping a tomato? It sounds crazy, but it means actually slicing the tomato with the fingers and the knife in the right place, and the slices don't just magically come back together. Veo does all that super accurately, and I think it's the first model that does. For us it's the beginning of what we call a world model — a model that doesn't just understand language but can understand the richness of the world, its spatial and temporal aspects. I think that's what we're going to need to build on to have what we call universal assistants — a digital assistant that can help you in your everyday life and be useful.

And agents.

Yes, those are the agents we're building towards. Our project there is called Project Astra. It's a research prototype right now, in a sort of trusted-tester beta.

When do you think consumers will be able to try that out?

We'll see how the tests go, and there's still lots of research to be done on it, but I think at some point later in the year we'll have a version that everyday consumers can use. I'm very excited about the way those technologies are going, and I think we now have leading models in pretty much most of these categories.

So let's talk a little bit about the competition. One of your competitors now has a modern LLM that can also reason. Does Gemini 2.0 reason, and how do you get to that?

Well, we have our own version of that, which in our case we call a thinking model — but yes, you can call it a reasoning model. It's one of the most active areas of research right now in all the leading labs: can you take a base model — you've pre-trained it, you've post-trained it — and now at test time, at inference time, allow it to do more thinking? It can go back and introspect on its own answer, maybe use tools like search to ground or double-check parts of the answer, and then output it. This is obviously one way to address quite a lot of the flaws in the current models, like hallucinations, so it could be a very powerful technique. And there's a scaling law for that too: the more time you spend thinking, potentially the more accurate the answer will be. Now, don't forget we have over a decade of this — we basically pioneered this whole area with AlphaGo and all the games work we did in the early days of DeepMind. Those systems were all agents: they all had models, and then thinking — search — on top of that. So we're bringing that back in now; I always knew it would come back to the forefront at some point. The difference now is that instead of putting search on top of a model of a game, which is relatively simple, you have to put it on top of a world model, which is obviously far more complicated — including language, whose statistics you have to model and learn. So planning is a lot harder. If you think about a game like Go, which AlphaGo cracked, or chess programs, the rules are very simple to understand and learn; they're very logical and very consistent. The real world, of course, is very messy. Say your world model has only a 1% error in it — and actually, mostly they have something more like 10% error. If you start thinking and reasoning over 100 steps, even a 1% error compounds, so after 100 steps you might be in a totally random, non-useful part of the space. The bar for the accuracy and reliability of models that need to think and reason over that many steps is therefore very high.

This is why I wonder about reasoning, because one of the basic issues with the current LLMs that we're all using is the question of hallucinations, which I'm sure you're asked about a lot. And there is a way to overcome hallucinations, which is essentially to have the LLM fact-check its own answers. Yet that is not necessarily what is happening. Why?

Well, actually, it is happening. There are many ways people are trying to address the factuality point, and we're very pleased that the Gemini 2.0 models are much better on factuality than our previous generation. We're improving on that all the time, and it's something we're very focused on, especially being Google and DeepMind. And of course, if you want to do science with these models, which is ultimately what I'd like to do, you really need to be accurate — you can't be hallucinating citations or papers. So in some ways science is a great test ground for it.
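Hassabis's point above about errors compounding over long reasoning horizons can be made concrete with a few lines of arithmetic. The sketch below is an editor's illustration, not a description of any real system: it assumes each reasoning step fails independently at a fixed rate (the 1% and 10% figures are the ones he cites; the independence assumption is ours).

```python
# Illustrative sketch: how a small per-step error rate compounds over a
# long planning horizon, assuming steps fail independently (a toy model).

def on_track_probability(per_step_error: float, steps: int) -> float:
    """Probability that every one of `steps` reasoning steps succeeds."""
    return (1.0 - per_step_error) ** steps

for err in (0.01, 0.10):
    p = on_track_probability(err, 100)
    print(f"{err:.0%} error per step over 100 steps -> "
          f"{p:.2%} chance of staying on track")
```

Even a 1% per-step error leaves only about a 37% chance that a 100-step chain stays coherent, since (0.99)^100 ≈ 0.366, and at 10% the chance is effectively zero — which is why he argues the accuracy bar for world models must be so high.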
to do it you can address it with just um training your your objectives your your you know the pre-training step right making like filtering the misinformation out of that and stuff like that then there’s the there’s the next step which is you can use tools so that um you actually call Google search as part of the fact checking well what better way to fact check right than actually use search as a tool so that’s something of course we of course we it’s native for us we would be experimenting with we already have a very good uh accurate system Google search let’s use that in in the loop of the the system but of course the system has to know learn how when to call it right when when what is the right thing to call it for so there’s still a learning that has to happen there it’s not automatic and then the final thing is the reasoning you know the more time you give the system to reason and allow it to backtrack and and and not just output the first thing it’s it’s up with it’s going to be more accurate some people have suggested to me that you don’t you actually want hallucinations because they are the creativity part do you agree with that well for in certain domains that may be true that’s one way but then you would want you’d want it to be intentional right so you want you want to be able to decide sort of flag yeah now I’d want you to be creative or you know obviously we should discuss what creativity really means but but but but then you know hallucination could be one way to to at least get you into into a new space but you want to you’d want to you’d want that to activate that almost as a feature rather than have something that happens accidentally so just um because we’re on llms how do you use Gemini actually I use and how and what do you tell your kids how do you tell your kids to use it yeah um so well my kids actually like using the um the creative tools the image generation tools things like that um and uh I personally use things like um things that are 
built on Gemini so notebook them I think I don’t know if you know what that some of you know what that is it’s basically you put in whatever research you’re interested in you P papers or or or um websites or whatever it is that you’re that you’re trying to understand and it will generate you a podcast a really engaging one it’s kind of you wouldn’t think it would work but it’s actually you know you I initially thought surely you just want um the AI system to summarize it for you and then maybe read it out to you but actually it’s way more engaging if it’s two AI critiquing it strangely and it’s almost a new for me it’s almost a new a brand new modality for learning information and then now the later system you can actually interrupt the podcast and ask the the the the the host questions so if you want to steer it into so if you haven’t tried it I recommend you all try it it’s amazing so I use it actually pretty much every day to um summarize some new area of research that uh just to break the ice of it and just give me the gist of it before and so I can decide if I want to Deep dive into it or not and you’ve also talked um a lot in the past about everything you’re doing is in the pursuit of AGI yeah um I’m still not convinced of what is the problem that we are trying to solve what is the problem you’re trying to solve that you feel that need to get to AI well and then I’m going to ask you about the your time frame well the the the the problem I’m trying to answer is I want to understand the fundamental nature of reality and all the biggest questions I’ve been I don’t know why I’ve been fascinated that since I was a but that’s what I’ve I’ve always felt there’s this deep mystery to the universe usually you go into physics if you if you’re interested in that and and I love physics was my favorite subject school but I kind of decided after reading all the greats the Fineman the winbergs and you know in search of the final theories all of these UniFi theories of 
physics that perhaps the problem was we needed a bit of help even the smartest experts we needed a tool to help us and and and AI for me was my answer to my expression of what I I thought I could cont to that search for meaning which I think is the you know deep-seated human need and so and so in the limit if we really want to understand you know the the nature of physics and Consciousness and dreaming and time and I don’t know why more people don’t worry about these things because I worry about all the time there’s all these things we interact with all the time we don’t even know what time is isn’t that incredible I mean you know that’s but that’s part of the mystery of life it is but but we we are very curious creatures and so when that’s what science is about and and philos philosophy um so it’s more you know it’s science and philosophy and I love philosophy and I think we need new great philosophers if there any philosophers listening I think there is a need for the next C today to understand where we’re heading into and um and I hope some brilliant new philosophers working on that and and and I think to understand the the full nature of reality uh including all the physics you would need something of the level of AGI and what is the time frame you’re looking at is it 10 years is it 5 years is it 50 years there’s a lot of debate about this at the moment I don’t usually give sort of time scales but but but it partly depends on how you define it we’ve had a very consistent definition from the beginning of of Deep Mind for the last 15 20 years and and which is you know a system that is capable of of exhibiting all the cognitive capabilities that humans have and the reason that’s important is the the human mind is the only example we know of in currently in the universe that exhibits general intelligence right and and so so if you think you have general intelligence not just a narrow one you you need to be at least able to do all of the um uh things the human mind 
can exhibit you know including things like creativity and so so that kind of AGI I think we are you know five to 10 years away from and maybe one or two new breakthroughs are still missing I think you’ve probably thought about this a lot but who is supposed to make the decision on whether we reach AGI is it the scientists the AI experts or is it governments and societies I think where are they in this in this debate because they don’t seem to have much of a voice well they we try I think it needs to be all of society right all the stakeholder Society I think it needs to be of course the the the the industrial Labs um Academia uh Civil Society the RO the eies you know uh and then government and um and and and then when you think about products then all the all the and deployment of the technology then all the people that would have potentially affect and so uh we try to do our bit on that at Google and a Deep Mind of like engaging uh as wider stakeholders as possible we’ve we’ve we’ve convened like AI science forums we I talk all the time to all theem um roal Society you know and so on National Academy in the US and and um but I think uh we maybe don’t have the right institutions you know what I would like to see is I mean if if the somewhere like the UN was in better shape we need to have international discussion about this because it’s also not just a Nationals thing it has you know if we think about regulation or other things it’s got to be an international agreement that’s why I’m I’m I’m I’m pleased to see things like this AI Summits you know the first one in the UK the next you know there’s one in South Korea then after that and then Paris next month which I’m going to go to uh and I think we need to accelerate that pace of debate and discussion so that we can um discuss how we want to deploy these systems what we want to use them for how we should share the the the benefits of that as widely as possible um there’s and then and then the intendent risks and how 
to mitigate those at the same time as facilitating uh all the benefits what are what are the risks that you think about and then we’ll talk about D regulation or rather D I mean look I think there’s two big broadly speaking there’s two big risks I worry about one is um Bad actors uh repurposing general purpose systems for harmful ends right so that’s that’s one whole class of uh age-old class of of problem you know transformative techology that’s why you’re not a you’re not a fan of Open Source well I am so that’s much more nuanced so we’re huge supporters of open science and open source Alpha fold is completely open source uh and we publish you know we half of the the modern the modern industry depends on you know 189% of that I would say is on the is built on the published work of Google in Deep Mind Over the last decade things like Transformers Al alphao all of these ideas which we put freely out there so it for sure um that’s how the scientific discourse works and progress gets made at the maximum speed is by sharing information the problem is that with this technology uh unlike most Technologies it’s so transformative and it’s super general purpose so it can be it can be sort of um reconfigured uh and the more gentic a system is the more easy it would be to reconfigure an end system for something else so that’s one class a problem so how do we so how do we this is why I say to the open source um Advocates is I’m I I I would love that to be the case and facilitate as many of the good use cases as possible but as these systems get more and more powerful how are we going to guard against the harmful uh cases because once you make it open source there’s no way of pulling that back at least with an API you can put it out there and if you discover later something something bad can be done with it then you you can pull it back you can shut off the access and so on the other reason this is this is so complicated with AI is because of these emergent it’s an emergent 
technology it’s not a uh something you can just just put out and you can stress test to the Limit and then you know exactly what it can and can’t do right a car engine won’t suddenly generate create in of itself some new capability you didn’t know right so this is a very different type of machine or tool or system to we’ve had before and even if you test it for six months you may still miss when you put it out in the world and billions of people use it there may be some some clever person somewhere some clever kid in a garage could figure out some new technique some new combination of things that might have a different capability so that’s one and then the second thing is the risk from the technology itself which we sometimes call AGI risk that’s not today that’s as we get closer to AGI but you know depends again on your definition way we will get closer to AGI we will but it’s not just my way it’s just gonna you know it’s the it’s the it’s it’s the kind of if you look at if you study The History of Science just seems to be the way if something’s possible and is is valuable to do people will do it and I think we’re past that point now with AI where the genie can be ped batter in the bottle so we have to deal with that and we have to try and make sure and Steward that into the world in as safe way as possible and so that means this is also unique to AI is the AI itself having agency and what kinds of AI should be built and then an international agreement about that um because I think there are many um safe ways to build Ai and if you think of throughout the design space of AGI right there are many ways to build containable understandable AI but there also I’m sure ways to build it not understandable and and not containable and so but and and that’s what we need research on right now but it’s not just a technical question I think the harder part and I’m optimistic actually we will given enough time and enough brain power the smartest people we will solve the 
technical questions around that um but what about the sort of more geopolitical coordination problem of of of um stopping you know prisoners dilemma problems or tragedy the commons type problems and that brings us to Donald Trump uh because well in 24 hours you know he’s he’s come out of various International understandings and and agreement so the prospect of any International understanding in the next four years is really remote but there but there is also likely to be a lot more deregulation rather than regulation does that worry you well I think it’s we have to see what what what the administration actually does so I think um it’s far too early to to to talk about that yet but the the the you know I think I take some heart from the fact there’s lots of people involved with the new Administration who you know are deep technologists um they’ve they understand both the opportunities here and the risks and the risks here um you know a lot of the ones I know they understand what’s at stake the both both on the on the upside and the downside and so I think um you know I think that potentially that could be a good thing you know they’re up to speed with the technology um and hopefully uh uh there’ll be an understanding there to take a a sensible Road where we you know I call it kind of like you know being bold and responsible with this technology right so bold with the kind of opportunities I think we have to do that you know I encourage the UK government to do that in terms of you know we need the extra growth and economic growth and the efficiencies with the health service and applying it to Medicine all these amazing things and curing diseases helping with climate in many ways some of the other Grand challenges that face society today I can’t see how we’re going to solve them unless we introduce new technologies like AI um but you know we need to do that in a in a in a sensible responsible um fashion and I’d Advocate putting the scientific method at the center of 
that it’s interesting because because in in many ways um Elon Musk was for you know uh moratorium at the at the beginning he wanted to slow things down but now he is a competitor as well with Grog um and he’s likely to be a favored you know competitor um that could be that could be problematic Way Beyond the the deregulation that we’re likely to to see or do you think I mean you you you know him I know well early days yeah we’re good friends and we know you know of course there’s competition as well as as as we I’ve talked to him for he was an early investor in deep M I’ve known him for 12 plus years now and we’ve discussed AI from from the very beginning you know I think I got him into AI in the first place and I I look that there’s um this is M there’s much more at stake here than just companies or you know um uh products or things like that you know if we just listen to what we’ve been discussing earlier on you know there’s there’s the there’s the sort of future of humanity and then The Human Condition and and where we want to go as a society um and as a species really and that’s what’s at stake and and I think people understand that um some at least some people understand that and maybe more people need to understand that and what I sometimes say actually is sort of related to that is I think there’s way too much hype about AI in the short term you know all this noise you get on on X and other things about you know it’s AGI next year and what is it and everyone’s sort of losing their minds but we just but actually it’s it’s underrated still underappreciated what’s going to the amount of transformation that’s going to happen with AI in the medium to long term sort of 5 to 10 years I’ve I’ve been surprised that uh B the adoption of AI in business um has been slower than I would have I would have expected and I think a lot of people after uh the first year after Chad GPT came out uh felt that they had to experiment um and they have experimented but they haven’t 
exactly found the right uses yeah I think that’s fair you’re not you’re not surprised by that no that’s exactly where I think we’re at which is these impressive systems that are um useful for Fairly narrow use cases you know like summarizing documents or you know doing a bit of research or or writing proformer you know recommendation letters or something like that right but it’s not that that’s why I don’t think this is AG you know we’re close to AGI yet because if we were you would imagine all sorts of of full workers and really you know having having a set of these things helping you out every day in in all sorts of walk parts of your life right from recommending you cool things to to watch and read and and enriching your life in many ways and we we’re not seeing any of that yet I think we will so I don’t think this this is where I’m talking about the overhype of maybe today and and and and last year’s AI but but it’s you know but it’s coming it’s just not here yet um and um but but actually what’s going to happen in the 5e 10 10 years time scale is going to be Monumental um and I think that’s not understood yet how do you think it’s going to change my industry well I don’t know I don’t know I think there are a lot of Journal maybe be faster to how it won’t but um I I I I think um it’s going to have all sorts of effects I mean you would know better than me I think in terms of the the the writing and and factchecking and information gathering I hope in some ways um AI will help a lot in terms I thought a lot about more to do with social media for example and I just feel like we’re deluged as individuals and consumers with information bombarding us from algorithms they’re not even AI algorithms they’re just normal you know normal pieces of software right and recommendation systems and so I wouldn’t call those AI systems at at the moment and we’re just delus and it’s all trying to gain our attention and so on and I wonder there as well I always think about these 
things as: can we apply technology in service of ourselves as individuals, maybe to protect our minds a bit? Imagine a digital assistant that you can talk to and configure to say: look, today I want a peaceful day in flow with my writing, or programming, or science, whatever it is you're doing, but just check Twitter and all these things for me, and alert me only if something comes up that is urgent for me. At the moment we can't do that; you have to dip into that stormy river to try and find the thing you care about. The problem is you're doing the filtering with the same brain you're trying to keep in flow, so you're affecting your mind immediately by partaking in it. It would be better, in my opinion, to have an AI system that works for you to do that, and that would be part of this digital assistant vision.

I'm going to take a few questions.

You've got a weather model. I'm curious whether you can do climate modelling, and how that problem might be different.

Great question. One of the many things we did last year was create GenCast, which is the most accurate weather-prediction model in the world, and we also put it out open source. We are thinking a lot about climate modelling, which is very different, because there you're dealing with very different types of data, and it's much longer-term than the kind of data you deal with in weather forecasting. So it's related, but I think we'll need a different system for it. That GenCast team is an amazing team, though, and they're thinking next about climate modelling.

Yes, go ahead.

You mentioned wanting some kind of, I guess, not regulatory oversight, but you mentioned the UN earlier as having some say in this. A few years ago you actually talked about possibly even handing AGI over to something like the UN. Back
then, this was before the big race happened; everyone was publishing everything, and it feels like a lot has changed. Do you still feel that way? Do you still think a company should hold AGI if it gets there? If Google, say, gets there, should it be handed over, or will Google shareholders say otherwise?

Well, there are the near-term issues of capitalism and all these sorts of things. But this is one of the things I talk about for the far term, the ten-year timescale, when we have what I would call full AGI. One of the big things economists should be thinking about is what that does to money, to the capitalist system, even to the notion of companies; I think probably all of that changes. So if you think about that, I'm thinking much bigger picture: it's clearly way bigger than a company, certainly bigger than any individual, and, in my opinion, way bigger than a single country. It's for all of humankind; it's going to be at that level. So ideally there would be some institution around that could meet the moment, and that would be where you would put a wise international council of very diverse and smart people from different backgrounds, not just technologists: philosophers, social scientists, writers and so on, who come together. But who's building that institution is what I would ask, and I think we really need it. Ideally it would be something like the UN, but I think that's difficult given the geopolitical tensions and the Security Council; everyone knows what the issues are there. So I think that's needed, but maybe a model could be something like a CERN for AI research. I've been advocating that for a long time: an international research body that takes the last few steps in a very careful, scientifically
rigorous way, with participation from government and academia as well as industry. I would advocate for some kind of structure like that.

Is this what you mean when you say you're a cautious optimist?

Yes. Well, there are two parts to that: the cautious part and the optimist part. The optimist part is easy to explain. I've spent my whole life working on AI because I think it will be the most incredible, beneficial technology ever. I think we can cure all diseases with it. I think we'll come up with new clean, renewable energy sources, and that will give us radical abundance, and then we'll travel to the stars and spread consciousness to the galaxy. That's what I think is going to happen, and it's an amazing future.

We may get to Mars very quickly.

We may get to Mars very quickly, but then we want to go beyond Mars. If you think of Star Trek, or the Culture series, the Iain Banks books...

Which I'm really looking forward to living in, that world.

Yes, exactly; we all would be. It would be an amazing world, and that's where we want to get to. That's the optimist part: you couldn't get a more techno-optimist than me. But on the way there, as I said earlier, there are lots of ways to build AGI in service of humanity, to get us to that end point, which I think would be wonderful. I kind of regard it as spreading human consciousness to the stars, the Carl Sagan type of vision for humanity, and we could be within the grasp of that. But if it goes wrong, the bad-actor case or the AGI-risk case, it could go disastrously wrong. So we've got to thread the needle here, as a society and as a research field, and that's the cautious part. In a situation where there's uncertainty, the other optimist part of me is that I believe in human ingenuity.
We're super adaptable; our minds are amazing. It's incredible what we've done with our monkey brains. Through science we've created the modern world, and it's unbelievable: our brains evolved for hunter-gathering. I think about this every time I fly over to California on a 747: how did we do this? People just take it for granted. I'm looking out of the window of this hunk of metal thinking, how did we do it? If we explained it to our hunter-gatherer ancestors, they wouldn't have believed it. So we've done that, we've come that far, and I think we can do this too. But we need to do it in cooperation, with the right methods, and not in a rush. If you've got the best minds in the world working on it, and I always mention the Terence Taos of the world, the greatest mathematician on the planet, whom I've spoken to about this: I'd recruit all the Terence Taos and have them working on these questions of AGI design. Given enough time, I think we will solve the technical questions. The question for me is whether we will solve the geopolitical and societal questions, because I think that's actually going to be harder.

Maybe you might want to slow down for a few years, then accelerate. There's another question there; yes, go ahead.

Demis, I just wanted to pick up on your comments about agentic AI. I can see many and varied uses of it, but paint us a picture of what happens when you've got lots of these agents scurrying around, trying to interact with each other, particularly in real-world applications, say financial services. Who accepts responsibility for their actions?

Great question. Look, I think the reason agentic systems are going to get built, and are being built, is that they'll be way more useful than non-agentic systems. Today's systems, say the language-model
chatbots, are basically passive Q&A systems: you inject the intent, they give you an answer. But if you say to one of the systems, Gemini or any of the others, give me a recommendation for a restaurant, why would you want to stop there? If it gives you a recommendation, you then want to say, book me the table; it would just be inconvenient not to have that next step. So on a more prosaic, assistant-like level, people are just going to choose agents over non-agents, because they're more effective at saving you time and making your life better. But when you think that through, you get exactly to what you just mentioned. Think it through one, two or three years down the line, and there are millions of these agents, all interacting. You may have agents negotiating with each other on behalf of the vendor and the customer, and then we'll probably need to rethink the web, the web infrastructure and apps. All of that is going to get revolutionized, in my opinion, and that's another underappreciated transformation that's going to happen in the mid term.

Okay, I have one minute. Can I take two questions, here and there? And then you're going to have to answer in one second. Okay, quickly.

I have a quick question; two actually, I'm a bit greedy. There seems to be a race between China and the US. Do you think there will be an impact on the world, a different outcome, depending on which one wins? And second: the big data comes from the big companies, so do you think there might also be a bias in the information, which is a bit scary for the customer, the client, the consumer? Thank you.

Okay, I had two questions and I'll get rid of one, keeping the most important. With your wonderful biological
systems, how soon will it be before somebody comes along and says, design me the perfect pathogen that will kill everybody?

So look, first of all, you don't need an AI system to design pathogens; that's one thing I would say. And with every step we take, and obviously we're at the forefront of this with AlphaFold and other things, we talk to all the biosecurity people. Before we released AlphaFold 2 we talked to 30 biosecurity experts and bioethicists, and also Nobel Prize winners on the biology side, to make sure that the benefits far outweighed any risks, and we made some slight tweaks based on their feedback. But they all came back, universally, saying it was beneficial. As we get more and more advanced systems, we will also have to build detection systems to detect if someone's doing that kind of thing, and that's where open source could be an issue: if you have an API, you can also put the detection system on it and, with more AI systems, immediately flag if somebody is trying to make some kind of pathogen. You could easily build an AI system to classify that and then immediately do something about it: shut them off, call the authorities, whatever needs to happen. So yes, in some ways it's the tip of the spear of dealing with both the good use case, solving all diseases, and the bad-actor use case. Cyber is another place where that's coming to a head right now, and biology is another; cyber, biology and nuclear are the three areas we're looking at specifically.

I'll just ask the China question, because we've run out of time.

Yes. Obviously DeepSeek is a very interesting model that just came out yesterday. This is where I think, whatever is needed, ideally there needs to be international collaboration; it's beyond my ability to predict how that will happen. But as you
mentioned something earlier about a CERN for AI: maybe one of the decisions such a CERN would take, in the absence of a race dynamic, would be to slow down capability research to allow more work, more attention and the best people to go into understanding controllability, safety, value systems, all of this. But the way the technology has gone, it doesn't matter if one group stops, or even one nation, if not everyone is doing that.

Which is the China question.

Yeah, and the race, essentially. You've got to come to some kind of understanding about this fairly soon. I did warn about this 10 or 15 years ago, to various people I won't mention, about what would happen, and unfortunately exactly what I thought would happen has happened.

Okay, on this very cautiously optimistic note, we're going to have to bring this to a close. Thank you, Demis, and thank you to the audience.

Thanks, thanks for the questions.
