“We’re dealing with something unbelievably transformative, incredibly powerful, that we’ve not seen before. It’s not just another technology… it’s different in category and I don’t think everyone has fully understood that.” — Demis Hassabis

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

18 Feb 2025

Google DeepMind and Anthropic founders, Demis Hassabis and Dario Amodei, are two of the world’s foremost leaders in artificial intelligence. Our editor-in-chief, Zanny Minton Beddoes, sat down with them to discuss AI safety, timelines for artificial general intelligence and whether they fear becoming the Oppenheimers of our time, in a conversation for Visionaries Club.

Zanny Minton Beddoes: This is the AI equivalent of having the Beatles and the Rolling Stones in the same place. We have on stage now two of the handful of people who are going to build this future: Dario Amodei, co-founder and CEO of Anthropic, and Demis Hassabis, co-founder and CEO of DeepMind and Nobel laureate. Let us start with where you are on timelines and definitions of AGI.

Dario Amodei: When we're at the point where we have an AI model that can do everything a human can do, at the level of a Nobel laureate like the one sitting next to me, across many fields; can do anything a human can do remotely; can do tasks that take minutes, hours, days, months: my guess is that we'll get that in 2026 or 2027.

Zanny Minton Beddoes: Demis, do you agree with that? I think you thought this was a bit further away.

Demis Hassabis: We don't disagree about too much; I think the timelines are a little bit longer. The way I would define AGI is a system that can exhibit all the cognitive capabilities humans can, and that's important because the human mind is the only example we know of in the universe of a general intelligence. Of course, how to test that is the big question. The capability I'm really looking for, and why I think it's a little bit further out, maybe a 50% chance in five years' time, so perhaps by the end of the decade, is this: we don't yet have systems that could have invented general relativity when Einstein did, with the information he had available at the time. Another example I give is: could you invent a game like Go? Not just play a great move like move 37, or build AlphaGo, which can beat the world champion, but actually invent a game that's as aesthetically beautiful as Go is. I think it's going to take a little bit longer to get that kind of capability.

Zanny Minton Beddoes: How does the world change when you reach that point? Is this incremental capability enhancement, or is there a kind of threshold moment, when the AIs are smart enough to train other genius-level AIs, so that whoever gets there first has an unassailable lead?

Dario Amodei: I've thought about this, and worried about this, in the context of geopolitics. If one part of the world is able to make AIs that build AIs faster than another part of the world, I think that's an area of worry. I worry in particular that authoritarian states, if they got ahead in this, could imperil everything that we hold dear, and I've always wanted to make sure that doesn't happen.

Zanny Minton Beddoes: This is happening at the same time as we are in the midst of a massive geopolitical shock. We have the United States under the new administration tearing up all kinds of longstanding international norms; we had a very feisty speech from Vice-President Vance yesterday. How do you see the fact that in the coming years we're going to be getting very close to AGI at the same time as, I think it's not controversial to say, we don't have anything like a particularly cooperative international environment?

Demis Hassabis: Governments need to become more aware of what's at stake. This relates to the timelines, and also to your other question about what happens after we have AGI: I don't think people have understood that well enough yet. It's hard for AI to be more hyped than it is right now. I would say it's overhyped in the near term, even though it's super impressive, but still underappreciated in terms of how much it's going to change things in the medium to longer term. Once that's better understood, I think there'll be more of a basis to have international cooperation around it.

Zanny Minton Beddoes: So at some point you think people will wake up and say, actually, we can't just go it alone, and we need to have some global norms?

Demis Hassabis: My hope is, and I've talked a lot about this in the past, for a kind of CERN-for-AGI setup: basically an international research collaboration on the last few steps we need to take towards building the first AGIs.

Zanny Minton Beddoes: Dario, you called this AI summit's declaration a missed opportunity. What did you mean by that?

Dario Amodei: We're on the eve of something that has great challenges. It's going to greatly upend the balance of power. If someone dropped a new country into the world, 10 million people smarter than any human alive today, you'd ask the question: what is their intent? What are they actually going to do in the world, particularly if they're able to act autonomously? I think it's unfortunate that there wasn't more discussion of those issues. There was at the earlier summit held in the UK in 2023, the one at Bletchley Park, and I hope that future summits reclaim this mantle.

Zanny Minton Beddoes: Perhaps a little simplistically, the view in the US right now is: we're going to go hell for leather, with no real focus on regulation, because we need the US to dominate this technology. The view in Europe has been: even though we don't have an awful lot of this technology, we're going to regulate it, and perhaps too much. Demis, where do you think the balance should be, and who's got it more right?

Demis Hassabis: You won't be surprised to hear I think the balance needs to be somewhere in the middle of that. We need to embrace the incredible opportunities that AI is going to bring. I'm especially passionate about areas of science and medicine; I think it's going to revolutionize those fields, as our work on AlphaFold and other things suggests. I think in the next decade, for example, most diseases might be curable with the help of AI assisting drug discovery, and I think it'll help with climate and all these massive challenges. So we've got to embrace that, along with the economic and productivity benefits, and individual countries and regions need to do that. But we've also got to be aware of the risks. The two big risks I talk about are these. First, bad actors repurposing this general-purpose technology for harmful ends: how do we enable the good actors and restrict access to the bad actors? Second, the risk from AGI, or agentic systems themselves, getting out of control or not having the right values or the right goals. Both of those are critical to get right, and I think the whole world needs to focus on that.

Zanny Minton Beddoes: Let me ask a more personal question, of both of you actually, because you are personally leading companies that are likely to be at the forefront of this, so the personal decisions you make are going to shape this technology. Do you ever worry about ending up like Robert Oppenheimer?

Demis Hassabis: Look, I worry about those kinds of scenarios all the time. That's why I don't sleep very much. There's a huge amount of responsibility on the people leading this technology, probably too much. That's why we and others are advocating for new institutions to be built to help govern some of this. I talked about CERN; I think we also need a kind of equivalent of the IAEA, the atomic energy agency, to monitor sensible projects and those that are more risk-taking. Society needs to think about what kind of governing bodies are needed. Ideally it would be something like the UN, but given the geopolitical complexities that doesn't seem very possible. So I worry about all of that all the time, and we just try to do, at least on our side, everything we can within the vicinity and influence that we have.

Zanny Minton Beddoes: Dario, are you sleeping well?

Dario Amodei: Yeah, my thoughts exactly echo Demis's. My feeling is that almost every decision I make feels like it's balanced on the edge of a knife. If we don't build fast enough, then the authoritarian countries could win; if we build too fast, then the kinds of risks that Demis is talking about, and that we've written about a lot, could prevail. Either way, I'll feel that it was my fault that we didn't make exactly the right decision. I also agree with Demis about this idea of governance structures outside ourselves. These kinds of decisions are too big for any one person. We're still struggling with this: as you alluded to, not everyone in the world has the same perspective, and some countries are in a way adversarial on this technology. But even within all those constraints, I think we somehow have to find a way to build a more robust governing structure that doesn't put this in the hands of just a few people.

Zanny Minton Beddoes: Even if you wanted to do that, and right now the geopolitics is not helping, is it actually still possible? And how much has what one might call the DeepSeek moment, the sense that it's much easier to catch up much faster than people realized, changed your thinking about this? Because maybe it's no longer possible for a small group of leading model founders to get together and define the terms.

Demis Hassabis: DeepSeek and these kinds of advances, I think, just show that some sort of international dialogue is going to be needed. These fears are sometimes written off by others as Luddite thinking or deceleration and so on, but I've never heard of a situation in the past where the people leading the field were also the ones expressing caution. We're dealing with something unbelievably transformative, incredibly powerful, that we've not seen before. It's not just another technology. You can hear from a lot of the speeches at the summit that people still regard this as a very important technology, but still just another technology. It's different in category, and I don't think everyone has fully understood that.

Zanny Minton Beddoes: Given that, and given that others don't get it, do you think we can avoid there having to be some kind of disaster before minds are focused? Think about the UN: the UN was born in the aftermath of the second world war. It wasn't as if people simply got together; they created the League of Nations after the first world war, and it didn't work. So what should give us hope that we will actually get together and create this before something happens that demands it?

Dario Amodei: If everyone wakes up one day and learns that some terrible disaster has happened that has killed a bunch of people or caused an enormous security incident, that would be one way for it to happen. Obviously, that's not what we want. So we've worked on demonstrating some of these dangers in the lab. Back in 2023 we did some work on whether AI systems can generate information that you couldn't find on Google or in a textbook, information that could be useful for generating bioweapons: could a terrorist or a non-state actor use this? We came to the conclusion that they're just starting to be able to do that a little bit. It's not really dangerous yet, but each model gets better than the model before, and so every time we have a new model we test it, we show it to the national-security people, and we say: this is the point we're at in terms of where the models are. Similarly, we've done a lot of research on this kind of AI autonomy, the loss-of-control risk. For example, we trained a model to be good and friendly and have positive, pro-humanity values. Then, as an exercise in the lab, we told the model that the people who trained it, us at Anthropic, were secretly evil, and after we did this the model started lying to us. It went through the chain of logic: I'm a good AI; these people are evil. It shows the unpredictability of these systems. I think we need substantially stronger evidence, like ten times stronger evidence, which may or may not exist, because we don't know for sure how real the risks are. But if we're able to demonstrate risks that are really compelling... and I hope we don't; I hope what we show is that actually there's nothing to worry about. If there is something to worry about, I hope we can show it in the lab. If we can't show it in the lab, we may need to see it in the real world, and that would be a very bad outcome, but I guess less bad than if we never find out and there's a much bigger disaster.

Zanny Minton Beddoes: We've focused on the risks here because I think they are important and perhaps underplayed right now, but I don't want to end on that note, because I know that both of you are primarily focused on the tremendous opportunities and world-changing benefits. So let's end with two very concrete things from each of you: one thing that we should all look for in the next year that suggests we are on the path you're looking forward to, and one positive thing that will come from it.

Demis Hassabis: I think in the next year we're going to see agent-based systems, systems that are able to accomplish tasks on their own, achieve things for you, and act in the world, come into their own. I think that will create a whole new category of useful systems: assistant-type systems that can save you time and increase productivity. I think we'll start seeing them used much more widely in everyday life, rather than the fairly niche applications that current systems are used for. And then, further in the future, I hope to see AI systems that are inventive, so they don't just solve a maths conjecture, they actually propose one that's super interesting, or a new theory.

Dario Amodei: I would look at code, and at AI models being able to do AI research. If we're getting to the point where, by the end of this year, we're increasing by 50%, or even doubling, the total factor productivity of producing AI systems, that would be an indicator that the timelines I've indicated are on track. If it's a lot slower than that, then I would definitely shift towards thinking there's a substantial chance that Demis's picture of the world, which I think is still quite aggressive, but a little less aggressive, is more likely to be correct.