FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

CEO of Google DeepMind: We Must Approach AI with “Cautious Optimism” | Amanpour and Company

12,426 views | 16 Aug 2024

Artificial intelligence has the potential to influence issues from climate change to the global economy. Demis Hassabis is co-founder and CEO of one of the world’s leading AI research groups, Google DeepMind. Hassabis tells Walter Isaacson why he takes a cautiously optimistic view of the much-discussed technology. Originally aired on August 16, 2024

TRANSCRIPT (auto-generated):

ANCHOR: Well, next to something that has the potential to influence several of the issues we have discussed in the show so far, from climate change to presidential elections, and that is artificial intelligence. Demis Hassabis is the co-founder and CEO of one of the world's leading AI research groups, Google DeepMind, and he tells Walter Isaacson why he takes a cautiously optimistic approach to the much-discussed technology.

WALTER: Thank you, Bianna. Demis Hassabis, welcome to the show.

DEMIS: Thanks for having me.

WALTER: I think you're in your London office there, and behind you, probably, is that wonderful first edition of Alan Turing's 1950 paper, in which he says, "I propose to address the question: can machines think?" Now we've got a lot of large language models, such as Google Gemini, which you helped create, and ChatGPT from OpenAI. How do we get from a chatbot that can pass the Turing test, fool a person into thinking it's human, to something that's really serious, like artificial general intelligence, AGI, what you call the holy grail?

DEMIS: Well, look, it's a great question, and of course there's been unbelievably impressive and fast progress in the last decade-plus, as you say, getting towards the systems we have today that can pass a Turing test. But it's still far from general intelligence. What we're missing is things like planning and memory and tool use, so that these systems can actively solve problems for us and actually do tasks. Right now what we have are kind of passive systems; we need active systems.

WALTER: Wait, explain to me what planning is. I know you and I do it. How does a machine do it?

DEMIS: We've experimented a lot in the past with planning, actually, using games. One of our most famous programs, back in 2016, was AlphaGo, the program we built to beat the world champion at Go, the ancient game of Go. It involves building a model of the board game and what kinds of moves would be good, and then, on top of that, because that's not enough to play really well, you also need to be able to try out different moves, sort of in your mind, and then plan and figure out which path is the best path. Today's language models don't do that, and really we need to build that planning capability: the ability to break down a task into its subtasks and then solve each one in the right order to achieve some bigger goal. They're still missing that capability.

WALTER: Tell me why the use of games is so important to the development of artificial intelligence.

DEMIS: Games were what got me into artificial intelligence in the first place: playing a lot of chess for the England junior teams, and then trying to improve my own thought processes, led me to thinking about mechanizing intelligence, artificial intelligence. We used games when we started DeepMind back in 2010 as a testing ground, a proving ground, for our algorithmic ideas and for developing artificial intelligence systems. One reason games are so good for that is that they have clear objectives: win the game, or maximize the points you can score. So it's very easy to map out and track whether you're making progress with your artificial intelligence system. It's a very convenient way to develop the algorithmic ideas that now underpin modern AI systems.
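The planning Hassabis describes, building a model of the game and trying out moves "in your mind" before committing to one, is the classic game-tree search idea. Below is a minimal sketch in Python, assuming a hypothetical Game-state interface (legal_moves, apply, is_over, score); AlphaGo's actual planner (Monte Carlo tree search guided by policy and value networks) is far more sophisticated than this plain negamax.

```python
# Minimal lookahead planning over a two-player game tree (plain negamax).
# The state interface here is hypothetical: legal_moves() lists moves,
# apply(move) returns a NEW state, is_over() and score() evaluate the
# position from the point of view of the player to move.

def negamax(state, depth):
    """Value of `state` for the player to move, looking `depth` plies ahead."""
    if depth == 0 or state.is_over():
        return state.score()
    # My best outcome is the move that leaves the opponent worst off.
    return max(-negamax(state.apply(m), depth - 1) for m in state.legal_moves())

def best_move(state, depth=3):
    """Try each legal move on a copied state ("in the mind"), then pick
    the one whose looked-ahead value is highest."""
    return max(state.legal_moves(),
               key=lambda m: -negamax(state.apply(m), depth - 1))
```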
WALTER: I think most of us have now used the chatbots, like Gemini or ChatGPT, but you've talked not only about moving us to artificial general intelligence, in other words the type of intelligence that can do anything a human can do, but also what I guess I'd call real-world intelligence: robots or self-driving cars, things that could take in visual information and do things in the physical world. How important is that, and how do you get there?

DEMIS: It's incredibly important. This idea is sometimes called embodied intelligence, and self-driving cars are one example of it; robotics is another. These are systems that can actually interact with the real world, as you say, the world of atoms, so to speak, and not just be stuck in the world of bits. I think we're going to see huge advances in that space in the next few years, and that's also going to involve this planning capability, the ability to take actions and carry out plans in order to achieve certain goals. And that's not the only area of real-world application. The other area I'm super passionate about, and the reason I have spent my whole career building AI, is applying AI to science, to scientific problems and scientific discovery, including our program AlphaFold, which cracked the grand challenge of protein folding.

WALTER: Tell me a little bit more about AlphaFold. We think RNA, DNA, all these things determine what a protein looks like, but actually it's the folding of the protein. How important and hard was that, and what is it going to do for us?

DEMIS: The protein folding problem is a 50-year grand challenge, one of the biggest in biology. It was proposed in the 1970s by a Nobel Prize winner, Christian Anfinsen, and the idea was: can you determine the 3D structure of a protein? Everything in life depends on proteins. All your muscles, everything in your body, all the functions of your body are governed by and supported by proteins, and what a protein does depends on its 3D shape, how it folds up in the body. The conjecture was: could you predict the 3D shape of a protein based just on its one-dimensional genetic sequence, just a string, what's called the amino acid sequence? If you could do that, it would be really important for understanding biology and the processes in the body, but also for designing things like drugs and cures for diseases, and for understanding, when something goes wrong, how to design a drug to bind to a certain part of the protein. So it's a really foundational, fundamental problem in biology, and we managed to pretty much crack that problem with AlphaFold.
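AlphaFold's predictions are published openly through the AlphaFold Protein Structure Database, run jointly with EMBL-EBI, which is how the millions of researchers Hassabis mentions access them. Here is a minimal sketch of looking one up by UniProt accession; the REST endpoint shown is the documented public one at the time of writing, but treat the exact path and response fields as assumptions that may change.

```python
# Fetch an AlphaFold-predicted protein structure from the public
# AlphaFold Protein Structure Database (DeepMind / EMBL-EBI).
# Endpoint and response fields reflect the documented public API at
# the time of writing; treat them as assumptions that may change.
import requests

def fetch_prediction(uniprot_accession: str) -> dict:
    url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_accession}"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()[0]  # the API returns a list of entries

entry = fetch_prediction("P69905")  # human hemoglobin subunit alpha
print(entry["pdbUrl"])  # URL of the predicted 3D coordinates in PDB format
```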
WALTER: There are so many large language models competing; it's almost like a racetrack in which Google Gemini, yours, is up against OpenAI, and against Grok from xAI, and Meta, I think, has its own, and Anthropic. One of the things that seems to distinguish the latest model of Google Gemini is that it's multimodal, meaning it can look at images and hear words, not just deal with text. Explain that to me, and whether that's a differentiator.

DEMIS: That was one of the key things we did when we were designing our Gemini system: making it, as you said, so-called multimodal from the beginning. What that means is that it doesn't just deal with language and text, but also with images, video, code, and audio, the different modalities we as human beings use and exist in. We've always thought that was critical for AI systems and models. If we want them to understand the world around us, to build models of the world and how it works, and to be useful to us, perhaps as digital assistants, they need a good grounding in and understanding of how the world works, and to get that they have to be multimodal; they have to process all these different types of information, not just text and language. So we built Gemini from the beginning to be natively multimodal; it had that ability from the start. We were envisaging things like a digital assistant, a universal assistant, that can understand the world around you and therefore be much more helpful. But also, if you think about robotics, or anything operating in the real world, it needs to interact with and deal with real-world problems, things like spatial relations and the context you're in. So we think it's fundamental for general intelligence.
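For a concrete sense of what "natively multimodal" means at the API level, here is a minimal sketch using the Python SDK for the Gemini API as it was published around the time of this interview (the google-generativeai package): a single prompt that mixes an image with text. The package name, model id, and call signature are assumptions tied to that moment and may have changed since.

```python
# One multimodal request: an image and a text question in the same prompt.
# SDK surface (google-generativeai, GenerativeModel, generate_content) is
# as published around 2024; treat the specifics as assumptions.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")

image = Image.open("kitchen.jpg")  # any local image
response = model.generate_content(
    [image, "What is happening in this picture, and what should I do next?"]
)
print(response.text)
```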
WALTER: The big news in the past week or two was Meta, Facebook, coming out with Llama, in some ways a competitor to Google Gemini and OpenAI's system, and Mark Zuckerberg, when he introduced it, made a big deal of it being open source. You know that debate better than anybody. Tell me why the full-fledged Google Gemini is not open source, and whether Mark Zuckerberg is right to say this is important.

DEMIS: It's definitely very important. Google DeepMind, and Google in general, are huge supporters of open-source software. We were just discussing AlphaFold earlier; that is open source, and over two million biologists and scientists around the world, in pretty much every country, make use of it today to do their important research work. We've published thousands of papers on all the underlying technologies and architectures required for building modern AI systems, including, most famously, the Transformer paper, the architecture that underlies pretty much all modern language models and foundation models. We very much believe that the fastest way to make scientific progress is to share information; that's always been the case, that's why science works. Now, in this particular case, with AGI systems, I think we need to think about what happens as they get more powerful. Not today's models, I think those are fine, but as we get closer to artificial general intelligence, what about the issues around bad actors, whether individuals or up to nation states, repurposing these same models? They're dual purpose. They can be used for good; obviously that's what I've worked on my whole career, to help cure diseases, maybe help with things like climate change, and advance science and medicine. But they can also be used for harm if incorrectly used by bad actors. So that's the question we're going to have to resolve as a community, a research community: how do we enable all the amazing good use cases of AI, and share information amongst well-meaning actors, researchers and so on, to advance the field and come up with amazing new applications that benefit humanity, while at the same time restricting access for would-be bad actors who would repurpose those same systems to do harmful things? That's the conundrum we're going to have to solve somehow in this debate about open versus closed systems, and I don't think there's a clear answer or consensus yet about how to do that as these systems improve. But of course I congratulate Mark Zuckerberg and Meta on their great new model, and I think it's useful for stimulating the debate on this topic.

WALTER: One of the things that can make an AI system really great is the training data it can use, and you're at Google; you own YouTube. This show will be on YouTube, our segment, pretty soon, and Google Gemini trains on YouTube unless somebody stops it. It can also train on my books; it could read any book I wrote. How do we regulate that, so that Google Gemini can't just take all this data and intellectual property without some deals?

DEMIS: We're very careful at Google to respect all of those kinds of copyright issues and to train only on the open web, whether that's YouTube or the web in general, and then obviously we have content deals as well. This is going to be an interesting question for the whole industry, the whole research industry: how to tackle this going forward. We also have Google opt-outs that allow any website to opt out of that training if it wants to, and many people take advantage of that. Then, in the fullness of time, I think we need to develop some new technologies for attribution, some way of saying that this training input helped, in some fractional way, with some output, and then deriving commercial value from that which can flow back to content creators. That technology is not there yet, but I think we need to develop it. Analogous to that would be Content ID, which YouTube has had for many, many years and which runs very well, letting the creator community benefit massively from the distribution YouTube gives them. I think that's a good example we're trying to follow in the AI space, the way the YouTube ecosystem has developed.
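The opt-out Hassabis mentions works through the standard robots.txt mechanism: Google publishes a crawler token, Google-Extended, that site owners can disallow to exclude their pages from Gemini training without affecting ordinary Search crawling. A minimal example, using Google's documented token name at the time of writing:

```
# robots.txt at the site root: opt all pages out of Gemini model training.
# Google-Extended is Google's documented AI-training control; regular
# Googlebot crawling for Search is unaffected by this rule.
User-agent: Google-Extended
Disallow: /
```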
WALTER: In the fascinating biography of your life, there's something almost as important as being a game player and a game designer: you have a PhD in cognitive neuroscience; you love the human brain. How important is it to understand how the human brain works in order to do AI? And is there something that's always going to be fundamentally different between a silicon-based digital system and the wetware of the human brain?

DEMIS: You're right, I did my PhD nearly 20 years ago now, in the mid-2000s, and back then, and in the early days of DeepMind in the early 2010s, it was very important to take inspiration both from machine learning and mathematics and from neuroscience, from the human brain, as to how intelligence might work. It's not that you want to slavishly copy how the brain works, because, as you pointed out, our brains are carbon-based and our computers are silicon-based, so there's no reason the mechanics should work the same way, and in fact they work quite differently. But a lot of the algorithmic principles, the systems, the architectures, the principles behind intelligence, are common to both. Even in the first early days of neural networks, the thing that underpins all modern AI, they were originally inspired by neuroscience and by synapses in the brain. So the implementation details are different, but the algorithmic ideas were extremely valuable in kickstarting what we see as the modern AI revolution today, including this idea of learning systems, reinforcement learning, systems that learn for themselves very much the way biological systems like our own brains do. And then, ultimately, maybe when we build AGI, we'll be able to use it to analyze our own minds, so that we can understand the neuroscience better and finally understand the workings of our own brains. So I love the whole circle here, this virtuous circle of the two fields influencing each other.
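The reinforcement learning Hassabis points to is also where the brain analogy is tightest: the temporal-difference (TD) error these algorithms compute has long been compared to dopamine reward-prediction-error signals in neuroscience. A minimal sketch of tabular Q-learning, assuming a hypothetical environment with reset() and step(action) methods in the style of common RL toolkits:

```python
# Tabular Q-learning: learning action values from trial and error alone.
# `env` is hypothetical (reset()/step() in the style of Gym-like toolkits,
# with step returning next_state, reward, done). The TD error `delta` is
# the quantity often compared to dopamine prediction-error signals.
import random
from collections import defaultdict

def q_learning(env, actions, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
    q = defaultdict(float)  # (state, action) -> estimated long-run value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Mostly exploit what has been learned; occasionally explore.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = max(q[(next_state, a)] for a in actions)
            delta = reward + gamma * best_next - q[(state, action)]  # TD error
            q[(state, action)] += alpha * delta
            state = next_state
    return q
```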

CNN: Here's something you've said, quote: "Mitigating the risk of extinction from AI should be a global priority." What are those risks?

DEMIS: Look, that was an open letter that I and many others signed, and I think it was important to put it in the Overton window* of things that need to be discussed. Nobody knows the timescales yet, or how worried to be. The current systems, impressive though they are, are still quite far from artificial general intelligence, and we also don't know what the risk levels will be; maybe it will turn out to be very simple to navigate the controllability of these systems. How do we interpret them? How do we make sure that when we set them goals, these more agent-based systems don't go and do something else on the side that we didn't intend, with unintended consequences? There are lots of science fiction books written about that; most of Asimov's books are about those kinds of scenarios. We want to avoid all of that, so that we make sure we use these systems for good and for amazing things: solving diseases, helping with climate, inventing new materials, all the incredible things I think are going to come about in the next decade or so. But we need to understand these systems better, and over that time I think we'll also come to understand the risks, whether of runaway systems producing unintended consequences or of bad actors using these systems in nefarious ways. That may all turn out to be a very low-probability concern, and let's hope that's the case, but right now there's a lot of uncertainty. So, as a scientist, the only responsible way I know to deal with that is to approach it with cautious optimism. I'm very optimistic, obviously, that human ingenuity will collectively work this all out; I'm very confident of that, otherwise I wouldn't have started this whole journey 30 years ago. But it's not a given, right? There are unknowns we need to research and focus on: things like analysis of these systems, so that they're not just black boxes but systems we actually understand and can control, where we can look at how knowledge is represented inside them. Then we'll be able to understand the risks and their probabilities, and mitigate against them. So really it was just a call to action to pay more attention to the risks, alongside all the exciting commercial potential that everyone's wrapped up in, and still to be optimistic, but to approach AI with the respect that such a transformative technology deserves.

WALTER: Sir Demis, thank you for being with us.

DEMIS: Thanks for having me.

*The Overton window is the range of policies politically acceptable to the mainstream population at a given time. It is also known as the window of discourse.

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.