
On a visit to Cambridge in March 2025, Sir Demis Hassabis – who in 2024 became our first alumnus to receive a Nobel Prize – gave this talk on ‘Accelerating Scientific Discovery with AI’. The co-founder and CEO of Google DeepMind, Demis was awarded the 2024 Nobel Prize in Chemistry jointly with his DeepMind colleague Dr John Jumper “for protein structure prediction”. This was in recognition of the major advances made possible by their AI model AlphaFold2, with whose help they have been able to predict the structure of virtually all the 200 million proteins that researchers have identified. In this lecture, he describes his journey from a child chess prodigy whose interest in computers was sparked by using them to improve his chess game, to his time at Cambridge University, to his AI career today. And he talks about how the AI tools Google DeepMind is developing have the capacity to greatly speed up discoveries in areas of science from health to the environment, and more.

Welcome, everybody. I'm Alastair Beresford, the current Head of the Department of Computer Science and Technology, also known as the Computer Laboratory. It's my great pleasure this afternoon to welcome Demis back to Cambridge.

Demis studied computer science here in Cambridge in the 1990s, at a time when the lab was based just next to this lecture hall, and Robin Walker, who I'm pleased to say is here today, was Demis's Director of Studies at Queens' College. I was discussing this earlier with Demis, and we think this is where he had his first Cambridge lecture: maths, at 9:00 a.m. on the first Thursday of Michaelmas term. So it seems a fitting place for him to return to.

Demis had already made several incredible achievements by the time he arrived in Cambridge. He was a chess master and the second-highest-rated under-14 player in the world, and after completing his schooling a year early, instead of backpacking around Europe he took a job in the computer games industry, where he co-designed and was the lead programmer for the computer game Theme Park. After graduating from Cambridge with a first-class degree, Demis returned to the games industry, at first working at Lionhead Studios and subsequently forming his own company. However, there was clearly a passion in him for fundamental scientific research, and so Demis returned to academia, this time to UCL, where he studied for a PhD in cognitive neuroscience, graduating in 2009. He stayed on at UCL until 2011, when he left to co-found DeepMind, an AI research lab which was acquired by Google in 2014. Demis and colleagues at Google DeepMind have gone on to make several seminal contributions to science. Highlights include AlphaGo, the first computer program to beat professional human players at the board game Go, and AlphaFold, a computer program able to predict protein structure. It is Demis's contributions to AlphaFold for which he was awarded a share of the 2024 Nobel Prize in Chemistry.

Alongside his incredible intellectual contributions over this period, he has also been a fantastic supporter of the University, including funding for academic positions and significant support for students from underrepresented groups, both in the Computer Lab and at Queens' College. Demis's passion and support for the next generation of computer scientists is the motivation for our lecture today, and I'm sure he will not only help us understand how to accelerate scientific discovery with AI, but also inspire the next generation of students in the room to change the world too. With that, I would like to welcome Demis to the stage.

Thanks, Alastair, for that lovely introduction. It's so great to be back at Cambridge. I always have a warm feeling on my homecomings to Cambridge, and specifically in this lecture hall. As Alastair reminded me, I think it is the first lecture hall I was in, and it's always been my favorite. I remember telling my old friends, and I see a lot of them here from my Cambridge days, Aaron among them, that one day maybe I'd come back to give a lecture in here announcing AGI, and maybe a robot would walk on and astound everyone. I'm not going to do that today, to disappoint you, but maybe in a few years' time I'll come back and give that lecture. This is an amazing, inspiring place, and I'm going to talk a little about how Cambridge has inspired my whole career, and hopefully it will do the same for many of you students in the room.

For me, my journey in AI started with games, and specifically chess, as Alastair mentioned. I was playing chess from the age of four, very seriously for the England junior teams, and it got me thinking about thinking itself: how does our mind come up with these plans and ideas, how do we problem-solve, and how can we improve?
When you're playing chess competitively at a young age, you're trying to improve that process, and that was fascinating to me, perhaps more fascinating than even the games I was playing: the actual mental processes behind it. In fact, I came across computers and AI for the first time in the context of chess, using very early chess computers like the one on the right here. I think this was my first ever chess computer: a physical board where you had to press the squares down to move the pieces. Of course, we were supposed to be using these chess computers to train opening theory and learn more about chess, but I remember being fascinated by the fact that someone had programmed this lump of inanimate plastic to play chess really well against you, and by how that was done, how someone could program something like that. I ended up experimenting myself in my early teenage years with the Amiga 500, an amazing home computer from the late '80s and early '90s, building those kinds of AI programs myself to play games like Othello. That was my first taste of AI, and I was hooked from then on; I decided very early that I would spend my entire career trying to push the frontiers of this technology.

That then led me to Cambridge, and my three years here were incredibly formative. I went to a comprehensive school in North London where no one had gone to Oxbridge in living memory, and the reason I wanted to come to Cambridge was all the inspiring stories I'd heard about what happened here, all these amazing people whose biographies I used to read, especially people like Crick and Watson in the top left there. I particularly remember a film, The Race for the Double Helix, an amazing film from the '80s if you haven't seen it, with Jeff Goldblum in one of his early parts, played with all the enthusiasm he brings to all of his parts. He was Watson, and they were having such an amazing time roaming around Cambridge, working on things like DNA, and I thought: that's what I want, a piece of that. I wanted to feel what it's like to be at the frontier of discovery, and what could be more exhilarating? That film really brought to life what that might be like. And of course a lot of my scientific heroes had gone through Cambridge: people like Alan Turing and, of course, Charles Babbage, in the lecture hall that we now sit in. There are even places like the Eagle pub; if you start at Queens' College, one of the tours they give you on the first day shows you the fabled table where the structure of DNA was discussed. You can't help but be inspired by that. Walking down King's Parade, I almost felt the intellectual giants of the past speaking to you from the stones; going for a late-night burger at the Gardenia, that was what was inspiring me, all these amazing people who had walked those same steps over hundreds of years. That history is unrivaled here at Cambridge, and I think we can still draw on it and take inspiration from it today. And then there's a picture of me and Aaron, one of my best friends from Queens', on the Mathematical Bridge.

Finally, Alastair mentioned the Nobel Prize. It was the honor of a lifetime to go and collect it in Stockholm in December, an amazing week of activities.
My favorite activity was getting to sign the Nobel book at the Nobel Foundation, and that's the book there in one of my pictures. You sign your name, and then you start leafing back through the book, wondering whether Crick is in there, and of course he is; and then you go back further, and Einstein's signature is there, and it's just mind-blowing really. I spent an hour just photographing every page of the book. So it was full circle for me, from seeing that film in the late '80s to that moment.

Then, in 2010, we started DeepMind in London. At the time we thought of it as a kind of Apollo program effort to build artificial general intelligence: AI that is truly general and can perform all the cognitive capabilities that humans are capable of. The idea really comes from Turing and Turing machines, something that is able to compute anything that is computable, as Turing showed. That has been a foundation for me, and one of the main things I carried with me from the lectures here at Cambridge was the theoretical underpinnings of computer science and computation theory that people like Turing and Shannon famously developed in the '40s and '50s.

So we started in 2010, 15 years ago, which in some ways isn't that long ago; but when we started DeepMind almost nobody was working on AI, which is hard to believe today, when almost everyone seems to be working on it. In just over a decade things have accelerated incredibly, and we've obviously been part of that very exciting journey. Our mission at DeepMind from the beginning, as we talk about it now, was to build AI responsibly to benefit humanity; but the way we used to articulate it when we started out was as a two-step process: step one, solve intelligence; step two, use it to solve everything else. This seemed very outlandish in 2010, and you can imagine trying to pitch a venture capitalist on the basis of that mission; it seemed pretty crazy. But I still fundamentally believe in it today, and I think more and more people are realizing that AI built in this general way could have a profound and transformative impact on almost any field, which is the second part of that mission statement. For me that involves accelerating scientific discovery itself, and medicine, and advancing our understanding of the universe around us.

Back when we started, and in fact when I was studying here in the '90s, there were broadly two ways to build AI. There's the expert-system way, where you pre-program a system directly with the solution. Deep Blue, which beat Garry Kasparov at chess very famously in the '90s, actually while I was studying here, is probably the pinnacle example of an expert system. But the problem with expert systems, and why they never really scaled to full general intelligence, is that they can't deal with the unexpected: if something happens that you didn't already cater for, there's nothing in the system that allows it to cope. They were inspired by logic systems, and they were rigid, fragile, and brittle because of that. The modern approaches, by contrast, are built on learning systems: systems that learn for themselves, directly from experience or data, from first principles, inspired more by ideas from neuroscience. The promise of these systems is that they can potentially go beyond the knowledge that the programmers or the system designers already know how to encode.
Of course, that's extremely valuable in areas like scientific discovery. We started in the early 2010s with games. I've used games many times in my life: first to train my own mind, then to build games and AI for computer games, and finally, in a third way, to train up our AI systems. Games are the perfect proving ground for AI systems. You can start with very simple games, like Atari games from the '70s, and this system, DQN, was the first time anyone had built an end-to-end learning system that could learn directly from raw data, in this case the raw pixels on the screen. It isn't told anything about the games or about what it's controlling; it's just told to maximize the score based on this input stream of pixels. We were able to master many different Atari games around 2013.

Then we took these systems and scaled them up to what I would call the grand challenge of games AI: can you create systems that play the game of Go at world-champion level or beyond? Go is probably the most complex game humans have ever invented. It's thousands of years old, so it's also among the oldest games, and one of the most elegant. One way to see the complexity of Go is that there are 10^170 possible positions, far more than there are atoms in the observable universe. The important point is that you cannot come up with a strategy in Go using brute-force techniques; it would be totally intractable, so you have to do something much smarter. Famously, in 2016 we won a million-dollar challenge match against the 18-time world champion Lee Sedol, the South Korean grandmaster and one of the legends of the game, and it was watched by 200 million people around the world.
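The end-to-end learning described here is deep reinforcement learning: DQN trains a neural network, from raw pixels, to estimate the value of each action, building on the classic Q-learning update (plus tricks such as experience replay and a target network). As a hedged illustration only, not DeepMind's code, here is the core tabular Q-learning update that DQN scales up; `env_step` is a hypothetical stand-in for the game:

```python
import random
from collections import defaultdict

def q_learning_episode(env_step, actions, q=None,
                       alpha=0.1, gamma=0.99, epsilon=0.1, max_steps=100):
    """One episode of tabular Q-learning (illustrative sketch).

    env_step(state, action) -> (next_state, reward, done) is a stand-in
    for the game. DQN replaces the table `q` with a deep network reading
    raw pixels, and adds experience replay and a target network.
    """
    if q is None:
        q = defaultdict(float)          # Q[(state, action)] -> value estimate
    state = 0                            # assume an integer start state
    for _ in range(max_steps):
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(state, a)])
        next_state, reward, done = env_step(state, action)
        # the Q-learning update: move toward reward + discounted best next value
        best_next = max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
        if done:
            break
    return q
```

Running many such episodes makes the value estimates, and hence the greedy policy, converge toward score-maximizing behavior, which is the "just told to maximize the score" principle in miniature.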
Not only did AlphaGo, our system, win that match; importantly, it came up with new, original Go strategies. Even though we've played Go for thousands of years, and professionally for hundreds of years, it was still able to find never-before-seen strategies, most famously move 37, shown here in red, during game two. If you watch the documentary about this, which is on YouTube, you'll see how surprised the best players in the world, commentating on the game, were by this move. It was an almost unthinkable move, and yet it decided game two in favor of AlphaGo a hundred moves later. That told me about the potential for these types of systems to invent and discover new knowledge. Here, of course, we're just talking about game knowledge, but my dream was to generalize this to all areas of scientific discovery.

So how do these systems work? We basically train up these neural networks through a system of self-play. This applies to AlphaGo and also to the subsequent systems, AlphaGo Zero and AlphaZero, which generalized what we'd done for Go to play any two-player game from scratch. You start off with a version one of the system that doesn't really know anything about the game beyond the rules, so it plays randomly, and you play, say, 100,000 games of this system against itself. That creates a database of game positions from those 100,000 games, and from it you train a second, slightly better version of the model, version two, trained to predict the likely moves in any position and also which side, Black or White, is more likely to win from that position, and with what percentage chance. Then you play version two against version one in a 100-game match, and if it wins by a significant margin, in this case a 55% win rate, you replace version one with version two. You then create a new database of games of slightly higher quality and train a version three, and so on. If you repeat this around 17 or 18 times, you go from playing randomly in the morning to being stronger than world-champion level 24 hours or less later, by version 17 or 18. It's quite an incredible process to watch this self-improvement play out in such a short time.

If you think about what these neural networks are doing, they're reducing this intractable search space of 10^170 possibilities down to something much more tractable in a few minutes of compute time, by using the neural network to efficiently guide the search mechanism. If you think of the tree of possibilities, where each node in the tree is a Go position, then instead of having to look at every possibility, you can use the neural network to guide you down only the most interesting and most useful lines to examine, in this case the ones in blue; and after you've run out of thinking time, you pick the best, most promising line you've seen so far, in this case the line in purple.

This then led us to play not just Go but any two-player perfect-information game, and the system was able to discover new strategies and new styles of playing chess, which is kind of extraordinary given that chess computers, programs like Stockfish, were already so strong. AlphaZero was able to beat Stockfish at chess, which is almost impossible to do. And not only did it beat Stockfish: here AlphaZero is White, playing against Stockfish as Black, in this particular position.
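Before returning to chess: the self-play improvement loop just described (generate games, train a candidate, gate it on a 55% win rate against the current champion) can be sketched in a few lines. This is a simplified sketch, not DeepMind's implementation; `train_on` and `play_match` are hypothetical callables standing in for the real training and evaluation:

```python
def self_play_improvement(init_model, train_on, play_match,
                          generations=18, games_per_gen=100_000,
                          eval_games=100, gate=0.55):
    """AlphaGo Zero-style self-improvement loop (simplified sketch).

    train_on(model, games) -> a candidate model fitted to predict likely
                              moves and the eventual winner
    play_match(a, b, n)    -> a's win rate against b over n games
    The candidate replaces the champion only if it clears the gate.
    """
    champion = init_model
    for _ in range(generations):
        # version N plays itself to build a database of game records
        games = [champion.self_play() for _ in range(games_per_gen)]
        candidate = train_on(champion, games)          # version N+1
        if play_match(candidate, champion, eval_games) >= gate:
            champion = candidate                       # gate cleared: replace
    return champion
```

The gate matters: replacing the champion only on a clear win keeps the training data quality monotonically improving rather than drifting.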
This is one of the most famous games AlphaZero played, called the immortal zugzwang game. Why is White winning here? Because AlphaZero favors mobility over material. Most chess computers favor material, and those of you who play chess will see that Black has more material, but it can't actually move any of its pieces; they're all stuck in the corner. AlphaZero sacrificed material for this mobility, and for human grandmasters and top chess players this is not only a very effective style but a very beautiful, aesthetic style of chess. It's kind of amazing that AlphaZero was able to discover this new, dynamic way of playing, and some of the top chess players in the world commented on it. Garry Kasparov, my all-time favorite chess player, said that programs usually reflect the priorities and prejudices of the programmer, but because AlphaZero learns for itself, "I would say that its style reflects the truth." And Magnus Carlsen, the world champion at the time, looked at these games, read the books written about AlphaZero, and said, "I've been influenced by one of my heroes recently, and that's AlphaZero." He actually incorporated a lot of these ideas into his own game and has dominated the chess scene for almost a decade now.

So we made all these landmark breakthroughs in games AI over roughly the first decade of DeepMind's existence, but of course games were just the training ground, a means to an end, not the end in itself, much as I love games. The goal was to create algorithms that could be generally useful for tackling real-world problems. What do we look for in real-world problems, not only scientific problems but industrial ones as well? We look for three criteria that make a problem suitable for the types of AI systems, ideas, and algorithms we developed for playing games. Number one, we look for problems that can be described as massive combinatorial search spaces, usually far too complex, with far too many combinations, to brute-force a solution, but perhaps with some kind of structure that a neural network can learn in order to guide the search efficiently. Secondly, we look for problems that can be described with a clear objective function, some metric you can optimize against. In games that's easy: maximize the score, or win the game. But a lot of real-world problems can also be boiled down to a few metrics or objective functions you're trying to maximize. And finally, you need quite a lot of data or experience to learn from, ideally together with an accurate and efficient simulator, so you can generate synthetic data to augment the real data you have. It turns out that a lot of problems can be couched in these terms if you look at them from this angle, including many important problems in science.

The one I always had in mind, from first coming across the problem here at Cambridge as an undergraduate, is the protein folding problem, which I'll quickly describe for those who don't know about biology and proteins. Proteins are incredibly important: they're the building blocks of life, and pretty much every function in the living body depends on them, from your neurons firing to your muscle fibers twitching. Proteins are what make life possible. The protein folding problem is easy to describe. A protein is defined by its genetic
sequence, which specifies an amino acid sequence, which in nature folds up spontaneously, usually into a very beautiful protein structure. So you go from a genetic sequence to a protein structure, and the 3D structure is very important because it goes a long way toward defining the protein's function, what it does in the body. It doesn't totally determine the function, but it plays a big part in what the protein actually does in nature. The protein folding problem, then, is this: can you predict that incredible 3D structure computationally, directly from the one-dimensional amino acid sequence?

Why is this such a hard problem? Levinthal, a famous protein researcher in the 1960s, made an observation that became known as Levinthal's paradox: he calculated that there are roughly 10^300 possible shapes an average protein could take, and yet somehow, in nature and in the body, proteins fold up spontaneously in a matter of milliseconds. That's the paradox: if there are so many possibilities, how does nature, how does physics, do this? And it gives you hope that the problem must be tractable computationally in some reasonable amount of time, because physics does solve it, billions of times a second, in the body.

Furthermore, what attracted me to this problem was that there was a biennial competition, the CASP competition, which you can think of as the Olympics of protein folding. It happens every two years, it's run by some amazing people led by Professor John Moult of the University of Maryland, and it has been running since 1994. It's a great competition because the organizers work with experimentalists who painstakingly determine structures.
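As an aside, Levinthal's back-of-the-envelope argument is easy to reproduce. The constants below are illustrative assumptions in the spirit of his estimate (about three plausible conformations per backbone bond, two bonds per residue); with them, a 300-residue chain already has on the order of 10^286 conformations, and the figure of roughly 10^300 quoted in the talk corresponds to slightly different assumptions:

```python
import math

def log10_conformations(n_residues, conf_per_bond=3, bonds_per_residue=2):
    """log10 of the number of chain conformations (order of magnitude).

    Illustrative assumptions: two rotatable backbone bonds per residue,
    each with roughly three energetically plausible settings.
    """
    return n_residues * bonds_per_residue * math.log10(conf_per_bond)

def log10_search_years(log10_confs, seconds_per_try=1e-12):
    """log10 of the years an exhaustive search would take, at one
    conformation tried per picosecond."""
    seconds_per_year = 3600 * 24 * 365
    return log10_confs + math.log10(seconds_per_try) - math.log10(seconds_per_year)

confs = log10_conformations(300)    # ~286: about 10^286 conformations
years = log10_search_years(confs)   # ~10^267 years to enumerate them all
```

Even at a conformation per picosecond, enumeration would take unimaginably longer than the age of the universe, which is exactly why brute force is hopeless and a smarter, learned search is needed.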
Using very exotic and expensive equipment, such as electron microscopes, the experimentalists find newly discovered structures that haven't been published yet, and those go into the competition. So the organizers know the ground truth, while the computational teams, hundreds of them enter each competition, try to predict those structures, usually around 100 proteins, with their computational methods. At the end of the summer the true structures are revealed, and you can compare the predictions, and the error in them, against the real structures.

We started the AlphaFold project in 2016, pretty much the day after we got back from the AlphaGo match in Seoul, in Korea. We felt, and I felt, that our techniques were mature enough to be applied outside of games, to really meaningful problems. We call them root-node problems: if they can be solved, they open up whole new branches and avenues of discovery that can be built on top. Protein folding was a prime example. After a couple of years we had AlphaFold 1, and we entered it into the CASP13 competition in 2018. These bar charts show, for the decade prior, the winning team's score in the hardest category, the hardest proteins being predicted. You can think of it as a percentage accuracy: how many of the amino acids you have placed in the right position within a certain tolerance, roughly the width of an atom. You can see there wasn't much progress for a decade; the field was stuck at around the 60-point level, whereas a score of 90 effectively means you are within the width of an atom, so you have atomic accuracy. That, the experimentalists told us, was the accuracy you had to reach for the predictions to be competitive with experimental methods, so that experimentalists could actually rely on them rather than necessarily doing the laborious, painstaking work of finding the structure themselves. As a rule of thumb, my biologist friends would tell me that it can take a PhD student their entire PhD, four or five years, to find the structure of just one protein; and there are 200 million proteins known to science, and around 20,000 proteins in the human proteome.

With AlphaFold 1 we won the competition, scoring almost 50% better than the next best system, and AlphaFold 1 introduced machine learning techniques as the main component of such a system for the first time. But it was not enough to reach atomic accuracy. We had to go back to the drawing board and re-architect the system from scratch for AlphaFold 2, using all the learnings from AlphaFold 1, to finally reach atomic accuracy; and that led the organizers to declare the problem solved at the end of 2020.

This is a visual example of how AlphaFold works. On the left-hand side is a very complex protein: the ground truth is in green, the predicted structure is in blue, and you can see how closely the blue overlaps the green. On the right-hand side you can see how AlphaFold 2 works: it builds up the structure in an iterative process, recycling its own output over 192 steps, starting from a scrunched ball of amino acids and building out a more and more plausible structure.
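The accuracy score described above is in the spirit of CASP's GDT_TS metric: for several distance cutoffs, count the fraction of residues whose predicted position falls within that cutoff of the ground truth, then average. Here is a simplified sketch, which assumes the two structures are already superposed (real CASP GDT also optimizes over superpositions):

```python
import math

def gdt_ts(predicted, true, cutoffs=(1.0, 2.0, 4.0, 8.0)):
    """Simplified GDT_TS-style score in [0, 100].

    predicted, true: lists of (x, y, z) coordinates, one per residue,
    assumed already superposed. For each distance cutoff (in angstroms)
    take the fraction of residues within it, then average the fractions
    and scale to 100. This sketch skips the superposition search that
    the real CASP metric performs.
    """
    assert len(predicted) == len(true)
    dists = [math.dist(p, t) for p, t in zip(predicted, true)]
    fractions = [sum(d <= c for d in dists) / len(dists) for c in cutoffs]
    return 100.0 * sum(fractions) / len(fractions)
```

On this scale, a perfect prediction scores 100, and the "stuck at 60, need about 90 for atomic accuracy" story maps directly onto how tight the sub-angstrom cutoffs are.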
At the end it refines the last parts until it has the finished prediction. Because AlphaFold is not just accurate but also extremely fast, able to fold an average protein in a matter of seconds, we realized quite quickly that we could fold all 200 million proteins known to science. Over the course of a year we used a lot of computers on Google Cloud to fold all of them, and then released them freely in a database with our colleagues at EMBL-EBI, just up the road on the Sanger campus outside Cambridge, providing free, unrestricted access to anyone in the world. If you think about how long one structure takes experimentally, four or five years, then 200 million structures is something like a billion years of PhD time done in one year. It's amazing to think about how much science could be accelerated. It opened up whole new avenues of exploration, because many of these structures, especially for less well studied organisms such as certain types of plants, are very important for science and for agricultural research, and almost none of them would otherwise have been found and made available. And with 200 million structures you can also look at them in aggregate, comparing structures across species and looking at meta-structures to see what the commonalities are through evolution; there are really interesting new branches of structural biology now being explored as a result.

Of course, we thought about safety from the beginning; we take our responsibility at the forefront of AI very seriously, and in this case we consulted with over 30 biosecurity and bioethics experts to make sure that the benefits of what we were putting out into the world far outweighed any associated risks.
I'm now very proud to say that over two million researchers are using it, from pretty much every country in the world; it has been cited over 30,000 times and has become a standard tool in biology research. Many of you in the audience who are PhD students are hopefully using it; it's simply part of the standard canon for biology research now. It's been amazing to see what other researchers have done with this technology and all these structures, and I've called out six of my favorite examples. A research group at the University of Portsmouth is using it to tackle plastic pollution in the environment, trying to design new enzymes, which are types of proteins, that can digest plastic. We're working with the Fleming Centre on antibiotic resistance. On neglected diseases, the tropical diseases that affect the poorer parts of the world, we work with the Drugs for Neglected Diseases initiative; this is a good example of where we can accelerate research, because for malaria, leishmaniasis, or Zika virus, a lot of the structures were not known, but now researchers can go straight to drug discovery because they have much of the structural information for those viruses and bacteria. There has also been a lot of fundamental research, such as finding the structure of the nuclear pore complex, a very important protein assembly that lets nutrients in and out of the nucleus of the cell. There's amazing work at the Broad Institute on drug delivery, designing molecular syringes, redesigned proteins that can deliver drugs to a targeted part of the body. It's even being used to look at mechanisms of fertility. Almost every area of biology and medical research is now making use of AlphaFold.

We've continued to develop and improve the systems over the last few years. We recently released AlphaFold 3 for academics to use, and we've extended it to deal with interactions. You can think of AlphaFold 2 as producing a picture of a static protein structure, but biology is really a dynamic process, so you need to understand how different biological elements interact with each other: proteins with other proteins, of course, but also proteins with the other molecules important to life, like DNA and RNA, and with ligands, small molecules such as drug compounds; how does a protein bind with such a compound? And we have a separate line of work, AlphaProteo, which does the reverse of AlphaFold while still making use of its techniques: if you want to design a novel protein, perhaps one that doesn't exist in nature, for a particular job or function, what is the amino acid sequence, and the genetic sequence, that will give you that structure? It's like running AlphaFold in reverse to design new structures that do novel things, which again could be extremely useful for designing drugs, antibiotics, and antibodies.

Taking a step back, then: if I look at all the work we've done in the last 15 years, what are the implications for science and for machine learning? Whether in our games work or in the scientific work since, of which AlphaFold is our best example, it's all about making search tractable. You have an incredibly complex problem with many, many possible solutions, and you have to find the optimal one.
kind of needle in the Hy stack of that uh enormous combinatorial search space and you can’t do it by Brute Force so you have to learn the this new network model um so it sort of learns about the topology of the problem uh so that you can efficiently guide the search to reach your uh to maximize or find the optimal solution to the objective that you have in mind and I think this is an incredibly General way uh uh in general solution an incredibly General way to approach um a whole Myriad of problems and so we think about back to the go example so we’re trying to use these systems to find the best go move but you could also change those nodes to be uh chemical compounds and now you’re trying to find the best molecule in chemistry space in chemical space and the best molecule you know and this this is the beginning of drug design and and that will bind specifically to the Target you’re interested in but nothing else so it reduces the side effects and the toxicity of that compound um and it’s a very very similar um techniques that we’re using in order to design these molecules now as the next steps as we as we move more and more into drug Discovery so I think in biology at least um I feel like we’re entering a new era now of what I like to call digital biology so you know I think of biology in its most fundamental level as an information processing system you know that’s trying to resist entropy around it and I think that’s basically what life is um of course it’s a phenomenally complex and emergent information processing system uh and I think that’s where AI comes in just like maths and the maths that I learned in this room was the perfect description language for physics um and phys phys physical phenomena I think that AI is potentially the perfect description language for biology um it’s perfect for dealing with the complexities of the emergent behaviors and interactions that you get in a dynamic system like biology and I think Alpha fold is a proof point of that um 
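[Editorial aside: the "learned model guiding a combinatorial search" idea described above can be sketched in a few lines. This is a toy stand-in, not DeepMind code: the hand-written `value` function plays the role that a trained neural network plays in systems like AlphaGo, and the alphabet, target string and function names are all made up for illustration.]

```python
import heapq

# Toy sketch of model-guided search through a combinatorial space.
# The hand-written value() below stands in for a learned neural network;
# all names and the example target here are hypothetical.

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 amino-acid letters, for flavour

def value(candidate, target):
    """Score a partial solution: count of matching prefix positions.
    A real system would learn this from data instead of peeking."""
    return sum(1 for a, b in zip(candidate, target) if a == b)

def guided_search(target, max_expansions=100_000):
    """Best-first search: always expand the candidate the value function
    ranks highest, rather than enumerating all len(ALPHABET)**n strings."""
    frontier = [(-value("", target), "")]  # max-heap via negated scores
    expansions = 0
    while frontier and expansions < max_expansions:
        _, cand = heapq.heappop(frontier)
        expansions += 1
        if cand == target:
            return cand, expansions
        if len(cand) < len(target):
            for ch in ALPHABET:
                nxt = cand + ch
                heapq.heappush(frontier, (-value(nxt, target), nxt))
    return None, expansions

solution, n = guided_search("MKTAYIAK")
print(solution, n)  # reaches the target after 9 expansions, versus
                    # 20**8 = 25.6 billion candidates for brute force
```

With a good value function the search walks almost straight to the solution; with a poor one it degrades toward brute force, which is why the quality of the learned model is what makes the search tractable.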
And I hope that when we look back in ten years' time it won't be an isolated breakthrough but will have heralded this new era, a golden era, of digital biology. We're trying to progress that ourselves: we started a new spin-out company, Isomorphic Labs, to build on our AlphaFold technology, move more into the chemistry space I was just talking about, and try to reimagine drug discovery from first principles with AI. Right now it takes an average of ten years for a drug to be developed, and it's extraordinarily expensive, costing billions and billions of dollars. So why can't we use these techniques to reduce that from years to months, maybe even one day to weeks, just as we reduced the discovery of protein structures from potentially years down to minutes and seconds? We think of this as doing science at digital speed, trying to bring the best of what we do in the technology world to the natural sciences. My dream is one day to create a kind of virtual cell, a computational cell, perhaps of something very simple like a yeast cell, on which you can run experiments in silico, so that the predictions you get out of the virtual cell inform your real-world experiments in the lab. You could then cut out a lot of the search done in the wet lab and use the wet lab more for validation steps, rather than for the very expensive and slow search process.

And we've been using AI not just in biology; it can be applied to science, mathematics and medicine more generally, and we've had a whole range of breakthroughs beyond the biological sciences: in health, identifying eye disease from retinal scans; discovering new materials; helping with plasma containment in fusion reactors; and faster algorithms, AI discovering better algorithms for itself, like faster matrix multiplication, as well as weather prediction and even error correction in quantum computing. That's just a small sample of the work we've been doing in the last two or three years. I think AI will be applicable to pretty much every field, and I always encourage universities to start thinking very seriously about multidisciplinary work where you apply AI to the right questions in a particular specialist field; I think there are many, many advances to be made over the next five to ten years by doing that.

I'll end then with a slightly more general view: not just AI for science, but the path to AGI, how close we are to it, and our more general work on our original mission of AGI. We've been making a lot of advances in all areas of general understanding of the world, in what we sometimes call world models. We're particularly proud of our new video model, Veo 2, which was released at the end of last year. It's state-of-the-art video generation, able to generate these videos from just a text description or a single static image. Although some of these videos may not seem that impressive, think about the tomato-chopping one: that's like the Turing test for video models, because usually the tomato comes magically back together, or you're chopping through the fingers, or the knife moves off somewhere, and it shows what the system had to do to really understand the physics of the world. Or the bubbles around this blueberry: it's generating that just from the text, "blueberries dropping into a glass of water", and it's doing all the physics correctly. Or the motion of these little cartoon characters, or the bee here. It's kind of mind-blowing, really, and if you had told me five years ago that this would be possible without building in some special understanding of physics, I would have told you that seemed unlikely. Yet somehow these learning systems are able to learn about real-world physics just from watching many, many YouTube videos; it's pretty crazy that that's possible. We've gone a step further with Genie 2, bringing my games hat back in here, which takes those Veo models a step further: now, from a text instruction, you can generate a whole game. Here at the bottom we said "generate a playable world as a robot in a futuristic city", and it just comes up with this, and you can control it with the keyboard and arrow keys. At the moment it's only consistent for a few seconds, but we're working to extend that so that the consistency of the game world lasts for many minutes, and then you've really got what I would call a world model: a real understanding of the world, of how interactions in it work and of its physics. Of course, we've been working very hard on the safety aspects of this, and from the very beginning, in 2010, we were planning for success, even though almost nobody was working on AI back then. We imagined it would be a 20-year mission, and amazingly we're roughly on track, 15 years in. Planning for success meant recognizing that if we were to build these kinds of transformative systems and technologies, it would come with a lot of responsibility to make sure they get deployed in a safe and responsible way. One of the technologies we've built is called SynthID, which invisibly watermarks content, actually using an adversarial AI system to slightly adjust the pixels, the text or the audio, imperceptibly to the human ear or eye.
But it can be detected by a detection system, so we can tell that an image, an audio clip or a video was synthetically generated. That's going to become increasingly important as these technologies become widely deployed: we need to be able to easily distinguish between synthetically generated images and real ones. So AI has this incredible potential to help with our greatest challenges, from climate to health, but obviously it's going to affect everyone, so I think it's really important that it's not just the technologists deciding this; we need to engage with a wide range of stakeholders from across society. I've been really pleased in the last couple of years that one consequence of AI becoming mainstream is that many governments, and all parts of society, have got interested in it. It's been great to see these international summits (the UK hosted the first one at Bletchley Park a couple of years ago) bringing together heads of government with academia and civil society to discuss these technologies: how to put the right guardrails on them, and how to make sure we embrace the opportunities but mitigate the risks coming down the line. I think that's going to become increasingly important given the exponential improvement we're seeing in these technologies. My shorthand for this is that the mantra of Silicon Valley is "move fast and break things", and of course that has created a lot of the advances and technologies we all use every day, but in my opinion it's not appropriate for this type of transformative technology. Instead, I think we should use the scientific method and approach it with the kind of humility and respect that this kind of technology deserves. There are a lot of unknowns about how this technology is going to develop; it's so new. I think that with exceptional care and foresight we can get all the benefits and minimize the downsides, but only if we start the research and the debate about that now.

Just to end, then: we're now building our own big multimodal models that try to take the best of all the different models I've shown you and put it into one system. We call it the Gemini series; our latest is Gemini 2.0, which some of you may have tried, and which is state-of-the-art across many leading benchmarks. I'm very excited about the next generation of assistants, what I call a universal assistant; we call it Project Astra. You have it on your phone or some other device, maybe glasses, and it becomes an assistant you can take around with you in the real world, helping you in everyday life to enrich your life or make you more productive. The next step in AI is combining that with what I showed you with AlphaGo: those agent-based models can efficiently search through and find a good solution to a problem in a limited domain, in that case games, but we want to build those types of search and planning systems on top of much more general models like Gemini, world models that understand how the real world works and can then plan and achieve things in it. That's key to things like robotics, which I think is going to be a huge area with massive advances in the next two or three years.

I'll finish with a slight conjecture about what this all means. Think back to Turing and all the work he did to lay down the foundations of computer science. I see myself as a kind of champion of Turing, in a way: how far can Turing machines, this idea of classical computing, go? One of my favorite things to think about, from one of the lectures I took in this room, is the P versus NP problem, the famous problem in computer science of what sorts of problems are tractable on classical systems. There's obviously a lot of great work going on in quantum computing, much of it here in Cambridge, and at Google we have one of the top quantum computing groups in the world; and there are a lot of real-world systems that we would like to understand and model that are thought to require quantum computing to solve. My conjecture is that classical Turing machines, the classical machines these AI systems are built on, can do a lot more than we previously gave them credit for. Think about AlphaFold and protein folding: proteins are quantum systems, operating at the atomic scale, and one might think you would need quantum simulations to find the structures of proteins, yet we were able to approximate those solutions with our neural networks. So one potential idea here is that any pattern that can be generated or found in nature, anything with some real physical structure, can be efficiently discovered and modeled by one of these classical learning algorithms like AlphaFold. If that turns out to be true, I think it has all sorts of implications for quantum mechanics and indeed for fundamental physics, which is something that I, and many of my colleagues, hope to explore, maybe with the help of these classical systems, to help us uncover what the true nature of reality might be. And that leads me back to the whole reason I started on my path in AI many, many years ago: I always believed that AGI, built in this way, could be the ultimate general-purpose tool to understand the universe around us and our place in it. Thank you. [Applause]

Right, we have time for some questions if people have them. First hand up, just here.

Thank you for the great talk. Because you have a background in neuroscience, and you really like to think in terms of root node problems: was there ever a root node problem you came across in neuroscience that you thought was worth tackling, and is still worth tackling, to understand biological and artificial intelligence even better?

Yeah, there are many. That's what I studied for my PhD, actually: memory, but also imagination, future thinking, kind of planning. I really wanted to understand how the brain did that, and it turns out the hippocampus is involved in both, so maybe we could mimic that with some of these algorithms. So I think there are many key things in that. And then there are all the big questions around creativity, dreaming, consciousness. I think building AI and then comparing it to the human mind is one of the best ways we'll make progress on those sorts of root node problems, like what is the nature of consciousness, and is there something special about its instantiation in the substrate of the brain versus mimicking it algorithmically in silicon?

Great, a question just here.

Hi, I have two questions actually. Since DeepMind was founded before the deep learning revolution, I wanted to know what your state of mind was: had deep learning not picked up, how were you going to progress? That's the first question. The second is: since you've had intimate experience with such challenging, high-dimensional problems, and we know that gradient descent variants can't converge to the optimum solution, only to locally optimal solutions,
were you surprised that anything worked at all in these systems, at any point in time? And do you think that most of nature is kind of suboptimal, so that we could potentially build a more optimal nature?

Look, I think they're both great questions. On the first: that's partly why we called it DeepMind, because the "Deep" refers to deep learning. It wasn't called deep learning back then, but it was just becoming common; there were the Boltzmann machines and things that Geoffrey Hinton had invented just a few years before, in 2005 and 2006, these hierarchical neural networks, and it seemed like a super promising idea even back then to those of us who had come across it in academia. The other thing we bet on was reinforcement learning, and the combination of the two, which again is coming back into vogue. That combination was also important for us in solving something like AlphaGo: you need both parts, the deep learning to model the environment and the world, and the reinforcement learning to make the plans, find the solutions and take action in the world. There were two reasons we bet on those even when it was just the beginning. We knew that the classical methods, the expert systems, would not scale. That's one of the things I learned here, and also during my postdoc at MIT, which was the kind of church, if you like, of the classical methods, the expert systems. And something else you can learn in your university courses is not just what to do but also what not to do, and why: I thought about it and felt those methods would never scale to the kinds of problems I wanted to solve with AI, whereas the learning systems seemed to have unlimited potential, although at the beginning they were a lot harder to get to do anything significant, because they weren't scaled up enough. The other reason we started DeepMind in 2010 is that we could see the computing paradigm shifting on the hardware side, with GPUs and other things, which of course were also invented for gaming; it turns out everything is a matrix multiplication, intelligence and gaming and computer graphics alike. So all of those influences came together, and the understanding from neuroscience and fMRI machines had advanced a lot in the previous ten years too, so I felt it was the perfect time to bring all of that together, back in 2010. We were betting on it not necessarily because we knew it would work, but because we were pretty confident the other methods would not work; that's basically what the AI winters were about, people trying to push those expert systems.

On the second question: first of all, it is surprising that some of these things converge, and actually we weren't sure they would. With that Atari work I showed you, for the first couple of years nothing worked. We couldn't score a single point on, if some of you remember it, Pong, one of the first computer games, the tennis bat-and-ball game, the simplest game you could imagine. So we wondered whether we were ten or twenty years too early, like Babbage turned out to be with his Difference Engine: amazing ideas, but in the end fifty or a hundred years too early. I always say you want to be about five years ahead of your time, not fifty, otherwise you're in for a lot of pain, as Babbage was. We were worried about that, but then it did converge, and that gave us the confidence to tackle the harder problems.
I think the last part of your question was about the things in nature. I'd say they're not suboptimal; they're actually probably pretty optimal, because they've gone through an evolutionary process, not just life through biology but geologically and physically too, asteroids and physical phenomena combined. They've survived some amount of time because they're stable over time, and if they're stable over time then there's probably some structure there that's learnable. That would be my conjecture.

Great, a question down here.

What do you think about building high-bandwidth brain-machine interfaces and implantable memory and reasoning modules, so that humans can be further empowered to make discoveries themselves, as opposed to only talking to AI in the cloud?

I love that area, and I've followed it carefully and helped people building things like EEG caps. Of course the problem is the resolution of these devices for getting readouts from the brain, and ideally you'd want both read and write. But I'm very fascinated by projects like Neuralink, by chips in the brain. Right now that's for veterans and for people trying to get function back in their bodies, and I think there are going to be amazing things there: people able to walk again after breaking their backs, and so on. There are going to be some incredible advances in the medical sciences, which would be amazing. Beyond that, if it becomes routine and the surgery is safe and there are safe ways of doing this, I could imagine it being one way for us to keep up with the technology. In some senses it's no different from what we already have today, with our phones with us 24/7 and computers and other technologies all around us; we're already almost symbiotic with our technology. This would be one step further, but I'm not sure; maybe it's for the philosophers in the room to answer whether there's a hard boundary between technology that's attached to you and technology you just carry around with you all the time.

Great, a question over here. Ah, there we go.

What do you think about the speed at which artificial intelligence is developing, and its effects on economic development? There are a lot of people out there who are deciding on careers right now, and the rapid change in the landscape makes it really difficult to predict what they should go into.

Yes, it's a complicated one, because as you say things are changing at lightning speed. We were just discussing this with Alastair earlier: even designing three-year computer science courses is quite difficult, given that the underlying material changes in less than three years. The only thing we can say for sure is that there's going to be a lot of change, but that brings with it both disruption and opportunity. Just to give you an example on coding: I would still recommend that you get good at coding and maths, because you'll be able to use these new tools in a much deeper way if you understand how they're built. On the other hand, I think coding is going to become available to many, many more types of people, because you'll probably be able to program in natural language rather than in a quite complicated computer language, and that will open up fields for creative people to build games, make films and make applications, where maybe the balance is more on the creative side than on the engineering side. But I also think it will enhance engineers, letting them do ten times what they can do today. So it's difficult to know, but what I would say is: focus on embracing those tools in your spare time, and train yourself to be really good at picking up new information extremely quickly, because I think that's basically what's going to happen over the next ten years.

OK, we've got one question just over here, in the yellow and black top.

Do you think there are any biological processes, behaviors or patterns which can't be modeled with existing deep learning techniques? I'm not saying throw more compute at it until it works and just make a bigger and bigger model; do you think there are some processes which physically cannot be modeled with the architectures we have?

There are certainly lots of processes that can't be modeled today, but, and this goes back to what I said at the end of the talk, I'm not sure that in the limit there are. I think in the end, if physics can solve it and there's some structure there to be learned, then probably, with enough examples, one could reverse-engineer a model of it, and I don't see any theoretical reason why a classical system, albeit a very complex one, could not make predictions or run simulations of that biological system. So I don't really see what the limit would be. There are abstract things, like factorizing large numbers in cryptography, which are sort of human-made systems where there may not be any structure. There may be structure in the natural numbers, as lots of people conjecture; if there is, then that will also be learnable, and if there isn't, and it's a kind of uniform distribution, then you would need a quantum computer to crack cryptography and things like that. Those are open conjectures, but I think most things in nature have evolved over geological, biological or physical time, and that suggests to me there is some structure to learn, which makes the search, or the prediction, potentially tractable.

OK, and the last question then: the person in the pink. Thanks.

This question comes on behalf of the Cambridge University game development society. You mentioned the Genie 2 model, and how currently it's stable for a few seconds of consistency, which you hope to eventually extend to a few minutes. But the question our society has is this: games that we actually play have consistency that's indefinite. When you're playing Minecraft, you expect to turn around and the village is still there. So do you see your current model being integrated into a workflow? How do you see AI, and the models you're working on, integrating into game development in the coming decades?

Yeah, well, look, I think there are many ways AI is going to come into games. One is tools to build the assets you need, 3D models and animations; I think that's all going to arrive in the next couple of years. You can also think of AI for game balancing: imagine you design a game and overnight it plays a million playthroughs, so that in the morning you, as the game designer, get a report saying these things are unbalanced, or reduce the power of this unit, or whatever it is. I also think about bug testing for open-world games. I used to make simulation games, open-world games, and they're a nightmare to bug-test, because the whole point of them is that the player can do almost anything and the game will react. How do you test ten million people each having their own unique journey through your game? Having AI players play it before you release it could help you find a lot of those bugs. And then, excitingly, there are AI characters that are much more lifelike and that move the storyline on. I used to dream about massively multiplayer worlds where the AI characters were actually intelligent and updated their beliefs and their storylines based on what the players were doing, so that it felt like a much more living, realistic world, and I think we're on the cusp of building those types of games. And finally, the world models we're building are more about general AI: being able to generate a world is an expression of understanding it. Does your model understand the world? If it can generate it consistently for some amount of time, then it must in some sense understand it; that's sort of empirical evidence that it's understanding something about the underlying physics. So that's more for general intelligence. Maybe one day we'll have this Holodeck thing, where you can just imagine something and it's all there around you; we'll probably be able to have that once we have AGI, but I think that's still a ways off. Thanks.

Great, well, that seems like a nice place to finish, returning back to games. Thank you all so much for coming, and a particular special thank you to Demis for coming in to talk to us today. Thank you. [Applause]
