Definition of AGI:  “Do all the things the human brain can do, even theoretically.”

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Welcome back to Hard Fork.

Thanks for having me again.

A lot has happened since the last time you were on the show, most notably, you won a Nobel Prize. Congrats on that. Ours must still be in the mail. Can you put in a good word with the committee for next year?

I will do, I will do.

I imagine it's very exciting to win a Nobel Prize. I know that's been a goal of yours for a long time. I imagine it also leads to a lot of people giving you crap during everyday activities, like if you're struggling to work the printer and people are just like, "Oh, Mr. Nobel Laureate." Does that happen?

A little bit. I mean, look, I try to say, maybe it's a good excuse not to have to fix those kinds of things, right? So it's more of a shield.

So you just had Google I/O, and it was really the Gemini show. I think Gemini's name was mentioned something like 95 times in the keynote. Of all the stuff that was announced, what do you think will be the biggest deal for the average user?

Wow. I mean, we did announce a lot of things. For the average user, I think it's the new, powerful models, and, I hope, this Astra-type technology coming into Gemini Live. It's really magical, actually, when people use it for the first time and realize that AI is already capable of doing much more than they thought. And then I guess Veo 3 was probably the biggest announcement of the show. It seems to be going viral now, and that's pretty exciting as well.

Yeah. One thing that struck me about I/O this year compared to previous years is that it seems like Google is sort of getting AGI-pilled, as they say. I remember interviewing researchers at Google even a couple of years ago, and there was a little taboo about talking about AGI. They would sort of be like, "Oh, that's Demis and his DeepMind people in London, that's their crazy thing that they're excited about; here we're doing real research." But now you've got senior Google executives talking openly about it. What explains that shift?

I think the AI part of the equation is becoming more and more central. I sometimes describe Google DeepMind now as the engine room of Google, and I think you saw that in the keynote yesterday, really, if you take a step back. And then it's very clear, and I think you could say AGI is maybe the right word, that we're quite close to this human-level general intelligence, maybe closer than people thought even a couple of years ago, and it's going to have broad, cross-cutting impact. I think that's another thing you saw at the keynote: it's literally popping up everywhere, because it's this horizontal layer that's going to underpin everything. Everyone is starting to understand that, and maybe a bit of the DeepMind ethos is bleeding into general Google, which is great.

You mentioned that Project Astra is powering some things that people don't even realize AI can do yet. I think this speaks to a real challenge in the AI business right now, which is that the models have these pretty amazing capabilities, but either the products aren't selling them or the users just haven't figured them out yet. So how are you thinking about that challenge, and how much do you bring yourself to the product question as opposed to the research question?

Yeah, it's a great question.
I mean, I think one of the challenges of this space is that the underlying tech is moving unbelievably fast, and that's quite different even from the other big revolutionary techs, the internet and mobile. At some point there you get some stabilization of the tech stack, so the focus can shift to product, to exploiting that tech stack. What we've got here, which I think is very unusual, but also quite exciting from a researcher's perspective, is that the tech stack itself is evolving incredibly fast, as you guys know. I think that makes it uniquely challenging on the product side, not just for us at Google and DeepMind, but for startups, for anyone really, any company small or large: what do you bet on right now, when it could be 100% better in a year, as we've seen? So you need fairly deeply technical product people, product designers and managers, to intercept where the technology may be in a year. There are things it can't do today, and you want to design a product that's going to come out in a year, so you've got to have a pretty deep understanding of the tech and where it might go, to work out which features you can rely on. It's an interesting one. I think that's why you're seeing so many different things being tried out, and then, if something works, we've got to double down on it really quickly.

Yeah. During your keynote you talked about Gemini as powering both sort of productivity-assistant-style stuff and also fundamental science and research challenges, and I wonder, in your mind, is that the same problem, one that a single great model can solve, or are those very different problems that just require different approaches?

You know, when you look at it, it looks like an incredible breadth of things, which is true, and how are these things related, other than the fact that I'm interested in all of them? But that was always the idea with building general intelligence: if it's truly general, and built the way we're building it, it should be applicable to almost anything, be that productivity, which is very exciting, helping billions of people in their everyday lives, or cracking some of the biggest problems in science. 90% of it, I would say, is the underlying core general models, in our case Gemini, especially 2.5. In most of these areas you still need additional applied research, or a little bit of special-casing for the domain, maybe it's special data or whatever, to tackle that problem, and maybe we work with domain experts in the scientific areas. But underlying it all, when you crack one of those areas, you can put those learnings back into the general model, and then the general model gets better and better. So it's a very interesting flywheel, and it's great fun for someone like me who's interested in many things: you get to use this technology to go into almost any field you find interesting.

A thing that a lot of AI companies are wrestling with right now is how many resources to devote to the core AI push on the foundation models, making the models better at the basic level, versus how much time and energy and money to spend trying to spin out parts of that, commercialize it, and turn it into products.
And I imagine this is both a resources challenge and a personnel challenge. Say you join DeepMind as an engineer because you want to build AGI, and then someone from Google comes to you and says, we actually want your help building the shopping thing that's going to let people try on clothes. Is that a challenging conversation to have with people who joined for one reason and are maybe asked to work on something else?

Well, it's sort of self-selecting internally; we don't have to. That's one advantage of being quite large: there are enough engineers on the product teams and in the product areas to handle the product development, and researchers who want to stay in core research absolutely can, and we need that. But actually you'll find a lot of researchers are quite motivated by real-world impact, be that in medicine, obviously, and things like Isomorphic, but also having billions of people use their research is really motivating. So there are plenty of people who like to do both, and there's no need for us to pivot people onto certain things.

You did a panel yesterday with Sergey Brin, Google's co-founder, who has been working on this stuff back in the office, and, interestingly, he has shorter AGI timelines than you. He thought AGI would arrive before 2030, and you said just after. He actually accused you of sandbagging, basically artificially pushing out your estimates so that you could under-promise and over-deliver. But I'm curious about that, because you will often hear people at different AI companies arguing about the timelines, but presumably you and Sergey have access to all the same information and the same roadmaps, and you understand what's possible and what's not. So what is he seeing that you're not, or vice versa, that leads you to different conclusions about when AGI is going to arrive?

Well, first of all, there wasn't that much difference in our timelines, if he's just before 2030 and I'm just after. Also, my timeline has been pretty consistent since the start of DeepMind in 2010: we thought it was roughly a 20-year mission, and, amazingly, we're on track. So it's somewhere around then, I would think. I actually have, obviously, a probability distribution, and most of its mass is between five and ten years from now. Partly that's because predicting anything precisely five to ten years out is very difficult, so there are uncertainty bars around that, and there's also uncertainty about how many more breakthroughs are required. And it's also about the definition of AGI. I have quite a high bar, which I've always had: it should be able to do all of the things the human brain can do, even theoretically. That's a higher bar than, say, what a typical individual human can do, which is obviously very economically important; that would be a big milestone, but not, in my view, enough to call it AGI. We talked on stage a little bit about what is missing from today's systems: true out-of-the-box invention and thinking, inventing a conjecture rather than just solving a math conjecture. Solving one is pretty good, but actually inventing something like the Riemann hypothesis, something so significant that mathematicians agree it's really important, is much harder. And also consistency.
Consistency is really a requirement of generality. It should be very, very difficult even for top experts to find flaws, especially trivial flaws, in these systems, and today we can easily find them; the average person can find them. So there's a capabilities gap and there's a consistency gap before we get to what I would consider AGI.

And when you think about closing that gap, do you think it arrives via incremental 2-5% improvements in each successive model, stacked up over a long period of time, or do you think it's more likely that we'll hit some sort of technological breakthrough, and then all of a sudden there's liftoff and we hit some sort of intelligence explosion?

I think it could be both, and I think for sure both are going to be useful, which is why we push unbelievably hard on the scaling, on what you would call the incremental, although actually there's a lot of innovation even in that: pre-training, post-training, inference-time compute, all of that stack. There's a lot of exciting research there, and we showed some of it, the diffusion model, the Deep Think model. So we're innovating at all parts of that traditional stack, should we call it. And then on top of that we're doing more green-field things, more blue-sky things, like AlphaEvolve, maybe you could include in that.

Is there a difference between a green-field thing and a blue-sky thing?

I'm not sure; maybe they're pretty similar. Some new area, let's call it. And then that could come back into the main branch. As you both know, I've been a fundamental believer in foundational research. We've always had the broadest, deepest research bench, I think, of any lab out there, and that's what allowed us to make the past big breakthroughs: obviously transformers, but also AlphaGo, AlphaZero, all of these things, distillation. And to the extent any of those things are needed again, another big breakthrough of that level, I would back us to do it. We're pursuing lots of very exciting avenues that could bring that sort of step change, as well as the incremental, and then of course they interact, because the better your base models are, the more things you can try on top of them, again like AlphaEvolve, adding evolutionary programming, in that case, on top of the LLMs.

We recently talked to Karen Hao, a journalist who just wrote a book about AI, and she was making an argument essentially against scale: that you don't need these big general models that are incredibly energy-intensive and compute-intensive and require billions of dollars and new data centers and all kinds of resources. Instead, you could build smaller models, narrower models; you could have a model like AlphaFold that is just designed to predict the 3D structures of proteins. You don't need a huge behemoth of a model to accomplish that. What's your response to that?

Well, I think you need those big models. We love big and small models, but you often need the big models to train the smaller ones. We're very proud of our Flash models, we call them our workhorse models, really efficient, some of the most popular models; we use a ton of models of that size internally. But you can't build those kinds of models without distilling them from the larger teacher models.
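[Editor's note: to make the teacher-student idea concrete, here is a minimal sketch of knowledge distillation as commonly described in the literature (Hinton et al., 2015), not a description of Gemini's actual training recipe. The model objects, batch, and hyperparameters are all hypothetical placeholders.]

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Classic knowledge-distillation loss: KL divergence between the
    teacher's and student's softened output distributions. Illustrative
    only; real large-model distillation pipelines add much more."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (t * t)

def train_step(student, teacher, batch, optimizer):
    """Hypothetical training step: the frozen large 'teacher' labels a
    batch, and the small 'workhorse' student learns to match it."""
    with torch.no_grad():
        teacher_logits = teacher(batch)   # big model, not updated
    student_logits = student(batch)       # small model being trained
    loss = distillation_loss(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design point this illustrates is the one Hassabis makes: the small model's quality is bounded by the teacher's, which is why capable small models presuppose capable big ones.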
And even things like AlphaFold, and I'm obviously a huge advocate of more of those types of models, models that can tackle really important problems in science and medicine now, today, without waiting for AGI, will require taking the general techniques and then potentially specializing them, in that case around protein structure prediction. I think there's huge potential for doing more of those things, and we do, largely, in our AI-for-science work; we're producing something pretty cool on that front pretty much every month these days. I think there should be a lot more exploration there; probably a lot of startups could be built combining some kind of general model that exists today with some domain specificity. But if you're interested in AGI, you've got to push on both sides. It's not an either-or in my mind; I'm an "and": let's scale, let's look at specialized techniques combined with that, hybrid systems they're sometimes called, and let's look at new blue-sky research that could deliver the next transformers. We're betting on all of those things.

You mentioned AlphaEvolve, something that Kevin and I were both really fascinated by. Tell us what AlphaEvolve is.

At a high level, it's basically taking our latest Gemini models, actually two different ones, to generate ideas, hypotheses about programs and other mathematical functions, and then those go into an evolutionary programming process that decides which of them are most promising, and that gets ported into the next step.

Tell us a little bit about evolutionary programming; it sounds very exciting.

So it's basically a way for systems to explore new space. Like in genetics: what should we mutate to get a new organism? You can think about it the same way in programming or in mathematics: you change the program in some way, then you compare it to some answer you're trying to get, and the candidates that fit best according to some evaluation function go back into the next round of generating new ideas. We have our most efficient model, the Flash model, generating possibilities, and then we have the Pro model critiquing them and deciding which ones are most promising, to be selected for the next round of evolution.

So it's almost like an autonomous AI research organization, where you have some AIs coming up with hypotheses and other AIs testing and supervising them. And the goal, as I understand it, is to have an AI that can improve itself over time, or suggest improvements to existing problems.

Yes, it's the beginning of that, and I think that's why people are so excited about it, and why we're excited about it: it's the beginning of a kind of automated process. It's still not fully automated, and it's still relatively narrow. We've applied it to many things: chip design, scheduling AI tasks on our data centers more efficiently, even improving matrix multiplication, one of the most fundamental units of training algorithms. So it's actually amazingly useful already, but it's still constrained to domains that are provably correct, which obviously maths and coding are. We need to fully generalize that.
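[Editor's note: the loop Hassabis describes, a cheap generator model proposing candidate programs, a stronger critic scoring them, and the fittest surviving into the next generation, can be sketched generically as below. The function names `flash_propose` and `evaluate`, the population sizes, and the structure are all hypothetical; the real AlphaEvolve system is far more elaborate.]

```python
import random

def evolve(seed_program, flash_propose, evaluate,
           generations=20, population=16, survivors=4):
    """Toy evolutionary-programming loop in the spirit of the pipeline
    described above; not the actual AlphaEvolve implementation.

    flash_propose(parent) -> a mutated candidate (e.g. a fast LLM asked
                             to rewrite/improve the parent program)
    evaluate(candidate)   -> a fitness score; this only works in domains
                             with an automatic, provably correct check
                             (unit tests, a math identity, a speed benchmark)
    """
    pool = [seed_program]
    for _ in range(generations):
        # Mutation: surviving parents spawn a batch of variants.
        candidates = [flash_propose(random.choice(pool))
                      for _ in range(population)]
        # Selection: score everything and keep only the fittest.
        scored = sorted(pool + candidates, key=evaluate, reverse=True)
        pool = scored[:survivors]
    return pool[0]  # best program found
```

Note how the `evaluate` step is what confines this approach to "provably correct" domains, exactly the limitation Hassabis flags: without an automatic scorer, selection has nothing to select on.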
That's interesting, because I think for a lot of people the knock they have on LLMs in general is, "All you can really give me is the statistical median of your training data." But what you're saying is that we now have a way of going beyond that, to potentially generate novel ideas that are actually useful in advancing the state of the art.

That's right. And AlphaEvolve, using evolutionary methods, is another approach, but we already had evidence of this way back in the AlphaGo days. AlphaGo came up with new Go strategies, most famously move 37 in game two of our big world championship match against Lee Sedol. Okay, it was limited to a game, but it was a genuinely new strategy that had never been seen before, even though we've played Go for hundreds of years. That's when I kicked off our AlphaFold projects and science projects, because I was waiting to see evidence of that kind of spark of creativity, you could call it, or originality, at least within the domain of what we know. But there's still a lot further to go. We know that these kinds of models, paired with things like Monte Carlo tree search or reinforcement learning planning techniques, can get you to new regions of the space to explore, and evolutionary methods are another way of going beyond what the current model knows, forcing it into a new regime it hasn't seen before.

I've been looking for a good Monte Carlo tree for so long now, so if you could help me find one, it would honestly be a huge help.

One of these things could help.

Okay, great. So I read the AlphaEvolve paper, or, to be more precise, I fed it into NotebookLM and had it make a podcast that I could then listen to, that would explain it to me at a slightly more elementary level. One fascinating detail that stuck out to me is how you were able to make AlphaEvolve more creative, and one of the ways you did it was by essentially forcing the model to hallucinate. So many people right now are obsessed with eliminating hallucinations, but it seemed to me like one way to read that paper is that there's actually a scenario in which you want models to hallucinate, or be creative, whatever you want to call it.

Yes, I think that's right. Hallucination when you want factual things is obviously something you don't want. But in creative situations, you can think of it a little bit like lateral thinking in an MBA course: just generate some crazy ideas. Most of them won't make sense, but the odd one or two may get you to a region of the search space that turns out to be quite valuable once you evaluate it afterwards. So you can maybe substitute the word "hallucination" for "imagination" at that point. They're obviously two sides of the same coin.
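[Editor's note: the paper's specific mechanism isn't detailed here, but one standard knob for deliberately trading factual reliability for this kind of "imagination" is sampling temperature: dividing logits by a temperature above 1 flattens the output distribution, so low-probability "crazy" candidates get sampled often enough for a downstream evaluator to sift them. A minimal, self-contained illustration with made-up numbers:]

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Softmax sampling with a temperature knob. t < 1 sharpens the
    distribution (safe, predictable picks); t > 1 flattens it, making
    unlikely, 'imaginative' options more probable."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                                  # for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy logits for four candidate "ideas": one obvious, three long shots.
logits = [5.0, 1.0, 0.5, 0.1]
conservative = [sample_with_temperature(logits, 0.5) for _ in range(1000)]
adventurous  = [sample_with_temperature(logits, 2.0) for _ in range(1000)]
# At t=0.5 nearly every draw is idea 0; at t=2.0 the long shots surface
# often enough that an evaluator can discover the rare useful one.
```

This is the "two sides of the same coin" point in code: the same randomness that produces hallucinations in a factual setting produces candidate novelty in a search-and-evaluate setting.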
doesn’t that imply that when google if google deepmind does build agi or even super intelligence that it’s going to keep it to itself for a while rather than doing the responsible thing and informing the public well i think it’s a bit of both actually you need to for first of all alpha vololve is a very naent self-improvement thing right and it’s still got human in the loop and it’s um and it’s only shaving off you know albeit important percentage of points off of already existing tasks you know that’s valuable but it’s not some it’s not creating any kind of step changes uh and there’s a there’s a trade-off between you know carefully evaluating things internally before you release it to the public out into the world um and then also getting the extra critique back which is also very useful from the academic community and so on and also we we have a lot of trusted tester type of programs that we talk about where people get early access to these things um and um and then give us feedback and and stress test them uh including sometimes the the safety institutes as well but my understanding was you weren’t just like red teaming this internally within google we were actually like using it to make the data centers more efficient using it to make the kernels that train the ai models more efficient so i guess what this person is saying is like it’s just we we want to start getting good habits around these things now before they become something like agi and uh they were just a little worried that maybe this is going to be something that stays hidden for longer than it needs to so i don’t like you i would love your hear your response to that yeah well look i i mean i think that that system is not uh uh anything really that i would say you know has any risk on the agi type of front i think as we get and i think today’s systems still are not although very impressive are not that powerful um from a you know any kind of agi risk standpoint that maybe this person was talking about um and i think you need to have both you need to have incredibly rigorous internal tests of these things and then you need to also get collaborative inputs from external so i think it’s a bit of both i actually don’t know the details of uh of of the alpha uh uh process for the last few you know the first few months it was just function search before and then it become more general so it’s it’s sort of evolved it’s evolved itself over the last year in terms of becoming this general purpose tool um and it still has a lot of um way to go before we can actually use it in our main branch which is at that point i think then becomes more serious like with gemini it’s sort of separate from from that currently let’s talk about ai safety a little bit more broadly it’s been my observation that it seemed like if the further back in time you go and the less powerful ai systems you have the more everyone seemed to talk about the safety risk and it seems like now as the models improve we we hear about it less and less including you know at the keynote yesterday so i’m curious what you make of this moment in ai safety uh if you feel like you’re paying enough attention to the risk that could be created by the systems that you have and if you are as committed to it as you were say three or four years ago a lot of these outcomes seem less likely yeah well we’re we’re just as committed as we’ve ever been i mean we we’ve we’ve from the beginning of deep mind we plan for success so success meant something looking like this this is what we kind of 
I mean, it's still sort of unbelievable that it's actually happened, but it is within the Overton window of what we thought was going to happen if these technologies really did develop the way we thought they would. And attending to and mitigating those risks was part of that. So we do a huge amount of work on our systems. I think we have very robust red-teaming processes, both pre- and post-launch, and we've learned a lot. I'm sort of persuaded now that letting these systems, albeit early systems, have contact with the real world has been a useful thing overall, and I wasn't sure. Five or ten years ago I might have thought maybe it's better staying in a research lab and collaborating with academia, but actually there are a lot of things you don't get to see or understand unless millions of people try it. So it's this weird trade-off again: you only find all these edge cases when millions of smart people try your technology. However big your testing team is, it's only going to be a hundred people, or a thousand, which isn't comparable to tens of millions of people using your systems. But on the other hand, you want to know as much as possible ahead of time, so you can mitigate the risks before they happen. So this is interesting, and it's good learning. I think what's happened in the industry in the last two or three years has been great, because we've been learning while the systems are not that powerful or risky, as you were saying earlier. I think things are going to get very serious in two or three years' time, when these agent systems start becoming really capable. We're only seeing the beginnings of the agent era, let's call it, but you can imagine, and hopefully you understood from the keynote, what the ingredients are and how it's going to come together. And then I think we really need a step change in research on analysis and understanding and controllability. But the other key thing is, it's got to be international, and that's pretty difficult. I've been very consistent on that, because it's a technology that's going to affect everyone in the world, and it's being built by different companies in different countries. So you've got to get some international norms, I think, around what we want to use these systems for, and what kinds of benchmarks we want to test safety and reliability on. But there's plenty of work to get on with now; we don't have those benchmarks, and we, the industry and academia, should be agreeing on a consensus of what they are.

What role do you want to see export controls play in doing what you just said?

Export controls are a very complicated issue, and obviously geopolitics today is extremely complicated. I can see both sides of the arguments. There's uncontrolled proliferation of these technologies: do you want different places to have frontier-model training capability? I'm not sure that's a good idea. But on the other hand, you want Western technology to be the thing that's adopted around the world. So it's a complicated trade-off. If there was an easy answer, I would be shouting it from the rooftops.
But it's nuanced, like most real-world problems are.

Do you think we're heading into a bipolar conflict with China over AI, if we aren't in one already? I just recently saw the Trump administration making a big push to turn countries in the Gulf, like Saudi Arabia and the UAE, into AI powerhouses, to have them use American chips to train models that will not be accessible to China and its AI powers. Do you see that becoming the foundation of a new global conflict?

Well, I hope not, but I think, short term, AI is getting caught up in the bigger geopolitical shifts that are going on. It's just part of that; it happens to be one of the most topical new things. But on the other hand, what I'm hoping is that as these technologies get more and more powerful, the world will realize we're all in this together, because we are. And over the last few steps towards AGI, hopefully we're on the longer timelines, actually, the timelines I'm thinking about, so that we get time to build the collaboration we need, at least on a scientific level, before then.

Do you feel like you're in sort of the final home stretch to AGI? Sergey Brin, Google's co-founder, had a memo that was reported on by my colleague at The New York Times earlier this year, that went out to Google employees and said, we're in the home stretch, and everyone needs to get back to the office and be working all the time, because this is when it really matters. Do you have that sense of finality, or of entering a new phase, or an endgame?

I think we are past the middle game, that's for sure. But I've been working every hour there is for the last 20 years, because I felt how important and momentous this technology would be. We've thought it was possible for 20 years, and I think it's coming into view now. And whether it's five years or ten years or two years, those are all actually quite short timelines when you're discussing the enormity of the transformation this technology is going to bring. None of those timelines are very long.

We're going to switch to some more general questions about the AI future.

Sure.

A lot of people now, at least in conversations I'm involved in, are starting to think about what the world might look like after AGI. The context in which I actually hear the most about this is from parents who want to know what their kids should be doing or studying, and whether they'll go to college. You have kids that are older than my kid. How are you thinking about that?

So, when it comes to kids, and I get asked this quite a lot, usually by university students: first of all, I wouldn't dramatically change the basic advice on STEM. Getting good at things like coding I would still recommend, because whatever happens with these AI tools, you'll be better off understanding how they work, how they function, and what you can do with them. I would also say: immerse yourself now. That's what I would be doing as a teenager today, trying to become a sort of ninja at using the latest tools. I think you can almost be superhuman in some ways if you get really good at using all the latest, coolest AI tools.
But don't neglect the basics, because you need the fundamentals. And then I think we should teach meta-skills, really: things like learning to learn. The only thing we know for sure is that there's going to be a lot of change over the next ten years. So how does one get ready for that? What kinds of skills are useful? Creativity, adaptability, resilience. I think all of these meta-skills are what will be important for the next generation. And it will be very interesting to see what they do, because they're going to grow up AI-native, just like the last generation grew up mobile- and tablet-native, and the generation before that, which was my era, grew up with the internet and computers. The kids of each era always seem to adapt and make use of the latest, coolest tools. And I think there's more we can do on the AI side: if people are going to use these tools for school and education, let's make them really good for that, and provably good. I'm very excited about bringing AI to education in a big way, and also, if you had an AI tutor, bringing it to poorer parts of the world that don't have good educational systems. So I think there's a lot of upside there too.

Another thing that kids are doing with AI is chatting a lot with digital companions. Google DeepMind doesn't make any of these companions yet. Some of what I've seen so far seems pretty worrying: it seems pretty easy to create a chatbot that does nothing but tell you how wonderful you are, and that can lead into some dark and weird places. So I'm curious what observations you've had as you look at this market for AI companions, and whether you think, "I might want to build this someday," or, "I'm going to leave that to other people."

I think we've got to be very careful as we start entering that domain, and that's why we haven't yet, and we're being very thoughtful about it. My view on this is more through the lens of the universal assistant that we talked about yesterday: something that's incredibly useful for your everyday productivity, that gets rid of the boring, mundane tasks we all hate doing, to give you more time for the things you love doing. I also really hope these systems will enrich your life by giving you incredible recommendations on all sorts of amazing things you didn't realize you would enjoy, delighting you with surprising things. Those are the directions I'm hoping these systems will go. And actually, on the positive side, if this assistant becomes really useful and knows you well, you could program it, obviously with natural language, to protect your attention. You could almost think of it as a system that works for you, as an individual. It's yours, and it protects your attention from being assaulted by other algorithms that want your attention, which actually has nothing to do with AI: that's effectively what most social media sites' algorithms are doing, trying to grab your attention. I think that's actually the worst thing, and it would be great to protect against it, so we can be more in creative flow, or whatever it is you want to do. That's how I would want these systems to be useful to people. If you could build a system like that, I think people would be incredibly happy.
I think right now people feel assailed by the algorithms in their lives, and they don't know what to do about it.

Well, the reason is that you've got one brain, and, say with a social media stream, you have to dip into that torrent to get the piece of information you want. But you're doing it with the same brain, so you've already affected your mind and your mood and other things by dipping into that torrent to find the piece of information you wanted. If a digital assistant did that for you, you'd get only the useful nugget, and you wouldn't need to break your mood, or your concentration on whatever you're doing that day with your family, or whatever it is. I think that would be wonderful.

Yeah, Casey loves that idea.

You love that idea?

I love this idea of an AI agent that protects your attention from all the forces trying to assault it. I'm not sure how the ads team at Google is going to feel about it, but we can ask them when the time comes.

Some people are starting to look at the job market, especially for recent college graduates, and worry that we're already starting to see signs of AI-powered job loss. Anecdotally, I talk to young people who, a couple of years ago, might have been interested in going into fields like tech or consulting or finance or law, who are just saying, "I don't know that these jobs are going to be around much longer." A recent article in The Atlantic wondered if we're starting to see AI competing with college graduates for these entry-level positions. Do you have a view on that?

I haven't seen the studies on that, but maybe it's starting to appear now. I don't think there are any hard numbers on it yet, at least I haven't seen them. For now, I mostly see these as tools that augment what you can do and what you can achieve. Maybe after AGI things will be different again, but over the next five to ten years, I think we're going to see what normally happens with big new technology shifts: some jobs get disrupted, but then new, usually more valuable, usually more interesting jobs get created. I do think that's what's going to happen in the nearer term, for today's graduates and the next five years, let's say. It's very difficult to predict after that; that's part of the more societal change we need to get ready for.

I think the tension there is that you're right, these tools do give people so much more leverage, but they also reduce the need for big teams of people doing certain things. I was talking to someone recently who said they had been at a data science company in their previous job that had 75 people working on data science tasks, and now they're at a startup where one person does the work that used to require 75. So I guess the question I'd be curious to get your view on is: what are the other 74 people supposed to do?

Well, look, I think these tools are going to unlock the ability to create things much more quickly, so I think there will be more people doing startup things. There's a lot more surface area one can attack and try with these tools than was possible before.
So let's take programming, for example. Obviously these systems are getting better at coding, but the best coders, I think, are getting differential value out of them, because they still understand how to pose the question, architect the whole codebase, and check what the code does. Simultaneously, at the hobbyist end, they're allowing designers and maybe non-technical people to vibe-code things, whether that's prototyping games or websites or movie ideas. So in theory, those 70-odd people could be creating new startups. Maybe it's going to be less about bigger teams, and more about smaller teams that are very empowered by AI tools. But that goes back to the education question: which skills are now important? It might be different skills, like creativity, vision, and design sensibility, that become increasingly important.

Do you think you'll hire as many engineers next year as you hired this year?

I think so, yeah. I mean, there's no plan to hire fewer, but we have to see how fast the coding agents improve. Today they can't do things on their own; they're just helpful for the best human coders.

Last time we talked to you, we asked you about some of the more pessimistic views about AI among the public, and one of the things you said was that the field needed to demonstrate concrete use cases that were just clearly beneficial to people, to shift this. My observation is that there are even more people now who are actively antagonistic toward AI, and I think maybe one reason is that they hear folks at the big labs saying, pretty loudly, "Eventually this is going to replace your job," and most people just think, "Well, I don't want that." So I'm curious, looking back on that past conversation, whether you feel like we have seen enough use cases to start to shift public opinion, and if not, what some of those things might be that actually change views.

Well, I think we're working on those things; they take time to develop. A kind of universal assistant would be one of them, if it was really yours and working for you: technology that works for you. And this is what economists and other experts should be working on: does everyone manage a suite, a fleet, of agents that are doing things for you, including potentially earning you money or building you things? Does that become part of the normal job process? I could imagine that in the next four or five years. I also think that as we get closer to AGI and we make breakthroughs, we probably talked last time about materials science, energy, fusion, these sorts of things, helped by AI, we should start getting to a position in society where we're heading towards what I would call radical abundance, where there are a lot of resources to go around. Then, again, it's more of a political question of how you distribute that in a fair way. I've heard terms like "universal high income"; I think something like that is probably going to be good and necessary, but obviously there are a lot of complications there that need to be thought through. And then in between, there's this transition period, between now and whenever we have that sort of situation: what do we do about the change in the interim? And that depends on how long the interim is, too.
What part of the economy do you think AGI will transform last?

Well, I think the parts of the economy that involve human-to-human interaction and emotion will probably be the hardest things for AI to do.

But aren't people already doing AI therapy, and talking with chatbots for things they might have paid someone $100 an hour for?

Well, therapy is a very narrow domain, and I'm not sure exactly; there's a lot of hype about those things, because I'm not actually sure how much of that is really going on in terms of affecting the real economy, rather than being more toy things, and I don't think the AI systems are capable of doing that properly yet. But the kind of emotional connection we get from talking to each other, and doing things in nature, in the real world, I don't think AI can really replicate all of those things.

So if you lead hikes, that'd be a good job.

Yeah, climb Everest.

My intuition on this is that it's going to be some heavily regulated industry, where there will just be massive pushback on the use of AI to displace labor or take people's jobs, like healthcare or education or something like that.

But you think it's going to be an easier lift in those heavily regulated industries?

Well, I don't know. It might be, but then we have to weigh that up as a society against all the positives, for example curing all diseases, or finding new energy sources. I think these things would be clearly very beneficial for society, and we need them for our other big challenges. It's not as if there are no challenges in society other than AI; I think AI can be a solution to a lot of those other challenges, be that energy, resource constraints, aging, disease, water access, climate, a ton of problems facing us today. I think AI can potentially help with all of those, and I agree with you: society will need to decide what it wants to use these technologies for. But what's also changing, as we discussed earlier with products, is that the technology is going to keep advancing, and that will open up new possibilities, like radical abundance and space travel, things that are a little bit out of scope today unless you read a lot of sci-fi, but I think are rapidly becoming real.

During the Industrial Revolution, there were lots of people who embraced new technologies, moved from farms to cities to work in the new factories, were early adopters on that curve. But that was also when the Transcendentalists started retreating into nature and rejecting technology; that's when Thoreau went to Walden Pond, and there was a big movement of Americans who just saw the new technology and said, "I don't think so, not for me." Do you think there will be a similar movement around rejection of AI, and if so, how big do you think it'll be?

I don't know, but there could be a get-back-to-nature movement, and I think a lot of people will want to do that. This potentially gives them the room and the space to do it, if you're in a world of radical abundance.
I fully expect that's what a lot of us will want to do with it. I'm thinking about it more in terms of space-faring and maximum human flourishing, but getting back to nature will be exactly one of the things a lot of us choose to do, and we'll have the time and the space and the resources to do it.

Are there parts of your life where you say, "I'm not going to use AI for that," even though it might be pretty good at it, for some reason like wanting to protect your creativity or your thought process or something else?

I don't think AI is good enough yet to have impinged on any of those sorts of areas. I'm mostly using it for things like what you did with NotebookLM, which I find great: breaking the ice on a new scientific topic and then deciding whether I want to go deeper into it. That's one of my main use cases, along with summarization, those sorts of things; those are all just helpful. But we'll see. I don't have any examples yet of what you suggested, but maybe as AI gets more powerful, there will be.

When we talked to Dario Amodei of Anthropic recently, he talked about this feeling of excitement mixed with a kind of melancholy about the progress AI was making in domains where he had spent a lot of time trying to be very good, like coding: you see a new coding system come out that's better than you, and your first thought is, "That's amazing," and your second thought is, "Ooh, that stings a little bit." Have you had any experiences like that?

So, maybe one reason it doesn't sting me so much is that I had that experience when I was very young, with chess. Chess was going to be my first career; I was playing pretty professionally as a kid, for the England junior teams, and then Deep Blue came along, and clearly computers were going to be much more powerful than the world champion forever after that. But I still enjoy playing chess, and people still do; it's different. It's a bit like Usain Bolt: we celebrate him for running the 100 meters incredibly fast, and we've got cars, but we don't care about that, right? We're interested in other humans doing it. I think it'll be the same with robotic football and all these other things. And that maybe goes back to what we discussed earlier: in the end, we're interested in other human beings. Even a novel: maybe AI could one day write a novel that's technically good, but I don't think it would have the same soul or connection to the reader if you knew it was written by an AI, at least as far as I can see for now.

You mentioned robotic football. Is that a real thing? We're not sports fans, so I just want to make sure I haven't missed something.

I was meaning soccer. There are RoboCup soccer-type competitions, little robots trying to kick balls and things. I'm not sure how serious it is, but there is a field of robotic football.

You mentioned that a novel written by a robot might not feel like it has a soul. I have to say, as incredible as the technology in Veo or Imagen is, I sort of feel that way with it: it's beautiful to look at, but I don't know what to do with it, you know what I mean?

Exactly, and that's why we work with great artists.
Like Darren Aronofsky, and Shankar on the music. I totally agree. I think these are tools, and they can come up with technically good things. I mean, Veo 3 is unbelievable; I don't know if you've seen some of the things that are going viral and being posted at the moment, with the voices. I actually didn't realize how big a difference audio was going to make to the video; it really brings it to life. But it's still not, as Darren was saying yesterday when we were discussing this in an interview, he brings the storytelling; it doesn't have the deep storytelling of a master filmmaker or a master novelist at the top of their game. And it might never do. It might always feel like something's missing: the soul, for want of a better word, of the piece, the real humanity, the magic, if you like, of the great pieces of art. When I see a van Gogh or a Rothko, why does it touch you, why do the hairs stand up on the back of your neck? Because you know what they went through, the struggle to produce it; in every one of van Gogh's brushstrokes, his sort of torture. I'm not sure what it would mean even if an AI mimicked that; if you were told that, it would be like, so what? So I think that is the piece that, at least as far as I can see out to five or ten years, the top human creators will always be bringing, and that's why we've built all of our tools, Veo, Lyria, in collaboration with top creative artists.

The new pope, Pope Leo, is reportedly interested in AGI. I don't know if he's AGI-pilled or not, but it's something he's spoken about before. Do you think we'll have a religious revival, or a renaissance of interest in faith and spirituality, in a world where AGI is forcing us to think about what gives our lives meaning?

I think that potentially could be the case. I actually did speak to the last pope about that, and the Vatican has been interested, even prior to this pope, whom I haven't spoken to yet, in these matters: how do AI, and technology in general, and religion interact? What's interesting about the Catholic Church, and I'm a member of the Pontifical Academy of Sciences, is that they've always had, which is strange for a religious body, a scientific arm, which they like to say Galileo founded. And it's actually really separate; I always thought that was quite interesting. People like Stephen Hawking, avowed atheists, were part of the Academy, and that's partly why I agreed to join it, because it's a fully scientific body and it's very interesting. And I was fascinated that they've been interested in this for ten-plus years; they were on this early, in terms of how interesting this technology would be from a philosophical point of view. I actually think we need more of that type of thinking and work from philosophers and theologians; it would be really, really good. So I hope the new pope is genuinely interested.

We'll close on a question that I recently heard Tyler Cowen ask Jack Clark of Anthropic, which I thought was so good that I decided to steal it whole cloth: in the ongoing AI revolution, what is the worst age to be?

Oh, wow.
Well, gosh, I haven't thought about that. But I think any age where you can live to see it is a good age, because I think we are going to make some great strides with things like medicine. I think it's going to be an incredible journey. None of us knows exactly how it's going to transpire, it's very difficult to say, but it's going to be very interesting to find out.

Try to be young if you can.

Yes, young is always better. I mean, in general, young is always better.

All right, Demis, thanks so much for coming.

Thank you very much.