“I think if humanity collectively puts their mind to solving a problem, whatever it is, I think we can get there. So because of that, you know, I think I’m optimistic on the P(doom) scenarios. But that doesn’t mean- I think the underlying risk is actually pretty high. But I’m, you know, I have a lot of faith in humanity kind of rising up to the, to meet that moment.” — Sundar Pichai

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

GUEST BIO: Sundar Pichai is CEO of Google and Alphabet.

So lots of folks are talking about timelines for AGI, or ASI, artificial superintelligence. AGI, loosely defined, is basically human expert level at a lot of the main fields of pursuit for humans. And then ASI is what AGI becomes, presumably quickly, by being able to self-improve, becoming far superior in intelligence to humans across all these disciplines. When do you think we'll have AGI? Is 2030 a possibility?

There's one other term we should throw in there. I don't know who used it first, maybe Karpathy did: AJI. Have you heard AJI, artificial jagged intelligence? It sometimes feels that way, right? There's dramatic progress, you see what these models can do, and then you can trivially find them making numerical errors, or miscounting the R's in "strawberry," or something else which seems to trip up most models. So maybe we should throw that term in there. I feel like we are in the AJI phase: dramatic progress, some things don't work well, but overall you're seeing lots of progress.

But if your question is, will it happen by 2030? Look, we constantly move the line of what it means to be AGI. There are moments today, like sitting in a Waymo on a San Francisco street, with all the crowds and the people, as it works its way through, where I see glimpses of it. The car is sometimes kind of impatient trying to work its way through. Or using Astra, like in Gemini Live, asking questions about the world: "What's this skinny building doing in my neighborhood?" It's a street light, not a building. You see glimpses. That's why I use the word AJI, because then you also see stuff where obviously we're far from AGI. So you have both experiences simultaneously.

I'll answer your question, but I'll also throw out this: I almost feel the term doesn't matter. What I know is that by 2030 there'll be such dramatic progress that we'll be dealing with its consequences, both the positive externalities and the negative externalities, in a big way. That I strongly feel, whatever we may be arguing about the term; maybe Gemini can answer what that moment in time is in 2030. But the progress will be dramatic, right? So that I believe in. Will the AI think it has reached AGI by 2030? I would say we will just fall short of that timeline, so I think it'll take a bit longer. It's amazing: in the early days of DeepMind, in 2010, they talked about a 20-year time frame to achieve AGI, which is kind of fascinating to see. For me, the whole thing started with seeing what Google Brain did in 2012, and then acquiring DeepMind in 2014. Right close to where we're sitting, in 2012, Jeff Dean showed the image where the neural networks could recognize a picture of a cat and identify it; that was one of the early versions of Brain. So we all talked about a couple of decades. I don't think we'll quite get there by 2030; my sense is it's slightly after that. But I would stress it doesn't matter what that definition is, because you will have mind-blowing progress on many dimensions. For example, AI can create videos, and as a society we have to figure out some system by which we all agree that content is AI-generated and must be disclosed in a certain way, because how do you distinguish reality otherwise?
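One way to make "this is AI-generated" machine-checkable is to attach signed provenance metadata to the media. Google's actual approach here is SynthID watermarking, and industry efforts like C2PA attach signed manifests; the sketch below is only a minimal illustration of the signed-tag idea, not any of those systems. The field names, the shared HMAC key, and the generator name are all assumptions made up for the example.

```python
# Minimal sketch of signed "AI-generated" provenance metadata.
# Illustrates the disclosure idea only; real systems (SynthID, C2PA) embed
# watermarks in the media itself or use certificate-based signatures.
import hashlib
import hmac
import json

PROVENANCE_KEY = b"demo-shared-secret"  # assumption: a shared signing key


def tag_media(media: bytes, generator: str) -> dict:
    """Produce a provenance record declaring the media AI-generated."""
    record = {
        "sha256": hashlib.sha256(media).hexdigest(),
        "ai_generated": True,
        "generator": generator,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PROVENANCE_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_media(media: bytes, record: dict) -> bool:
    """Check the record matches the media and was not tampered with."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PROVENANCE_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["sha256"] == hashlib.sha256(media).hexdigest())


video = b"...fake video bytes..."
record = tag_media(video, generator="example-video-model")  # name is illustrative
print(verify_media(video, record))         # True
print(verify_media(video + b"x", record))  # False: media was altered
```

A metadata tag like this only survives cooperative pipelines, which is why watermarks embedded in the pixels themselves are the harder, more robust version of the same disclosure goal.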
Yeah, there's so many interesting things you said. First of all, just looking back at this recent, now seemingly distant, history with Google Brain: that was before TensorFlow, before TensorFlow was made public and open-sourced. So the tooling matters too, combined with GitHub and the ability to share code. Then you have the ideas of attention, transformers, and now diffusion. And then there might be a new idea that seems simple in retrospect but will change everything, and that could be in the post-training or inference-time innovations. I think Shad Cen tweeted that Google is just one great UI away from completely winning the AI race, meaning UI is a huge part of how that intelligence reaches people. I think Logan Kilpatrick likes to talk about this: right now it's an LLM, but when is it going to become a system, where you're talking about shipping systems versus shipping a particular model? So that matters too, how the system manifests itself and how it presents itself to the world. That really, really matters.

Oh, hugely so. There are simple UI innovations which have changed the world, right? And I absolutely think we will see a lot more progress there in the next couple of years. Is AI itself on a self-improving track for UI? Today we are constraining the models; they can't quite express themselves in terms of the UI to people. If you think about it, we've kind of boxed them in that way. But given these models can code, they should be able to write the best interfaces to express their ideas over time, right?

That is an incredible idea. So the API is already open; you create a really nice agentic system that continuously improves the way you can be talking to an AI.

Yeah. A lot of that is in the interface, and then of course the incredible multimodal aspect of the interface that Google's been pushing. These models are natively multimodal. They can easily take content from any format and put it in any format. They can write a good user interface. They'll probably understand your preferences better over time. All of this is the evolution ahead, right? And that goes back to where we started the conversation: I think there'll be dramatic evolutions in the years ahead.
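As a toy illustration of the "models writing their own interfaces" idea: instead of returning prose, the model returns a small UI spec that the client renders. Everything here is hypothetical; `call_model` is a stand-in for whatever LLM API is in use, and the JSON schema is invented for the sketch.

```python
# Hypothetical sketch of a model expressing its answer as a UI rather than text.
import json

def call_model(prompt: str) -> str:
    # Stand-in: a real implementation would send `prompt` to a model that has
    # been instructed to reply with a UI spec instead of prose.
    return json.dumps({
        "title": "Skinny structure in your neighborhood",
        "body": "It's a street light, not a building.",
        "actions": ["Show on map", "Ask a follow-up"],
    })

def render_html(spec: dict) -> str:
    """Turn the model's UI spec into minimal HTML a client could display."""
    buttons = "".join(f"<button>{label}</button>" for label in spec["actions"])
    return f"<h2>{spec['title']}</h2><p>{spec['body']}</p>{buttons}"

spec = json.loads(call_model("What's this skinny building doing in my neighborhood?"))
print(render_html(spec))
```

The "self-improving" part Pichai gestures at would come from the model revising its own specs based on which renderings users actually engage with, rather than a human hand-coding each screen.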
Maybe one more micro kitchen question, this even more ridiculous concept of P(doom). The philosophically minded folks in the AI community think about the probability that AGI, and then ASI, might destroy all of human civilization. I would say my P(doom) is about 10%. Do you ever think about this kind of long-term threat of ASI, and what would your P(doom) be?

Look, for sure. I've been very excited about AI, but I've always felt this is a technology where we have to actively think about the risks and work very, very hard to harness it in a way that it all works out well. On the P(doom) question, it wouldn't surprise you to hear that's probably another micro kitchen conversation that pops up once in a while, given how powerful the technology is. Maybe stepping back: when you're running a large organization, if you can align the incentives of the organization, you can achieve pretty much anything. If you can get people all marching toward a goal in a very focused, mission-driven way, you can pretty much achieve anything. It's very tough to organize all of humanity that way. But I think if P(doom) is actually high, at some point all of humanity will be aligned on making sure that's not the case, and so we'll actually make more progress against it, I think. So the irony is, there is a self-modulating aspect there. I think if humanity collectively puts its mind to solving a problem, whatever it is, we can get there. So because of that, I'm optimistic on the P(doom) scenarios. But that doesn't mean there's nothing to worry about; I think the underlying risk is actually pretty high. But I have a lot of faith in humanity rising up to meet that moment.

That's really, really well put. As the threat becomes more concrete and real, humans do come together and get their act together. The other thing I think people don't often talk about is the probability of doom without AI. There are all these other ways that humans can destroy themselves, and it's very possible, at least I believe so, that AI will help us become smarter, kinder to each other, more efficient. It'll help more parts of the world flourish, making them less resource-constrained, which is often the source of military conflict and tension. So we also have to weigh the P(doom) without AI against the P(doom) with AI, because it's very possible that AI will be the thing that saves human civilization from all the other threats.

I agree with you; I think that's insightful. Look, I've felt that to make progress on some of the toughest problems, it would be good to have AI as a pair helping you, right? So that resonates with me, for sure. Yeah.
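That comparison can be made concrete with a toy decomposition: split total risk into AI-caused risk and non-AI risk, and let AI mitigate some fraction of the latter. All numbers below are invented for illustration; nobody in the conversation states them.

```python
# Toy P(doom) comparison, with and without AI. All numbers are illustrative
# assumptions, not estimates from the conversation.
p_ai_doom = 0.10      # risk that AI itself causes doom
p_other_doom = 0.20   # baseline non-AI existential risk (war, pandemics, ...)
mitigation = 0.5      # fraction of the non-AI risk that AI helps eliminate

# Without AI, only the non-AI risks apply.
p_doom_without_ai = p_other_doom

# With AI: the AI risk, plus whatever non-AI risk survives mitigation,
# treating the two sources as independent.
p_residual_other = p_other_doom * (1 - mitigation)
p_doom_with_ai = 1 - (1 - p_ai_doom) * (1 - p_residual_other)

print(f"P(doom) without AI: {p_doom_without_ai:.3f}")  # 0.200
print(f"P(doom) with AI:    {p_doom_with_ai:.3f}")     # 0.190
```

Under these made-up numbers the net risk is lower with AI, which is exactly the point being argued: whether AI raises or lowers total P(doom) depends on how much of the other risk it mitigates, not just on the AI risk term alone.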

0:00 – Episode highlight
2:08 – Introduction
2:18 – Growing up in India
8:27 – Advice for young people
10:09 – Styles of leadership
14:29 – Impact of AI in human history
26:39 – Veo 3 and future of video
34:24 – Scaling laws
38:09 – AGI and ASI
44:33 – P(doom)
51:24 – Toughest leadership decisions
1:02:32 – AI mode vs Google Search
1:15:22 – Google Chrome
1:30:52 – Programming
1:37:37 – Android
1:42:49 – Questions for AGI
1:48:05 – Future of humanity
1:51:26 – Demo: Google Beam
1:59:09 – Demo: Google XR Glasses
2:01:54 – Biggest invention in human history
