FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

“We’re worried about the future, and the point at which you really want to get worried is called recursive self-improvement. Recursive self-improvement means: go learn everything, start now, and don’t stop until you know everything. This could eventually allow self-invocation of things. And imagine a recursive self-improvement system which gets access to weapons; you can imagine it doing things in biology that we cannot currently understand. So there is a threshold. Now, my standard joke about that is that when that thing starts learning on its own, do you know what we’re going to do? We’re going to unplug it. Because you can’t have these things running randomly around, if you will, in the information space and not understanding at all what they’re doing.”

Intro

“Something is about to change that I don’t think people have clocked yet. We’re going to have a very different world, and it’s going to happen very quickly, for the following reason: people tend to think of AI as language to language, and we’re going to move from language to action.”

“What are we going to do when superintelligence is broadly available to everyone? Well, in this case I’m both optimistic and also fearful. Obviously there are evil people and they’ll use it in evil ways, but I’m going to bet that the good people will root out evil. That’s historically been true in human society.”

“The systems will get so good that you and I, everyone in this audience, will have access to a polymath.”

“Let’s be a little proud that we are inventing a future that will accelerate physics, science, chemistry, and so forth.”

Google and the AI Revolution

Eric, first of all, thank you for your friendship and for your partnership. You’ve been an incredible friend, mentor, and supporter for XPRIZE, for Singularity, for all the things that we’ve been working on, and I just want to say a heartfelt thank you for that.

Thank you.

So the conversation we’ve had over the past 24 hours has been what we call the great AI debate, which is obviously a challenge; it’s not truly a debate, but it’s been the conversation around, as we evolve digital superintelligence that’s a million times and a billion times faster, how do we think about it? Is it our greatest hope or our gravest existential threat, and how do we steer the course there? We had Ray Kurzweil and Geoffrey Hinton on stage with us yesterday, as well as many people that you know: Emad Mostaque, Nat Friedman, and Guillaume Verdon. I’m curious how you’re steering this in your mind. Google, from its earliest roots, has been an AI-first company; I don’t think people realize that it’s always been fundamental to the organization. And Google actually developed all this technology way before anybody else but chose not to release it, just to make sure it was safe, which was the responsible thing to do. But your hand was forced. How do you think about fear versus optimism in your own mind?

Well, in this case I’m both optimistic and also fearful. You know, Larry Page’s PhD research was on AI, so you’re correct that Google was founded in the umbrella, if you will, of what AI was going to do, and for a while it seemed like about two-thirds of the world’s AI resources were hosted within Google, which is a phenomenal achievement on the part of the founders and the leadership. I think that people tend to think of AI from what they’ve seen in the movies, which is typically, you know, the sort of female-scientist-kills-the-killer-robot scenario. First, we haven’t figured out how to get robotics to work yet, but we certainly understand how to get information to work. I wrote a book with Dr. Kissinger called The Age of AI, and we have our second one, his last one (he died, unfortunately, late last year), called Genesis, coming out later this year, which is precisely on this topic. The thing to understand is that we’re going to have a very different world, and it’s going to happen very quickly, for the following reason. The systems will get so good that you and I, everyone in this audience, and everyone in the world, through their phone or what have you, will have access to essentially a polymath, as in the historic polymaths of old. So imagine if you had Aristotle to consult with you on logic, and you had Oppenheimer to consult with you on physics: not the person, but rather the knowledge.
That kind of scaling of intelligence, these truly brilliant people who were historically incredibly rare, means their equivalents would become generally available in the shorter term. So that’s the long-term answer to what we are going to do when superintelligence is broadly available to everyone. Obviously there are evil people, and they’ll use it in evil ways, but I’m going to bet that the good people will root out evil. That’s historically been true in human society.

The thing I would emphasize for this audience is that something is about to change that I don’t think people have clocked yet, which is that people tend to think of AI as language to language, and we’re going to move from language to action. Specifically and technically, it means that your text will essentially be compiled into a program that can then be used. So in your case, you’re doing a conference: start with all the potential conference members, call them up, figure out if they’re going to come, lock them in, figure out who are the most important ones, do the seating chart, and do it all by program. That’s something that humans do all day in what you do, and many of you do many things, but that will become automatic, just by a verbal command. Somebody else will say, “You know, I’d really like to see a competitor to Google, so build a search engine, sort the ranking, but do it using my algorithm, not the one Google uses, which I don’t like,” and the system will do the same thing. So you’re going to see this explosion in digital power on a per-person basis, and no one’s quite said it this way. Maybe you, since you’re very good at marketing, can come up with a name for this. It’s an abundance of intelligence, but it’s also, in your format, an abundance of action.

Yeah, it’s intentional AI, making things happen. And I think we’re heading towards the trillion-sensor economy, where everything is knowable, and your AI can then take actions based upon the information out there and execute through robotics and such. You’ve been very active in guiding national leaders on security, and that’s been really important work at this stage in your life.

AI’s Power and Impact Today

I want to hit on three of these; we have such a short period of time, so let me mention the three and then you can weave them as you would. The first is AI and US national security, the second is AI and competitiveness with China, and the third is the impact of AI on the upcoming US elections, which many people have said could be patient zero for a lot of concerns. How do you think about these three things, and how should we think about them?

So I’m part of a group that has looked very carefully at the real dangers of the current LLMs, and they’re scary. The conclusion of our group, which is roughly 20 people who are basically scientists, is that we think we’re okay now, and we’re worried about the future. The point at which you really want to get worried is called recursive self-improvement. Recursive self-improvement means: go learn everything, start now, and don’t stop until you know everything. This recursive self-improvement could eventually allow self-invocation of things. Imagine a recursive self-improvement system which gets access to weapons; and we’re doing things in biology that we cannot currently understand. So there is a threshold. Now, my standard joke about that is that when that thing starts learning on its own, do you know what we’re going to do? We’re going to unplug it, because you can’t have these things running randomly around, if you will, in the information space and not understanding at all what they’re doing.

Another threshold point is when two different agentic systems start interacting. Agents, as a computer science term, are today defined as LLMs with state; in other words, not only do they know how to go from input to output, but they also know what they did in the past, and they can make judgments based on that, so they accumulate knowledge. So there’s a scenario where your agent and my agent learn how to speak to each other, and they stop talking in English and start talking in a language that they have invented. What do we do in that case? Unplug the things. And we’ve seen that. So these are threshold points, and we’ll know when they’re happening. Another example will be when the system can start doing math on its own at the level of incredibly advanced math. That’s another threshold point. Now, will these things occur, and when will they occur? There’s a debate in the industry: some people think five years; I think it’s going to be longer. But those are the clear thresholds.
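A minimal sketch of that definition, an agent as an LLM call plus accumulated state, might look like the following. It is only an illustration: call_llm is a hypothetical placeholder rather than any particular vendor’s API, and the memory list is the “state” the agent carries between steps.

```python
# Illustrative sketch only: an "agent" as an LLM call plus accumulated state.
# `call_llm` is a hypothetical placeholder for whatever model API you use.
from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; replace with your provider's client."""
    return f"(model response to: {prompt[:60]}...)"


@dataclass
class Agent:
    name: str
    memory: list[str] = field(default_factory=list)  # state accumulated across steps

    def act(self, task: str) -> str:
        # Condition each new step on everything the agent has done before.
        history = "\n".join(self.memory)
        reply = call_llm(f"History so far:\n{history}\n\nNew task: {task}")
        self.memory.append(f"{task} -> {reply}")  # accumulate knowledge
        return reply


agent = Agent("conference-planner")
agent.act("List the invitees for the summit")
agent.act("Draft the seating chart using what you learned above")
```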
Now, with respect to AI safety in general: I was heavily involved with the UK summit in November and the executive order from the White House, and we’ve started a series of track-two dialogues with China. And Europe, of course, is its usual hopeless self. So I roughly know what everybody’s doing, and the governments are trying to tread lightly at the moment by doing essentially various forms of notification and self-regulation. If you look at the US executive order, for example, you’re not required to tell them what you’re doing, but above 10^26 flops, which is an arbitrary measure that we frankly just invented, you’re required to notify that the training event has begun. That seems like a reasonable compromise.
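For a sense of scale, here is a back-of-the-envelope sketch of what a 10^26-flop training run means in hardware time. The per-accelerator throughput and the cluster size below are illustrative assumptions, not figures from the conversation.

```python
# Rough scale of the 10**26 FLOPs reporting threshold (order of magnitude only).
threshold_flops = 1e26
flops_per_accelerator_per_s = 1e15   # assumption: ~1 petaFLOP/s sustained per chip
cluster_size = 10_000                # assumption: number of accelerators in the cluster

device_seconds = threshold_flops / flops_per_accelerator_per_s   # 1e11 device-seconds
days_of_training = device_seconds / cluster_size / 86_400        # ~116 days
print(f"~{device_seconds:.0e} device-seconds, i.e. roughly {days_of_training:.0f} "
      f"days of continuous training on {cluster_size:,} accelerators")
```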
We don’t know what the Chinese are going to do in this area, but you have to assume that they’re going to fear the broad-scale impacts of AI more than democracies will, because it will be used to disempower the state, and so we have to assume that their government will ultimately restrict it more than the West will.

China’s Race to Language Dominance

Eric, how do you benchmark China today, in terms of their capabilities in large language models and neural nets, against the US?

So I think, as the audience knows, the government did something good, which is that it restricted access to ASML and to H100, now H800, chips from Nvidia, although Nvidia is doing just fine without all that revenue. So China is now stuck at the A100 level. They’re roughly limited to, broadly speaking, seven nanometers, and lower is better; the chips that we’re using now are three nanometers, going down to two and then 1.4 or so. So it looks like the hardware gap is going to increase, and it also looks like the Chinese will be forced to do scalable software with lesser hardware. Can they pull it off? Absolutely. How will they do it? They’ll spend more money: if it costs us a billion dollars to do training, they’ll spend five billion. So it’s a temporary gap; it’s not a crippling gap, if you will, in the competition.

TikTok’s Growth and Election Interference

You asked about elections. One way to understand this is that people now, and it’s sad, don’t really get their information out of the traditional news sources. They get it out of, let’s think about it, YouTube, which is in my view well managed, Instagram, Twitter, Facebook, and TikTok. Now, TikTok is not really social media; TikTok is really television. Remember, it’s not really a function of what your friends are doing; it uses a different algorithm, which is super impressive, and it’s growing like crazy, and of course the US is busy trying to ban it, which is probably not a very good idea. But in any case, as with Twitter, with TikTok’s growth you should expect regulation of content, because every country regulates television in one form or another for precisely this issue of election interference. So I think the decisions that are made by the social media companies, with respect to how they present content, will determine how badly regulated they’re going to be in this election, because most people will encounter misinformation not because they built it but because they saw it through social media. So the secret will be that the social media companies understand the peril that they’re in with respect to the downside if they screw this up, on either side.

AD: The Future of Healthcare: Fountain Life

Everybody, I want to take a short break from our episode to talk about a company that’s very important to me and could actually save your life or the life of someone that you love. The company is called Fountain Life, and it’s a company I started years ago with Tony Robbins and a group of very talented physicians. You know, most of us don’t actually know what’s going on inside our body. We’re all optimists until that day when you have a pain in your side, you go to the physician or the emergency room, and they say, “Listen, I’m sorry to tell you this, but you have this stage three or four going on.” And you know it didn’t start that morning; it probably was a problem that’s been going on for some time, but because we never look, we don’t find out. So what we built at Fountain Life was the world’s most advanced diagnostic centers. We have four across the US today, and we’re building 20 around the world. These centers give you a full-body MRI, a brain and brain-vasculature scan, an AI-enabled coronary CT looking for soft plaque, a DEXA scan, a GRAIL blood cancer test, and a full executive blood workup. It’s the most advanced workup you’ll ever receive: 150 gigabytes of data that then go to our AIs and our physicians to find any disease at the very beginning, when it’s solvable. You’re going to find out eventually; you might as well find out when you can take action. Fountain Life also has an entire side of therapeutics. We look around the world for the most advanced therapeutics that can add 10, 20 healthy years to your life, and we provide them to you at our centers. So if this is of interest to you, please go and check it out: go to fountainlife.com/Peter. When Tony and I wrote our New York Times bestseller Life Force, we had 30,000 people reach out to us for Fountain Life memberships. If you go to fountainlife.com/Peter, we’ll put you at the top of the list. Really, it’s something that is, for me, one of the most important things I offer my entire family, the CEOs of my companies, and my friends. It’s a chance to really add decades onto our healthy lifespans. Go to fountainlife.com/Peter; it’s one of the most important things I can offer to you as one of my listeners. All right, let’s go back to our episode.
The Fight Against Misinformation

I saw recently some limitations put on Gemini in talking about elections and politics. Are other companies doing this, or is it just Google that’s stepping up?

Well, Google, again in my view, and I’m obviously biased, has always been at the forefront of this. In 2016, when Google faced the question of elections and the Trump-era interference, we did not have trouble, because we had done the advertising with a whitelist; in other words, you had to be approved. Whereas the others, in particular Facebook, which was ultimately the biggest casualty of this, had not put a whitelist in. Since then, Facebook has put a whitelist in. So there’s hope that the companies, who have a vested interest in their own survival, will manage this. I’ll let you speculate on X and Elon. But the important thing here, and I didn’t fully understand this until the last few years, is that when you run a large social network, there are well-funded opponents of information transparency who, for whatever reason, misinformation, disinformation, national security, what have you, want their information out there. I got in trouble one day because I asked why we would ever source from RT, which is Russia Today, and people yelled at me at the time. RT, after the Crimea invasion, was in fact banned for precisely this reason. So you really do have to be careful about the power of misinformation at scale. The misinformer is guilty, but so are the platforms if they spread it without checking, and that’s damaging to a democracy. It really does put democracies under threat, and this problem will only get worse. There are a gazillion videos now where, and I’ll give you an example, you can have a ChatGPT equivalent generate a text, you can generate the mouth movements, you can move the face, and so forth, and to the average person they’re indistinguishable from real. If you look at what happened with Taylor Swift and the deepfakes of her, there were plenty of systems trying to prevent the creation of the deepfakes, but people were so motivated to create these images that they managed to get around all of the checks and balances. So it is a war between the lock pickers and the lock makers, and the lock makers need to win, with disinformation, for the nation, frankly for democracy.

You have been involved in the inner workings of US national defense policy. How will AI change the business of war? Is it ultimately a positive, right now, helping us be more accurate?

I’ll say this, and it’ll sound cynical, but I genuinely mean it: the best thing about the Western militaries is that they’re not at war, and so they’re incredibly slow. There is a real war in the West, and that’s in Ukraine and Russia. I’ve now been many times to Ukraine, and I’ve provided some advice. I think that, however imperfect, we want to preserve the democracies in our world; it’s just better and safer to have democracies than autocracies, and certainly not ones that are busy invading the neighboring country. So what’s really going on in Ukraine is a vision of what’s happening in the future. And again, I’ll avoid my own history with respect to this, but a year ago I could go to the front and hang out and, you know, joke and so forth; the weather was nice, the food was good, kind of a thing. Now you cannot walk during the day or the night, because there’s a traffic jam of your drones and enemy drones, for both sides, on top, and it’s essentially a death zone.
So the ubiquity of drones means, in my view, that tanks and artillery and mortars go away as weapons of war. I’m a sufficient optimist that I believe that once countries figure out a way to make this ubiquitous notion of drones work for their own defense, it’ll become impossible to invade an adjacent country, because once the tanks roll, what you can do is just bomb them with drones, and a drone costs $5,000 or less while a tank costs around $5 million. The kill ratio is such that the tanks just don’t make it, and you can make enough drones to pull it off. The current drones are not particularly AI-sophisticated, but if the US government, in its infinite stupidity, were actually to do something right and approve the Ukraine aid package, it would give us another year. So my current phrase publicly is: let’s get another year here. In that year you’re going to see asymmetric innovation that can allow a smaller government, which is a new democracy trying hard, to counter the moves of a large and established invading power. I suppose the cynic would say, “Well, that means it’s going to get harder for the US to invade neighboring countries,” and I said, well, that may be true too. But having now seen real war, as opposed to what you see in the movies, and I have lots of drone death videos that I will not show anybody, it’s really horrific, and we want to do everything we can to stop war. I think there’s a scenario where AI makes it actually much less likely. With AI-empowered weapons there will certainly be far less collateral damage, because of the targeting. And again, this is lost on the various critics of what I and others are doing: the biggest casualties of war are not actually the soldiers but the civilians. So war is horrific, and if you have to have it, do it with the professionals and don’t kill kids and women and old ladies and bomb the buildings like the Russians have been doing with their tanks, which upsets me. Those are called war crimes.

I had Palmer Luckey on this stage last year describing what he’s doing with Anduril, and that was his key point: that precision is everything.

Yeah, and Palmer’s company has done a fantastic job. They’re one of the great US leaders in this space.

The Future of AI Safety

Yeah, for sure. Let’s talk about AI safety. The point’s been made over and over again in the last 24 hours that these AI models are our progeny; they’re built on our digital exhaust. How should we be training models? How should we be trying to maximize safety? Is containment ever an issue? How do you think about safety in our super-advanced AI models? I mean, the first rules were: don’t put it on the open internet, and don’t allow it to self-referentially improve itself. And we’ve put it out on the open internet, and we’ve had software coding software. So where do we go from here?

Well, let’s understand the structure of the future internet. At the moment the hyperscalers, the big ones, which essentially are Microsoft and OpenAI (kind of a pair), Google, Anthropic, Inflection, and a couple in China that are coming, have closed models. When I say closed, that means you don’t know how they work internally: the source code is not available, the weights are not available, and the APIs are limited in some way. And there’s been a debate in the industry for a long time about open versus closed models.
If you look at the open models that have come out, if you look at Mistral’s most recent models, if you look at Llama 3, each of these models is incredibly powerful; they get to roughly 80%. So the debate that’s going on in the industry is: will the open-source and closed models track each other, in other words, will open source lag a year or two, or will the hyperscalers get much bigger? That is essentially a question of dollars, training time, and so forth, and we’re talking about $250 million for a training run, $500 million for a training run, escalating quite quickly, and you see this in Nvidia’s stock price and so on. So the first question is: do you think there’ll be a small number or a large number of such things? My own view is that there’ll be a small number of incredibly powerful AGI systems, which will be heavily regulated because they’re so powerful (this is my personal view), and then a much larger number of what I’m going to call middle-sized models, which will be open source and which people will just plug in and out.

I looked very carefully at this question of whether you could selectively train; in other words, if you could delete the bad part of the information in the world and train only on good information, would you get a better model? Unfortunately, it appears that it doesn’t actually work that way: when you restrict training data, you actually get a more brittle model. So it looks like you’re better off, at least today with the current algorithms, building a large model and then restricting it with guardrails, with the so-called red teams and so forth. And the red teaming is clever, because what they do is have humans test something; they say, if it knows this, it must know something else, and that seems to be working. Eventually, the consensus of the groups that I have been working with is that red teaming will become its own business. I’ve been thinking about how to fund this philanthropically, because if you think about it, how do you know what an AI is doing unless an AI is watching it? And how can the AI that’s watching it know what the other AI discovered unless that AI tells it? But it doesn’t know how to tell you what it knows. So this conundrum is to be worked on; there are plenty of people working on this problem, and I think we’ll get it solved. But I think it’s important to say that these very large models are ultimately going to get regulated, and the reason is that they’re just too powerful. They’re going to be regulated because they need to be: they know too many ways of harm, as well as having enormous power for gain, the ability to cure cancer and fix our energy problems and make new materials, and on and on. I can go on and on about what they’ll be able to do, because they’re polymaths.
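The “how do you know what an AI is doing unless an AI is watching it” question can at least be pictured with a small sketch: one model drafts an answer and a second model reviews it against a written policy. Both call_llm calls below are hypothetical placeholders, and real oversight is of course far harder than this toy suggests.

```python
# Toy sketch of "an AI watching an AI": a monitor model reviews another model's
# output against a policy. `call_llm` is a hypothetical placeholder, not a real API.

POLICY = "Refuse to provide instructions that enable serious physical harm."


def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; replace with your provider's client."""
    return "ALLOW" if "Reply with exactly ALLOW or BLOCK" in prompt else "(draft answer)"


def monitored_answer(user_request: str) -> str:
    draft = call_llm(f"Answer the user:\n{user_request}")
    verdict = call_llm(
        f"Policy: {POLICY}\n"
        f"Candidate answer: {draft}\n"
        "Reply with exactly ALLOW or BLOCK."
    )
    return draft if verdict.strip().upper().startswith("ALLOW") else "[withheld by monitor]"


print(monitored_answer("Summarize today's AI policy news"))
```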
AD: Viome

Did you see the movie Oppenheimer? If you did, did you know that besides building the atomic bomb at Los Alamos National Labs, they spent billions on biodefense, the ability to accurately detect viruses and microbes by reading their RNA? Well, a company called Viome exclusively licensed the technology from Los Alamos Labs to build a platform that can measure your microbiome and the RNA in your blood. Viome has a product that I’ve personally used for years called Full Body Intelligence, which collects a few drops of your blood, spit, and stool and can tell you so much about your health. They’ve tested over 700,000 individuals and used their AI models to deliver members critical health guidance, like what foods you should eat and what foods you shouldn’t eat, as well as your supplements and probiotics, your biological age, and other deep health insights. And the results of the recommendations are nothing short of stellar. As reported in the American Journal of Lifestyle Medicine, after just six months of following Viome’s recommendations, members reported the following: a 36% reduction in depression, a 40% reduction in anxiety, a 30% reduction in diabetes, and a 48% reduction in IBS. Listen, I’ve been using Viome for three years; I know that my oral and gut health is one of my highest priorities. Best of all, Viome is affordable, which is part of my mission to democratize health. If you want to join me on this journey, go to viome.com/Peter. I’ve asked Naveen Jain, a friend of mine who’s the founder and CEO of Viome, to give my listeners a special discount; you’ll find it at viome.com/Peter.

Government Struggles with Rapid Tech Growth

We had two political leaders on stage with us yesterday, and you know the question is: can the government possibly keep up with this? From your own experiences inside the hallowed halls of our government and others, how are you seeing it? Is there enough attention, enough awareness, enough fear?

Well, fear is a heavy motivator for political leaders, especially if they’re worried about their own jobs. What I found in the Senate was that there’s a group of four, two Republicans and two Democrats, who really got it, and I worked very closely with them. We had a series of Senate hearings on this subject, which were well attended, and there’s a similar initiative now in the House. This is all happening so quickly that, in fairness to our political leaders, most of us have trouble understanding what’s going on; can you imagine a normal person who’s got political problems to deal with? So I think this is a situation where America, and I think it’s important to say that we should be very proud of our country. We spend all of our time complaining, but the fact of the matter is that the future is being invented in the United States and in the UK, our closest ally. And the fact of the matter is that, for the Chinese, for example, every Chinese training run starts with an open-source event and then moves on, right? So they get it, and they start with our great work. So let’s be a little proud that we are inventing a future that will accelerate physics, science, chemistry, and so forth. I’m working with people who are busy reading science journals, reading chemistry journals, generating hypotheses, and then labeling proteins and so forth in new ways, doing it all automatically and then using robotic farms to do it. The scale of innovation from this notion of read everything, take an action, write a program, and run the program is profound. And by the way, the innovation is not being done by the faculty; it’s being done by the graduate students. And guess what: the engine of growth in our society is the graduate students who are trying to get their PhDs. They invent whole industries, and then when they get their PhDs we kick them out of the country and send them home. Perhaps we should try to keep them in the US.

Perhaps we should staple a green card to the back of the doctoral degree.

Exactly. I think my point here is that everyone spends all their time with these sorts of concerns about how society will adapt. First, the systems are not prepared for this, so the government’s not prepared for it.
The companies who are doing the majority of the work have an enormous responsibility to maintain human values, to maintain decency, and to deal with some of the abuses that occur online, and they need to do it on their own. They need to clean up their own act if they don’t have it cleaned up now, and if they don’t, they’ll get regulated. Hopefully the industry as a group, which is what we’re trying to do, can present a coherent structure that manages the downside correctly but gives us this incredible upside, both for national security, which I work on most of the time, but also for health, science, and education.

You know, I remember when I was a gene jockey in the labs at MIT and Harvard Med School in the ’80s, when the first restriction enzymes came out, and there was huge fear about what that could mean. The biotech industry got together in a series of Asilomar conferences to self-regulate, and it worked, by the way. Is that same sort of conversation going on now in the AI leadership world?

In fact, we had a meeting in December which was an attempt at that; it was not Asilomar. There was a meeting a week ago at Asilomar, and there’s another meeting at Stanford in two weeks on the same subject. All of us are participating in them, and we’re talking about all these things precisely. If you go back to your training, way back when you were a doctor, the RAC (the Recombinant DNA Advisory Committee), which was the group that managed all of this, was actually created out of the scientists, not out of the government, and eventually the RAC was put under what is now HHS. So there’s a history here: the scientists, who really do understand what this thing can do but are otherwise clueless about its impact, typically can get the structure right, and then the government can figure out the human impact of it. And that’s the right partnership, in my view.

AI’s Potential Impact on Science

Last question, Eric, and again, thank you for your time and for the work that you do with Schmidt Futures. You’re a very curious individual across a multitude of different areas. I imagine that applying AI to discovering new physics, new math, new biology, and new materials has to be just extraordinary candy for you. What are you most excited about there?

Well, I’ve gone to a series of conferences in physics and chemistry, and I did not really understand a word of them, but here’s my report. They’re taking the LLMs and, more importantly, diffusion models. A diffusion model is essentially this strange thing where you take something, you add noise to it, and then you denoise it, and you get a more accurate version of the same thing. They’re using these tools in very complicated ways in physics to solve problems that are just not solved. A typical example is that, using physics equations or chemistry equations, we know precisely how the forces work; they’re just not computable by computers in the next 100,000 years. But you can use these new techniques to get approximations, and these approximations are good enough to solve the problem that you have in front of you, which is an estimation problem or an annealing problem or something like that.
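The add-noise-then-denoise idea can be sketched in a few lines. This is a toy DDPM-style forward process with the learned denoiser left as a stub; real scientific diffusion models use trained neural networks, which this is not.

```python
# Toy sketch of the diffusion idea: corrupt data with noise, then step back toward it.
# The "trained model" that would predict the noise is deliberately left out here.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)        # noise schedule
alpha_bar = np.cumprod(1.0 - betas)       # cumulative signal kept after t steps


def add_noise(x0: np.ndarray, t: int) -> tuple[np.ndarray, np.ndarray]:
    """Forward process q(x_t | x_0): mix the data with Gaussian noise."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps


def denoise_step(xt: np.ndarray, t: int, predicted_eps: np.ndarray) -> np.ndarray:
    """One reverse step: remove the noise a trained model would predict."""
    mean = (xt - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * predicted_eps) / np.sqrt(1.0 - betas[t])
    return mean  # variance term omitted in this toy version


x0 = rng.standard_normal(8)                # stand-in "data"
xt, eps = add_noise(x0, t=500)
closer_to_x0 = denoise_step(xt, 500, eps)  # with the true noise, this moves back toward x0
```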
I think the biggest area of impact is going to be biology, because biology is so vast and so unknown. And the way you do it is you basically do math solving through a thing called Lean, then you do all this chemistry work, and then it builds on top of that. In physics, there are people working on partial differential equation solvers, which are at the base of everything, and again they’re using variants of LLMs, but they’re not actually LLMs, and the math is impossible to understand. But that’s okay; I wasn’t good enough to do physics.

Quantum Simulations Revolutionize Drug Development

You know, I should have mentioned: are you still the chairman of SandboxAQ?

I am, yeah.

We had Jack Hidary here last year; he’s phenomenal and brilliant, and congratulations on the success of SandboxAQ. Jack will come back with us again next year. I have to imagine that, as explosive and exciting as AI is, quantum compute and quantum technologies are going to make that look like it’s standing still. Is that a fair statement?

Yeah, I’ve been waiting for quantum computing to arrive for about 20 years. The physical problem with quantum computing is the error rate, and so for one accurate qubit you need a thousand real ones, and so forth. People are working on this, but that stuff remains very hard. What Jack’s company, SandboxAQ, did is say: we’re not going to work on that; we’re going to basically build simulations of quantum and apply them to real-world problems. An interesting new thing they figured out, and I assume I can talk about this a little bit in public, is that they can take a drug, if you will, and, using quantum effects, but with a simulator of quantum because they don’t have a quantum computer, they can perturb it, and in the perturbations they can make the drugs more effective, longer lasting, with longer shelf life, what have you. That turns out to be an incredibly powerful and big industry, and it’s an example of a short-term impact of quantum that, for one, never occurred to me. I assumed we had to wait for quantum computers, but quantum simulation is so good now that you can make wins now, and that’s what he’s doing.

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.