FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

00:00 Introduction
01:05 In your book Genesis: Artificial Intelligence, Hope, and the Human Spirit, co-authored with Henry Kissinger, what are the key ideas about the evolution of AI?
02:35 What do you think are the biggest risks of AI—sentience, misinformation, inequality, or loneliness?
04:49 Do you think AI girlfriends could make loneliness worse and lead to bigger social problems like extremism and misogyny?
07:18 How can we get the benefits of AI while stopping the negative effects?
09:50 In the past, big tech avoided regulation through lobbying. Why do you think AI will be treated differently?
13:17 Should companies be held criminally liable for harmful AI interactions?
17:29 Should kids under 14 have smartphones or use social media? Should tech companies take more responsibility?
19:20 How do we balance regulating AI to protect people while staying ahead of countries like China?
23:22 Should there be international treaties to prevent AI weaponization, and could AI be used to help monitor and stop threats?
26:20 Is the U.S. holding back on AI treaties because we think we're the best?
31:04 Could the U.S. and China work together on AI security to solve a big part of the global threat?

Introduction

There are people who are absolutist on free speech, which I happen to agree with, but they confuse free speech of an individual versus free speech for a computer. I am strongly in favor of free speech for every human. I am not in favor of free speech for computers, and the algorithms are not necessarily optimizing the best thing for humanity. We're also going to have to change some of the laws, for example Section 230, to allow for liability in the worst possible cases, so that when someone is harmed by this technology we have a solution to prevent further harm.

Eric, where does this podcast find you?

I'm in Boston, at Harvard, giving a speech to students later today.

Oh, nice. So let's bust right into it. You have a new book out that you co-authored with the late Henry Kissinger, titled Genesis: Artificial Intelligence, Hope, and the Human Spirit. What is it about this book, or give us what you would call the pillars of insight, that'll help people understand the evolution of AI?

In your book Genesis: Artificial Intelligence, Hope, and the Human Spirit, co-authored with Henry Kissinger, what are the key ideas about the evolution of AI?

Well, the world is full of stories about what AI can do, and we generally agree with those. What we believe, however, is that the world is not ready for this, and there are so many examples, whether it's trust, military power, deception, economic power, the effect on humans, the effect on children, that are relatively poorly explored. The reader of this book doesn't need to understand AI, but they need to be worried that this stuff is going to be unmanaged. Dr. Kissinger was very concerned that the future should not be left to people like myself. He believed very strongly that these tools are so powerful, in terms of their effect on human society, that it was important that the decisions be made by more than just the tech people. The book is really a discussion about what happens to the structure of organizations, the structure of jobs, the structure of power, and all the things that people worry about. I personally believe that this will happen much, much more quickly than societies are ready for, including in the United States. In China it's happening very fast.

And what do you see as the real existential threats here? Is it that it becomes sentient? Is it misinformation, income inequality, loneliness? What do you think are the first and foremost biggest concerns you have about this rapid evolution of AI?

What do you think are the biggest risks of AI—sentience, misinformation, inequality, or loneliness?

There are many things to worry about. But before we get to the bad things, let me remind you of the enormous improvements in drug capability for health care, solutions to climate change, better vehicles, huge discoveries in science, greater productivity for pretty much everyone, a universal doctor, a universal educator. All these things are coming, and those are fantastic. But with them come real worries, because these systems are very powerful, and especially in the hands of an evil person, and we know evil exists, they can be used to harm large numbers of people. The most obvious one is their use in biology: can these systems at some point in the future generate biological pathogens that could harm many, many humans? Today we're quite sure they can't, but there are a lot of people who think that they will be able to unless we take some action, and those actions are being worked on now. What about cyberattacks? You have a lone actor, a terrorist group, North Korea, whoever your evil person or group is, and they decide to take down the financial system using a previously unknown attack vector, so-called zero-day exploits. The systems are so powerful that we are quite concerned that, in addition to democracies using them for gains, dictators will use them to aggregate power, and they'll be used in a harmful and military context.

I'm freaked out about these AI girlfriends. I feel as if the biggest threat in the US right now is loneliness that leads to extremism, and I see these AI girlfriends and AI searches popping up, and I see a lot of young men who have a lack of romantic or economic opportunities turning to AI girlfriends and beginning to sequester from real relationships. They become less likely to believe in climate change, more likely to engage with misogynistic content, sequester from school, their parents, work, and some of them become really shitty citizens. I think young men are having so much trouble that this low-risk entry into these faux relationships is just going to speedball loneliness and the externalities of loneliness. Your thoughts?

Do you think AI girlfriends could make loneliness worse and lead to bigger social problems like extremism and misogyny?

I completely agree. There's lots of evidence that there's now a problem with young men. In many cases the path to success for young men has been, shall we say, made more difficult, because they're not as educated as the women are now. Remember, there are more women in college than men, and many of the traditional paths are no longer as available, and so they turn to the online world for enjoyment and sustenance. But also, because of the social media algorithms, they find like-minded people who ultimately radicalize them, either in a horrific way like terrorism or in the kind of way that you're describing, where they're just maladjusted. This is a good example of an unexpected problem of existing technology. Now imagine that the AI girlfriend or boyfriend, but let's use the AI girlfriend as an example, is perfect: perfect visually, perfect emotionally. And the AI girlfriend, in this case, captures your mind as a man, to the point where she, or whatever it is, takes over the way you're thinking. You're obsessed with her. That kind of obsession is possible, especially for people who are not fully formed. Parents are going to have to be more involved, for all the obvious reasons, but at the end of the day parents can only control what their sons and daughters are doing within reason. Again using teenagers as an example, we have all sorts of rules about age of maturity: 16, 18, what have you, 21 in some cases. And yet you put a 12- or 13-year-old in front of one of these things, and they have access to every evil as well as every good in the world, and they're not ready to take it. So I think the general question of "are you mature enough to handle it," the general version of your AI girlfriend example, is unresolved.

So I think most people would agree that the pace of AI is scary and that our institutions and our ability to regulate are not keeping up with the pace of evolution here, and we saw exactly what happened with social media around this. What can be done? What's an example or a construct or framework you can point to where we get the good stuff, the drug discovery, the help with climate change, but attempt to screen out, or at least put in check, or put in some guardrails around the bad stuff? What are you advocating for?

How can we get the benefits of AI while stopping the negative effects?

I think it starts with having an honest conversation about where the problems come from. You have people who are absolutist on free speech, which I happen to agree with, but they confuse free speech of an individual versus free speech for a computer. I am strongly in favor of free speech for every human. I am not in favor of free speech for computers, and the algorithms are not necessarily optimizing the best thing for humanity. So, as a general point, we're going to have to have some conversations about at what age things are appropriate, and we're also going to have to change some of the laws, for example Section 230, to allow for liability in the worst possible cases, so that when someone is harmed by this technology we have a solution to prevent further harm. Every new invention has created harm. Think about cars: cars used to hit everything, and they were very unsafe. Now cars are really quite safe, certainly by comparison to anything in history. So the history of these inventions is that you allow for the greatness and you police the guardrails; you put limits on what they can do. It's an appropriate debate, but it's one that we have to have now for this technology. I'm particularly concerned about the issue that you mentioned earlier, the effect on the human psyche. Dr. Kissinger, who studied Kant, was very concerned, and we write in the book at some length about what happens when your worldview is taken over by a computer as opposed to your friends. You're isolated; the computer is feeding you stuff, and it's not optimized around human values, good or bad. God knows what it's trying to do; it's trying to make money or something, and that's not a good answer.

So I think most reasonable people would say, okay, some sort of regulation makes sense. Fossil fuels are a net good, I would argue pesticides are a net good, but we have emission standards and an FDA. Most people would, I think, loosely agree, or mostly agree, that some sort of regulation that keeps these things in check makes sense. Now let's talk about big tech, which you were an instrumental player in. You guys figured out a way, quite frankly, to overrun Washington with lobbyists and avoid all reasonable regulation. Why are things going to be different now than they were in your industry when you were involved in it?

In the past, big tech avoided regulation through lobbying. Why do you think AI will be treated differently?

Well, President Trump has indicated that he is likely to repeal the executive order that came out of President Biden, which was an attempt at this. So I think a fair prediction is that for the next four years there'll be very little regulation in this area, as the president will be focused on other things. What will happen in those companies is that if there is real harm, there's liability, there are lawsuits and things, so the companies are not completely scot-free. Our companies, remember, are economic agents, and they have lawyers whose jobs are to protect their intellectual property and their goals. So it's going to take, I'm sorry to say, it's likely to take some kind of a calamity to cause a change in regulation. I remember when I was in California when I was younger, the address on your California driver's license was public, and there was a horrific crime where a woman was followed to her home and then murdered based on that information, and then they changed the law. And my reaction was: didn't you foresee this? You put millions and millions of licenses' information out to the public, and you don't think that some idiot who's horrific is going to harm somebody? So my frustration is not that harm will occur, because I'm sure it will, but why did we not anticipate it? We should anticipate: make a list of the biggest harms. I'll give you another example: these systems should not be allowed access to weapons. Very simple. You don't want the AI deciding when to launch a missile; you want the human to be responsible. These kinds of sensible regulations are not complicated to state.

Are you familiar with Character.AI?

I am.

A really horrific incident: a 14-year-old establishes what he thinks is a relationship with an AI agent that he believes is a character from Game of Thrones. He's obviously unwell, although my understanding from his mother, who has understandably taken this on as an issue, is that he did not qualify as someone who was mentally ill. He establishes this very deep relationship with an obviously very nuanced character, and the net effect is he contemplates suicide and she invites him to do it, and the story does not end well. My view, Eric, is that if we're waiting for people's critical thinking to show up, or for the better angels of CEOs of companies that are there to make a profit, which is what they're supposed to do, they're doing their job, then we're just going to have tragedy after tragedy after tragedy. My sense is someone needs to go to jail, and in order to do that we need to pass laws showing that if you're reckless with technology, and we can reverse-engineer it to the death of a 14-year-old, you are criminally liable. But I don't see that happening. So I would push back on the notion that people need to think more critically; that would be lovely, but I don't see it happening. I have no evidence that any CEO of a tech company is going to do anything but increase the value of their shares, which I understand, and which is a key component of capitalism. It feels like we need laws that remove this liability shield. I mean, does any of this change until someone shows up in an orange jumpsuit?

Should companies be held criminally liable for harmful AI interactions?

I can tell you how we dealt with this at Google. We had a rule that in the morning we would look at things, and if there was something that looked like real harm, we would resolve it by noon and make the necessary adjustments. The example that you gave is horrific, but it's all too common, and it's going to get worse, for the following reason. Imagine you have a two-year-old, and you have the equivalent of a bear that is the two-year-old's best friend, and every year the bear gets smarter and the two-year-old gets smarter too: becomes three, four, five, and so forth. That now-15-year-old's best friend will not be a boy or a girl of the same age; it'll be a digital device. And such people, as highlighted in your terrible example, are highly suggestible. So either the people who are building the equivalent of that bear 10 years from now are going to be smart enough to never suggest harm, or they're going to get regulated and criminalized. Those are the choices. I used to say that the internet is really wonderful but it's full of misinformation, and there's an off button for a reason: turn it off. You can't do that anymore. The internet is so intertwined in our daily lives, all of us, every one of us, for good and bad, that we can't get out of the cesspool, if we think it's a cesspool, and we can't make it better, because it keeps coming at us. The industry, to answer your question, is optimized to maximize your attention and monetize it, so that behavior is going to continue. The question is how you manage the extreme cases. Anything involving personal harm of the nature that you're describing will be regulated, one way or the other.

Yeah, but at some point it's just damage we incur until then, right? We've had 40 congressional hearings on child safety and social media, and we've had zero laws.

In fairness, there is a very, very extensive set of laws around child sexual abuse, which is obviously horrific as well, and those laws are universally implemented and well adhered to. So we do have examples where everyone agrees what the harm is. I think all of us would agree that the suicide of a teenager is not okay, and so regulating the industry so it doesn't generate that message strikes me as a no-brainer. The ones that will be much harder are where the system has essentially captured the emotions of the person and is feeding them back to the person, as opposed to making suggestions. And we talk about this in the book: when the system is shaping your thinking, you are being shaped by a computer; you're not shaping it. Because these systems are so powerful, we worry, and again we talk about this in the book, about the impact on the perception of truth and on society: who am I, what do I do? Ultimately one of the risks here, if we don't get this under control, is that we will be the dogs to the powerful AI, as opposed to us telling the AI what to do. A simple answer to the question of "when" is that the industry believes that within 5 to 10 years these systems will be so powerful that they might be able to do self-learning. This is the point where the system begins to have its own actions, its own volition; it's called general intelligence, AGI, and the arrival of AGI will need to be regulated.

We'll be right back.

We know that social media and a lot of these platforms and apps, and time on phone, is just not a good idea. I'm curious what you think of my colleague Jonathan Haidt's work. Is there any reason for anyone under the age of 14 to have a smartphone?
And is there any reason for anyone under the age of 16 to be on social media? Shouldn't we age-gate it, the way we do pornography, alcohol, the military? Shouldn't the device makers and the operating systems specifically, including your old firm, get in the business of age-gating?

Should kids under 14 have smartphones or use social media? Should tech companies take more responsibility?

They should, and indeed Jonathan's work is incredible. He and I wrote an article together two years ago which called for a number of things in the area of regulating social media, and we start with changing a law called COPPA from 13 to 16. We are quite convinced that, using various techniques, we can determine the age of the person with a little bit of work. People say, well, you can't implement it; well, that doesn't mean you shouldn't try. So we believe that at least the pernicious effects of this technology on those below 16 can be addressed. When I think about all of this, we want children to be able to grow up with humans as friends, and I'm sure that with the arrival of powerful AI you're going to see a lot of regulation about child content: what can a child below 16 see. This does not answer the question of what you do with a 20-year-old, who is also still being shaped. As we know, men develop a little bit later than women, so let's focus on the underdeveloped man who's having trouble in college, or what have you. What do we do with them? That question remains open.

In terms of the idea that the genie is out of the bottle here, we face a very real tension: we want to regulate it, we want to put in guardrails, and at the same time we want to let our sprinters and our IP and our minds and our universities and our incredible for-profit machine run. The fear is that if you regulate it too much, the Chinese, or, you know, the Islamic Republic, which isn't quite as concerned, gets ahead of us on this technology. How do you balance that tension?

How do we balance regulating AI to protect people while staying ahead of countries like China?

There are quite a few people in the industry, along with myself, who are working on this, and the general idea is relatively light regulation, looking for the extreme cases. The extreme events would be a biological attack, a cyberattack, something that harmed a lot of people, as opposed to a single individual, which is always a tragedy; any misuse of these in war; any of those kinds of things we worry a lot about. And there are a lot of questions here. One of them is: do you think that if we had an AGI system that developed a way to kill all of the soldiers from the opposition in one day, it would be used? I think the answer, from a military general's perspective, would be yes. The next question is: do you think that the North Koreans, for example, or the Chinese would obey the same rules about when to apply that? And the answer is, no one believes that they would do it safely and carefully in the way US law would require; US law has a rule called "human in the loop," or "meaningful human control," that tries to keep these things from getting out of hand. So what I actually think is that we don't have a theory of deterrence with these new tools; we don't know how to deal with the spread of them. A simple example, and sorry for the diversion for a second: there's closed source and open source. Closed is, you can use it, but the software and the numbers are not available. There are other systems, called open source, where everything is published. China now has two of what appear to be the most powerful models ever made, and they're completely open. You and I are obviously not in China, and I don't know why China made the decision to release them, but surely evil groups and so forth will start to use those. Now maybe they don't speak Chinese, or what have you, or maybe the Chinese just discount the risk, but there's a real risk of proliferation of these systems into the hands of terrorists. And that proliferation is not going to occur by misusing Microsoft or Google or what have you; it's going to be by making their own servers on the dark web. An example, a worry that we all have, is exfiltration of the models. I'll give you an example. Google or Microsoft or OpenAI spends $200 million or something to build one of these models; they're very powerful. And then some evil actor manages to exfiltrate it out of those companies and put it on the dark web. We have no theory of what to do when that occurs, because we don't control the dark web; we don't know how to detect it, and so forth. In the book we talk about this and say that eventually the network systems globally will have fairly sophisticated supervision systems that will watch for this, because it's another example of proliferation. It's analogous to the spread of enriched uranium: if anyone tried to do that, there's an awful lot of monitoring systems that would say, you have to stop right now or we're going to shoot you.

So you make a really cogent argument for the kind of existential threat here, the weaponization of AI by bad actors, and we have faced similar issues before. My understanding is there are multilateral treaties around bioweapons, and we have nuclear arms treaties. So is this the point in time where people such as yourself and our defense infrastructure should be thinking about or trying to figure out multilateral agreements? And again, the hard part, as I understand it, is that it's very hard to monitor things like this. Should we have something along the lines of Interpol that's basically policing this, and then fighting fire with fire, using AI to go out and find scenarios where things look very ugly and move in with some sort of international force? It feels like a time for some sort of multinational cooperation is upon us. Your thoughts?

Should there be international treaties to prevent AI weaponization, and could AI be used to help monitor and stop threats?

We agree with you, and in the book we specifically talk about this in the historical context of the nuclear weapons regime, which Dr. Kissinger, as you know, largely invented. What's interesting, working with him, is you realize how long it took for the full solution to occur. America used the bomb in 1945; Russia, the Soviet Union, demonstrated theirs in 1949, so that was roughly a four-year gap, and then there was a real arms race. It took roughly 15 years for an agreement to come on limitations, during which time we were busy making an enormous number of weapons, which ultimately were a mistake, including these enormous bombs that were unnecessary. So things got out of hand. In our case, I think what you're saying is very important: that we start now. And here's where I would start. I would start with a treaty that says we're not going to allow any signatory of this treaty to have automatic weapons systems, and by automatic weapons I don't mean automated; I mean ones that make the decision on their own. So: an agreement that any use of AI of any kind in a conflict sense has to be owned and authorized by a human being who is authorized to make that decision. That would be a simple example. Another thing that you could do as part of that is say that you have a duty to inform when you're testing one of these systems, in case it gets out of hand. Now, whether these treaties can be agreed to, I don't know. Remember that it was the horror of nuclear war that got people to the table, and it still took 15 years. I don't want us to go through an analogous bad incident involving an evil actor, North Korea, again I'm just using them as a bad example, or even Russia, which today we obviously don't trust. I don't want to run that experiment, have all that harm, and then say, hey, we should have foreseen this.

Well, my sense is that when we're better at a technology, we're not in a hurry for a multilateral treaty, right? When we're under the impression that our nuclear scientists are better than yours, remember the "our Nazis are smarter than your Nazis" kind of thing, we don't want a multilateral treaty, because we see advantage. I'm curious if you agree with this: we have better AI than anyone else. Does that get in the way of a treaty, or should we be doing this from a position of strength? And also, if there's a number two, and maybe you think we're not number one, but assuming you think that the US is number one in this, who is the number two? Who do you think poses the biggest threat? Is it their technology or their intentions, or both? If you were to hear that one of these really awful things took place, who would you think are the most likely actors behind it? Is it a rogue state? Is it a terrorist group? Is it a nation-state?

Is the U.S. holding back on AI treaties because we think we’re the best?

In the first place, I think that the short-term threats are from rogue states and from terrorism, because, as we know, there are plenty of groups that seek harm against the elites in any country. Today the competitive environment is very clearly the US, with our partner the UK. I'll give you an example. This week there were two libraries from China that were released open source. One is a problem solver that's very powerful, and another one is a large language model that equals, and in some cases exceeds, the one from Meta that everyone uses, Llama 3, the 400-billion-parameter one. I was shocked when I read this, because I had assumed, from my conversations with the Chinese, that they were two to three years behind. It looks to me like it's within a year now. So it's fair to say it's the US, and then China within a year's time, and everyone else is well behind. Now, I'm not suggesting that China will launch a rogue attack against an American city, but I am alleging that it's possible that a third party could steal from China, because it's open source, or from the US if they're malevolent, and do that. So the threat escalation matrix goes up with every improvement. Today the primary use of these tools is to sow misinformation, which is what you talked about. But remember that there's a transition to agents, and the agents do things: it's a travel agent, or whatever. And the agents speak English; you give them English and they respond in English, so you can concatenate them. You can literally have agent one talk to agent two, agent two talk to agent three, agent three talk to agent four, and there's a scheduler that makes them all work together. So, for example, you could say to these agents: design me the most beautiful building in the world, go ahead and file all the permits, negotiate the fees of the builders, tell me how much it's going to cost, and tell my accountant that I need that amount of money. That's the command. Think about that: the ability to put together an integrated solution that today takes a hundred very talented people, and you can do it with one command. That acceleration of power could also be misused. I'll give you another example. You were talking earlier about the impact on social media. I saw a demonstration in England. The first command was: build a profile of a woman who's 25, she has two kids, and she has the following strange beliefs. And the system wrote the code and created a fake persona that existed on that particular social media platform. Then the next command was: take that person and modify that person into every possible stereotype, every race, sex, age, demographic, and so forth, with similar views, and populate that. And 10,000 people popped up, just like that. So if you wanted, for example, and this is true today, to create a community of 10,000 fake influencers to say, for example, that smoking doesn't cause cancer, which as we know is not true, you could do it, and one person with a PC can do it. Imagine when the AIs are far, far more powerful than they are today.
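To make the agent-chaining idea concrete, here is a minimal sketch of what Schmidt describes: agents that take English in and produce English out can be composed into a pipeline by a simple scheduler. Everything here is a hypothetical illustration, not any specific product's API; `call_llm`, `make_agent`, `scheduler`, and the pipeline stages are invented names standing in for whatever model and roles you would actually use.

```python
# Minimal sketch of agent chaining: each "agent" takes English in and
# returns English out, so a scheduler can feed one agent's output into
# the next. `call_llm` is a hypothetical stand-in for a real model API.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to any large language model provider."""
    raise NotImplementedError("wire this to your model provider")

def make_agent(role: str):
    """An 'agent' here is just a role-specific prompt wrapper."""
    def agent(task: str) -> str:
        return call_llm(f"You are a {role}. Complete this task:\n{task}")
    return agent

def scheduler(agents, command: str) -> str:
    """Run agents in sequence; English out becomes English in."""
    result = command
    for agent in agents:
        result = agent(result)
    return result

# Illustrative pipeline mirroring the building example in the interview.
pipeline = [
    make_agent("architect who designs a building"),
    make_agent("clerk who drafts the permit filings"),
    make_agent("negotiator who estimates the builders' fees"),
    make_agent("accountant who totals the cost"),
]

# One command drives the whole chain:
# print(scheduler(pipeline, "Design me the most beautiful building in the world."))
```

The point of the sketch is the composition property he highlights: because the interface between stages is plain language, chaining more agents requires no new protocol, which is exactly why the capability, and its misuse, scales so easily.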
So one of the things that Dr. Kissinger was known for, and quite frankly I appreciated, was this notion of realpolitik: obviously we have aspirations around the way the world should be, but as it relates to decision-making, we're also going to be very cognizant of the way the world is, and he's credited with a lot of very controversial, or difficult, decisions, depending on how you look at it. What I'm hearing you say, in my quote-unquote critical-thinking, or lack-thereof, brain, is that all these roads lead to one place: there's a lot of incentive to kiss and make up with China and partner around this stuff. If China and the US came to an agreement around what they were going to do or not do, and bilaterally created a security force, and agreed not to sponsor proxy agents against the West or each other, that would be a lot of progress. That might be 50, 60, 80 percent of the whole shooting match, if the two of us could say, we're going to figure out a way to trust each other on this issue, and we're going to fight the bad guys together. Your thoughts?

Could the U.S. and China work together on AI security to solve a big part of the global threat?

Dr. Kissinger, of course, was the world's expert on China. He opened up China, which is one of his greatest achievements, but he was also a proud American, and he understood that China could go one way or the other. His view on China, and he wrote a whole book on this, was that China wanted to be the Middle Kingdom, as in their history, where they sort of dominated all the other countries. It's not like America: his view was that they wanted to make sure the other countries would show fealty to China, in other words do what China wanted, and occasionally, if they didn't do something, China would extract some payment, such as invading the country. That's roughly what Henry would say. So he was very much a realist about China as well. His view would be at odds today with Trump's view and the US government's; the US government is completely organized today around decoupling, that is, literally separating. His view, which I can report accurately because I went to China with him, was that we're never going to be great friends, but we have to learn how to coexist, and that means detailed discussions on every issue, at great length, to make sure that we don't alarm each other or frighten each other. His further concern was not that President Xi would wake up tomorrow and invade Taiwan, but that you would start with an accident, and then there would be an escalatory ladder, and because of the emotions on both sides you'd end up just like in World War I, which started with a shooting in Sarajevo, and within a few months people found that they were in a world war that they did not want and did not expect. And once you're in the war, you have to fight. So the concern with China would be, roughly, that we are codependent, and while we're not best friends, being dependent is probably better than being completely independent, that is, non-dependent, because it forces some level of understanding and communication.

Eric Schmidt is a technologist, entrepreneur, and philanthropist. In 2021 he founded the Special Competitive Studies Project, a nonprofit initiative to strengthen America's long-term competitiveness in AI and technology more broadly. Before that, Eric served as Google's chief executive officer and chairman, and later as executive chairman and technical advisor. He joins us from Boston. Eric, in addition to your intelligence, I get the sense your heart's in the right place and you're using your human and financial capital to try and make the world a better place. Really appreciate you and your work.

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.