“We’re just not ready as a society for the implications of this powerful arrival of intelligence.” — Eric Schmidt
Ex-Google CEO Eric Schmidt says AI superintelligence will be developed in the next 10 years and whichever country or company gets there first will have “an asymmetric, powerful monopoly for decades to come” due to recursively self-improving intelligence pic.twitter.com/WN113s1DOE
— Tsarathustra (@tsarnick) December 14, 2024
“There are many people in our industry who think that the arrival of and development of this new intelligence is so important it should be done in a multinational way. It should be done in the equivalent of CERN, which is the great physics laboratory in Switzerland. The political tensions and the stress over values are so great, there’s just no scenario — there’s just, I want to say it again, there’s just no scenario where you can do that.” — Eric Schmidt
13 Dec 2024 — With the approach of a new year, and the prospect of accelerating technological advancement, all eyes are on AI. The new best-selling book “Genesis: Artificial Intelligence, Hope and the Human Spirit” is putting the new tech under the microscope, taking a good look at how it could help us, and how we might stop it from hurting us. Co-author and former Google CEO Eric Schmidt joins the show to discuss.
BIANNA GOLODRYGA: Well, we turn now — as a new year and rapid technological advancement approach, all eyes are on AI. It’s what everyone’s been talking about, and the new bestselling book “Genesis: Artificial Intelligence, Hope and the Human Spirit” is putting the new technology under the microscope, taking a look at how it could help us and how it can stop us from hurting ourselves. Co-author and former Google CEO Eric Schmidt joins Walter Isaacson to discuss.

WALTER ISAACSON: Thank you, Bianna. And Eric Schmidt, welcome back to the show.

ERIC SCHMIDT: Thank you, Walter. It’s great to see you.

ISAACSON: This new book, which you wrote with the late Dr. Henry Kissinger before he died, and Craig Mundie, longtime of Microsoft, is about how we’re supposed to handle AI, but it’s even more philosophical. You say it’s a question of human survival you’re addressing. Why do you say that, and why did you call it “Genesis”?

SCHMIDT: We believe, and in particular Dr. Kissinger believed, that the AI revolution is of the scale of the Reformation. Right now we’re used to being the top dog, if you will, and we determine reason, we determine outcomes. With the AI revolution there’s always a danger that we’re going to be the dog to the computers — in other words, they’re going to tell us what to do. And the book is really a statement of how important human dignity is: the ability to think and to be free, to not be subject to surveillance, all of the things that are possible on the downside of the AI revolution. We also spent a lot of time talking about what AI can do, and I’ll give you a simple example. In a few years, all of us believe, you’ll have in your pocket 90 percent of Leonardo da Vinci, something you know a lot about, 90 percent of the greatest physicists, the greatest chemists. What will it be like where each and every one of us has that kind of discovery capability? In our book we start by talking about polymaths, something, again, you know a lot about because of your previous writing. Polymaths are really important; they change the course of history. Well, what happens when everyone has their own polymath? We’re just not ready as a society for the implications of this powerful arrival of intelligence.

ISAACSON: Well, one of the things you say, because you’re generally a techno-optimist, is that sometimes you worry we aren’t going fast enough. What do you mean by that?

SCHMIDT: Well, we have to start with all the things that this AI intelligence will provide: much faster cures for diseases. How do you want to solve climate change? You need new energy; you need AI to do that. A universal doctor, universal teachers, getting every human on the planet to their top potential, making businesses far more efficient, which means more profits but also more growth, more jobs and so forth. All of those are going to happen, and they’re going to happen very, very quickly. And the downsides are also quite serious: the ability to do cyberattacks, nation-state tensions, misinformation. One of my theories is that most of what you see politically now is because everyone’s online, and they’ve all found their special tribal groups, and they’ve all decided that they all believe the same thing, even though reality is much more subtle than what the individual seems to believe.

ISAACSON: Do you think this should be left to the technologists like you and Craig Mundie?

SCHMIDT: Well, Dr. Kissinger got started in this almost 10 years ago because he was very clear that people like myself should not be making these decisions. Let’s look at social media. We’ve now arrived at a situation where we have these huge companies, of which I was part, and they all have this huge positive implication for entertainment and culture, but they have significant negative implications in terms of tribalism, misinformation, individual harm, especially against young people and especially against young women. None of us foresaw that. Maybe if we’d had some non-technical people doing this with us, we would have foreseen the impact on society. I don’t want us to make that mistake again with a much more powerful tool.

ISAACSON: You talk about social media, and you say that people — the technologists, you, Google — didn’t really foresee some of the downsides. My colleague Hari Sreenivasan has been doing a lot on this show talking about the algorithms, and how the algorithms incent depression, sometimes incent enragement, not just engagement. Is it baked into the algorithms, and if so, should we hold social media to account for that?

SCHMIDT: We should, and it’s simple to understand. I am personally very strongly in favor of human speech, including human speech which is terrible and I don’t agree with; that’s my personal view. But I’m not in favor of computer algorithm speech being the same thing. What they’re doing is boosting based on an algorithm. So let’s imagine you and I found a company, and we’re perfect — we have no biases whatsoever — but we want to maximize revenue. Well, the best way to maximize revenue is to maximize engagement, and the best way to do that is outrage. So even if you and I are well-meaning, we could have no bias, we want to be truthful and all of that, our system will produce these holes, these cubicles, these caves that people end up in.

ISAACSON: Do you think we should get rid of what’s sometimes called Section 230 protections, which is that part of the law that says you don’t hold a platform accountable for what gets posted and maybe even amplified?

SCHMIDT: You know, Section 230 was passed in roughly 1994, so it’s about 30 years old. We had no idea that the internet would be used for this, and so we simply asked — and I was part of it at the time — we just wanted an exemption for technology and content we didn’t own. That doesn’t make sense anymore. There need to be restrictions on Section 230 for the worst cases. I’m talking about things where there’s real harm — harm to people, especially to young people. We have to change it.

ISAACSON: Would you count harm to democracy in that list?

SCHMIDT: I think democracies are being harmed by the tribalism and by the misinformation, but I doubt we’re going to come to an agreement, certainly not in the U.S. but also in many other countries, as to what truth is. So I think the best way that we can handle social media is basically to say that if there’s a real harm, this thing has to get stopped. If it’s a case where I tell you one thing and you say another and it’s an open debate, that’s probably not going to destroy democracy.

ISAACSON: Tell me how these issues increase the problems as we move from social media — meaning networks like X or Facebook — to AI.

SCHMIDT: There are two really big things happening right now in our industry. One is the development of what are called agents, where agents can do something. So you can say, I want to build a house, and finding the architect, going through the land use, buying the house — that can all be done by computer, not just by humans. And then the other thing is the ability for the computer to write code. So if I say to you, I want to study the audience for this show, and I want you to figure out how to make a variant of my show for each and every person who’s watching it — the computer can do that. That’s how powerful the programming capabilities of AI are. In my case, I’ve managed programmers my whole life. They typically don’t do what I want; they do whatever they want. But with a computer, it’ll do exactly what you say. And the gains in computer programming from the AI systems are frightening. They’re also enticing, because they will change the slope. Right now the slope of AI is like this, and when you have AI scientists — that is, computers developing AI — the slope will go like this. It’ll go wham. But that development puts an awful lot of power in the hands of an awful lot of people.

ISAACSON: Let me ask this in a very broad way. We often talk about a duty of care that corporations have. What is the duty of care that you think AI companies should have?

SCHMIDT: Well, I’m all in favor of AI companies inventing this new future, and I understand that they will make mistakes and there will be some initial harm; some bad thing will happen. The secret is not that something bad happens, but that it doesn’t happen again. When we were running Google (Larry and Sergey are now doing other things), we had a rule that if anything happened in the morning, we would fix it by noon. We were on it. And I think that kind of active management of social media, and of AI in general for consumer products, is going to be crucial.

ISAACSON: You and I first discussed this, I think with Dr. Kissinger, when we were all in China four or five years ago. How do you think the Chinese are progressing on AI since then? And one of the things we talked about was that they have restrictions on free speech. Is that going to help them or hurt them in this regard?

SCHMIDT: Well, they believe that those restrictions help them, and obviously that’s horrific — it’s a violation of the sort of liberal Western order — but I can’t fix that. When I was in China with Dr. Kissinger a year and a half ago, I was quite convinced that China was about two years behind us. It looks like, unfortunately, I was wrong, and that even with all of the chip restrictions that we put in — which Trump put in, President Biden put in as well, all the right, well-meaning things — the Chinese have gotten very close to our top models. Now, you sit there and you go, why is this important? Because these are models that can show planning; they can begin to do physics; they can do math. These models are now at the graduate level of math and physics people, right — true in China and in the United States. So China clearly understands the value of having what is generally called general intelligence, as it applies to its national security, to its business goals, to its societal goals, and to the surveillance that is characteristic of the state. The West needs to win that battle. It’s really important that the systems that we use reflect American and Western liberal values, such as freedom of thought, freedom of expression, the dignity of all the people involved. I’m very, very worried that in this contest they’re now so focused that they’re not only catching up, but they will catch up. And remember that the country or the company that develops the system that is smarter than any human in the world — this is called superintelligence — can then apply that to itself to get smarter and smarter and smarter. There are people who believe that such a system, when it appears — and we believe it will appear, probably within the next decade — will give that country or company an asymmetric, powerful monopoly for decades to come. We just don’t know.

ISAACSON: This fear that China is catching up and may soon surpass us in AI — is that an argument to not put too many regulations and restrictions in the U.S. on the development of AI in America?

SCHMIDT: I think, based on what President Trump has said, any existing restrictions are likely to be eliminated. And China — they are also moving so quickly. Their only restrictions are done after the fact, so basically you can do whatever you want, but if you do something really bad, they will come and arrest you. So it’s done that way. It’s very important right now in America to allow this innovation to occur during this critical time, as quickly as we can. Now, I know people say, oh, that’s terrible, that means my privacy will be violated. We’ll deal with that if it happens. But right now, the sense of destiny that my industry has — that somehow we’re building something larger than ourselves, that the arrival of this intelligence that I’m discussing is so much more powerful than people appreciate — means we have to do it. I will tell you, by the way, I don’t think Western democratic systems are ready for this. There are huge implications: wealth distribution, access, privacy, all the things that everyone talks about. But let’s make sure we win. I do not want to have China win this one ahead of us. It’s too important.

ISAACSON: You talk about the danger of too many regulations. Well, now you have the Trump administration coming in. You have David Sacks, who’s very much a techno-progressive, pushing for technology, a very close friend of Elon Musk. How do you think Trump, David and others will be looking at AI?

SCHMIDT: I’m assuming that they’re going to follow a laissez-faire, no-regulation approach. The president has indicated that he’s not going to continue some of the AI regulations that were put in place by President Biden. So my prediction will be that we’ll start with no regulation, but that there will be a major project within the next administration to understand the China-versus-U.S. national security issues of AI.

ISAACSON: When you talk about the competition with China — you, and of course Dr. Kissinger, spent a whole lot of time in China, talked to the top leadership — do you think there’s a possibility we could end up cooperating with China more, or do you think it’s inevitably a competition?

SCHMIDT: I spent a lot of years hoping the collaboration would occur, and there are many people in our industry who think that the arrival of and development of this new intelligence is so important it should be done in a multinational way. It should be done in the equivalent of CERN, which is the great physics laboratory — which is global — in Switzerland. The political tensions and the stress over values are so great, there’s just no scenario — there’s just, I want to say it again, there’s just no scenario where you can do that.

ISAACSON: You were chairman of the Defense Innovation Board a while back under President Obama, and I think you worked, too, with President Biden. I was on the Defense Innovation Board with you, and we looked at AI and how that was going to affect warfare, particularly drone warfare. What do you think the future of warfare can and should be in the era of AI?

SCHMIDT: If you study the Russia-Ukraine conflict, the Ukrainians, who had no navy and no air force, were forced to scramble, and they did so valiantly. I spent lots of time there. And they ultimately built relatively simple drones that are now turning into very complex weapons. It looks to me like, for terrestrial conflict, the correct answer is autonomy, which ultimately means drones. I’ve personally seen situations in Ukraine where you have a soldier sitting at a screen, drinking coffee, controlling a weapon that’s very far away, doing whatever job it was doing. If you think about war in our history — thousands of years of history — it was stereotypically a man with a gun shooting the other man with a gun. That is an antiquated model of war. The correct model — and obviously war is horrific — is to have the people well behind and the weapons well up front, and have them networked and controlled by AI. The future of war is AI-networked drones of many different kinds.

ISAACSON: Do humans need to be in the loop?

SCHMIDT: Well, the U.S. rule is called human in the loop, or meaningful human control. So what will happen is that the computer will produce the battle plan and a human will authorize it, thereby giving the legitimacy both of authorizing it as a human and of control and liability if they make a mistake. That’s the likely outcome. One of the key issues, by the way, is that Russia and China do not have this doctrine, and so there’s always this worry about the Dr. Strangelove situation, where you have an automatic weapon which makes the decision on its own. That would be terrible.

ISAACSON: I’m sure you, like me, know the movie “2001: A Space Odyssey,” and the question of a computer getting out of control and the humans having to try to pull the plug on it. Do you think we ought to have kill switches — a way to pull the plug — and in what situations would we use that for our AI systems?

SCHMIDT: We’re going to have to have them. One thought experiment: imagine that everyone in America has a red button that you press that disconnects the house from the internet. And you say, well, that’s stupid — but imagine a future scenario where an adversary has taken over the internet and is now using it to attack your house. So all of a sudden these questions of national security become very personal. So I think that you’ll see, first, obviously, huge monitoring systems, but you will have defensive systems along the lines of the red kill button for that reason.

ISAACSON: At the end of your book, you say that you have high confidence that we can imbue our machines with the intrinsic goodness that is in humanity. First of all, are you sure that all of humanity has intrinsic goodness? And what about those who don’t?

SCHMIDT: Well, look, I think we all understand that there’s some percentage of people who are truly evil — terrorists and so forth and so on. The good news is the vast majority of humans on the planet are well-meaning. They’re social creatures. They want themselves to do well, and they want their neighbors and especially their tribe to do well. I see no reason to think that we can’t put those rules into the computers. One of the tech companies started the training of its model by putting in a constitution, and the constitution was embedded inside of the model — of how you treat things. Now, of course, we can disagree on what the constitution is, but these systems are under our control. There are humans who are making the decisions to train them. And furthermore, the systems that you use — whether it’s ChatGPT or Gemini or Claude or what have you — have all been carefully examined after they were produced to make sure they don’t have any really horrific rough edges. So humans are directly involved in the creation of these models, and they have a responsibility to make sure that nothing horrendous occurs as a result of them.

ISAACSON: Eric Schmidt, thank you so much for joining us.

SCHMIDT: Thank you again.
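Schmidt’s “slope” remark — that once AI systems themselves do AI research, progress stops being steady and starts compounding — can be made concrete with a toy model. The sketch below is purely illustrative and is not from the book or the interview; the starting capability, the fixed rate, and the feedback coefficient are arbitrary assumptions chosen only to show the shape of the two curves.

```python
# Toy illustration of "changing the slope": steady (linear) research progress
# versus recursive self-improvement, where current capability feeds back
# into the rate of improvement. All numbers are arbitrary assumptions.

def linear_progress(steps, rate=1.0):
    """Capability grows by a fixed increment per step (steady research)."""
    capability = 1.0
    for _ in range(steps):
        capability += rate
    return capability

def recursive_progress(steps, feedback=0.1):
    """Each step's improvement is proportional to current capability
    (systems improving the systems that improve them)."""
    capability = 1.0
    for _ in range(steps):
        capability += feedback * capability  # better systems improve faster
    return capability

if __name__ == "__main__":
    for steps in (10, 50, 100):
        print(f"steps={steps:4d}  linear={linear_progress(steps):8.1f}  "
              f"recursive={recursive_progress(steps):12.1f}")
```

The linear curve after 100 steps reaches 101; the recursive one, with only a 10 percent feedback per step, reaches roughly 13,780 — the same compounding arithmetic that underlies the “asymmetric monopoly” worry: whoever enters the compounding regime first pulls away from everyone on the linear curve.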