FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

TALLINN: Thank you, Madame President. Ladies and gentlemen of the house: "Humanity is akin to a teenager, with rapidly developing physical abilities, lagging wisdom and self-control, little thought for his long-term future, and an unhealthy appetite for risk." Toby Ord, The Precipice. Toby is of course a philosopher in Oxford, and The Precipice is not the only landmark book to have originated from this place. In fact, when I helped to start the Centre for the Study of Existential Risk in that other place, one of my explicit goals was to help that other place catch up to Oxford when it comes to the study of existential risk. So arguing that AI is an existential risk in front of an Oxford audience is a bit like preaching Christianity in the Vatican. Not to mention that there is an increasing consensus now. Sandwiched between Alan Turing, who predicted in 1951 that we should expect to lose control to machines, and the inventor of deep learning himself, Geoff Hinton, who is starting to have doubts about his life's work, there are now hundreds of AI experts sounding their alarm bells. A recent poll found that 88% of AI engineers think that AI could destroy the world, and another one reported that 76% of American voters believed that AI was a threat to our existence. Just yesterday there was news that one of the leading superforecaster groups, Samotsvety, published their prediction: their estimate for AI catastrophic risk is 30%. 30%. The battle for establishing that AI is an existential risk, a battle that I spent roughly 15 years of my career on, has now all but been won. So I've set myself a somewhat more ambitious goal today: to argue that the extinction risk from AI is not just possible but imminent, and that we urgently, urgently need to change our trajectory. I'm going to show that there are fundamental reasons why unaligned godlike AI will not leave any survivors; that we are now close to such AI while having no idea how to align it; and that the counterarguments are, sadly, extremely weak.

Of course I will need to be brief, so my arguments will be very condensed. The reason why I expect godlike AI, and by that I mean an AI that can conceive and carry out much better plans than any subgroup of humans can, to be utterly lethal is that, by definition, it can control our environment. Rather than hacking into our computers, which of course it also can do, think about hacking into algae to make it produce what it wants, then venting the atmosphere, and finally pulling the hydrogen out of the sun. It's easy to show that such a takeover happens in toy models; indeed, in 2016 I gave a presentation just about that. Nor should we forget that humans have behaved similarly towards countless other species, species that were unlucky enough not to have "sapiens" in their name. Now, the reason why I expect godlike AI not to care about humans has to do with a dirty secret of the AI industry: frontier AIs are not built, they are grown. The P in ChatGPT stands for "pre-trained". Pre-training, perhaps we should call it summoning, is a process where a simple two-page program, a two-page program, is soaked in terabytes of data and megawatts of electricity and left like that for months. Then, after that, attempts are made to tame the emergent alien mind. Importantly, those methods of taming rely on the AI being less competent than the humans who are taming it. Now, the reason why we expect that we are close to godlike AI is that the trend of AI getting more powerful is now visible to everyone. It's obvious: just look at the capability differences between GPT-2, GPT-3, and GPT-4. GPT-2 was released in 2019; a simple extrapolation would take us to GPT-7 before this decade is over. So, in summary: we are blindly growing increasingly competent minds while hoping that they are not so competent that they spin out of control and destroy our living environment. Unfortunately, that hope is not justified, which explains the increasing anxiety among the AI developers themselves.

So basically what I'm telling you is that on the current AI trajectory, you are not going to live very long. Of course, at this point, just like a patient who has received a terminal diagnosis, you are very, very much encouraged to seek a second opinion. Unfortunately, having been part of this debate for more than a decade, I already know what you're going to hear. Almost all counterarguments fall into four categories. First, labelling. These are arguments like "this is science fiction", "this is alarmism", but also the oldie-but-goodie ad hominem: "these are doomsayers", "don't listen to people with that non-virtuous property X", or "these are people who say X because of Y". Second, frame control: "AI is like X, and X is very nice, right?" And this is also the answer to your question. This has now reached grotesque levels. One prominent VC claimed recently that AI is basically just math, so why should we worry? Imagine the captain of the Titanic announcing: "Don't worry, passengers, this is just water." The third class of arguments is human supremacy: "AI can never do X", or, more moderately, "we are very far from AI doing X". I guess this is a little bit of a response to your question. This category at least is one where reality can act as a judge, and unfortunately reality has been a very harsh judge recently: the set of things that only humans can do is collapsing really rapidly. And the fourth category, sorry, is topic change: "I am more worried about X than AI extinction risk, so let's talk about X instead", or "let's talk about the benefits of AI instead", as you just heard from the opposition. Unfortunately, while the other topic is sometimes very important, and I agree there are other problems, and I agree that there are a lot of benefits from AI, these are non sequiturs: they have no bearing on extinction risk. So here's a bingo card for the opposition's arguments today: labelling, frame control, human supremacy, and topic change. Have fun. Or rather, it would be fun and games if it weren't so tragic. It's like dashing the hopes of that cancer patient by pointing out that the second opinion, while comforting indeed, was simply wrong.

Still, I think I can leave you with a hopeful note. There is now a growing global consensus that unregulated, blind AI scaling is reckless and dangerous, so we need to constrain it or ban it altogether, just like we banned human cloning. Regardless of which door you walk out of today, once you are on the other side of that door, please take a moment to think about if and how you can help. You have received a terminal diagnosis. Please don't simply ignore it. Thank you.