FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

TEGMARK: It's such an honor and pleasure to be here in England's greatest university. I've counted many different definitions of "existential threat" tonight. If we were in some very sloppy-thinking place like Cambridge, I would just leave it at that, but because we're in Oxford, I'm going to define what I mean by existential threat.

First of all, the term "existential risk" was defined by an Oxford professor, and I'm going to go with his definition: a threat of human extinction or of permanently limiting human potential. That will become important in a little bit.

What about "threat"? What is a threat, exactly? Well, Silva, you defined a threat as something which is 100% certain to happen, and then very successfully attacked that. But I looked it up, not in the Cambridge dictionary but in the Oxford dictionary, and it said that a threat is a possibility of something bad happening. What does "possibility" mean? It means there is some probability of the bad thing happening. It doesn't mean the probability is 100%; it means it's more than zero. And we had a good question from up there somewhere: where do you draw the line? Well, if you're talking about something like extinction, suppose there is an asteroid which will kill us all if it hits us. What is the probability threshold at which you feel you really need to take action? You have to look into your hearts and answer that. Is it one in 10,000? One in 1,000? One in 100? Whatever it is, that's what you should think of when you choose which door to walk out of. If you pick 1%, and you think the risk of an existential threat is more than 1%, you walk through the "aye" door.

So that's the definitions out of the way, except there was another interesting twist: two of the opposition speakers said something is only a threat if it's happening right now, and that AI is therefore not a threat because we don't yet have AGI or superintelligence. So suppose that asteroid heading towards Earth isn't going to hit us for twenty years. Does it make any sense to say it's not a threat, even when the best astronomers have calculated it has a 99% chance of hitting us? That's ridiculous. And ironically, Sebastian, having said that you only count risks that are materializing now, not in the future, you turned around and said that catastrophic climate change is a threat even though it's happening in the future. (Yes, ask me in four minutes.)

Okay, so why should we worry about this stuff? Raise your hand if you study history. Then you know that we can only really understand the present if we study history, and I think it's very easy to get so blinded by all the recent developments in AI that we lose the context. So if you are lucky enough to have fifty quid in your pocket, take it out, please, for a moment. Let's see... well, Oxford is charging fees that are too high, I think. This gentleman here on the fifty-pound note, Alan Turing, has already been mentioned: really the Einstein of computer science, an amazing man, not only viewed as the founding father of computer science and of artificial intelligence, but also an amazing war hero. Raise your hand if you saw the movie The Imitation Game. Yeah? Then I'm preaching to the choir here. He helped crack the Germans' codes by building the first-ever programmable digital computer. Pretty smart dude.

He said back in 1951, as Jaan already mentioned, that if we ever get to machines that can outsmart us, they will pretty quickly leave our feeble intellects far behind, and we should expect, he said, that at some point the machines will take control. That's not particularly rocket science. When we showed up in the rainforest, tough luck for a lot of the animals that lived there: we took control, not because we had bigger muscles, but because we were smarter, and it didn't always work out well for them. That was his argument, and we have to pay respect to his intellect; we can't just diss him without giving good counterarguments.

I know machines taking control sounds like science fiction, but nuclear weapons sounded like science fiction in 1930, and when Alan Turing formulated the Turing test, which he heralded as the final milestone before we should really start freaking out, having something like GPT-4 sounded like science fiction too. Saying that something sounds like sci-fi does not imply it won't happen; that's just a really, really lame argument.

And it's not just Turing. Sam Altman, CEO of OpenAI, the company that's given us ChatGPT, has himself said recently that it could be "lights out for all of us". That's an existential threat, isn't it? In fact, Dario Amodei, CEO of Anthropic, one of the AGI competitors, even put a number on this risk; he thinks we have two or three years left until we get outsmarted. (A point of information? In three minutes; I'll take it.) And it's not just them. In May of this year we had the CEOs of all the AGI companies (Google DeepMind here in London, OpenAI, Anthropic) and a who's who of leading AI researchers, including at Oxford, say that this is an existential threat, that human extinction is something we have to take seriously. (I said two minutes? You seem to have four clocks here; are you from Cambridge?)

This all sounds kind of dark, so let's raise our spirits a little by having some fun and playing Jaan Tallinn bingo. He told us to keep a bingo card of the opposition's arguments, so that's actually what I've been doing here. He said there are four kinds of bad arguments that tend to be made for why it's not a threat. One of them is the ad hominem attack: person X said Y, and X also said these other things, therefore Y is wrong. And K. Sh., I agree with you on many things, but you got that bingo square right there, because you said that Nick Bostrom warned of an existential threat, and that he said these really bad things, therefore it's not an existential threat. So: got you.

And then there's framing; we have many winners here. Sebastian, you said that AI is not evil, therefore it's not an existential threat. Technology isn't evil, and it's also not morally good; it's a tool, and we can use it for good things and bad things. Turing was well aware of this, and it does not change the fact that it's an existential threat. Sebastian also said... actually, let me pick on Silva, for some democracy here. Actually, let's move on; I picked on you once already, and I want to spread my graces.

Human supremacy: we had the argument that AI cannot compete with us, no matter how advanced it gets, because it doesn't have quite our human culture, and its art isn't as profound. This was discussed here, right? Well, sorry to say, Africa had some amazing art when the British and other colonialists came there with big guns, and that did not protect the people there at all. Something doesn't have to be more cultured than us to pose a threat.

In fact, think about why Alan Turing was warning us. If you think of AI as just another technology, like the steam engine or electricity, you won't be very worried. But Turing was taking the long view; he was thinking of it more as a new species. We are, as a matter of fact, right now building creepy, super-capable, amoral psychopaths that never sleep, think much faster than us, can make copies of themselves, and have nothing human about them whatsoever. What could possibly go wrong? Once we have out in the world entities that we cannot control (I'll talk about how to control them tomorrow, if you come back at 5:00 p.m.), entities that are much better than us at thinking, speaking, manipulating people, and getting things done, the future belongs to them, not to us. This is what Turing realized, and in the 72 years since, nobody has convincingly refuted that argument. It's really as simple as that: we stand to lose control.

And to close: an existential threat, according to the Oxford definition, is not just something that drives us extinct; it can also be something that permanently limits our potential. So if we lose control, whether directly to the machines, or to some horrible people who, through too much concentration of power, manage to control us forever in some sort of dystopia, a 1984++, that is an existential threat by this very definition. So let's not do this.

I want to end with a note of hope. Jaan, the terminal diagnosis you gave us is a little too harsh, I think; this might still be treatable. So what I think you're really voting on, if you vote yes on existential threat, is taking this really seriously and being really motivated to go out there and do something about it. Thank you.
