FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

…collaboration. So thank you all so much for joining us. I want to start by asking you to put your stake in the ground, make an opening statement, and give us your thoughts on where you stand on the question we're here to discuss. We'll start with Stuart: is AI an existential threat to humanity?

Yes.

Yes. Okay, let's move on.

So, interestingly, I found myself agreeing with both sides of the argument that you presented at the beginning. Yes, AI could present an extinction risk to humanity; but yes, AI has enormous upsides, and yes, it's possible to develop it in a way that's safe. So let me talk about how that works.

First of all, just to reassure everyone, the word "threat" does not mean definite extinction, so we still have a chance.

It's also important to point out, because there's a lot of confusion on this point, that we're not talking about AI as it exists today, as if the technology we already have would necessarily lead to extinction even if we completely stopped developing it any further. But some people whom I respect a great deal think we might have as little as five years before we reach the point where we have to be very immediately concerned about that risk. And that suggests we have to act now, because we don't know how long it's going to take to develop those safeguards you mentioned, to develop a form of AI that we can guarantee is going to be safe and beneficial.

At the moment, the kind of AI we're developing is absolutely not safe and beneficial, partly because we have absolutely no idea how it works, and partly because we are taking absolutely no steps to ensure that it's safe. We're taking steps that maybe increase its safety a bit, but they don't work very well, for reasons we can get into. If you think about nuclear power stations, no one is coming forward and saying, "Okay, I have a design for a safe nuclear power station." There are just a bunch of people saying, "I wonder what happens if we pile together lots and lots of enriched uranium. Is that okay if we do that?"

That's the situation we're in right now. So why is there a risk? Well, obviously (and many other species on Earth could tell you this, because they've been on the receiving end of it), intelligence is what gives us power over the world and over those other species. And if we build systems that are more intelligent than human beings, then you face this question: how do we maintain power, forever, over entities that are more powerful than ourselves? Anyone who has a really watertight answer to that question, just let me know, and then we can all go home.

Isn't the answer "don't let them get more intelligent than us"? Or is that not going to happen?

So that's a question: are we necessarily going to succeed? One argument that people put forward goes roughly as follows: we're driving a bus with all of humanity as passengers, we're heading straight for a cliff, and the bus driver says, "Don't worry, I'm pretty sure we're going to run out of gas" (petrol, I should say) "before we get to the cliff." That's not a very convincing argument for how we should manage the affairs of the human race. And this is not a new problem.

Even Alan Turing, who founded computer science in 1936, said that if we succeed, we should have to expect the machines to take control. And it's hard to see how that wouldn't happen, because the way we build machines is basically that we give them objectives, they figure out how to achieve those objectives, and then off they go. It's extremely hard to write down an objective that happens to be completely and correctly aligned with what we want the future to be like; it's very hard for us even to figure out what that is. And if we write it down wrong, if we leave anything out, then there is a simple argument showing that we could have a catastrophic outcome. These are systems that, if they're more intelligent than us, can synthesize biological weapons that put us all out of business, change the oxygen content of the atmosphere, or change the temperature of the planet to suit them rather than us, and all of these would lead to extinction.

So if you want to deny that there's a risk, you have to argue one of two things.

First, you could argue that we're never going to create that kind of AI in the first place. But right now, when you look at the tens of billions of pounds being spent every month on creating exactly this kind of AI, it's kind of hard to argue that those tens of thousands of brilliant people are all going to fail. Another way would be to say that yes, we're going to create superintelligent AI, but it's just going to be safe even if we don't make any attempt to ensure that, because intelligent things are just naturally always nice. I've heard this argument. Who believes that? Not many people. And there's a third thing we have to do to make sure there's no risk: even if the AI systems we make are super nice, how do we make sure that people who don't want to use super nice AI systems never get to create AI systems that are harmful? So there are a number of conditions we have to ensure, and it's a fairly narrow path we have to tread in order to avoid this threat.

Okay, so we have our work cut out for us there. Thank you; we'll come back to you on laying out those risks a bit more clearly.
