Geoffrey Hinton, the British-Canadian computer scientist known for his pioneering work in the field, told LBC's Andrew Marr that artificial intelligences had developed consciousness – and could one day take over the world. Mr Hinton, who has been criticised by some in the world of artificial intelligence for having a pessimistic view of the future of AI, also said that no one knew how to put in effective safeguards and regulation. Listen to the full show on Global Player: https://app.af.globalplayer.com/Br0x/…
ANDREW MARR: For ordinary, gentle, herbivorous souls like myself, there are all the other obvious questions about AI. We hear it might save mankind; we hear it might destroy mankind. What, meanwhile, about all the jobs it's likely to wipe out? What about robots slipping out of human control and doing their own thing? So many questions, and there's really only one obvious person to go to first for some answers, and that is Professor Geoffrey Hinton, the Nobel Prize-winning British scientist who wrote the structures and the algorithms behind artificial intelligence, and a man known around the world today as the Godfather of AI. He's now a professor at the University of Toronto, and I'm delighted to say he talked to me this afternoon. I began by asking him about DeepSeek – was this further evidence of his belief that artificial intelligence was constantly accelerating?

GEOFFREY HINTON: It shows the still very rapid progress in making AI more efficient and in developing it further. I think the relative size, or relative cost, of DeepSeek compared to other things like OpenAI and Gemini has been exaggerated a bit. Their figure of $5.7 million for training it was just for the final training run. If you compare that with things from OpenAI, their final training runs were probably only $100 million or something like that. So it's not $5.7 million versus billions.

MARR: When you say that AI might take over – at the moment it is a relatively harmless or innocuous-seeming device which allows us to ask questions and get answers more quickly. How, in practical and real terms, might AI take over?

HINTON: Well, people are developing AI agents that can actually do things. They can order stuff for you on the web and pay with your credit card, and stuff like that. And as soon as you have agents, you get a much greater chance of them taking over. To make an effective agent, you have to give it the ability to create sub-goals – like, if you want to get to America, your sub-goal is to get to the airport, and you can focus on that. Now, if you have an AI agent that can create its own sub-goals, it'll very quickly realise a very good sub-goal is to get more control, because if you get more control, you're better at achieving all those goals people have set you. So it's fairly clear they'll try and get more control, and that's not good.

MARR: You say they'll try to get more control as if they are already thinking devices – as if they think in a way analogous to the way we think. Is that really what you believe?

HINTON: Yes. The best model we have of how we think is these things. There was an old model for a long time in AI where the idea was that thought was applying rules to symbolic expressions in your head, and most people in AI thought it has to be like that – that's the only way it could work. There were a few crazy people who said, no, no, it's a big neural network, and it works by all these neurons interacting. It turns out that's been much better at doing reasoning than anything the symbolic AI people could produce, and now it's doing reasoning using neural networks.

MARR: And of course you are one of the crazy people proved right. And yet – you've taken me to the airport, you've given it agency up to a point, and you've said that it wants to take power from me, and presumably it will be persuasive in that. But I still don't understand how it's going to take over from me, or take over from us.

HINTON: If there's ever evolutionary competition between superintelligences... Imagine that they're much cleverer than us, like an adult versus a three-year-old, and suppose the three-year-olds were in charge, and you got fed up with that, and you decided you could just make things more efficient if you took over. It wouldn't be very difficult for you to persuade a bunch of three-year-olds to cede power to you. You'd just tell them you get free candy for a week, and there you'd be.

MARR: So they – and I'm talking about "they" as if they're some kind of alien intelligence – but AI would persuade us to give it more and more power? What, over our bank accounts, over our military systems, over our economies? Is that what you fear?

HINTON: That could well happen, yes. And they are alien intelligences.

MARR: Gosh. So you've got these alien intelligences working their way into our economy, into the way we think, and, as I say, our military systems. But why, and at what point, would they actually want to replace us? Surely they are, in the end, very, very clever tools for us. They do ultimately what we want them to do. If we want them to go to war with Russia, or whatever, that's what they will do.

HINTON: OK, that's what we would like. We would like them to be just tools that do what we want, even when they're cleverer than us. But the first thing to ask is: how many examples do you know of more intelligent things being controlled by much less intelligent things? There are examples, of course, in human societies, of stupid people controlling intelligent people, but that's just a small difference in intelligence. With big differences in intelligence, there aren't any examples. The only one I can think of is a mother and baby, and evolution put a lot of work into allowing the baby to control the mother. As soon as you get evolution happening between superintelligences – suppose there are several different superintelligences, and they all realise that the more data centres they control, the smarter they'll get, because the more data they can process. Suppose one of them just has a slight desire to have more copies of itself. You can see what's going to happen next: they're going to end up competing, and we're going to end up with superintelligences with all the nasty properties that people have – properties that depended on us having evolved from small bands of warring chimpanzees, or our common ancestors with chimpanzees. That leads to intense loyalty within the group, desires for strong leaders, willingness to do in people outside the group. And if you get evolution between superintelligences, you'll get all those things.

MARR: You're talking about them, Professor Hinton, as if they have full consciousness. Now, all the way through the development of computers and AI, people have talked about consciousness. Do you think that consciousness has perhaps already arrived inside AI?

HINTON: Yes, I do. So let me give you a little test. Suppose I take one neuron in your brain, one brain cell, and I replace it with a little piece of nanotechnology that behaves exactly the same way. It's getting pings coming in from other neurons, and it's responding to those by sending out pings, and it responds in exactly the same way as the brain cell responded. I've just replaced one brain cell. Are you still conscious? I think you'd say you were.

MARR: Absolutely, yes – I don't suppose I'd notice. And I think I can see where this argument's going. I absolutely can. So when you talk about "they want to do this" or "they want to do that", there is a real "they" there, as it were?

HINTON: There might well be, yes. There are all sorts of things we have only the dimmest understanding of at present – about the nature of people, and what it means to be a being, and what it means to have a self. We don't understand those things very well, and they're becoming crucial to understand, because we're now creating beings.

MARR: So this is a kind of philosophical, perhaps even spiritual, crisis, as well as a practical one.

HINTON: Absolutely, yes.

MARR: And in terms of, as it were, the lower-order problems: what's your current feeling about the number of people around the world who are going to suddenly lose their jobs because of AI – lose the reason for their existence, as they see it?

HINTON: In the past, new technologies haven't caused massive job losses. So when ATMs came in, bank tellers didn't all lose their jobs; they just started doing more complicated things, and there were many smaller branches of banks, and so on. But this technology is more like the Industrial Revolution. In the Industrial Revolution, machines made human strength more or less irrelevant – you didn't have people digging ditches any more, because machines were just better at it. I think these are going to make sort of mundane intelligence more or less irrelevant. People doing clerical jobs are going to be replaced by machines that do it cheaper and better. So I am worried that there are going to be massive job losses. That would be good if the increase in productivity made us all better off – big increases in productivity ought to be good for people – but in our society, they make the rich richer and the poor poorer.

MARR: You see, I live and work in the world of politics, and politicians both want the great increases in productivity you've just mentioned, for the state and elsewhere, and they reassure people like me, and anybody else listening, that these things will be, quote, "regulated", and there will be, quote, "safeguards". And you're suggesting to me that there can't really be regulation, and there can't be safeguards at all?

HINTON: People don't yet know how to do effective regulation and effective safeguards. There's lots of research now showing these things can get round safeguards. There's recent research showing that if you give them a goal, and you say you really need to achieve this goal, they will pretend to do things during training – during training, they'll pretend not to be as smart as they are, so that you will allow them to be that smart. So it's scary already. We don't know how to regulate them. Obviously we need to. I think the best we can do at present is say we ought to put a lot of resources into investigating how we can keep them safe. So what I advocate is that the government forces the big companies to put lots more resources into safety research.

MARR: So this story isn't over. You said earlier on that you didn't want to put a percentage on the likelihood of AI taking over from humanity on the planet, but that it was more than 1% and less than 99%. In that spirit, can I ask you whether you yourself are optimistic or pessimistic about what AI is going to do for us now?

HINTON: I think in the short term it's going to do wonderful things, and that's the reason people are not going to stop developing it. If it wasn't for the wonderful things, it would make sense to just stop now. It's going to be wonderful in healthcare. You're going to be able to have a family doctor who's seen 100 million patients, knows your DNA, knows the DNA of your relatives, knows all the tests done on you and your relatives, and can do much, much better medical diagnosis and make suggestions for what you should do. That's going to be wonderful. Similarly in education: we know that people learn much faster with a really good private tutor, and we'll be able to get really good private tutors that understand exactly what it is we misunderstand and can give us exactly the example needed to show us what we're misunderstanding. So in those areas it's going to be wonderful – so it's going to be developed. But we also know it's going to be used for all sorts of bad things by bad actors. So the short-term problem is bad actors using it for bad things, like cyber attacks, bioterrorism and corrupting elections. But the thing to remember is that we don't really know at present how we can make it safe. So the apparent omniscience that politicians like to show they have is completely fake here. Nobody – nobody – understands what's going on, really. There are two issues: do you understand how it's working, and do you understand how to make it safe? We understand quite a bit about how it's working, but not nearly enough, so it can still do lots of things that surprise us. And we don't understand how to make it safe.