
BNN Bloomberg. AI will become smarter than humans: Geoffrey Hinton.

Geoffrey Hinton, computer scientist and cognitive psychologist, joins BNN Bloomberg to discuss the risks of AI getting smarter than humans.

Welcome back. We want to get back to our exclusive interview with Geoffrey Hinton. The leading AI researcher has dedicated his career to better understanding the future of intelligence, and increasingly Hinton is urging lawmakers to insist big tech companies spend more on AI safety. His conviction is tied to the speed at which AI is getting smarter and the potential risks associated with that. In this next part of our conversation, we started by asking Hinton about the chances of AI becoming smarter than humans.

Almost every decent researcher I know believes that, in the long run, it'll get more intelligent than us. It's not going to stop at our intelligence. A few people believe it will, because it's training on data from us at present, but most of the researchers I know are fairly confident it will get more intelligent than us. So the real question is: how long will that take, and when it does, will we still be able to keep control? I think there's about a 50/50 chance it'll get more intelligent than us in the next 20 years.

And in terms of us having to grapple with control, what does that look like in terms of our own role in the society that we helped to create?

Nobody really knows. We've never confronted this before; we've never had to deal with things more intelligent than us. So people should be very uncertain about what it's going to look like, and it would seem very wise to do lots of empirical experiments while it's slightly less smart than us, so we still have a chance of staying in control. I think governments, for example, should insist that the big companies do lots of safety experiments and spend considerable resources, like a third of their compute resources, on safety experiments while these things are still not as intelligent as us, to see how they might evade control and what we could do about it. I think that was a lot of the debate at OpenAI: the people interested in safety, like Ilya Sutskever, wanted significant resources to be spent on safety; people interested in profits, like Sam Altman, didn't want to spend too many resources on that.

Do we have a sense of where that spending lies right now, the spending on safety?

Yes: hugely more on development than on safety, because these are profit-driven companies.

What do you make of the fact that there's been so much wealth created so quickly in this area? How does that complicate the story of safety, in your opinion?

Well, it makes it clear to the big companies that they want to go full speed ahead. There's a big race on between Microsoft and Google, and possibly Amazon, possibly Nvidia, and possibly the other big companies, like Meta. If any one of those pulled out, the others would keep going. So you've got the standard competitive dynamics of capitalism, where people are trying to make profits in the fairly short term, and they're going full speed ahead. I think the only thing that can slow that down is strict government regulation. I think governments are the only things powerful enough to slow that down.

And governments obviously represent their respective countries. What's been your observation on not only government efforts so far, but the coordination of governments around the world on an issue that doesn't just relate to one territory?

There has been some coordination, and that's very encouraging. For many things, like autonomous lethal weapons, there's going to be competition; they're not going to coordinate. The US isn't going to share its plans for autonomous lethal weapons with Russia. But there's this one issue of whether these things will get superintelligent and take over, where we're all in the same boat. None of these countries wants superintelligence to take over, and that will force them to cooperate. Even at the height of the Cold War, the Soviet Union and the US could cooperate on trying to prevent a global nuclear war, because it wasn't in either of their interests. So of the many problems of AI, there's just this one where you're likely to get good cooperation: how do we prevent them taking over?

Should we be thinking about this in the same way that you would think about a nuclear threat?

Yes, I think that's a good way to think about it. There is, however, one big difference: nuclear weapons are only good for destruction. They tried using them for things like fracking; it didn't work out too well. They did that in the '60s, peaceful uses of atom bombs, and that didn't work out so well. They're basically for destruction, whereas AI has a huge upside. It's wonderful for lots of things, not just answering questions and making fancy images. It's going to be extremely useful in medicine. Imagine if you could go to your family doctor and she had already seen a hundred million patients and remembered things about them. That would be a big win. Also, she'd know everything about your genome and your whole history of medical tests, which your current doctor probably doesn't swot up on before she sees you. That would be a much, much better family doctor. At present, in the States, about 200,000 people a year, or maybe more, die because of bad medical diagnoses. Already, AI combined with a doctor is much better than a doctor alone, because AI can suggest things the doctor didn't think of, so it can be a huge win in terms of preventing bad medical diagnoses, in terms of much better treatments, much better understanding of medical images. That's coming fairly quickly. So there's huge upside to it. In fact, anywhere you want to predict something, neural nets can do a good job of predicting, probably much better than the previous technology. So it's hugely important to companies, and that's why it's not going to be stopped. The idea that we should just pause was never realistic. We have to figure out how we can make a superintelligence not want to take over, and that's what governments should be focused on right now. That's one of the many things they should be focused on; there are lots of others too, like how we prevent AI-designed bioweapons, how we prevent AI-designed cyber attacks, and how we prevent AI-designed fake videos from determining elections. There are lots of other risks, like how we deal with all the job losses that are going to be caused if AI is really as successful as these big companies think it will be. Those are all distinct risks with distinct solutions. I've just tended to focus on the existential threat, because that's something many people thought wasn't real.

I just wonder, is there a suggestion or a solution that you might present that would find a way to balance these trade-offs: the opportunity that this technology presents, which you've outlined, versus these significant risks?

The closest I come to that is the idea that governments should mandate that the big companies spend a lot of their resources on safety. That's the best I can think of. It's not very good, but it's the best I can think of.

And your view right now is that they simply are not doing that? They're not mandating that?

No. There's some legislation proposed in California, where the California attorney general can sue big companies, so that's the first with teeth. It can sue big companies if they don't do sensible safety tests and report the results to the California government. That's still quite weak, but it's better than nothing.

What is the time frame, in your opinion, on getting the regulatory, the government approach right?

It's fairly urgent. We may need it to be right in five years' time, and progress has been very slow so far. For example, in Britain they had this Bletchley conference, with a lot of publicity, where they all agreed that we need to worry about AI safety, and then the British government decided it wasn't actually going to do anything, because it might interfere with innovation. Treat that as, again, safety versus profits.