FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Artificial intelligence ‘godfather’ Yoshua Bengio opens up about his hopes and concerns. 29 JAN.

“Living things all have an instinct to preserve themselves which is fine so long as they’re not more powerful than us and could destroy us, right. And that’s the problem. If we just create new [AI] entities that are weak, fine. Like you know, but you got a cat as a pet not a tiger. […] We don’t know how to program it [AI] so that it doesn’t get to have a mind of its own and turns against humans.”

TRANSCRIPT.

Intro living things all have an instinct to preserve themselves which is fine so long as they’re not more powerful than us and could destroy us right and that’s the problem if if we just create new entities that are weak uh fine like you know but you got a cat as a pet not a tiger yosua Benjo is a Pioneer in the field of artificial neural networks and deep learning and is one of the world’s most influential voices on artificial intelligence but more and more he’s been sounding the alarm about how fast AI is progressing research on counter measures to protect Society from potential Rogue AI now he’s pushing for more government control over the technology he also wants more public supercomputers so those who regulate AI will be able to keep up with the booming private Tech sector I sat down with him at his home in Montreal just next to Mount Royal Park to get a a sense of what’s on his mind for the year ahead and why he’s so determined to bring his message to those in power that unchecked the positives of artificial intelligence are outweighed by its perils what are the scenarios that you are worried about Catastrophic Risks of AI happening especially these catastrophic scenarios so I I I don’t see extremely catastrophic scenarios within a year um the earliest I could see them would be two three years that’s what other people people are you know claiming that seems possible because we still need more advances in in in the capabilities of AI to to get really scary um so what can go wrong there there are two major categories of risks um in my opinion there’s the category of misuse um not necessarily intended by the people who built those but it you know it falls in the wrong hands they use it for disinformation but also for for cyber attacks for helping to design weapons uh and then there is a risk of what people call loss of control uh you have to understand the reason why we have the misuse is that we we don’t know how to program AI system so that they wouldn’t be used for something bad uh we we can’t like we try to tell the eye know don’t don’t do cyber attacks or don’t do disinformation but it’s easy to bypass these safety protections and similarly we don’t know how to program it so that it doesn’t uh get to have a mind of its own and turns against humans there are people scientists technologists CEOs who would be happy to see Humanity replaced by superhuman AIS they think that intelligence is more important than you know Humanity or who are you talking about ah no names um but it’s easy to find so what was the issue with loss of control well we could basically create new types of living entities that have their own preservation as a more important value than our own uh wellbeing and uh and and people who would like to see humans replaced by machines could simply give them that objective fend for yourself preserve yourself and then it’s like we just created a new entity that initially may just live in computers but with robots one day they can roam the planet what is misalignment The Problem of Misalignment in this conversation so misalignment is at the root of both problems so the misalignment is the the problem that we don’t know how to program those AI so that they behave as intended we can give them instructions and in general they seem to do it and sometimes they do weird things and those weird things that are worrisome and um it can turn really bad potentially and there are even arguments showing that by trying to do what we’re asking them they might turn against us which seems very weird and complicated let’s say you you want to train a a a a a Grizzly if the grizzly is in a cage and you train it with rewards like food when it does the right thing it might learn to do the things you want right um and that’s the way we train AI by the way we give it rewards when it does the right thing so so long as we control the rewards like the food and the bear is in its cage all is good you see where I’m going open the cage um now the bear doesn’t care about what we really want it’s just going to grab the food that is its reward right so it’s the same we we could have ai that take control of the rewards that we gave we give them and basically fend for themselves and and then don’t need our approval in your recent paper that you Defensive AI wrote you you argue there’s a need to develop what you call defensive AI yes and when confronted with that enemy that is smarter than us potentially we need to more than just an off switch we need essentially countermeasures what is that okay so we do want an off switch but it’s not going to work because somebody in some country is going to still do something now that we know how to build AI that may eventually be smarter than us somebody’s going to do it so we should put off switches we should put regulation we should minimize the chances that Rogue AI emerge one way or another uh we should minimize the chances of misuse and all these threats but we should also be realistic once the recipe for building such machines is known to everyone who can read a scientific paper it’s going to happen so I think in case we don’t find a way to completely stop this we do need to find defensive Solutions like imagine there was a rogue AI in 5 years from now or 10 years from now are we defenseless if we are able to build a eyes that we can trust then then they can become our protectors against machines that are smarter than us and we wouldn’t be able to defeat ourselves so let me try to be more concrete let’s say somebody builds AI systems that are really good programmers and can do cyber defense or cyber attacks because it’s kind of the same two two signs of the same coin well if they are smarter than us they might design cyber attacks that our current human programmers don’t know how to deal with so we need need other AIS protecting AIS defensive AIS that have at least the same capabilities as those Rogue AIS and are good at the same things like uh cyber uh and can help defend our infrastructure against these kinds of attacks in that sense though we get to the political side which is yes who would control something like that who would build something like that where is a defensive AI going to be stored and who will control it what do you think this is this is at least as important because it’s not enough to have an AI that does what we want and maybe can protect us we need to make sure that it’s not abused um and humans if they have a chance at least some of them enough of them will go for greed will go for power will go for uh military dominance economic dominance how do we avoid that and the only kind of answer which is not completely satisfactory but is what we have is governance in other words we need to make sure that no single individual can decide what these very powerful eyes will do it’s going to be a committee that’s representative of the general will democracy is all about that because AI is going to be so powerful and so important in in coming years and decades it needs to be uh managed democratically so that no one no individual no company no political party no government uh no military organization can abuse that power for achieving their own goals what it sounds AI Arms Race like to some degree though is it’s kind of like a defense department or a defense industry and we know the way that works around the world there are is some sharing but there’s also a quest for Supremacy yeah how do you avoid that from happening in arms race essentially in that way yeah there’s already an arms race that’s starting with for example the us trying to prevent China from having access to those chips that are important to build current AI systems and of course China is not happy that’s is just like the beginning so well there will be an arms race what we the best we can do is to try to reduce the heat just like we’ve done for nuclear um for example if we have these Democratic governance institutions around AI we can also tack onto that monitoring oversight from the International Community that’s going to make sure that well you’re not using the AI or developing its abilities on the military offensive side for example so maybe the US and China make a deal okay we’re going to watch each other’s back and how to do that is not trivial but conceivable arms treaties essentially exactly um so that we know that we’re not going to be attacking each other with that new kind of weapon that’s the best I can see and it it it’s with complexities we’ve seen how difficult it was and it continues to be for nuclear but that’s the only option I can see right now what about moving to Public Supercomputers another part of this equation which is computing power so the fact that the technology needs massive computing resources in order to do these very things that you’re saying um who has access to that right now so right now it takes a lot of capital um maybe 10 million to a billion depending on how big the system like the most expensive one right now like Google’s Gemini probably is in the billion range the cost of building it and so very few companies can afford or want to you know do that investment you also need expertise to do that right what about though the idea of this being private I mean what about building public supercomputers yeah what do you is is that something that you think should happen yeah there are reasons for so it it was kind of natural that up to now it was mostly um um you know capitalism uh doing I mean there are different phases first it was just academics doing it on the small scale and doing all the theory if you want the the methods and then industry took over um to build the really large scale things that are impressive in their performance but then I think there’s a third stage which is where governments and uh the public in general understand that this is extremely powerful and this gives a lot of power to whoever controls that and then governments will want to take back control for it’s going to be regulation first but eventually they will want to take back some control maybe initially by building their own infrastructure and that’s already happening the governments need to build that capability they they need to have people working for the governments that understand what can go wrong uh how the technology works that’s necessary even for just the regulator like how can Regulators say this is acceptable this is not acceptable if they don’t have the and if they can’t do themselves the kind of experiments that are happening right now only in company so that’s that’s where I think it’s going eventually if we want the kind of governance I was talking about well it becomes like controlled by the state or maybe through some multilateral organization and less driven only by profit maximization so I think there it’s likely that over the next decade we’re going to see a shift where because these things are so powerful you need more Democratic oversight eventually it becomes Democratic control but maybe different countries will choose different things but have you spoken you have a position of power have you or at least a voice have you spoken to say the Canadian government I would say a voice not So Much by way of course a powerful voice let’s put it that way have you spoken to the Canadian government and say you know or the Quebec government and said we need to have a supercomputer a public supercomputer here in Canada yes did you get a response we’re listening it’s a lot of money and how much are we talking about well uh we’re talking about like a billion dollar sort of you know amounts that’s uh the UK government put down 900 million pounds for their National AI infrastructure so you know would you like to see that in Montreal well well I think government need will understand at some point hopefully as soon as possible that it’s important for governments to have that muscle to have that capability so that they can do the things that companies might not do so like this research on safety and Alignment yeah companies are going to do it but they’re going to be worried about customer satisfaction not about the end of the world so much um or about threats to democracy or about human rights issues so there are those who criticize AI Hype the approach taken by you and others essentially overselling the dangers of the existential dangers of AI saying that it you know it it it downplays or it’s to the detriment of problems that AI is already causing causing problems in terms of discrimination in terms of environmental impact and sort of the head headlines that come from existential threats are obscuring some of these more real pressing things that AI is doing right now I’m wondering people who call this as you know AI hype essentially well I I wish they consider that the well-being of humans around the world and our democracies are important not just now but in 5 years years from now and 10 years from now and for me there’s no separation we need to manage the risks and minimize the harms now tomorrow in five years in 10 years and 20 years um what is important though and and we and we don’t need to make choices and by the way a lot of the solutions are the same we need regulation we need governments to get you know invested to make sure we develop uh Sol solutions to to to minimize these threats to you know human rights discrimination and so on um and and you know some of the issues I I talked about and in fact a lot of the scientific Solutions are about the same they’re about making sure the eyes behave morally um so um I I I think I think that the the division between these two camps is is reducing as people understand that we’re fighting the same battle essentially um the division that I see now emerging as a stronger division is between uh those who would like us to see accelerate the development of AI who care more about let’s get all the profit that we get uh the economic growth that we we we could from from AI which we should but not at the detriment of destabilizing our societies of uh taking risks with the future of humanity of uh you know endangering our democracies we have a tradition here and a lot of expertise in Quebec um both on the AI science and on the social sciences political science um legal Scholars who care about the impact of AI in society who have been working on the for example the bias and discrimination and are now worried about the other social risks that that are happening uh of course we’re working with people from all around the world but that’s uh something where we have a lot of experience and and voice and we we can we can really lead but but uh do it with our partners from around the world and around Canada thank you very much Professor Benjo we’ll see what the year ahead does in fact bring yes we’ll see

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.