FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

What does the future of Artificial Intelligence (AI) look like, and how will it impact democracy and society? This question was explored by Prof. Yoshua Bengio, pioneering AI researcher and Turing Award winner, and Prof. Yuval Noah Harari, bestselling author and historian, in a live discussion moderated by political journalist Vassy Kapelos. Recorded on May 24, 2023, as part of the C2 Montreal conference, presented by Mila – Quebec Artificial Intelligence Institute.

Professor Yoshua Bengio is recognized as one of the leading pioneers in AI research and development – known for his prestigious work in the field of deep learning. He is a Full Professor at Université de Montréal, and the Founder and Scientific Director of Mila – Quebec AI Institute.

Yuval Noah Harari is an Israeli public intellectual, historian and professor in the Department of History at the Hebrew University of Jerusalem. He is the author of the popular science bestsellers Sapiens: A Brief History of Humankind, Homo Deus: A Brief History of Tomorrow, and 21 Lessons for the 21st Century – most recently releasing his latest series for middle-grade readers, Unstoppable Us.

HARARI. It’s really an existential risk to humanity, and what we need above all is time. Human societies are extremely adaptable. We are good at it. But it takes time. And if you look, for instance, at the last time we had a major technological revolution, the Industrial Revolution, it took us many generations to figure out how to build relatively good industrial societies. And on the way we had some terrible experiments, failed experiments in how to build industrial societies. Nazism and Soviet communism can be seen simply as failed experiments in how to build functioning industrial societies, experiments that killed millions and millions of people. And now we are dealing with something even more powerful than the trains and radio and electricity that we invented in the Industrial Revolution. I think there is certainly a way to build good societies with AI. But it will take time. And we need to make sure that we don’t make any more such failed experiments, because if we do it now, with this kind of technology, we won’t get a second chance. We won’t survive it. We managed to survive the failed experiments of the Industrial Revolution only because the technology was not powerful enough to destroy us. So we have to be extremely careful, and we need to take things more slowly.

BENGIO. The incentive system we’ve built, which works reasonably well in our industrial societies and liberal democracies, is based on competition. And companies would not survive if they didn’t play that game, because another one would take their place. Now, there are also individuals in those companies who may think that ethics and social values are important, so humans can temper that profit-maximization incentive a bit. But it’s a very strong one. As far as I’m concerned, I didn’t write the letter. I signed it. I thought it was a really good way to call attention to the problem, for the general public and for governments. And even people in companies have been discussing it behind closed doors a lot since the letter. So it has worked in that sense, which is, I think, the real reason why I signed it.

KAPELOS. The man behind OpenAI, Sam Altman, basically pleaded with the government there, with government representatives, to regulate AI more. How significant do you think that is?

HARARI. Well, I think it’s very significant, and again, we need to do it quickly. And when we talk about regulation, we need to differentiate between regulating the development of AI, the research in laboratories and so forth, and regulating its deployment into the public sphere. And I think it’s more urgent, and also easier, to regulate the deployment. There are some very simple rules that we need to make. For instance, that an AI cannot counterfeit humans. If you’re talking with someone, you need to know whether it’s a human or an AI. If we don’t do that, then the public conversation collapses and democracy cannot survive. And it’s just common sense. In the same way, for thousands of years we have had laws against counterfeiting money, otherwise the financial system would have collapsed. And even though it’s quite easy today to counterfeit money, people don’t do it because they are afraid they will go to jail for twenty years. We need the same kind of laws about faking humans, or counterfeiting humans. And similarly, the same way you cannot release powerful new medicines or vehicles into the public sphere without going through safety checks and getting approval, it should be the same with AI. Okay, you developed something in your laboratory. Before you can deploy it to the public sphere, with potentially immense consequences for society, it has to go through a similar process of safety checks.

BENGIO. We have laws about data. We have laws about communication. But they were not designed to deal with some of the problems we’re talking about. So, for example, there’s currently nothing, as far as I know, against counterfeiting humans. And it’s interesting, this counterfeiting example, because counterfeiting money, as far as I know, is punished very, very severely, because the stakes are so high. And I think it’s similar regarding the regulation for these things. There’s a thing that worries me about the way things might be going in the U.S. We need the regulatory body to have a lot of agility. Here in Canada, there’s a bill in front of Parliament which has that kind of structure, where the law is principles-based. It states ethical principles that AI, and the companies building it, should follow, and it leaves the details, the rules, to a government agency. And that’s good, because the field is moving too quickly. The nefarious uses that we can think of now may be different six months from now. The science moves. The technology moves. The market moves. And we need a lot of agility from governments, which is not their strong suit, I know. But in the U.S., my understanding is that in the last few years governments have moved away from this kind of principles-based legislation, where you delegate power to some agency, and instead try to have everything written down by Congress, because they don’t want to give any kind of control to a government agency. Especially the Republicans. And that might be a big obstacle to efficient government intervention.

KAPELOS. What could efficient government intervention look like? Professor Harari, you’ve talked a little bit about some simple things they could introduce for deployment. What else?

HARARI. Yeah, so one other thing they could introduce is low-tech demands, like: if you want to have a social media account, you must go to some office made of stone and sign a piece of paper. Now, it’s very inefficient, but the inefficiency is a feature, not a bug. We do it with other things, a passport, a driving licence. It’s possible that for social media, too, you will need to go through this low-tech operation, and this would immediately get rid of almost all the bots on social media. And going back to the problem of the collapse of democracy, I think introducing such a low-tech demand would immediately get rid of most of the bots and help us save democracy. What we should remember is that we are now facing a paradox. In countries like the USA, and also in my country, Israel, we have the most sophisticated and powerful information technology in history, and yet people are no longer able to talk with one another. People are no longer able to agree on anything, even about who won the last elections, or whether vaccines are good for you or bad for you. So how is it possible that, with such powerful information technology, the conversation is collapsing? Something is deeply wrong with at least the way we deploy our information technology, and we need to step back and think about it before we deploy even more powerful tools like AI, which could result in the complete collapse of the conversation. Again, if you’re having a discussion about the elections with somebody and you can’t tell whether it’s an AI or a human, that’s the end of democracy, because for a human being it’s pointless to waste time trying to change the mind of a bot. The bot doesn’t have a mind. But for the bot, every minute it spends talking with me, it gets to know me better, it even builds intimacy with me, and then it’s easier for the bot to change my views. We have known for a couple of years that there is a battle for attention going on in social media.
Now, with the new generation of AI, the battlefront is shifting from attention to intimacy. If we don’t regulate it, we are likely to be in a situation where you have millions, maybe billions, of AI agents trying to gain our intimacy, because that’s the easiest way to then convince us to buy a product, or vote for a politician, or whatever. And if we allow this to happen, it will lead to social collapse.

HARARI. I think the only realistic way to save humankind is first of all to save democracy, because totalitarian systems will be much, much worse than democracies when it comes to regulating AI and keeping it under control. The traditional problem of totalitarian regimes is that they tend to believe in their own infallibility, that they never make mistakes, and they don’t have any strong self-correcting mechanisms, mechanisms for identifying and correcting their own mistakes. And with a totalitarian regime, or some kind of super-powerful world government, the temptation for that system to give too much power to an AI, and then not be able to regulate it, will be almost irresistible. And once the totalitarian regime gives power to an AI, there will not be any self-correcting mechanism that can point out the mistakes the system will inevitably make and correct them. And it should be very clear that AI is not infallible. It has a lot of power, it can process a lot of information, but information isn’t truth, and there is a long way leading from information to truth and to wisdom. And if we give too much power to an AI, it is bound to make mistakes. Only democracies have the kind of checks and balances that allow them to try something and, if it doesn’t work, to identify the mistake and correct it.