Stuart Russell OBE
Distinguished Professor of Computer Science at the University of California, Berkeley, and Director of the Center for Intelligent Systems. A world-renowned expert on AI, he is co-author (with Peter Norvig) of Artificial Intelligence: A Modern Approach, the standard AI text used in universities around the world. He is a fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science.
2 Sept 2024
A film series exploring urgent solutions to the greatest existential threats facing humanity, brought to you by The Elders and Future of Life Institute.
Artificial Intelligence is developing faster than the regulations needed to ensure its benefits can be safely harnessed for the good of humanity. In this episode on ungoverned AI, The Elders and leading AI experts Kate Kallot and Stuart Russell set out practical and political responses for global leaders to tackle growing risks. Introduced by Zeid bin Ra’ad, former UN High Commissioner for Human Rights and member of The Elders.
It’s time for bold, new thinking on existential threats
Our world is in grave danger. The climate and nature crisis, nuclear weapons, pandemics and ungoverned AI pose a set of existential threats that put all of humanity at risk. Yet our leaders are not responding with the wisdom and urgency required. As global decision-makers prepare to gather in New York in September for the UN Summit of the Future, it is time to change direction. It is time for long-view leadership. In partnership with the Future of Life Institute, The Elders present a series of expert-led short films to inspire bold, new thinking from world leaders on the greatest challenges facing humanity.
Will our leaders rise to the challenge?
Watch the series in full: https://theelders.org/LongviewLeadership
TRANSCRIPT
Artificial intelligence has the potential to provide huge benefits to humanity, but its rapid, ungoverned development comes with great risks. We are calling on world leaders to develop strong international governance that allows humanity to take advantage of AI’s opportunities and ensures AI is a force for good, not a runaway risk. It’s time for long-view leadership. Will our leaders rise to the challenge?
I’m Kate Kallot. I’m the Founder and CEO of Amini.
My name is Stuart Russell. I’m a Professor of Computer Science at the University of California, Berkeley.
KATE KALLOT. We’re dealing today with a technology that’s accelerating extremely fast. And if it continues to go unregulated, we will reach a point where it is too hard for policymakers to catch up.
STUART RUSSELL. So right now, we’re on a road towards creating what’s called AGI, artificial general intelligence, which means AI systems that exceed human capabilities in every relevant dimension. And then we have a question: how do we retain power forever over entities more powerful than ourselves? The biggest risk is that the AI systems we create carry out some course of action that results in human extinction. One of the ideas for governing AI in the near term is actually to draw some red lines and say to the developers, the onus is on you to prove that your systems are not going to cross these red lines. Some simple examples would be: AI systems should not replicate themselves, and they should not advise terrorists on how to build biological weapons. And that means the developers have to be able to predict, control and understand the technology that they are creating. And that’s exactly what they’re not doing right now. We have no idea how these systems work. If we continue along this line, creating systems that are more powerful than human beings while providing no guarantees on what internal goals they might have and how they’re going to pursue them, that is a recipe for disaster.
KATE KALLOT. AI can be absolutely transformative for our lives, but we have to understand that the development of this technology will not stop. So how do we get ahead? How did we get here? What is to come? And how can we build solutions and policies that are adapted to each country and its people?
ZEID RA’AD AL HUSSEIN. To ensure AI is a force for good, not a runaway risk, we need world leaders to cooperate. A multilateral approach rooted in human rights is the only way to ensure the benefits of AI are seen by all. It’s time for bold new thinking.