“When I think about what it would be like to embark on a future with AI systems that are more powerful than us, it feels a little bit like getting on an airplane. And when you get on an airplane, you always have that moment where you wonder, okay, did they build this airplane right? And is this a good pilot? And then you remind yourself that yes, you know, everything is in place to make sure that this works. And now imagine that we’re going to get on this airplane with our children, everyone in this room, everyone in the world, and it’s going to take off. And it’s never going to land, which means that it has to work perfectly forever, having never been tried or tested before. And in my view, we can’t get on that airplane unless we are absolutely sure that everyone has done their job to make sure that it works.” — Stuart Russell
What does it actually mean for AI to “understand”? In this groundbreaking plenary from IASEAI ’25, Geoffrey Hinton—Turing Award winner and Nobel laureate—examines the concept of “understanding” in machine learning. Drawing on deep neural networks and cognitive science, Hinton asks whether current models truly grasp meaning or merely exploit statistical correlations—and why that distinction matters for future AI safety and alignment. 📌 Explore more from IASEAI ’25: https://www.iaseai.org/our-programs/i… 🔗 More on Geoffrey Hinton – Turing Award winner & deep learning pioneer: https://discover.research.utoronto.ca… #GeoffreyHinton #AIUnderstanding #DeepLearning #AISafety #IASEAI #neuralnetworks
What are the “red lines” we must not cross to keep AI under control? In this essential plenary from IASEAI ’25, MIT Professor and Future of Life Institute President Max Tegmark (named one of Time’s 100 most influential people in AI, 2023) argues for binding safety standards that draw clear red lines against loss of control—laying the groundwork for a future in which society thrives alongside guaranteed-safe AI. 📌 Explore the full IASEAI program: https://www.iaseai.org/conference/pro… 🔗 Learn more about Max Tegmark: https://futureoflife.org/person/max-t… #MaxTegmark #AISafety #ExistentialRisk #AIControl #IASEAI #futureoflife
IASEAI and the Future – Stuart Russell | IASEAI 2025 International Association for Safe & Ethical AI
How can we ensure that AI systems operate safely and ethically, forever? In the closing plenary from IASEAI ’25, Berkeley Professor and IASEAI President Stuart Russell, OBE, FRS, lays out a roadmap toward provably safe and beneficial AI systems, identifying open research problems and outlining the forms of governance needed to prevent catastrophic risks. 📌 Explore the full IASEAI program: https://www.iaseai.org/conference/pro… 🔗 Learn more about Stuart Russell: https://en.wikipedia.org/wiki/Stuart_… 🔗 CHAI – https://humancompatible.ai 🔗 Reith Lectures – https://people.eecs.berkeley.edu/~rus… 🔗 AI: A Modern Approach – https://aima.cs.berkeley.edu/ 🔗 Human Compatible – https://penguinrandomhouse.com/books/… #StuartRussell #AISafety #ExistentialRisk #AIControl #IASEAI #CHAI