Does the possibility of a superintelligence arriving within our lifetimes, potentially even in the near future, keep you up at night?
“Yeah, for sure. I think it has to keep everybody up, actually, because we have no evidence that we know how to control something that is as powerful as us, let alone something that is by design way, way, way more capable and intelligent than us. And so that should be a cause for concern for all of us. And that’s why, right from the very founding mission of DeepMind, our business plan was called building AGI safely and ethically for the benefit of humanity. And I think it was very clear that if we were successful, then we would have one of the most wicked problems in the history of our species. It is wicked because, on the one hand, it is clearly the most valuable technology, one that is for sure going to improve the lives of billions and billions of people if we get it right. We really will solve our energy crisis. We really will solve our health crisis. We really will be able to produce abundant food. It really will be like that if we can get it right. And yet the challenge of getting it right is just mind-blowingly difficult. And it seems so fragile, because even if we get it right, all it takes is one misaligned moment for the whole thing to come crashing down.” — Mustafa Suleyman
“There are kind of two theories of safety in the field. There’s a group of people who believe the challenge is around alignment: designing an AI that always has our best interests at heart, reflects our values, and doesn’t misbehave. And then there’s a second piece, which I’m more subscribed to, which is the process of containment. Containment is about making sure that the boundaries of the AI’s agency and influence are sufficiently limited, and provably limited, because it assumes that we’ll never have perfect, perpetual alignment. That’s too optimistic an assumption, in my opinion. We should strive for it, and I definitely believe in alignment research, but with completely unrestricted access, I just don’t really see how that ends well in three or four decades’ time if these AIs are able to set their own goals, have their own autonomy, can acquire more resources, and have their own intrinsic motivations.” — Mustafa Suleyman
What happens when Microsoft hires a philosopher to run their AI department? In this one-of-a-kind interview, I sit down with Mustafa Suleyman—founder of DeepMind, co-founder of Inflection AI, and now CEO of Microsoft AI—to talk about the future of artificial intelligence through a radically human lens.

Chapters:
00:00 – 01:26 Introduction
01:27 – 04:29 Solving intelligence
04:30 – 07:19 Did we solve intelligence?
07:20 – 09:44 Having an outsized impact on the world
09:45 – 15:11 Leaving Google to start Inflection
15:12 – 18:27 The Coming Wave
18:28 – 19:50 Does superintelligence keep you up at night?
19:57 – 25:05 Getting AI right
25:06 – 31:24 AI consciousness
31:25 – 35:42 The Steve Jobs look
35:43 – 37:34 Is AI an invention or a discovery?
37:35 – 42:56 Effectiveness in startups vs. big tech
37:36 – 42:56 Access to social privilege
42:57 – 44:45 How exciting is all of this?
44:46 – 45:12 The inflection point
45:13 – 47:10 Just ask the stupid questions
47:11 – 49:12 The year of the social science hacker
49:13 – 52:12 How AI will impact the job market
52:13 – 53:37 Mustafa’s advice
53:38 – 55:22 Mustafa the outsider
55:23 – 56:34 What motivates Mustafa
56:35 – 57:17 Everything is yet to happen