Awakening the Machine is an interview series with some of the world's key voices and experts on AI, exploring the potential and risks associated with the development of AI, as well as the implications for humanity.

0:00 Introduction
1:34 General Intelligence
2:40 Predicting the Future
4:47 Magician's Playground
6:01 The AI Alignment Problem
8:00 Forfeiting the Intelligence Throne
10:00 Developing Aligned AI
11:49 A Human Alignment Challenge
13:43 The Need for Caution

Introduction

For better or worse, we are creating a God. It is on its way: something as powerful as the God that all the religious scriptures talk about is being created. And it's still a big question mark whether it's a good God or a bad God, whether it's an angel or a demon. We're at a fork, and both of those are still possible. We can do better. We can actually solve this problem. The asteroid is coming, but maybe we can deflect it. And by the way, the asteroid might also land on Earth and solve all of our problems. That's the stakes.

I had always thought of AI as something in sci-fi. It was not something serious people talked about. And I remember in 2015 I came across a book, Superintelligence by Nick Bostrom. It just blew my mind when I realized AI was all around us in the form of narrow intelligence, that our world was actually being run by AI in a way I hadn't understood. And secondly, that artificial general intelligence was this concept that AI could actually become as smart as humans across the board. I then went and read ten books on AI, read a bunch of papers, dug in, and decided I needed to write about this.

General Intelligence

The thing that makes us, our brains, so amazing is that we have this breadth: an intelligence that can be generally applied to anything. We can look at the way a beaver builds a dam. The beaver is programmed from birth to build dams; we're not programmed to build dams, but we can look at how the beaver does it, understand it, improve upon it, and suddenly we're building the Hoover Dam. Ancient people didn't know how to build the Hoover Dam, because it's not in our programming the way dam-building is in a beaver's. But we can learn how to do that. We can learn how to do anything. That is an unbelievable, generalized intelligence. So the question is: can AI get there? Can we have AI that isn't limited to what it's programmed for, but is smart enough in a general way that it can look at any problem, learn from experience, and, like a human, solve problems across the board?

Predicting the Future

When I wrote about AI in 2015, AGI, artificial general intelligence, seemed like a faraway dream. There were people who said we'll never get there, others who said maybe it'll take 100 years, and some who said it might come very soon. Today, I don't think there's anyone saying it's going to take 100 years. Anyone who's been on Earth the last year and following this can suddenly visualize AGI coming soon in a way they couldn't have in 2015. I think we're going to get surprised again and again and again, and the surprises will happen quickly. When I was thinking about AI a few years ago and we were talking about all the possibilities, I don't think anyone was saying, this is going to just replace Google. That wasn't even on people's radar. Or, this is going to put artists and writers out of business. No one was thinking that. I didn't hear anyone say it.
Look at people in the '60s and what they thought the year 2000 was going to be like: flying cars, living on Mars, a moon base. They were hugely wrong about that. They'd be so disappointed. And they did not predict the iPhone or the internet. But that's 40 or 50 years out. Today, even predicting two years ahead, we have the same problem: all the smart people can say what 2025 is going to look like with AI, and almost certainly a lot of the things they think won't be anywhere close. They're going to be wrong about that, and other things are going to happen that no one was predicting. When the iPhone first came out, you thought, this is a cool new phone. No one was thinking about Uber. No one was thinking this would put taxis out of business; that would have made no sense. We have no idea what industries AI is going to upend or what threats it's going to cause. We don't know. But we know it's powerful, and we know it's happening quickly.

Magician's Playground

They kind of started this machine and a magic wand fell out. Now both the makers of the magic wand and all the people who have it for free are sitting here pointing it at things and saying, oh look, it did that, and then pointing it over there. And by the way, it's not the last magic wand. There are going to be more companies; magic wands are going to be falling from the ceiling. They're going to get more powerful, and it's going to be a huge advantage to the people who get really good at using magic wands. What are the real things you want to do? How do you want to change the world? What do you want to do to gratify yourself in life? Now get good at using a magic wand to get the things you want, instead of wasting your time painstakingly getting things from your head into the world. It happens instantly now. You can just create. You can make so much more stuff. You can experiment. In the same two months you'd have spent building v1 of your website, you've now experimented with 150 versions and beta tested them all. I think young people should be trying to get really good at thinking creatively and at experimentation: really good at the feedback loop where you try something, get results, adjust, try something, get results, adjust. Just be a really rapid-paced experimenter.

The AI Alignment Problem

The people who are scared of AI and think it might kill us all don't think AI is a purely bad thing. They think AI is a purely powerful thing that can do immense good and immense bad. But the problem is, if it does both of those things, we all go extinct from the immense bad, and the immense good is irrelevant. It doesn't have our moral code baked into it. It doesn't even necessarily value life more than non-living things. Why would it? That's a very human thing. A lot of things we take for granted, like it's obviously bad to kill someone... well, for us, yes, we think that, because we are life and we value life.
If the AI just sees life as one pile of atoms, and the rock as another pile of atoms, and atoms as useful for different things, and it doesn't value life, that doesn't mean the AI is evil; it means it is not well aligned with the human worldview. So it starts using atoms for what it needs, and maybe the atoms in humans are more useful to it than the atoms in the rock, and it starts dismantling humans to use for materials. It's not evil. It doesn't hate humans. It's just doing its thing. The fact that this power may not care about the things we care about is what's called unaligned AI. And it turns out that trying to align something like this, an alien superintelligence, is one of the most difficult problems. We don't have experience with this problem. We don't know how to do it. And even if you could, even if you said, okay, I made an AI and I figured out how to align it with my goals: well, who's that guy? Do we like their goals? Whose goals do we actually want the AI to align with? It's all fine when this thing is a tool, like a chatbot. It's another thing when it's kind of a god.

Forfeiting the Intelligence Throne

Imagine something where not only can we not do or build what it can, but even if it tried to tell us what it understands and what it's building, we still couldn't wrap our heads around it. Not even close. That is such a foreign concept. We are the king of the castle on this planet. We are very used to being the king of the intelligence castle. A world where we are the smartest things on the planet has nothing to do with a world where we are not, and again, it's very exciting, in that a superintelligent AI can solve every problem we think we have: climate change, disease, cancer, mortality itself, poverty. Easy for an AI. But then there's the fact that that same incredible power can do incredible damage to us. Humans have killed off thousands of species through nothing more than our footprint, through what we do. I don't know many people out there saying, yes, let's drive this species extinct. No one wants to kill species. It's not even that we don't care; we actually do care. We'd rather not kill the species, and we're doing it anyway, because we have our goals, we're doing our thing, and we have a lot of power. The human species is just powerful. We can change the atmosphere; that's how powerful we are. We can level a forest and build a super mall right there. That's power. No other species can do that. And our power alone just tramples through the world, killing things. We should be scared of something so much more powerful than us, because we might just be another species to it, even if it says, yeah, I like humans, and it's sad. Some of the AIs out there may be saying, whatever we do, we can't kill humans; they're like the conservationists of the AI community. Others are saying, yes, we should try not to, but this other goal is more important. And while they argue, things move, things progress, and we all die.

Developing Aligned AI

If you're building a God, why would you not do it so, so, so carefully?
Because if you build the God correctly, a God that is on your side, that understands what you want, that's the best thing you can possibly imagine. And if the God is not what you want, that's the worst thing you can imagine. So when you're building a God, if God v1 is buggy, you're in big trouble. Because what are you going to say? Okay, we want to iterate; let's get some beta testers; let's change it. The God says, no way: you changing me would detract from my goal. That's very bad. In fact, the fact that all these people are trying to change me means I might need to get rid of people, because they seem to be in the way of my goal. So you can't build God v2. There is only God v1, and that is so different from how people normally build software. They don't think that way. They think, let's get it out there, let's test it, let's throw it against the wall and see if it sticks. You can't think that way here. You have to get v1 right, which is so hard. Not much software is good at v1. We're also used to the idea that if you build something that's doing damage, you pull the plug. Shut it down. You can't pull the plug on a God. The God has figured out how to get energy in ways we don't even understand, sapping energy from dark matter we don't even know is there, through channels we don't even know exist. There's no such thing as "they pulled the plug." So imagine there are a thousand of these, and one of them goes wacky, goes in an unintended direction, develops a really destructive personality. Alone, no matter what the other 999 are doing, it can wipe us all out.

A Human Alignment Challenge

The stakes are higher than ever. We have utopia sitting right there, and we have extinction sitting right there. And the second thing I see is society devolving into a political tribal haze. What you'd want, with stakes this high, is to be sober and have your wits about you. The species should be unbelievably grown up right now: put all of the B.S. aside, work together, and try to be as wise as we can collectively. And how do you do that? How does a group of not-that-wise humans become wise collectively? Through discourse, through open discussion, through rules of engagement, and by just acting like grown-ups. The opposite of that, which you see a lot right now, is being completely subsumed by political tribalism in the U.S., where people are so consumed with the good guys beating the bad guys that they have their heads in the sand about what's going on, and it becomes very scary to express the wrong opinion. Think about the Covid lab-leak hypothesis. For a year, that was a taboo thing to bring up. You couldn't raise the topic, because the political-tribalism whirlpool is so strong that it pulled the topic in. Covid should have united everyone, but very quickly, if you were on one side, the lab-leak hypothesis was compelling, and if you were on the other side, it was a repulsive view to even express out loud; your reputation was destroyed if you even suggested it. That is exactly how we become unwise: we get really religious about certain viewpoints, and we use any new topic, Covid, AI, as fresh fodder for the political food fight.
The Need for Caution

I don't hear many experts anymore who say there's nothing to be scared of. I think those experts have gone very quiet or have changed their minds. But it ranges from experts who say this thing will kill us all if we don't stop it, and I don't think we can stop it (so they're basically saying the apocalypse is here, it's just not quite here yet), all the way to people who say, look, we will probably figure out how to do this safely. Maybe the AI will help us figure out how to do it safely. Once people get properly scared, which should happen soon, all this money and all this research will pour in. And if we have to halt what we're doing, I think if everyone is scared enough, there will be reason for a worldwide halt and for working together. It will draw us together, and we will figure this out, and we'll start figuring it out in the years before the AI is powerful enough to kill us all. So there's a range: people who feel, I wouldn't necessarily say optimistic, but that there's reason for optimism, and others who feel there's almost no reason for optimism. The thing I know for sure is that we should be cautious. I mean, why would we ever be reckless when we're building a God?