“AI Risk=Jenga” For Humanity, An AI Safety Podcast Episode #17, Liron Shapira Interview
“We don’t know what we’re doing, right, we don’t know what these models are capable of.”
LIRON: “I’m optimistic that if we were to actually admit that there’s a problem and then address it sanely, then we would have a shot. So I’m not particularly pessimistic about that hypothetical. But it just seems like we are currently moving directly toward doom, as a realistic assessment of where we are. So unless we change it, I’m not optimistic about the trajectory; that just seems to be where we’re headed. […] This is all relative to the human brain. When you have something that’s smarter than human and it can achieve goals, there are runaway cascades as a matter of pure logic. Something that wants to optimize the universe is going to have all these implications, all these things it has to do: it has to seize power. That’s the kind of perspective I’m describing. […] That’s the problem right now: we’re losing the race, losing the race against time, losing the race against AI capabilities. That’s the thing people are trying to stick their heads in the sand and not see, that we’re losing this critical race. Tech usually gives us nice things; the problem here is there’s no undo button. Right. There’s no debugging phase once you make your AI uncontrollable. […] Just like Jenga, right? You pull the block, and sometimes the tower falls down. Absolutely, because we don’t know what we’re doing; we don’t know what these models are capable of. […] What I think will wake up a lot of people is things that look and feel more like the Terminator robot from the movie. I think that movie did a good service portraying how AI can be dynamic, because that’s intuitive to people. […] I think it’s just as likely that the internet will get taken over, and the attack will kind of start from the internet.
[…] I think having people gather and make it public knowledge that pausing AI is an urgent issue worth protesting, that’s a high-leverage point. Because the converse, not doing protests, invites the response: well, if this is such an important issue, where are the protests? So that seems like a high-leverage point to me. And not only that: how can the politicians make a decision to lead the people if they don’t detect that there’s a groundswell? […] This may be the most effective thing to tell the average person, or the average regulator: look at what they’re telling you they’re doing. [ergo: (A) The existential threat is real, (B) Nobody knows how it works, and (C) Nobody knows how to control it.] I think there’s a 20% chance that it goes rogue in the next five years.”
In Episode #17, “AI Risk + Jenga,” John talks with tech CEO and AI risk activist Liron Shapira about a broad range of AI risk topics centered on existential risk. Liron likens AI risk to a game of Jenga: there are a finite number of pieces, and each one you pull out leaves the tower one move closer to collapse. He argues that something like Sora, seemingly just a video innovation, could actually end all life on earth.

This podcast is not journalism. But it’s not opinion either. The show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as two years. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Resources:
Liron’s YouTube channel: https://youtube.com/@liron00?si=cqIo5…
More on rationalism: https://www.lesswrong.com/
More on California State Senate Bill SB-1047: https://leginfo.legislature.ca.gov/fa… and https://thezvi.substack.com/p/on-the-…
Warren Wolf, “Señor Mouse” – The Checkout: Live at Berklee: https://youtu.be/OZDwzBnn6uc?si=o5Bjl…