I interviewed Roman Yampolskiy, an AI safety professor with a background in computer security. We discussed why people have a hard time understanding and dealing with existential risk; most of the common reactions map directly onto known biases in the human brain. Roman is a strong believer in the simulation hypothesis. He believes that advanced AI will necessarily have to simulate reality at a very high level of detail in order to learn about the world. There could be billions of such simulations, which makes it far more likely that we are currently inside one of them than in base reality. Other features of our universe, such as how quantum mechanics works, also point to the likelihood of being simulated. Roman is also very pessimistic about humanity's ability to control AI in the long run: a superintelligence much smarter than any human will not remain under our control for long. If we want our species to remain masters of our own destiny, we should avoid creating superintelligence in the first place. We are on an evolutionary path that leads inexorably to such intelligence unless we intentionally decide otherwise. Perhaps that is what our simulation is all about: are we smart enough to avoid this temptation?
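(A minimal sketch of the counting argument behind that claim, not Roman's exact formulation: assume an observer is equally likely to be in any of the candidate worlds, and let N be the number of high-fidelity simulations running alongside one base reality. Then

P(simulated) = N / (N + 1)

so with N = 10^9 simulations, P(simulated) ≈ 0.999999999, and the odds of being in base reality are roughly one in a billion.)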
AI: Unexplainable, Unpredictable, Uncontrollable [book]

Obedience Beyond Reason: Assessing Controllability through Compliance with Irrational Orders
https://www.researchgate.net/profile/…

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
https://arxiv.org/pdf/1802.07228

Untestability of AI
https://www.researchgate.net/profile/…

On monitorability of AI
https://link.springer.com/content/pdf…

0:00 Intro
0:34 Background information
1:10 Contents
1:17 Part 1: Denying death
1:38 Why don’t people treat AI risk seriously
2:26 Is Dr Waku a real person?
2:48 Do human biases cause prioritization of less severe risks?
3:34 Denying the problem if it seems impossible
4:20 Billionaires running AI labs
5:16 Can the government step in to halt Moloch’s trap?
6:09 Shifting money from compute to lawyers
6:44 No meaningful enforcement of rules
7:46 Compute limits would have to shrink over time
8:06 Part 2: The simulation hypothesis
8:15 Why we are in a simulation
9:07 How can we escape the simulation?
10:04 What is the purpose of the simulation?
10:52 Testing ground for meta technologies
11:31 Escape like video game speedrunning
12:33 Hacks in quantum computing
13:41 Implications of being in a simulation
14:52 Irrational obedience
15:39 Ensuring safe AI by being evil deities
16:51 The simulation is suppressing citations
17:29 How often does the universe misbehave?
18:01 Too late, we’re on YouTube
18:52 Part 3: Is AI uncontrollable?
19:13 It’s impossible to control superintelligence indefinitely
20:11 Cybersecurity is simple in comparison
20:37 Different types of control
21:22 We no longer decide what happens
22:13 How big a role does cybersecurity play in AI takeover?
22:55 Would superintelligence cause human extinction?
23:49 The only way to win is not to play
24:26 What benefit do people have to superintelligence?
25:23 Summary of the current race
26:13 What could stop the race?
26:46 AI-related accidents don’t matter
27:58 One-way evolutionary path
28:11 Can we keep up with AI by using mind enhancing technology?
29:42 What keeps you working in the face of these challenges?
30:24 What will superintelligence want?
31:05 Is advanced AI a Great Filter?
32:00 What should we as humans do?
32:22 Probabilistic safety guarantees
33:31 Formal verification has limits as well
34:22 We don’t know how to formally verify self-improving software
35:11 Talking about the textbook
36:22 All of these issues have a marketing problem
37:21 Have a wonderful life
37:29 Conclusion
37:45 Outro
38:00 Someone else is waving too