
Today I’m reacting to the recent Scott Aaronson interview on the Win-Win podcast with Liv Boeree and Igor Kurganov. Prof. Aaronson is the Director of the Quantum Information Center at the University of Texas at Austin. He’s best known for his research advancing the frontier of complexity theory, especially quantum complexity theory, and for making complex insights from his field accessible to a wider readership via his blog.

Scott is one of my biggest intellectual influences. His famous “Who Can Name the Bigger Number?” essay and his long-running blog are among my best memories of coming across high-quality intellectual content online as a teen. His posts and lectures taught me much of what I know about complexity theory.

Scott recently completed a two-year stint at OpenAI focusing on the theoretical foundations of AI safety, so I was interested to hear his insider account. Unfortunately, what I heard in the interview confirms my worst fears about the meaning of “safety” at today’s AI companies: they’re laughably clueless about how to achieve any measure of safety, and instead of doing the adult thing and slowing down their capabilities work, they’re pushing forward recklessly.

00:00 Introducing Scott Aaronson
02:17 Scott’s Recruitment by OpenAI
04:18 Scott’s Work on AI Safety at OpenAI
08:10 Challenges in AI Alignment
12:05 Watermarking AI Outputs
15:23 The State of AI Safety Research
22:13 The Intractability of AI Alignment
34:20 Policy Implications and the Call to Pause AI
38:18 Out-of-Distribution Generalization
45:30 Moral Worth Criterion for Humans
51:49 Quantum Mechanics and Human Uniqueness
01:00:31 Quantum No-Cloning Theorem
01:12:40 Scott Is Almost An Accelerationist?
01:18:04 Geoffrey Hinton’s Proposal for Analog AI
01:36:13 The AI Arms Race and the Need for Regulation
01:39:41 Scott Aaronson’s Thoughts on Sam Altman
01:42:58 Scott Rejects the Orthogonality Thesis
01:46:35 Final Thoughts
01:48:48 Lethal Intelligence Clip
01:51:42 Outro

SHOW NOTES

Scott’s Interview on Win-Win with Liv Boeree and Igor Kurganov: • Scott Aaronson On The Race To AGI and…

Scott’s Blog: https://scottaaronson.blog

—

PauseAI Website: https://pauseai.info
PauseAI Discord: / discord

—

Watch the Lethal Intelligence video • Lethal Intelligence Guide [Part 1] – … and check out https://LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

—

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at https://DoomDebates.com and to / @doomdebates.