Professor Geoffrey Miller, what’s your P(doom)?  Fifty percent.

“Increasing awareness of AI safety is not just an American issue or a European issue. It’s a global issue. And I think we should try to be as inclusive as possible about, you know, crucially understanding that both China and America need to understand these risks. And if both countries do, we can actually coordinate and solve this. I’m actually much more optimistic about this than I was a year ago.” — Professor Geoffrey Miller

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Prof. Geoffrey Miller is an evolutionary psychologist, bestselling author, associate professor of psychology at the University of New Mexico, and one of the world’s leading experts on signaling theory and human sexual selection. His book “Mate” was hugely influential to me during my dating years, so I was thrilled to finally get him on the show. In this episode, Geoffrey drops a bombshell 50% P(Doom) assessment, coming from someone who wrote foundational papers on neural networks and genetic algorithms back in the ’90s before pivoting to study human mating behavior for 30 years. What makes Geoffrey’s doom perspective unique is that he thinks both inner and outer alignment might be unsolvable in principle, ever. He’s also surprisingly bearish on AI’s current value, arguing it hasn’t been net positive for society yet despite the $14 billion in OpenAI revenue. We cover his fascinating intellectual journey from early AI researcher to pickup artist advisor to AI doomer, why Asperger’s people make better psychology researchers, the polyamory scene in rationalist circles, and his surprisingly optimistic take on cooperating with China. Geoffrey brings a deeply humanist perspective. He genuinely loves human civilization as it is and sees no reason to rush toward our potential replacement.

00:00:00 – Introducing Prof. Geoffrey Miller
00:01:46 – Geoffrey’s intellectual career arc: AI → evolutionary psychology → back to AI
00:03:43 – Signaling theory as the main theme driving his research
00:05:04 – Why evolutionary psychology is legitimate science, not just speculation
00:08:18 – Being a professor in the AI age and making courses “AI-proof”
00:09:12 – Getting tenure in 2008 and using academic freedom responsibly
00:11:01 – Student cheating epidemic with AI tools, going “fully medieval”
00:13:28 – Should professors use AI for grading? (Geoffrey says no, would be unethical)
00:23:06 – Coming out as Aspie and neurodiversity in academia
00:29:15 – What is sex and its role in evolution (error correction vs. variation)
00:34:06 – Sexual selection as an evolutionary “supercharger”
00:37:25 – Dating advice, pickup artistry, and evolutionary psychology insights
00:45:04 – Polyamory: Geoffrey’s experience and the rationalist connection
00:50:42 – Why rationalists tend to be poly vs. Chesterton’s fence on monogamy
00:54:07 – The “primal” lifestyle and evolutionary medicine
00:56:59 – How Iain M. Banks’ Culture novels shaped Geoffrey’s AI thinking
01:05:26 – What’s Your P(Doom)™
01:08:04 – Main doom scenario: AI arms race leading to unaligned ASI
01:14:10 – Bad actors problem: antinatalists, religious extremists, eco-alarmists
01:21:13 – Inner vs. outer alignment – both may be unsolvable in principle
01:23:56 – “What’s the hurry?” – Why rush when alignment might take millennia?
01:28:17 – Disagreement on whether AI has been net positive so far
01:35:13 – Why AI won’t magically solve longevity or other major problems
01:37:56 – Unemployment doom and loss of human autonomy
01:40:13 – Cosmic perspective: We could be “the baddies” spreading unaligned AI
01:44:34 – “Humanity is doing incredibly well” – no need for Hail Mary AI
01:49:01 – Why ASI might be bad at solving alignment (lacks human cultural wisdom)
01:52:06 – China cooperation: “Whoever builds ASI first loses”
01:55:19 – Liron’s Outro

Geoffrey: My hunch, and I can’t prove this, but my hunch is that both inner alignment and outer alignment are unsolvable in principle, ever. I don’t actually think either of those is a well-formed, coherent problem that we could ever actually solve…

Yeah, and I guess what I wonder is, from an evolutionary timescale, right? What is the hurry? Why are we pushing this so fast? If alignment is solvable, it might take 10 years, it might take a hundred years, it might take 10,000 years. It might be the hardest problem we’ve ever confronted.

And I think there’s some pretty good reasons to think it might be very, very, very hard. Why not wait? Why not take it slow? It took arguably tens of millions or hundreds of millions of years to align animal nervous systems with the interests of their genes, right, to get sensory and cognitive systems that could reliably help genes survive and reproduce. That’s the history of the evolution of nervous systems.

It’s almost like genes are trying to sort of align nervous systems with their own interests, but there’s so many ways to fail, right? There’s so many possible misalignments. We call it mismatch in evolutionary psychology, right? Where your brain is going for certain goals that don’t actually result in survival or having kids reliably, right?

There’s so many ways to get distracted from mainline evolution. I think aligning humanity with ASIs is structurally analogous to that challenge. It’s almost like we’re sort of the genes: we’re trying to build this super nervous system and we hope that it acts in our interests.

If that alignment problem took tens of millions of years and many, many, many generations, I don’t see any reason why ASI alignment would take just a few years.
