Let’s face it: in the long run, there’s either going to be safe AI or no AI. There is no future with powerful unsafe AI and human beings.

In this episode of For Humanity, John Sherman speaks with Professor Stuart Russell — one of the world’s foremost AI pioneers and co-author of Artificial Intelligence: A Modern Approach — about the terrifying honesty of today’s AI leaders. Russell reveals that the CEO of a major AI company told him his best hope for a good future is a “Chernobyl-scale AI disaster.” Yes — one of the people building advanced AI believes only a catastrophic warning shot could wake up the world in time. John and Stuart dive deep into the psychology, politics, and incentives driving this suicidal race toward AGI. They discuss:

  • Why even AI insiders are losing faith in control
  • What a “Chernobyl moment” could actually look like
  • Why regulation isn’t anti-innovation — it’s survival
  • The myth that America is “allergic” to AI rules
  • How liability, accountability, and provable safety could still save us
  • Whether we can ever truly coexist with a superintelligence

This is one of the most urgent conversations ever hosted on For Humanity. If you care about your kids’ future — or humanity’s — don’t miss this one.

00:00 – Intro: John Sherman on why AI must be safe or nothing

01:00 – Who is Stuart Russell? AI pioneer and godfather turned whistleblower

02:00 – The shocking confession: An AI lab CEO’s “Chernobyl-scale disaster” hope

03:00 – Talking to family about extinction risk

05:30 – Why the public still doesn’t get it

07:30 – Hollywood myths and false optimism

10:00 – How AI could actually kill us – from words to world-changing actions

13:00 – Human imitation and deceptive goals in LLMs

17:00 – Consciousness doesn’t matter for survival

20:00 – P(doom) and why Russell rejects fatalism

22:30 – The real doomers are the builders – tech incentives and tribalism

25:00 – The Ground News sponsor segment

29:45 – Global governance and the China question

33:00 – How cooperation is still possible

36:00 – The AI CEO who wants catastrophe

38:30 – Why unsafe AI kills industries, not just people

41:00 – America is not allergic to regulation

46:00 – Liability, accountability, and slowing down AI

52:00 – The limits of liability for extinction risk

56:00 – Warning shots: what an AI Chernobyl might look like

59:00 – Financial crashes, cyber attacks, engineered pandemics

1:03:00 – What happens after a global blackout?

1:06:00 – What provably safe AI regulation should look like

1:10:00 – Why we can’t just “switch it off”

1:17:00 – The ‘giant bird’ metaphor for current AI development

1:20:00 – Job loss and extinction — connected fights

1:23:00 – Could we coexist with superintelligence?

1:25:00 – Stuart Russell’s hope: global collaboration through safety ethics

1:29:00 – The pendulum is swinging back — public awareness is rising

1:31:00 – Outro: Why slowing down AI is not anti-innovation