Trevor Noah (who is also Microsoft’s “Chief Questions Officer”) and Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind (later acquired by Google), take a deep dive into whether the benefits of artificial intelligence (AI) to the human race outweigh its unprecedented risks.

Chapters:
0:00 – Introduction
1:42 – Early DeepMind
3:36 – How AI Works
6:36 – Do Machines Think?
9:00 – Humanist AI
11:04 – Philosophy vs. Tech
14:53 – Future of Energy
17:02 – AI’s Environmental Cost
20:15 – Scale & Risk
21:42 – Jobs vs. Workers
29:28 – Identity & Work
32:07 – Speed of AI
37:22 – Containment Challenge
44:07 – AI as Agent
47:25 – Self-Improvement Risk
50:00 – Early Evangelism
1:02:22 – Trust & Prediction
1:02:30 – AlphaGo & AlphaFold
1:09:33 – Smarter or Lazier?
1:13:42 – Power Decentralized
1:17:22 – Four Red Lines
1:21:14 – AI Rights?
1:26:52 – Manipulation Risks
1:27:52 – Deepfakes & Truth
1:29:43 – Optimism vs. Cynicism
1:30:25 – Work vs. Jobs
1:33:03 – Vision of Abundance
1:35:43 – Global Impact
1:38:32 – Community Lessons
1:42:08 – Closing Thoughts
NOAH. Is there anything you could ever see in the field that would make you want to, you know, hit a kill switch? Is there anything you could experience with AI where you would come out and go, nope, shut it all down?
SULEYMAN. Yeah, definitely. It’s very clear: if an AI has the ability to recursively self-improve, that is, it can modify its own code, combined with the ability to set its own goals, combined with the ability to act autonomously, combined with the ability to accrue its own resources. Those are the four criteria: recursive self-improvement, setting its own goals, acquiring its own resources, and acting autonomously. That would be a very powerful system, one that would require military-grade intervention to stop in, say, five to ten years’ time if we allowed it to get that far. And so it’s on me as a model developer at Microsoft, on my peers at the other companies, and on governments to audit and regulate those capabilities, because I think they’re going to be sensitive capabilities, just like you can’t just go off and say, I’ve got $1 billion, I’m going to go build a nuclear power plant.
Humble editorial comment: Kill switch? WHAT kill switch? Where is the plan? Where is the specification? Where, concretely, is the kill switch?
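For what it’s worth, the four criteria Suleyman names do amount to something checkable, at least at the level of a sketch. Below is a minimal illustration in Python of his red line as a capability audit, assuming some upstream evaluation process has already scored each capability; every name here is hypothetical and stands in for no real Microsoft or regulatory API.

    # Purely illustrative sketch of Suleyman's four red-line criteria,
    # expressed as an audit checklist. All names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class CapabilityAudit:
        """Assessed capabilities of a model under evaluation (hypothetical)."""
        recursive_self_improvement: bool  # can modify its own code
        sets_own_goals: bool              # originates objectives no operator gave it
        acquires_resources: bool          # can accrue money, compute, or credentials
        acts_autonomously: bool           # acts in the world without human sign-off

    def crosses_red_line(audit: CapabilityAudit) -> bool:
        # Per the transcript, the danger is the combination of all four
        # capabilities, not any one of them in isolation.
        return (
            audit.recursive_self_improvement
            and audit.sets_own_goals
            and audit.acquires_resources
            and audit.acts_autonomously
        )

    if __name__ == "__main__":
        # Example: a system that self-improves and acts autonomously, but
        # neither sets its own goals nor gathers resources, would not trip
        # this particular wire.
        audit = CapabilityAudit(
            recursive_self_improvement=True,
            sets_own_goals=False,
            acquires_resources=False,
            acts_autonomously=True,
        )
        print(crosses_red_line(audit))  # False

Note the design choice baked in here, which follows Suleyman’s wording: the trigger is the conjunction of all four capabilities. A real auditor would presumably treat any subset as escalating risk rather than waiting for all four, but that policy question is exactly the missing specification the comment above is complaining about.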