
Nobel laureate Geoffrey Hinton, known as one of the “godfathers of AI” for his pioneering work in deep learning and neural networks, joins Kara to discuss the technology he helped create — and how to mitigate the existential risks it poses. Hinton explains both the short- and long-term dangers he sees in the rapid rise of artificial intelligence, from its potential to undermine democracy to the existential threat of machines surpassing human intelligence. They discuss:

  • How to gauge concerns over AI’s potential to corrupt democracies, create new viruses, and power autonomous weapons
  • Job creation vs. displacement
  • Chatbots and the need for safety guarantees
  • Opposing views on the risks of AI
  • The need for an international framework for AI
  • Open-source vs. closed-source AI models

Chapter markers:
00:00 Intro
00:48 The difference between AI and nuclear weapons
02:28 Why the sudden focus on AI?
05:15 When did AI become a threat?
07:10 Why Hinton decided to leave Google
12:23 Will AI cause unemployment?
20:00 How should we be training AI?
23:45 Preventing an AI takeover
27:30 Opposing views on the risks of AI
33:53 Thoughts on how to regulate AI
35:34 Will China outpace the US?
37:37 An international framework for AI
40:40 The potential AI bubble
44:00 Open source vs. closed source models
49:30 How to avoid deepfakes
54:03 The Godfather of AI

“You can test for lots of things and you can make them much better by doing that. The question is, can you make them sort of guaranteed safe? And I think the answer is you’ll never get that correct the same way as you’ll never get it for people. It’s never going to be completely safe.” (22:12)

“I thought long and hard about whether I should sign that moratorium precisely for that reason. I signed it because I think it’ll have a political effect. And I really think that humanity would be very ill-advised to allow anybody to develop superintelligence until we have some understanding of whether we can do it safely. If we know we can’t do it safely, we should stop.” (39:19)

“I believe it’s a huge mistake to release the weights. I believe that’s a gift to cyber criminals and terrorists and all sorts of people and other countries. Meta was the first to do that … But I think it’s stupid releasing the weights.” (45:31)
