FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT.
“We don’t know any examples of more intelligent things being controlled by less intelligent things… These things will get much more intelligent than us. And the worry is, can we keep them working for us if they are more intelligent than us? They will have, for example, learned how to deceive. They will be able to deceive us if they want to.” — Geoffrey Hinton
Does AI already threaten humanity?
Part 1 of my conversation with @geoffreyhinton, the “godfather of AI” who left his job at Google to speak about the risks: pic.twitter.com/bu6mJ3VWPz
— Fareed Zakaria (@FareedZakaria) June 11, 2023
As concerns grow over artificial intelligence, can we put up guardrails to keep it safe? Or is it already too late?
Part 2 of my conversation with AI “godfather” @geoffreyhinton: pic.twitter.com/UhePlvRNUM
— Fareed Zakaria (@FareedZakaria) June 11, 2023
Had an insightful conversation with @geoffreyhinton about AI and catastrophic risks. Two thoughts we want to share:
(i) It’s important that AI scientists reach consensus on risks, similar to climate scientists, who have rough consensus on climate change, to shape good policy.
(ii) Do AI models understand the world? We think they do. If we list out and develop a shared view on key technical questions like this, it will help move us toward consensus on risks.
— Andrew Ng (@AndrewYNg) June 11, 2023