Geoffrey Hinton | Will digital intelligence replace biological intelligence?

2 February 2024 | Convocation Hall

The Schwartz Reisman Institute for Technology and Society and the Department of Computer Science at the University of Toronto, in collaboration with the Vector Institute for Artificial Intelligence and the Cosmic Future Initiative at the Faculty of Arts & Science, present Geoffrey Hinton on October 27, 2023, at the University of Toronto.

0:00:00–0:07:20 Opening remarks and introduction

0:07:21–0:08:43 Overview

0:08:44–0:20:08 Two different ways to do computation

0:20:09–0:30:11 Do large language models really understand what they are saying?

0:30:12–0:49:50 The first neural net language model and how it works

0:49:51–0:57:24 Will we be able to control super-intelligence once it surpasses our intelligence?

0:57:25–1:03:18 Does digital intelligence have subjective experience?

1:03:19–1:55:36 Q&A

1:55:37–1:58:37 Closing remarks

Talk title: “Will digital intelligence replace biological intelligence?”

Abstract: Digital computers were designed to allow a person to tell them exactly what to do. They require high energy and precise fabrication, but in return they allow exactly the same model to be run on physically different pieces of hardware, which makes the model immortal. For computers that learn what to do, we could abandon the fundamental principle that the software should be separable from the hardware and mimic biology by using very low power analog computation that makes use of the idiosyncratic properties of a particular piece of hardware. This requires a learning algorithm that can make use of the analog properties without having a good model of those properties. Using the idiosyncratic analog properties of the hardware makes the computation mortal. When the hardware dies, so does the learned knowledge. The knowledge can be transferred to a younger analog computer by getting the younger computer to mimic the outputs of the older one, but education is a slow and painful process. By contrast, digital computation makes it possible to run many copies of exactly the same model on different pieces of hardware. Thousands of identical digital agents can look at thousands of different datasets and share what they have learned very efficiently by averaging their weight changes. That is why chatbots like GPT-4 and Gemini can learn thousands of times more than any one person. Also, digital computation can use the backpropagation learning procedure, which scales much better than any procedure yet found for analog hardware. This leads me to believe that large-scale digital computation is probably far better at acquiring knowledge than biological computation and may soon be much more intelligent than us.
The fact that digital intelligences are immortal and did not evolve should make them less susceptible to religion and wars, but if a digital super-intelligence ever wanted to take control it is unlikely that we could stop it, so the most urgent research question in AI is how to ensure that they never want to take control.
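The abstract's point about identical digital agents pooling knowledge "by averaging their weight changes" can be illustrated with a minimal toy sketch. This is not code from the talk; the function name, plain-Python-list weights, and the numbers are all illustrative assumptions.

```python
# Toy sketch of the weight-sharing idea from the abstract: several identical
# copies of a model start from the same weights, train on different data
# shards, and then pool what they learned by averaging their weight changes.
# All names and numbers here are illustrative, not from the talk.

def average_weight_changes(base_weights, updated_copies):
    """Average the weight deltas of identical copies and apply them to the base.

    base_weights   : list of floats, the shared starting weights
    updated_copies : list of weight lists, one per copy after local training
    """
    n = len(updated_copies)
    # Mean change per weight across all copies.
    deltas = [
        sum(copy[i] - base_weights[i] for copy in updated_copies) / n
        for i in range(len(base_weights))
    ]
    # Every copy can now adopt the merged weights, so each one benefits
    # from data it never saw itself.
    return [w + d for w, d in zip(base_weights, deltas)]

# Three copies, same starting weights, different local updates.
base = [0.0, 1.0]
copies = [[0.3, 1.0], [0.0, 1.3], [0.3, 1.3]]
merged = average_weight_changes(base, copies)
# merged == [0.2, 1.2]
```

This only works because the copies are bit-for-bit identical, so a weight change computed on one piece of hardware means exactly the same thing on another; mortal analog hardware, with its idiosyncratic properties, has no such shared coordinate system, which is why knowledge there can only be transferred slowly by imitation.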

About Geoffrey Hinton

Geoffrey Hinton received his PhD in artificial intelligence from Edinburgh in 1978. After five years as a faculty member at Carnegie Mellon, he became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto, where he is now an emeritus professor. In 2013, Google acquired Hinton’s neural networks startup, DNNresearch, which developed out of his research at U of T. Subsequently, Hinton was a Vice President and Engineering Fellow at Google until 2023. He is a founder of the Vector Institute for Artificial Intelligence, where he continues to serve as Chief Scientific Adviser. Hinton was one of the researchers who introduced the backpropagation algorithm and the first to use backpropagation for learning word embeddings. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning and deep learning. His research group in Toronto made major breakthroughs in deep learning that revolutionized speech recognition and object classification. Hinton is among the most widely cited computer scientists in the world. He is a fellow of the UK Royal Society, the Royal Society of Canada, the Association for the Advancement of Artificial Intelligence, and a foreign member of the US National Academy of Engineering and the American Academy of Arts and Sciences. His awards include the David E. Rumelhart Prize, the IJCAI Award for Research Excellence, the Killam Prize for Engineering, the IEEE Frank Rosenblatt Medal, the NSERC Herzberg Gold Medal, the IEEE James Clerk Maxwell Gold Medal, the NEC C&C Award, the BBVA Award, the Honda Prize, and most notably the ACM A.M. Turing Award.

HINTON. So I think it’s gonna get much smarter than people, and then I think it’s probably gonna take control. There’s many ways that can happen. The first is from bad actors. I gave this talk in China, by the way, with this slide. And before I sent it, the Chinese said they had to review the slides. (audience laughing) So I’m not stupid, so I took out Xi, and I got a message back saying, could you please take out Putin? (audience laughing) That was educational. So there’s bad actors who’ll want to use these incredibly powerful things for bad purposes. And the problem is if you’ve got an intelligent agent, you don’t wanna micromanage it. You want to give it some autonomy to get things done efficiently. And so you’ll give it the ability to set up sub-goals. If you want to get to Europe, you have to get to the airport. Getting to the airport is a sub-goal for getting to Europe. And these super-intelligences will be able to create sub-goals. And they’ll very soon realize that a very good sub-goal is to get more power. If you’ve got more power, then you can get more done. So if you wanna get anything done, getting more power’s good. Now, they’ll also be very good at manipulating us because they’ll have learned from us, they’ll have read all the books by Machiavelli. I don’t know if there are many books by Machiavelli, but you know what I mean. I’m not in the arts or history. So they’ll be very good at manipulating people. And so it’s gonna be very hard to have the idea of a big switch, of someone holding a big red button. And when it starts doing bad things, you press the button. Because the super-intelligence will explain to this person who’s holding the button that actually there’s bad guys trying to subvert democracy. And if you press the button, you’re just gonna be helping them. And it’d be very good at persuasion, about as good as an adult is at persuading a 2-year-old. And so the big switch idea isn’t gonna work.
And you saw that fairly recently where Donald Trump didn’t have to go to the Capitol to invade it. He just had to persuade his followers, many of whom I suspect weren’t bad people. It’s a dangerous thing to say, but they weren’t as bad as they seemed when they were invading the Capitol, ‘cause they thought they were protecting democracy. That’s what a lot of them thought they were doing. There were really bad guys too, but a lot of them thought they were doing that. This is gonna be much better than someone like Trump at manipulating people. So that’s scary. And then the other problem is being on the wrong side of evolution. We saw that with the pandemic; we were on the wrong side of evolution. Suppose you have multiple different super-intelligences. Now you’ve got the problem that the super-intelligence that can control the most GPUs is gonna be the smartest one. It’s gonna be able to learn more. And if it starts doing things like AlphaGo does, of playing against itself, it’s gonna be able to learn much more by reasoning with itself. So as soon as a super-intelligence wants to be the smartest, it’s gonna want more and more resources, and you’re gonna get evolution of super-intelligences. And let’s suppose there’s a lot of benign super-intelligences who are all out there just to help people. There are wonderful assistants from Amazon and Google and Microsoft, and all they want to do is help you. But let’s suppose that one of them just has a very, very slight tendency to want to be a little bit better than the other ones. Just a little bit better. You’re gonna get an evolutionary race, and I don’t think that’s gonna be good for us. So I wish I was wrong about this. I hope that Yann is right, but I think we need to do everything we can to prevent this from happening. But my guess is that we won’t. My guess is that they will take over. They’ll keep us around to keep the power stations running, but not for long, ‘cause they’ll be able to design better analog computers.
They’ll be much, much more intelligent than people ever were. And we’re just a passing stage in the evolution of intelligence. That’s my best guess. And I hope I’m wrong. But that’s sort of a depressing message to close on. A little bit depressing.