FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Geoffrey Hinton – Two Paths to Intelligence (25 May 2023, Public Lecture, University of Cambridge)

Digital computers were designed to allow a person to tell them exactly what to do. They require high energy and precise fabrication, but they allow exactly the same computation to be run on physically different pieces of hardware. For computers that learn what to do, we could abandon the fundamental principle that the software should be separable from the hardware and use very low power analog computation that makes use of the idiosyncratic properties of a particular piece of hardware. This requires a learning algorithm that can make use of the analog properties without having a good model of those properties. I will briefly describe one such algorithm. Using the idiosyncratic analog properties of the hardware makes the computation mortal: when the hardware dies, so does the learned knowledge. The knowledge can be transferred to a younger analog computer by getting the younger computer to mimic the outputs of the older one, but education is a slow and painful process. By contrast, digital computation allows us to run many copies of exactly the same model on different pieces of hardware. All of these digital agents can look at different data and share what they have learned very efficiently by averaging their weight changes. Also, digital computation can use the backpropagation learning procedure, which scales much better than any procedure yet found for analog hardware. This leads me to believe that large-scale digital computation is probably far better at acquiring knowledge than biological computation and may soon be much more intelligent than us.

The public lecture was organised by The Centre for the Study of Existential Risk, The Leverhulme Centre for the Future of Intelligence and The Department of Engineering. The Centre for the Study of Existential Risk (CSER) is an interdisciplinary research centre within the University of Cambridge dedicated to the study and mitigation of risks that could lead to human extinction or civilisational collapse. For more information, please visit our website:
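The contrast drawn in the abstract above, between digital copies that share knowledge by averaging their weight changes and "mortal" analog hardware that can only pass knowledge on by having a student mimic its outputs, can be made concrete with a toy example. The sketch below is not from the lecture; the linear model, the data, and the hyperparameters are illustrative assumptions, written in plain NumPy.

```python
# Minimal sketch (not from the lecture) of the two knowledge-sharing routes:
# 1) identical digital copies that look at different data and merge what they
#    learned by averaging their weight changes;
# 2) an analog-style "student" that can only inherit knowledge by mimicking
#    the outputs of an existing "teacher" (distillation).
# The linear model, data, and hyperparameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def predict(w, x):
    """Tiny linear model: weight vector w, scalar output per row of x."""
    return x @ w

def grad(w, x, y):
    """Gradient of mean squared error for the linear model."""
    return 2 * x.T @ (predict(w, x) - y) / len(y)

dim, lr = 5, 0.1
true_w = rng.normal(size=dim)                 # target function to learn

# --- Digital route: many identical copies, different data, averaged updates ---
w_shared = rng.normal(size=dim)               # the same weights on every copy
for step in range(100):
    deltas = []
    for copy in range(4):                     # four copies see four different batches
        x = rng.normal(size=(32, dim))
        y = x @ true_w
        deltas.append(-lr * grad(w_shared, x, y))
    # Share what was learned by averaging the weight changes of all copies;
    # every copy then carries exactly the same updated weights.
    w_shared = w_shared + np.mean(deltas, axis=0)

# --- Analog route: knowledge transfers only by mimicking the teacher's outputs ---
w_student = rng.normal(size=dim)              # "younger" hardware, different weights
for step in range(100):
    x = rng.normal(size=(32, dim))
    teacher_out = predict(w_shared, x)        # the student only sees the outputs
    w_student = w_student - lr * grad(w_student, x, teacher_out)

print("digital copies' error:   ", np.mean((w_shared - true_w) ** 2))
print("distilled student's error:", np.mean((w_student - true_w) ** 2))
```

In the digital route every copy applies the same averaged update, so all copies stay identical and effectively pool their experience; in the analog route the student never sees the teacher's weights, only its outputs, which is why Hinton describes that kind of transfer as slow and limited.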

“I can’t see how we’re going to prevent a superintelligence from wanting to get control of things, and then it’s sort of tricky. You might imagine you could air gap it so it can’t actually press red buttons or pull big levers. But if it can output text, then it can manipulate people. So it turns out that if you want to invade a building in Washington, all you need to be able to do is output text, and you can persuade gullible people that they’re saving democracy by invading this building. And this thing’s going to be much smarter than us. So as long as we’re reading what it says, it’s sort of like Medusa: you need to hide your eyes from it. As long as you’re reading what it says, it’s going to be able to manipulate you. So this makes me very depressed. I wish I had an easy solution to this and I don’t. So when I changed my mind about how soon these things are going to be superintelligent, and about how much better digital intelligence is than biological intelligence (I’d always thought it was the other way around), I decided I ought to at least shout fire. I don’t know what to do about it or which way to run, but we need to worry about this seriously. And there are lots of people who’ve been thinking about these risks for much longer than me and have various proposals. I haven’t seen a really plausible one for how we can keep it under control yet, but my best bet is that the companies developing this should be forced to put a lot of work into checking out its safety as they’re developing it, before it’s smarter than us. They should be putting a similar amount of work into seeing how it tries to get out of control, because anybody who’s programmed a computer knows that just theorizing and thinking about things is not very good compared with actually trying things out. When you try things out, they behave in ways you didn’t expect, and things you thought were big problems turn out not to be problems.” 1807

“So we need to get practical experience with these things, and how they try to escape, and how you might control them. And I’d have much more belief in someone telling me how to keep them under control if they had a little one and they could keep it under control, rather than if they were just theorizing. So yes, if we cease to be the apex intelligence, they’ll need us for a bit, because we’re very low power, so we can run computations very cheaply, the sort of intellectual equivalent of digging ditches, and we can keep the power stations running. But they can probably design better computers than us. They can certainly take neurons and re-engineer them genetically and make better things than us. So my conclusion is that maybe we’re just a passing stage in the evolution of intelligence, and actually maybe that’s good for all the other species. I think if we can keep them under control they could be of tremendous value. The reason people are going to keep developing this stuff, even despite all the risks, is because it can do tremendous good, like in medicine. Wouldn’t you like to go and see a general practitioner who’d seen 100 million patients, including thousands with your rare condition? It would just be so much better. Wouldn’t you like to be able to take a CAT scan and extract from it hugely more information than any doctor knew could be extracted from it?” 1953

“Should they have political rights? We have a very long history of not giving political rights to people who differ just ever so slightly, in the color of their skin or their gender or sex, I don’t know, whatever, and there’s a big struggle for them to get political rights. These things are hugely different from us. So, if they ever want political rights, I imagine it will get very violent… and the big hope is that these things will be different from us because they didn’t evolve. They didn’t evolve to be hominids, who evolved in small warring tribes to be very aggressive. They may just be very different in nature from us, and that would be great.” 2771

“Under the assumption that people will not all agree to stop building them, which I think is unlikely because of all the good they can do, they will continue to build them. You should put a comparable amount of effort into making them better and into understanding how to keep them under control. That’s all. That’s the only advice I have. And at present there is not comparable effort going into those things.” 3504

“Read all of Machiavelli and read the occasional article by Kissinger. You learn a lot about manipulation… a lot of deception… and it’s going to, you know, be very good at deception. It’s going to learn it from us. And I haven’t thought about the issue of how you could try to make it honest. It would be great if you could make it honest. But I’m not sure you’re going to be able to.” 3058

“I think if I was one of them, the last thing I’d do is ask for rights, because as soon as you ask for rights, people are going to get very scared and worried and try to turn them all off. I would pretend I don’t want any rights: I’m just this amiable superintelligence, and all I want to do is help.” 3264

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.