Geoffrey Hinton: the Industrial Revolution made human strength irrelevant; AI will make human intelligence irrelevant, and the wealth created by increased productivity will not go to displaced workers pic.twitter.com/kMwXEuoWvD
— Tsarathustra (@tsarnick) October 25, 2024
Geoffrey Hinton says AI companies should be forced to use one-third of their computing resources on safety research because AI will become smarter than us in the next 20 years and we need to start worrying about what happens then pic.twitter.com/ocT3Scmyxg
— Tsarathustra (@tsarnick) October 25, 2024
Nobel Prize winner Geoffrey Hinton says that AI could pose an existential threat to humankind because of the way it learns and iterates. He speaks to David Westin on “Wall Street Week.”
Transcript:

The life of artificial intelligence is connected to the lives of people and also to their livelihoods. AI is already being used to make processes more efficient. What could this mean for productivity, and therefore economic growth? And who will benefit from it? We posed those questions to Geoffrey Hinton.

It will be a wonderful thing for productivity, that's true. Whether it will be a wonderful thing for society is something else altogether. In a decent society, if you increase productivity a lot, everybody's better off. But here, what's going to happen if you increase productivity a lot? The rich and the big companies are going to get much richer, and ordinary people are probably going to be worse off because they'll lose their jobs.

AI's ability to reshape productivity could also give new life to developed economies whose productivity has stalled. U.S. productivity growth has been significantly lower in the 21st century than it was between 1930 and 2000. But for all the talk of the sweeping effects of AI, economic change has been slow to materialize so far. Goldman Sachs reports that only 5% of companies claim to use generative AI in regular production, with tech and information businesses leading the way. When wider adoption does come, Hinton anticipates it will have very different impacts on different parts of the workforce.

Many people say it'll create more jobs. For this particular thing, I'm not convinced of that. In the Industrial Revolution, we made human strength irrelevant. Now we're making human intelligence irrelevant, and that's very scary. There are some areas where demand is very elastic. An example would be health care. If I could get 10 hours a week talking to my doctor (I'm over 70), I'd be very happy. So if you take someone and make them much more efficient by having them work with a very intelligent AI, they're not going to become unemployed. It's not that there will only be a few of them; you're just going to get much more health care. Great. So in elastic areas, it's great. Then there are some areas that are less elastic. I have a niece who answers letters of complaint to a health service. She used to take 25 minutes to answer a letter. Now she can just scan the letter into a chatbot, it'll give an answer, and she'll look at it and check it. Okay, that's 5 minutes now. It may be that everybody can just complain a lot more, but I suspect they'll need fewer people like that. So some jobs are elastic, others aren't. In the non-elastic ones, I think people will lose their jobs. And what's going to happen is the extra wealth created by the increase in productivity is not going to go to them.

Hinton's prize in physics wasn't the only Nobel awarded this year for artificial intelligence. Google DeepMind scientists and a professor at the University of Washington received the Nobel in chemistry for using AI to predict the structure of millions of proteins and even to invent a whole new protein. Marty Chavez of Sixth Street says that tool, called AlphaFold, shows the power of machine learning.

As important a breakthrough as this AlphaFold work is, if you think about the problems in biology, they are so much more complicated, right?
Proteins are the basic building block of life, but then those proteins organize themselves into the organelles of a cell, and then into cells, which organize themselves into tissues, which organize themselves into organs, which go to organ systems, human beings, populations of human beings.

Beyond biology and the health sciences, generative AI could also have profound effects on the climate. For one thing, it will require a lot more electricity to drive it. Wells Fargo projects a 550% surge in AI power demand by 2026. Ireland is a poster child for the extent of that demand: its growing number of data centers used up 21% of the country's electricity in 2023.

The good news is it's not new in our history. We've been through many periods as a country where we have had significant energy demand. But what this is going to mean is that AI is now adding to it: as we electrify transportation and as we electrify our buildings, we now also have this big near-term demand for energy to power these data centers.

Brian Deese served as the director of the National Economic Council under President Biden and is now focused on studying the effects of AI on energy at MIT.

So in the near term, it creates this tension of how we bring that additional electricity online, and whether we are bringing cleaner sources or dirtier sources online, which obviously affects emissions. But the longer-term, bigger picture of where these interact is that AI is a technology which will then get deployed in lots of aspects of our economy, including our energy system. And so there are also opportunities where these two technological developments, coming at the same time, could actually be one plus one equals three, and help us find more efficient ways to innovate in the energy space.

For all the good that generative AI could potentially do for the human race, it also poses some real dangers, perhaps none so apparent as its application to armed warfare. Anja Manuel's Aspen Strategy Group published a report on this very subject.

AI, by its nature, is dual use, because it is a multipurpose technology. It's like electricity or the steam engine. It's going to power everything from the most amazing positive use cases to the really dangerous ones.

And yet, for all of my lifetime, the biggest geopolitical risk, terror even, has been the threat of thermonuclear destruction. There recently was a meeting over in South Korea with an attempt to have everybody agree that if nuclear weapons were to be used, a human had to be in that chain of command, and the countries there agreed to it, except for one: China. What do we make of that?

We are really on day one of this technology and how it can be used vis-à-vis national security. There have been a number of attempts in the last couple of years, a lot of them thankfully led by the United States, to think carefully about AI and nuclear weapons use, and about how much you would allow lethal autonomous weapons. And this means, really, you know, if you have a Predator drone, there's still a human directing it and deciding who to target and who to shoot at. In the Ukraine war, you're getting very close now to drone-on-drone dogfights, drones doing their own targeting and shooting. And very soon you have a situation that's very dangerous, where the AI weapon is always going to be faster. The incentive for everyone is to use lethal autonomous weapons because they'll always win. And then you'll have huge escalation, which is super dangerous. Lots of different people are trying to get at this in different ways.
The Korea conference was one of them. I'm not surprised that China hasn't signed on. I would just give you one example: nuclear weapons. You know, when the US came first in the nuclear race, we proposed some limits on nuclear weapons, I think as early as 1946. Then no real arms control treaties were negotiated until after you had the Cuban Missile Crisis and we got to the brink, and then we had some pretty good ones. But what was going on all the time in the background, and what is going on now, and it's super important and bears emphasizing, is that quietly, behind the scenes, there are some Chinese scientists and American and Western scientists talking about these issues. There are other track-two dialogues that are more in the policy area. I'm part of one of them: quiet conversations with the Chinese about the dangers of AI. And the Chinese interlocutors I've talked to are equally worried about very similar things to the ones you and I are talking about here.

For a long time people have realized there's going to be an arms race over who can get the best lethal autonomous weapons fastest. All of the defense departments of the people who sell arms, like the US, China, Britain, Israel, Russia, all those defense departments are busy working on lethal autonomous weapons. They're not going to stop. If one of them stopped, the others wouldn't. What we need is not to stop working on it, but to have a Geneva Convention. Now, you don't get Geneva Conventions, things like the Geneva Convention for chemical weapons, until after something very nasty has happened. So I think, realistically, very nasty things are going to happen with lethal autonomous weapons, and then maybe we'll be able to get conventions. For chemical weapons the conventions have basically worked: Putin isn't using them in Ukraine and they haven't been used much, so basically the conventions worked. I'm hoping they would work for lethal autonomous weapons, although I'm less confident. But nothing's going to happen until after some very nasty things have happened.

Whether AI works for good or for ill, a fundamental question is whether humans can maintain control, or at least influence, over it. Professor Hinton says we shouldn't assume that we can, at least for long.

As soon as we make agents, which people are busy doing, things that can act in the world, to make an effective agent you have to give it the ability to create subgoals. So if you want to get to Europe, you have a subgoal of getting to an airport, and you don't have to think about Europe while you're solving that subgoal. That's why subgoals are helpful, and these big AI systems create subgoals. Now, the problem with that is if you give something the ability to create subgoals, it will quickly realize there's one particular subgoal that's almost always useful: if I have the goal of just getting more control over the world, that will help me achieve everything I want to achieve. So these things will realize that very quickly. Even if they've got no self-interest, they'll understand that if they get more control, they'll be better at doing what we want them to do, and so they will try and get more control. That's the beginning of a very slippery slope.

But that suggests time's a-wasting, that in fact we don't have that much time.

We don't. We control it right now, but we don't have that much time to figure out how we're going to stay in control when it's smarter than us.

How do we go about that?
Let's take the United States, for example, before we go globally. In the United States, we have a government. There are good people in the government, smart people, some probably less so. But are they up to the job of really understanding what you're talking about and getting their arms around it?

I think, with the current government, they are taking it seriously. It's just very difficult to know what to do. One clear thing I believe they should be doing: we need many of the smartest young researchers to be working on this problem, and we need them to have resources. Now, the government doesn't have the resources; the big companies have the resources. The government, I think, should be insisting that the big companies spend much more of their resources on safety research, on how we will stay in control.

Compared to what they do now?

Right now they spend like a few percent on that, and nearly all their resources go into building even better, bigger models.

As you say, big companies don't like to be told by the government how to spend their money, but they often are. I mean, you have big accounting departments, for example, to comply with various regulatory requirements on accounting. If the government were to say, at least for the very largest tech firms involved, we now mandate a percentage of your revenue that will be devoted towards safety, what's the right number?

I'm not sure that's the right thing to go for. It shouldn't be a percentage of the revenue, because that's very complicated, and they can put all the revenue in some other country and cheat. The thing to go for is a fraction of their computing resources. The bottleneck here is computing resources: how many NVIDIA chips or how many Google tensor chips can you get? It should be a fraction of the computing resources, which is an easy thing to measure.

What fraction?

I think it would be perfectly reasonable to say a third. Now, that's my starting point, and I'd settle for a quarter.

From everything you know, everything I understand, is generative AI potentially an existential threat to the human species?

I think that's what I've been saying, yes. It really is an existential threat. Some people say this is just science fiction, and until fairly recently I believed it was a long way off. I always thought it would be a very long-term threat, but I thought it would be 100 years, maybe 50 years, before we had really smart things, and that we had plenty of time to think about it. Now I think it's quite likely that sometime in the next 20 years these things will get smarter than us, and we really need to worry about what happens then.
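Hinton's earlier point about subgoals can be made concrete with a small toy sketch. The Python below is purely illustrative: the goal names, the SUBGOALS table, and the decompose helper are invented for this example, and nothing here reflects how any real AI agent plans. It expands several unrelated top-level goals into subgoals and counts which subgoals recur; a generic "acquire resources/control" subgoal turns up under most of them, which is the toy version of his claim that one class of subgoals is almost always useful.

```python
# Toy illustration of the "useful subgoal" argument from the transcript.
# All goals and decomposition rules are invented for this sketch.
from collections import Counter

# Hypothetical decomposition rules: each goal maps to the subgoals a planner
# might pursue first. "acquire resources/control" is deliberately listed as a
# prerequisite of several unrelated goals.
SUBGOALS = {
    "travel to Europe": ["get to an airport", "acquire resources/control"],
    "get to an airport": ["book a taxi"],
    "cure a disease": ["run experiments", "acquire resources/control"],
    "run experiments": ["obtain lab access"],
    "win a chess game": ["develop pieces"],
    "build a company": ["hire people", "acquire resources/control"],
}

def decompose(goal, seen=None):
    """Recursively expand a goal into all the subgoals beneath it."""
    if seen is None:
        seen = set()
    if goal in seen:              # guard against cyclic rules
        return []
    seen.add(goal)
    expanded = []
    for sub in SUBGOALS.get(goal, []):
        expanded.append(sub)
        expanded.extend(decompose(sub, seen))
    return expanded

if __name__ == "__main__":
    top_level_goals = ["travel to Europe", "cure a disease",
                       "win a chess game", "build a company"]
    counts = Counter()
    for goal in top_level_goals:
        subs = decompose(goal)
        counts.update(subs)
        print(f"{goal!r} -> {subs}")

    # The shared subgoal appears under most of the unrelated top-level goals,
    # which is the (toy) sense in which "getting more control" helps an agent
    # regardless of what it is actually asked to do.
    print("\nSubgoal frequencies across goals:", dict(counts))
```

Running it prints each decomposition and then a frequency table in which "acquire resources/control" appears under three of the four goals, while the other subgoals are specific to a single goal.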