LinkedIn founder Reid Hoffman says there’s a 2 in 10 chance that AI will cause human extinction. pic.twitter.com/Q8I9HJtuEU
— ControlAI (@ai_ctrl) August 30, 2024
Eliezer Yudkowsky: “there’s inevitable doom at the end of this, where if you keep on making AIs smarter and smarter, they will kill you.” pic.twitter.com/XTjWvmTqPR
— ControlAI (@ai_ctrl) August 30, 2024
Learn more: PBS NewsHour
As artificial intelligence rapidly advances, experts debate level of threat to humanity (7:17)
- 23 Aug 2024: The development of artificial intelligence is speeding up so quickly that it was addressed briefly at both the Republican and Democratic conventions. Science fiction has long theorized about the ways in which machines might one day usurp their human overlords. As the capabilities of modern AI grow, Paul Solman looks at the existential threats some experts fear and others see as hyperbole.
TRANSCRIPT:
[Reid Hoffman P(doom) = 20%]
The development of artificial intelligence is speeding up so quickly that it was addressed briefly at both political conventions. Science fiction writers and movies have long theorized about the ways machines might one day usurp their human overlords. As the capabilities of modern artificial intelligence grow, we looked at the existential threats some experts fear and others see as hyperbole.
>> There is inevitable doom at the end of this. If you keep on making AIs smarter and smarter, they will kill you.
>> He is a tech pundit and the founder of a nonprofit called the Machine Intelligence Research Institute, which explores the uses of friendly AI. Do you think everyone will die in my lifetime? In your lifetime?
>> I would wildly guess my lifetime and your lifetime.
>> We have heard this before, when the godfather of AI warned us last spring.
>> It is a threat for everyone: the Chinese, Americans, Europeans. Just like a global nuclear war was.
>> More than a century ago, this warning was dramatized. People have been imagining that the robots will become sentient and destroy us.
>> That is right. That has created an entire mythology that has played out in endless science fiction treatments.
>> Like the Terminator series.
>> They decided our fate in a microsecond.
>> That is Hollywood. Look on the bright side: robots will be everywhere soon enough, as mass production drives down their costs. Will they soon turn against us?
>> They don't want anything. They don't need anything. We design and build these things to our own specifications. That is not to say we cannot build some very dangerous machines and tools.
>> He thinks what humans do with AI is much scarier than AI on its own: creating super viruses, mega-drones, God knows what else. The big question still is, will AI bring doomsday?
>> I rate the existential threat around three or four out of 10.
>> We put this question to an avatar. What does he say?
>> I will go for two on an answer.
>> Your avatar went higher.
>> It is trying to approximate something that is a bulk of...
>> If you told me there was only a one in 10 chance of a ticking time bomb in my room, I would be out of there.
>> He started warning about rogue AI in 2005. Since then, things have gotten worse than he hoped. The thing at the end of this is that AI gets smarter than us and humanity becomes collateral damage to its own expansion.
>> They are smarter than humanity. They now want to get independent of humanity. They do not want to be running on computers that require electricity that humans need to generate. There are many more AI companies now. Some say, maybe it will wipe out humanity. I don't care about that.
>> With all the world's problems, like tribalism and climate change, you think AI is a bigger problem?
>> I think climate change is unlikely to kill literally everyone down to the last man, woman, and child. I expect leaders would prefer not to wipe out humanity. But it might not be up to them. They have to stay ahead of their competitors.
>> What does AI itself think? I asked a robot hooked up to ChatGPT.
>> The potential for harmony and enhancement exists. But vigilance is paramount.
>> I think we will be rolling the dice on that. You are saying there is a 30% chance that AI will destroy humanity. Shouldn't I be scared?
>> Your concern is valid. But 30% is not a direct prediction, rather a cautious uncertainty.
>> I'm afraid that human beings may not be capable of protecting themselves from the risks of AI.
>> That dread is profound. It is daunting. But not without hope.
Humanity's resilience has often defied expectations.
>> Perhaps it is no surprise that the actual human who created ChatGPT, Sam Altman, thinks the same.
>> I think AI will be a net good. But as with any other tool, you can do great things with a hammer or you can kill people with a hammer. We are trying to mitigate the bad.
>> Reid Hoffman thinks we can maximize the good.
>> We have climate change as a possibility. Pandemic, nuclear war. We have world war as a possibility. We have all of these existential risks. Is AI also an existential risk? Potentially. But you look at the portfolio and ask: what improves our overall portfolio? What reduces existential risk for humanity? AI adds a lot of positives in that column. It is the only way I think we can do that. It might even help us with climate change. In the net portfolio, our existential risk might go down.
>> For the sake of us all, grown-ups, children, grandchildren, let's hope he is right.