FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

“I think it’s pretty likely the entire surface of the earth will be covered with solar panels and data centers.” — OpenAI Chief Scientist Ilya Sutskever

“If you have an arms race dynamic between multiple teams trying to build the AGI first, they will have less time [to] make sure that the AGI that they will build will care deeply for humans. Because the way I imagine it is that there is an avalanche, like there is an avalanche of AGI development. Imagine you have this huge unstoppable force. And I think it’s pretty likely the entire surface of the Earth will be covered with solar panels and data centers […] The future is going to be good for the AIs regardless. It would be nice if it were good for humans as well.” (1:27:20)

Quotes from AI leaders – Stop AI

Statement from the Center for AI Safety

  • “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” (statement signed by OpenAI CEO Altman, DeepMind CEO Hassabis, Anthropic CEO Amodei, Turing Award winners Hinton and Bengio, May 2023)

OpenAI CEO Sam Altman

  • “Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.” (Sam Altman’s blog, Feb 2015)
  • “Many people seem to believe that SMI would be very dangerous if it were developed, but think that it’s either never going to happen or definitely very far off. This is sloppy, dangerous thinking.” (Sam Altman’s blog, Feb 2015)
  • “Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential.” (OpenAI’s website, Feb 2023)
  • “A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.” (OpenAI’s website, Feb 2023)
  • “The bad case — and I think this is important to say — is like lights out for all of us. … So I think it’s like impossible to overstate the importance of AI safety and alignment work. I would like to see much much more happening.” (StrictlyVC: Youtube | Transcript, Jan 2023)
  • “Unless we destroy ourselves first, superhuman AI is going to happen, genetic enhancement is going to happen, and brain-machine interfaces are going to happen.” (Sam Altman’s blog, Dec 2017)
  • “[Sam Altman] said that the opportunity with artificial general intelligence is so incomprehensibly enormous that if OpenAI manages to crack this particular nut, it could ‘maybe capture the light cone of all future value in the universe’”. (TechCrunch, May 2019)
  • Altman: “When we start a big training run, I think there could be government insight into that. And then if that can start now […] what I mean is government auditors sitting in our buildings.” (NY Mag, March 2023)

DeepMind CEO Demis Hassabis

  • “When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” (Time, Jan 2023)
  • “I would advocate not moving fast and breaking things.” (Time, Jan 2023)
  • “I don’t see any reason why that progress is going to slow down. I think it may even accelerate. So I think [AGI] could be just a few years, maybe within a decade away.” Hassabis defined AGI – artificial general intelligence – as a system with “human level cognitive abilities” (Wall Street Journal, May 2023)
  • “Maybe you’ll start collaborating with [AIs], say scientifically … eventually we could end up with [AIs] controlling a fusion power station and eventually I think one system or one set of ideas and algorithms will be able to scale across those tasks and everything in between … It will definitely be beyond what humans can do” (DeepMind: The Podcast, March 2022)

DeepMind Co-founder and Chief Scientist Shane Legg

  • “My prediction for when human level AGI … I give a log-normal distribution with a mean of 2028 and a mode of 2025, under the assumption that nothing crazy happens like a nuclear war.” (Shane Legg’s blog Vetta, Dec 2011)
  • From talk slides: “If we can build human level, we can almost certainly scale up to well above human level. A machine well above human level will understand its design and be able to design even more powerful machines. We have almost no idea how to deal with this.” (Machine Super Intelligence, Aug 2009)
  • “We’re not going to have a practical theory of friendly AI. I’ve spoken to a bunch of people … none of them, that I’ve ever spoken to, think they will have a practical theory of friendly AI in about ten years time. No way. … It’s really really hard. We have no idea how to solve this problem.” (Machine Super Intelligence, Aug 2009)
  • “Do possible risks from AI outweigh other possible existential risks…? It’s my number 1 risk for this century, with an engineered biological pathogen coming a close second” (LessWrong, Jun 2011)
  • “A lack of concrete AGI projects is not what worries me, it’s the lack of concrete plans on how to keep these safe that worries me. A massive legion is being assembled at the gate, and the best response we have come up with is an all-star debate team.” (SL4 Mailing List, Re: Safety of brain-like AGIs, March 2007)

OpenAI Chief Scientist Ilya Sutskever

  • “The future is going to be good for the AIs regardless; it would be nice if it would be good for humans as well.” (iHuman, Nov 2019)
  • “I think it’s pretty likely the entire surface of the earth will be covered with solar panels and data centers…” (iHuman, Nov 2019)
  • “It’s not that it’s going to actively hate humans and want to harm them, but it’s just going to be too powerful, and I think a good analogy would be the way humans treat animals. It’s not that we hate animals. I think humans love animals and have a lot of affection for them, but when the time comes to build a highway between two cities, we are not asking the animals for permission. We just do it because it’s important for us. And I think by default that’s the kind of relationship that’s going to be between us and AGIs which are truly autonomous and operating on their own behalf.” (iHuman, Nov 2019)

OpenAI Co-founder and CTO Greg Brockman

  • “The core danger with AGI is that it has the potential to cause rapid change. This means we could end up in an undesirable environment before we have a chance to realize where we’re even heading. The exact way the post-AGI world will look is hard to predict — that world will likely be more different from today’s world than today’s is from the 1500s. […] We do not yet know how hard it will be to make sure AGIs act according to the values of their operators. Some people believe it will be easy; some people believe it’ll be unimaginably difficult; but no one knows for sure” (Testimony of Mr. Greg Brockman: Video | Transcript, June 2018)

Google CEO Sundar Pichai

  • “We don’t have all the answers there yet – and the technology is moving fast … So does that keep me up at night? Absolutely.” (Sky News, April 2023)

Anthropic CEO Dario Amodei (previously OpenAI VP of Research)

  • “We are finding new jailbreaks. Every day people jailbreak Claude, they jailbreak the other models. […] I’m actually deeply concerned that in two or three years, we’ll get to the point where the models can, I don’t know, do very dangerous things with science, engineering, biology, and then a jailbreak could be life or death.” (Hard Fork Podcast, July 2023)
  • “We found that today’s AI systems can fill in some of these steps, but incompletely and unreliably – they are showing the first, nascent signs of risk. However, a straightforward extrapolation of today’s systems to those we expect to see in 2-3 years suggests a substantial risk that AI systems will be able to fill in all the missing pieces, if appropriate guardrails and mitigations are not put in place. This could greatly widen the range of actors with the technical capability to conduct a large-scale biological attack.” (Testimony to the US Senate, July 2023)
  • “There’s a long tail of things of varying degrees of badness that could happen. I think at the extreme end is the Nick Bostrom style of fear that an AGI could destroy humanity. I can’t see any reason in principle why that couldn’t happen.” (80,000 Hours Podcast, July 2017)

OpenAI Co-founder Elon Musk (left the board in 2018)

  • “Larry Page and I used to be close friends, and I would stay at his house in Palo Alto, and I would talk to him late into the night about AI safety. And at least my perception was that Larry was not taking AI safety seriously enough. He really seemed to want digital superintelligence, basically a digital god, if you will, as soon as possible.” (Fox, April 2023)
  • “One of the biggest risks to the future of civilization is AI” (CNBC, Feb 2023)
  • “I think we need to regulate AI safety, frankly … It is, I think, actually a bigger risk to society than cars or planes or medicine.” (CNBC, Feb 2023)
  • “Mark my words — A.I. is far more dangerous than nukes” (CNBC, March 2018)
  • Regulation “may slow down AI a little bit, but I think that that might also be a good thing,” Musk added. (CNBC, Feb 2023)
  • “I am very close to the cutting edge in AI and it scares the hell out of me,” said Musk. “It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.” (CNBC, March 2018)

Turing Award Winner Geoffrey Hinton (left Google in 2023)

  • “You can get very effective spam bots … There’s another particular thing I want to talk about, which is the existential risk – what happens when these things get more intelligent than us … Right now, they’re not more intelligent than us, as far as I can tell … given the rate of progress, we expect things to get better quite fast” (BBC, May 2023)
  • “These things are totally different from us,” he says. “Sometimes I think it’s as if aliens had landed and people haven’t realized because they speak very good English … It’s a completely different form of intelligence … A new and better form of intelligence.” (MIT Tech Review, May 2023)
  • Hinton recently quit his job at Google. “I want to talk about AI safety issues without having to worry about how it interacts with Google’s business… As long as I’m paid by Google, I can’t do that.” (MIT Tech Review, May 2023). On whether he regrets his life’s work, he says, “I console myself with the normal excuse: If I hadn’t done it, somebody else would have” (NY Times, May 2023)
  • “The idea that this stuff could actually get smarter than people — a few people believed that … But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.” (NY Times, May 2023)

On how AI could kill humans (CNN, May 2023)

  • Interviewer: “How could [AI] kill humans?”
  • Hinton: “If it gets to be much smarter than us, it’ll be very good at manipulation, because it will have learned that from us. There are very few examples of a more intelligent thing being controlled by a less intelligent thing. It’ll figure out ways of manipulating people to do what it wants.”

On why Hinton was pursuing AGI (NY Mag, Nov 2015):

  • “Hinton was saying that he did not expect [AGI] to be achieved for decades. ‘No sooner than 2070,’ he said. ‘I am in the camp that is hopeless.’
  • ‘In that you think it will not be a cause for good?’ Bostrom asked.
  • ‘I think political systems will use it to terrorize people,’ Hinton said. Already, he believed, agencies like the N.S.A. were attempting to abuse similar technology.
  • ‘Then why are you doing the research?’ Bostrom asked.
  • ‘I could give you the usual arguments, but the truth is that the prospect of discovery is too sweet.’”

Others

  • Yann LeCun, Chief AI Scientist at Meta and Turing Award winner: “There is no question that machines will become smarter than humans—in all domains in which humans are smart—in the future. It’s a question of when and how, not a question of if.” (MIT Tech Review, May 2023)
  • Yoshua Bengio, Turing Award winner: “I hear people who denigrate these fears, but I don’t see any solid argument that would convince me that there are no risks of the magnitude that Geoff thinks about” (MIT Tech Review, May 2023)
  • Stephen Hawking: “The development of full artificial intelligence could spell the end of the human race … It would take off on its own, and re-design itself at an ever increasing rate” (BBC, Dec 2014)
  • Anthropic: “So far, no one knows how to train very powerful AI systems to be robustly helpful, honest, and harmless. …The results of [rapid AI progress] could be catastrophic, either because AI systems strategically pursue dangerous goals, or because these systems make more innocent mistakes in high-stakes situations.” (Anthropic’s website, March 2023)
  • Paul Graham: “One difference between worry about AI and worry about other kinds of technologies (e.g. nuclear power, vaccines) is that people who understand it well worry more, on average, than people who don’t. That difference is worth paying attention to.” (Twitter, April 2023)
