
Given Extinction Worries, Why Don’t AI Researchers Quit? Well, Several Reasons.

Daniel Eth

Jun 6, 2023

Recently, many prominent AI researchers and tech executives have been warning that AI might lead to human extinction. In response, lots of people seem pretty confused about why all these researchers and executives continue to work on AI despite this worry. Researchers actually have numerous reasons for continuing — in this piece, I’ll list several.

I’ll note that the point of this piece isn’t to defend the decision of researchers who continue to work on AI despite thinking it presents extinction risks, nor to criticize them for their decision — it’s simply to make common knowledge some things that aren’t widely known outside of tech and science circles. I think some of the following reasons are more reasonable than others, but I’ll try not to editorialize too much.

A preamble — worries about AI causing human extinction are common among AI researchers

Before I go on, we should note that the view that advanced AI might cause human extinction is not a fringe view among AI researchers themselves. This point was most clearly demonstrated by a one-sentence statement that was signed by many prominent AI researchers last week:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

A couple things stand out about this statement:

  1. Signatories are worried that AI might cause human extinction, not just “AI takeover” in a metaphorical sense like “AI taking over our jobs.”
  2. Signatories consider this extinction risk substantial enough to place it in the same category as things like “pandemics and nuclear war” — so they’re not just thinking about it as some unbelievably remote risk that’s nonetheless in theory possible, like human extinction from an asteroid impact.¹

The second point is consistent with a previous survey of top AI researchers, in which the majority of survey respondents estimated at least a 5% or 10% (depending on question phrasing) chance of existential catastrophe from AI (of course, we can’t know how those who didn’t respond to the survey felt, but the responses should suffice to show that the worry isn’t fringe).²

The signatories of this recent AI extinction letter were more-or-less a who’s who of AI, including:

  • Geoffrey Hinton and Yoshua Bengio — the two most cited AI researchers in the world, and two of the three “godfathers of AI,” known for their work on deep learning and their corresponding Turing Award (the third “godfather of AI,” Yann LeCun, disagrees with the concerns within the letter)
  • The CEOs of OpenAI, Google DeepMind, and Anthropic, the three main labs at the cutting edge of AI
  • The Chief Scientist of OpenAI and the Chief AGI Scientist of Google DeepMind, as well as further C-suite execs at all of OpenAI, Google DeepMind, and Anthropic
  • Both the CTO and the Chief Scientific Officer of Microsoft
  • Bill Gates (okay, he’s not technically an AI-guy, but still)
  • Many, many extremely impressive academic researchers — seriously, the list of professors who signed this letter is incredibly impressive, to the point that I feel awkward not naming many of them, but there are simply too many to list. In fact, the majority of signatories are university professors, which should put to rest the idea that the letter is nothing but a confusing corporate strategy by Big Tech firms.

Several of these researchers have previously publicly voiced these concerns. Hinton has been loudly sounding the alarm on AI for several weeks now, and, in his own words, “What I’ve been talking about mainly is what I call the ‘existential threat,’ which is the chance that [AI systems] get more intelligent than us, and they’ll take over from us — they’ll get control… It’s possible that there’s no way we will control these superintelligences, and… in a few hundred years’ time, there won’t be any people, it’ll all be digital intelligences.”

Bengio has noted, “a rogue AI may be dangerous for the whole of humanity, irrespective of one’s nationality. This is similar to the fear of nuclear Armageddon.”

Ilya Sutskever, co-founder and Chief Scientist of OpenAI, has said, “I would not underestimate the difficulty of alignment of [AI systems] that are actually smarter than us,” and further, “I think a good analogy would be the way humans treat animals… When the time comes to build a highway between two cities, we are not asking the animals for permission… I think it’s pretty likely that the entire surface of the Earth will be covered with solar panels and data centers.”

I could go on, but you get the point.

Okay, so then why are these researchers still working on AI (well, except for Hinton, who recently quit his job at Google³), and why aren’t CEOs pulling the plug on their own labs?

Reasons AI researchers continue, despite thinking AI might cause human extinction

Note that these reasons are not exhaustive — I’m aware of several more that I didn’t list, and I’m sure there are others that I’m not aware of. Individual researchers may subscribe to multiple of these reasons, or to only one.

And to be clear, there are also AI researchers who don’t think AI poses a risk of human extinction (I mentioned Yann LeCun above, but he’s hardly the only one) — this piece isn’t about them, as their reasons for continuing to work on AI aren’t as confusing.

Reason 1: Their specific research isn’t actually risky

The term “AI” is an incredibly broad term referring to many different types of technology, from large language models like GPT, to chess-playing software like Stockfish, to self-driving car technology. Worries about extinction from AI typically involve artificial general intelligence (AGI) — a hypothetical future type of AI that could outcompete humans generally across many domains, such as long-term planning, social persuasion, and technological development. If your AI research involves, for instance, improving self-driving car technology or applying current AI techniques to better diagnose cancer, then it is unlikely to actually accelerate us closer to AGI, and thus it may be irrelevant for extinction risk.

Some AI research even makes it more likely that if AGI is ever developed, it’ll be easier to control, and thus less likely to cause human extinction. For instance, mechanistic interpretability research tries to make AI systems more understandable (instead of being opaque black boxes), and scalable oversight research attempts to make it easier for a human overseer to provide feedback to an advanced AI system so that it may internalize goals aligned with the desires of its designers.⁴

There is absolutely nothing hypocritical about an AI researcher who pursues either research that’s not on the path to AGI or alignment research sounding the alarm about the risks of AGI. Consider if “energy researcher” were a single term covering all of: a) studying the energy released in chemical reactions, b) developing solar panels, and c) developing methods for fossil fuel extraction. In such a situation, it would not be hypocritical for someone from a) or b) to voice concerns about how c) was leading to climate change — even though they would be an “energy researcher” expressing concerns about “energy research.”

Reason 2: Belief that AGI is inevitable and more likely to go better if you personally are involved

As a piece of background context, it’s worth noting that a large portion of technologists are, to first approximation, techno-determinists in the sense of believing technological progress is basically inevitable, with regulation or other social forces only temporarily slowing down technology or moving around where it gets developed. (Whether or not this view is accurate is outside the scope of this piece.) From that assumption, many researchers conclude that AGI will get developed eventually, whether or not that’s a good thing.

Additionally, many AI researchers think there is an unstoppable race to AGI, so even if most technological development isn’t inevitable, AGI in particular might be. (Again, whether or not this “unstoppable race” framing is correct is beyond the scope of this piece — many people assume it is, though others disagree.)

And from the view that AGI is inevitable, researchers who are concerned about extinction risk may conclude that if they personally get involved, they might be able to help steer the ship in a better direction. I’ve heard several reasons for thinking this, but two are particularly common:

  • Being a voice of caution might push their lab or the field to be marginally safer — remember, while many AI researchers are concerned about extinction risk from AI, it’s not a universal concern among AI researchers, so a researcher at a top lab pushing to take the risk seriously could in principle affect how much the lab as a whole does to address it. Alternatively, a prestigious researcher pushing to take the concern seriously might make other researchers also take it seriously (Hinton and Bengio, for instance, seem to be successfully acting in this manner now, though I have no reason to assume their recent pro-safety stances were planned far ahead of time).
  • Working at a relatively more cautious or ethical lab may make that lab more likely to “win” the AGI race — AI labs as a whole vary in how generally cautious they are and how seriously they take extinction risk. An AI researcher who works for a relatively more cautious lab might reason that they’re helping that lab “win” against less cautious labs in a race to AGI, which they might think would decrease extinction risk on net. Similar arguments are also sometimes used for advancing the AI capabilities of one country over another.

Reason 3: Thinking AGI is far enough away that it makes sense to keep working on AI for now

Some researchers think AGI will eventually present an extinction risk, but not for a while. These researchers might still support alignment research so that we’re in a better spot when AGI is closer, but think that calls for slowing down or restricting AI development now are premature.

In an extreme case, someone with this view might consider worrying about extinction risk at all to be fruitless. For instance, Andrew Ng (co-founder of Coursera and former head of Baidu AI Group and Google Brain) likens worrying about extinction risk from AI to worrying about “overpopulation on Mars” — perhaps something that’ll become an issue in hundreds of years, but not for a while.

It should be noted that AI researchers are all over the map regarding how far away they think AGI is — some think AGI is less than 10 years away, some think it’ll take a few decades, and some think it’ll take over a century.

Reason 4: Commitment to science for science’s sake

Many scientists and technologists have unwavering support for “scientific progress” as a matter of principle, or otherwise feel an impulse towards advancing scientific progress, regardless of how a cost-benefit analysis of any specific scientific advance would come out.

This sentiment was neatly captured by Robert Oppenheimer’s statement, “When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb.”

Similar statements have been made by AI researchers regarding AGI. Hinton himself says that he used to paraphrase Oppenheimer’s above statement when asked how he could work on a technology so potentially dangerous. Others are drawn to the creation of AGI as an intellectual challenge.

Reason 5: Belief that the benefits of AGI would outweigh even the risk of extinction

In order to understand this viewpoint, you have to understand just how powerful many technologists expect AGI could be. Many technologists expect AGI to relatively quickly become far smarter than the smartest human, providing corresponding technological advances. This viewpoint is illustrated nicely in the hit blog Wait But Why, which is particularly popular among the tech community (note that Wait But Why uses the term “ASI,” meaning “artificial superintelligence,” for an AGI that’s far smarter than the smartest human):

In the left figure, the “Biological Range” of intelligence spans ants, chickens, non-human apes, and humans, and the intelligence difference between “ASI” and humans is shown to be far, far larger than the difference between humans and ants. The right figure illustrates how the production of ASI might appear fast, and it compares this potential rapid transition to the much slower process of biological evolution. (Images from the Wait But Why post referenced above.)

What sorts of technologies could such an intelligent AI create? Many consider that it could perform thousands of years of technological advancement within a few calendar years. A common view is that it could relatively quickly unlock advanced nanotechnology (i.e., “atomically-precise manufacturing”) — the ability to build complex objects with atomic precision. And AGI, combined with advanced nanotechnology, might allow for some particularly spectacular technologies.

Above all, many consider the possibility that AGI might enable a cure for aging — allowing those alive at the time to live for thousands or even billions of years in good health and biological youth (even if they are chronologically much older).

This sounds crazy because people typically think of aging as “inevitable,” but while it’s inevitable today, the thought goes, that’s only because we don’t have a robust treatment for it today. At base, the deterioration process in aging is simply a matter of the atoms in your body being rearranged into configurations where you become more frail and your health declines. In principle, then, a comprehensive enough understanding of the relevant processes, combined with the ability to rearrange molecules and “fix” subcellular damage, should allow for rejuvenation from the damage of aging itself (allowing, for instance, someone who is chronologically 70 years old, or even 700, to become biologically 25).

It’s not uncommon for techies to claim in public that AGI might “allow for a cure for cancer,” and then in private admit that what they’re really excited about is a potential cure for aging. This coyness isn’t generally a cynical attempt to deceive anyone — it’s just that scientists recognize that if you talk about “curing aging” in public, you’ll usually be labeled a crackpot, whereas you can talk about “curing cancer” all you want. But understanding what scientists really imagine here is key for understanding why many are willing to run an extinction risk — almost no one would seriously risk human extinction in exchange for a cure for cancer, but there are many people who would seriously risk human extinction in exchange for a fountain of youth.

The same post from Wait But Why referenced above shows the way that many who think about this issue grapple with it. To quote the author Tim Urban:

I have some weird mixed feelings going on inside of me right now.

On one hand, thinking about our species, it seems like we’ll have one and only one shot to get this right. The first ASI we birth will also probably be the last — and given how buggy most 1.0 products are, that’s pretty terrifying…

When I’m thinking about these things, the only thing I want is for us to take our time and be incredibly cautious about AI. Nothing in existence is as important as getting this right — no matter how long we need to spend in order to do so.

But thennnnnn

I think about not dying.

Not. Dying

And then I might consider… maybe we don’t need to be over-the-top cautious, since who really wants to do that?

Cause what a massive bummer if humans figure out how to cure death right after I die.

Reason 6: Belief that advancing AI on net reduces global catastrophic risks, via reducing other risks

This reason is somewhat of a variation of Reason 5. There are a few different specific lines of argument that this reason can take, but the most common are that AI is key to solving either climate change, pandemics, or the risk of nuclear war (or some combination of those 3).

AI could, for instance, help develop better solar panels or more energy-efficient materials, combating climate change. It could help develop better vaccines or other pandemic preparedness measures. (I’m somewhat less clear on how AI is supposed to help with the risk of nuclear war.)

It should perhaps be noted, though, that there are other pathways by which AI may exacerbate these risks. As AI advances, it’s expected to use more and more energy as models scale up and are deployed more widely (exacerbating climate change), to make the creation of novel viruses easier (which is the most concerning scenario for pandemics, as engineered pandemics could be much more lethal and transmissible than what we tend to see naturally), and it might even destabilize nuclear balances (basically, AI might allow for accurate enough targeting that a first strike could nullify second-strike capabilities, providing an incentive for nuclear powers to “go first” and take out their adversaries’ nuclear capabilities).

Reason 7: Belief that AGI is worth it, even if it causes human extinction

Some technologists believe that creating AGI would be of cosmic significance outweighing the extinction of humans (often rooted in the idea that this significance is due to AGI’s superior intelligence). This belief is often expressed as something along the lines of “AI will simply be the next step in evolution after humans,” said in an approving manner.

For one example of this view, the AI researcher Hugo de Garis has said, “These machines might, for whatever reason, wipe out humanity — there’s always that risk… They’ll be godlike… As [an AI researcher] myself, am I prepared to risk the extinction of the human species for the sake of building an [AGI]? Because that’s what it comes down to. Yup.”

Another alleged example of this view comes from Larry Page (co-founder and ex-CEO of Google) — both Elon Musk and MIT Physics professor Max Tegmark have claimed that Larry has dismissed concerns about extinction risk from AI, based on the idea that it would be “speciesist” to prioritize the survival of humanity over the creation of AGI (Max having made this claim in his book Life 3.0, Elon having made it in various venues, including an interview with Fox News).

Conclusion

Again — this list isn’t meant to be exhaustive; there are further reasons some researchers have for continuing to work on AI despite thinking AI poses extinction risk. (And, again, there are additional AI researchers who don’t think AI poses an extinction risk.)

We should also consider that quitting your job and leaving your field is a big and gutsy step, and day-to-day concerns could amplify inclinations to instead continue; depending on an individual’s personal situation, quitting AI research may mean financial instability or social alienation (if their social circle is primarily other AI researchers).

I want to reiterate that my point in this piece is neither to defend AI researchers for continuing to work on AI, nor to criticize them for it. But I do think there’s a gap in public understanding about why many AI researchers who think AI poses an extinction risk still continue to work in the field, and I hope this piece can help to fill that gap. You can agree or disagree with these reasons being sensible, but there’s nothing inherently unexplainable about both believing AI poses a substantial risk of causing human extinction and continuing to work on AI research.

Endnotes:

  1. Researchers consider the risk of human extinction in the next century due to an asteroid/comet impact to be about 1 in 1,000,000, which is largely based on the observation that, on Earth, mass extinctions from asteroids appear to happen every 100M years or so (roughly once per 1,000,000 centuries, implying odds on the order of 1 in 1,000,000 for any given century).
  2. A few further notes about the survey in question: a) survey participants were drawn from those who published at the prestigious AI conferences NeurIPS or ICML in 2021; b) the response rate for the survey as a whole was 17% (which isn’t terrible for this kind of survey, but obviously leaves open the possibility that nonrespondents could swing the central estimate), and a subset of the participants were given questions about existential risk; c) the survey was not primarily about existential risk, but instead about “high-level machine intelligence” in general; and d) the questions about “existential risk” did not focus exclusively on AI causing “human extinction,” but instead on AI causing “human extinction or similarly permanent and severe disempowerment of the human species.”
  3. Note — he wasn’t trying to make a negative statement about Google in particular.
  4. This category of research is typically referred to as “AI alignment research.”
