FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

AI Impacts Report January 2024

THOUSANDS OF AI AUTHORS ON THE FUTURE OF AI

38% of participants put at least a 10% chance on extremely bad outcomes (e.g., human extinction). Page-13.

  • Figure 14: 70% of respondents thought AI safety research should be prioritized more than it currently is. Page-17. 
  • The majority of respondents said that the alignment problem is either a “very important problem” (41%) or “among the most important problems in the field” (13%), and the majority said it is “harder” (36%) or “much harder” (21%) than other problems in AI. However, respondents did not generally think it is more valuable to work on the alignment problem today than on other problems. (Figure 15) Page-18.
  • Discussion. Summary of Results: Page-20
    • Participants expressed a wide range of views on almost every question: one of the clearest points of consensus is how wide open the possibilities for the future appear to be. This uncertainty is striking, but several patterns of opinion are particularly informative.
    • While the range of views on how long it will take for milestones to be feasible can be broad, this year’s survey saw a general shift towards earlier expectations. Over the fourteen months since the last survey [Grace et al., 2022], a similar participant pool expected human-level performance 13 to 48 years sooner on average (depending on how the question was phrased), and 21 out of 32 shorter term milestones are now expected earlier.
    • Another striking pattern is widespread assignment of credence to extremely bad outcomes from AI. As in 2022, a majority of participants considered AI to pose at least a 5% chance of causing human extinction or similarly permanent and severe disempowerment of the human species, and this result was consistent across four different questions, two assigned to each participant. Across these same questions, between 38% and 51% placed at least 10% chance on advanced AI bringing these extinction-level outcomes (see Figure 13).
    • In general, there were a wide range of views about expected social consequences of advanced AI, and most people put some weight on both extremely good outcomes and extremely bad outcomes. While the optimistic scenarios reflect AI’s potential to revolutionize various aspects of work and life, the pessimistic predictions—particularly those involving extinction-level risks—serve as a stark reminder of the high stakes involved in AI development and deployment.
    • Concerns extended to many topics beyond human extinction: over half of the eleven potentially concerning AI scenarios presented were deemed either “substantially” or “extremely” concerning by over half of respondents.
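The headline figures above (e.g., the share of respondents placing at least a 5% or 10% chance on extinction-level outcomes) are threshold counts over the distribution of individual credences. A minimal sketch of that tabulation, using fabricated response data rather than the actual survey dataset:

```python
# Illustrative only: `responses` is made-up data, not the survey's.
# Each value is one respondent's stated probability (0-1) of
# extinction-level outcomes from advanced AI.
responses = [0.00, 0.01, 0.05, 0.05, 0.10, 0.10, 0.15, 0.20, 0.30, 0.50]

def share_at_least(credences, threshold):
    """Fraction of respondents whose credence meets or exceeds `threshold`."""
    return sum(1 for p in credences if p >= threshold) / len(credences)

# The report's two headline thresholds: a majority at >= 5%, and a
# 38-51% range (across question framings) at >= 10% (Figure 13).
at_least_5 = share_at_least(responses, 0.05)
at_least_10 = share_at_least(responses, 0.10)

print(f"P(extinction) >= 5%:  {at_least_5:.0%} of respondents")
print(f"P(extinction) >= 10%: {at_least_10:.0%} of respondents")
```

Note that the survey asked four differently phrased versions of this question (two per participant), so the published range reflects this same count applied separately to each framing.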