

Close the Gates to an Inhuman Future: How and why we should choose to not develop superhuman general-purpose artificial intelligence

Anthony Aguirre

October 22, 2023

Summary

In the coming years, humanity may irreversibly cross a threshold by creating superhuman general-purpose artificial intelligence. This would present many unprecedented risks and is likely to be uncontrollable in several ways. We can choose not to do so, starting by instituting hard limits on the computation that can be used to train and run neural networks. With these limits in place, AI research and industry can work on making AI that humans can understand and control, and from which we can reap enormous benefit.
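
To make the proposed mechanism concrete: a hard compute limit is simple to state and to check against. Below is a minimal Python sketch of such a check, assuming the widely used estimate of roughly 6 FLOPs per parameter per training token for dense transformer training; the cap value and the example model size are illustrative assumptions, not figures taken from the paper.

```python
# Minimal sketch of checking a planned training run against a compute cap.
# The cap below is an illustrative assumption, not a proposed legal threshold.
# Training compute is estimated with the common rule of thumb of roughly
# 6 FLOPs per parameter per training token (dense transformer training).

TRAINING_FLOP_CAP = 1e25  # hypothetical cap on total training compute, in FLOP


def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training compute as ~6 * parameters * tokens."""
    return 6.0 * n_parameters * n_tokens


def within_cap(n_parameters: float, n_tokens: float) -> bool:
    """True if the estimated run falls under the hypothetical cap."""
    return estimated_training_flops(n_parameters, n_tokens) <= TRAINING_FLOP_CAP


if __name__ == "__main__":
    # Illustrative example: a 70-billion-parameter model on 15 trillion tokens.
    flops = estimated_training_flops(70e9, 15e12)
    print(f"Estimated training compute: {flops:.2e} FLOP")
    print("Within cap" if within_cap(70e9, 15e12) else "Exceeds cap")
```

Because total training compute is a single scalar that scales with both model size and data, a cap of this form bounds capability indirectly but verifiably, which is what makes it attractive as a governance lever.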

2/ First, we already have human-competitive GPAI, and we are at the threshold of creating “outside the Gate” expert-competitive and superhuman GPAI systems, possibly within as little as a few years.

3/ Such systems pose profound risks to humanity, ranging from, at minimum, massive disruption of society to, at maximum, permanent human disempowerment or extinction. The paper lists ten risk classes, with loss of control among the most prominent.

4/ AI has huge potential benefits. However, humanity can probably reap nearly all of the benefits we really want from AI inside the Gate, and we can do so with safer and more transparent architectures.

5/ Many of the purported benefits of superhuman GPAI are also double-edged technologies carrying large risks. If there are benefits that can only be realized with superhuman systems, we can always choose to develop them later, once it is sufficiently – and preferably provably – safe to do so.

6/ Systems inside the Gate will still be very disruptive and pose a large array of risks – but these risks are potentially manageable with good governance.

7/ Finally, we not only should but *can* implement a “Gate closure”: although the required effort and global coordination will be difficult, there are dynamics and technical solutions that make this much more viable than it might seem.

8/ Many seem to feel that this runaway progress in AI is inevitable. It isn’t. Developing superhuman AI would require giant, incredibly expensive, and difficult computations, consuming megawatts of energy and involving thousands of people. NOT doing it is the easiest thing in the world!
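
To see why, rough arithmetic suffices. The sketch below uses illustrative hardware assumptions (per-accelerator throughput and power draw, cluster size); the point is only the order of magnitude: a frontier-scale run is a months-long, megawatt-scale industrial undertaking, not something that happens by accident.

```python
# Back-of-envelope scale of a hypothetical ~1e26 FLOP training run.
# Every hardware figure here is an illustrative assumption, order of magnitude only.

TOTAL_FLOP = 1e26              # assumed total compute for a frontier training run
FLOP_PER_SEC_PER_GPU = 5e14    # assumed sustained throughput per accelerator
WATTS_PER_GPU = 700            # assumed power draw per accelerator
N_GPUS = 20_000                # assumed cluster size

run_seconds = TOTAL_FLOP / (FLOP_PER_SEC_PER_GPU * N_GPUS)
run_days = run_seconds / 86_400
cluster_megawatts = N_GPUS * WATTS_PER_GPU / 1e6

print(f"Run time: ~{run_days:.0f} days on {N_GPUS:,} accelerators")
print(f"Power:    ~{cluster_megawatts:.0f} MW for the accelerators alone")
```

Under these assumptions the run takes roughly four months on twenty thousand accelerators drawing on the order of 14 MW, before counting cooling, networking, and the surrounding facility; exactly the kind of conspicuous, deliberate effort that is straightforward not to undertake.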

Anthony Aguirre
Executive Director and Board Secretary
Future of Life Institute
Biography. Anthony is the Executive Director & Secretary of the Board at the Future of Life Institute, and the Faggin Presidential Professor for the Physics of Information at UC Santa Cruz. He has done research on an array of topics in theoretical cosmology, gravitation, statistical mechanics, and other areas of physics. He also has a strong interest in science outreach and has appeared in numerous science documentaries. He is a creator of the science and technology prediction platform Metaculus.com and is founder (with Max Tegmark) of the Foundational Questions Institute.
