A very important read from a very respected and credible source!
On the Controllability of Artificial Intelligence: An Analysis of Limitations
Author: Roman V. Yampolskiy, University of Louisville, USA
https://doi.org/10.13052/jcsm2245-1439.1132
Keywords: AI safety, control problem, safer AI, uncontrollability, unverifiability, X-risk
Abstract
The invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid the pitfalls of such a powerful technology, it is important to be able to control it. However, the possibility of controlling artificial general intelligence and its more advanced version, superintelligence, has not been formally established. In this paper, we present arguments, as well as supporting evidence from multiple domains, indicating that advanced AI cannot be fully controlled. The consequences of the uncontrollability of AI are discussed with respect to the future of humanity, research on AI, and AI safety and security.
REFERENCES on AI boxing (containment):
[1] Yampolskiy, R.V., Leakproofing the Singularity: Artificial Intelligence Confinement Problem. Journal of Consciousness Studies, 2012.
[2] Babcock, J., J. Kramar, and R.V. Yampolskiy, The AGI Containment Problem, in The Ninth Conference on Artificial General Intelligence (AGI-16). July 16–19, 2016: NYC, USA.
[3] Armstrong, S., A. Sandberg, and N. Bostrom, Thinking inside the box: Controlling and using an oracle AI. Minds and Machines, 2012. 22(4): pp. 299–324.
[4] Babcock, J., J. Kramar, and R.V. Yampolskiy, Guidelines for Artificial Intelligence Containment. arXiv preprint arXiv:1707.08476, 2017.
[5] Pittman, J.M. and C.E. Soboleski, A Cyber Science Based Ontology for Artificial General Intelligence Containment. arXiv preprint arXiv:1801.09317, 2018.
[6] Yampolskiy, R.V., On the Controllability of Artificial Intelligence: An Analysis of Limitations. Journal of Cyber Security and Mobility, 2022. 11(3): pp. 321–404.