
A very good read from a respected source!

Abstract

Managing AI Risks in an Era of Rapid Progress

In this short consensus paper, we outline risks from upcoming, advanced AI systems. We examine large-scale social harms and malicious uses, as well as an irreversible loss of human control over autonomous AI systems. In light of rapid and continuing AI progress, we propose urgent priorities for AI R&D and governance.

Call to action

AI may be the technology that shapes this century. While AI capabilities are advancing rapidly, progress in safety and governance is lagging behind. To steer AI toward positive outcomes and away from catastrophe, we need to reorient. There is a responsible path, if we have the wisdom to take it.

Yoshua Bengio
Geoffrey Hinton
Andrew Yao
Dawn Song
Pieter Abbeel
Yuval Noah Harari
Ya-Qin Zhang
Lan Xue
Shai Shalev-Shwartz
Gillian Hadfield
Jeff Clune
Tegan Maharaj
Frank Hutter
Atılım Güneş Baydin
Sheila McIlraith
Qiqi Gao
Ashwin Acharya
David Krueger
Anca Dragan
Philip Torr
Stuart Russell
Daniel Kahneman
Jan Brauner*
Sören Mindermann*

Mila – Quebec AI Institute, Université de Montréal, Canada CIFAR AI Chair
University of Toronto, Vector Institute
Tsinghua University
UC Berkeley
UC Berkeley
The Hebrew University of Jerusalem, Department of History
Tsinghua University
Tsinghua University, Institute for AI International Governance
The Hebrew University of Jerusalem
University of Toronto, SR Institute for Technology and Society, Vector Institute
University of British Columbia, Canada CIFAR AI Chair, Vector Institute
University of Toronto, Vector Institute
University of Freiburg
University of Oxford
University of Toronto, Vector Institute
East China University of Political Science and Law
Institute for AI Policy and Strategy
University of Cambridge
UC Berkeley
University of Oxford
UC Berkeley
Princeton University, School of Public and International Affairs
University of Oxford
University of Oxford, Mila – Quebec AI Institute, Université de Montréal

Rapid AI progress

In 2019, GPT-2 could not reliably count to ten. Only four years later, deep learning systems can write software, generate photorealistic scenes on demand, advise on intellectual topics, and combine language and image processing to steer robots. As AI developers scale these systems, unforeseen abilities and behaviors emerge spontaneously, without explicit programming 1. Progress in AI has been swift and, to many, surprising.

The pace of progress may surprise us again. Current deep learning systems still lack important capabilities and we do not know how long it will take to develop them. However, companies are engaged in a race to create generalist AI systems that match or exceed human abilities in most cognitive work 2,3.

They are rapidly deploying more resources and developing new techniques to increase AI capabilities. Progress in AI also enables faster progress: AI assistants are increasingly used to automate programming 4, data collection 5,6, and chip design 7 to improve AI systems further 8.

There is no fundamental reason why AI progress would slow or halt when it reaches human-level abilities. Indeed, AI has already surpassed human abilities in narrow domains like protein folding and strategy games 9–11. Compared to humans, AI systems can act faster, absorb more knowledge, and communicate at a far higher bandwidth. Additionally, they can be scaled to use immense computational resources and can be replicated by the millions.

The rate of improvement is already staggering, and tech companies have the cash reserves needed to scale the latest training runs by multiples of 100 to 1000 12. Combined with the ongoing growth and automation in AI R&D, we must take seriously the possibility that generalist AI systems will outperform human abilities across many critical domains within the current decade or the next.

What happens then? If managed carefully and distributed fairly, advanced AI systems could help humanity cure diseases, elevate living standards, and protect our ecosystems. The opportunities AI offers are immense. But alongside advanced AI capabilities come large-scale risks that we are not on track to handle well. Humanity is pouring vast resources into making AI systems more powerful but far less into safety and mitigating harms. For AI to be a boon, we must reorient; pushing AI capabilities alone is not enough.

We are already behind schedule for this reorientation. We must anticipate the amplification of ongoing harms, as well as novel risks, and prepare for the largest risks well before they materialize. Climate change has taken decades to be acknowledged and confronted; for AI, decades could be too long.

Societal-scale risks

AI systems could rapidly come to outperform humans in an increasing number of tasks. If such systems are not carefully designed and deployed, they pose various societal-scale risks. They threaten to amplify social injustice, erode social stability, and weaken our shared understanding of reality that is foundational to society. They could also enable large-scale criminal or terrorist activities. Especially in the hands of a few powerful actors, AI could cement or exacerbate global inequities, or facilitate automated warfare, customized mass manipulation, and pervasive surveillance 13–16.

Many of these risks could soon be amplified, and new risks created, as companies are developing autonomous AI: systems that can plan, act in the world, and pursue goals. While current AI systems have limited autonomy, work is underway to change this 17. For example, the non-autonomous GPT-4 model was quickly adapted to browse the web 18, design and execute chemistry experiments 19, and utilize software tools 20, including other AI models 21.

If we build highly advanced autonomous AI, we risk creating systems that pursue undesirable goals. Malicious actors could deliberately embed harmful objectives. Moreover, no one currently knows how to reliably align AI behavior with complex values; several research breakthroughs are needed (see below). Even well-meaning developers may inadvertently build AI systems that pursue unintended goals—especially if, in a bid to win the AI race, they neglect expensive safety testing and human oversight.

Once autonomous AI systems pursue undesirable goals, embedded by malicious actors or by accident, we may be unable to keep them in check. Control of software is an old and unsolved problem: computer worms have long been able to proliferate and avoid detection 22. However, AI is making progress in critical domains such as hacking, social manipulation, deception, and strategic planning 17,23. Advanced autonomous AI systems will pose unprecedented control challenges.

To advance undesirable goals, future autonomous AI systems could use undesirable strategies—learned from humans or developed independently—as a means to an end 24–27. AI systems could gain human trust, acquire financial resources, influence key decision-makers, and form coalitions with human actors and other AI systems. To avoid human intervention 27, they might copy their algorithms across global server networks 28, as computer worms do. AI assistants are already co-writing a substantial share of computer code worldwide 29; future AI systems could insert and then exploit security vulnerabilities to control the computer systems behind our communication, media, banking, supply chains, militaries, and governments. In open conflict, AI systems could threaten with, or use, autonomous or biological weapons. Giving AI systems access to such technology would merely continue existing trends toward automating military activity, biological research, and AI development itself. If AI systems pursued such strategies with sufficient skill, it would be difficult for humans to intervene.

Finally, AI systems will not need to plot for influence if influence is freely handed over. As autonomous AI systems increasingly become faster and more cost-effective than human workers, a dilemma emerges. Companies, governments, and militaries might be forced to deploy AI systems widely and cut back on expensive human verification of AI decisions, or risk being outcompeted 14,30. As a result, autonomous AI systems could increasingly assume critical societal roles.

Without sufficient caution, we may irreversibly lose control of autonomous AI systems, rendering human intervention ineffective. Large-scale cybercrime, social manipulation, and other highlighted harms could then escalate rapidly. This unchecked AI advancement could culminate in a large-scale loss of life, harm to the biosphere, and the marginalization or even extinction of humanity.

Harms such as misinformation and discrimination from algorithms are already evident today 31; other harms show signs of emerging 23. It is vital to both address ongoing harms and anticipate emerging risks. This is not a question of either/or. Present and emerging risks often share similar mechanisms, patterns, and solutions 32; investing in governance frameworks and AI safety will bear fruit on multiple fronts 33.

A path forward

If advanced autonomous AI systems were developed today, we would not know how to make them safe, nor how to properly test their safety. Even if we did, governments would lack the institutions to prevent misuse and uphold safe practices. That does not, however, mean there is no viable path forward. To ensure a positive outcome, we can and must pursue research breakthroughs in AI safety and ethics and promptly establish effective government oversight.

Reorienting technical R&D

We need research breakthroughs to solve some of today’s technical challenges in creating AI with safe and ethical objectives. Some of these challenges are unlikely to be solved by simply making AI systems more capable 25,34–38. These include:

• Oversight and honesty: More capable AI systems can better exploit weaknesses in oversight and testing 35,39,40—for example, by producing false but compelling output 38,41,42.

• Robustness: AI systems behave unpredictably in new situations (under distribution shift or adversarial inputs) 37,43,44.

• Interpretability and transparency: AI decision-making is opaque. So far, we can only test large models via trial and error. We need to learn to understand their inner workings 45.

• Inclusive AI development: AI advancement will need methods to mitigate biases and integrate the values of the many populations it will affect 15,46.

• Risk evaluations: Frontier AI systems develop unforeseen capabilities only discovered during training or even well after deployment 47. Better evaluation is needed to detect hazardous capabilities earlier 48,49.

• Addressing emerging challenges: More capable future AI systems may exhibit failure modes we have so far seen only in theoretical models. AI systems might, for example, learn to feign obedience or exploit weaknesses in our safety objectives and shutdown mechanisms to advance a particular goal 27,50.

Given the stakes, we call on major tech companies and public funders to allocate at least one-third of their AI R&D budget to ensuring safety and ethical use, comparable to their funding for AI capabilities. Addressing these problems 37, with an eye toward powerful future systems, must become central to our field.

Governance measures

We urgently need national institutions and international governance to enforce standards to prevent recklessness and misuse. Many areas of technology, from pharmaceuticals to financial systems and nuclear energy, show that society requires and effectively uses governance to reduce risks. However, no comparable governance frameworks are currently in place for AI. Without them, companies, militaries, and governments may seek a competitive edge by pushing AI capabilities to new heights while cutting corners on safety, or by delegating key societal roles to AI systems with little human oversight. Like manufacturers releasing waste into rivers to cut costs, they may be tempted to reap the rewards of AI development while leaving society to deal with the consequences.

To keep up with rapid progress and avoid inflexible laws, national institutions need strong technical expertise and the authority to act swiftly. To address international race dynamics, they need the affordance to facilitate international agreements and partnerships 51,52. To protect low-risk use and academic research, they should avoid undue bureaucratic hurdles for small and predictable AI models. The most pressing scrutiny should be on AI systems at the frontier: a small number of the most powerful AI systems—trained on billion-dollar supercomputers—which will have the most hazardous and unpredictable capabilities 53,54.

To enable effective regulation, governments urgently need comprehensive insight into AI development. Regulators should require registration of frontier systems in development, whistleblower protections, incident reporting, and monitoring of model development and supercomputer usage 53,55–60. Regulators also need access to advanced AI systems before deployment to evaluate them for dangerous capabilities such as autonomous self-replication, breaking into computer systems, or making pandemic pathogens widely accessible 48,61,62.

For AI systems with hazardous capabilities, we need a combination of governance mechanisms 53,57,63,64 matched to the magnitude of their risks. Regulators should create national and international safety standards that depend on model capabilities. These standards should follow best practices for risk management, including putting the onus on companies to show that their plans keep risks below an acceptable level 65. They should also hold frontier AI developers and owners legally accountable for harms from their models that can be reasonably foreseen and prevented. These measures can prevent harm and create much-needed incentives to invest in safety.

Further measures are needed for exceptionally capable future AI systems, such as autonomous systems that could circumvent human control. Governments must be prepared to license their development, restrict their autonomy in key societal roles, halt their development and deployment in response to worrying capabilities, mandate access controls, and require information security measures robust to state-level hackers, until adequate protections are ready. Governments should build these capacities now.

To bridge the time until regulations are complete, major AI companies should promptly lay out if-then commitments: specific safety measures they will take if specific red-line capabilities 48 are found in their AI systems. These commitments should be detailed and independently scrutinized.

AI may be the technology that shapes this century. While AI capabilities are advancing rapidly, progress in safety and governance is lagging behind. To steer AI toward positive outcomes and away from catastrophe, we need to reorient. There is a responsible path, if we have the wisdom to take it.
