
OpenAI. Governance of superintelligence. 22 May 2023

Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.

Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.

In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example.

We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.

A starting point

There are many ideas that matter for us to have a good chance at successfully navigating this development; here we lay out our initial thinking on three of them.

First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. There are many ways this could be implemented; major governments around the world could set up a project that many current efforts become part of, or we could collectively agree (with the backing power of a new organization like the one suggested below) that the rate of growth in AI capability at the frontier is limited to a certain rate per year.

And of course, individual companies should be held to an extremely high standard of acting responsibly.

Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say.

Third, we need the technical capability to make a superintelligence safe. This is an open research question that we and others are putting a lot of effort into.

What’s not in scope

We think it’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here (including burdensome mechanisms like licenses or audits).

Today’s systems will create tremendous value in the world and, while they do have risks, the level of those risks feels commensurate with that of other Internet technologies, and society’s likely approaches seem appropriate.

By contrast, the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar.

Public input and potential

But the governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. We believe people around the world should democratically decide on the bounds and defaults for AI systems. We don’t yet know how to design such a mechanism, but we plan to experiment with its development. We continue to think that, within these wide bounds, individual users should have a lot of control over how the AI they use behaves.

Given the risks and difficulties, it’s worth considering why we are building this technology at all.

At OpenAI, we have two fundamental reasons. First, we believe it’s going to lead to a much better world than what we can imagine today (we are already seeing early examples of this in areas like education, creative work, and personal productivity). The world faces a lot of problems that we will need much more help to solve; this technology can improve our societies, and the creative ability of everyone to use these new tools is certain to astonish us. The economic growth and increase in quality of life will be astonishing.

Second, we believe it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right.

Learn More:

  • Our approach to alignment research – OpenAI. 22 August 2022.

    • We are improving our AI systems’ ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alignment problems.

  • A.I. Needs an International Watchdog, ChatGPT Creators Say – The New York Times. 24 May 2023

  • ChatGPT maker OpenAI calls for AI regulation, warning of ‘existential risk’ – The Washington Post. 22 May 2023

    • The leaders of OpenAI, the creator of viral chatbot ChatGPT, are calling for the regulation of “superintelligence” and artificial intelligence systems, suggesting an equivalent to the world’s nuclear watchdog would help reduce the “existential risk” posed by the technology.
    • In a statement published on the company website this week, co-founders Greg Brockman and Ilya Sutskever, as well as CEO Sam Altman, argued that an international regulator would eventually become necessary to “inspect systems, require audits, test for compliance with safety standards, (and) place restrictions on degrees of deployment and levels of security.”
    • They made a comparison with nuclear energy as another example of a technology with the “possibility of existential risk,” raising the need for an authority similar in nature to the International Atomic Energy Agency (IAEA), the world’s nuclear watchdog.
    • Over the next decade, “it’s conceivable that … AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations,” the OpenAI team wrote. “In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there.”
    • The statement echoed Altman’s comments to Congress last week, where the U.S.-based company’s CEO also testified to the need for a separate regulatory body.
    • Critics have warned against trusting calls for regulation from leaders in the tech industry who stand to profit off continuing development without restraints. Some say OpenAI’s business decisions contrast with these safety warnings — as its rapid rollout has created an AI arms race, pressuring companies such as Google parent company Alphabet to release products while policymakers are still grappling with risks.
    • Few Washington lawmakers have a deep understanding of emerging technology or AI, and AI companies have lobbied them extensively, The Washington Post previously reported, as supporters and critics hope to influence discussions on tech policy.
    • Some have also warned against the risk of hampering U.S. ability to compete on the technology with rivals — particularly China.
    • The OpenAI leaders warn in their note against pausing development, adding that “it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing.”
    • In his first congressional testimony last week, Altman issued warnings on how AI could “cause significant harm to the world,” while asserting that his company would continue to roll out the technology.
    • Altman’s message of willingness to work with lawmakers received a relatively warm reception in Congress, as countries including the United States acknowledge they need to contend with supporting innovation while handling a technology that is unleashing concerns about privacy, safety, job cuts and misinformation.
    • A witness at the hearing, New York University professor emeritus Gary Marcus, highlighted the “mind boggling” sums of money at stake and described OpenAI as “beholden” to its investor Microsoft. He criticized what he described as the company’s divergence from its mission of advancing AI to “benefit humanity as a whole” without the constraints of financial pressure.
    • The popularization of ChatGPT and generative AI tools, which create text, images or sounds, has dazzled users and also added urgency to the debate on regulation.
    • At a G-7 summit on Saturday, leaders of the world’s largest economies made clear that international standards for AI advancements were a priority, but have not yet produced substantial conclusions on how to address the risks.
    • The United States has so far moved slower than others, particularly in Europe, although the Biden administration says it has made AI a key priority. Washington policymakers have not passed comprehensive tech laws for years, raising questions over how quickly and effectively they can develop regulations for the AI industry.
    • The ChatGPT makers called in the immediate term for “some degree of coordination” among companies working on AI research “to ensure that the development of superintelligence” allows for safe and “smooth integration of these systems with society.” The companies could, for example, “collectively agree … that the rate of growth in AI capability at the frontier is limited to a certain rate per year,” they said.
    • “We believe people around the world should democratically decide on the bounds and defaults for AI systems,” they added — while admitting that “we don’t yet know how to design such a mechanism.”
