THE WASHINGTON POST. Opinion. AI is at a dangerous juncture. June 7, 2023

By Robert Wright.

Robert Wright, whose books include “The Moral Animal: Evolutionary Psychology and Everyday Life” and “Nonzero: The Logic of Human Destiny,” publishes the Nonzero Newsletter and hosts the Nonzero podcast.

At last month’s Senate hearing on artificial intelligence, Sam Altman, the chief executive of OpenAI, conceded that the technological revolution his company is accelerating has its downsides. Though AI has “the potential to improve nearly every aspect of our lives,” it could also bring socially disruptive changes such as job displacement, he said — and “if this technology goes wrong, it can go quite wrong.” At a conference in January, he had gone further: “I think the worst case is lights out for all of us.”

If you ask other experts on AI to describe its likely impact, an unsettling number of them come down somewhere on the spectrum defined by Altman’s concerns — somewhere between socially destabilizing and globally catastrophic.

So it’s not surprising that there is serious talk in Washington of regulating AI. But the ideas getting the most attention — such as Altman’s suggestion that building the most powerful AIs require a federal license — aren’t enough. The AI challenge calls not just for innovative domestic policies but also for a basic reorientation of foreign policy — a change comparable in magnitude to the one ushered in by George Kennan’s 1947 “X” article in Foreign Affairs, which argued for a policy of “containing” the Soviet Union.

But this time the adversary is China, and the required redirection is toward engagement, not confrontation. In light of the AI revolution, it is in America’s vital interest to reverse the current slide toward Cold War II and draw China into an international effort to guide technological evolution responsibly.

You don’t have to buy the most catastrophic AI scenarios — such as the ones in which AIs gain power and deem humans a disposable nuisance — to believe technology is at a dangerous historical juncture. Focus instead on the socially destabilizing end of the spectrum, and imagine where present trends could lead within a few years. Whatever AI’s eventual benefits (which will be many), it is pretty much guaranteed to:

  1. Cause big waves of unemployment — and there’s reason to worry that these won’t be counterbalanced, as in past tech revolutions, by long-term job creation.
  2. Give partisan online warriors powerful new ways to stoke tribal conflict. Imagine armies of humanlike bots that not only pick fights with people on social media but also constantly refine their rhetorical tactics via feedback about which ones are most incendiary.
  3. Give malicious hackers more potent tools — hackbots that, like those political chatbots, continually refine their techniques, learning new tricks as they move from computer to computer, enlisting each conquered computer in their growing armies.
  4. Give “mind hackers” — people who aim to draw the vulnerable into dark worlds for malign purposes — new tools. Imagine bots whose mission is to recruit people to a paranoid religious cult — and, again, are constantly honing their techniques en masse.

AI poses many other such challenges, and none are insurmountable; adaptation is possible. The question is whether, with progress proceeding so rapidly, the adaptations can come fast enough to stave off chaos…

This is one reason some people support a slowdown of AI work that would allow time to prepare for its consequences. In March, prominent tech figures and others proposed a six-month moratorium on developing the next generation of large language models (such as OpenAI’s ChatGPT or Google’s Bard).

But such proposals — as well as some regulatory ideas that would slow AI progress as a side effect — are ritually met with an objection from the realm of foreign policy: What about China? If the United States pauses or even slows down while China doesn’t, won’t China get a leg up in the Cold War II race for world dominance?

This question has two answers — one for the short term and one for the longer term.

The short-term answer is: Actually, there’s a good chance that not slowing AI development would give China a leg up. If the United States is beset by AI-induced turmoil while China stays relatively stable, China will be that much stronger on the international stage; it will appear that much more worthy of emulation and that much more appealing as a long-term partner. And this comparative advantage is likely — not just because China’s authoritarian government has long prioritized stability, but also because that emphasis is reflected in its strict approach to regulating AI.

The longer-term answer to the “What about China?” question begins with an examination of its premise. The question assumes that intense struggle against China is a given, a kind of immutable law of geopolitics. It then follows that the AI challenge must take a back seat to this reality.

But isn’t it at least possible that the AI challenge is so momentous that it, not Cold War II, should be the organizing principle of U.S. foreign policy? Maybe if this grave and conceivably existential challenge can’t be dealt with amid deep international tensions, then the United States should try to lessen the tensions?

Some of the people who best understand the technology’s emerging capabilities think the answer is yes. Geoffrey Hinton, sometimes called “the godfather of AI,” fears that AI may constitute an “existential threat” and believes that responsibly governing it will be impossible in an environment of bitter competition between the United States and China.

“My one hope,” he said recently, is that the two countries will recognize that “we’re all in the same boat” and cooperate to keep AI under control.

Hinton cited as precedent nuclear arms control agreements between the United States and the Soviet Union. But there are big differences between nuclear arms and AIs. Nuclear missiles are pretty conspicuous, and their underlying technology doesn’t evolve rapidly, constantly giving rise to new mutations. So formulating a treaty and verifying compliance is easier than in the case of AI. Indeed, as history shows, nuclear arms control can be done even in a cold war atmosphere, with limited visibility into your adversary’s society.

In contrast, the effective international regulation of AI will call for fine-grained and intrusive monitoring and for periodic refinement of the rules — things much harder to reconcile with a cold war atmosphere. Ideally, there will be, in addition, the kind of organic transparency afforded by an atmosphere of economic engagement, cultural exchange and scientific collaboration.

In other words, what’s needed is a world more like that of a couple of decades ago, before America’s relations with China (and Russia) started to go downhill. That this world existed suggests that returning to it is possible, even if doing so will require sustained effort and a reordering of foreign policy priorities on both sides.

To many people involved in AI, the need for international regulation seems obvious. Google CEO Sundar Pichai has embraced the idea of an international AI treaty, and Altman has cited the International Atomic Energy Agency as a precedent for the kind of body that’s needed.

The logic behind such ideas is simple: Any country can be the origin of various kinds of bad AI effects — generated by national governments, by terrorists, by criminals, even by accidents — and any other country can be on the receiving end. AI, like a lethal virus, readily crosses borders, giving all nations a legitimate interest in conditions in all other nations, and giving all nations a common interest in guiding the technology’s evolution.

Speaking of viruses: the fact that bad actors could right now be engineering a new contagious pathogen in any number of labs around the world — including in the United States — is another sign that international governance is lagging technological evolution, and another reason the world cannot afford a new cold war. (And, in the not-too-distant future, these bad actors could use unregulated AI as a prodigious lab assistant.)

So different is AI from past technological challenges that the jury’s out on whether truly effective international regulation is possible. But, as various experts have noted, there is a way to at least slow AI progress while national leaders start an international conversation about the prospects for regulation. Training a new generation of large language models on vast bodies of text is a time-consuming, resource-intensive and relatively conspicuous undertaking. Since only a handful of companies have the wherewithal to undertake the next big wave of training, achieving a six-month (or longer) pause is possible.

What happens if we fail to prepare the geopolitical ground for such a conversation? Nothing is more likely to encourage the feverish and dangerously irresponsible development of AI than a world in which the two emerging AI superpowers, the United States and China, live in an atmosphere of low transparency and high suspicion and even fear — the Cold War environment that has been taking shape in recent years.

America survived the first Cold War intact. But that was before various technological challenges that transcend borders had gotten so formidable. Yes, there were biological weapons, but it was a lot harder back then for rogue researchers in obscure labs to invent new ones. Yes, there were satellites in orbit, but they weren’t packed together so tightly that anti-satellite weapons (which Russia, China and the United States have tested) might set off a globally devastating chain reaction sustained by space debris.

So essentially toothless cold war agreements — the 1975 Biological Weapons Convention and the 1967 Outer Space Treaty — sufficed as placeholders, as legal frameworks that could be strengthened later and meanwhile could strengthen norms. Now those frameworks are in a state of advanced obsolescence, and the AI revolution has brought another border-transcending challenge, the most formidable one yet.

Those large language models are sending us the same message other technologies have been sending for decades now: Increasingly, national security will require wise international governance.

And wise international governance will require a change of course.

LEARN MORE: Are AI luminaries too freaked out by their creation?

by Robert Wright

If you feel you’ve been spending too much time reading about the perils of artificial intelligence, this week brought good news: The latest AI warning issued by tech luminaries is the most concise ever—a mere 22 words.

The statement reads, in its entirety, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

You could generate more words than that just by listing the signatories who qualify as either AI pioneers (such as Geoffrey Hinton and Yoshua Bengio) or AI corporate titans (OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei, et al.), or a cross between the two (like Ilya Sutskever, co-founder of OpenAI and its chief scientist). All told, this statement has a significantly higher AI eminence quotient than the last big warning, the one that called for a six-month pause on training the biggest large language models (and a lower clown quotient, since Elon Musk’s name isn’t affixed this time).

As for the substance of the statement: “Extinction” is a strong word! In AI alarm circles, “extinction from AI” connotes scenarios where the AI seizes power, deems humans a pesky nuisance, and assigns them a fate warranted by that status. Is that really the AI peril that most needs the world’s attention right now?

This question was raised by Princeton computer scientist Arvind Narayanan and two co-authors on Substack. “Extinction from AI,” they wrote, “implies an autonomous rogue agent.” But “what about risks posed by people who negligently, recklessly, or maliciously use AI systems? Whatever harms we are concerned might be possible from a rogue AI will be far more likely at a much earlier stage as a result of a ‘rogue human’ with AI’s assistance.”

That’s probably true. Even an extinction-level pandemic—listed in the statement as a risk separate from AI—could result from a rogue human using AI to help engineer a new virus. And it’s easy to imagine less cinematic but collectively momentous episodes of AI-assisted wrongdoing in the near future: dispatching an army of robot hackers that disable a series of power grids or a network of satellites, or just dispatching an army of social media bots that massively trigger the opposing tribe.

So why did all these tech luminaries focus on “extinction from AI”? For one thing, that is the threat that some of them (such as Hinton) seem genuinely haunted by. Also, the earlier statement that some of them signed—the six-month pause statement—did include less exotic scenarios (“Should we let machines flood our information channels with propaganda and untruth?”… “Should we automate away all the jobs, including the fulfilling ones?”) and failed to galvanize much political action. So maybe they figured they should turn up the volume this time.

Here’s a deeply challenging two-part truth about artificial intelligence: (1) Even if AI doesn’t constitute an extinction-level risk, or even a risk very close to extinction-level, it is definitely a very big risk, when you add up all the bad consequences it could have. (2) AI is an extremely hard technology to regulate, a technology whose effective governance requires not just very creative new national laws, but very creative new international laws. A regulatory scheme that significantly constrains AI’s downside without unduly constraining its upside would be a big ask even if the world’s systems of national governance were working well. Speaking as an American, I can confidently say that at least one of them isn’t.

So I don’t blame the signatories of the latest AI warning for wanting to grab people and shake them by the shoulders.

Max Tegmark, a professor at MIT who helped organize the six-month pause proposal, has compared the AI challenge to an asteroid hurtling toward Planet Earth—a textbook example of something that gives nations cause to focus on international cooperation at the expense of policy priorities that only yesterday seemed essential.

Tegmark seems to take the rogue AI threat seriously, but you don’t need to share that concern to embrace something like the asteroid metaphor. Even the more mundane disturbances that AI will bring could add up to a kind of epic meteorological event—a storm that hits the planet and leaves no nation untouched, roiling the social fabric and shaking the foundations of major institutions.

That may not be an existential threat, but it’s enough to make you want to press pause.
