
Opinion: There’s no way for humanity to win an AI arms race. The Washington Post.

We need to change the questions we’re asking AI and the information we’re giving it.

Letters to the Editor. August 4, 2024

Regarding Sam Altman’s July 28 op-ed, “Who will control the future of AI?”:

In 2017, hundreds of artificial intelligence experts signed the Asilomar AI Principles for how to govern artificial intelligence. I was one of them. So was OpenAI CEO Sam Altman.

The signatories committed to avoiding an arms race on the grounds that “teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.”

But in Mr. Altman’s recent op-ed, he argued that the United States should accelerate trying to win just such a race at all costs, claiming American dominance of advanced AI systems is critical to preserving global democracy and freedom. Perhaps we should not be surprised. As the stakes and competition in the industry grow, caution — let alone “active cooperation” — is being thrown to the side. We can’t forget the risks that led us to the Asilomar AI Principles in the first place.

The stated goal of OpenAI is to create artificial general intelligence, a system that is as good as expert humans at most tasks. It could have significant benefits. It could also threaten millions of lives and livelihoods if not developed in a provably safe way. It could be used to commit bioterrorism, run massive cyberattacks or escalate nuclear conflict, to name just a few scenarios. Mr. Altman has characterized the danger as “lights-out for all of us.” He is not alone in this assessment.

Given these dangers, a global arms race to unleash artificial general intelligence (AGI) serves no one’s interests. Such competitions could drive participants to take drastic measures, escalate confrontation and risk nuclear annihilation. Mr. Altman quotes Russian President Vladimir Putin’s observation that the AI winner will “become the ruler of the world.” What might potential losers resort to when threatened with the emergence of an unassailable AGI-powered technological superpower?

There is little historical evidence to suggest that some sort of democratized international governance system for AGI will emerge in the midst of an arms race. Giving a handful of billionaires in Silicon Valley unchecked global power is far from a “democratic solution” either.

And should AGI actually emerge, it’s folly to assume that the world’s governments, much less a few companies, could control it for long. A system that is better than us at most things, including AI development, would quickly become vastly better than us at everything, able to improve and replicate itself without limit. Such a system is not a tool but an intelligent species — one we could not constrain or govern. Speeding toward AGI and beyond without sufficient guardrails is a suicide mission, not a competition.

Without the necessary regulation, safety standards and appropriate oversight, a catastrophic accident or misuse could render artificial intelligence radioactive for decades to come, denying us its benefits. We should learn from the Three Mile Island incident, in which a lack of precautions and training led to a partial nuclear meltdown and, with it, America’s tragic rejection of nuclear energy.

Despite enormous corporate pressure, Mr. Altman and others must remember their pledge to develop AI “for the benefit of all humanity rather than one state or organization.” They should embrace it once again not as idealism but as a necessary precondition for everything they hope to achieve — and everything they hope to prevent.

Anthony Aguirre, Santa Cruz, Calif.

The writer is executive director and board secretary of the Future of Life Institute.

---

The wrong questions about AI

The future of humanity may hinge on our ability to harness artificial intelligence for global cooperation rather than for control and conflict. Framing AI development as a zero-sum game between democratic and authoritarian regimes, as Sam Altman did in his recent op-ed, risks exacerbating existing geopolitical tensions and overlooking the transformative potential of AI as a tool for global problem-solving.

The true power of AI lies not in its capacity to cement the dominance of any single nation or ideology but in its potential to bridge divides and facilitate understanding across cultures. Just as Plato spoke of abstract “forms” underlying reality, AI might help us identify fundamental patterns in global conflicts and human behavior, leading to more profound solutions.

AI’s ability to process vast amounts of data could help identify patterns in global conflicts and suggest novel approaches to resolution that human negotiators might overlook. Advanced natural language processing could break down communication barriers, allowing for more nuanced dialogue between nations and cultures. Predictive AI models could identify early signs of potential conflicts, enabling preemptive diplomatic interventions.

Moreover, AI’s capacity to analyze and synthesize information from diverse sources presents an unprecedented opportunity to develop a cross-cultural philosophical framework. By identifying common threads and complementary ideas across different traditions (say, between the thinking of such figures as Socrates and Imam Husayn), AI could help us formulate a more universal ethical foundation for its own development and application.

This is not to say that the development of AI is without risks. Indeed, the potential for AI to exacerbate conflicts through advanced weaponry or surveillance is real and must be addressed. However, by shifting our focus from control to cooperation, we can direct AI development toward mitigating these risks rather than amplifying them.

The challenge before us is not to win an AI arms race but to ensure that AI serves the collective interests of humanity. This requires a collaborative approach that transcends national boundaries and political systems.

To achieve this, we need not to control AI but to ensure that its development aligns with globally agreed-upon ethical and meta-philosophical principles. We should ensure that diverse voices from across the globe contribute to AI research and application, and foster open dialogue about AI capabilities and limitations to build public trust and prevent misuse.

The future of AI is not predetermined. It will be shaped by the choices we make today.

Shan Rizvi, Brooklyn, N.Y.
