
The OpenAI meltdown will only accelerate the artificial intelligence race.

Optimists and ‘doomers’ are fighting over the direction of AI research – and those who want speed may have won this round.

Sarah Kreps

Wed 22 Nov 2023 11.12 CET

In November 2022, OpenAI launched ChatGPT, a consumer-facing artificial intelligence tool that could hold a conversation with users, answer questions, and generate anything from poems to computer code to health advice. The initial technology was not perfect – it would sometimes “hallucinate”, producing convincing but inaccurate information – but its potential generated enormous attention.

A year later, ChatGPT’s popularity has continued: 100 million people use it on a weekly basis, over 92% of Fortune 500 companies have adopted the technology, and several competitor firms are looking to cash in on or improve it. But that’s not why ChatGPT’s creator, OpenAI, was in the news this week. Instead, OpenAI was the center of a fierce philosophical debate about what it means to develop artificial general intelligence for the benefit of humanity.

To understand the current debate and its stakes requires going back to OpenAI’s founding in December 2015. As OpenAI’s website notes, the organization was founded as a non-profit “with the goal of building safe and beneficial artificial general intelligence for the benefit of humanity”. The company saw no public-sector paths forward and chose to follow in the footsteps of SpaceX and pursue private-sector talent and funding.

But as the capital-intensive nature of AI research and development became clear – computing power and talent are not cheap – OpenAI transitioned to a capped-profit structure. The idea was to benefit from for-profit resources while holding true to OpenAI’s founding ideals, with the non-profit board retaining full control of the for-profit subsidiary and a cap on investor returns.

Early on in that transition, some of OpenAI’s ethos appeared to change. Among other things, the company concluded that being open-source for all the world to see and exploit might not be in humanity’s best interest. The premise was plausible. One critic claimed that OpenAI went from “open source to ‘open your wallet’” – turning into a profit-driven enterprise that deviated from its founding principles – but OpenAI was right to be wary of how malicious actors could misuse its tools for misinformation, plagiarism and bots.

My own research and early academic collaboration with OpenAI showed that the technology, even in its earlier iterations, could be misused in misinformation campaigns or to disrupt democracy.

OpenAI’s CEO, Sam Altman, was himself transparent about the risks of the technology. In May 2023, while testifying on Capitol Hill, he said: “I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that [and] work with the government to prevent that from happening.” He said all the right things and seemed to mean it. He made time to meet with journalists and world leaders to hear their concerns.

In the months between May and November 2023, OpenAI’s valuation nearly tripled, from about $29bn to $80bn. Investment flooded into OpenAI, with considerable injections from Microsoft, and into competitors such as Anthropic. It was now a veritable arms race in a winner-takes-all economy.

The recent developments in the saga are somewhat unclear, but it appears the non-profit board was no longer confident that the company could pursue AI development that was both fast and safe – or at least feared that the first might compromise the second.

On Friday, the board of OpenAI released a nebulous statement saying that Sam Altman had not been “consistently candid in his communications” and that the board no longer had confidence in his leadership. A flurry of negotiations followed over the weekend, with the most notable move – a stroke of genius – coming from the Microsoft CEO, Satya Nadella, who immediately agreed to welcome Altman and members of his senior engineering team as a new independent entity within Microsoft.

These experiences highlight the challenges of trying to move slowly with a fast-moving technology. Critics of OpenAI’s short-lived interim CEO, Emmett Shear, consider him an AI “doomer” who rejects Silicon Valley optimism about AI’s positive potential. In September, Shear wrote that “if we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for 1-2 instead.” He has also praised companies for “staying well within established bounds of safety”.

He will have a hard time achieving that goal, however, based on events of the last several days. More than 700 OpenAI employees threatened to quit if the board did not step down, saying that “we are unable to work for or with people that lack competence, judgment and care for our mission and employees.” These individuals even offered to join Microsoft, a threat that would have seemed preposterous, even heretical, before last week. In other words, they made it clear where they stand on moving quickly versus proceeding more cautiously.

The dust is still settling, but Microsoft and venture capital firms are putting the full weight of their resources behind Altman and OpenAI.

As more money pours in, the AI arms race – not just across companies but across countries – will only intensify. The AI policy challenge will become harder. Microsoft has considerable law and policy experience, having come a long way since the 1990s, when Bill Gates went to Washington and derided the technical expertise of policymakers. Further, the diffusion of talent and expertise will mean more companies competing for the elusive goal of artificial general intelligence, making the race more difficult to track. Against that backdrop, advocates for slowing the race will have to find more compelling arguments to counter the resources, momentum and vision of those leading the charge.

  • Sarah Kreps is a professor of government at Cornell University and the director of the Tech Policy Institute

  • This article was amended on 22 November 2023 to reflect changing developments
