NATURE. NEWS EXPLAINER. What the OpenAI drama means for AI progress — and safety.

23 November 2023.

A debacle at the company that built ChatGPT highlights concern that commercial forces are acting against the responsible development of artificial-intelligence systems.

By Nicola Jones

OpenAI fired its charismatic chief executive, Sam Altman, on 17 November — but has now reinstated him.

OpenAI — the company behind the blockbuster artificial intelligence (AI) bot ChatGPT — has been consumed by frenzied changes for almost a week. On 17 November, the company fired its charismatic chief executive, Sam Altman. Five days, and much drama, later, OpenAI announced that Altman would return with an overhaul of the company’s board.

The debacle has thrown the spotlight on an ongoing debate about how commercial competition is shaping the development of AI systems, and how quickly AI can be deployed ethically and safely.

“The push to retain dominance is leading to toxic competition. It’s a race to the bottom,” says Sarah Myers West, managing director of the AI Now Institute, a policy-research organization based in New York City.

Altman, a successful investor and entrepreneur, was a co-founder of OpenAI and its public face. He had been chief executive since 2019, and oversaw an investment of some US$13 billion from Microsoft. After Altman’s initial ousting, Microsoft, which uses OpenAI technology to power its search engine Bing, offered Altman a job leading a new advanced AI research team. Altman’s return to OpenAI came after hundreds of company employees signed a letter threatening to follow Altman to Microsoft unless he was reinstated.

The OpenAI board that ousted Altman last week did not give detailed reasons for the decision, saying at first that he was fired because he “was not consistently candid in his communications with the board” and later adding that the decision had nothing to do with “malfeasance or anything related to our financial, business, safety or security/privacy practice”.

But some speculate that the firing might have its origins in a reported schism at OpenAI between those focused on commercial growth and those uncomfortable with the strain of rapid development and its possible impacts on the company’s mission “to ensure that artificial general intelligence benefits all of humanity”.

Changing culture

OpenAI, which is based in San Francisco, California, was founded in 2015 as a non-profit organization. In 2019, it shifted to an unusual capped-profit model, with a board explicitly not accountable to shareholders or investors, including Microsoft. In the background of Altman’s firing “is very clearly a conflict between the non-profit and the capped-profit; a conflict of culture and aims”, says Jathan Sadowski, a social scientist of technology at Monash University in Melbourne, Australia.

In July, Ilya Sutskever, OpenAI’s chief scientist and a member of the board that ousted Altman, shifted his focus to ‘superalignment’, a four-year project that aims to ensure that future superintelligences work for the good of humanity.

It’s unclear whether Altman and Sutskever are at odds over the speed of development: after the board fired Altman, Sutskever expressed regret about the impacts of his actions and was among the employees who signed the letter threatening to leave unless Altman returned.

With Altman back, OpenAI has reshuffled its board: Sutskever and Helen Toner, a researcher in AI governance and safety at Georgetown University’s Center for Security and Emerging Technology in Washington DC, are no longer on the board. The new board members include Bret Taylor, who is on the board of e-commerce platform Shopify and used to lead the software company Salesforce.

It seems likely that OpenAI will shift further from its non-profit origins, says Sadowski, restructuring as a classic profit-driven Silicon Valley tech company.

Competition heats up

OpenAI released ChatGPT almost a year ago, catapulting the company to worldwide fame. The bot was based on the company’s GPT-3.5 large language model (LLM), which uses the statistical correlations between words in billions of training sentences to generate fluent responses to prompts. The breadth of capabilities that have emerged from this technique (including what some see as logical reasoning) has astounded and worried scientists and the general public alike.
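The next-word idea behind such models can be illustrated with a toy example. The sketch below is a minimal bigram model in Python, using a made-up training sentence: it picks each next word according to how often that word followed the previous one in the training text. GPT-3.5 instead uses a large neural network trained on vast corpora, but the underlying principle of predicting the next token from statistical patterns in training data is the same.

```python
import random
from collections import defaultdict

# Toy training text; real LLMs train on billions of sentences.
training_text = "the cat sat on the mat and the cat saw the dog".split()

# Count how often each word follows another (bigram statistics).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(training_text, training_text[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts.get(word)
    if not followers:              # no observed continuation: stop generating
        return None
    words, weights = zip(*followers.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation from a one-word prompt.
word = "the"
generated = [word]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    generated.append(word)
print(" ".join(generated))
```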

OpenAI is not alone in pursuing large language models, but the release of ChatGPT probably pushed others to deployment: Google launched its chatbot Bard in March 2023, the same month that an updated version of ChatGPT, based on GPT-4, was released. West worries that products are appearing before anyone has a full understanding of their behaviour, uses and misuses, and that this could be “detrimental for society”.

The competitive landscape for conversational AI is heating up. Google has hinted that more AI products lie ahead. Amazon has its own AI offering, Titan. Smaller companies that aim to compete with ChatGPT include the German effort Aleph Alpha and US-based Anthropic, founded in 2021 by former OpenAI employees, which released the chatbot Claude 2.1 on 21 November. Stability AI and Cohere are other often-cited rivals.

West notes that these start-ups rely heavily on the vast and expensive computing resources provided by just three companies — Google, Microsoft and Amazon — potentially creating a race for dominance between these controlling giants.

Safety concerns

Computer scientist Geoffrey Hinton at the University of Toronto in Canada, a pioneer of deep learning, is deeply concerned about the speed of AI development. “If you specify a competition to make a car go as fast as possible, the first thing you do is remove the brakes,” he says. (Hinton declined to comment to Nature on the events at OpenAI since 17 November.)

OpenAI was founded with the specific goal of developing an artificial general intelligence (AGI) — a deep-learning system that’s trained not just to be good at one specific thing, but to be as generally smart as a person. It remains unclear whether AGI is even possible. “The jury is very much out on that front,” says West. But some are starting to bet on it. Hinton says he used to think AGI would happen on the timescale of 30, 50 or maybe 100 years. “Right now, I think we’ll probably get it in 5–20 years,” he says.

The imminent dangers of AI stem from its use as a tool by human bad actors — people who use it to, for example, create misinformation, commit scams or, potentially, invent new bioterrorism weapons [1]. And because today’s AI systems work by finding patterns in existing data, they also tend to reinforce historical biases and social injustices, says West.

In the long term, Hinton and others worry about an AI system itself becoming a bad actor, developing sufficient agency to guide world events in a negative direction. This could arise even if an AGI was designed — in line with OpenAI’s ‘superalignment’ mission — to promote humanity’s best interests, says Hinton. It might decide, for example, that the weight of human suffering is so vast that it would be better for humanity to die than to face further misery. Such statements sound like science fiction, but Hinton argues that the existential threat of an AI that can’t be turned off and veers onto a destructive path is very real.

The AI Safety Summit hosted by the United Kingdom in November was designed to get ahead of such concerns. So far, some two dozen nations have agreed to work together on the problem, although what exactly they will do remains unclear.

West emphasizes that it’s important to focus on already-present threats from AI ahead of far-flung concerns — and to ensure that existing laws are applied to tech companies developing AI. The events at OpenAI, she says, highlight how just a few companies with the money and computing resources to feed AI wield a lot of power — something she thinks needs more scrutiny from anti-trust regulators. “Regulators for a very long time have taken a very light touch with this market,” says West. “We need to start by enforcing the laws we have right now.”

doi: https://doi.org/10.1038/d41586-023-03700-4

References

  1. Urbina, F., Lentzos, F., Invernizzi, C. & Ekins, S. Nature Mach. Intell. 4, 189–191 (2022).
