The idea of OpenAI is dead. Now, Microsoft is in the driver’s seat.
Sam Altman sold the world on the notion that a nonprofit could keep AI safe. That was before he was ousted.
By Will Oremus, Julian Mark and Eli Tan
Updated November 20, 2023 at 7:19 p.m. EST | Published November 20, 2023 at 1:27 p.m. EST
OpenAI, which was reeling Monday after the ouster of CEO Sam Altman and his subsequent hire by Microsoft, was never meant to be a typical highflying tech start-up. The creator of wildly popular artificial intelligence tools such as ChatGPT and Dall-E was founded in 2015 as a nonprofit, with the stated mission of developing AI in the manner “most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”
The idea was that a nonprofit could put ethics before profits and safety before a race to develop and commercialize a technology that its founders believed could pose an existential threat to the human race.
But after Altman’s stunning ouster, followed by his apparent leap to Microsoft and a mutiny among OpenAI’s staff demanding his reinstatement, that idea is in shambles. And no one stands to benefit more than Microsoft, which can now work with Altman and potentially many of his loyalists without the constraints of OpenAI’s nonprofit board.
Microsoft CEO Satya Nadella “just pulled off a coup of his own,” Fred Havemeyer, a senior enterprise software analyst at the Macquarie financial services firm, said in a note to investors on Monday.
Under Altman, OpenAI had developed a deep partnership with Microsoft, which used the models underlying ChatGPT and Dall-E to power its own AI tools, including a chatbot built into its Bing search engine. For its part, OpenAI got billions of dollars of investment from Microsoft, including access to its vast computing resources. The astronomical expense of the computing power that cutting-edge AI systems require has been a major barrier to start-ups trying to compete with tech’s incumbents.
In 2019, OpenAI spun up a for-profit subsidiary to stimulate further investment. Still, Altman leaned on OpenAI’s nonprofit status to convince regulators around the world they could trust him as a responsible steward of AI and an ally in the push to regulate it.
All the while, OpenAI’s board thought it controlled an emergency brake on the development of AI, which it could pull the moment it sensed the technology’s commercialization was racing ahead of society’s ability to adapt to it. On Friday, for the first time, it yanked on that e-brake by publicly firing Altman — and the lever broke off in its hand.
The board announced publicly that it had lost confidence in Altman as a leader and would replace him with CTO Mira Murati, effective immediately. In a blog post, it said it had pushed Altman out because he was not “consistently candid” with some of its members.
While the motives behind that move remain murky, it reportedly was fueled at least in part by concern that Altman was prioritizing the rapid commercialization of products such as ChatGPT, GPT-4 and Dall-E 3 above the organization’s founding mission and research on AI safety.
By Monday, Altman and founding OpenAI board member Greg Brockman had been snatched up by Microsoft. And nearly all of OpenAI’s 770 employees had signed a letter threatening to quit and join them there unless the board resigned and reinstated Altman. Among them were Murati, whose appointment as Altman’s interim successor lasted two days, and Ilya Sutskever, one of the board members who had approved forcing Altman out.
“I deeply regret my participation in the board’s actions,” Sutskever tweeted Monday. “I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”
What will become of OpenAI, or what’s left of it, is not clear. There have even been suggestions Altman could make a triumphant return, though the employee letter said that board members have told “the leadership team” that “allowing the company to be destroyed ‘would be consistent with the mission.’”
What is clear, analysts say, is that the radical experiment in tech governance that OpenAI represented has backfired.
“They thought they could have it all,” said Sarah Kreps, director of the Tech Policy Institute at Cornell University. “So they could be fast and safe in developing AI. And that worked fine for a while.”
But after ChatGPT captured the world’s imagination and sparked an industry-wide race to commercialize large language models, OpenAI became “a victim of their own success,” Kreps added, reliant on Altman to keep bringing in revenue and investments so it could stay in the lead. The board may have thought it could rein things in by firing Altman, but it underestimated his popularity, she believes. “And so now you have this principled ethos of safety, but no one left to implement it.”
Working with Altman and those loyal to him without the oversight of a nonprofit board of directors may be an even more promising development for Microsoft than its initial OpenAI investment, said Adam Struck, managing partner of venture capital firm Struck Capital.
“Microsoft is in the driver’s seat, because they’ve essentially acquired all of OpenAI’s value for essentially zero. … Now, they’ve got Sam and now they’re not beholden to a 501(c)(3),” said Struck, referring to the tax designation for nonprofit organizations. “What’s scary now, though, is Sam was obviously removed for a reason. He’s now going to have no limitations at Microsoft.”
Analysts widely predicted that many OpenAI employees loyal to Altman would follow him to Microsoft, and that a brain drain of OpenAI talent into a new Microsoft unit would come with less antitrust baggage than a traditional acquisition.
“There’s never going to be an antitrust issue here because Sam was literally fired by the board,” Struck said.
In a note Monday, Wedbush analyst Dan Ives compared Microsoft’s hiring of Altman to a “World Series of Poker move for the ages,” saying the company’s already strong AI position is now stronger.
The reshuffling in the industry will not happen instantly, Deb Raji, an AI researcher and fellow at Mozilla, tweeted Monday. Altman’s jump to Microsoft could effectively result in the six-month development pause that some AI leaders sought this spring.
OpenAI might reemerge as a much smaller research organization, one that more closely follows its founding mission, mused Havemeyer, the Macquarie analyst. He added that if OpenAI loses most of its talent, a huge question remains: What happens to ChatGPT, which attracts more than 100 million users weekly?
Havemeyer said it’s possible the chatbot would be kept running on a “skeleton crew,” with resources still available through its long-term partnership with Microsoft.
“However, if ChatGPT performance degrades, we think an exodus of ChatGPT users to alternatives … or a product shipped by Mr. Altman’s new team would be likely,” he said.
Whatever happens, the notion that OpenAI’s nonprofit board could simultaneously lead the AI revolution and keep it under control has likely gone out the window.
“The personal and dramatic and inconsistent events of the past few days raise a question: Are these the people keeping us safe from AI?” said Matt Calkins, CEO of the software company Appian. “They look a lot like the rest of us. Nobody in this market is infallible.”
Learn More:
Opinion | OpenAI drama explains the human penchant for risk-taking
By Megan McArdle, Columnist
November 20, 2023
It has been not quite a year since OpenAI launched ChatGPT, sparking a million think pieces about whether humanity had just robo-signed its own death warrant. Now, it’s an open question whether the company will make it to the Nov. 30 GPT-versary as a viable firm, after a few days of astonishing corporate drama in which the board ousted Chief Executive Sam Altman, President Greg Brockman resigned, and virtually the entire staff threatened to follow them to their new jobs.
Exactly what happened remains murky, but broadly, it seems as though the people inside OpenAI were having the same debate as the rest of us: how to balance the potential benefits of AI against its risks — especially the “alignment problem” of ensuring that intelligent machines don’t become inimical to human interests.
“Altman’s dismissal by OpenAI’s board on Friday,” reports the Atlantic, “was the culmination of a power struggle between the company’s two ideological extremes — one group born from Silicon Valley techno-optimism, energized by rapid commercialization; the other steeped in fears that AI represents an existential risk to humanity and must be controlled with extreme caution.”
If you’ve read the think pieces, you know the broad outlines of the debate. Along with more pedestrian worries about various ways that AI could harm users, one side worried that ChatGPT and its many cousins might thrust humanity onto a kind of digital bobsled track, terminating in disaster — either with the machines wiping out their human progenitors or with humans using the machines to do so themselves. Once things start moving in earnest, there’s no real way to slow down or bail out, so the worriers wanted everyone to sit down and have a long think before getting anything rolling too fast.
Skeptics found all this a tad overwrought. For one thing, it left out all the ways in which AI might save humanity by providing cures for aging or solutions to global warming. And many folks thought it would be years before computers could possess anything approaching true consciousness, so we could figure out the safety part as we go. Still others were doubtful that truly sentient machines were even on the horizon; they saw ChatGPT and its many relatives as ultrasophisticated electronic parrots. Worrying that such an entity might decide it wants to kill people is a bit like wondering whether your iPhone would prefer to holiday in Crete or Majorca next summer.
OpenAI was trying to balance safety and development — a balance that became harder to maintain under the pressures of commercialization. It was founded as a nonprofit by people who professed sincere concern about keeping things safe and slow. But it was also full of AI nerds who wanted to, you know, make cool AIs.
Eventually, it became clear that building products such as ChatGPT would take more resources than a nonprofit could generate, so OpenAI set up a for-profit arm — but with a corporate structure that left the nonprofit board able to cry “stop” if things started moving too fast (or, if you prefer, gave “a handful of people with no financial stake in the company the power to upend the project on a whim”).
On Friday, those people, in a fit of whimsy, kicked Brockman off the board and fired Altman. Reportedly, the move was driven by Ilya Sutskever, OpenAI’s chief scientist, who, along with other members of the board, has allegedly clashed repeatedly with Altman over the speed of generative AI development and the sufficiency of safety precautions.
All this was astonishing enough, but what happened next was … well, no AI fiction engine could generate such a scenario. After a flurry of bad publicity and general chaos, Microsoft announced that it was hiring Altman and Brockman to lead a new advanced AI research team there.
Meanwhile, most of OpenAI’s employees signed a letter threatening to quit and join Altman at Microsoft unless the board resigned and reappointed Altman as CEO. Chief among the signatories was Sutskever, who tweeted Monday morning, “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”
This peculiar drama seems somehow very Silicon Valley, but it also offers a valuable general lesson about corporate structure and corporate culture. The nonprofit’s altruistic mission was in tension with the profit-making, AI-generating part — and when push came to shove, the profit-making part won. Oh, sure, that part might not survive as OpenAI; it might migrate to Microsoft. But a software company has little in the way of tangible assets; its people are its capital. And this capital looks willing to follow Altman to where the money is.
More broadly still, it perfectly encapsulates the AI alignment problem, which in the end is also a human alignment problem. And that’s why we are probably not going to “solve” it so much as hope we don’t have to. Humanity can’t help itself; we have kept monkeying with technology, no matter the dangers, since some enterprising hominid struck the first stone ax.
When scientists started messing with the atom, there were real worries that nuclear weapons might set Earth’s atmosphere on fire. By the time an actual bomb was exploded, scientists were pretty sure that wouldn’t happen. But if the worries had persisted, would anyone have behaved differently — knowing that it might mean someone else would win the race for a superweapon? Better to go forward and ensure that at least the right people were in charge.
Now consider Sutskever: Did he change his mind over the weekend about his disputes with Altman? More likely, he simply realized that, whatever his reservations, he had no power to stop the bobsled — so he might as well join his friends onboard. And like it or not, we’re all going with them.