Nov. 23, 2023.

One of the nice things about OpenAI is that it was built on distrust. It began as a nonprofit research lab because its founders didn’t think artificial intelligence should be pioneered by commercial firms, which are driven overwhelmingly by the profit motive.

As it evolved, OpenAI turned into what you might call a fruitful contradiction: a for-profit company overseen by a nonprofit board with a corporate culture somewhere in between.

Many of the people at the company seem simultaneously motivated by the scientist’s desire to discover, the capitalist’s desire to ship product and the do-gooder’s desire to do this all safely.

The events of the past week — Sam Altman’s firing, all the drama, his rehiring — revolve around one central question: Is this fruitful contradiction sustainable?

Can one organization, or one person, maintain the brain of a scientist, the drive of a capitalist and the cautious heart of a regulatory agency? Or, as Charlie Warzel wrote in The Atlantic, will the money always win out?

It’s important to remember that A.I. is quite different from other parts of the tech world. It is (or at least was) more academic, with a research lineage stretching back decades. Even today, many of the giants of the field are primarily researchers, not entrepreneurs — people like Yann LeCun and Geoffrey Hinton, who won the Turing Award (the Nobel Prize of computing) together in 2018 and now disagree about where A.I. is taking us.

It’s only in the last several years that academic researchers have been leaving the university aeries and flocking to industry. Researchers at places like Alphabet, the parent company of Google; Microsoft; OpenAI; and Meta, which owns Facebook, still communicate with one another by publishing research papers, the way professors do.

But the field also has the intensity and the audacity of the hottest of all startup sectors. While talking with A.I. researchers over the past year or so, I have often felt I was on one of those airport moving walkways going three miles per hour and they were on walkways going 4,000 miles per hour. The researchers kept telling me that this phase of A.I.’s history is so exhilarating precisely because nobody can predict what will happen next. “The point of being an A.I. researcher is you should understand what’s going on. We’re constantly being surprised,” the Stanford Ph.D. candidate Rishi Bommasani told me.

The people in A.I. seem to be experiencing radically different brain states all at once. I’ve found it incredibly hard to write about A.I. because it is genuinely unknowable whether this technology is leading us to heaven or hell, and so my attitude about it shifts with my mood.

The podcaster and M.I.T. scientist Lex Fridman, who has emerged as the father confessor of the tech world, expressed the rapid-fire range of emotions I encountered again and again: “You sit back, both proud, like a parent, but almost like proud and scared that this thing will be much smarter than me. Like both pride and sadness, almost like a melancholy feeling, but ultimately joy.”

When I visited the OpenAI headquarters in May, I found the culture quite impressive. Many of the people I interviewed had arrived when OpenAI was a nonprofit research lab, before the ChatGPT hullabaloo — when most of us had never heard of the company. “My parents didn’t really know what OpenAI did,” Joanne Jang, a product manager, told me, “and they were like, ‘You’re leaving Google?’” Mark Chen, a researcher who was involved in creating the visual tool DALL-E 2, had a similar experience. “Before ChatGPT, my mom would call me like every week and she’d be like, ‘Hey, you know you can stop like bumming around and go work at Google or something.’” These people are not primarily driven by the money.

Even after ChatGPT made headlines, being at OpenAI was like being in the eye of a hurricane. “It just feels a lot calmer than the rest of the world,” Jang told me. “From like the early days, it did feel more like a research lab, because mainly we were only hiring for researchers,” Elena Chatziathanasiadou, a recruiter, told me. “And then, as we grew, it started becoming apparent to everyone that progress would come from both engineering and research.”

I didn’t meet any tech bros there, or even people who had the kind of “we are changing the world” bravado I would probably have if I were pioneering this technology. Diane Yoon, whose job title is vice president of people, told me, “The word I would use for this work force is earnest … earnestness.”

Usually when I visit a tech company, as a journalist, I get to meet very few executives, and those I do interview are remorselessly on message. OpenAI just put out a sign-up sheet and had people come to talk to me.

I confess I have a history of going into these tech workplaces with a degree of defensive humanistic snobbery: These people may know code, I tell myself, but they probably don’t know the literary and philosophical things that really matter.

I was humbled at OpenAI. Yoon grew up dancing and acting in Shakespeare productions. Nick Ryder was a mathematician at the University of California, Berkeley, with an interest in something called finite differential convolutions before he became a researcher at OpenAI. Several people mentioned a colleague on the research side who studied physics as an undergrad, went to Juilliard for two years to study piano and then got a graduate degree in neuroscience. Others told me their original academic interests had been in philosophy of mind or philosophy of language or symbolic systems. Tyna Eloundou, a member of the company’s technical staff, studied economic theory and worked at the Federal Reserve before coming to OpenAI.

As impressive as they all were, I remember telling myself: This isn’t going to last. I thought there was too much money floating around. These people may be earnest researchers, but whether they know it or not, they are still in a race to put out products, generate revenue and be first.

It was also clear that people there were torn over safety. On the one hand, safety was on everybody’s mind. For example, I asked Mark Chen about his emotions the day DALL-E 2 was released. “A lot of it was just this feeling of apprehension. Like, did we get the safety.” On the other hand, everybody I spoke to was dedicated to OpenAI’s core mission — to create artificial general intelligence, a technology capable of matching or surpassing human intelligence across a broad range of tasks.

A.I. is a field in which brilliant people paint wildly diverging yet persuasive portraits of where this is going. The venture capital investor Marc Andreessen emphasizes that it is going to change the world vastly for the better. The cognitive scientist Gary Marcus depicts an equally persuasive scenario about how all this could go wrong.

Nobody really knows who is right, but the researchers just keep plowing ahead. Their behavior reminds me of something Alan Turing wrote in 1950: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”

I had hoped that OpenAI could navigate the tensions, though even then there were worries. As Brad Lightcap, OpenAI’s chief operating officer, told me: “The big thing is really just maintaining the culture and the mission orientation as we grow. The thing that actually keeps me up, if you’re asking honestly, is how do you maintain that focus at scale.”

Those words were prescient. Organizational culture is not easily built but is easy to destroy. The literal safety of the world is wrapped up in the question: Will a newly unleashed Altman preserve the fruitful contradiction, or will he succumb to the pressures of go-go-go?