LEX.
- The following is a conversation with Sam Altman, CEO of OpenAI, the company behind GPT-4, ChatGPT, DALL·E, Codex, and many other AI technologies which, both individually and together, constitute some of the greatest breakthroughs in the history of artificial intelligence, computing, and humanity in general.
- Please allow me to say a few words about the possibilities and the dangers of AI in this current moment in the history of human civilization. I believe it is a critical moment.
- We stand on the precipice of fundamental societal transformation where, soon, nobody knows when, but many, including me, believe it's within our lifetime, the collective intelligence of the human species begins to pale in comparison, by many orders of magnitude, to the general superintelligence in the AI systems we build and deploy at scale.
- This is both exciting and terrifying.
- It is exciting because of the innumerable applications, both those we know and those we don't yet know, that will empower humans to create, to flourish, to escape the widespread poverty and suffering that exist in the world today, and to succeed in that old, all-too-human pursuit of happiness.
- It is terrifying because of the power that superintelligent AGI wields to destroy human civilization, intentionally or unintentionally. The power to suffocate the human spirit in the totalitarian way of George Orwell's "1984" or the pleasure-fueled mass hysteria of "Brave New World" where, as Huxley saw it, people come to love their oppression, to adore the technologies that undo their capacities to think.
- That is why these conversations with the leaders, engineers, and philosophers, both optimists and cynics, are important now. These are not merely technical conversations about AI. These are conversations about power; about companies, institutions, and political systems that deploy, check, and balance this power; about distributed economic systems that incentivize the safety and human alignment of this power; about the psychology of the engineers and leaders that deploy AGI; and about the history of human nature, our capacity for good and evil at scale.
- I’m deeply honored to have gotten to know and to have spoken, on and off the mic, with many folks who now work at OpenAI, including Sam Altman, Greg Brockman, Ilya Sutskever, Wojciech Zaremba, Andrej Karpathy, Jakub Pachocki, and many others. It means the world that Sam has been totally open with me, willing to have multiple conversations, including challenging ones, on and off the mic.
- I will continue to have these conversations, both to celebrate the incredible accomplishments of the AI community and to steelman the critical perspective on major decisions various companies and leaders make, always with the goal of trying to help in my small way. If I fail, I will work hard to improve. I love you all.
- This is the Lex Fridman podcast. To support it, please check out our sponsors in the description.
- And now, dear friends, here’s Sam Altman. High level, what is GPT-4?
LEX. Just to linger on the alignment problem a little bit, maybe the control problem, what are the different ways you think AGI might go wrong that concern you? You said that fear, a little bit of fear, is very appropriate here. You’ve been very transparent about being mostly excited but also scared.
SAM. I think it’s weird when people think it’s like a big dunk that I say I’m a little bit afraid. I think it’d be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid.
LEX. What do you think about that moment of a system becoming super intelligent? Do you think you would know?
SAM. The current worries that I have are that there are going to be disinformation problems or economic shocks or something else at a level far beyond anything we’re prepared for. And that doesn’t require superintelligence; that doesn’t require a super deep alignment problem and the machine waking up and trying to deceive us. And I don’t think that gets enough attention. I mean, it’s starting to get more, I guess.
LEX. So, these systems, deployed at scale, can shift the winds of geopolitics and so on?
SAM. How would we know if, on Twitter, we were mostly having LLMs direct whatever’s flowing through that hive mind?
LEX. Yeah, on Twitter and then, perhaps, beyond.
SAM. And then, as on Twitter, so everywhere else, eventually.
LEX. Yeah, how would we know?
SAM. My statement is we wouldn’t and that’s a real danger.
LEX. How do you prevent that danger?
SAM. I think there are a lot of things you can try but, at this point, it is a certainty there are soon going to be a lot of capable open source LLMs with very few to no safety controls on them. And so you can try with regulatory approaches; you can try with using more powerful AIs to detect this stuff happening. I’d like us to start trying a lot of things very soon.
LEX. Under this pressure that there are going to be a lot of open source models, a lot of large language models, how do you continue prioritizing safety? I mean, there are several pressures. One of them is a market-driven pressure from other companies: Google, Apple, Meta, and smaller companies. How do you resist that pressure, or how do you navigate it?
SAM. You stick with what you believe in. You stick to your mission. You know, I’m sure people will get ahead of us in all sorts of ways and take shortcuts we’re not gonna take. And we just aren’t gonna do that.
LEX. How do you out-compete them?
SAM. I think there are gonna be many AGIs in the world, so we don’t have to, like, out-compete everyone. We’re gonna contribute one. Other people are gonna contribute some. I think having multiple AGIs in the world, with some differences in how they’re built and what they do and what they’re focused on, is good. We have a very unusual structure, so we don’t have this incentive to capture unlimited value. I worry about the people who do but, you know, hopefully it’s all gonna work out. But we’re a weird org and we’re good at resisting.
LEX. You know, AGI can make a lot more than 100X.
SAM. For sure.