Refuting the “AI Doomers”: We Have More Agency Than We Think
And another topic I wanted to talk about was the mathematical universe. The easy stuff. So there are three, but we don’t have time for all three. If you could think of a way to tie them all together, then feel free, like a gymnast or a juggler. But otherwise, I would like to end on the advice from your parents.

Okay. Well, the whole reason I spent so many years thinking about whether we are all part of a mathematical structure, and whether our universe actually is mathematical rather than just described by mathematics, is, of course, because I listened to my parents. I got so much shit for that. And I just felt, no, I think I’m going to do this anyway, because to me it makes logical sense. I’m going to put the ideas out there.

And then in terms of misconceptions about me, one misconception is that somehow I don’t believe that being falsifiable is important for science. As I talked about earlier, I’m totally on board with this. And I actually argue that if you have a predictive theory about anything—gravity, consciousness, et cetera—that means you can falsify it. So that’s one. And another one, probably the one I get most now because I’ve stuck my neck out a bit about AI and the idea that the brain is a biological computer and that we’re likely to be able to build machines we could totally lose control over, is that some people like to call me a doomer, which is of course just something they say when they’ve run out of arguments. It’s like calling someone a heretic or whatever. So what I would like to correct about that is that I actually feel quite optimistic. I’m not a pessimistic person. I think there’s way too much pessimism floating around about humanity’s potential. One example is people saying, “Oh, we can never make any more progress on consciousness.” We totally can, if we stop telling ourselves that it’s impossible and actually work hard. Some people say, “Oh, we can never figure out more about the nature of time and so on unless we can detect gravitons or whatever.” We totally can. There’s so much progress we can make if we’re willing to work hard.

And in particular, I think the most pernicious kind of pessimism we suffer from now is this meme that it’s inevitable that we are going to build superintelligence and become irrelevant. It is absolutely not inevitable. But if you tell yourself that something is inevitable, it becomes a self-fulfilling prophecy, right? This is like convincing a country that’s just been invaded that it’s inevitable that they’re going to lose the war if they fight. It’s the oldest psyop game in town, right? So of course, if someone has a company and they want to build stuff, and they don’t want you to have any laws that make them accountable, they have an incentive to tell everybody, “Oh, it’s inevitable that this is going to get built, so don’t fight it. It’s inevitable that humanity is going to lose control over the planet, so just don’t fight it. And hey, buy my new product.” It’s absolutely not inevitable. People say it’s inevitable, for example, because they claim people will always build technology that can give you money and power. That’s just factually incorrect. You’re a really smart guy. If I could clone you and start selling a million copies of you on the black market, I could make a ton of money. We decided not to do that, right? They say, “Oh, if we don’t do it, China’s going to do it.” Well, there was actually one guy who did human cloning in China.
And you know what happened to him? No. He was sent to jail by the Chinese government. Oh, okay. People just didn’t want that. They thought we could lose control over the human germline and our species. “Let’s not do it.” So there is no human cloning happening now. We could have gotten a lot of military power with bioweapons. Then Professor Matthew Meselson at Harvard said to Richard Nixon, “We don’t want there to be a weapon of mass destruction that’s so cheap that all our adversaries can afford it.” And Nixon was like, “Huh, that makes sense, actually.” And then Nixon used that argument on Brezhnev and it worked, and we got a bioweapons ban. And now people associate biology mostly with curing diseases, not with building bioweapons. So it’s absolute BS, this idea that we’re always going to build any technology that can give power or money to some people. We have much more control over our lives and our futures than some people like to tell us. We are much more empowered than we think. I mentioned that if we were living in a cave 30,000 years ago, we might’ve made the same mistake and thought we were doomed to always be at risk of getting eaten by tigers and starving to death. That was too pessimistic. We had the power, through our thinking, to develop a wonderful society and technology where we could flourish. And it’s exactly the same way now. We have enormous power.

What most people actually want to make money on with AI is not some kind of sand god that we don’t know how to control. It’s tools, AI tools. People want to cure cancer. People want to make their businesses more efficient. Some people want to make their armies stronger, and so on. You can do all of those things with tool AI that we can control. And this is something we work on in my group, actually. And that’s what people really want. And there are a lot of people who do not want to just say, “Okay, yeah, it’s been a good run, hundreds of thousands of years, we had science and all that, but now let’s just hand over the keys to Earth to some alien minds whose goals we don’t even understand.” Most Americans in polls think that’s just a terrible idea, Republicans and Democrats alike. There was an open letter by evangelicals in the U.S. to Donald Trump saying, “We want AI tools. We don’t want some sort of uncontrollable superintelligence.” The Pope has recently said he wants AI to be a tool, not some kind of master. You have people from Bernie Sanders to Marjorie Taylor Greene coming out on Twitter saying, “We don’t want Skynet. We don’t want to just make humans economically obsolete.” So it’s not inevitable at all. And if we can just remember that we have so much agency over what we do and what kind of future we’re going to build, if we can be optimistic and think through a really inspiring, globally shared vision for not just curing cancer but all the other great stuff we can do, then we can totally collaborate and build that future.

The audience member now is listening. They’re a researcher, young or old. They have something they would like to achieve that seems extremely unlikely, something their colleagues criticize them for even proposing. And it’s nothing nefarious, just something they find interesting and maybe beneficial to humanity. Whatever. What is your advice?

Two pieces of advice.
First of all, about half of all the greatest breakthroughs in science were actually trash-talked at the time. So just because someone says your idea is stupid doesn’t mean it is stupid. You should be willing to abandon your own ideas if you can see a flaw, and you should listen to criticism of them. But if you feel you really understand the logic of your ideas better than anyone else and they make sense to you, then keep pushing them forward.

And the second piece of advice: you might worry, like I did when I was in grad school, that if you only work on stuff your colleagues think is bullshit—like thinking about the many-worlds interpretation of quantum mechanics, the idea that there are multiverses—then your next job is going to be at McDonald’s. Then my advice is to hedge your bets. Spend enough time working on things that your peers appreciate now, so that you can pay your bills and your career keeps moving forward. But carve out a significant chunk of your time to do what you’re really passionate about in parallel. If people don’t get it, well, don’t tell them about it at the time. That way you’re doing science for the only good reason, which is that you’re passionate about it. And it’s a fair deal to society to then do a little bit of chores in return to pay your bills. That’s a great way of viewing it. And it’s been quite shocking for me to see how many of the things that I got most criticized for, or was most afraid of talking openly about when I was a grad student, even papers that I didn’t show my advisor until after he signed my PhD thesis, have later actually been picked up quite widely. And the things that I feel have been most impactful were generally in that category. You’re never going to be the first to do something important if you’re just following everybody else.