“I’m on team human. I’m going to fight for the right of my one-year-old son to have a meaningful future, even if some digital eugenics dude feels that his robots are somehow more worthy.” — Max Tegmark
Max Tegmark says more was invested in AI in the last year in the US alone than in the 5 years of the Manhattan Project combined, and recent progress is collapsing timelines to AGI on prediction markets pic.twitter.com/wfrab0pMwt
— Tsarathustra (@tsarnick) December 21, 2024
Has big tech distracted the world from the existential risk of AI? Or can AI be the answer to problems we face daily? There is still a lot to learn about the potential of this accelerating technology, but are we ready for what it could be capable of? Learn more about what can be done to make AI safe: https://futureoflife.org/safety/

“Is AI the answer or the problem?” delivered on 12 November 2024 by Max Tegmark on Centre Stage at WebSummit 2024. Fireside chat hosted by Parmy Olson, Technology Columnist at Bloomberg.
TRANSCRIPT. Boa tarde! I am an optimist, and I’m going to argue that we can create an amazingly inspiring future with tool AI, as long as we don’t build AGI, which is unnecessary, undesirable, and preventable. Now, I want you to raise your hands if you want AI tools that can help us cure diseases and solve problems. That is a lot of hands. Now, raise your hand if you instead want AI that just makes us economically obsolete and replaces us. I can’t see a single hand. Now, there are gloomers who argue that we can’t have the AI that you wanted without also getting the AI that you don’t want. But I’m going to argue that, yes, we can. The AI that you didn’t vote for in your show of hands is AGI: smarter-than-human AI that outperforms us at virtually all tasks, which includes tasks like building smarter robots that can build robot factories, that can make more robots, and that could turn into a new digital species we lose control over. Why am I so optimistic, then? Well, imagine for a moment that someone tells you, Hey, we need to have completely unregulated biotech with human cloning, eugenics, and gain-of-function research we could lose control over; otherwise, you can’t have any biotech at all, and you have to die of stupid, preventable diseases like smallpox. What would you say? Well, you would call bullshit on this, because you know that if we have legally mandated safety standards, then companies will innovate and meet them and give you safe medicines. In the same way, if someone tells you, Hey, we need to have completely unregulated AGI, otherwise you can’t have any of the exciting tool AI that you’re excited about here at WebSummit, you would call BS on that, too. You should. Because if we have legally mandated safety standards saying that AI has to be controllable, companies will innovate to give you those tools. But, you might say, That sounds good, but surely AGI is just sci-fi and decades away, right? Well, we used to think that, pretty recently, because AI used to suck, pretty recently. But welcome to 2024. It’s really amazing how fast AI has improved. Robots can now not only dance, but fold your laundry. Gen AI went from this to this in just one year with the same prompt. As recently as six years ago, most of my AI colleagues thought that AI that could master language and knowledge as well as ChatGPT-4 was decades away. They were, of course, all wrong, because we already have it. And arguably, we’ve already passed the Turing test. If you look into the future, the momentum is just amazing. In the US alone, we invested much more in AI last year than in the five years of the Manhattan Project combined, inflation adjusted. This is collapsing the timelines on prediction markets from thinking AGI was decades away to just years away. Tech CEOs like Dario Amodei of Anthropic guess a couple of years. Sam Altman says this: “This is the first time ever where I felt like we actually know what to do. I think from here to building an AGI will still take a huge amount of work. There are some known unknowns, but I think we basically know what to go do.” I disagree with Sam on some important things, but I totally agree with him on this. We have to stop being confident that AGI is some kind of long-term thing, or we might get accused of being dinosaurs stuck in 2021. All right, but you may say, Isn’t AGI necessary to do all the cool stuff with AI that we talk about here at WebSummit? No, that’s a myth.
I’m going to argue that AGI is actually unnecessary for all the things I’ve heard you talk about here so far at the meeting. Here are some examples. Tool AI can save up to a million lives per year on the roads of the world by preventing accidents, without AGI. Tool AI can save even more lives in hospitals, without AGI. And tool AI can give us almost free, amazing diagnosis of prostate cancer, lung cancer, eye diseases, you name it, without AGI. Tool AI can help us fold proteins and develop amazing new medicines, and even win you the Nobel Prize, without AGI. Tool AI can give us great improvements for pandemic prevention, for reducing power consumption, for improving and democratizing education, and for transforming basically every other sector of the economy, without AGI. Tool AI can help us accomplish the United Nations Sustainable Development Goals much faster, without AGI. So no, AGI is not necessary, at least not right now. But you might say, It’s surely controllable. So why don’t you just shut up, Max, get off stage and stop worrying? Well, it is not controllable. We have no idea how to control it. You don’t have to take my word for it. “Once these artificial intelligences get smarter than we are, they will take control. They’ll make us irrelevant, and that’s quite worrying, and nobody knows how to prevent that for sure.” That was AI godfather Geoffrey Hinton. Alan Turing said the same thing in 1951: expect the machines to take control, because he clearly wasn’t thinking of AI as just another technology like the Internet, but as a new species. Then it’s natural that the smarter species controls, just like we control tigers because we’re smarter. It doesn’t matter if the AI is evil or conscious. It just matters if it’s extremely competent and accomplishes goals that aren’t aligned with our goals. If you’re being chased by a heat-seeking missile, you don’t care whether it has goals in any deep philosophical sense; you just care whether it behaves as if it has goals. You should not assume that the goals of an AGI will be kind or good just because it’s so smart, because if Hitler had been smarter, it wouldn’t have been better. It would have been worse. It’s for reasons like these that people like Sam Altman have said AGI could mean lights out for all of us, and last year a who’s who of AI experts said that this could cause human extinction. Now, I do want to commend the AGI companies for spending at least a small fraction of their money on trying to solve this unsolved control problem, but they’re nowhere close, whatever their press releases say. The biggest success they’ve had so far is training large language models to not say harmful things, as opposed to making them not want harmful things. That’s like training a serial killer to never say anything that would reveal his murderous desires: it doesn’t solve the problem. And worse, it’s pretty clear now that the first AGI would not be a pure large language model. It would be some hybrid, scaffolded system which is more agentic, and we have no idea how to control such a thing as it recursively self-improves. But, you may say, maybe we should just want uncontrollable AGI. I know, not a single one of you voted for it; you voted against it in your show of hands at the beginning. But there is actually a small minority out there who want humanity to be replaced by machines. Professor Richard Sutton argues that this is just the natural next step in evolution, and we should welcome it.
Beff Jezos, who created the e/acc movement that some of you may have seen on Twitter, says he’s fine with us being replaced by machines. And his fan Marc Andreessen is excited here on Twitter, saying that AGI is gloriously, inherently uncontrollable. You know, I think of this as digital eugenics: basically wanting to replace all of humanity with some digital master race. Look, this minority of people, they’re entitled to their opinion, but so are the rest of us. And I’m on team human. I’m going to fight for the right of my one-year-old son to have a meaningful future, even if some digital eugenics dude feels that his robots are somehow more worthy. Okay, but, you might say, Isn’t AGI inevitable? Why don’t you just get off stage, Max, and stop trying to prevent the inevitable, because humans will build any tech that gives money or power? No. There are lots of powerful, profitable technologies that we’ve successfully banned, from human cloning to bioweapons. The first step here is always to stigmatize the tech. If you bump into someone here at WebSummit who’s for AGI, give them a hard time. Remind them that if they have any kind of influence or agency over the future, they’re going to lose it all to AGI. Step two for stopping AGI is to have safety standards, as I said. Imagine walking into the FDA and saying, Hey, it’s inevitable that my company is going to release this new drug next year; I just hope we can figure out how to make it safe first. You would get laughed out of the room. And in the same way, it’s no more inevitable that we’re going to build AGI if we have safety standards than it is that we’re going to release some unsafe medicine. But China! Don’t we need to build AGI fast anyway, before the Chinese do? No. The framing here is just totally wrong. The greatest national security threat, of course, is not another country, but out-of-control AGI that would be way more powerful than any country has ever been. I call this global battle to build AGI first the Hopium War, fueled by ‘hopium’, the delusional hope that we can control AGI. Why delusional? Because we’re closer to building AGI than we are to figuring out how to align or control it. This AGI race isn’t an arms race. It’s a suicide race, just like in the classic movie WarGames. “Strange game. The only winning move is not to play.” The only winning move is not to play. There’s no incentive for any government, including the Chinese government, to play this game and build AGI that could take away all their power as they lose control over it. What’s the solution, then? How can we have all this wonderful tool AI that we’ve heard so many talks about at WebSummit without losing control of AGI? I really like the very detailed Narrow Path plan that ControlAI put out recently. Here’s my vision in summary. On the policy side, the US and China each unilaterally decide to treat AI just like they treat any other powerful technology industry, with binding safety standards, not to appease the rival superpower, but just to prevent their own companies from causing harm and building uncontrollable AGI. Next, the US and China get together and push the rest of the world to join them in an AGI Moratorium Treaty, so that AGI isn’t built in North Korea or anywhere else. After that, we get this amazing age of global prosperity fueled by amazing tool AI, like we’re discussing here at WebSummit.
Because on the technical side, these safety standards drive unprecedented innovation. I think we’re actually underestimating how much and how fast we’ll be able to innovate to get safe tool AI, because we’re neglecting the fact that AI itself can help us build this tool AI. As we describe in this paper here, we’ve had a revolution in AI’s ability to produce art, to produce text, and soon code, and we’re getting a revolution in AI’s ability to produce proofs that the code does what you want it to do and meets your spec. Here’s my technical nerd vision for this. You, the human, write down your specification that you want your tool to obey. Some very powerful neural network-based AI that you don’t trust figures out how to make the tool, writes the code, writes the proof that it meets your spec, and—wait a minute. How are you possibly going to understand this giant AI and the tool it made and the proof? Well, here’s the great news. You don’t need to understand any of that. Because just like it’s much harder to find a needle in a haystack than to verify that it’s a needle after you’ve found it, it’s much harder to find a proof than to verify the proof after you’ve found it. And that can be done, in fact, with 300 lines of Python code on your laptop. This is how we can not only make, but also trust, really powerful tool AI, even if it’s made by systems we don’t trust. In summary, I’m really optimistic that we can have an inspiring and long future of global flourishing with tool AI, as long as we heed the warning from ancient Greece and don’t get hubris like Icarus. AI is giving us humans amazing intellectual wings with which we can do things beyond the wildest dreams of our ancestors, as long as we don’t squander it all by just obsessively trying to fly into the sun and build AGI. Thank you.
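The “hard to find, easy to verify” asymmetry at the heart of that vision can be made concrete with a small Python sketch. This is not the 300-line proof checker Tegmark alludes to, nor code from the paper he cites; it is only a toy analogue under simple assumptions: an untrusted search routine (standing in for a powerful AI you do not trust) hunts for a satisfying assignment to a logical formula, while a short trusted checker merely confirms that the candidate it is handed actually works. The formula and all function names are made up for illustration.

```python
# Toy illustration of "hard to find, easy to verify": an untrusted search
# procedure looks for a solution, and a tiny trusted checker confirms it.

from itertools import product

# A formula in conjunctive normal form: each clause is a list of literals,
# where a positive integer i means "variable i" and -i means "not variable i".
FORMULA = [[1, -2], [-1, 3], [2, 3], [-3, -2]]
NUM_VARS = 3

def verify(formula, assignment):
    """Trusted checker: a few lines that confirm every clause is satisfied."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def untrusted_search(formula, num_vars):
    """Untrusted 'solver': brute force here, but it could be anything,
    including an opaque neural network, because we never have to trust it."""
    for bits in product([False, True], repeat=num_vars):
        candidate = {i + 1: bits[i] for i in range(num_vars)}
        if verify(formula, candidate):
            return candidate
    return None

if __name__ == "__main__":
    solution = untrusted_search(FORMULA, NUM_VARS)
    if solution is not None:
        # We only trust the answer because the tiny checker confirms it.
        print("candidate:", solution)
        print("verified:", verify(FORMULA, solution))
```

The design point mirrors the one made on stage: the expensive, opaque search can be delegated to a system you do not trust, because the only component you have to trust and understand is the small checker.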
Hey, Max. All right. That was a wonderful speech, and I loved your suggestion that we all give a really hard time to the builders of AGI. So Sam Altman, get ready for things to get awkward, because the attendees of WebSummit are coming for you. So on that point, actually, it was interesting to hear you talk about AGI so much just now. And I’m curious, a lot of people actually question whether it’s even possible. You’re a professor. Can you quantify for me the actual probability, in your mind, of AGI actually ever being built? I think the probability that it’s physically possible to build it is basically 100%. What’s powered the whole AI revolution is this insight that our brain is a biological computer. It doesn’t matter whether the information is being processed by carbon atoms in neurons and brains or by silicon atoms in our technology. Whether we’ll actually build it will just depend on whether we, as a society, decide we want to or not. As I’m arguing, there’s absolutely no reason to try to build it now, or to put ourselves in some weird situation where it’s going to be built in two years and then we just have to hope we can make it safe. That’s so backward. We should set the safety standards first, and then we can have all these wonderful AI tools. If in twenty-five years someone figures out a way of controlling it, then we can have a discussion about whether we want this. Yeah, and I totally take your point on that, on putting the cart before the horse and not putting the safety standards in early enough. But I’m still just, again, on the definition of AGI, because, as a journalist, when I think about how people have defined AI over a number of years, I think people have capitalized on that word and overmarketed it and overused it and misused it as well, because the definition is so squishy and open to interpretation. I can imagine something similar happening with AGI maybe in a few years, a lot of companies saying, Well, we’ve got AGI now. Might that muddy the waters a little bit? We just won’t even know we’re there. That’s a very good point. Whenever there’s any tech that has promise, people will latch onto it for marketing purposes and put a little ‘AI inside’ sticker on it, even if it’s just a glorified Excel spreadsheet or something like that. People in another office in another country, signing NDAs. Exactly. That does happen. To see clearly among all this fog, I think it’s best to just go back to the guy who first coined the term AGI, Shane Legg, a leader at Google DeepMind, and he defined it the way I define it here: it’s just something which can do basically everything we can do. By that definition, we certainly don’t have it now. People who try to dismiss this as scaremongering will also often falsely suggest that the people warning about it are worried about what we have now. I don’t have any nightmares about ChatGPT-4o causing me problems. But at the same time, it’s very clear we could make it pretty soon, probably within years. I think of this a lot like 1942, when Enrico Fermi, the physicist, built the world’s first ever self-sustaining nuclear chain reaction, actually under a football stadium in Chicago. The physicists freaked out when they learned of this, not because his reactor was particularly big or dangerous, but because they realized that this was the last big hurdle before we got the bomb. Three years later, we had the bomb. Alan Turing, when he gave those warnings that I mentioned, said: Don’t freak out now. We have a lot of time, decades and decades, but I’ll give you a ‘canary in the coal mine’. It’s called the Turing test. When machines can master language and knowledge to the point of fooling many into thinking they’re human, that’s when you’re close. That’s when you should pay attention. The Enrico Fermi moment for us, I think, happened last year. Now is the time to have this conversation. Now it’s just a question of time, it sounds like. Okay, so I’m glad you mentioned history. I’d love to go back to a little bit more recent history, 10 years ago-ish. You’re a physics professor at MIT, studying the nature of the cosmos, and you decided to start the Future of Life Institute to support research into the existential risk of AI. Why did you do that at that time? Also, how did you convince Elon Musk to donate $10 million to your organization? I decided to start that organization because I just felt humanity was falling so far short of our potential. I love technology. That’s why I’m so happy I can work at a university that has technology in its name. Instead of building a great future with it, we’re getting into these political and geopolitical pissing contests. I felt that the key to a great future with high tech was to win the wisdom race, the race between the growing power of the tech and the wisdom with which we steer it towards good uses. We’ve done work on trying to reduce the risk of nuclear war, trying to make sure we use biotech for good, not for engineered pandemics, and most of all, AI. It wasn’t just about AI.
It was all these other potential threats from tech. That’s right. How did I convince Elon to support us initially? Those of you who’ve done a lot of fundraising know that you don’t convince people by changing their mind on anything. You just help them understand that you can help them accomplish what they’re already convinced of. Elon was quite freaked out already back then that we would do something reckless with AI. He is a very long-term thinker. That’s why he’s so excited about making life multi-planetary and having sustainable energy and all that. I reached out to him after he had tweeted, warning that AI could be more dangerous than nukes, and asked if he would be interested in meeting. I pitched him on this idea that we would organize a bunch of conferences that he wouldn’t have time to organize, bringing together those building the AI with people who were concerned, and that we would also fund the first-ever nerd research program, for nerds to work on making AI safe. Yeah, that’s how you described it, a nerd research program to make AI safe. I think that actually helped mainstream AI safety research. People saw this isn’t just a bunch of tree huggers complaining or something. It’s actually nerds going to AI conferences and presenting papers. That was great. What’s happened since then, though, is that we’ve realized we had less time than we thought: it turned out to be easier to build human-level AI than we thought. In hindsight, I feel a bit dumb about this. I should have seen it coming, because we made the same mistake not just with thinking machines, but with flying machines. Should have seen what coming? Sorry. Like, the advent of large language models? That it might be easier than we thought. Yeah, because if we were sitting here on stage in 1895 or 1900, trying to predict when we were going to get the first flying machines, we could be like, well, we aren’t even close to understanding how birds work. First, we have to figure that out. It’s probably decades. It turned out there was a much easier way to build flying machines. That’s how we both got here to Lisbon. Similarly, we were so stuck on this idea that we wouldn’t be able to build machines that could outthink us until after we figured out how our brains work. But it turned out there was a much simpler way to do it. That’s probably something that’s quite disconcerting for you: the speed of this, the pace of change. Is there anything… Actually, no, let’s just stick with Elon Musk one more time, because now he is likely to have quite a key role in the Trump administration. President-elect Trump has talked about rescinding Joe Biden’s Executive Order on AI, which could mean unraveling things like the US AI Safety Institute, which is part of NIST, and perhaps some of those standards. Yet we know this about Elon, that he’s got this quite deep, long-time concern about the risk of AI. Do you think that might actually temper some of Trump’s efforts to loosen the rules so that the US could get ahead of China? Might he have Elon whispering in his ear and saying, actually, keep those safety policies in place? I hope so. You asked me about persuading Elon. It’s the same with Donald Trump or Kamala Harris or the Prime Minister of Portugal or Ursula von der Leyen. You don’t actually need to persuade them that they shouldn’t want to be replaced and dominated by some new weird machine species. They are already against that. It’s just a matter of education.
I’m definitely hoping that Elon can help Donald Trump understand that it is just not in America’s national security interest for American companies to build something that we lose control over in two years, and that the real path to prosperity, and even strength, in America is to build AI tools that can be controlled. Yeah, okay. You also mentioned in your speech the AI pause letter that you organized last year; more than 30,000 signatories, including Elon Musk, Yoshua Bengio, and a number of other big thinkers in the AI community, Andrew Ng, signed that. I’m not sure there really was… Not Andrew Ng, though. Okay, sorry. I’m not sure how much of a pause there really was as a result of that, but here’s another question. Another thing I’m hearing, and correct me if I’m wrong, is that apparently the research and the capabilities of large language models are starting to get some diminishing returns, and we might be hitting something of a plateau. If you really want a pause, maybe we’re going to get a pause because some of these models just aren’t growing as fast or becoming as capable as they were before. Those are great questions. Let me say two separate things there. Sure. First, about the pause letter. My theory of change with that was not that I thought we were going to immediately get a pause. My goal was just to make it socially safe for people to raise concerns. Yeah, fair. That worked remarkably well. There were a lot of people who just had a lot of pent-up angst about this and were afraid of speaking up for fear of seeming like luddites. Then, after the pause letter and the extinction letter, nobody felt afraid of talking about this anymore, when even Geoff Hinton and Yoshua Bengio and others and the AI CEOs had signed these things. That totally worked. That in turn catalyzed all this political activity we’re seeing now, with international AI summits, AI Safety Institutes, and so on. But in terms of actually getting a pause, no, we certainly don’t have one. Unfortunately, and sorry to pour a little cold water on this, even if the training of pure large language models starts showing diminishing returns, which I think is still debatable, that’s in no way going to slow down the speed with which we’re approaching AGI. It’s more like, yeah, since you like history, look at… How many articles have you read in your lifetime about how Moore’s Law is dead, whatever? Missing the point entirely. Moore’s Law was just one particular aspect of the progress, how densely you can pack transistors onto a chip. Before that, you could have written articles about how punch cards were plateauing, et cetera. We went through about five different technological paradigms that each plateaued and got overtaken by a new one. Now, if pre-training of large language models starts to plateau a little bit, we’re already seeing a new paradigm where you get enormous benefits from post-training instead. But take ChatGPT o1, who’s used that? It gets much more powerful because instead of just forcing the LLM to output the next token, the first one it thought of, it’s allowed to do a lot of internal deliberation before it outputs something, which is what you and I do. That’s why I sometimes pause a little bit. Reasoning capabilities. We’re going to keep seeing— another huge new thing is that we’re training LLMs now to not just output words, tokens, but to use tools. We’re training the LLM to use calculators, databases, to be able to write Python code and execute it, and so on. To act as agents.
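For readers who want to see mechanically what that tool-use loop looks like, here is a minimal, self-contained Python sketch. It is purely illustrative: the fake_model function below is a hypothetical, hard-coded stand-in rather than any real model or API, whereas the systems Tegmark describes put a trained language model in that role; the harness simply executes whatever tool call the model requests and feeds the result back before the final answer is produced.

```python
# Minimal sketch of a tool-use ("agent") loop: the model does not just emit
# the next word, it can request a tool call, the harness executes it, and
# the result is fed back before the model produces its final answer.

import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_arith(expr):
    """Evaluate a purely arithmetic expression safely, without eval()."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

# The only registered tool here is a tiny calculator.
TOOLS = {"calculator": eval_arith}

def fake_model(question, tool_result=None):
    """Stand-in for an LLM: first requests a tool, then answers with the result."""
    if tool_result is None:
        return {"action": "call_tool", "tool": "calculator", "input": "123 * 456"}
    return {"action": "final_answer", "text": f"The product is {tool_result}."}

def agent_loop(question):
    step = fake_model(question)
    while step["action"] == "call_tool":
        result = TOOLS[step["tool"]](step["input"])   # harness executes the tool
        step = fake_model(question, tool_result=result)
    return step["text"]

print(agent_loop("What is 123 times 456?"))
```

The point of the sketch is only the control flow: the model proposes an action, something outside the model carries it out, and the model reasons over the returned result, which is what lets a language model act as an agent rather than a pure next-token predictor.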
This comes back to that history trend I mentioned. A lot of little individual things will, of course, plateau after a while, but there are so many other things which keep us steaming ahead. I actually think Sam Altman was right when he said that at this point, there’s no really… We can start seeing a path to get these to human level within the next few years. Last question, if you can answer it very quickly, because we’re almost out of time. Any thoughts on just the growing concentration of power in AI? Because we’ve seen some of the most promising AI startups acqui-hired by Amazon, Microsoft, and Google, and so much of the innovation and control and wealth is being concentrated in just a handful of enormous companies. Does that worry you? Of course. I think—I’m very against power concentration. That’s why I like this very democratic approach that so many people have here. I want to remind us there are two ways you can get horrible power concentration: either in the hands of one small or big tech company that decides to dominate everybody else, unelected tech bros, or we somehow lose power to machines. That’s going to be the worst power concentration possibly imaginable. It could get worse. I want the democratic future where it’s all… It’s everybody here in charge. That’s why I want to have these safety standards, so we level the playing field and the big bully companies don’t have any more say than the rest of us. Can I just end on a positive note again, so we don’t end on gloom? Yeah. I think after 13.8 billion years of cosmic history, we’ve never, ever had an opportunity to get so empowered as a species before, as with AI. There are so many things we would love to do but just haven’t figured out how to do, like preventing our loved ones from dying of diseases and safeguarding our climate, and so on and so forth. If we can get this right with tool AI, we can have a more amazing future than we ever dreamt of. Okay, great to end it on a positive note. Max, thank you very much. Thank you.