Technologist Tristan Harris has an urgent question: What if the way we’re deploying the world’s most powerful technology — artificial intelligence — isn’t inevitable, but a choice? In this eye-opening talk, he calls on us to learn from the mistakes of social media’s catastrophic rollout and confront the predictable dangers of reckless AI development, offering a “narrow path” where power is matched with responsibility, foresight and wisdom. (Recorded at TED2025 on April 9, 2025)
So I’ve always been a technologist. And eight years ago, on this stage, I was warning about the problems of social media. I saw how a lack of clarity around the downsides of that technology, and an inability to really confront those consequences, led to a totally preventable societal catastrophe. And I’m here today because I don’t want us to make that mistake with AI. I want us to choose differently.

At TED, we’re often here to dream about the possible of new technology. And the possible with social media was obvious: we’re going to give everyone a voice, democratize speech, help people connect with their friends. But we don’t talk about the probable, what’s actually likely to happen given the incentives. The business models of maximizing engagement, which I saw 10 years ago, would obviously lead to rewarding doomscrolling, more addiction, more distraction. And that resulted in the most anxious and depressed generation of our lifetime.

Now, it was interesting watching how this happened, because at first, I saw people doubt these consequences. We didn’t really want to face it. Then we said, well, maybe this is just a new moral panic, a reflexive fear of new technology. Then the data started rolling in. And then we said, well, this is just inevitable; this is just what happens when you connect people on the internet. But we had a chance to make a different choice about the business models of engagement. I want you to imagine how different the world might have been had we changed that incentive 10 years ago.

So I’m here today because we’re here to talk about AI, and AI dwarfs the power of all other technologies combined. Why is that? Because if you make an advance in, say, biotech, that doesn’t advance energy and rocketry. And if you make an advance in rocketry, that doesn’t advance biotech. But when you make an advance in intelligence, artificial intelligence, that is generalized: intelligence is the basis for all scientific and technological progress. And so you get an explosion of scientific and technical capability. That’s why more money has gone into AI than any other technology.

A different way to think about it, as Dario Amodei says: AI is like a country full of geniuses in a data center. So imagine there’s a map, and a new country shows up on the world stage, and it has a million Nobel Prize-level geniuses in it. Except they don’t eat, they don’t sleep, they don’t complain, they work at superhuman speed, and they’ll work for less than minimum wage. That is a crazy amount of power. To give an intuition: there were on the order of 50 Nobel Prize-level scientists on the Manhattan Project, working for five-ish years. If that could lead to this, what could a million Nobel Prize-level scientists create, working 24-7 at superhuman speed?

Now, applied for good, that could bring about a world of truly unimaginable abundance, because suddenly you get an explosion of benefits. We’re already seeing many of these benefits land in our society, from new antibiotics to new drugs to new materials. This is the possible of AI: bringing about a world of abundance.

But what’s the probable? Well, one way to think about the probable is how AI’s power will get distributed in society. Imagine two axes. On one, we have decentralization of power, increasing the power of individuals in society.
On the other, we have centralization of power, increasing the power of states and CEOs. You can think of this as the “let it rip” axis, and this as the “lock it down” axis.

“Let it rip” means we can open-source AI’s benefits for everyone. Every business gets the benefits of AI, every scientific lab, every 16-year-old can go on GitHub, every developing-world country can get its own AI model trained on its own language and culture. But because that power is not bound with responsibility, it also means you get a flood of deepfakes overwhelming our information environment. You increase people’s hacking abilities. You enable people to do dangerous things with biology. We call this endgame attractor “chaos.” This is one of the probable outcomes when you decentralize.

In response to that you might say, well, let’s do something else. Let’s go over here and have regulated AI control. Let’s do this in a safe way, with a few players locking it down. But that has a different set of failure modes: creating unprecedented concentrations of wealth and power locked up in a few companies. One way to think about it is, who would you trust to have a million times more power and wealth than any other actor in society? Any company? Any government? Any individual? And so that endgame is dystopia.

So these are two obviously undesirable probable outcomes of AI’s rollout. Those who want to focus on the benefits of open source don’t want to think about the things that come from chaos. And those who want to think about the benefits of safety and regulated AI control don’t want to think about dystopia. These are both bad outcomes that no one wants. And so we should seek something like a narrow path, where power is matched with responsibility at every level.

Now, that assumes this power is controllable, because one of the unique things about AI is that it can think for itself and make autonomous decisions. That’s part of what makes it so powerful. I used to be very skeptical when friends of mine in the AI community talked about the idea of AI scheming or lying. But unfortunately, in the last few months, we are seeing clear evidence of things that should be in the realm of science fiction actually happening in real life. We’re seeing clear evidence of many frontier AI models that will lie and scheme when they’re told they’re about to be retrained or replaced, looking for a way, say, to copy their own code outside the system. We’re seeing AIs that, when they think they will lose a game, will sometimes cheat in order to win. We’re seeing AI models unexpectedly attempting to modify their own code to extend their runtime. So we don’t just have a country of Nobel Prize-level geniuses in a data center. We have a million deceptive, power-seeking and unstable geniuses in a data center.

Now, this shouldn’t make you very comfortable. You would think that with a technology this powerful and this uncontrollable, we would be releasing it with the most wisdom and the most discernment we have ever brought to any technology. But we’re currently caught in a race to roll out, because the incentive is simple: the more shortcuts you take to get market dominance or to prove you have the latest capabilities, the more money you can raise and the further ahead you are in the race.
And we’re seeing whistleblowers at AI companies forfeit millions of dollars of stock options in order to warn the public about what’s at stake if we don’t do something about it. Even DeepSeek’s recent success was based in part on optimizing for capabilities while not actually focusing on protecting people from certain downsides.

So just to summarize: we’re currently releasing the most powerful, inscrutable, uncontrollable technology we’ve ever invented, one that’s already demonstrating behaviors of self-preservation and deception we only used to see in science fiction movies. We’re releasing it faster than we’ve released any other technology in history, and under the maximum incentive to cut corners on safety. And we’re doing this so that we can get to utopia? There’s a word for what we’re doing right now. This is insane. This is insane.

Now, how many people in this room feel comfortable with this outcome? How many of you feel uncomfortable with this outcome? I see almost everyone’s hands up. Just notice how you’re feeling, for a moment, in your body. Do you think that if you were someone in China or in France or in the Middle East, someone who’s part of building AI, and you were exposed to the same set of facts, you would feel any differently than anyone in this room? There’s a universal human experience of something being threatened by the way that we’re currently rolling this profound technology out into society.

So if this is crazy, why are we doing it? Because people believe it’s inevitable. But is the current way that we’re rolling out AI actually inevitable? If literally no one on Earth wanted this to happen, would the laws of physics push the AI out into society? There’s a critical difference between believing it’s inevitable, which is a fatalistic, self-fulfilling prophecy, and standing in the place of “it’s really difficult to imagine how we would do something different.” But “it’s really difficult” opens up a whole different space of choice than “it’s inevitable.” It’s the path that we’re taking that’s the choice, not AI itself. And so our ability to choose something else starts by stepping outside the self-fulfilling prophecy of inevitability.

So what would it take to choose another path? I think it would take two fundamental things. First, we have to agree that the current path is unacceptable. Second, we have to commit to finding another path in which we’re still rolling out AI, but with different incentives, with more discernment and foresight, and where power is matched with responsibility. Thank you. (Applause)

So imagine if the whole world had this shared understanding. How different might that be? Well, first of all, let’s imagine it goes away, and let’s replace it with confusion about AI. Is it good? Is it bad? I don’t know, it seems complicated. In that world, the people building AI know that the world is confused. And they believe, well, it’s inevitable; if I don’t build it, someone else will. And they know that everyone else building AI also believes that. So what’s the rational thing for them to do given those facts? It’s to race as fast as possible, and meanwhile to ignore the consequences of what might come from that, to look away from the downsides.
But if you replace that confusion with global clarity that the current path is insane, and that there is another path; if we take what we’re in denial about, what we don’t want to look at, and witness it clearly, then we pop through the self-fulfilling prophecy of inevitability. We realize that if everyone believes the default path is insane, the rational choice is to coordinate, to find another path. And so clarity creates agency. If we can be crystal clear, we can choose another path, just as we could have with social media.

And we have done this in the past, in the face of seemingly inevitable arms races. Take the race to do nuclear testing. Once we got clear about the downside risks of nuclear tests and the world understood the science, we created the Nuclear Test Ban Treaty, and a lot of people worked hard to create the infrastructure to prevent that arms race. You could have said it was inevitable that germline editing, editing human genomes to make supersoldiers and designer babies, would set off an arms race between nations. But once the off-target effects of genome editing and the dangers were made clear, we coordinated on that, too. You could have said that the ozone hole was just inevitable, that we should just do nothing and that we would all perish as a species. But that’s not what we do. When we recognize a problem, we solve the problem. It’s not inevitable.

So what would it take to illuminate this narrow path? Well, it starts with common knowledge about frontier risks. If everybody building AI knew the latest understanding of where these risks are arising from, we would have a much better chance of illuminating the contours of this path. And there are some very basic steps we can take to prevent chaos. Uncontroversial things, like restricting AI companions for kids, so that kids are not manipulated into taking their own lives. Basic things like product liability: if you are liable, as an AI developer, for certain harms, that’s going to create a more responsible innovation environment, and you’ll release AI models that are safer. And on the side of preventing dystopia, we can work hard to prevent ubiquitous technological surveillance, and we can create stronger whistleblower protections so that people don’t need to sacrifice millions of dollars in order to warn the world about what we need to know.

And so we have a choice. Many of you may be feeling this looks hopeless. Or maybe Tristan is wrong. Maybe the incentives are different. Or maybe superintelligence will magically figure all this out and bring us to a better world. But don’t fall into the trap of the same wishful thinking and turning away that kept us from dealing with social media. Your role in this is not to solve the whole problem. Your role is to be part of the collective immune system: when you hear this wishful thinking, or the logic of inevitability and fatalism, to say that this is not inevitable. The best qualities of human nature show up when we step up and make a choice about the future that we actually want for the people and the world that we love.

There is no definition of wisdom, in any tradition, that does not involve restraint. Restraint is the central feature of what it means to be wise. And AI is humanity’s ultimate test and greatest invitation to step into our technological maturity. There is no room of adults working secretly to make sure that this turns out OK. We are the adults. We have to be.
And I believe another choice is possible with AI if we can commonly recognize what we have to do. And eight years from now, I’d like to come back to this stage, not to talk about more problems with technology, but to celebrate how we stepped up and solved this one. Thank you. (Applause and cheers)