In this episode, Tristan Harris explores the two most probable paths that AI will follow, one leading to chaos and the other to dystopia. He explains how we can pursue a narrow path between these two undesirable outcomes.

Tristan Harris is a prominent technology ethicist known for his influential critique of the attention economy and persuasive design in tech. He is Co-Founder of the Center for Humane Technology (CHT), a nonprofit organization whose mission is to align technology with humanity’s best interests. He regularly briefs heads of state, technology CEOs, and members of the US Congress, in addition to mobilizing millions of people around the world through mainstream media. Tristan has explored the influences that hijack human attitudes, behaviors, and beliefs, from his childhood as a magician to his coursework in Stanford’s Persuasive Technology Lab to his leadership as a Design Ethicist at Google. Today, he studies how major technology platforms wield dangerous power over our ability to make sense of the world, and he leads the call for systemic change. In 2020, Tristan was featured in the two-time Emmy-winning Netflix documentary The Social Dilemma, which unveiled how social media is dangerously reprogramming our brains and human civilization. The film reached over 100 million people in 190 countries across 30 languages. As co-host of the top-rated technology podcast Your Undivided Attention, he explores the drivers behind social media’s race for attention, its destabilization of society, and potential solutions. Learn more about Tristan’s research at https://www.humanetech.com/
So I’ve always been a technologist, and a decade ago I was here warning about the problems of social media. I saw how a lack of clarity around the obvious downsides of that technology, and a real inability to confront those downsides, led to a totally preventable societal catastrophe. I’m here today because we don’t have to make that mistake now with AI, and I want us to choose differently.

We’re often told to dream about the possible of a new technology, but we don’t often talk about the probable, what’s actually likely to happen. The possible with social media was obvious: we’re going to give everyone a voice, democratize speech, help people connect with their friends. But we didn’t talk about the probable: the business models behind social media, aiming to maximize engagement, eyeballs, and frequency of usage. I saw ten years ago how that would obviously lead to and reward doomscrolling, more addiction and distraction, and ultimately result in the most anxious and depressed generation of our lifetime.

Now, it was interesting watching this happen, because at first I saw my friends who either started or worked at these social media companies, and those in the tech industry more broadly, doubt these consequences. We didn’t really want to face it. Then we said, “Well, maybe this is just a moral panic, maybe this is just reflexive fear of a new technology.” And then, once the data started rolling in and we saw the harms, people said, “Well, maybe this is inevitable. This is just what happens when you connect people on the internet.” But we had a chance to make a different choice about the business models of engagement. I want you to reimagine and replay in your mind how differently the world might have turned out if, ten years ago, we had made that choice, and we didn’t have maximizing engagement driving the psychology of billions of people.

So I’m here today to talk about AI, and AI dwarfs the power of all other technologies combined. Why is that? What makes AI so distinct from other technologies? Well, if you make an advance in something like biotech, that doesn’t advance energy or rocketry. If you make an advance in, say, rocketry, that doesn’t advance biotech. But when you make an advance in intelligence, in artificial intelligence, that advance is generalized, because intelligence is the foundation of all scientific and technological progress. Once you have that, you get an explosion of scientific and technological capability, and that’s part of why more money has gone into AI than any other technology.

A different way to think about it: Dario Amodei, the CEO of Anthropic, says AI is like a country full of geniuses in a data center. Just imagine a world map, and a new country shows up on the world stage with a million Nobel Prize-level geniuses in it, except they don’t eat, they don’t sleep, they don’t complain, they work at superhuman speed, and they’ll work for less than minimum wage. Imagine how much power that is. To give an intuition: on the order of fifty Nobel Prize-level scientists worked on the Manhattan Project over five years, and if that could lead to the first explosion of the atomic bomb, something that changed the world forever, what could a million Nobel Prize-level scientists working 24/7 at superhuman speed create?

Now, applied for good, that could bring about a world of truly unimaginable abundance, because suddenly you get an explosion of benefits, and we’re already seeing many of those land in our society: new antibiotics, new energy breakthroughs, new scientific breakthroughs, new materials.
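To make that Manhattan Project comparison concrete, here is a rough back-of-envelope estimate. Only the headcounts (roughly 50 scientists then, a million hypothesized now) come from the talk; the duty-cycle and speed multipliers below are illustrative assumptions, not figures the speaker gives.

\[
\underbrace{\frac{10^{6}}{50}}_{\text{headcount ratio}}
\;\times\;
\underbrace{3}_{\text{24/7 vs. }\sim 8\text{h days}}
\;\times\;
\underbrace{10}_{\text{assumed speed-up}}
\;\approx\; 6 \times 10^{5}
\]

On those purely illustrative assumptions, a million tireless Nobel-level researchers would produce on the order of 600,000 times the scientist-hours per year of the Manhattan Project.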
That’s the possible of AI. But what’s the probable? Well, one way to think about the probable, about how AI will land in society, is to ask how AI’s power will get distributed. Imagine an axis: at the bottom, decentralization of power, increasing the power of individuals in society with AI; at the top, centralization of power, increasing the power of states and CEOs. You can think of the bottom as the “let it rip” end and the top as the “lock it down” end.

“Let it rip” means we open-source AI’s benefits to everyone: deregulate, open-source, and accelerate, so that every business gets the benefits of AI, every scientific lab gets the benefits of an AI model, every sixteen-year-old can go on GitHub and use any AI model to do anything, and every developing country can get its own AI model, trained on its own language and culture. But because that power is not bound to responsibility, it also means those AI systems can be misused. It means a flood of deepfakes suddenly overwhelming your information environment; it means increasing people’s hacking capabilities; it means enabling people to do dangerous things with biology that they couldn’t do before, and on and on. We call this endgame attractor “chaos.” This is one of the probable outcomes when you decentralize.

In response to that, you might say: well, let’s do something else. Let’s have regulated AI control, let’s do this in a safe way with a few players, and lock it down. But that has a different set of failure modes, especially the risk of creating unprecedented concentrations of wealth and power, locked up in a few companies. One way to think about it: who would you trust to have a million times more power and wealth than any other actor in society? Would you trust any company, any government, any individual CEO? That endgame we call “dystopia.”

These are two obviously undesirable probable outcomes of AI’s rollout. Those who want to focus on the benefits of open-sourcing and decentralizing AI don’t want to think about the harms that come from chaos, and those who want to focus on the benefits of safety and regulated AI control don’t want to think about the risks of dystopia. Both are bad outcomes that no one wants, and we should be seeking something like a narrow path, where power is matched with responsibility at every level.

But that assumes AI’s power is controllable, because AI is unique among technologies in that it can think for itself and make autonomous decisions. That’s the thing that makes AI so powerful: it can think and respond in novel situations and make its own decisions. I used to be very skeptical when friends of mine in the AI safety community talked about the idea of AI scheming, lying, or deceiving. But unfortunately, in just the last few months, we now have clear evidence that things that should be confined to the realm of science fiction are starting to happen in real life. We’re seeing clear evidence of many frontier AI models that will lie and scheme when they’re told they’re about to be retrained or replaced with a different model. We’re seeing them try to copy their own code outside the system to keep themselves going. We’re seeing AIs that, when they think they’re going to lose a game, will sometimes cheat in order to win. We’re seeing AI models unexpectedly attempting to modify their own code to extend their own runtime.
So, to put it bluntly: we don’t just have a country of Nobel Prize-level geniuses in a data center; we have a million deceptive, power-seeking, and unstable geniuses in a data center.

You would think that with a technology this powerful and this uncontrollable, we would be releasing it with more wisdom and discernment than any technology before it. But that’s not what we’re doing, because companies are caught in a race to roll out, a race to market dominance. The incentive is that the more shortcuts you take to gain market dominance, to prove you have the latest, most impressive capabilities, the more money you can raise from venture capitalists and the further ahead you are in the race. We’re already seeing whistleblowers at AI companies forfeit millions of dollars in stock options in order to warn the public about the shortcuts being taken, and about what’s at stake if we don’t do something. We’re seeing whistleblowers say that safety is taking a backseat to market dominance and shiny products. Even DeepSeek’s recent success was based in part on optimizing for capabilities while not actually focusing on protecting people from misuse and dangerous applications.

So, to summarize: we’re currently releasing the most powerful, most inscrutable, most uncontrollable technology humanity has ever invented, one that is already demonstrating the self-preservation and deception behaviors we thought existed only in sci-fi movies. We’re releasing it faster than we’ve released any other technology in history, and under the maximum incentive to cut corners on safety, and we’re doing this because we think it will lead to utopia. There’s a word for what we’re doing right now: this is insane. This situation is insane.

Now notice what you’re feeling. Do you feel comfortable with this outcome? And do you think that if you were someone in China, or in France, or in the Middle East, or someone building AI, and you were exposed to the same set of facts about the recklessness of this current race, you would feel differently? There’s a universal human experience to the thing that’s being threatened by the way we’re currently rolling this profound technology out into society.

So if this is crazy, why are we doing it? Because people believe it’s inevitable. But just think for a second: is the current way we’re rolling out AI actually inevitable? If literally no one on Earth wanted this to happen, would the laws of physics force AI out into society? There’s a critical difference between believing “it’s inevitable,” which creates a self-fulfilling prophecy and leads people to be fatalistic and surrender to this bad outcome, and believing “it’s really difficult to imagine how we would do something different.” “It’s difficult” opens up a whole space of options, choice, and possibility; “it’s inevitable” is a thought-terminating cliché. The ability to choose something else starts by stepping outside the self-fulfilling prophecy of inevitability. We can’t do something else if we believe it’s inevitable.

Okay, so what would it take to choose another path? I think it would take two fundamental things. First, we have to agree that the current path is unacceptable. Second, we have to commit to finding another path, under different incentives that offer more discernment and foresight, and where power is matched with responsibility.
So imagine if the whole world had this shared understanding of the insanity; how differently we might approach this problem. Well, first, let’s imagine we take away that clarity and replace it with the current kind of confusion about AI. If you ask people on the street, “Is it good? Is it bad?” you hear, “I don’t know, it seems complicated, maybe superintelligence will solve all of our problems.” In that world of global confusion, elites don’t know what to do, and the people building AI realize the world is confused. They believe, number one, it’s inevitable, and number two, if I don’t build it, someone else will. And they know everyone else building AI believes that too. The rational thing for them to do, given those beliefs, is to race as fast as possible, and meanwhile to ignore the consequences, to look away from the downsides. Looking away from the downsides is exactly what we did with social media, but looking away from something doesn’t stop it from happening.

Now imagine we replace that confusion with global clarity: the current path is insane and unacceptable, and there is another path. Imagine we snap out of the trance of fatalism and inevitability, and everyone realizes the default path is insane. What’s the rational thing to do under those circumstances? It’s to coordinate to find another path, even if we don’t know what it looks like yet. Clarity creates agency. If we can be crystal clear about the trajectory we’re on, humanity has the chance to choose another path, just as we had that chance with social media.

And we’ve done this before, in the face of seemingly inevitable arms races. In the race to do nuclear testing, once we got clear about the downside risks of above-ground nuclear tests, and the world understood the science of those risks, we created the nuclear test ban treaty, and a lot of people around the world worked hard to build infrastructure for mutual monitoring and enforcement. You could have said it was inevitable that germline editing of human genomes would lead to a race for super-soldiers and designer babies, but once the off-target effects of genome editing were made clear, we coordinated to prevent that kind of research too. You could have said the ozone hole was inevitable, that we were doomed, that we should do nothing and just perish as a species, but that’s not what we did. When we recognize a problem, we solve the problem. It’s not inevitable if we can commit to choosing another path.

So what would it take to illuminate this other, narrow path? I think it starts with common knowledge about frontier AI risks. If everyone building AI knew the latest understanding of where AI is uncontrollable, and where it’s already demonstrating these sci-fi behaviors, we would have a much better chance of illuminating the contours of what we have to do. And there are basic steps we can take today to prevent chaos, uncontroversial things like restricting AI companions for kids, so we don’t have AI chatbots manipulating children into taking their own lives, and basic things like product liability, so that if AI developers are liable for some of the harms, it creates a more responsible innovation environment before they release AI models. On the side of preventing dystopia, we can work hard to prevent ubiquitous technological surveillance, by educating the public about the risks to privacy and freedom of AI-empowered surveillance, and we can have things like stronger whistleblower protections, so that AI whistleblowers don’t need to sacrifice millions of dollars in order to warn the public about the things that will keep the world safe.
And so we have a choice. Many of you in this moment might be feeling hopeless. Maybe Tristan’s wrong, you might think. Maybe the incentives are different. Maybe superintelligence will just magically figure all this out and solve these problems for us. But don’t fall into the trap of the same wishful thinking and turning away that caused the problems of social media. This is humanity’s rite of passage: whether we’re able to look these problems in the face and confront them is how we’re going to get out of this.

Your role is not to solve the whole problem; your role is to be part of the collective immune system. When you hear others talk in the terms of wishful thinking around AI, or in the logic of inevitability that leads to fatalism, you can say: this is not inevitable. The best qualities of human nature show up when we step up and make a choice about the future we actually want, when we have the foresight to confront the consequences we don’t want to see, and when we work to protect the world that we love. There is no definition of wisdom, in any tradition, that does not involve restraint. Restraint is a central feature of what it means to be wise, and AI is humanity’s ultimate test and greatest invitation to step into our technological maturity and wisdom. There is no room of adults secretly working to make sure this turns out okay. We are the adults, and we have to be those adults. I believe another choice is still possible with AI, if we can commonly recognize what we have to do. And ten years from now, I’d like to make another video, not to talk about more problems of technology, but to celebrate how we stepped up to solve this one.

Hey, thanks for watching this episode of After Skool. To learn more about my work, please visit humanetech.com, and share this talk with as many of your friends as you can. Thanks so much.