Lawyer and professor Lawrence Lessig examines governments and other collectives in the context of AI, dissecting the potential effects this technology can have on democratic institutions. His talk brings insights from his extensive legal and academic background. Lawrence Lessig is the Roy L. Furman Professor of Law and Leadership at Harvard Law School. Prior to rejoining the Harvard faculty, Lessig was a professor at Stanford Law School, where he founded the school’s Center for Internet and Society, and at the University of Chicago. He clerked for Judge Richard Posner on the 7th Circuit Court of Appeals and Justice Antonin Scalia on the United States Supreme Court. Lessig serves on the Board of the AXA Research Fund, and on the advisory boards of Creative Commons and the Sunlight Foundation. He is a member of the American Academy of Arts and Sciences and the American Philosophical Association, and has received numerous awards, including the Free Software Foundation’s Freedom Award, the Fastcase 50 Award, and being named one of Scientific American’s Top 50 Visionaries. Lessig holds a BA in economics and a BS in management from the University of Pennsylvania, an MA in philosophy from Cambridge, and a JD from Yale.
Transcriber: Guilherme Magalhães Bertholino Reviewer: Dana Sarhan

So on January 6th, 2021, my nation suffered a little bit of a democracy heart attack. Thousands of Americans had been told that the election had been stolen, and tens of thousands of them showed up because they believed the election had been stolen. And indeed, in polling immediately after January 6th, The Washington Post found 70% of Republicans believed the election had been stolen, and a majority of college-educated Republicans believed the election had been stolen. That was their perception, and I don’t know what the right way to act on that perception is. They thought the right way was to defend what they thought was a democracy stolen. Now, these numbers were astonishing: two thirds of Republicans believing the election was stolen. But even more extraordinary are these numbers: the fact that in the three years since that astonishing event, the numbers have not changed. The same number believe today that the election was stolen as believed it was stolen three years ago, despite the fact that we’ve had hundreds and hundreds of investigations and overwhelming evidence that there was no fraud sufficient to ever change even a single state. This is something new. When Richard Nixon went through the Watergate scandal, as the news was being reported, Nixon’s popularity collapsed, not just among Democrats, but among independents and Republicans. But we’re at a stage where it doesn’t matter what happens. This is Donald Trump’s popularity over the course of his administration. Nothing changes. The facts don’t matter. Now, I think this truth should bother us a lot. I think we need to develop a kind of paranoia about what produces this reality. A particular paranoia: the paranoia of the hunted. Think of the kids in “The Birds” first realizing that those crows were attacking them, or “Black Mirror”’s “Metalhead.”
When you see these creatures chasing and surrounding you, the point is, we need to recognize that there is an intelligence out there to get us, because our perceptions, our collective perceptions, our collective misimpressions, are not accidental. They are expected. They are intended. They are the product of the thing. Okay, I want to be careful introducing the thing. I’m going to talk a little bit about AI. But I’m not going to slag on AI. I think AI is the most extraordinary technology humanity has ever even conceived of. But I also think it has the potential to end humanity. But I’m not going to slag on AI, because I’m pretty sure that our robot overlord is going to be listening to these TED Talks someday, and I don’t want to be on the wrong side of the Overlord, so AI is just fine. I’m not going to talk about this AI first. I want instead to put AI in a little bit of perspective, because I think that we’re too obsessed with the new, and we fail to recognize the significance of AI in the old. We think about intelligence, and we’re distinguishing between artificial and natural intelligence. And we, of course, as humans, claim pride of kingdom in the world of natural intelligence. And then we build artificial intelligence. It’s intelligence that we make. But here’s the critical point. We have already, for a long time, lived with systems of artificial intelligence. I don’t mean digital AI, I mean analog AI: any entity or institution that we build with a purpose, that acts instrumentally in the world, is in this sense an AI. It is an instrumentally rational entity that’s mapping how it should behave, given the way the world evolves and responds to it. So think about democracy as an AI. It has institutions (elections, parliaments, constitutions) for the purpose of some collective ends. Our Constitution says it’s for the common good. So the democracy in our Constitution is an analog artificial intelligence devoted to the common good. We think about corporations as an AI.
They have institutions (boards, management, finance) for the purpose of making money, or at least that’s the purpose as it’s narrowly conceived of today. The corporation is an analog artificial intelligence devoted to maximizing shareholder value. These are AIs. They have purposes and objectives, sometimes complementing each other. So the purpose of a school bus company complements the purpose of a school board, to produce school bus transportation in a district. That’s just beautiful. But sometimes they’re competing. The purpose of a government in having a clean environment conflicts with the purpose of a coal company designed to produce electricity by spewing carbon and soot into the environment. And when they conflict, we tell ourselves this happy story. We tell ourselves the story that democracy is going to stand up and discipline that evil corporation, to get the corporation to do the right thing, to do the thing that’s in the interest of all of us. That’s our happy story. It’s also a fantasy, because, at least in my country, corporations are more effective AIs than democracy. Think about it a little bit like this. If we think about instrumental rationality along one axis of this graph, and time across the other: humans, of course, are the first instrumentally rational entity we care about. We’re better than cows, maybe not as good as ants, but the point is, we’re pretty good as individuals at figuring out how to do things strategically. And then we build democracy to do that a little bit better, to act collectively for all of us. And that’s a more instrumentally rational entity than we individual humans can be. Then we created corporations, and it turns out they have become, at least in corrupted political regimes (which, I’ll just submit, my political regime is), better than democracy at bringing about their objective ends. Now, of course, in this system, each of these layers has an aspiration to control the higher layer. So humans try to control democracy through elections.
Democracy tries to control corporations through regulation. But the reality of control is, of course, a little bit different. In the United States, corporations control democracy through the extraordinary amount of money they pour into elections, making our representatives dependent not on us, but on them. And democracy then controls the humans by making representation not actually representation, by corrupting representation. Now, this structure, this layering of higher-order intelligence or instrumental rationality, might evoke, for those of you who think about AI, a statement by the godfather of AI, Geoffrey Hinton. Hinton warns us there are few examples of a more intelligent thing being controlled by a less intelligent thing. Or, we could say, a more instrumentally rational thing being controlled by a less instrumentally rational thing. And that is consistent with this picture of AIs. And then we add digital AI into this mix. And here too, once again, we have corporations attempting to control their digital AI. But the reality of that control is not quite perfect. Facebook, in September of 2017, was revealed to have a term in their ad system called “Jew haters.” You could buy ads targeting Jew haters. Now, nobody in Facebook created that category. There was not a human in Facebook who decided, we’re going to start targeting Jew haters. Its AI created that category, because its AI figured Jew haters would be a profitable category for them to begin to sell ads to. And the company was, of course, embarrassed that it turned out they didn’t actually have control over the machine, the machine that ran their machines that run our lives. The real difference in this story, though, is the extraordinary potential of this instrumentally rational entity versus us, this massively better instrumentally rational entity versus even corporations and certainly democracies, because it’s going to be more efficient at achieving its objective than we are.
And here’s where we cue the paranoia I began to seed, because our collective perceptions, our collective misperceptions, are not accidental. They are expected. They are intended. They are the product of this AI. We could think of it as the AI perception machine. We are its targets. Now, the first contact we had with this AI, as Tristan Harris described it, came from social media. Tristan Harris, who co-founded the Center for Humane Technology, became famous in this extraordinary documentary, “The Social Dilemma,” which more than 130 million people have seen. Before he was famous, he was just an engineer at Google, and at Google he was focused on the science of attention: using AI to engineer attention, to overcome resistance, to increase human engagement with the platform, because engagement is the business model. Think of it as brain hacking. We could compare it to what we could call body hacking. This is the exploiting of food science. Scientists engineer food to exploit our evolution, our mix of salt, fat and sugar, to overcome the natural resistance, so you can’t stop eating food, so that they can sell food, or sell, quote, “food,” more profitably. Brain hacking is the same, but focused on attention. It’s exploiting evolution, the fact that we have an irrational response to random rewards, or can’t stop consuming bottomless pits of content, with the aim to increase engagement, to sell more ads. And it just so happens, too bad for us, that we engage more the more extreme, the more polarizing, the more hate-filled this content is. So that is what we’re fed by these AIs, with the consequence that we produce a people more polarized and ignorant and angry than at any time in democracy’s history, and in America, since the Civil War, and democracy is thereby weakened. They give us what we want. What we want makes us into this. Okay, but recognize something really critically important. This is not because AI is so strong. It’s because we are so weak.
Here’s Tristan Harris describing this: “We’re all looking out for the moment when technology would overwhelm human strengths and intelligence. When is it going to cross the singularity, replace our jobs, be smarter than humans? But there’s this much earlier moment when technology exceeds and overwhelms human weaknesses. This point being crossed is at the root of addiction, polarization, radicalization, outrage-ification, vanity-ification, the entire thing. This is overpowering human nature, and this is checkmate on humanity.” So Tristan’s point is, we’re always focused on this corner: when AGI comes, when it’s superintelligent, when it’s more intelligent than any of us. And that’s what we now fear: whether we will get there in three years or 20 years, what will happen then? But his point is, it’s actually this place where we must begin to worry, because at this place it can overcome our weaknesses. “The Social Dilemma” was about the individual weaknesses we have, not being able to turn away from our phones or to convince our children to turn away from their phones. But I want you to see that there’s also a collective human weakness, that this technology drives us to disable our capacity to act collectively in ways that any of us would want. So we are surrounded individually by these Metalheads, and we are also surrounded as a people by these Metalheads. Long before AGI is anywhere on the horizon, it overwhelms us. AI gets us to do what it seeks, which is engagement, and we get democracy hacked in return. Now, if the first contact that we had gave us that, if social media circa 2020 gave us that, what’s the second contact with AI going to produce, when AI is capable not just of figuring out how to target you with the content it knows will elicit the most reaction and engagement from you, but can create content that it knows will get you to react or engage more directly, whether true or false, whether slanderous or not? What does that contact do?
I so hate the writers of “Game of Thrones,” because in their last season they so completely ruined the whole series that we can’t use memes from “Game of Thrones” anymore. But if we could, I would say: winter is coming, friends. I’m just going to say it anyway. Winter is coming, friends, and these AIs are the source that we have to worry about. So then, what is to be done? Well, you know, if there’s a flood, what you do is you turn around and run. You move. You move to higher ground or protected ground. You find a way to insulate democracy, to shelter democracy, from AI’s harmful force. And, you know, the law does this in America with juries. We have juries. They deliberate, but they are protected in the types of information that they’re allowed to hear or talk about or deliberate upon, because we know we need to protect them if they’re to reach a judgment that is just. And democracy reformers, especially across Europe, are trying to do this right now. Reformers are building citizen assemblies, across Europe mainly, in Japan as well, and not yet in the United States. And citizen assemblies are these random, representative, informed and deliberative bodies that take on particular democratic questions and address them in a way that can be protected from this corruption of the AI. So Iceland was able to craft a constitution out of a process like this. Ireland was able to approve gay marriage and the deregulation of abortion through a process like this. France has addressed climate change and also end-of-life decisions. And across Germany, there are many of these entities bubbling up to find ways for a different democratic voice to find voice. But here’s the point. These are extraordinarily hopeful and exciting, no doubt. But they are not just a good idea. They are existential for democracy. They are security for democracy. They are a way to protect us from this AI hacking that steers against the public will.
This is change not just to make democracy better, a tweak to make it a little bit more democratic. It’s change to let democracy survive, given what we know technology will become. This is a terrifying moment. It’s an exhilarating moment. Long before superintelligence, long before AGI threatens us, a different AI threatens us. But there is something to do while we still can do something. We should know enough now to know we can’t trust democracy just now. We should see that we still have time to build something different. We should act with the love that makes anything possible. Not because we know we will succeed. I’m pretty sure we won’t. But because this is what love means. You do whatever you can, whatever the odds, for your children, for your family, for your country, for humanity. While there is still time, while our robot overlord is still just a sci-fi fantasy. Thank you very much. (Applause)