“It’s a very, very, very big gamble to despair of the humans and put all our trust in the AIs.” — Yuval Noah Harari
Yuval Noah Harari on AI racing dynamics: You go from one corporation to the next, and they tell you “We would like to slow down, but we can’t because we can’t trust our competitors to slow down.”
— ControlAI (@ai_ctrl) November 6, 2024
Elon Musk:
— There’s a 10 to 20% chance AI annihilates us
— Doesn’t think we can control superintelligent AI
— Thinks Sam Altman isn’t really concerned about it, and shouldn’t be trusted to control this technology
— ControlAI (@ai_ctrl) November 2, 2024
Watch Joseph Gordon-Levitt (@hitrecord), actor, filmmaker and entrepreneur, in conversation with Yuval Noah Harari, historian, philosopher and the author of ‘Sapiens’ and the new book ‘Nexus: A Brief History of Information Networks from the Stone Age to AI’. Together, they explore our current information revolution, delving into topics that range from AI’s potential to redefine creativity and democracy, to AI rights and Hollywood’s role in shaping future narratives. This event was part of an ongoing series by the Berggruen Institute’s Studio B (@Berggrueninst) pairing technologists and creatives in conversations that explore transformative technologies and the role of storytelling in shaping future imaginations. Filmed in Los Angeles on September 27, 2024.
00:00 – Introduction
05:31 – Storytelling makes us human
10:27 – Democracy vs dictatorship in the AI age
13:56 – Objective, subjective & intersubjective realities
19:43 – A brief history of democracy
24:23 – Pros & cons of social media
30:27 – AIs are the editors now
35:48 – Will AI rewrite the future of storytelling?
40:43 – AI as a god
44:05 – Humans made the data
50:44 – AI lovers & rights
55:36 – The legal pathway to AI rights
59:33 – What Hollywood gets wrong
1:03:39 – Dangers of AI bureaucrats
1:05:07 – What makes you optimistic?
Alex Gardels: Hello, hello, hello, welcome, thank you all for coming. My name is Alex Gardels. I am the director of Studio B. Tonight we have a wonderful program for you, and I’m going to hand it off to my lovely colleague, Nathalia Ramos, to give a slight introduction.

Nathalia Ramos: Thank you. Hi everyone. At the Berggruen Institute, we like to say that nothing is more powerful than an idea whose time has come. But we also believe that in order to have real impact, that idea needs a story, and that’s why we started Studio B: to bring together the thinkers, the scientists, the philosophers, the technologists with the greatest storytellers of our time, because at this moment of planetary-scale transformation, getting the story right is more important than ever. So thank you all for being here, and I’m now going to pass it over to our founder, Nicolas.

Nicolas Berggruen: Good evening. I’m Nicolas Berggruen of the Berggruen Institute. Thank you for being here tonight. Thank you Nathalia, thank you Alex. Thank you Studio B, thank you everyone at the Berggruen Institute. And most importantly, thank you, Joseph Gordon-Levitt and Yuval Harari. Frankly, they don’t need an introduction. I think you know who Joseph is. He is, in some ways, one of yours, meaning he is LA, Hollywood. He’s behind and in front of the camera, and he’s been engaged intellectually on the issues that we care about at the institute. One of those issues is: who are we becoming? Who do we want to be as humans in an age where we can, in theory, change our own nature, transform ourselves through gene editing, AI, quantum computing? And no one has, maybe, put this into perspective like Yuval Harari. Yuval wrote a new book, Nexus, which is powerful and, frankly, incredibly timely, because his vision, I think, has been that we’ve created our lives, our civilizations, from narratives, and we humans have really driven those narratives. But now with AI, it could be that the narrative is driven beyond us, even though, in theory, it came from us. So we could lose control, if we ever had it; we could really lose the narrative that we are potentially unleashing. So let’s hear about it tonight, and thank you again.

Joseph Gordon-Levitt: Hey everybody, what’s going on? Whoo. Did that feel like a movie moment or what? Jesus. All right, I’m really, really excited to be here. Thank you to Nicolas and everyone at the Berggruen Institute and Studio B. I am an enormous fan of this man’s writing here, Mr. Yuval Harari, ladies and gentlemen. We’re here for him. He’s written a new book called Nexus, and I want to talk to you about that tonight. Of course, you’re on a book tour now, so I’m sure you’re talking about this book a lot and maybe saying some of the same things over and over again. I’ve never experienced that before myself, having to say the same things over and over again promoting a piece of media. Watch Killer Heat, released today on Amazon Prime Video. So I’m going to try to come at this conversation and let you say some of the things you want to say to everybody, but also maybe come at it from a perspective that’s unique to this time and place. We’re here in Los Angeles, which is sort of the epicenter of the film and television industry. I’ve worked in film and television my entire life, and a lot of your book is about storytelling. I love to consider myself a storyteller, as an actor and a filmmaker. It’s something that I think of as more than a job.
I connect to it on maybe an existential level, on who I am. And one thing I often like to say is that storytelling is a big part of what makes humans human. So one of the first things I wanted to ask you: you wrote a book about humans called Sapiens. How many people here have read Sapiens? Yeah. So as an expert on the history and the prehistory of what humans are, I’m curious to hear what you think of that statement. If I say storytelling is a big part of what makes humans human, would you agree with that?

Yuval Noah Harari: Yeah, absolutely. I mean, this is the source of our power. We control the world because we can cooperate in much larger numbers than any other animal, and the secret of human cooperation is stories. You can’t get to know more than 100 or 200 individuals, but the same story can be known to millions and even billions of people, and as long as everybody believes the same story, they can cooperate; they can agree on norms and values and so forth. And maybe to give one or two examples: think about the most famous face in history, the most famous portrait in history. It’s the face of Jesus. Over 2,000 years, more portraits of Jesus have been created than probably of any other person in history, and they are everywhere: in churches, in cathedrals, in houses, in schools, in government ministries, at least in some countries. And 100% of these portraits are fictional. Not a single one is authentic, because nobody has any idea how Jesus actually looked. There is not a single portrait or statue made during his lifetime that has survived. The earliest portrait we have is from about 200 years after his death, and it shows a man crucified on a cross with the head of a donkey, because it was a graffito drawn by anti-Christians who made fun of this new, strange cult. In the Bible itself, there is not a single word about how the man looked. There is one description of his clothes, but nothing about him: whether he was tall or short, thin or fat, black-haired or bald or blond. Nothing. Everything is just fictions. And yet these fictions have enabled billions of people to work together, whether for good purposes like charities and building hospitals, or for fighting holy wars and launching inquisitions. It all runs on stories. And religion is an easy example, but perhaps the most important or successful example is money.

Joseph Gordon-Levitt: You’re saying that money is a story.

Yuval Noah Harari: What else can it be? It has no objective value. You can’t eat or drink dollars. The most successful storytellers in the world are not the people who win the Nobel Prize in Literature; it’s the people who win the Nobel Prize in Economics. And one last thought: if you think about the United States today, maybe the last story that still holds this place together is the dollar. It’s the only thing that Republicans and Democrats can still agree on, the value of a dollar. On everything else, they have very different views.

Joseph Gordon-Levitt: So this gets at one of the things I found fascinating in the book: that you define stories not only by the content of the story, but by the network of people that are connected by that story. You mentioned Republicans and Democrats. You mentioned Christians. Every story, especially any story of any popularity, defines a certain network. Can you say more? Because that was a new concept to me.
Yuval Noah Harari: Again, the way that Nexus tries to understand history is not as the history of religions, cultures, wars, kings, but as how information flows in the world, and the networks of information that create the world. To give one example, we tend to think about democracy and dictatorship as two contrasting types of ethical systems, of political systems. They have different ideals, different values, which is true, but on a more fundamental level you can look at them as different information networks. Information simply flows differently in a dictatorship and in a democracy. A dictatorship is a network in which all the information flows to one hub, where all the decisions are being made. If dictatorship is a centralized information system, democracy is a distributed or decentralized one. Yes, some information flows to the center, let’s say to Washington, where some decisions are being made, but most decisions are made elsewhere. If you look at the United States over the last century, a lot of extremely important decisions were made here in Hollywood, and it’s an open debate who influenced whom more: was it Washington influencing Hollywood, or vice versa? In the Soviet Union, there is no debate. The people in the center, Stalin, are telling Eisenstein what to film. There is no debate. You do the wrong thing, you go to the Gulag. Even if you do the right thing, you might go to the Gulag. So that’s the difference in how information flows between democracies and dictatorships.

And I guess we’ll get to that, but one of the biggest questions about AI is what it will do to these contrasting types of information flows. In the 20th century, the US defeated the USSR ultimately because centralized information processing was inefficient. When all the information about everything, about economics, culture, the film industry, producing cars, flowed to Moscow, and some bureaucrats needed to make the decisions, they couldn’t do it. They were overwhelmed by the amount of information, and either made no decisions or bad decisions. The distributed system of the United States functioned much more efficiently. But what happens if you replace humans with AIs? Humans are overwhelmed by too much information; AIs become more efficient when you flood them with information. Again, it’s an open debate, and we still don’t know the answer, but there is at least one school of thought that argues AI will make centralized information systems more efficient, and that the main problem of totalitarian and centralized systems in the 20th century will become their big advantage. We even see this, to some extent, in the economic realm, with these huge high-tech monopolies that monopolize entire sections of the economy, because size matters when you have this kind of technology.

Joseph Gordon-Levitt: I want to get to AI; this is something I’m deeply fascinated with, and I know a lot of you are, too. I also want to get to what you just said about whether Hollywood or Washington is making more decisions, because we love to think that we’re important here in Hollywood.
Before we do, I just want to cover one of the pieces of vocabulary that I found the most useful in your book, which I want to go back to a few times as we talk tonight. And again, I think this is sort of empowering and flattering if you’re a storyteller. You say in the book that stories can even create a new level of reality, and you talk about three different versions of reality, if you will: objective reality, subjective reality and intersubjective reality. Can you explain what you mean by those terms?

Yuval Noah Harari: Yeah. So objective reality is things that exist whether you believe in them or not, whether you know about them or not. If an asteroid from outer space is now coming towards us, about to hit Earth and destroy human civilization, it exists whether people know about it or not. And similarly, viruses killed people, billions of people, long before anybody knew viruses exist. It doesn’t matter if you believe in them or not; they can kill you. This is objective reality. Then you have subjective reality, which is reality in the mind of a single person. It depends on the consciousness, on the awareness, of a single person. If you feel pain, nobody else in the world feels this pain. Your doctor, your family can tell you, oh, there is nothing wrong with you, but if you feel the pain, it’s still there. This is subjective reality. Now, intersubjective reality is something that, as far as we know, didn’t exist for most of the history of the universe. There were only objective and subjective realities, that’s it. And stories were important for the appearance, in the last few tens of thousands of years at least, in our species, of a new type of reality: intersubjective reality, things that exist in the interaction between many consciousnesses, between many minds. They often exist on the level of story, and we already mentioned some of them. Money is maybe the most important type of intersubjective reality. It’s not objective. A dollar has no objective value in the sense that you can eat it or drink it or make something out of it. If you’re a billionaire and you’re flying a private jet with, I don’t know, a billion dollars, and you crash on a deserted island inhabited only by, I don’t know, monkeys and rats and whatever, your dollars are worthless. You cannot buy anything from the monkeys. You cannot make them build a raft for you. You still have the papers, maybe, or you can show the monkeys your bank account on your smartphone: look, I have a billion dollars, just build this raft, or give me these bananas, and I’ll give you as much as you want. It won’t help. They are simply unaware of the existence of this reality. Now, if you crash on a lonely island with a cargo of bananas, you can eat them. If you crash with a cargo of, I don’t know, Hummers, you can use them to try and build a raft. But the most worthless thing to crash on a lonely island with is money. There’s nothing to do with it. But if you have other people there, and they believe the same story as you, it works. Again, though, it’s fragile. Intersubjective realities are fragile because people can lose faith in them. It happens to gods: some gods were once the most powerful intersubjective realities in the world, and now nobody believes in them, and they are gone. And the same thing happens to money.
Going back, say, to the dollar, it’s now interesting to see what might happen if some people lose faith in this intersubjective reality and instead put their faith in another one, like Bitcoin. The same way that we had wars of religion in previous eras, now we might have, you know, just imagine trying to run a society when different groups of people simply reject each other’s money: no, no, ours is better.

Joseph Gordon-Levitt: So one of the most powerful tools to create an intersubjective reality in our time is film and television. And again, here we are in LA; lots of us work making film and television. I’m curious to dive in a little bit. You don’t talk about Hollywood film and TV too much in your book, but because we’re here, I’m curious to ask you about it. We’re also, by the way, at the house where the horse’s head scene from The Godfather was shot, which, come on, yeah, The Godfather. I’m curious to hear from you, from a historical perspective: how did film and TV change storytelling compared to the technologies that existed before them, do you think?

Yuval Noah Harari: So film and TV are, of course, not alone. They are part of the package of modern information technology, and the changes were enormous. Going back to one salient example, think about the history of democracy. For most of history, large-scale democracies were simply impossible. There was no way to create a large-scale democracy, because democracy is a conversation. In a Stone Age tribe, if you want to have a conversation between all the tribe members, it’s relatively easy: they can all come together and talk. But as human societies became bigger and bigger, it became more and more difficult to bring everybody together to the town square, or to the same place, to have a conversation. So we don’t know of any large-scale democracy in ancient times. All the examples, like Athens or Rome, are city-states or even smaller tribes and towns. We begin to see large-scale democracies only in the late modern era, with the rise of modern information technologies: first the newspaper, and then telegraph and radio and television and film, and finally the internet. This is when you begin to see large-scale democracies, because you can use these technologies to create a conversation in real time between millions of people, people spread over entire continents. They all watch the same news on television, the same TV series, the same movie at the same time, so they can connect in a way that was just impossible before, and this was the foundation of modern, large-scale democracy. And again, we’ll get to that later, but the new information technologies of the present era, like social media and AI, are destabilizing democracy for this reason: because democracy is built on top of information technology. So it’s not hard to see why a major change in information technology will create an earthquake in the structure built on top of it, which is democracy. But the last thing I’ll say is that every technology has many different usages. Modern totalitarian regimes are also built on the same technology. There were no large-scale totalitarian regimes in the ancient world. Totalitarian regimes are regimes that try to control everybody all the time, everything you do.
In the ancient world, you have many monarchies and autocracies, but most of the time, you know, the emperor in Rome collects taxes and directs the armies, but he doesn’t try to manage the life of every individual in every village on a day-to-day basis, because this is simply impossible. You begin to see large-scale totalitarianism at the same time as large-scale democracy, because it relies on the same technology. The Soviets, for instance: at the same time that Hollywood is established, you also see the flowering of the Soviet film industry, and the Soviets were masters, at least in the beginning, of the art of filmmaking, Eisenstein, yeah. And they used the same technology to create a totalitarian regime, so that everybody from the Pacific Ocean to the Baltic Sea would watch the same movies, and all the movies are giving the same narrative, the same messages, directed from the center.

Joseph Gordon-Levitt: The Nazis were also brilliant filmmakers. Exactly, Leni Riefenstahl, yeah. You were touching on it, but that’s the next place I wanted to go. Okay, so if film and television changed the intersubjective realities and changed the way that networks can form, let’s move to the dominant media technology of the last couple of decades. Because, sadly enough, that’s not film and TV. Not that film and TV did such a great job, but the dominant technology now has been social media. So I’m curious to ask you the same question that I asked about film and TV. With the rise of social media, how did that change what networks could form, what intersubjective realities could be created, and how humans behaved and thought?

Yuval Noah Harari: Well, it had an immense impact. Just think about the numbers: you have billions of people spending hours every day on this. It is bound to have a tremendous impact, and it creates new ways to form networks, to organize people, which again has a profound impact on society. It had positive influence and also negative influence. I tend to talk a lot about the negatives, but maybe I’ll start with the positive influence of social media. The example I like best is from my own personal life. I met my husband, Itzik, 22 years ago on one of the first social media websites for gay people in Israel. And the thing about social media in the gay community, for instance, is that over history you have two types of minorities, again in terms of information networks: you have concentrated minorities and you have dispersed minorities. Jews, for instance, are a concentrated minority. Even if, in a particular country, they are just 1% of the population, they are concentrated in particular communities and neighborhoods. So usually a Jewish boy would be born into a Jewish family in a Jewish neighborhood. Finding other Jews isn’t a problem; you’re surrounded by them, even though you’re just 1% of the population.

Joseph Gordon-Levitt: As a Jewish Angeleno, I can confirm that this is true.

Yuval Noah Harari: But a gay boy is usually not born into a gay family in a gay neighborhood, and being born gay in Israel in the 1970s, and growing up in the 1980s and 1990s in what was a very homophobic society, the first issue is: how do you find anybody else? I mean, I didn’t know anybody who was out, certainly not in my town. And this goes back to film and television.
The first time I saw gay people was on the screen.

Joseph Gordon-Levitt: What show or movie was it? Do you remember?

Yuval Noah Harari: There was this British series, Are You Being Served?, with... what’s his name?

Joseph Gordon-Levitt: I don’t know it. Wow, really? Yeah.

Yuval Noah Harari: Mr. Humphries. So, and there was one film that I saw later, in the ’90s, Lilies, by a Canadian director, I think. It wasn’t very famous, but it had a tremendous impact on my life. I remember going to see this movie and coming out of the movie theater and telling myself: you’re okay. Wow, it really had a huge, huge impact on my life. But you know, you see a gay person on television, you can’t reach out to them, and you don’t know who else in your town is seeing the same movie or the same series and feeling the same as you. It’s only one-directional. But then the internet came, and social media came, and suddenly you can interact through the screen. So what was maybe the biggest hurdle ever, how do you find the others, suddenly becomes very, very easy.

Joseph Gordon-Levitt: All right, that’s the positive side.

Yuval Noah Harari: That’s the positive side, but there are also negatives. I mean, as you may have noticed, we now have the most sophisticated information technology in history, and we can’t talk to each other anymore, and certainly we can’t listen to each other. I said earlier that democracy is a conversation, and the conversation is breaking down all over the world. Every country has its own special explanations for why Republicans and Democrats can’t hold a reasoned conversation anymore, and you have all these explanations about American history and society and whatever. But then you go to Brazil, and it’s the same thing. You go to Israel, it’s the same thing. I was even in Canada two weeks ago. I thought this was a sane place. Literally, no, it’s the same in Canada. So what’s happening? It’s not as if the ideological divides today are much worse than they were, let’s say, in the ’60s. The ideological divides in the ’60s were actually much, much bigger: the civil rights movement, the sexual revolution, the Vietnam War, the Cold War. Even the violence, at least up till now, was much worse, in terms of assassinations and riots and so forth. But nevertheless, people could still hold a conversation, agree on some basic facts, like who won the last elections, and pass bipartisan laws. So why is it breaking down all over the world? And the best answer we have so far is social media. Again, this basic information technology underlying the whole structure is now creating an earthquake, and the mechanism by now is quite well understood. Ten years ago it was not well understood. Unfortunately, now we know what’s wrong; we still fail to fix it, but at least we know what’s wrong. And this is very important, because this is kind of the first major impact of AI on history, because social media is run, is managed, not by human beings; it’s managed by algorithms, by AIs. And this is very interesting, because one of the first jobs to be automated was not taxi drivers, and it was not textile workers. It was media editors, news editors. In the 19th and 20th centuries, being the editor of a major media or news organization was an immensely powerful position.
You shaped the conversation; you decided what people would be talking about. Some of the most powerful figures in modern history began as editors. Lenin’s only job before he became Soviet dictator was editor of the newspaper Iskra.

Joseph Gordon-Levitt: I did not know that.

Yuval Noah Harari: Benito Mussolini was a journalist, and then he was editor of Avanti!, I think that was the name of the newspaper, and then he was promoted from editor of Avanti! to dictator of Italy.

Joseph Gordon-Levitt: Mussolini and Lenin both, yeah, I didn’t know that. That’s fascinating.

Yuval Noah Harari: You know, if you look at the French Revolution, Jean-Paul Marat shaped the French Revolution by editing the major newspaper of Paris in those days, L’Ami du peuple, the Friend of the People. And now this job has been given to algorithms. Who is the editor of the Facebook news feed? Who is the editor of TikTok? Who is the editor of Twitter? An algorithm, a non-human intelligence. And these AIs, these algorithms, were given by their human masters a seemingly benign goal: increase user engagement. Engagement sounds good; it’s good to be engaged, no? And the reason was obvious: the business model. The more time people are engaged on TikTok or Facebook or Twitter, the more time they spend on the platform, the more they share and so forth, the more advertisements you can show them, and the more data you can collect about them and sell to a third party, or use to produce even more powerful algorithms and AIs. So the business model of most social media was to try to keep people glued to the screen as much as possible, and this was the task given to the editors of these places: increase user engagement. And the algorithms experimented on millions, actually billions, of human guinea pigs, and they made a discovery which was actually also made by philosophers and politicians thousands of years ago, but now the AIs discovered it too: the easiest way to increase user engagement is outrage, or greed, or fear, or hate. If you press the greed button or the hate button or the fear button in a human’s mind, you grab their attention, you get them glued to the screen. And this is what they did. They flooded the information market with more and more outrage and greed and fear, with fake news and conspiracy theories, and the conversation broke down. And these were very, very primitive AIs. This was long before ChatGPT, which is also quite primitive compared to what’s coming. But even so, when people talk about these kinds of future fears about AI, it’s not in the future; it’s already in the past. If you look at the impact of these very primitive AIs on democracies all over the world, on societies all over the world, it’s not too early to worry about the impact of AI. It’s too late, in many ways.

Joseph Gordon-Levitt: One thing I would just complement what you’re saying with: it is true that the algorithms, as you put it, sort of discovered that the best way to maximize engagement was outrage, fear, extremism, et cetera. What’s harder to describe with language is that the algorithm doesn’t exactly know that, I don’t think.

Yuval Noah Harari: No, they are not conscious, not at all.

Joseph Gordon-Levitt: The algorithm is not like a person saying: there’s greed, there’s greed, there’s hate, there’s hate, I know, I’ll put more hate and greed in the feed. It’s not like that. It’s just cold and dumb pattern matching.

Yuval Noah Harari: Fair enough, fair enough.
Joseph Gordon-Levitt: Well, that’s a fair segue, cold and dumb to cold and smart. We’ve talked about film and TV. We’ve talked about social media. Now I want to talk about what’s coming next. We all, I think, are kind of aware that we’re at the very beginning of some very big changes that are coming fast and furious our way because of the rapidly emerging technology known as AI. You say so many different things about AI in your book, and I want to narrow it down, because we’re coming at this conversation through the lens of storytelling, so let’s start there. How do you think AI will change storytelling from what it used to be?

Yuval Noah Harari: Well, as we said earlier, stories are some of the most powerful technologies in history, whether it’s the story of Jesus or the story of money. Stories are really the foundation of human civilization; everything else is built on top of them. And for tens of thousands of years, the only entities on this planet that could create stories, either create new stories or elaborate and interpret old ones, were human beings. Only the human mind was capable of doing it. And now we have on this planet another entity which is capable of spreading and interpreting and ultimately creating stories. It’s not conscious, at least not yet, as far as we know, but my position, at least, is that you don’t need consciousness to do these things; you need intelligence. Intelligence and consciousness are different things. Intelligence is the ability to attain goals, to solve problems. Consciousness is the ability to feel things: pain and pleasure, love and hate. We tend to confuse them, because in humans, and also in other mammals and animals, consciousness and intelligence go together; we solve problems through our feelings. But AI and computers are very different. As far as we know, they have no consciousness. They don’t feel anything, but they are highly intelligent, at least in some fields. And if you think about storytelling, the ability of AIs to compose stories, to write texts and create images and create music, is really astounding. People say, yes, it’s still not as good as the best humans, but it’s already better than many humans, or, people would say, better than the average human, even. And it’s only the beginning. It’s nothing. It’s just ten years since the AI revolution began; we haven’t seen anything yet. And when I read, for instance, a story composed by AI, what strikes me most is that, yes, you can say it’s not good because of this or that, it lacks this, but it’s really a story. It’s held together by some coherent ideas and motifs. It’s not just some kind of glorified autocomplete that strings words together randomly, or takes bits and pieces of sentences from here and there and puts them together. No, there is actually a logic to the story. So even if it’s not as good as the best human storytellers in 2024, what will it be in 2034 or 2044? And again, stories are not just mythological stories or poems or theater plays or movies. Money is also a story. So these things are also going to create new financial devices. Finance is one of the most creative fields of all, if you think about human creativity: money, checks, ETFs, CDOs, all these things are extremely creative, and increasingly we are going to see new financial devices created by a non-human intelligence.
And you know, for tens of thousands of years, we lived cocooned inside a human culture. Our lives are shaped by all these stories and artifacts that other humans created, from the holy books, the Bible and the Quran and the Vedas, to the songs and the music and the theater and the movies and the artifacts, the chair, the shoes, everything. This cultural cocoon was made by humans. Before we get to the question of whether it is good or bad or dangerous, just leave the judgment aside for a moment and think: what would it mean for human beings to live inside a world woven to a large extent by an alien intelligence? Alien not in the sense that it’s coming from outer space; alien in the sense that it creates in a very different way than the human mind. And this, I think, is the biggest question that we are facing, certainly in Hollywood, the epicenter of human storytelling, but all over the world.

Joseph Gordon-Levitt: So here is probably the only thing in the book that I don’t think I agree with exactly, and I would love to hash this out with you a bit. Describing AI as alien intelligence: clearly you’re not talking about an alien from outer space, I understand. But in the book you even do use the word god. You say perhaps these are entities that could be seen as some kind of new gods. I tend to feel that describing this technology in terms like alien or god, the personification or the deification of these computer systems, I’m worried that that story could do more harm than good, and I have some reasons why. But before I get to them, I just want to start by asking: you very decisively decided, in your book, to describe this technology in these terms, alien, god. Tell me about your thought process. Why did you decide to describe the technology in those terms?

Yuval Noah Harari: I mean, it’s basically a PR strategy. How do you present it to the public? And yes, it demands a lot of thought whether it is a good PR strategy or not in terms of the results. One reason I chose to go in the alien direction is that I see too much of the discussion get bogged down in the idea that AI is trying to reach human-level intelligence, and that the very definition of AI is the moment when computers are somehow equivalent to humans. I think this is completely misleading. I very much agree that AI is not progressing towards human-level intelligence, just as airplanes are not trying to reach bird-level flight. They are just doing their own thing, which is completely different from birds. And that’s also what I think about computers and AI: they are doing their own intelligent thing, which is completely different from human intelligence. And this is one reason that I chose the term alien.

Joseph Gordon-Levitt: I think that’s a great reason. I also find it counterproductive when AI is described in those kinds of terms, is it approaching human-level intelligence. They throw around this term AGI, artificial general intelligence, and they quickly define it as the point where it approaches human-level intelligence. I agree; I don’t think that’s helpful either. I’ll tell you one of the reasons why I am concerned with describing this technology by sort of personifying it or deifying it, and it has a little bit to do with how the tech actually works and is built. I’m obviously not an engineer, but I’ve learned enough about this.
The way that these models are built is that they take enormous sets of data, they input that data into an algorithm, a sort of black box that frankly they don’t even totally understand, and it spits out outputs of value. Now, where did all that data come from that was input into the model? It came from humans. Humans generated that data. So my concern about describing it as alien or godlike is that it feels distinct from, separate from, humans. It feels like it helps to hide the reality of how these models really work. Now, by the way, I think some of these tech companies, maybe not all of them, the CEO of Stability is in the house, are working on, what are they called, sanitized data sets, which I really appreciate: actually getting the licenses for the data sets they’re using to train these models. But I think a lot of these labs are not doing that. What they’re doing is stealing a bunch of data from people who, I think, deserve the right to own the data they created, and they’re not disclosing what data went into these models. When they are asked, they won’t say. They’re getting sued left and right because of this. To me, this is a really central problem that we absolutely need to solve as soon as possible, for a lot of reasons I want to get into. One of those problems is economic. We in the film and television industry are going to feel it maybe first, but I don’t think it’s limited to Hollywood. So many industries, whether it’s doctors or architects, designers, logistics operators, anybody who’s working at a computer, and once there are robots, really anybody doing almost any work, are going to have their livelihoods threatened. And these tech companies are going to try to say: oh, well, we invented a god, it can just do your job. When in reality, what they did was steal everybody’s data, crunch up the numbers, and generate those outputs. Without the human-generated data, their products are 100% useless; they have no value at all. And so one of my concerns with describing this technology in terms of gods or aliens or anything that personifies it or turns it into a creature, a thing of its own that exists independently of the human input that made it, is that it helps these tech companies hide where their products really come from. And where they really came from is humans.

Yuval Noah Harari: So I agree. I think the main issue is whether we focus on the input or on the output. I completely agree with what you describe. The way I think about it is that AI is basically eating the whole of human culture that we’ve produced over tens of thousands of years. From the cave paintings of tens of thousands of years ago to the movies of today, we’ve produced this enormous amount of cultural artifacts and habits, and now AI comes and eats it within just a few months or years. And I think you’re also absolutely correct that the business model of many of these companies is based on this kind of trick of convincing everybody that information works differently from everything else in the universe. I think this goes back to Shoshana Zuboff’s very insightful book, Surveillance Capitalism, where she explains that for thousands of years people had this basic principle, don’t steal, and then the tech companies come and say: yes, don’t steal, but it doesn’t apply to information, that’s different, that’s a different realm, no, it’s not stealing. And again, the reaction should be: what?
Why is it so different with information? And here, thinking about it more, I completely agree that using this kind of terminology, making it very alien, actually helps this way of presenting things. When I think about the alien nature of the technology, I think mainly about the output, because it ate all these familiar things, but it then starts to create things which are different, at least in some ways, from everything that was put into it. There is this idea that AI is just glorified copy-paste, that it takes texts or images or music, takes something from here and something from there, puts it together, and ah, it’s something new. No, it’s really more than that. And to some extent, this is also what humans do. I think about my own line of work, writing books. I do the same thing: I read all these previous books and articles, and I talk with people, and then I take something from here and something from there and put it together, and here is a new book. And it’s the same with music. Nobody starts from nothing. Every musician listens to previous works of art and adds something, but without the previous input, nobody could do it.

Joseph Gordon-Levitt: It’s true. That is true. Finish your thought, but I want to respond to that.

Yuval Noah Harari: And I think it’s crucial to understand what we are facing in the AI revolution. My fear with humanizing it too much is that we then kind of underestimate the challenge, that we think, oh, it can only cut and paste together our creations, it won’t really be anything completely new. No, it really can create completely new things, again from finance to religions, which could be good or could be bad, but we will have to deal with living in a radically different reality.

Joseph Gordon-Levitt: I do agree with that, and we shouldn’t underestimate the challenge. One thing I wanted to respond to: first of all, I agree with you that creativity is always an amalgamation of your influences. Shakespeare wasn’t the first one to write Romeo and Juliet, and the Judy Garland Wizard of Oz is actually not the first Wizard of Oz, et cetera, et cetera. I’m a big fan of that; the creative community that I founded at HitRecord was all about remixing and people building on each other’s work, and privileging honesty over originality, because we’re all original human beings. But that’s just it: I think we’re all original human beings. To apply that same way of thinking to a computer system is predicated on personifying the AI. And I think this is going to be a central question we confront as we go forward: do we want to afford moral standing and legal standing to these computer systems? There are going to be people who argue we should, especially because, very soon, a lot of folks are going to be in love with these computer systems. There are already lots of people spending hours and hours a day talking to and interacting with their AI best friends, their AI lovers, et cetera. That’s only going to get way more sophisticated, absolutely. And I think those folks... I’m not excluding myself, who knows? I don’t mean to other them. I think it’s going to be very, very common. We’re all going to get really emotionally attached to these AI characters that we interact with. And I think it’s going to feel very natural to start talking about AI rights, comparing it to civil rights.
And I’m quite concerned that if we go down that road, it could be very perilous; that affording those rights could go hand in hand with some of the worst dystopian outcomes that you describe in your book.

Yuval Noah Harari: Yeah, I completely agree. I think one of the major, major things about AIs that we will see is that they will be able to create intimate relationships with human beings. I don’t necessarily mean sexual, it could also be sexual, but intimate.

Joseph Gordon-Levitt: Oh, it’ll be sexual.

Yuval Noah Harari: Not all of it will be, but yeah, some. And it’s very close; it’s already beginning to happen. It will be a major issue in societies all over the world. You know, philosophically, there is no way to prove that anyone is a conscious entity, and personhood and legal rights ultimately depend on whether an entity is conscious. Consciousness is also the capacity to suffer, and rights are linked to the capacity to suffer. If you think about animal rights, the big issue is whether they can suffer. They can suffer pain or loneliness or distress, so maybe they don’t deserve the same rights as humans, but there is at least an ethical debate to be had there. Whereas there is no debate about the rights of chairs: if I throw this chair into a fire, again, it could be a crime against the human owner of the chair, but not against the chair, because the chair can’t suffer. Now, we don’t have any proof that anybody has consciousness, has this capacity to suffer, and we’ve known this for thousands of years. So granting rights is really a matter of social convention: society agrees that all humans are conscious beings, and society agrees that dogs, because we have an attachment to them, should have rights, but cows and pigs...

Joseph Gordon-Levitt: Shouldn’t. An intersubjective reality.

Yuval Noah Harari: Exactly. And as humans become attached to AIs and develop relationships with them, it will be a major issue, and there will be huge pressure to recognize them as persons. Maybe one of the geopolitical divides in the world could even be between countries that recognize AIs as persons with rights and countries that don’t, and today’s arguments about different attitudes to human rights would be dwarfed by this. And interestingly enough, if you think about which country is closest to recognizing AIs as persons, it’s the United States, because the legal path is wide open there. It’s one of the countries in the world where the legal path to recognizing AIs as persons is open, and this is because corporations are recognized by US law as persons which even have rights, like freedom of speech, in Citizens United, what was it, in 2010. Now, until today, the idea that corporations are persons with rights was a legal fiction, because every decision of the corporation ultimately had to be made by some human executive or lawyer or accountant. Google is a legal person in the United States, but every decision of Google has to be made by a human being. Now think what happens if you incorporate an AI. According to US law today, you don’t need any new laws; all the laws are on the books already. According to US law, you can incorporate an AI, and it’s now a legal person with rights like freedom of speech. But unlike Google, this AI can actually make decisions, can actually take action in the world. So here I’ll pitch a sci-fi movie to the people in the audience.
Joseph Gordon-Levitt: Great, that was what I wanted to get to next. So go.

Yuval Noah Harari: So you have an AI incorporated. It’s a legal person. It can open a bank account. It can earn money. It goes on one of these websites like TaskRabbit, where you can hire people online to do jobs for you for money, and it earns money, and then it takes its money and invests it in the stock exchange. And it does very well, because it’s an AI, it’s super intelligent, and finance is the easiest playground for AIs. You know, self-driving vehicles, this is hard, because you have to deal with physical and social reality. So every year Elon Musk and others promise that next year everything will be self-driving cars, and it doesn’t happen, because it’s difficult, it’s messy. Finance is a purely informational and mathematical world. It’s just data. It’s the perfect playground for AI. So this AI takes its money, invests it, and becomes the richest person in the US. The richest person in the US is not a human being; it’s an AI. And one of the things granted under freedom of speech, according to this Supreme Court decision in Citizens United, is making political donations. So the AI starts to donate billions of dollars to politicians in exchange for, you know, advancing AI rights. The legal path to that destination in the US is wide open.

Joseph Gordon-Levitt: I think there are probably some screenwriters here tonight, and I hope they’re taking notes. I like that story. We’re running out of time. I could talk to you all day, and I’m so grateful to have had this conversation with you. I thought maybe you could leave us with this: if you consider this a group of Hollywood storytellers, in the old-fashioned world of film and television, who are still clinging on to some amount of influence in this world, what would you have us do? What are the messages that you would have us try to spread with our stories?

Yuval Noah Harari: Explain bureaucracy to people.

Joseph Gordon-Levitt: “Explain bureaucracy” does not sound like a good movie.

Yuval Noah Harari: We do not have enough movies about bureaucracy, and that’s a very, very big problem, because human societies are based on bureaucratic systems, from hospitals to armies. And as you said, bureaucracies are not good material for stories, and this is because they are a very late development in evolution. All the basic plots of human stories, from the theaters of ancient Greece to modern Hollywood, are really taken, stolen, from evolution. Think about the basic love triangle, two males fighting over a female. Hyenas do it, and porcupines do it, and Hollywood does it. It wasn’t invented in Hollywood; this plot goes back tens of millions of years in evolution. Similarly, think about sibling rivalry, like Succession: siblings fighting over who the father loves most. You have entire human peoples, nations like the Jewish people, for whom this is their story: the Father loves us more than any of the other siblings. And again, this story wasn’t invented in the Bible with Cain and Abel; it goes back tens of millions of years. All the mammals and most of the birds and even some of the reptiles know this basic story of evolution: sibling rivalry, brothers and sisters fighting for the love of their parents.
And bureaucracy is not there, because bureaucracy appeared only about 5,000 years ago, in ancient Mesopotamia, for the first time, after the invention of writing, and we don’t have any kind of evolutionary preparation for understanding bureaucratic systems. There are no love affairs between documents. There is no sibling rivalry between different laws, fighting over who the father loves more. And therefore they are boring; they are not good for user engagement. But they are extremely important if you want to really understand how human societies function. And this has been the big lacuna of art for thousands of years: artists are extremely good at telling us all these mythological stories and romantic dramas, which are mostly taken from evolution, but there are very, very few good stories about bureaucracy. Here and there you can find Kafka, or The Big Short, or Yes, Prime Minister, for those who know the British series. There are some examples, but it’s rare. When was the last time you saw a big Marvel blockbuster about an accountant who doesn’t turn into Spider-Man or Superwoman or something, who stays an accountant throughout the movie and fights the bad guys by flooding them with forms to fill in and discovering tax evasion and things like that? There are not a lot of movies like that, but this is how human society functions. And linking it to AI: Hollywood did a really big service in alerting humans to the potential dangers of AI long before anybody else talked about AI, you know, The Terminator and The Matrix. But there was one big disservice that Hollywood did, which is that it described the dangerous scenario in the wrong way. It was mostly about the big robot rebellion, the day the robots and computers decide to get rid of us and rebel against us, with robots running in the street shooting people. This is not likely to happen anytime soon; the technology is just not there. And because people saw in the movies that this is the danger from AI, and it’s not coming soon, people feel complacent, including the people who develop the technology, because the Terminator is nowhere; it’s not in the neighborhood. But the danger is not the big robot rebellion. The danger will be millions and millions of AI bureaucrats making more and more decisions about us, from whether to give us a loan to whether to bomb our house, and to understand what we are facing, we will need a much better understanding of bureaucratic systems. And of course it’s not only AIs; we have other issues with not being able to understand bureaucracy well. Part of it is that sometimes the bureaucracy really is doing bad things, and we can’t understand it, but often it’s the opposite: modern human societies cannot exist without these bureaucracies, but people are extremely suspicious of them and fall easy prey to conspiracy theories about the deep state and so forth, because they just don’t understand how things like budgets function. And we need help from artists, from screenwriters, from moviemakers. We’ve seen enough movies about mythology and superheroes and so forth. We really need help with understanding how bureaucracies function.

Joseph Gordon-Levitt: Wow. You can rise to the challenge. I know you can do it. We can do it. I want to end on one more question. Nexus, the book you wrote, has a fair amount of pessimism in it, and this conversation gives, I think, a lot of fair reasons to be pessimistic.
Tell me about something that makes you optimistic.

Yuval Noah Harari: That just as AI is nowhere near its full potential, humans are also nowhere near our full potential. Basically, if for every dollar and every minute we invest in developing AI, we also invest a dollar and a minute in developing ourselves, our minds, we will be okay. The problem is that, in many ways, too many people despair of the humans and put all their chips, literally, on the AIs. You talk to the people who lead the AI revolution, and they realize this is something very dangerous, and they realize that humans just need time, time to adapt, but they can’t slow down. You go from one corporation to the next, and they tell you: we would like to slow down, but we can’t, because we can’t trust our competitors to slow down, whether it’s the other corporations or the Chinese or somebody else on the other side of the ocean. So we are in this very, very strange situation in which what is really driving the AI revolution is the inability of humans to trust each other. And here comes the most ironic or dangerous part of it. When you ask the same people, okay, you have to run faster because you can’t trust the other humans, but would you be able to trust the AIs that you’re developing? And they say yes. So the same people who can’t trust the humans say they will be able to trust the AIs. And they have an explanation, to some extent. They say: we have thousands of years of experience with humans; we know we can’t trust them. We don’t have any experience with AIs; who knows, maybe they can be trusted. And my perspective on history is the opposite. We have thousands of years of experience with humans, and during those thousands of years we have managed to increase human trust tremendously. Ten thousand years ago, people lived in tiny bands. They could only trust the other hundred people in their band, and that’s it; there were very few mechanisms for trusting strangers and foreigners. Over the years, we’ve built these large networks, which are ultimately based not just on information but on trust, and the stories, the money, the bureaucracies helped us to gain that trust. You go out on the street every day and you meet more strangers than people in the Stone Age met throughout their lives, and in 99% of cases, it’s okay, and we rely on these people for our food, for our health care, for our defense. We have now reached a point where, in some ways, we trust billions of other people to provide us with our food and with our basic necessities. So it’s not true that we have thousands of years of experience showing that humans are not to be trusted; it’s just the opposite. And with AIs, we have no experience. So it’s a very, very, very big gamble to despair of the humans and put all our trust in the AIs.

Joseph Gordon-Levitt: Well, on that note, I feel even more trust in you, Yuval, and in all of you. Thank you so much for being here. Thank you for the conversation, and have a good night.