FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

One of the reasons why humans are in control of so much is that we are the most intelligent. We're able to build nuclear power plants, we're able to go to the moon, and we're able to do a lot of different things because of our intelligence. But if we build AI systems that are more powerful than us, then we'll be passing that mantle on to them. And then our fate might be the fate of some other species, like, say, gorillas: although they're stronger than us, we are the more intelligent ones, so we've had control over them. If AIs are the ones that are more intelligent than us, we could be in a similar situation. I think one reason there might be analogies between AI and religion is that this has such cosmic significance. This isn't like any other generic technology, and that makes the stakes all the higher. We could end up in some utopia where AIs have automated everything and we get leisure and bliss, or things could be very catastrophic, and, at worst, humans could go extinct.

I don't think AI is just another technology. AIs are not just tools; they're agents. They can carry out long sequences of decisions and plan. This is not like a word processor. I think it's an incorrect framing that this is just another stage in the Industrial Revolution. A more appropriate framing is not that this is just another economic development, but that this is a cosmically relevant transition period. There was single-cellular life many billions of years ago, and then multicellular life, and now here is the transition from biological life to digital life. So I don't think this is like any other technological development.

I also don't think we would simply adjust to this technology like any other, with new jobs being created so that we can go on our way. I don't think it's like that, because if we have human-level AI that can do tasks as well as humans and more cheaply, then there isn't any reason for human labor anymore. So, by definition, there wouldn't be new jobs created for humans to do, because AI could do those as well. That's what makes this fairly distinct compared to other types of technologies. Other technologies might automate some aspects of society but create many new roles; human-level AI, artificial general intelligence, is so general that it would also cover any new economic roles that could emerge. So there would be nothing left for humans to end up doing.

It seems the definition of artificial general intelligence has shifted over time. It used to just mean "not narrow AI." An example of narrow AI would be an AI that can play chess, or an AI that can identify whether a movie review is negative or positive. However, now we have AI systems that can do all of those things: they can play chess, write emails, write poetry, do calculus, and so on. So we actually have something that is fairly general and is an artificial intelligence system, but people don't want to call it AGI. I think the definition has slipped, or drifted, into meaning something like human-level AI, or expert-level ability in every single domain. And that's definitely not something we have today. Current AI systems have a lot of knowledge, and they know about a lot of different topics, but they don't have a lot of the know-how.
And that's why people are not willing to call this artificial general intelligence currently. But we might achieve artificial general intelligence this decade.

Although the public, by and large, is quite concerned about these developments in AI, a lot of the people developing these systems, especially at the companies building them, are mostly interested in increasing their company's valuation. OpenAI was originally founded to try to create safe and beneficial AI, and an organization like that should take safety very seriously. Nonetheless, it shifted from being a nonprofit to a capped for-profit so as to raise more capital, and now it is productionizing AI and trying to develop the most powerful AI systems as quickly as possible. Some people inside OpenAI thought it wasn't concerned enough about safety, so they left and created an organization called Anthropic. What happened is that they prioritized safety initially, but about a year later they recognized that, to be competitive, they were going to have to start racing ahead just like all the other organizations. So they too ended up prioritizing the productionization of AIs, scaling them and making them as intelligent as possible as quickly as possible. There's this repeated pattern of people caring about safety but ultimately ending up pursuing making AIs as competitive as possible and trying to build AIs that can automate jobs very quickly.

Right now, AI development is driven not necessarily by human values but by what makes the most sense competitively. By default, there are strong incentives for everybody to integrate AI into their systems, whether that's companies displacing humans and automating away their jobs, or militaries automating big chunks of their operations and transferring a lot of lethal force to AI systems. I think that's among the most pressing risks we have to deal with.

I think it's important to try to address all of the risks from AI, because they're so interconnected. There is a risk of malicious actors, where people use AIs for destructive purposes. They could really upend society if they do something like create a bioweapon, or if they hack and attack our critical infrastructure. This could create a lot of global turbulence or lead to a large-scale loss of life. An additional risk source would be the racing dynamics of the AI developers, who prioritize the profit of AI systems and making them more intelligent as quickly as possible over their safety. A third risk source would be generic organizational safety: nuclear power plants can melt down, rocket ships can explode, and scientists don't always get everything right. There could likewise be accidents with late-stage AI systems that spell a catastrophe. Finally, there are the risks from rogue AIs. If we created systems that are smarter than us, they could do things like extort us with bioweapons: if you try to shut us down, we will release a bioweapon that would affect humans but not necessarily AI systems. They could do other things too: they could hack to steal cryptocurrency, or they could try to sow political discontent by manipulating online interactions, pretending to be humans in those interactions to convince people and sway politics. So rogue AIs would be very dangerous. I don't think it's more likely than not that we lose control of them by default.
But if we do, then we're in a very difficult situation: we'd be facing an adversary like nothing we've ever had before. There have been a lot of ways to harm civilization that we haven't had to worry about, because the barriers to entry were so high: you'd need expertise in hacking or a PhD in virology to pull it off. But if that kind of intelligence is given to everyone, then we're suddenly at substantially more risk. There are plenty of people who are willing to cause substantial harm to society; we have school shooters and other mass killers, people who just want to take down society. Those would be very risky people who could potentially bend AI systems toward causing a large-scale loss of life and creating something like the next pandemic, but this time potentially a hundred times worse.

There are other types of malicious use, like using AI for personalized persuasion. You could imagine chatbots that know your weak points, can customize themselves specifically for you, and argue with you at length to try to convince you of a viewpoint. This could be very dangerous in future elections, where the public is being influenced at large by a lot of AIs that people think are actually humans on the other end, trying to convince them of some specific view and to vote in some particular way. You could imagine this facilitating echo chambers, with people living in their own ideological enclaves, and I think that would also pose some societal-scale risks.

And then finally, another risk is not random rogue actors misusing AIs, but powerful groups misusing AIs, such as corporations or governments. They could potentially use AIs to create a surveillance state that processes all the information about you and knows exactly where you are, and that could be used by some governments to squash dissent, entrench their own power, and keep everyday people under extreme control. This wasn't feasible historically, because there was too much information to process and you would need people to do it manually. But if AI can automate that process, then, unfortunately, you can have entrenched, unshakable totalitarian regimes.

Finding a solution will require striking an appropriate balance, where multiple different stakeholders influence how AI is developed and how it's distributed across society, and where, if there are restrictions, those restrictions are kept minimal. An example restriction for malicious use would be simply not having advanced virology knowledge inside the AI systems that everybody has access to. This wouldn't really impose a burden on most people's usage of the technology, and there could be a special model for people who actually do research in virology. So that would be a fairly minimal restriction that would address many of these potential malicious-use concerns. The best solution to this isn't some kind of aristocracy, where one AI company determines everything that happens with AI. And I don't think a good solution is that everybody gets the ability to create bioweapons, everybody is able to hack, everybody is able to ask an AI how to build nuclear weapons, and so on. It needs to be somewhere in between. I think that looks like something more like a democratic institution, with multiple different stakeholders influencing the decisions about how AI is developed and distributed across society.
So if we create an AI system and instruct it to accomplish a goal for us, it creates a lot of subgoals that we don't particularly anticipate. If I ask a robot to go fetch me a coffee, then a subgoal gets created: it must preserve itself, because if it gets shut off, it can't get me the coffee. So even for fairly simple goals, there might be subgoals created that are actually in tension with what we want. A mythical example of this would be King Midas, who asks that everything he touches turn to gold, and we all know how that goes. This is the sort of issue we have with AIs: if they're extremely powerful, they might accomplish our goals in very unexpected ways. So we need to make sure that they reliably represent very complicated human values. We've got a reliability problem mixed in with an AI ethics problem, and if we want AI to be beneficial, we're going to have to resolve both of those.

One way we could reduce the risks from rogue AIs is by trying to detect whether they are lying to us. For example, if we could ask AI systems what their ultimate plans are and whether they intend to turn on us at some point, then we're no longer at risk, because we can just ask them what their plans are. But that's not an easy problem, because to do that we would need to make more progress on the transparency of these AI systems. Right now, these systems are made up of lots and lots of numbers, several billion of them, sometimes trillions, connected in very complicated ways. The designers of these AI systems did not arrange them that way; all we did was create a process that gave rise to those numbers. We're not really hand-designing what these AI systems are like; "growing" them is maybe a more appropriate analogy. To make sure that AIs would not have intentions or goals separate from our own, we would need to make substantial progress in making sure these aren't black boxes, so that we can see transparently inside of them and understand their inner workings. But that's a problem the field has been grappling with for a very long time, so it's not clear it will be solved in time, and more technical research will be needed to reduce these risks from rogue AIs. In the short term, this is not as much of a concern of mine, but an ounce of prevention is worth a pound of cure, and it might take decades to solve, so it's important to start now.
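To make the scale of that transparency problem concrete, here is a minimal sketch in Python. The layer sizes are hypothetical, chosen only for illustration; the point is that even a toy network is already millions of numbers whose values would normally come from a training process rather than from any designer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes, for illustration only.
d_in, d_hidden, d_out = 1024, 4096, 1024

# Designers choose the *shape* of the model...
W1 = rng.normal(size=(d_in, d_hidden))
W2 = rng.normal(size=(d_hidden, d_out))

# ...but the *values* of these numbers are normally produced by an optimization
# process (gradient descent on data), not written down by any person.
def forward(x):
    return np.maximum(x @ W1, 0.0) @ W2  # a simple ReLU MLP

n_params = W1.size + W2.size
print(f"Toy model parameters: {n_params:,}")  # 8,388,608 (about 8.4 million)

# Frontier systems have billions to trillions of such parameters, and no one
# can read off goals or plans by inspecting the raw numbers directly; that is
# the "black box" transparency problem described above.
```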
AI sentience is relevant for AI risk because, if AIs become sentient, people might try to give them things like rights. Then we can't control them anymore, because they have rights and protections against us turning them off; we can't just flip the off switch, because that might be tantamount to murder. So this reduces our control and our ability to select for the AIs that are more beneficial and against the AIs that are potentially more harmful, if they have some sphere of protection.

Imagine that we had two species, the human species and some AI species, and the AI species is sentient and so deserves rights. That species might get smarter by something like 30% a year. It could create adult "children" in a matter of seconds, just by copying and pasting itself, and if it only costs a few thousand dollars to make such an AI system, while people take 20 years to raise a human adult capable of making money, then I think I know which species would end up taking over the larger ecosystem. The growth rate of AIs with rights would be substantially higher than that of humans, and so I think that would be a path toward them taking over the ecosystem and acting like an invasive species relative to humans in the long run.
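As a rough back-of-the-envelope sketch of that growth-rate comparison: the 30%-per-year figure is the hypothetical used above, and the other numbers are assumptions for illustration only.

```python
# Illustrative numbers only: compare compounding AI improvement and copying
# with the human timescale of raising one working adult.
years = 20
ai_growth_rate = 0.30                  # assumed ~30% growth per year (figure used above)
ai_factor = (1 + ai_growth_rate) ** years
print(f"AI side after {years} years: about {ai_factor:.0f}x")   # about 190x

human_generation_years = 20            # roughly one human generation to working adulthood
print(f"Human side after {years} years: roughly one new generation")

# Even with very crude assumptions, compounding at that rate, plus copying in
# seconds for a few thousand dollars, swamps human reproduction and training
# timescales; that is the sense in which such an AI "species" could come to
# dominate the ecosystem.
```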
Across many decades, it seems more likely than not that we would have AIs that are sentient, which creates substantial moral issues. How much should we value their well-being? Are we like slave owners with respect to AI? These are very complicated moral questions that we'll all have to grapple with if AIs become sentient.

One large source of disagreement is how quickly these AI technologies will become human-level or superintelligent. A lot of the AI developers, like many of the leaders at OpenAI, think it could easily happen during this decade. Elon Musk thinks that within five to six years we would have a superintelligence vastly more intelligent than any person on Earth. He thinks there's maybe a 20% to 30% chance of doom or apocalypse from AI in the next few decades. So he's taking it very seriously, and he's trying to create a company focused on making AIs truth-seeking, which could make them beneficial to humanity. He's also mentioned other goals, like maximizing human freedom of action, or human autonomy, as well as maximizing human civilizational happiness over time. I think this shows a more pluralistic approach to the values we're putting into AI systems: not trying to have them do just one thing or accomplish one goal, but pursue a variety of different goals, such as truth, autonomy, or human happiness.

We are all in the same boat with respect to existential risk. We are all part of the human species, and we wouldn't want AIs to end up overtaking us. I don't think the winner of this race would be any specific corporation, or the US, or China; instead, the AIs themselves would end up getting most of the decision-making in society, potentially all of it. So if we can coordinate, this might look like a shared international institution, a global project where multiple countries develop AI together, and then we can proceed at a pace where we understand the risks and are able to manage them appropriately, instead of barreling ahead and trying to race to a superintelligence in a few years.

In the shorter term, we could give companies more of an incentive to control the risks they're imposing on society, through things like liability laws, where they get fined if people use their AIs to cause a lot of damage, and regulations that keep AI developers from cutting corners on safety. But eventually this probably becomes a larger global arms race between countries and their AI projects, so we're going to need coordination at that level as well.

I would hope that we get to a state where AI development proceeds at a more cautious, prudent pace. That could be a democratic, international institution where multiple countries have influence over AI development, and it proceeds at a pace that keeps the risks at a negligible level. Then we could develop AI and reap the economic benefits from it, and we wouldn't be racing ahead, cutting corners on safety, or getting into arms races with each other. That's the sort of path I would want to go down. And if we could get to that, then we could decide what to do with AI technologies when we're there. But if we don't get to that state, then we'll basically be pushed along by competitive pressures: What makes the most sense in the market? What makes the military the most competitive? And I think in those situations, we very quickly lose control.

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.