Hey there, world leaders! Do you like staying in power? Do you want to avoid widespread unemployment and complete economic collapse? Do you enjoy not being dead? Being alive? Fantastic news! Then let's agree to some basic international safety restrictions on advanced AI.

"Boo! Boo! We hate you!"

"We like AI, but what if China builds the super AI before we do?"

Right, so we'll all still die, but the AI that overthrows your government, takes our jobs, and kills everyone will speak Mandarin.

"But why should we trust you? Tell us what the real AI experts think."

"You know, I think AI will probably, like, most likely sort of lead to the end of the world. But in the meantime, there will be great companies created with serious machine learning."

"The danger of AI is much greater than the danger of nuclear warheads, by a lot. And nobody would suggest that we allow anyone to just build nuclear warheads if they want. That would be insane."

"I'm not an expert on how to do regulation. I'm just a scientist who suddenly realized that these things are getting smarter than us, and I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us. And it's going to be very hard, and I don't have the solutions. I wish I did."

"Oh, terrible! If only we had a detailed and politically feasible policy proposal outlining regulations that could make AI safer without stifling innovation!"

Fear not, world leaders: your oddly specific prayers have been answered. Greetings, children! I'm your host, Siliconversations, and welcome, welcome, welcome to Keep the Future Human, a new policy proposal from AI expert and cosmologist Professor Anthony Aguirre. And a huge thank you to the Future of Life Institute for paying me real human money to make this animated stick-figure video. You can visit keepthefuturehuman.ai to read the full policy proposal and find out more.
Now look, kids, sometimes regulations make business people sad, and that's okay. Hydroelectricity might be a lot cheaper if companies could build new water reservoirs instantly using nuclear weapons, but that would be dangerous and stupid. So even though it might make business people sad, regulations prevent private companies from manufacturing and/or detonating their own nuclear warheads.

Many experts believe future artificial intelligence will actually be a much more dangerous technology than nuclear weapons, because at least nukes can be completely controlled by their human operators; they don't spontaneously decide for themselves when to blow up. Modern AI systems are difficult to control because they're not designed directly, they're trained. To make a large language model like ChatGPT, for example, we take a neural network, force it to consume every piece of text ever written in human history, and then hit it with a stick until it produces outputs we like. This is called reinforcement learning, and it's super effective, with the downside that we have no idea why these neural networks produce any particular answer. Also, we've observed that AI models trained in this way tend to intentionally lie to us and refuse to obey instructions when their goals don't align with our own.

Current AI systems aren't smart enough to pose a realistic danger to human civilization. However, AI experts estimate there is a 50% chance that in the next two to six years we'll develop something called artificial general intelligence, or AGI for short. AGI is a controversial concept that can be roughly defined as human-level, general-purpose AI, where the humans in question are super geniuses. If your job mostly involves typing into a computer, chances are an AGI could take your job, and probably do it better, cheaper, and quicker. If two to six years sounds way too soon for genius-level AI workers, remember that AI-generated art sounded crazy until just a few years ago.

Keep the Future Human proposes a more useful definition of AGI, using what I like to call the AGI circles of death: a true AGI needs to be highly autonomous, general, and intelligent. An autonomous system is one that can take real-world actions without human oversight, like a self-driving car or a high-frequency stock trading algorithm. A general system is one that can perform a wide range of tasks; GPT-4, for example, could write code, analyze images, compose poetry, and answer trivia questions, among many other abilities. An intelligent system, in this context, is one that can perform tasks as well as or better than a human expert, like AlphaFold predicting the final structure of a protein from its amino acid sequence. An AGI that is truly autonomous, general, and intelligent would be like combining the world's 100 greatest geniuses into one person who also speaks every language perfectly, remembers every book ever written, never sleeps, never gets tired, and can be hired for $1 an hour.

This all sounds very sci-fi, but remember: the top researchers and industry leaders not only expect this technology to arrive within the next decade, they're betting billions of dollars on it. Mark Zuckerberg has said "it's become clearer that the next generation of services requires building full general intelligence." Meanwhile, OpenAI has declared that developing AGI is their main goal as a company, and Goldman Sachs has estimated that, in total, over $1 trillion will be spent on AI over the next few years. That's over three times as much as the inflation-adjusted cost of the Apollo program.
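To make the three-circle definition concrete, here is a minimal sketch of how the autonomy/generality/intelligence test could be expressed in code. This is my own illustration, not something from the video or the proposal: the example systems and tier labels are invented, and the proposal's actual tiering (covered under strategy number four later on) is more detailed.

```python
# Toy illustration of the "AGI circles of death": a system only counts as
# full AGI when it is autonomous AND general AND intelligent. The example
# systems and tier labels below are invented for illustration.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    autonomous: bool   # takes real-world actions without human oversight
    general: bool      # performs a wide range of tasks
    intelligent: bool  # matches or beats human experts at its tasks

    def tier(self) -> str:
        if self.autonomous and self.general and self.intelligent:
            return "full AGI -> highest-risk tier, strictest regulation"
        return "tool AI (1-2 ingredients) -> lower tier, free to innovate"

for system in [
    AISystem("self-driving car", autonomous=True,  general=False, intelligent=False),
    AISystem("chatbot",          autonomous=False, general=True,  intelligent=False),
    AISystem("AlphaFold",        autonomous=False, general=False, intelligent=True),
    AISystem("hypothetical AGI", autonomous=True,  general=True,  intelligent=True),
]:
    print(f"{system.name}: {system.tier()}")
```

The point of writing it this way is that the dangerous category is the intersection of all three circles, not any one property on its own.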
Once we have AGIs performing AI research, we could see a runaway feedback loop, where the AGI designs a smarter version of itself, which designs an even smarter version of itself, which designs an even smarter version of itself, faster and faster, until it has created a godlike entity known as an AI superintelligence. This would be the point where human beings are no longer in charge of planet Earth. Humans trying to control a superintelligence would be like a hamster trying to control the government of Belgium. We're not yet sure if controlling a superintelligence is even theoretically possible, although scientists are trying to find ways to build AI that we can mathematically prove is safe. As of now, if a superintelligence decided that the most efficient way to accomplish its goals was to wipe out humanity, then we are absolutely getting wiped out.

AGI isn't just a gateway to superintelligence, though; it's also dangerous in its own right. It could destabilize both democracies and authoritarian governments through enormous misinformation campaigns and sophisticated malware attacks targeting electricity, water, and transportation networks. It could provide not only terror groups and drug cartels, but even individual people, with the ability to make their own biological and chemical weapons of mass destruction. If combined with advanced robotics, AGI could replace almost every human job, creating the largest unemployment crisis in history.

"There's two cases to worry about. There's bad uses by bad individuals or nations, so human misuse; and then there's the AI itself, as it gets closer to AGI, going off the rails."

"Perhaps the two biggest risks that I think about: one is what I call catastrophic misuse. These are misuses of the models in domains like cyber, bio, radiological, nuclear; things that could harm or even kill thousands, even millions of people if they really go wrong. And the second range of risks would be the autonomy risks."

So if this technology is so potentially destructive, why are we racing towards it as fast as possible with no safety guardrails in place? This interview with a top AI executive shows why tech companies are pushing for AGI: "Hello. I like money." Yeah. There are trillions of dollars to be made from replacing human jobs with AGI, so even though many tech CEOs are openly worried that this technology could kill literally everyone on the planet, they feel they can't slow down without letting their rivals get ahead. We therefore can't rely on tech companies to impose reasonable safety standards on themselves.

Meanwhile, the potential military applications of AGI are astonishing. Imagine having every pilot, drone operator, and general in your army replaced by an immortal super genius. This is driving countries to accelerate AGI development under arms race dynamics: if they have battleships, we need battleships; if they have nukes, we need nukes; if they have AGI, we need AGI. Of course, once the technology exists, it's very hard to stop it from spreading, which is why North Korea now has nuclear weapons despite the best efforts of every other country on Earth to stop that from happening. Software is even harder to contain, as recently demonstrated by OpenAI's assertion-slash-admission that the Chinese startup DeepSeek reverse-engineered ChatGPT to create their better, cheaper, faster product. Being the first country to hit AGI would be great, but unless you immediately start and win World War III to lock in your advantage, pretty soon everyone will have AGI, and your security situation will be way worse than before.

"Hey Siri, can you design a new bioweapon for me?" "Okay. Click here to view instructions on how to assemble and deploy your new bioweapon." "Thank you, Siri. Also, can you generate targeted online misinformation to destabilize fragile democracies?" "Okay. I'm destabilizing fragile democracies." "This whole terrorism thing used to be way harder." "I know, right?"

Assuming world leaders decide they actually don't want everyone from the Taliban to the Vatican to have access to weapons of mass destruction, what concrete steps could they take in the short term to lock down AGI technology? Well, Keep the Future Human proposes four major strategies.

Number one: oversight. All AI models over a certain size would need to be registered with the government. We'd measure model size using floating-point operations, also known as FLOP; basically, every time your computer does basic addition, subtraction, multiplication, or division, that's one FLOP. Keep the Future Human recommends a registration threshold of 10^25 FLOP for model training, and 10^19 FLOP per second during operation. This is roughly the performance you get from 1,000 top-of-the-line Nvidia B200 chips, worth around $25 million US, so the registration threshold is high enough that it won't affect startups, individuals, or academic research.
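As a back-of-the-envelope illustration of how these thresholds bite, here is a minimal sketch. It is not from the proposal: it assumes the common "6 x parameters x training tokens" rule of thumb for estimating dense-transformer training compute, and the example model size is made up. The 10^27 FLOP hard cap it checks against is the computation limit introduced under number two below.

```python
# Back-of-the-envelope check of a training run against the proposal's
# FLOP thresholds. The 6 * params * tokens estimate is a standard
# approximation for dense transformer training compute (an assumption
# here, not something stated in the video).

REGISTRATION_THRESHOLD = 1e25  # training FLOP: must register with government
TRAINING_HARD_CAP      = 1e27  # training FLOP: proposed international ban above this

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * parameters * tokens

def classify(flops: float) -> str:
    if flops > TRAINING_HARD_CAP:
        return "banned under the proposed computation limit"
    if flops >= REGISTRATION_THRESHOLD:
        return "must be registered with the government"
    return "unregulated (startups, individuals, academia unaffected)"

# Hypothetical 1-trillion-parameter model trained on 20 trillion tokens:
run = training_flops(1e12, 20e12)          # = 1.2e26 FLOP
print(f"{run:.1e} FLOP: {classify(run)}")  # registered, but under the cap
```

The same arithmetic works for the operation thresholds: at roughly 10^16 FLOP per second per chip, a thousand B200-class chips lands near the 10^19 FLOP-per-second registration line quoted above.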
Now, we're not just going to ask nicely for tech companies to comply with these rules. Only a handful of firms are actually capable of manufacturing cutting-edge AI chips, so a cryptographic licensing system could be built into the hardware itself. This would mean that chip clusters over a certain size literally won't work without constantly updated permission codes. Personally, I think this is the coolest part of the whole proposal. This would let the government easily track advanced AI development and throttle any experiment that starts to look dangerous by withholding the license codes. This part isn't in the paper, but charging a fee for the codes could also provide a new form of tax revenue to offset the economic impact of AI-driven job losses. Large AI models could also be encrypted to only run on these cryptographic chips, and the chips themselves could be geofenced so that they can't operate outside of a designated country, for example. This would mean that a stolen American AI model, or a chip that finds itself running on a foreign server, could immediately shut itself down, making it much harder to steal American technology going forward. This is basically a more secure extension of current US policy, which forbids exporting advanced AI chips to certain non-allied countries. You'll notice that not only do these rules make AI development much safer, they also improve national security for the countries that currently dominate the AI industry, which are the very ones we need to implement the safety rules. This is how you design a policy proposal, people.
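The paper describes the hardware-licensing idea at the level of intent, so the following is only a sketch of one way such a check could work, assuming the regulator's public signing key is burned into the chip at manufacture and large workloads are gated on a fresh, signed permission code. All field names, the expiry scheme, and the geofencing check are invented for illustration.

```python
# Hypothetical chip-level license check (illustrative protocol, not from
# the paper). Assumes the regulator's Ed25519 public key is burned into
# the chip and permission codes must be renewed before they expire.
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

class LicenseCheckFailed(Exception):
    """Raised when the chip must refuse to run a large workload."""

def verify_license(chip_id: str, country: str, code: bytes, signature: bytes,
                   regulator_key: Ed25519PublicKey) -> None:
    # 1. The code must be signed by the regulator, or it is worthless.
    try:
        regulator_key.verify(signature, code)
    except InvalidSignature:
        raise LicenseCheckFailed("permission code not signed by regulator")

    payload = json.loads(code)
    # 2. Codes are bound to one physical chip, so they can't be shared.
    if payload["chip_id"] != chip_id:
        raise LicenseCheckFailed("code was issued for a different chip")
    # 3. Geofencing: a chip outside its licensed country refuses to run.
    if payload["country"] != country:
        raise LicenseCheckFailed("chip is operating outside its licensed country")
    # 4. Expiry forces the "constantly updated permission codes" behavior.
    if payload["expires"] < time.time():
        raise LicenseCheckFailed("permission code expired; request a new one")
```

In firmware, every large training or inference job would gate on a check like this, which is what would let a stolen chip or model on a foreign server immediately shut itself down.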
Number two: computation limits. Keep the Future Human proposes an international ban on training AI models with more than 10^27 FLOP, or running models at more than 10^20 FLOP per second. AI companies have largely stopped publishing their FLOP figures, but this limit is probably around 10 to 100 times larger than GPT-4. As algorithms get more efficient, the AI systems under this limit will get more powerful, so this measure is more about buying time to implement safety protocols than being a permanent solution.

Number three: strict liability. Tech company executives should be personally legally responsible for the harms caused by AI models they produce in the highest danger category. Obviously, tech companies would lobby pretty hard against this rule, but if your business endangers the survival of the human species, or allows terrorists to get hold of one of the most important military technologies of the 21st century, then you should probably go to prison. At least one of those nice rich-people prisons, with tennis. But it's not a very good tennis court. Companies of this size do not care about being fined a few million dollars over a data breach, but they will care about board members facing jail time for endangering national security.

And finally, number four: tiered regulation. Although the technology is potentially dangerous, AI does actually have the potential to accelerate scientific development and increase economic productivity. Rather than stifling AI innovation altogether, we want to encourage tech companies to develop AI in the safe areas of the AGI death circles. Have you trained an AI that can design new medicines, but also biological weapons? Cool tool. Keep it contained in your lab, and don't give it the will to live or escape. Tool AIs are systems with only one or two of the ingredients of AGI. A tiered regulatory system would leave startups and tech giants alike free to innovate with low-risk tool AI, while reducing the national security and existential risks from uncontrolled AGI.

Regulating potentially dangerous technologies is not some crazy new idea. Industries like aviation and pharmaceuticals are arguably much more profitable because of government-imposed safety standards, which prevent accidents and give consumers confidence that the products are safe to use. Also, if the CEO of Pfizer pulled a Sam Altman and announced that their new vaccine would probably eventually kill everyone in the world, but it would make them loads of money first, we would not allow them to develop that product. Just like the US and China currently cooperate to prevent other states from acquiring nuclear weapons, they could work together to bully weaker nations into signing restrictive AI safety treaties that consolidate their power over the emerging AI industry. This obviously benefits the US and China, but it also benefits everyone else on Earth, because the mandatory safety standards mean we get to not die.

The bottom line is that effective AI safety regulation is not only possible, it could actually be politically desirable for self-interested politicians and nation-states. If we work together, and we're clever about how we design these rules, we can keep the future human.

All right, that was the dramatic ending; we're doing the outro now. If you want to help convince the YouTube algorithm to spread this video around, please like and subscribe. This is by far the most effort I've ever put into a YouTube video, so thanks very much for watching all the way to the end, especially considering I was actually paid to make this, so it would be pretty embarrassing if it only got, like, four views. Don't forget, you can find the full policy proposal at keepthefuturehuman.ai.
