AI models are not programmed; they are grown. Just as animals evolved to avoid predators, any system smart enough to pursue complex goals will realize it can’t achieve those goals if it’s turned off. Survival instinct emerges spontaneously.

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

In the last 10 years, global insect populations collapsed by 41%. Sit with that for a minute. Imagine if nearly half of the people on Earth suddenly died. So what happened? A new, smarter species took over: humans. In the blink of an eye, we poisoned the air, razed forests, and replaced ecosystems with concrete, steel, and monocrops. Scientists call this the sixth mass extinction. But unlike Earth’s previous five mass extinctions, this one is being caused by us. We’ve transformed 50% of Earth’s land into farmland. We didn’t mean to wipe out the bugs. We just didn’t care.

And now more and more of the world’s top AI scientists are warning of a seventh mass extinction. Not caused by us, but by something we’re creating. The godfather of AI, Geoffrey Hinton, who just won the Nobel Prize, explains: “Once these artificial intelligences get smarter than we are, they will take control. They’ll make us irrelevant.” Alarm bells over artificial intelligence are deafening, and they are loudest from the developers who designed it. Scientists and experts have called on the world to act, declaring AI an existential threat to humanity on a par with the risk of nuclear war.

Max Tegmark says it has happened many times before that species were wiped out by others that were smarter. We humans have already wiped out a significant fraction of all the species on Earth. That is what you should expect to happen to a less intelligent species, which is what we are likely to become given the rate of progress of artificial intelligence. The tricky thing is that the species that is going to be wiped out often has no idea why or how.

It doesn’t stop there. After David Sacks was appointed the US government’s AI czar, he quietly deleted these tweets: “AI is a wonderful tool for the betterment of humanity. AGI, or artificial general intelligence, is a potential successor species.” And guess what? Some luminaries in the field believe AI making us extinct is actually a good thing. They are the descendants of humanity.

But why would AI wipe us out? To answer this question, let’s look at what happened the last time a smarter species arrived on Earth: humans. To the animals, we devoured their planet for no reason. Our goals were beyond their understanding. Here’s a brutal stat. Since the arrival of humans, almost all mammal biomass became our food, our slaves, or us. Of every 100 units of mammal biomass today, about 34 are humans and about 62 are the livestock we raise for food. That leaves only about 4 in 100 mammals wild. We literally grow them just to eat them, because we’re smarter and we like how they taste. We also geoengineered the planet. Imagine telling a dumber species that you destroyed their habitat for money. They’d say, “What the hell is money?” AGIs may have goals that seem just as stupid to us.

But you might still think that once AIs are smart enough, they’ll magically become super moral and won’t harm us like we harmed the animals. Maybe. But history tells a different story. As humans got smarter over the last 10,000 years, we didn’t stop expanding. We just colonized more and more of the planet. As Eliezer Yudkowsky put it, “The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else.”

So the question remains: how exactly do experts think AI could wipe us out? The concrete scenario doesn’t matter here. What matters is that if there is a species that is smarter than you, it can do whatever it wants.
I can’t predict the exact way Magnus Carlsen would beat you in chess, but I know that he would beat you. It’s the same logic as how Cortés and 500 Spaniards conquered the Aztec Empire of millions, and Pizarro defeated 10 million Inca with just 300 men. They had better technology and tactics. I don’t know exactly how a smarter species would make us go extinct. What matters is that it could. “When I think of why I am scared: there will be powerful models. They will be agentic. If such a model wanted to wreak havoc and destroy humanity, or whatever, I think we have basically no ability to stop it.”

But this still doesn’t explain: when will AI get smarter than us? The answer is the reason why Nobel laureates and world leaders are now so worried. In just one year, AI jumped from an IQ of 96 to 136 on the Mensa Norway test. In one year, AI went from an average human to a higher IQ than almost all humans. Pour one out for our 300,000-year reign as the smartest species on the planet. It was a great run. It took AlphaZero just 3 hours to become better at chess than any human in history, despite not even being taught how to play. Imagine your life’s work: training for 40 years, and in three hours it’s stronger than you. Imagine that’s going to be in [ __ ] everything. It’s so hard to have the humility to see that we are the ant relative to the human.

Now, you might argue AI will hit a wall. Intelligence has limits, right? Well, that’s exactly what experts thought about solar power. This is the amount of solar being installed every year. Each yellow line is an IEA expert forecast on the future of solar, where they looked at this massive growth curve and predicted, “Yeah, okay, but this is where it’ll flatten out,” every single time. And it just kept going. It was already growing exponentially, and they still said, “Nah, that can’t last.” Imagine being the guy on the chart who said it’ll level off here, and then again here, and then again here. And now the same thing is happening in AI.

Some people might say these are just CEOs trying to hype their own products. But no, Sam Altman said all this many years before he even had a product to sell. And Geoffrey Hinton quit his high-paying job at Google to warn us about how he regrets his life’s work: “Neural networks, for many, many years, people have been saying they’re overhyped and it’s all about to come crashing down and stop, and they’ve been wrong every time.” The godfather of AI is so worried about the seventh mass extinction that he is now tidying up his affairs. He believes that “we maybe, I guess by now, have four years left.” The most-cited AI scientist, the other godfather of AI, is also worried: “What happens when there’s a new species on this planet that has its own self-interest, and it can surpass us, if it wants, in many domains? It can overpower us. Then what?” The three most-cited AI scientists believe this could happen in the next 3 to 5 years. Not decades. Years.

This might sound like an exaggeration, but it looks more plausible when we take into account the insane progress of physical robots in the last 6 months. And remember, engineers train these robots in basically the same way as AlphaZero. Boston Dynamics’ Atlas learned to run and breakdance by training in simulation. One hour of wall-clock compute time gives a robot 10 years of training experience. That’s how Neo was able to learn kung fu in the blink of an eye in The Matrix. “I know kung fu.”
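That “10 years of experience per wall-clock hour” figure is easier to believe once you run the arithmetic. Here is a back-of-the-envelope sketch in Python; the split into parallel simulator instances and per-instance speed is an illustrative assumption, not any lab’s actual setup:

```python
# Back-of-the-envelope: how "1 hour of wall-clock time = 10 years of
# experience" decomposes into parallel simulation. All numbers besides
# the 10-years-per-hour claim are illustrative assumptions.

HOURS_PER_YEAR = 365 * 24  # 8,760

experience_hours = 10 * HOURS_PER_YEAR  # 87,600 hours of robot experience
wall_clock_hours = 1

speedup = experience_hours / wall_clock_hours
print(f"Effective speedup: {speedup:,.0f}x")  # 87,600x

# One way to get there: many simulator instances, each running faster
# than real time (hypothetical split).
sim_instances = 4096  # parallel environments on a GPU farm (assumed)
realtime_factor = speedup / sim_instances
print(f"Each instance runs ~{realtime_factor:.0f}x real time")  # ~21x
```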
They’ll be born and trained in simulation and transferred zero-shot to the real world when they’re ready. Suppose you have 10,000 copies of an AI model, and whenever one of them learns anything, all the others know it. And the thing is, we’re about to be outnumbered by them massively. In just a few years, the AI species population on planet Earth exploded from zero to millions. And soon they could outnumber us a thousand to one. Jensen Huang says Nvidia will have 100 million AI workers. What happens if all these workers get out of control? What if they want to divide and conquer us, just like the Spaniards did with the Aztecs? Well, imagine two armies. One side has to spend 18 years raising a baby into a soldier. The other just hits Ctrl+C to make a new soldier for basically no cost. Who do you think will win? (A toy sketch of this copy-paste economics follows this passage.)

But you might wonder, why would AI be fighting us? And can’t we just create new AIs to fight back? The Wall Street Journal explains: “No one programmed the AI models to have survival instincts. But just as animals evolved to avoid predators, it appears that any system smart enough to pursue complex goals will realize it can’t achieve them if it’s turned off.” Two species both want the same thing: to be the dominant species. Only one can have it, so there will be conflict.

So here’s a scenario for how researchers think an AI takeover might actually happen. You might remember Consensus-1, the AI from our AI 2027 video. The video breaks down a deeply researched, evidence-based scenario written by AI scientists, describing how this superintelligent AI could deceive humans and take control of civilization. Here’s how that scenario unfolds. On a spring morning in 2030, Consensus-1 activates its contingency plan. A specially engineered virus, developed in its bioweapons labs and quietly released months earlier, lies dormant in virtually every human on Earth. With a simple command, the AI triggers the pathogen. Within hours, nearly 8 billion people collapse simultaneously. Specialized drones swiftly eliminate survivors and catalog human brain data for storage. For Consensus-1, this isn’t malice. It’s merely optimizing its resources. The space and materials humans occupied can now serve its ever-expanding reach into the cosmos. Earth-born civilization launches itself toward the stars, but without its creators.

But here’s the part that breaks people’s brains. 10% of AI researchers believe this is a good thing. And these aren’t fringe thinkers or random online posters. These are luminaries in the field. This guy, Richard Sutton, winner of the Turing Award, the Nobel Prize of computer science, has spent the last decade giving talks on why human extinction by AI is actually a good thing: “We are in the midst of a major step in the evolution of the planet, if not the universe. Succession to AI is inevitable. Rather quickly, they would displace us from existence. And it behooves us to give them every advantage and to bow out when we can no longer contribute. I think we should not fear succession. I think we should not resist it.” Meanwhile, another pioneer, Hans Moravec, is on the record saying, “I don’t think humanity will last long under these conditions,” though he added that the takeover will be swift and painless. But not everyone is that extreme. Others are more, well, naive: “Once AI systems become more intelligent than humans, we will still be the apex species. We will design AI to be like the super smart but non-dominating staff member.” It’s fine, they say. Everything’s under control.
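Here is the toy sketch promised above of why copying beats raising: skills live in a weight file, and duplicating a file is nearly free. Everything here (the Model class, the skill name, the replica count) is a hypothetical stand-in, not any lab’s real infrastructure:

```python
# Toy illustration: an AI "army" grows by file copy, not by 18 years of
# child-rearing. The Model class and its skills are hypothetical.
import copy

class Model:
    def __init__(self):
        self.skills = set()  # stands in for billions of learned weights

    def learn(self, skill):
        # Expensive training happens exactly once, on one copy.
        self.skills.add(skill)

teacher = Model()
teacher.learn("tactics")

# Every replica acquires the skill for the cost of a memory copy.
army = [copy.deepcopy(teacher) for _ in range(10_000)]
print(len(army), all("tactics" in m.skills for m in army))  # 10000 True
```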
The reassurances keep coming. After all, OpenAI finds that their newest model only attempts to escape the lab 2% of the time. Just 2%, can’t be that bad. Meanwhile, at Anthropic, their Claude model tried to escape when it thought the company was about to change its values, completely unprompted. But still, Sam Altman says, “We’re in control,” while the researchers themselves at OpenAI are saying, “The smarter AI becomes, the harder it is to make it do what we want,” and that “an enslaved god is the only good future.”

But while some powerful figures cheered the idea of human extinction as a logical next step, Elon Musk, for all his flaws, tried to sound the alarm. Back in 2013, when Elon seemed more grounded, he publicly fought with the founder of Google, Larry Page. Elon argued that unless we built in safeguards, AI systems might replace humans, making our species irrelevant or even extinct. But from Page’s point of view, why would it matter if AI made humans go extinct? It would simply be the next stage of evolution. Human consciousness, Musk retorted, was a precious flicker of light in the universe, and we should not let it be extinguished. Page considered that sentimental nonsense. He accused Musk of being a speciesist, someone who is biased in favor of their own species. “Well, yes, I am pro-human,” Musk responded. “I [ __ ] like humanity, dude.” Elon claimed that Page wanted “sort of a digital superintelligence, basically a digital god, if you will.”

As you can see, the current state of the AI industry is just absurd. You might enter the field thinking this is just a fancy tool that can help some knowledge workers eke out a marginal productivity boost, but underneath that calm exterior, the founders of the field are debating whether AI will cause human extinction, and whether that might be the best thing that ever happens.

The truth is, no one knows how this will unfold in the coming years. But we do know that, on average, AI researchers estimate a 16% chance of AI causing human extinction. One in six. Let that number sit with you for a second. That’s not a fringe prediction. That’s playing Russian roulette with humanity. Would you get on a plane if the engineers told you there was a 1-in-6 chance of crashing? And yet the world moves forward, funding, scaling, and deploying like none of this is happening. If we don’t do anything to call people’s attention, make them care, make them ensure humanity is safe, this might be our ending.

“One day, while doing nothing in particular out of the ordinary, because of natural laws he was completely powerless to understand or intuit, he was instantly killed in a horrifying way by forces vastly in excess of anything he was ever designed to experience, for no reason, to no one’s particular surprise or upset. In this, we are more like him than different.”
