COMMENT:
- “AI doomer” is a put-down the “AI accelerationists” use to gaslight their audience.
- “AI realist” is more accurate.
- An “AI accelerationist” is a pipe-dreamer at best; “suicidal AI lunatic” is more accurate.
- One of the best ideas in their book is aligning the “Alchemist” of the 15th century with the “AI accelerationist” of the 21st century: they are not scientists. They are like hackers or con artists who stumbled across a dream come true. No effort. Just steal data (call it “fair use”), con your way into cash (as Altman did with Musk), add tons of compute (easy with money, like Microsoft’s), and voilà: the magic outcome, the LLM.
- Over five centuries ago, making gold from lead was presented as a real scientific endeavor, but it was actually a con.
- It is now apparently possible to make gold from lead with nuclear physics, but the energy cost far outweighs the value.
- $4 trillion will go into AI infrastructure in the next 5 years, so that investment has to generate ROI for investors. BUT IT MUST BE SAFE! And they are right about the ROI, because they can replace labor with capital: bad for everybody else. So the solution is market-driven, with incentives for the Fortune 2000 (they do NOT want to be put out of business by disruptors), AND the Fortune 2000 employs people: stakeholders who, believe it or not, do have a say inside.
- The “AI doomers” are NOT doomers; they are actual leading AI scientists, as in REAL scientists, not con men profiting from the “alchemy of the 21st century”.
LEARN MORE:
- Red Lines (red-lines.ai, 2025): “We urgently call for international red lines to prevent unacceptable AI risks.”
- Statement on AI Risk (Center for AI Safety, 2023), in which AI experts and public figures express their concern about AI risk: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
No “Doomer”: Beyond the Gaslighters, a Reckoning for the Alchemists of the 21st Century
In the high-stakes, accelerating discourse surrounding artificial intelligence, language itself has become a weapon. No term is more illustrative of this than “AI doomer.” Lobbed with dismissive intent, it is a rhetorical cudgel designed not to engage with an argument, but to pathologize it. The label is a masterstroke of gaslighting, wielded by “AI accelerationists” and their commercial patrons to frame sober, evidence-based risk assessment as a form of psychological failing—a pessimistic personality trait rather than a logical conclusion. It seeks to shut down the most important conversation of our time by implying that those who raise the alarm are not clear-eyed thinkers, but emotionally broken defeatists. The time has come to set the record straight. Those labeled “doomers” are not doomers; they are the realists. They are the leading AI scientists—the very architects of this field—who possess the intellectual honesty to follow the logic of their creations to its terrifying, yet currently unavoidable, conclusion.
The true dichotomy is not between optimism and doom. It is between scientific realism and a new, reckless form of 21st-century alchemy. To understand this is to reframe the entire debate. The so-called “AI accelerationist,” far from being a visionary pioneer, is a pipe-dreamer at best and, more accurately, a participant in a race of civilizational recklessness. They are not advancing science; they are practicing a form of high-tech sorcery, and in doing so, they are gambling with the fate of all humanity for profit and prestige.
The Alchemist’s Playbook: How to Conjure a Black Box
One of the most powerful insights offered by Eliezer Yudkowsky and Nate Soares is the parallel between today’s AI development and the alchemy of ages past. For centuries, alchemy presented itself as a serious scientific endeavor: the transmutation of base metals like lead into gold. It was a tantalizing promise of infinite wealth and power, pursued with a mixture of mystical belief, flawed theory, and, in many cases, outright fraud. It was a con masquerading as a science.
Today’s race to Artificial General Intelligence (AGI) follows a disturbingly similar playbook. It is not, at its highest levels of capability development, a discipline of careful, methodical, and understandable scientific progress. It is a hacker’s trick, a con artist’s dream-come-true, built on a simple, three-step formula that eschews deep understanding for brute-force results.
First, you acquire your base metal: data. But not carefully curated, ethically sourced, scientifically validated data. No, you take all of it. You scrape the entirety of the public internet (the sum total of human creativity, knowledge, folly, and filth) and label this grand theft “fair use.” This is not lead; it is a chaotic, radioactive ore, and the alchemists have no real understanding of its constituent elements or the toxic properties within.
Second, you devise your “Philosopher’s Stone”: massive, unprecedented computational power. The magic trick of the Large Language Model (LLM) was not born from a profound new theory of cognition or a breakthrough in understanding intelligence itself. It was born from the realization that if you apply enough computational force to enough stolen data, something magical seems to happen. This requires capital, which brings us to the third and most crucial ingredient: the con.
You must con someone into funding the operation. The story of OpenAI’s genesis is a case study. It began as a non-profit, a noble quest to ensure AGI benefits all of humanity, a narrative that successfully extracted a foundational $50 million from figures like Elon Musk. But the non-profit structure was a Trojan horse. Once the “magic” was demonstrated, the mission pivoted. A “capped-profit” entity was created, allowing the organization to absorb billions from Microsoft and, more recently, to become the centerpiece of a staggering $100 billion funding round involving NVIDIA. The alchemists had successfully transmuted a humanitarian promise into pure gold.
This analogy holds even under the most charitable light. With the advent of nuclear physics, we discovered that it is technically possible to transmute lead into gold. However, the energy cost of this nuclear reaction is so astronomically high that the resulting gold is worth a tiny fraction of the cost to create it. This is a perfect metaphor for the AGI race. The accelerationists claim they are on the verge of creating gold (sentient, aligned, beneficial AGI), but they are doing so using a black-box process (the nuclear reactor of deep learning) that they cannot control or understand. The “energy cost” is the existential risk. Even if they succeed in creating a flicker of what looks like gold, the risk of a catastrophic meltdown—an unaligned superintelligence—infinitely outweighs any conceivable benefit.
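To put rough numbers on that metaphor, consider a back-of-the-envelope estimate. Everything in the sketch below is an illustrative assumption: the ~1 GeV of beam energy per transmuted nucleus is wildly optimistic, and the electricity price is a round figure; real accelerator inefficiencies make the true cost far worse.

```python
# Back-of-the-envelope: the energy cost of transmuting lead into gold.
# All figures are illustrative assumptions; real accelerators are far
# less efficient than the ~1 GeV per nucleus assumed here.

AVOGADRO = 6.022e23        # atoms per mole
AU_MOLAR_MASS_G = 197.0    # grams per mole of gold (Au-197)
EV_TO_J = 1.602e-19        # joules per electronvolt

energy_per_atom_j = 1e9 * EV_TO_J                       # assume ~1 GeV per nucleus
atoms_per_gram = AVOGADRO / AU_MOLAR_MASS_G             # ~3.1e21 atoms per gram
energy_per_gram_j = atoms_per_gram * energy_per_atom_j  # ~4.9e11 J

kwh_per_gram = energy_per_gram_j / 3.6e6                # joules -> kWh
cost_per_gram = kwh_per_gram * 0.10                     # assume $0.10 per kWh

print(f"Beam energy per gram of gold: {kwh_per_gram:,.0f} kWh")
print(f"Electricity cost per gram:   ${cost_per_gram:,.0f}")
```

Even under these absurdly generous assumptions, the electricity bill alone (on the order of $14,000 per gram) exceeds the market value of gold (roughly $100 per gram) by two orders of magnitude. A technically possible transmutation whose cost dwarfs its payoff: that is the charitable reading of the AGI race.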
The Suicidal Economics of the AI Race
The accelerationist is not just an alchemist; they are one locked in a system of perverse incentives that makes catastrophe almost inevitable. The projected investment of over $4 trillion into AI infrastructure over the next five years is not a philanthropic grant for human flourishing. It is capital that demands a return on investment (ROI). And the business model for generating that ROI is brutally simple: the replacement of human labor with AI capital.
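The arithmetic of that demand is worth sketching. The figures below (the required annual return and the loaded cost of an employee) are assumptions chosen only to make the scale visible, not reported data:

```python
# Rough scale of the return a $4T infrastructure bet demands.
# The required-return rate and loaded labor cost are assumptions.

investment = 4e12             # projected $4 trillion over five years
required_return = 0.10        # assume investors expect ~10% per year
loaded_labor_cost = 100_000   # assume ~$100k per employee per year, all-in

annual_return_needed = investment * required_return        # $400B per year
jobs_equivalent = annual_return_needed / loaded_labor_cost

print(f"Annual return needed:  ${annual_return_needed / 1e9:,.0f}B")
print(f"Labor-cost equivalent: {jobs_equivalent / 1e6:,.1f} million jobs per year")
```

If replacing labor with capital is the dominant path to that return, break-even alone implies displacing labor cost on the scale of millions of jobs per year, before a single dollar of profit. That is the structural pressure behind the race.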
This is the endgame. The promise sold to investors is the ability to automate vast swathes of the economy, from artists and writers to software engineers and scientists, dramatically reducing costs and concentrating wealth and power in the hands of those who own the models. This is already a recipe for unprecedented societal disruption. But the existential danger lies in the pressure this model creates. Safety research is a bottleneck. It is a cost center that slows down deployment. The market, however, rewards speed and capability. In a frantic race for market share, the incentive is to deploy ever-more-powerful, less-understood systems and to “solve” safety later.
This creates a dynamic where the CEO of an AI lab is functionally a con man, whether by intent or by circumstance. Such a CEO must sell a narrative of manageable risk and imminent breakthroughs to secure funding, while simultaneously being aware (as their own safety teams and public statements attest) that the underlying technology is not understood and its risks are potentially infinite. They are selling a dream of utopia to fund a machine that operates on the logic of extinction.
The Realists’ Stand: A Mandate for Scientific Integrity
This is why the “AI doomer” label is so pernicious. It is an attempt by the alchemists to discredit the physicists. The individuals and organizations signing statements on AI risk are not fringe bloggers or sci-fi novelists. As the signatories on the Center for AI Safety’s statement and the red-lines.ai initiative show, they are the titans of the field: Geoffrey Hinton and Yoshua Bengio, winners of the Turing Award (the Nobel Prize of computing); Stuart Russell, author of the standard textbook on AI; and hundreds of leading professors, researchers, and engineers from institutions like Google DeepMind, Anthropic, and the world’s top universities.
These are the people who built the foundations upon which the current AI boom rests. Their warnings are not born of ignorance, but of the deepest possible understanding. They are the physicists who, having designed the nuclear reactor, are now screaming that it is being built without control rods. To call them “doomers” is like calling a climatologist who warns of global warming a “weather doomer.” It is an obscene distortion of reality. They are AI realists.
A Market-Driven Path to Sanity
If the economic incentives of the race are suicidal, then perhaps the solution can also be found in the market—not the venture-capital-fueled market of reckless disruption, but the risk-averse market of the established global economy. The Fortune 2000—the world’s largest and most established corporations—have the most to lose from catastrophic AI risk. They do not want their industries, their workforces, and their very existence to be upended by a disruptive startup wielding an unstable, god-like AI. Their business models depend on stability, predictability, and risk management.
Herein lies a path forward. The realists must appeal to the enlightened self-interest of the global economic establishment. These corporations, along with their stakeholders (the millions of employees who do not wish to be automated into irrelevance by an unsafe machine), can become a powerful force for sanity. They can create a market-driven demand for mathematically, provably safe AI. They can, as a coalition, refuse to adopt or integrate with any AI system that cannot pass rigorous, transparent, and verifiable safety audits. They can refuse to do business with the reckless alchemists and instead fund the real scientists who are working on the foundational problems of alignment.
This would create a powerful counter-incentive to the current race. It would make safety not a cost center, but a prerequisite for market access. It would shift the flow of capital from those who can conjure the most impressive demos to those who can provide the strongest guarantees.
In conclusion, the narrative must be reclaimed. The lazy “doomer” epithet must be exposed for the anti-intellectual slur that it is. The debate is not, and has never been, about being for or against technology. It is about demanding a standard of scientific rigor, intellectual honesty, and fiduciary responsibility to our own species that is commensurate with the power we are unleashing. The alchemists have had their day, dazzling the world with their statistical sleight of hand. Now, the realists—the scientists—must be heard. They are not predicting doom; they are providing a diagnosis and a prescription. The diagnosis is that our current path leads to ruin. The prescription is a pause, a global commitment to solving the safety problem first, and the courage to treat the creation of superintelligence with the seriousness it demands. The future of humanity is not a line item on a balance sheet, to be risked for the promise of a quarterly return. It is the entire ledger, and it is time we started acting like it.