Demis Hassabis

by Jennifer Doudna

Demis Hassabis is reshaping what’s possible in science. His work at DeepMind—most notably the development of AlphaFold, which earned him a share of the 2024 Nobel Prize in Chemistry—is already accelerating discoveries across biology and medicine. The ripple effects are real: labs around the world, including my own, are using his AI tools to tackle rare genetic diseases, antibiotic resistance, and even climate-driven challenges in agriculture. Demis brings a rare kind of mindset to biology, one shaped by deep study of neuroscience, cognition, and computation. He doesn’t just build powerful systems; he builds understanding. His approach reveals biology as a system of patterns we can read, predict, and eventually design for, in ways that were never previously possible.

In our conversations, I’ve been struck by his clarity of vision and sense of responsibility. He’s building tools that don’t just help us understand life, but help us shape it wisely. The future of biology won’t be siloed—it will be collaborative, interdisciplinary, and deeply creative. Demis is helping us get there faster.

Doudna is a Nobel Prize–winning biochemist, the founder of the Innovative Genomics Institute, and a professor at the University of California, Berkeley

In an AI industry whose top ranks are populated mostly by businessmen and technologists, that identity sets Hassabis apart. Yet he must still operate in a system where market logic is the driving force. Creating AGI will require hundreds of billions of dollars’ worth of investments—dollars that Google is happily plowing into Hassabis’s DeepMind unit, buoyed by the promise of a technology that can do anything and everything. Whether Google will ensure that AGI, if it comes, benefits the world remains to be seen; Hassabis points to the decision to release AlphaFold for free as a symbol of its benevolent posture. But Google is also a company that must legally act in the best interests of its shareholders, and consistently releasing expensive tools for free is not a long-term profitable strategy. The financial promise of AI—for Google and for its competitors—lies in controlling a technology capable of automating much of the labor that drives the more than $100 trillion global economy. Capture even a small fraction of that value, and your company will become one of the most profitable the world has ever seen. Good news for shareholders, but bad news for regular workers who may find themselves suddenly unemployed.

So far, Hassabis has successfully steered Google’s multibillion-dollar AI ambitions toward the type of future he wants to see: one focused on scientific discoveries that, he hopes, will lead to radical social uplift. But will this former child chess prodigy be able to maintain his scientific idealism as AI reaches its high-stakes endgame? His track record reveals one reason to be skeptical.

When DeepMind was acquired by Google in 2014, Hassabis insisted on a contractual firewall: a clause explicitly prohibiting his technology from being used for military applications. It was a red line that reflected his vision of AI as humanity’s scientific savior, not a weapon of war. But multiple corporate restructures later, that protection has quietly disappeared. Today, the same AI systems developed under Hassabis’s watch are being sold, via Google, to militaries such as Israel’s—whose campaign in Gaza has killed tens of thousands of civilians. When pressed, Hassabis denies that this was a compromise made in order to maintain his access to Google’s computing power and thus realize his dream of developing AGI. Instead, he frames it as a pragmatic response to geopolitical reality, saying DeepMind changed its stance after acknowledging that the world had become “a much more dangerous place” in the last decade. “I think we can’t take for granted anymore that democratic values are going to win out,” he says. Whether or not this justification is honest, it raises an uncomfortable question: If Hassabis couldn’t maintain his ethical red line when AGI was just a distant promise, what compromises might he make when it comes within touching distance?

To get to Hassabis’s dream of a utopian future, the AI industry must first navigate its way through a dark forest full of monsters. Artificial intelligence is a dual-use technology like nuclear energy: it can be used for good, but it could also be terribly destructive. Hassabis spends much of his time worrying about risks, which generally fall into two different buckets. One is the possibility of systems that can meaningfully enhance the capabilities of bad actors to wreak havoc in the world; for example, by endowing rogue nations or terrorists with the tools they need to synthesize a deadly virus. Preventing risks like that, Hassabis believes, means carefully testing AI models for dangerous capabilities, and only gradually releasing them to more users with effective guardrails. It means keeping the “weights” of the most powerful models (essentially their underlying neural networks) out of the public’s hands altogether, so that models can be withdrawn from public use if dangers are discovered after release. That’s a safety strategy that Google follows but which some of its competitors, such as DeepSeek and Meta, do not.

The second category of risks may seem like science fiction, but they are taken seriously inside the AI industry as model capabilities advance. These are the risks of AI systems acting autonomously—such as a chatbot deceiving its human creators, or a robot attacking the person it was designed to help. Language models like DeepMind’s Gemini are essentially grown from the ground up, rather than written by hand like old-school computer programs, and so computer scientists and users are constantly finding ways to elicit new behaviors from what are best understood as incredibly mysterious and complex artifacts. The question of how to ensure that they always behave in ways that are “aligned” to human values is an unsolved scientific problem. Early signs of misaligned behaviors, like strategic lying, have already been identified by researchers working with today’s language models. Those problems are only likely to become more acute as models get better. “How do we ensure that we can stay in charge of those systems, control them, interpret what they’re doing, understand them, and put the right guardrails in place that are not movable by very highly capable self-improving systems?” Hassabis says. “That is an extremely difficult challenge.”

It’s a devilish technical problem—but what really keeps Hassabis up at night are the political coordination challenges that accompany it. Even if well-meaning companies can make safe AIs, that doesn’t by itself stop the creation and proliferation of unsafe AIs. Stopping that will require international collaboration—something that’s becoming increasingly difficult as Western alliances fray and geopolitical tensions between the U.S. and China rise. Hassabis has played a significant role in the three AI summits held by global governments since 2023, and says he would like to see more of that kind of cooperation. He says the U.S. government’s export controls on AI chips, intended to prevent China’s AI industry from surpassing Silicon Valley, are “fine”—but he would prefer to avoid political choices that “end up in an antagonistic kind of situation.”

He might be out of luck. As both the U.S. and China have woken up in recent years to the potential power of AGI, the climate of global cooperation—which reached a high-water mark with the first AI Safety Summit in 2023—has given way to a new kind of realpolitik. In this new era, with nations racing to militarize AI systems and build up stockpiles of chips, and with a new cold war brewing between the U.S. and China, Hassabis still holds out hope that competing nations and companies can find ways to set aside their differences and cooperate, at least on AI safety. “It’s in everyone’s self-interest to make sure that goes well,” he says.

Even if the world can find a way to safely navigate through the geopolitical turmoil of AGI’s arrival, the question of labor automation will rear its head. When governments and companies no longer rely on humans to generate their wealth, what leverage will citizens have left to demand the ingredients of democracy and a comfortable life? AGI might create abundance, but it won’t dispel the incentives for companies and states to amass resources and compete with rivals. Hassabis admits he is better at forecasting technological futures than social and economic ones; he says he wishes more economists would take the possibility of near-term AGI seriously. Still, he thinks it’s inevitable we’ll need a “new political philosophy” to organize society in this world. Democracy, he says, “is not a panacea, by any means,” and might have to give way to “something better.”

Automation, meanwhile, is already on the horizon. In March, DeepMind announced Gemini 2.5, the latest version of its flagship AI model, which outperforms rival models made by OpenAI and Anthropic on many popular metrics. Hassabis is currently hard at work on Project Astra, a DeepMind effort to build a universal digital assistant powered by Gemini. That work, he says, is not intended to hasten labor disruptions, but instead is about building the necessary scaffolding for the type of AI that he hopes will one day make its own scientific discoveries. Still, as research into these AI “agents” progresses, Hassabis says, expect them to be able to carry out increasingly complex tasks independently. (An AI agent that can meaningfully automate the job of further AI research, he predicts, is “a few years away.”) For the first time, Google is also now using these digital brains to control robot bodies: in March the company announced a Gemini-powered android robot that can carry out embodied tasks like playing tic-tac-toe, or making its human a packed lunch. The tone of the video announcing Gemini Robotics was friendly, but its connotations were not lost on some YouTube commenters: “Nothing to worry [about,] humanity, we are only developing robots to do tasks a 5 year old can do,” one wrote. “We are not working on replacing humans or creating robot armies.”

Hassabis acknowledges the social impacts of AI are likely to be significant. People must learn how to use new AI models, he says, in order to excel professionally in the future and not risk getting left behind. But he is also confident that if we eventually build AGI capable of doing productive labor and scientific research, the world that it ushers into existence will be abundant enough to ensure a substantial increase in quality of life for everybody. “In the limited-resource world which we’re in, things ultimately become zero-sum,” Hassabis says. “What I’m thinking about is a world where it’s not a zero-sum game anymore, at least from a resource perspective.”

Five months after his Nobel win, Hassabis’s journey from chess prodigy to laureate now leads toward an uncertain future. The stakes are no longer just scientific recognition but potentially the fate of human civilization. As DeepMind’s machines grow more capable, as corporate and geopolitical competition over AI intensifies, and as the economic impacts loom larger, Hassabis insists that we might be on the cusp of an abundant economy that benefits everyone. But in a world where AGI could bring unprecedented power to those who control it, the forces of business, geopolitics, and technological power are all bearing down with increasing pressure. If Hassabis is right, the turbulent decades of the early 21st century could give way to a shining utopia. If he has miscalculated, the future could be darker than anyone dares imagine. One thing is for sure: in his pursuit of AGI, Hassabis is playing the highest-stakes game of his life.
