Stuart Russell, professor of computer science at the University of California, Berkeley, and co-founder of the International Association for Safe and Ethical AI (IASEAI), delivered the first toast at the TIME100 Impact Dinner celebrating the third annual TIME100 AI list. Russell gave a provocative call to make wise choices about how we use AI, given the high existential stakes involved. Russell, who has long warned that building AI systems more intelligent than humans when we don’t know how to control them reliably could destroy civilization, called it an “unsettling truth” that we have “no idea” where large language models (LLMs) will take us. Read more: https://time.com/7322682/time100-ai-impact-dinner-transform-business/
TRANSCRIPT. So let me begin rather pedantically by noting that AI, like physics and entomology, is a field of study, not a specific technology. And over the last 75 years, through a series of wonderful discoveries, it has given us a better understanding of how to make decisions, as well as algorithms that implement that understanding. But being better at making decisions is not the same as making better decisions. It does not by itself ensure a better future. Rather, what matters is the purpose to which decision-making, whether by humans or by machines, is directed. As the late Charlie Munger, Warren Buffett’s longtime business partner, once said, “Show me the incentive and I’ll show you the outcome.”

So, to what end are the decisions made by large language models directed? The unsettling truth, to quote one of Microsoft’s lead evaluators of GPT-4, is that we have no idea. Large language models are trained to imitate human beings. In the process, we suspect, they absorb humanlike goals such as self-preservation and self-empowerment, and pursue those goals on their own account. This is a fundamental error. We need to recognize the possibility that not only may the bus of humanity be headed toward a cliff, but the steering wheel is missing and the driver is blindfolded.

I believe other directions are possible if we choose to open our eyes and act together. We can build AI systems whose only purpose is to serve the interests of human beings, of all human beings, although they, and we, may be uncertain about what those interests truly are. These AI systems could enhance human understanding, widen the horizons of our experience, and unlock possibilities we have yet to imagine. But many thinkers, dating back at least to Aristotle, have worried that beneficial coexistence between humanity and superior AI systems will prove impossible if they take away our sense of purpose and our incentive to strive toward a better future.
In that case, those AI systems I mentioned, whose only purpose is to serve human interests, will recognize this and gracefully withdraw, allowing us to shape our own future, knowing that the road not taken was only a dead end. So, please raise your glasses to a better future shaped by and for humanity, with or without AI. Thank you.