The policy world didn’t seem to know how seriously to heed these warnings. Asked if AI is dangerous, President Biden said Tuesday, “It remains to be seen. Could be.”
The dystopian visions are familiar to many inside Silicon Valley’s insular AI sector, where a small group of strange but influential subcultures have clashed in recent months. One sect is certain AI could kill us all. Another says this technology will empower humanity to flourish if deployed correctly. Others suggest the six-month pause proposed by Musk, who will reportedly launch his own AI lab, was designed to help him catch up.
The subgroups can be fairly fluid, even when they appear contradictory, and insiders sometimes disagree on basic definitions.
But these once-fringe worldviews could shape pivotal debates on AI. Here is a quick guide to decoding the ideologies (and financial incentives) behind the factions:
The argument: The phrase “AI safety” used to refer to practical problems, like making sure self-driving cars don’t crash. In recent years, the term — sometimes used interchangeably with “AI alignment” — has also been adopted to describe a new field of research to ensure AI systems obey their programmers’ intentions and to prevent the kind of power-seeking AI that might harm humans just to avoid being turned off.
Many have ties to communities like effective altruism, a philosophical movement to maximize doing good in the world. EA, as it’s known, began by prioritizing causes like global poverty but has pivoted to concerns about the risk from advanced AI. Online forums, like Lesswrong.com or AI Alignment Forum, host heated debates on these issues.
Some adherents also subscribe to a philosophy called longtermism that looks at maximizing good over millions of years. They cite a thought experiment from Nick Bostrom’s book “Superintelligence,” which imagines a safe superhuman AI could enable humanity to colonize the stars and create trillions of future people. Building safe artificial intelligence is crucial to secure those eventual lives.
Who is behind it: In recent years, EA-affiliated donors like Open Philanthropy, a foundation started by Facebook co-founder Dustin Moskovitz and former hedge funder Holden Karnofsky, have helped seed a number of centers, research labs and community-building efforts focused on AI safety and AI alignment. FTX Future Fund, started by crypto executive Sam Bankman-Fried, was another major player until the firm went bankrupt after Bankman-Fried and other executives were indicted on charges of fraud.
How much influence do they have?: Some work at top AI labs like OpenAI, DeepMind and Anthropic, where this worldview has led to some useful ways of making AI safer for users. A tightknit network of organizations produces research and studies that can be shared more widely, including this 2022 survey that asked machine learning researchers to estimate the probability that human inability to control AI could end humanity. The median response was 10 percent.
AI Impacts, which conducted the study, has received support from four different EA-affiliated organizations, including the Future of Life Institute, which hosted Musk’s open letter and received its biggest donation from Musk. Center for Humane Technology co-founder Tristan Harris, who once campaigned about the dangers of social media and has now turned his focus to AI, cited the study prominently.
The argument: It’s not that this group doesn’t care about safety. They’re just extremely excited about building software that reaches artificial general intelligence, or AGI, a term for AI that is as smart and as capable as a human. Some are hopeful that tools like GPT-4, which OpenAI says has developed skills like writing and responding in foreign languages without being instructed to do so, mean they are on the path to AGI. Experts explain that GPT-4 developed these capabilities by ingesting massive amounts of data, and most say these tools do not have a humanlike understanding of the meaning behind the text.
Who is behind it?: Two leading AI labs cite building AGI in their mission statements: OpenAI, founded in 2015, and DeepMind, a research lab founded in 2010 and acquired by Google in 2014. Still, the concept might have stayed on the margins if not for the same wealthy tech investors interested in the outer limits of AI. According to Cade Metz’s book, “Genius Makers,” Peter Thiel donated $1.6 million to Eliezer Yudkowsky’s AI nonprofit, and Yudkowsky introduced Thiel to DeepMind. Musk invested in DeepMind and introduced the company to Google co-founder Larry Page. Musk brought the concept of AGI to OpenAI’s other co-founders, like CEO Sam Altman.
How much influence do they have?: OpenAI’s dominance in the market has flung open the Overton window. The leaders of the most valuable companies in the world, including Microsoft CEO Satya Nadella and Google CEO Sundar Pichai, now get asked about and discuss AGI in interviews. Bill Gates blogs about it. “Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever,” Altman wrote in February.
The argument: Though doomers share a number of beliefs with people in the AI safety world, and frequent the same online forums, this crowd has concluded that if a sufficiently powerful AI is plugged in, it will wipe out human life.
Who is behind it?: Yudkowsky has been the leading voice warning about this doomsday scenario. He is also the author of a popular fan fiction series, “Harry Potter and the Methods of Rationality,” an entry point for many young people into these online spheres and ideas around AI.
His nonprofit, MIRI, received a boost of $1.6 million in donations in its early years from tech investor Thiel, who has since distanced himself from the group’s views. The EA-aligned Open Philanthropy donated about $14.8 million across five grants from 2016 to 2020. More recently, MIRI received funds from crypto’s nouveau riche, including ethereum co-founder Vitalik Buterin.
How much influence do they have?: While Yudkowsky’s theories are credited by some inside this world as prescient, his writings have also been critiqued as not applicable to modern machine learning. Still, his views on AI have influenced more high-profile voices on these topics, such as noted computer scientist Stuart Russell, who signed the open letter.
In recent months, Altman and others have raised Yudkowsky’s profile. Altman recently tweeted that “it is possible at some point [Yudkowsky] will deserve the nobel peace prize” for accelerating AGI, later also tweeting a picture of the two of them at a party hosted by OpenAI.
The argument: For years, ethicists have warned about problems with larger AI models, including outputs that are biased against race and gender, an explosion of synthetic media that may damage the information ecosystem, and the impact of AI that sounds deceptively human. Many argue that the apocalypse narrative overstates AI’s capabilities, helping companies market the technology as part of a sci-fi fantasy.
Some in this camp argue that the technology is not inevitable and could be created without harming vulnerable communities. Critiques that fixate on technological capabilities can ignore the decisions made by people, allowing companies to eschew accountability for bad medical advice or privacy violations from their models.
Who is behind it?: The co-authors of a farsighted research paper warning about the harms of large language models, including Timnit Gebru, former co-lead of Google’s Ethical AI team and founder of the Distributed AI Research Institute, are often cited as leading voices. Crucial research demonstrating the failures of this type of AI, as well as ways to mitigate the problems, “are often made by scholars of color — many of them Black women,” and underfunded junior scholars, researchers Abeba Birhane and Deborah Raji wrote in an op-ed for Wired in December.
How much influence do they have?: In the midst of the AI boom, tech firms like Microsoft, Twitch and Twitter have been laying off their AI ethics teams. But policymakers and the public have been listening.
Former White House policy adviser Suresh Venkatasubramanian, who helped develop the blueprint for an AI Bill of Rights, told VentureBeat that recent exaggerated claims about ChatGPT’s capabilities were part of an “organized campaign of fearmongering” around generative AI that detracted from work on real AI issues. Gebru has spoken before the European Parliament about the need for a slow AI movement, slowing the pace of the industry so society’s safety comes first.
A previous version of this article incorrectly construed the results of a survey asking machine learning researchers to estimate the probability that AI could end humanity. The median response was 10 percent, not 10 percent of respondents agreeing with the premise. This article has been corrected.