Their message includes policy proposals like CAIP’s call for legislation that would hold AI developers liable for “severe harms,” require permits to develop “high-risk” systems and empower regulators to “pause AI projects if they identify a clear emergency.”
“[The] risks of AI remain neglected — and are in danger of being outpaced by the rapid rate of AI development,” Nathan Calvin, senior policy counsel at the CAIS Action Fund, said in an email.
Detractors see the whole enterprise as a diversion. By focusing on apocalyptic scenarios, critics claim, these well-funded groups are raising barriers to entry for smaller AI firms and shifting attention away from more immediate and concrete problems with the technology, such as its potential to eliminate jobs or perpetuate discrimination.
Until late last year, organizations working to focus Washington on AI’s existential threat tended to operate under the radar. Instead of direct lobbying, groups like Open Philanthropy funded AI staffers in Congress and poured money into key think tanks. The RAND Corporation, an influential think tank that played a key role in drafting President Joe Biden’s October executive order on AI, received more than $15 million from Open Philanthropy last year.
The one similar group that previously had an official lobbying operation was the Future of Life Institute, which since 2020 has spent roughly $500,000 to lobby Washington on AI. FLI is supported by Tallinn, the Skype co-founder, along with tech mogul Elon Musk and cryptocurrency billionaire Vitalik Buterin.
The uptick in lobbying work — and the policies CAIP and CAIS are pushing — could directly benefit top AI firms, said Suresh Venkatasubramanian, a professor at Brown University who co-authored a 2022 White House document that focused more on AI’s near-term risks, including its potential to undermine privacy or increase discrimination through biased screening tools.
“We should discuss what the science is telling us about AI,” said Venkatasubramanian. “But if they want to go lobbying, then that is a different path. It’s about who has more money, and [who] wants to fund their agenda through infusions of cash from a rich doomsday cult.”
Green-Lowe disputed Venkatasubramanian’s claim that CAIP advocates for policies that would benefit leading AI firms. He called his organization “fully independent of big AI labs” and said CAIP’s proposed safety policies “won’t give those labs unfair advantages because the cost of compliance is tiny compared to the price of hardware and talent.”
Both CAIS and CAIP have strong ties to “effective altruism” — a philanthropic movement increasingly focused on AI’s alleged threat to humanity, which some researchers worry is being co-opted by top AI firms in a bid to lock in their policy priorities.
About a third of CAIP’s total donations were provided by Lightspeed Grants, according to Green-Lowe. Lightspeed lists Skype’s Tallinn, a longtime effective altruist, as its primary funder. Tallinn has put significant money into leading AI firms — he was an early investor in DeepMind, now owned by Google, and spearheaded a 2021 investment in Anthropic that included funding from Moskovitz and former Google CEO Eric Schmidt.
The Center for AI Safety has also mobilized lobbyists via its new CAIS Action Fund, which spent $80,000 between October and December on congressional and agency lobbying. CAIS tapped Varun Krovi, a longtime lobbyist and former chief of staff for retired Michigan Democratic Rep. Brenda Lawrence, to lead the effort.
In May, CAIS issued a single-sentence statement labeling AI an extinction risk that should be approached as “a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The statement included signatures from industry luminaries like OpenAI CEO Sam Altman and Demis Hassabis, head of Google DeepMind.
In addition to its new lobbying arm, CAIS has directly advised governments. It recently partnered with the United Kingdom on AI risks and became one of the first members of the U.S. Commerce Department’s AI Safety Institute Consortium announced earlier this month.
Funding for CAIS also shows connections to the AI industry and effective altruism. Open Philanthropy has donated more than $10.5 million to CAIS since November 2022. Moskovitz and other leaders at Open Philanthropy have financial and personal ties to leading AI firms, including OpenAI and Anthropic. CAIS also received $6.5 million from the cryptocurrency firm FTX before it collapsed in late 2022. Its disgraced founder, Sam Bankman-Fried, is another well-known effective altruist. (The bankrupt FTX has since demanded information from CAIS regarding that donation, and Calvin declined to say whether CAIS would return the money.)
Calvin took pains to separate the CAIS Action Fund’s work from its parent organization’s funding, saying Open Philanthropy “has not made any contributions” and that the new lobbying push is instead funded by a “mix of donors.” Beyond lobbying Congress on AI safety, Calvin said his group is advocating for increased R&D investments and more AI funding for agencies like the National Science Foundation and the National Institute of Standards and Technology.
Divyansh Kaushik, an AI policy researcher at the Federation of American Scientists — a group that’s also part of the AI safety consortium launched this month by the Commerce Department — said the direct lobbying efforts represent a “second stage” for existential risk organizations in Washington. He framed the new push as an attempt to professionalize after those groups previously failed to provide lawmakers with draft legislation or other practical approaches to addressing abstract AI risks.
“I think these organizations are now starting to realize that perhaps the rhetoric that was being put forward early on has been somewhat counterproductive to their causes, and they’re trying to make it more relatable for those members,” said Kaushik. “They’re trying to build more mature efforts to get in front of more members, make arguments that at least sound reasonable.”