🚨 Huge news:
2 million members strong California Federation of Labor Unions @CaliforniaLabor has joined the diverse chorus of organizations, individuals, and 80% of the public calling on @CAgovernor @GavinNewsom to #signSB1047! pic.twitter.com/F3PzZvYu9D
— Future of Life Institute (@FLI_org) September 27, 2024
“For AI to be sustainable, it must be safe. As with any transformative technology, the risks imperil the benefits.”
“Why California Gov. Gavin Newsom should sign an AI model regulation bill,” by @AnthonyNAguirre. https://t.co/QwSdvvuuya
— Bulletin of the Atomic Scientists (@BulletinAtomic) September 27, 2024
Why California Gov. Gavin Newsom should sign an AI model regulation bill
Bulletin of the Atomic Scientists
By Anthony Aguirre | September 27, 2024
Seven years ago, my colleagues and I convened hundreds of the world’s foremost experts to explore how to realize the best possible futures with artificial intelligence. Guests included leaders from the largest AI corporations, including Google, Meta, and OpenAI. At a meeting on the Monterey Peninsula, where a groundbreaking conference on the regulation of genetic research was held in 1975, they all committed to 23 “Asilomar Principles”: rules they deemed critical for keeping AI safe and beneficial. One of the principles reads:
“Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.”
California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), currently awaiting Governor Newsom’s signature, embodies this promise. It requires that developers of the most powerful AI systems write and follow safety and security protocols to prevent critical harm. If they don’t, they can be held liable. By focusing only on the most capable (and therefore most dangerous) AI systems, the bill aims to prevent the worst harms while leaving room for smaller developers to innovate freely. It balances the need to keep people safe with the freedom to prosper, the goal of all laws to some degree. This balance is crucial to “having our cake” with AI.
This balance is also why the bill is so popular. It swept through California’s Senate 32-1 and its Assembly 49-16. The world’s foremost AI experts are behind it, lauding it as light-touch legislation that will “protect consumers and innovation.” Preeminent technologists have expressed their support, calling it a “good compromise.” World leaders have praised it as a step toward urgently needed AI governance. Crucially for Newsom, 59 percent of Californians support it (including 64 percent of tech workers), with only 20 percent opposed.
Yet Google, Meta, and OpenAI have spoken out against SB 1047. Why? They continue to warn about the enormous risks of ungoverned AI: “lights out for all of us,” in the words of OpenAI’s Sam Altman. By OpenAI’s own admission, its latest model carries a “medium” risk of enabling the creation of bioweapons. These companies have restated their commitment to safety. They have repeatedly called to be regulated. But when concrete legislation is proposed that merely codifies their existing promises, they cry overreach. What gives?
Perhaps these companies object to the bill’s other provisions, like those that protect whistleblowers who speak out about irresponsible and unscrupulous corporate behavior. These brave individuals are absolutely critical to delivering accountability. But perhaps companies fear legal action if, under competitive pressure, they cut corners to rush products to market.
The underlying explanation is simpler: Big Tech is gonna Big Tech. The leaders of large tech companies resist any actual constraints, in their unwavering belief that they always know best. Calling for regulation in general and then lobbying furiously against specific laws is straight out of their playbook: just look at data privacy or social media.
But their resistance is precisely why we need lawmakers to step in. In the heat of AI’s frantic arms race, companies may not keep their word. Voluntary commitments will never be enough when the stakes are this high. When private companies are making decisions about so many lives and livelihoods, the public must have a say. In the fight against climate change, Governor Newsom has shown the leadership and foresight to combat escalating threats in the face of intense corporate pressure. By not caving to Big Tech now, he can help keep tech leaders honest, and the public safe.
For AI to be sustainable, it must be safe. As with any transformative technology, the risks imperil the benefits. Beyond the massive harm an AI-enabled catastrophe would cause (be it bioweapons, cyberattacks, or a financial crash), the subsequent shuttering of the industry would deny millions the incredible benefits AI could bring. By signing SB 1047, Newsom can help prevent those catastrophes and secure those benefits. He can set a global standard for sensible AI regulation and help safeguard our future with it.