“Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously. It’s critical that we have legislation with real teeth to address the risks, and SB 1047 takes a very sensible approach.” – Professor Geoffrey Hinton, “Godfather of AI”
August 7, 2024
Letter to CA state leadership from Professors Bengio, Hinton, Lessig, & Russell
Dear Governor Newsom, Senate President pro Tempore McGuire, and Assembly Speaker Rivas,
As senior artificial intelligence technical and policy researchers, we write to express our strong support for California Senate Bill 1047. Throughout our careers, we have worked to advance the field of AI and unlock its immense potential to benefit humanity. However, we are deeply concerned about the severe risks posed by the next generation of AI if it is developed without sufficient care and oversight.
SB 1047 outlines the bare minimum for effective regulation of this technology. It doesn’t have a licensing regime, it doesn’t require companies to receive permission from a government agency before training or deploying a model, it relies on company self-assessments of risk, and it doesn’t even hold companies strictly liable in the event that a catastrophe does occur. Relative to the scale of risks we are facing, this is a remarkably light-touch piece of legislation. It would be a historic mistake to strike out the basic measures of this bill – a mistake that will become even more evident within a year when the next generation of even more capable AI systems is released.
As AI rapidly progresses, we face growing risks that AI could be misused to attack critical infrastructure, develop dangerous weapons, or cause other forms of catastrophic harm. These risks are compounded as firms work on developing autonomous AI agents that can take significant actions without human direction and as these systems become more capable than humans across a broader variety of domains. The challenge of developing incredibly capable AI systems safely should not be underestimated.
Some AI investors have argued that SB 1047 is unnecessary and based on “science fiction scenarios.” We strongly disagree. The exact nature and timing of these risks remain uncertain, but as some of the experts who understand these systems best, we can say confidently that these risks are probable and significant enough to make safety testing and common-sense precautions necessary. If the risks are really science fiction, then companies should have no issue with being held accountable for mitigating them. If the risks do end up materializing, it would be irresponsible to be underprepared. We also believe there is a real possibility that without appropriate precautions, some of these catastrophic risks could emerge within years, rather than decades.
Opponents also claim this bill will hamper innovation and competitiveness, causing startups to leave the state. This is false for multiple reasons:
SB 1047 only applies to the largest AI models, models that cost over $100,000,000 to train – costs out of reach for all but the largest startups.
Large AI developers have already made voluntary commitments to take many of the safety measures outlined in SB 1047.
SB 1047 is less restrictive than similar AI regulations in Europe and China.
SB 1047 applies to all developers doing business in California, regardless of where they are headquartered. It would be absurd to expect the large companies impacted to completely withdraw from the 5th largest economy in the world rather than comply with basic measures around safety testing and common-sense guardrails.
Finally, at a time when the public is losing confidence in AI and doubting whether companies are acting responsibly, the basic safety checks in SB 1047 will bolster the public confidence that is necessary for AI companies to succeed.
Airplanes, pharmaceutical drugs, and a variety of other complex technologies have been made remarkably safe and reliable through deliberate effort from industry and governments to make it so. (And when regulators have relaxed their rules to allow self-regulation, as in the case of Boeing, the results have been horrific both for the public and for the industry itself.) We need a comparable effort for AI, and can’t simply rely on companies’ voluntary commitments to take adequate precautions when they have such massive incentives to do otherwise. As of now, there are fewer regulations on AI systems that could pose catastrophic risks than on sandwich shops or hairdressers.
In particular, we strongly endorse SB 1047’s robust whistleblower protections for employees who report safety concerns at AI companies. Given the accounts of “reckless” development from employees at some frontier AI companies, such protections are clearly needed. We cannot simply blindly trust companies when they say that they will prioritize safety.
In a perfect world, robust AI regulations would exist at the federal level. But with Congress gridlocked, and the Supreme Court’s dismantling of Chevron deference disempowering administrative agencies, California state laws have an indispensable role to play. California led the way in green energy and consumer privacy and can do it again on AI. President Biden and Governor Newsom’s respective AI executive orders are both a good start towards recognizing these risks, but there are limits on what can be accomplished without new legislation.
The choices the government makes now about how to develop and deploy these powerful AI systems may have profound consequences for current and future generations of Californians, as well as those around the world. We believe SB 1047 is an important and reasonable first step towards ensuring that frontier AI systems are developed responsibly, so that we can all better benefit from the incredible promise AI has to improve the world. We urge you to support this landmark legislation.
Sincerely,
Yoshua Bengio
Professor of Computer Science at Université de Montréal & Turing Award winner
Geoffrey Hinton
Emeritus Professor of Computer Science at University of Toronto & Turing Award winner
Lawrence Lessig
Professor of Law at Harvard Law School & founder of Creative Commons
Stuart Russell
Professor of Computer Science at UC Berkeley & Director of the Center for Human-Compatible AI
New poll: SB 1047 has strong bipartisan support
A recent statewide survey of Californians shows a strong intensity of support for developing AI safety regulations to protect Californians, with the vast majority of voters saying it is a very high priority that California take action on this issue. SB 1047 in particular was supported by 77% of voters. Read more about the poll here.
Highlights from SB 1047
Covered models
SB 1047 only applies to AI models larger than any in existence today:
• Models trained on over 10^26 FLOP of computing power AND that cost more than $100M to train
• The vast majority of startups are not covered by the bill
The bill only addresses extreme risks from these models:
• Cyberattacks causing over $500 million in damage
• Autonomous crime causing over $500 million in damage
• Creation of a chemical, biological, radiological, or nuclear weapon using AI
Requirements for developers
Under SB 1047, developers test for risks and adopt precautions for models assessed to be risky:
• Before training: developers adopt cybersecurity precautions, implement shutdown ability, and report safety protocols.
• Before deploying: developers implement reasonable safeguards to prevent societal-scale catastrophes.
• After deploying: developers monitor and report safety incidents and continued compliance.
• Developers of derivative models have no new duties under SB 1047; the requirements fall only on the original developers of foundation models.
Enforcement
The provisions in SB 1047 are enforced in the following ways:
• Whistleblower protections are provided to employees at frontier labs to ensure that information on compliance is readily available.
• Civil suits can be brought by the Attorney General against developers who cause catastrophic harm or threaten public safety by neglecting the requirements.
CalCompute
SB 1047 creates a new CalCompute research cluster to support academic research on AI and the startup ecosystem, inspired by federal work on the National Artificial Intelligence Research Resource (NAIRR) Pilot.
Open-source advisory council
SB 1047 establishes a new advisory council to advocate for and support open-source AI development in California.
Transparent pricing
SB 1047 requires cloud computing providers and frontier model developers to provide fair and transparent pricing, to avoid price discrimination impeding competition in California.
FAQ
Why is SB 1047 needed?
California has become a vibrant hub for artificial intelligence. Universities, startups, and technology companies are using AI to accelerate drug discovery, coordinate wildfire responses, optimize energy consumption, discover rare minerals needed for clean energy, and enhance creativity. Artificial intelligence has enormous potential to benefit our state and the world. California must act now to ensure that it remains at the forefront of dynamic innovation in AI development.
At the same time, scientists, engineers, and business leaders at the cutting edge of this technology – including the three most cited machine learning researchers of all time – have repeatedly warned policymakers that failure to take appropriate precautions to prevent irresponsible AI development could have severe consequences for public safety and national security. California must ensure that the small handful of companies developing extremely powerful AI models — including companies explicitly aiming to develop “artificial general intelligence” — take reasonable care to prevent their models from causing very serious harms as they continue to produce models of greater and greater power.
What does SB 1047 do?
The bill has two main components:
Promotes responsible AI development: The bill defines a set of hazardous impacts the largest AI models could have, from cyberattacks to the development of biological weapons. It requires developers of these AI models to conduct self-assessments to ensure these outcomes will be prevented and empowers the Attorney General to take action against developers whose technology causes catastrophic harm or threatens public safety.
Supports AI competition and innovation: The bill also promotes ongoing academic research on AI, creating CalCompute, a new state research cluster to support the AI startup ecosystem. The legislation creates a new open-source advisory council that will be tasked with advocating for and supporting safe and secure open-source AI development. The bill also promotes competition by requiring large-scale AI developers to provide fair and transparent pricing.
How does SB 1047 prevent risks from AI models with hazardous capabilities?
SB 1047 sets out clear standards for developers of AI models trained using a quantity of computing power greater than 10^26 floating-point operations (and other models with similar capabilities). These models, which would cost over $100,000,000 to train, would be substantially more powerful than any model that exists today.
Specifically, SB 1047 clarifies that developers of these models must invest in basic precautions such as pre-deployment safety testing, red-teaming, cybersecurity, safeguards to prevent the misuse of dangerous capabilities, and post-deployment monitoring. Furthermore, developers of covered models must disclose the precautionary measures they have taken to the California Department of Technology. If the developer of an extremely powerful model causes severe harm to Californian citizens by behaving irresponsibly, or if the developer’s negligence poses an imminent threat to public safety, SB 1047 empowers the Attorney General of California to take appropriate enforcement action.
SB 1047 also creates whistleblower protections for employees of frontier laboratories, and requires companies that provide cloud compute for frontier model training to institute “know your customer” policies to help prevent the dangerous misuse of AI systems.
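To put the bill’s thresholds in perspective, here is a rough back-of-the-envelope sketch in Python relating the 10^26 FLOP figure to the roughly $100 million training-cost floor. The GPU throughput and rental price below are illustrative assumptions of ours, not figures from the bill:

```python
# Rough sanity check of SB 1047's 10^26 FLOP covered-model threshold.
# Hardware and price figures are assumed ballpark values, not taken
# from the bill itself.

THRESHOLD_FLOP = 1e26           # SB 1047's compute threshold

# Assumption: an H100-class GPU sustains ~5e14 FLOP/s in real training
# (about half of its ~1e15 FLOP/s peak dense bf16 throughput).
SUSTAINED_FLOP_PER_SEC = 5e14
COST_PER_GPU_HOUR_USD = 2.50    # assumed cloud rental rate

gpu_seconds = THRESHOLD_FLOP / SUSTAINED_FLOP_PER_SEC
gpu_hours = gpu_seconds / 3600
compute_cost_usd = gpu_hours * COST_PER_GPU_HOUR_USD

print(f"GPU-hours to reach threshold: {gpu_hours:,.0f}")         # ~55,555,556
print(f"Approximate compute cost:     ${compute_cost_usd:,.0f}")  # ~$138,888,889
```

Under these assumptions, crossing the threshold takes on the order of 55 million GPU-hours, or roughly $140 million in compute alone, consistent with the bill’s statement that covered models cost over $100,000,000 to train.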
What are “hazardous capabilities”?
SB 1047 is focused on models capable of causing extraordinary harms that involve the creation of weapons of mass destruction, or AI systems causing $500 million of damage through cyberattacks or autonomously executed criminal activity.
These are extreme capabilities that models currently do not possess. It’s possible that models trained in the next couple of years will have these capabilities, and so developers need to start taking reasonable, narrowly targeted precautions when training the most advanced models.
How does SB 1047 help the AI ecosystem in California to thrive?
SB 1047 helps ensure California remains the world leader in AI innovation, by establishing a process to create a public cloud-computing cluster that will conduct research into the safe and secure deployment of large-scale artificial intelligence (AI) models. The cluster will allow smaller startups, researchers, and community groups to participate in the development of large-scale AI systems, helping to align them with the needs of California communities.
Additionally, to support the flourishing open-source ecosystem, SB 1047 creates a new advisory council to advocate for and support safe and secure open-source AI development.
Finally, in order to ensure that smaller startup developers have equal opportunities to larger players, SB 1047 requires cloud-computing companies and frontier model developers to provide transparent pricing and avoid price discrimination.
Can startups easily comply with SB 1047?
Yes. SB 1047’s requirements apply only to an extremely small set of AI developers: those making the largest, most compute-intensive models, which cost in excess of $100 million to train. The vast majority of AI startups, and all AI application and use-case developers, would have few or no new duties under SB 1047.
Can major developers of covered models comply with SB 1047?
Yes. Similar safety testing and disclosure are already being done by many leading developers. The voluntary commitments made to the White House and President Biden’s Executive Order call for similar actions. Indeed, many frontier AI developers have already put in place many of these precautions, such as capabilities testing and the development of safety and security protocols.
How could an open-source developer comply with this legislation?
A developer can open-source any AI model covered by this bill so long as they conduct safety tests and reasonably determine that it doesn’t have specific, highly hazardous capabilities. The author is actively working with developers to ensure these tests are minimally burdensome and continues to welcome input on how innovation can be fostered safely.
What requirements does this create for developers?
The bill requires developers of the largest AI models, which cost well over $100 million to train today, to conduct self-assessments to protect against potential risks and adopt a set of defined precautions. These steps include:
Before training a model self-assessed to be risky: developers must adopt cybersecurity precautions, implement shutdown ability, follow guidance from the National Institute of Standards and Technology and standard-setting organizations, and report safety protocols.
Before deploying a model self-assessed to be risky: developers must implement reasonable safeguards to prevent societal-scale catastrophes.
After deploying a model self-assessed to be risky: developers must monitor and report safety incidents and continued compliance.
Will developers need approval or a license from the government before training or deploying AI models?
No. Developers self-assess whether their models are safe to deploy and need not wait for approval from any government agency.
AI has the potential to produce incredible benefits for California, with innovative companies driving unprecedented advances in medicine, climate change, wildfire prevention, and clean power development. The next generation of this technology also presents new risks—with large, untested AI models presenting the threat of cyberattacks, autonomous crime, or even the development of chemical or biological weapons. Recognizing that powerful next-generation AI systems bring both benefits and risks, Senator Wiener introduced SB 1047 to promote the responsible development of large AI models, and support competition and innovation that benefits all Californians.
Learn more
- AI Companies Fight to Stop California [AGI] Safety Rules. WSJ/Yahoo. Aug 8, 2024.
- California Senator Scott Wiener Introduced New Bill that Would Require a Kill Switch for Applicable Artificial Intelligence Models. Benesch. Feb 15, 2024.
- California’s SB-1047: Understanding the Safe and Secure Innovation for Frontier Artificial Intelligence Act. DLA Piper. Feb 20, 2024.
- The Pros and Cons of California’s Proposed SB-1047 AI Safety Law. Gabriel Weil. May 8, 2024. (Argues that creators of frontier AI models should be strictly liable for the harms those models cause.)
- The brewing storm over California’s AI bill. Reed Albergotti. Aug 9, 2024.
Top researchers Yoshua Bengio, @geoffreyhinton, Lawrence @Lessig & Stuart Russell are calling for lawmakers to pass SB 1047.
“As of now, there are fewer regulations on AI systems that could pose catastrophic risks than on sandwich shops or hairdressers.” https://t.co/Qd8Taw7i5G
— Senator Scott Wiener (@Scott_Wiener) August 8, 2024
🚨 New @TheAIPI polling shows that a strong bipartisan majority of Californians support the current form of SB1047, and overwhelmingly reject changes proposed by some AI companies to weaken the bill.
Especially notable: “Just 17% of voters agree with Anthropic’s proposed… pic.twitter.com/wXWXYq1wEz
— Future of Life Institute (@FLI_org) August 7, 2024
New letter from @geoffreyhinton, Yoshua Bengio, Lawrence @Lessig, and Stuart Russell urging Gov. Newsom to sign SB 1047.
“We believe SB 1047 is an important and reasonable first step towards ensuring that frontier AI systems are developed responsibly, so that we can all better… pic.twitter.com/yuL74lNf7x
— Dan Hendrycks (@DanHendrycks) August 7, 2024
In addition to that, Anthropic co-founder Ben Mann recently said that with fine tuning and prompt engineering, there is a 30% chance that Claude 3 could succeed in dangerous capabilities evaluations on autonomous replication and CBRN risks. pic.twitter.com/ia0z85uNvD
— ControlAI (@ai_ctrl) August 5, 2024
It is very interesting that Li has come out against SB-1047 immediately after taking funding from Andreessen Horowitz, one of the bill’s biggest detractors. https://t.co/7V6WhcpiE1 pic.twitter.com/oWwlIfvPQQ
— Shakeel (@ShakeelHashim) August 6, 2024