“Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously. It’s critical that we have legislation with real teeth to address the risks, and SB 1047 takes a very sensible approach.” — Professor Geoffrey Hinton, “Godfather of AI”

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

August 7, 2024

Letter to CA state leadership from Professors Bengio, Hinton, Lessig, & Russell

Dear Governor Newsom, Senate President pro Tempore McGuire, and Assembly Speaker Rivas,

As senior artificial intelligence technical and policy researchers, we write to express our strong support for California Senate Bill 1047. Throughout our careers, we have worked to advance the field of AI and unlock its immense potential to benefit humanity. However, we are deeply concerned about the severe risks posed by the next generation of AI if it is developed without sufficient care and oversight.

SB 1047 outlines the bare minimum for effective regulation of this technology. It doesn’t have a licensing regime, it doesn’t require companies to receive permission from a government agency before training or deploying a model, it relies on company self-assessments of risk, and it doesn’t even hold companies strictly liable in the event that a catastrophe does occur. Relative to the scale of risks we are facing, this is a remarkably light-touch piece of legislation. It would be a historic mistake to strike out the basic measures of this bill – a mistake that will become even more evident within a year when the next generation of even more capable AI systems is released.

As AI rapidly progresses, we face growing risks that AI could be misused to attack critical infrastructure, develop dangerous weapons, or cause other forms of catastrophic harm. These risks are compounded as firms work on developing autonomous AI agents that can take significant actions without human direction and as these systems become more capable than humans across a broader variety of domains. The challenge of developing incredibly capable AI systems safely should not be underestimated.

Some AI investors have argued that SB 1047 is unnecessary and based on “science fiction scenarios.” We strongly disagree. The exact nature and timing of these risks remain uncertain, but as some of the experts who understand these systems best, we can say confidently that these risks are probable and significant enough to make safety testing and common-sense precautions necessary. If the risks are really science fiction, then companies should have no issue with being held accountable for mitigating them. If the risks do end up materializing, it would be irresponsible to be underprepared. We also believe there is a real possibility that without appropriate precautions, some of these catastrophic risks could emerge within years, rather than decades.

Opponents also claim this bill will hamper innovation and competitiveness, causing startups to leave the state. This is false for multiple reasons:

  • SB 1047 only applies to the largest AI models, models that cost over $100,000,000 to train – costs out of reach for all but the largest startups.

  • Large AI developers have already made voluntary commitments to take many of the safety measures outlined in SB 1047.

  • SB 1047 is less restrictive than similar AI regulations in Europe and China.

  • SB 1047 applies to all developers doing business in California, regardless of where they are headquartered. It would be absurd to expect the large companies impacted to completely withdraw from the 5th largest economy in the world rather than comply with basic measures around safety testing and common-sense guardrails.

  • Finally, at a time when the public is losing confidence in AI and doubting whether companies are acting responsibly, the basic safety checks in SB 1047 will bolster the public confidence that is necessary for AI companies to succeed.

Airplanes, pharmaceutical drugs, and a variety of other complex technologies have been made remarkably safe and reliable through deliberate effort from industry and governments to make it so. (And when regulators have relaxed their rules to allow self-regulation, as in the case of Boeing, the results have been horrific both for the public and for the industry itself.) We need a comparable effort for AI, and can’t simply rely on companies’ voluntary commitments to take adequate precautions when they have such massive incentives to do otherwise. As of now, there are fewer regulations on AI systems that could pose catastrophic risks than on sandwich shops or hairdressers.

In particular, we strongly endorse SB 1047’s robust whistleblower protections for employees who report safety concerns at AI companies. Given the accounts of “reckless” development from employees at some frontier AI companies, such protections are clearly needed. We cannot simply blindly trust companies when they say that they will prioritize safety.

In a perfect world, robust AI regulations would exist at the federal level. But with Congress gridlocked, and the Supreme Court’s dismantling of Chevron deference disempowering administrative agencies, California state laws have an indispensable role to play. California led the way in green energy and consumer privacy and can do it again on AI. President Biden and Governor Newsom’s respective AI executive orders are both a good start towards recognizing these risks, but there are limits on what can be accomplished without new legislation.

The choices the government makes now about how to develop and deploy these powerful AI systems may have profound consequences for current and future generations of Californians, as well as those around the world. We believe SB 1047 is an important and reasonable first step towards ensuring that frontier AI systems are developed responsibly, so that we can all better benefit from the incredible promise AI has to improve the world. We urge you to support this landmark legislation.

Sincerely,

Yoshua Bengio
Professor of Computer Science at Université de Montréal & Turing Award winner

Geoffrey Hinton
Emeritus Professor of Computer Science at University of Toronto & Turing Award winner

Lawrence Lessig
Professor of Law at Harvard Law School & founder of Creative Commons

Stuart Russell
Professor of Computer Science at UC Berkeley & Director of the Center for Human-Compatible AI

New poll: SB 1047 has strong bipartisan support

A recent statewide survey of Californians shows a strong intensity of support for developing AI safety regulations to protect Californians, with the vast majority of voters saying it is a very high priority that California take action on this issue. SB 1047 in particular was supported by 77% of voters.

Highlights from SB 1047

Covered models
SB 1047 only applies to AI models larger than any in existence today:
• Models trained on over 10^26 FLOP of computing power AND that cost more than $100M to train
• The vast majority of startups are not covered by the bill
The bill only addresses extreme risks from these models:
• Cyberattacks causing over $500 million in damage
• Autonomous crime causing over $500 million in damage
• Creation of a chemical, biological, radiological, or nuclear weapon using AI

Requirements for developers
Under SB 1047, developers test for risks and adopt precautions for models assessed to be risky:
• Before training: developers adopt cybersecurity precautions, implement shutdown ability, and report safety protocols.
• Before deploying: developers implement reasonable safeguards to prevent societal-scale catastrophes.
• After deploying: developers monitor safety incidents and ensure continued compliance.
• Developers of derivative models have no new duties under SB 1047; obligations fall only on the original developers of covered foundation models.

Enforcement
The provisions in SB 1047 are enforced in the following ways:
• Whistleblower protections are provided to employees at frontier labs to ensure that information on compliance is readily available.
• Civil suits can be brought by the Attorney General against developers who cause catastrophic harm or threaten public safety by neglecting the requirements.

CalCompute
SB 1047 creates a new CalCompute research cluster to support academic research on AI and the startup ecosystem, inspired by federal work on the National Artificial Intelligence Research Resource Pilot (NAIRR).

Open-source advisory council
SB 1047 establishes a new advisory council to advocate for and support open-source AI development in California.

Transparent pricing
SB 1047 requires cloud computing providers and frontier model developers to provide fair and transparent pricing, to avoid price discrimination impeding competition in California.

FAQ

Why is SB 1047 needed?

AI has the potential to produce incredible benefits for California, with innovative companies driving unprecedented advances in medicine, climate change, wildfire prevention, and clean power development. The next generation of this technology also presents new risks—with large, untested AI models presenting the threat of cyberattacks, autonomous crime, or even the development of chemical or biological weapons. Recognizing that powerful next-generation AI systems bring both benefits and risks, Senator Wiener introduced SB 1047 to promote the responsible development of large AI models, and support competition and innovation that benefits all Californians.
