
PRESS RELEASE. Senator Wiener’s Groundbreaking Artificial Intelligence Bill Advances To The Assembly Floor With Amendments Responding To Industry Engagement.

AUGUST 15, 2024

SACRAMENTO – The Assembly Appropriations Committee passed Senator Scott Wiener’s (D-San Francisco) Senate Bill 1047 with significant amendments introduced by the author. SB 1047 is legislation to ensure the safe development of large-scale artificial intelligence systems by establishing clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems. The bill now advances to the Assembly floor, where it will be eligible for a vote on August 20th and must pass by August 31st.

“The Assembly will vote on a strong AI safety measure that has been revised in response to feedback from AI leaders in industry, academia, and the public sector,” said Senator Wiener. “We can advance both innovation and safety; the two are not mutually exclusive. While the amendments do not reflect 100% of the changes requested by Anthropic—a world leader on both innovation and safety—we accepted a number of very reasonable amendments proposed, and I believe we’ve addressed the core concerns expressed by Anthropic and many others in the industry. These amendments build on significant changes to SB 1047 I made previously to accommodate the unique needs of the open source community, which is an important source of innovation.

“With Congress gridlocked over AI regulation—aside from banning TikTok, Congress has passed no major technology regulation since computers used floppy disks—California must act to get ahead of the foreseeable risks presented by rapidly advancing AI while also fostering innovation.”

The major amendments to SB 1047, which will be published by the Senate in the coming days, are:

  • Removing perjury – Replaces criminal penalties for perjury with civil penalties, leaving no criminal penalties in the bill. Opponents had misrepresented this provision, and a civil penalty serves well as a deterrent against lying to the government.
  • Eliminating the FMD – Removes the proposed new state regulatory body (formerly the Frontier Model Division, or FMD). SB 1047’s enforcement has always run through the Attorney General’s (AG’s) office, and this amendment streamlines the regulatory structure without significantly impacting the ability to hold bad actors accountable. Some of the FMD’s functions have been moved to the existing Government Operations Agency.
  • Adjusting legal standards – Changes the legal standard under which developers must attest they have fulfilled their commitments under the bill from “reasonable assurance” to “reasonable care,” which is defined under centuries of common law as the care a reasonable person would have taken. The bill lays out a few elements of reasonable care in AI development, including whether developers consulted NIST standards in establishing their safety plans and how their safety plans compare to those of other companies in the industry.
  • New threshold to protect startups’ ability to fine-tune open-sourced models – Establishes a threshold to determine which fine-tuned models are covered under SB 1047: only models fine-tuned at a cost of at least $10 million are covered. A developer who fine-tunes a model for less than $10 million has no obligations under the bill, so the overwhelming majority of developers fine-tuning open-sourced models will not be covered (a rough sketch of this threshold follows this list).
  • Narrowing, but not eliminating, pre-harm enforcement – Removes the AG’s ability to seek civil penalties unless a harm has occurred or there is an imminent threat to public safety.
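
To make the new fine-tuning threshold concrete, here is a minimal illustrative sketch in Python. It assumes only the $10 million figure stated above; the function and constant names are hypothetical and are not drawn from the bill text.

    # Hypothetical sketch of the fine-tuning coverage threshold described
    # in the amendment summary above; names are illustrative only.
    FINE_TUNE_COST_THRESHOLD_USD = 10_000_000  # figure from the press release

    def fine_tuned_model_is_covered(fine_tuning_cost_usd: float) -> bool:
        """Return True if a fine-tuned model would be covered under the
        amended SB 1047 threshold as summarized above."""
        return fine_tuning_cost_usd >= FINE_TUNE_COST_THRESHOLD_USD

    # A startup spending $250,000 to fine-tune an open-sourced model would
    # not be covered and would have no obligations under the bill.
    assert fine_tuned_model_is_covered(250_000) is False
    assert fine_tuned_model_is_covered(12_000_000) is True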

SB 1047 is supported by the two most-cited AI researchers of all time: the “Godfathers of AI,” Geoffrey Hinton and Yoshua Bengio. Today, Professor Bengio published an op-ed in Fortune in support of the bill.

Of SB 1047, Professor Hinton, former AI lead at Google, said, “Forty years ago when I was training the first version of the AI algorithms behind tools like ChatGPT, no one – including myself – would have predicted how far AI would progress. Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously.

“SB 1047 takes a very sensible approach to balance those concerns. I am still passionate about the potential for AI to save lives through improvements in science and medicine, but it’s critical that we have legislation with real teeth to address the risks. California is a natural place for that to start, as it is the place this technology has taken off.”

Inaccurate claims about the bill have spread online, leading to divided opinions among AI leaders.

In recent weeks, more AI industry leaders have come out in support of SB 1047. Simon Last, co-founder of Notion, was the latest to express support in an op-ed published last week.

Experts at the forefront of AI have expressed concern that failure to take appropriate precautions could have severe consequences, including risks to critical infrastructure, cyberattacks, and the creation of novel biological weapons. A recent survey found that 70% of AI researchers believe safety should be given a higher priority in AI research, while 73% expressed “substantial” or “extreme” concern that AI would fall into the hands of dangerous groups.

In line with President Biden’s Executive Order on Artificial Intelligence, and their own voluntary commitments, several frontier AI developers in California have taken great strides in pioneering safe development practices – implementing essential measures such as cybersecurity protections and safety evaluations of AI system capabilities.

Last September, Governor Newsom issued an Executive Order directing state agencies to begin preparing for AI and to assess its impact on vulnerable communities. The Administration released a report in November examining AI’s most beneficial uses and potential harms.

SB 1047 balances AI innovation with safety by:

  • Setting clear standards for developers of AI models trained with more than 10^26 floating-point operations of computing power that cost over $100 million to train and would be substantially more powerful than any AI in existence today (these thresholds are illustrated in the sketch after this list)
  • Requiring developers of such large “frontier” AI models to take basic precautions, such as pre-deployment safety testing, red-teaming, cybersecurity, safeguards to prevent the misuse of dangerous capabilities, and post-deployment monitoring
  • Creating whistleblower protections for employees of frontier AI laboratories
  • Empowering California’s Attorney General to take legal action in the event the developer of an extremely powerful AI model causes severe harm to Californians or if the developer’s negligence poses an imminent threat to public safety
  • Establishing a new public cloud computing cluster, CalCompute, to enable startups, researchers, and community groups to participate in the development of large-scale AI systems and to align its benefits with the values and needs of California communities
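
As a rough illustration of how the two training thresholds in the first bullet combine, the following Python sketch encodes the 10^26 floating-point-operation and $100 million figures from this release. All names are hypothetical, and the check is a simplification of the statutory definition, not the bill’s actual language.

    # Hypothetical sketch of the coverage thresholds for "frontier" models
    # summarized above; figures are from the press release, names illustrative.
    COMPUTE_THRESHOLD_FLOPS = 10**26           # training compute threshold
    TRAINING_COST_THRESHOLD_USD = 100_000_000  # training cost threshold

    def frontier_model_is_covered(training_flops: float,
                                  training_cost_usd: float) -> bool:
        """Return True if a model exceeds both thresholds described above."""
        return (training_flops > COMPUTE_THRESHOLD_FLOPS
                and training_cost_usd > TRAINING_COST_THRESHOLD_USD)

    # A model trained with 3e26 FLOPs at a cost of $150 million would meet
    # both thresholds; models in existence today generally would not.
    assert frontier_model_is_covered(3e26, 150_000_000) is True
    assert frontier_model_is_covered(5e25, 150_000_000) is False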

SB 1047 is coauthored by Senator Roth (D-Riverside), Senator Susan Rubio (D-Baldwin Park), and Senator Stern (D-Los Angeles) and sponsored by the Center for AI Safety Action Fund, Economic Security Action California, and Encode Justice.
