FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Fast Company. California lawmakers are about to make a huge decision on the future of AI.

Lawmakers will soon vote on a bill that would impose penalties on AI companies failing to safeguard and safety-test their biggest models.

CALIFORNIA’S HOTLY CONTESTED AI BILL NEARS A DECISIVE MOMENT

A new California bill requiring AI companies to enact stricter safety standards has made its way through California’s state house, much to Silicon Valley’s chagrin.

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (otherwise known as SB 1047) would require developers of large “frontier” models (those requiring massive compute power and costing at least $100 million to train) to implement a safeguarding and safety-testing framework, undergo audits, and give “reasonable assurance” that the models won’t cause a catastrophe. Developers would report their safety work to state agencies. The bill also calls for the creation of a new agency, called the Frontier Model Division, which would help the Attorney General and Labor Commissioner with enforcement and with creating new safety standards.

A long list of industry, academic, and political names has lined up to voice their disapproval of SB 1047. The VC firm Andreessen Horowitz and its influence network have produced the loudest voices in opposition. Stanford professors Fei-Fei Li and Andrew Ng have also come out against the bill following meetings with its author, Senator Scott Wiener. (Li has a billion-dollar AI company funded by Andreessen Horowitz. Ng is CEO of Landing AI.) Meta’s Yann LeCun has come out against the bill. So have Reps. Ro Khanna and Zoe Lofgren.

SB 1047’s opponents argue that tech companies have already pledged to lawmakers and the public that they’ll take steps to make sure their models can’t be used to cause catastrophic harm. They also claim that the bill would create burdensome rules that slow progress toward smarter and more capable models, and worry that the new Frontier Model Division would struggle to gain enough expertise to develop safety guidelines for a technology as young and fast-moving as AI. Many have said the bill would make it too risky to open-source AI models, because open-source developers could be held liable if someone later modified or fine-tuned a model to cause harm.

The bill’s sponsors cite a recent survey showing that 65% of Californians support SB 1047 as currently written. Two of AI’s godfathers, Geoffrey Hinton and Yoshua Bengio, also support the bill. But unsurprisingly, the list of big-name opponents to the bill has grown faster than the list of supporters.

This week, the California Assembly’s Appropriations Committee is expected to add a series of amendments to SB 1047. Many of the amendments are focused on cost considerations. If the committee can agree on the amendments, the bill will go to a floor vote in the Assembly, the final step before heading to the governor’s desk.