A very good read from a respected source!

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

The AI bill driving a wedge through Silicon Valley – Financial Times

California’s Gavin Newsom has until September 30 to decide whether to sign legislation that will reach far beyond the state

California’s push to regulate artificial intelligence has riven Silicon Valley, as opponents warn the legal framework could undermine competition and the US’s position as the world leader in the technology. Having waged a fierce battle to amend or water down the bill as it passed through California’s legislature, executives at companies including OpenAI and Meta are waiting anxiously to see whether Gavin Newsom, the state’s Democratic governor, will sign it into law. He has until September 30 to decide.

California is the heart of the burgeoning AI industry, and with no federal law to regulate the technology across the US — let alone a uniform global standard — the ramifications would extend far beyond the state. “The rest of the world is certainly paying close attention to what is happening in California and in the US more broadly right now, and the outcome there will most likely have repercussions on other nations’ regulatory efforts,” Yoshua Bengio, a professor at the University of Montreal and a “godfather” of AI, told the Financial Times.

Why does California want to regulate AI?

The rapid development of AI tools that can generate humanlike responses to questions has magnified perceived risks around the technology, ranging from legal disputes such as copyright infringement to misinformation and a proliferation of deepfakes. Some even think it could pose a threat to humanity. President Joe Biden issued an executive order last year aiming to set national standards for AI safety, but Congress has not made any progress in passing national laws.

Liberal California has often jumped in to regulate issues where the federal government has lagged behind. AI is now in its sights with the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, put forward by state senator Scott Wiener. Of the various bills filed in different states, California’s is the most likely to have a real impact, because the state is at the centre of the technological boom, home to top companies including OpenAI, Anthropic, Meta and Google. Bengio said: “The big AI companies which have been the most vocal on this issue are currently locked in their race for market share and profit maximisation, which can lead to cutting corners when it comes to safety, and that’s why we need some rules for those leading this race.”

What does the bill say?

Wiener has said his bill “requires only the largest AI developers to do what each and every one of them has repeatedly committed to do: perform basic safety testing on massively powerful AI models”. The bill would require developers building large models to assess whether they are “reasonably capable of causing or materially enabling a critical harm”, ranging from malicious use or theft to the creation of a biological weapon. Companies would then be expected to take reasonable safeguards against those identified risks.

Developers would have to build a “kill switch” into any new models over a certain size in case they are misused or go rogue. They would also be obliged to draft a safety report before training a new model and to be more transparent — they would have to “report each artificial intelligence safety incident” to the state’s attorney-general and undertake a third-party audit to ensure compliance every year.

It is directed at models that cost more than $100mn to train, roughly the amount required to train today’s top models. But that is a fast-moving target: Anthropic chief executive Dario Amodei has predicted the next group of cutting-edge models will cost $1bn to train and $10bn by 2026. The bill would apply to all companies doing business in California, regardless of where they are based, which would in effect cover every company currently capable of developing top AI models, Bengio said.

It would introduce civil penalties of up to 10 per cent of the cost of training a model against developers whose tools cause death, theft or harm to property. It would also create liabilities for companies that offer computing resources to train those models and for auditing firms, making them responsible for gathering and retaining detailed information about customers’ identities and intentions. Failure to do so could result in fines of up to $10mn.

Who is for the bill and who is against it?

Wiener and his colleagues say there is strong public support for new AI guardrails. He has also won qualified support from leading AI start-up Anthropic and Elon Musk, as well as SAG-AFTRA, an actors’ union, and two women’s groups. On Monday, 100 employees at top AI companies including OpenAI, xAI and Google DeepMind signed a letter calling on Newsom to sign the bill. “It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks,” they wrote.

Critics — including academics such as Stanford AI professor Fei-Fei Li, venture capital firm Andreessen Horowitz and start-up accelerator Y Combinator — argue the bill would hobble early-stage companies and open-source developers who publicly share the code underlying their models. SB 1047 would “slow the pace of innovation, and lead California’s world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere”, warned OpenAI chief strategy officer Jason Kwon in a letter to Wiener last month. He echoed one of the most common complaints: that the senator was meddling in an area that should be dealt with at the federal level.

Opponents also say the bill would stifle innovation by piling onerous requirements on to developers and making them accountable for the use of their AI models by bad actors. It legislates for risks that do not yet exist, they add. Dario Gil, director of research at IBM, said: “Philosophically, anticipating the consequences of how people are going to use your code in software is a very difficult problem. How will people use it, how will you anticipate that somebody will do harm? It’s a great inhibitor. It’s a very slippery slope.”

Dan Hendrycks, director of the Center for AI Safety (CAIS), which played a critical role in formulating the bill, said opponents “want governments to give them a blank cheque to build and deploy whatever technologies they want, regardless of risk or harm to society”. Hendrycks, who is also an adviser to Musk’s xAI, has come under fire from critics who cast CAIS as a fringe outfit overly concerned about existential risks from AI. Opponents also expressed concerns that CAIS had lobbied for influence over a “Board of Frontier Models” that the bill would create, staffed with nine directors drawn from industry and academia and tasked with updating regulations around AI models and ensuring compliance.

Wiener rejected those arguments as “a conspiracy theory”. “The opposition tried to paint anyone supporting the bill as ‘doomers’,” he said. “They said these were science fiction risks; that we were focused on The Terminator [film]. We’re not, we’re focused on very real risks like shutting down the electric grid, or the banking system, or creating a chemical or biological weapon.”

How have the bill’s authors tried to address concerns?

Wiener said he and his team have spent the past 18 months engaging with “anyone that would meet with us” to discuss the bill, including Li and partners at Andreessen and Y Combinator. One of their concerns was that requiring a kill switch for open-source models would prevent other developers from modifying or building on them for fear they might be turned off at a moment’s notice. That could be fatal for young companies and academia, which rely on cheaper or free-to-access open-source models. Wiener’s bill has been amended to exclude open-source models that have been fine-tuned beyond a certain level by third parties, and such models will not be required to have a kill switch. Some of the bill’s original strictures have also been moderated, including narrowing the scope for civil penalties and limiting the number of models covered by the new rules.

Will the bill become law?

SB 1047 easily passed the state’s legislature. Now Newsom has to decide whether to sign the bill, allow it to become law without his signature or veto it. If he vetoes it, California’s legislature could override him with a two-thirds majority vote, but, according to a spokesperson for Wiener, there is virtually no chance of that happening. The last time a California governor’s veto was overridden was in 1980.

The governor is in a tough spot, given the importance of the tech industry to his state. But letting AI grow unchecked could be even more problematic. Wiener said: “I would love for this to be federal legislation: if Congress were to act in this space and pass a strong AI safety bill I’d be happy to pack up and go home. But the sad reality is that while Congress has been very, very successful on healthcare, infrastructure and climate, it’s really struggled with technology regulation . . . Until Congress acts, California has an obligation to lead because we are the heartland of the tech industry.”
