AI Companies Fight to Stop California AI Safety Rules – WSJ/Yahoo

Preetika Rana

Thu, Aug 8, 2024

California has become ground zero in the fight over artificial-intelligence regulation.

AI startups and tech giants are rallying to kill a bill advancing through the state legislature that they say would impose impossibly vague constraints in the name of safety. Though some in the industry have called for government regulation, they say it should be done at the federal level and with more specificity.

The bill is widely viewed as crucial for how the technology will be regulated across the U.S., as California is home to many AI companies and often has an outsize effect on laws in other states. Proposals to regulate AI nationally have made little progress in Washington.

The California bill, called SB 1047, requires that developers of large AI models conduct safety tests to reduce the risks of “catastrophic harm” from their technology, which it defines as cyberattacks that cause at least $500 million in damage, or mass casualties. The developers also must ensure their AI can be shut down by a human if it starts behaving dangerously.

The bill applies to AI models that meet a certain computing-power threshold and cost more than $100 million to train, roughly the estimated cost of training OpenAI’s GPT-4. Any company doing business in California is covered, regardless of where it is based.

SB 1047 has been passed by California’s Senate and two of the state’s Assembly committees with little opposition. Opponents hope to stop it from passing the full Assembly now that the Democratic-controlled legislature is back in session this week. Democratic Gov. Gavin Newsom’s office didn’t respond to a request for comment on whether he would sign the bill if it passes.

“If it were to go into effect as written, it would have a chilling effect on innovation in California,” said Luther Lowe, the head of public policy at startup accelerator Y Combinator, which has played a leading role in lobbying efforts against the bill.

Meta Platforms and ChatGPT maker OpenAI have also raised concerns about SB 1047, while Google, Microsoft and Anthropic have proposed lengthy amendments.

Scott Wiener, a Democratic state senator from San Francisco who drafted the bill, said he is engaging with the tech industry and is open to changes.

“There are people in the tech sector who are opposed to any and all forms of regulation no matter what it is, even for something reasonable and light-touch,” he said.

Colorado and Utah recently passed some of the country’s first laws regulating AI, but they are narrower in scope than California’s SB 1047.

Some 400 bills related to AI are currently in state legislatures across the U.S., according to the Transparency Coalition, a nonprofit that advocates for regulation of the data on which AI models are trained. About 30 bills are at varying stages in the California legislature, with goals ranging from protecting intellectual-property rights to expanding the definition of child pornography to include images generated by AI.

SB 1047 has received particularly strong industry pushback. The bill’s language says it would mirror a safety-testing framework that OpenAI, Anthropic and other AI companies voluntarily adopted last year. Opponents say the bill doesn’t specify what those tests should be or who would be on a new commission that is supposed to oversee compliance.

“Foundational elements that would underpin the bill’s regulatory architecture are not yet in place, risking misplaced investments, misleading results, and a missed opportunity to focus resources,” Microsoft said in a letter outlining its objections to the bill.

Another concern for opponents is the bill’s prohibition on releasing large AI models “if there is an unreasonable risk” that they “can cause or enable a critical harm.”

“This vague standard creates a legal minefield,” said Anjney Midha, a partner at venture-capital firm Andreessen Horowitz who focuses on AI. “Current technology simply cannot guarantee against these hypothetical scenarios.”

Midha said the bill would discourage large developers from making their models available to the public, fracturing a startup ecosystem that relies on such openness to innovate.

Wiener said the bill only codifies the safety standards that the industry has set on its own. “It’s not overly prescriptive,” he said.

Several computer-science researchers and legal scholars, including Geoffrey Hinton and Yoshua Bengio, who developed much of the technology on which the current generative-AI wave is based, co-signed a letter to Newsom supporting SB 1047.

“It would be a historic mistake to strike out the basic measures of this bill—a mistake that will become even more evident within a year when the next generation of even more capable AI systems is released,” the letter said.

Wiener defended his bill in a room full of startup founders and AI researchers at Y Combinator’s San Francisco headquarters in late July.

“There have been significant exaggerations about the scale of liability for model developers,” he told the audience, including a false rumor that developers would go to prison if a malicious actor misused their technology.

Still, many seemed unswayed.

“There are clauses in there that, honestly as an AI developer, I have no idea what to do,” said Stanford University computer-science professor Andrew Ng, who co-founded Google’s deep-learning research team.
