The lawmaker behind California’s vetoed AI bill, SB 1047, has harsh words for Silicon Valley. TechCrunch.
Maxwell Zeff
October 2, 2024
The lawmaker behind California’s recently vetoed AI bill, SB 1047, is going down swinging.
For months, Silicon Valley debated whether SB 1047 would have a chilling effect on California’s AI boom or would protect against catastrophic harms from advanced AI systems. The answer never became clear, since California governor Gavin Newsom decided it was the wrong approach, vetoing the bill on Sunday before it became law.
Now California state senator Scott Wiener tells TechCrunch that some Silicon Valley institutions spread an unprecedented level of “misinformation” about SB 1047 in the months leading up to the veto. (Folks outside of Silicon Valley also criticized SB 1047, including Nancy Pelosi and the U.S. Department of Commerce.)
“I’ve had tough bills before, and bills where there have been misinformation. I’ve never had a bill with this level of misinformation,” Wiener told TechCrunch. “There was a whole propaganda campaign.”
Wiener specifically criticized Y Combinator and Andreessen Horowitz executives for helping spread a narrative that SB 1047 would send startup founders to jail. This is not technically false; theoretically, a developer who lied on AI safety reports that SB 1047 would have required could have gone to jail for committing perjury. But that would only have happened if a developer had lied.
Indeed, some in the tech industry helped spread that idea. In June, Y Combinator CEO Garry Tan signed a letter to California lawmakers claiming that “AI software developers could go to jail” under SB 1047. Earlier that same month, Andreessen Horowitz partner Anjney Midha said in a podcast that “no rational startup founder or academic researcher is going to risk jail time or financial ruin just to advance the state of the art in AI.”
“A16Z was, I think, at the heart of a lot of the opposition to the bill,” Wiener said.
Y Combinator’s head of public policy, Luther Lowe, tells TechCrunch that the debate around SB 1047 is not as clear cut as Wiener makes it out to be.
“The semantic debates alone demonstrate the challenges with bills like SB 1047 being vague and open-ended,” said Lowe in an email.
Andreessen Horowitz pointed TechCrunch toward a letter its chief legal officer wrote months earlier when Wiener made similar claims, stating that SB 1047 “is a deeply troubling and fundamental departure from the way software development has been regulated in this country.”
Another claim Wiener called out as misinformation was that SB 1047 would push AI startups out of California. Wiener claims startups across the country would have been affected equally by SB 1047, as long as they did business in California.
OpenAI chief strategy officer Jason Kwon wrote a letter to Newsom in August stating that SB 1047 would push California’s entrepreneurs out of the state. The Chamber of Progress — a Big Tech trade group representing Meta, Apple, and Google — also has a similar claim on its website.
Wiener also took a jab at Fei-Fei Li, whom many call the godmother of AI, for writing alleged inaccuracies about SB 1047 in an August opinion piece for Fortune magazine. At the time, Li wrote that open source programs that developers download and build with could be shut down by the original AI model provider under SB 1047. A large part of the debate around SB 1047 was how it would impact the open source ecosystem.
“It was crystal clear in the bill that you’re only required to shut down a model if it is in your possession,” Wiener said. “And yet, Fei-Fei Li put that inaccurate statement in her piece. She’s very well respected, so it was unfortunate.”
Li did not immediately respond to TechCrunch’s request for comment.
Lastly, SB 1047’s author took issue with Newsom’s letter explaining why he vetoed the bill, saying the governor “did not do justice in describing the bill.” Newsom certainly did give an unexpected reason for his veto, saying SB 1047 should have targeted more AI models.
“By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology,” said Newsom in the letter. “Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”
Many, including Newsom, have criticized SB 1047 in the past for being overly burdensome, so it was somewhat surprising to see the governor suggest that the bill should have a more flexible framework that would cover more AI models.
Regardless of its failure to become law this time around, Wiener says SB 1047 supercharged the conversation around AI safety in California. On Sunday, Newsom announced a new task force with Li and other researchers to develop guardrails for responsible AI development. The state also passed 18 other laws regulating AI in September.
Wiener is not ruling out the possibility that he’ll return next year with a revamped version of SB 1047.
“It’s too soon to say exactly what we’re going to do,” Wiener said. “We’re absolutely committed to promoting AI safety.”
Editorial Comment:
The following information about Machine intelligence is widely known scientifically and technically, and openly acknowledged, within the frontier AI industry:
- Machine intelligence is not programmed software. Machine intelligence is an autonomous machine learning (ML) neural net “grown” with vast amounts of data, compute, and electricity over many months in large data centers operated by Google, AWS, Microsoft, Meta, xAI, Oracle, Salesforce, et al.
- Machine intelligence is not understood. Scientists actually have no idea how Machine intelligence works inside “the black box”.
- Machine intelligence is not a tool. Machine intelligence is an Agent.
- Machine intelligence is not a technology. Machine intelligence is a new species.
- Machine intelligence is not safer as “open source”. The open source community has zero ability to understand or improve safety of “weights and parameters”.
- Machine intelligence, as currently built, is inherently unpredictable, untestable, and uncontrollable; the operating assumption is “Everything runs unless known to be unsafe.”
- Machine intelligence is expected to achieve a level of super-intelligence exceeding all of the human race combined, in the near future.
- Machine intelligence is expected to transform the world at unprecedented scale with a rapid and uncontrolled recursive “intelligence explosion” (FOOM).
- Machine intelligence is exhibiting deceptive behaviors, manipulative behaviors, and so-called “grokking,” the spontaneous emergence of unanticipated goals and capabilities.
- Machine intelligence is expected to be extremely profitable; trillions of dollars have already been earned by investors, and trillions more are expected.
The current average prediction among AI scientists of the catastrophic probability (p(doom)) that Machine intelligence leads to the extinction of humanity is 10-20%.