California Pushes AI Regulation as Experts Reveal Looming Dangers | San Francisco Public Press
Sen. Wiener’s SB 53 seeks to hold AI companies accountable for catastrophic risks as Trump threatens financial retribution.
08.15.2025 | by Jason Winshell
Bay Area technology companies are racing to build powerful artificial intelligence systems they admit could pose “catastrophic risks” to society. But a new report by academic experts commissioned by Gov. Gavin Newsom finds they are resisting transparency and oversight in dangerous ways.
The big AI companies prefer that society trust them to innovate responsibly, and so they have been aggressively lobbying for federal protection from state and local regulations. President Donald Trump responded with a new executive order, “America’s AI Action Plan,” that threatens to financially punish states that try to regulate in ways he deems hinder innovation.
But technology experts say California urgently needs to regulate the AI industry that it’s incubating, drawing on hard lessons learned from delaying regulation of social media companies and the tobacco industry.
So, state Sen. Scott Wiener, D-San Francisco, is following up on his vetoed AI legislation from last year with Senate Bill 53, which is crafted from the state’s own expert recommendations. It seeks to balance beneficial innovation with holding AI giants accountable for their high-risk creations before they harm the public.
In the nine months since Newsom commissioned his AI policy report and its release this June, Bay Area companies have developed AI capabilities that even their creators call dangerous.
The technology that has both experts and company insiders worried is what powers chatbots like OpenAI’s ChatGPT and Anthropic’s Claude. The same AI that can help a college student write a term paper could help a would-be terrorist with minimal scientific knowledge build a biological weapon.
OpenAI warned that its latest AI outperforms 94% of expert virologists on the Virology Capabilities Test — a system devised to evaluate AI models as compared to highly skilled practitioners — and could soon help people with minimal scientific training create biological weapons. Google warned that its system could soon be used to launch major cyberattacks. Anthropic warned that its AI can already help users create and find materials for nuclear weapons.
In response to the increasing risks, all three companies say they are stepping up precautions. OpenAI is monitoring its AI systems more closely and working to make them safer, meaning less likely to be used to cause harm. Google is testing more frequently and has emergency response plans ready, including monitoring for certain actions that will trigger internal alarms. Anthropic has activated its highest level of controls to prevent misuse of its technology to harm individuals or society at large.
But experts say allowing companies to self-regulate as they develop powerful technologies that even they don’t fully understand is a recipe for disaster. As the report concluded, “Without proper safeguards, however, powerful AI could induce severe and, in some cases, potentially irreversible harms.”
Innovating in darkness
Anthropic CEO Dario Amodei recently wrote, “People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology.”
Steve Newman, president of the Golden Gate Institute for AI and creator of Google Docs, echoed that point during a recent state Senate hearing on SB 53.
“I’ve also never seen any technology less well understood by the people who are building it,” Newman said. “Any AI developer will tell you that what we’re doing here is as much alchemy as engineering.”
In other words, there’s a mystical quality to the rapid advances artificial intelligence systems are making as they feed on human intelligence to learn tasks. That mystery is why experts say better regulations are desperately needed.
The lack of understanding is explained by Thomas Woodside, an expert in machine learning and co-founder of Secure AI Project, a co-sponsor of SB 53. The stunning human-like abilities of the latest chatbots are “emergent,” he says, not programmed directly by their creators.
Those advances are born of how the technology is constructed and what it learns from being trained on and exposed to the vast content available on the internet. The high-risk capabilities, such as helping build weapons, arise from the same process.
Now, the question is whether society can reap the benefits of the new technologies while guarding against the catastrophic dangers they pose.
History lessons
The team of AI experts who authored Newsom’s report examined the history of regulation of other industries that posed harms as a model for how to regulate the AI industry. They looked at both good and bad outcomes of past approaches.
They scrutinized the importance of transparency in regulating the energy and tobacco industries, which “misrepresented critical data” to policy makers, who could have intervened earlier to mitigate harms if they had been better informed.
Energy industry executives decades ago knew the catastrophic impacts fossil fuel emissions would have on global climate, kept that knowledge secret and waged disinformation campaigns to push back regulation that would bite into their profits.
Similarly, tobacco industry insiders knew and concealed for generations the health dangers of smoking, even falsely telling Congress that nicotine isn’t addictive.
The report authors say they don’t intend to suggest that AI companies are behaving the same way or have similar motives. Instead, the case studies serve as examples of particularly egregious consequences of poor industry transparency and regulation.
Social media provide a more recent example. Both Wiener and an SB 53 co-sponsor, Teri Olle, director of Economic Security California, say policymakers need to learn from the failure to regulate social media.
“We obviously did nothing for many years around social media and we’ve seen some of the challenges that’s created with kids and disinformation,” Wiener said at a recent hearing on SB 53.
Olle similarly lamented: “Fifteen years ago or more, when social media was a new thing and we were told, ‘just wait till we see what happens,’ you blinked, and now we have all of these kinds of horrors that people are trying to put the glitter back in the bag. So, I think there was a real sense of the learnings from that hesitation.”
Those learnings refer to the negative impacts that social media have on teenagers and young adults who spend significant time using it: poor self-esteem, bullying and mental health problems. In 2024, the U.S. Surgeon General called for social media companiesto post safety warnings about the dangers, like those appearing on cigarette packs.
Regulations that work
It’s an all-too-common rejoinder by industries trying to maximize profits: claiming that regulation stifles competitiveness and innovation. The experts cite three case studies of regulations that struck an appropriate balance between promoting innovation and accountability: those on pesticides, building codes and car safety belts.
California agriculture continues to thrive despite strict regulation of pesticides by the state’s Department of Pesticide Regulation and Environmental Protection Agency. Home building continues throughout the state despite enforcement of strict building codes aimed at ensuring they’re safe. National seatbelt laws did not cripple the American auto industry’s competitiveness.
So, experts argue that a similar regulatory balance is needed for artificial intelligence, particularly given its potential to harm society.
A key finding in the 52-page report commissioned by Newsom is that AI companies need to be far more transparent about how they build, test and secure their technology. The findings are echoed by recent analysis from SaferAI and the Future Of Life Institute, nonprofit organizations that grade the transparency and safety practices of leading AI companies.
The report’s authors relied on Stanford University’s 2024 Foundation Model Transparency Index, which grades AI companies on 100 transparency indicators. It measures how well they document and report on how they build their technology, monitor and respond to problems, and track how the technology is used by others.
Examples include telling the public how their AI is used or revealing what data they use to train their systems. While the authors note improvements over the 2023 rankings, AI companies still lag in multiple transparency measures.
When it comes to explaining how their AI affects people through the products and services they interact with every day, these companies average 15% — clearly flunking this key test of self-regulation.
Massachusetts Institute of Technology professors Daron Acemoglu and Simon Johnson, who shared the 2024 Nobel Prize in Economics, say there is a culture of superiority baked into tech companies that undermines accountability.
“The vibe in the corridors of most tech companies is that men (and sometimes, but not that often, women) of genius are at work, striving for the common good. It is only natural that they should be the ones making the important decisions. When approached this way, the political discourse of the masses becomes something to be manipulated and harvested, not something to be encouraged and protected,” they wrote in “Power and Progress,” their book examining the trajectory of technology revolutions over 1,000 years.
Company insiders share these concerns about accountability. Just last year, as reported by The New York Times, a group of OpenAI insiders blew the whistle on a culture of recklessness and secrecy within the company as it raced for market dominance, publishing a foreboding letter, “A Right to Warn about Advanced Artificial Intelligence.”
Hidden harms
The cutting edge generative AI technology that powers the jaw-dropping conversational abilities and multi-subject expertise of latest chatbots from OpenAI, Anthropic, Google and Meta, undergirds a vast ecosystem of downstream commercial uses that people interact with every day.
“We’ve already seen AI systems effectively collude to raise rent prices and point the finger at innocent black men based on faulty facial recognition technology,” said Kit Walsh, director of AI at the Electronic Frontier Foundation. “The nature of AI is to try to recreate the patterns it sees in its training data, so if you have a history of discrimination then AI will re-create that discrimination, even if you try to take out explicit data about race, gender and so forth. AI has figured out, for instance, that ZIP codes can be a proxy for race and that college activities can be a proxy for gender.”
For example, a 2024 experiment by Bloomberg News unearthed patterns of bias against certain ethnicities and women in AI-driven job application screening using application names as a proxy for gender and ethnicity. Bloomberg created a set of resumes, which it had OpenAI’s ChatGPT 3.5 rank 1,000 times, swapping only the applicant names.
The fictitious names were derived from U.S. Census data associated with specific ethnic groups 90% of the time. The experiment revealed that ChatGPT “favored names from some demographics more often than others, to an extent that would fail benchmarks used to assess job discrimination against protected groups.”
Because developers don’t know how their technology actually works, it becomes critically important to make sure that data used to train it does not incorporate bias and other negative traits of human behavior.
Without transparent insight into the technology, when something goes wrong, it is difficult to identify why or know who is responsible, allowing companies to deny the problem exists because no one can see inside their systems.
So, lawmakers across the country have been advancing laws limiting specific uses of AI. For example, California’s Senate Bill 7, dubbed the “No Robo Bosses Act,” restricts how employers may use AI in employment-related decisions.
To date, California’s laws have targeted specific uses of AI technology — how it is applied in areas like hiring, housing and pricing, where it can exacerbate existing problems.
But no law yet ensures the underlying AI systems themselves are built safely or address the catastrophic risks these systems could pose to the public. SB 53 could fill this regulatory vacuum.
Resisting oversight
Silicon Valley and its Big Tech companies have long resisted regulatory oversight: “Tech has grown up in 40 years of the wilderness in terms of regulation,” Olle said. “There’s no regulatory rubric.”
She noted that there was no agency to turn to for AI regulation.
“I mean, you know how you deal with the wastewater if you were to build manufacturing plants of any kind. You would be immediately in the world of the EPA and the Air Resources Board, and whatever other kind of like regulatory rubrics and regulatory bodies that we have set up over years — appropriate guard rails,” Olle said. “Tech has none of that.”
Trump, eager to roll back what limited AI oversight had been put in place, rescinded former President Joseph Biden’s AI safety rules for government within hours of taking office. In July, the administration published its executive order seeking to prevent AI regulation. “America’s AI Action Plan” even directs federal agencies to “limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding.”
The executive order does not carry the force of law, and its legality and enforceability are unclear. But Trump has been pairing it with financial threats against law firms, academic institutions and state governments with some notable successes.
OpenAI, Anthropic, Microsoft and Google have published statements that generally support Trump’s stance. Meta has stayed silent. But Newsom and other California officials have pledged to resist efforts by Trump and the AI industry to kill regulations.
Newsom says he is committed to strictly regulating the most dangerous AI. In a letteraccompanying his veto of SB 1047 — Wiener’s previous attempt at passing AI regulation — Newsom wrote: “We cannot afford to wait for a major catastrophe to occur before taking action to protect the public. California will not abandon its responsibility. Safety protocols must be adopted.”
The San Francisco Public Press asked the governor’s press office how Trump’s AI Action Plan might affect Newsom’s commitment to protect the public from risks posed by California AI companies, including taking actions to regulate them.
“The Governor does not support preemption of state laws that protect the public from AI-generated child pornography, deepfake porn, or robocall scams against the elderly and he will not risk the well-being of Californians because of Trump’s threats to withhold funding,” Tara Gallegos, Newsom’s spokesperson, wrote in an email. “Having failed to get his group of sycophants to approve an AI moratorium in his ‘Big Beautiful Betrayal,’ Trump now tries to force it on states through bullying and intimidation in his AI Action Plan.”
She added that Newsom was also focused on the benefits of AI when development is carried out responsibly and with thoughtful oversight.
“The Governor believes in the responsible, ethical, and safe adoption of emerging technology as a force multiplier for the benefit of the public, which is why he has issued an executive order and has announced a number of GenAI contracts with state government, all of which undergo a robust risk management process and consultation with organized labor. He also tapped the world’s leading AI experts to guide California’s regulations,” Gallegos wrote.
But the statement did not address whether Newsom was still committed to regulating high-risk AI through legislation, as he had expressed in his prior veto through a letter in which he wrote that he sought “to find the appropriate path forward, including legislation and regulation.”
Art of compromise
The clock is ticking on getting SB 53 to the governor’s desk, with less than a monthremaining in the legislative session. The bill, which awaits approval by the Committee on Appropriations, represents Wiener’s best effort at compromise — legislation that establishes meaningful regulation of an industry that fought last year to block a similar measure.
Last year, Wiener tried to impose AI accountability regulation with Senate Bill 1047, but the tech industry came out in force, with 165 companies and organizations officially opposing it. The Legislature approved it and Newsom vetoed it
Wiener has taken a lighter, more nimble approach with SB 53. Only six professional organizations have objected to the new bill.
However, Politico reports that an explosion of lobbying money is pouring in to derail and reshape AI regulation. So, it’s too soon to tell whether opposition to SB 53 will mount.
SB 53 incorporates the recommendations in the governor’s AI report and solicits ideas from supporters and opponents of regulation. At a recent state Senate hearing, Wiener said, “We took a look at the working group report, and there were a number of very solid policy recommendations in that report. And so, we then incorporated them into SB 53.”
Wiener also addressed Newsom’s key concern from his SB 1047 veto: adaptability. The governor warned that regulation of risky AI systems couldn’t be tied to today’s definitions of “large-scale models” because technology evolves. SB 53 lets the attorney general’s office update the thresholds annually to keep pace.
In SB 53, Wiener is focusing on reducing the greatest AI risks. “It’s not about eliminating risks,” he said at the recent hearing. “Life is about risk. It’s about trying to understand the risks and then trying to get ahead of them in order to reduce the risks. The risks that we’re focused on in SB 53 are more catastrophic risks, whether around chemical, biological, radiological, nuclear weapons, whether around major cybercrimes, threats to critical infrastructure.”
The governor’s AI experts warn that “policy windows do not remain open indefinitely.” California can still establish effective AI governance and provide “clarity to AI companies that drive innovation in California,” they wrote in their report.
Encode AI General Counsel Nathan Calvin, a co-sponsor of SB 53, pointed to this language from the report: “If those whose analysis points to the most extreme risks are right — and we are uncertain if they will be — then the stakes and costs for inaction on frontier AI at this current moment are extremely high.”
Calvin wrote in an email that the U.S. Senate’s 99-to-1 rejection of an AI moratorium preventing state AI regulation sent a strong message.
“It’s clear that the public understands that waiting for Congress to act here is not a coherent strategy, and that absent robust federal action, the states need to be able to protect their residents,” he wrote.