A California Bill to Regulate A.I. Causes Alarm in Silicon Valley
A California state senator, Scott Wiener, wants to stop the creation of dangerous A.I. But critics say he is jumping the gun.
Cade Metz reported from San Francisco, and Cecilia Kang from Washington.
A California bill that could impose restrictions on artificial intelligence has tech companies, investors and activists scrambling to explain what the first-of-its-kind legislation could mean for their industry in the state.
The bill is still winding its way through the state Capitol in Sacramento. It is expected to reach the California State Assembly's Appropriations Committee on Thursday before facing a vote by the full Assembly.
If signed into law by Gov. Gavin Newsom, the bill would require companies to test the safety of powerful A.I. technologies before releasing them to the public. It would also allow California’s attorney general to sue companies if their technologies cause serious harm, such as mass property damage or human casualties.
The debate over the A.I. bill, called SB 1047, is a reflection of the arguments that have driven intense interest in artificial intelligence. Opponents believe it will choke the progress of technologies that promise to increase worker productivity, improve health care and fight climate change.
Supporters believe the bill will help prevent disasters and place guardrails on the work of companies that are too focused on profits. Just last year, many A.I. experts and tech executives led public discussions about the risks of A.I. and even urged lawmakers in Washington to help set up those guardrails.
Now, in an about-face, the tech industry is recoiling at an attempt to do exactly that in California. Because they are based in California or do business in the state, many of the leading A.I. companies, including Google, Meta, Anthropic and OpenAI, would be bound by the proposed law, which could set a precedent for other states and national governments.
SB 1047 arrives at a precarious time for the San Francisco Bay Area, where much of the A.I. start-up community, as well as many of the industry’s biggest companies, is based. The bill, its harshest critics argue, could push A.I. development into other states, just as the region is rebounding from a pandemic-induced slump.
Some notable A.I. researchers have supported the bill, including Geoffrey Hinton, the former Google researcher, and Yoshua Bengio, a professor at the University of Montreal. The two have spent the past 18 months warning of the dangers of the technology. Other A.I. pioneers have come out against the bill, including Meta’s chief A.I. scientist, Yann LeCun, and the former Google executives and Stanford professors Andrew Ng and Fei-Fei Li.
Mr. Newsom’s office declined to comment. Google, Meta and Anthropic also declined to comment. An OpenAI spokeswoman said the bill could slow innovation by creating an uncertain legal landscape for building A.I. The company said it had expressed its concerns in meetings with the office of California State Senator Scott Wiener, who wrote the bill, and that serious A.I. risks were national security issues that should be regulated by the federal government, not by states.
The bill has roots in “A.I. salons” held in San Francisco. Last year, Mr. Wiener attended a series of those salons, where young researchers, entrepreneurs, activists and amateur philosophers discussed the future of artificial intelligence.
After sitting in on those discussions, Mr. Wiener said he created SB 1047, with input from the lobbying arm of the Center for A.I. Safety, a think tank with ties to effective altruism, a movement that has long been concerned with preventing existential threats from A.I.
The bill would require safety tests for systems that cost more than $100 million to train and that are built using a defined amount of raw computing power, a threshold set at 10^26 computational operations. It would also create a new state agency to define and monitor those tests. Dan Hendrycks, a founder of the Center for A.I. Safety, said the bill would push the largest tech companies to identify and remove harmful behavior from their most expensive technologies.
“Complex systems will have unexpected behavior. You can count on it,” Dr. Hendrycks said in an interview with The New York Times. “The bill is a call to make sure that these systems don’t have hazards or, if the hazards do exist, that the systems have the appropriate safeguards.”
Today’s A.I. technologies can help spread disinformation online, in the form of text, still images and video. They are also beginning to take away some jobs. But studies by OpenAI and others over the past year found that today’s A.I. technologies were not significantly more dangerous than search engines.
Still, some A.I. experts argue that serious dangers are on the horizon. In one example, Dario Amodei, the chief executive of the high-profile A.I. start-up Anthropic, told Congress last year that new A.I. technology could soon help unskilled people create large-scale biological attacks.
Mr. Wiener said he was trying to head off those scary scenarios.
“Historically, we have waited for bad things to happen and then wrung our hands and dealt with it later, sometimes when the horse was out of the barn and it was too late,” Mr. Wiener said in an interview. “So my view is, let’s try to, in a very light touch way, get ahead of the risks and anticipate the risks.”
Google and Meta sent letters to Mr. Wiener expressing concerns about the bill. Anthropic, Mr. Amodei’s company, surprised many observers when it also opposed the bill in its current form and suggested changes that would allow companies to control their own safety testing. The company said the government should only become involved if real harms were caused.
Mr. Wiener said the opposition by tech giants sent mixed messages. The companies have already promised the Biden administration and global regulators that they would test their systems for safety.
“The C.E.O.s of Meta, of Google, of OpenAI — all of them — have volunteered to do testing, and that’s what this bill asks them to do,” he said.
The bill’s critics say they are worried that the safety rules will add new liability to A.I. development, since companies will have to promise that they have taken reasonable steps to ensure their models are safe before releasing them. They also argue that the threat of legal action from the state attorney general will discourage tech giants from sharing their technology’s underlying software code with other businesses and software developers — a practice known as open source.
Open source is common in the A.I. world. It allows small companies and individuals to build on the work of larger organizations, and critics of SB 1047 argue that the bill could severely limit the options of start-ups that do not have the resources of tech giants like Google, Microsoft and Meta.
“It could stifle innovation,” said Lauren Wagner, an investor and researcher who has worked for both Google and Meta.
Open-source backers believe that sharing code allows engineers and researchers across the industry to quickly identify and fix problems and improve technologies.
Jeremy Howard, an entrepreneur and A.I. researcher who helped create the technologies that drive the leading A.I. systems, said the new California bill would ensure that the most powerful A.I. technologies belonged solely to the biggest tech companies. And if these systems were to eventually exceed the power of the human brain, as some A.I. researchers believe they will, the bill would consolidate power in the hands of a few corporations.
“These organizations would have more power than any country — any entity of any kind. They would be in control of an artificial super intelligence,” Mr. Howard said. “That is a recipe for disaster.”
Others argue that if open source development is not allowed to flourish in the United States, it will flow to other countries, including China. The solution, they argue, is to regulate how people use A.I. rather than regulating the creation of the core technology.
“A.I. is like a kitchen knife, which can be used for good things, like cutting an onion, and bad things, like stabbing a person,” said Sebastian Thrun, an A.I. researcher and serial entrepreneur who founded the self-driving car project at Google. “We shouldn’t try to put an off-switch on a kitchen knife. We should try to prevent people from misusing it.”
Cade Metz writes about artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas of technology.
Cecilia Kang reports on technology and regulatory policy and is based in Washington, D.C. She has written about technology for over two decades.