Letter: California’s AI bill matters | Financial Times
From Jason Green-Lowe, Executive Director, Centre for AI Policy, Washington, DC, US
California’s artificial intelligence (AI) safety bill (SB 1047) represented America’s best opportunity to date to establish guardrails and rules of the road for this emerging technology. The governor’s veto ignores the urgent need for proactive measures to mitigate the risks of advanced AI (FT View, October 3).
Your editorial rightly notes that it would “be better if safety rules were hashed out and enacted at a federal level”. However, with Congress bogged down in election-year politicking, states cannot afford to wait any longer.
It may sound appealing to limit regulation of AI to “high-risk environments”, but advanced AI is a general-purpose tool that cannot be easily confined to any one industry. By their very nature, neural networks have the potential to learn and perform a wide range of tasks. For instance, an AI model designed for document translation could end up controlling critical systems such as the power grid, cell towers, weapons systems and stock markets.
SB 1047 would have introduced much-needed accountability measures for large AI companies, such as requiring companies spending over $100mn on AI model training to implement basic safety protocols and maintain the ability to shut down potentially harmful systems.
We don’t need to “rework the proposed rules”; we need to swiftly pass them into law, whether at the state or federal level. The urgency of AI safety cannot be overstated, and America must act now to prevent serious harm from advanced AI.
Jason Green-Lowe
Executive Director, Centre for AI Policy, Washington, DC, US