FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

The inside story of California’s SB-1047, the AI safety bill that exposed Silicon Valley’s deep ideological divides and sparked an unprecedented power struggle over humanity’s most transformative technology.

OpenAI is the latest to oppose a new California bill that would add safety precautions to increasingly powerful models. There's one bill, it's this SB 1047; it's created its own weather system. SB 1047, which is to regulate the explosion of AI into the world, will actually make AI more dangerous. Incredibly dangerous. Just terrible. Quite scary. And the public wants us to do something. You don't want to be heavy-handed too early, but I don't think it's right to do nothing.

I was sitting having dinner with my wife one night in early February, picked up my phone, and saw that a bill from Scott Wiener on AI policy had dropped. Scott Wiener is one of the most powerful people in the California legislature, so a bill from him means more than a lot of other bills. Various really, really smart people started talking to me about the benefits of AI for humanity, but also some of the risks. I started working at Encode; we were one of the three co-sponsors of SB 1047, and we're really worried about AI-assisted catastrophic risk, like bioweapons or chemical weapons. As counsel with the Center for AI Safety, we and the sponsors wanted to do an ambitious bill that really was trying to tackle some of the thorniest problems. I do research on safeguards and on measuring the capabilities of models, measuring how hazardous they are. The VCs were saying, "Yep, there'll be cyberattacks on critical infrastructure, but we would prefer to live in that world." And I think that would be very bad. These models are so powerful that they may pose a risk of societal-level harm, and what we're asking these companies to do, before they put these things out into the world, is to test and see if that's true, and if it's true, to take common-sense precautions. AI could revamp target selection programs. We know that these risks are real, and if there are ways to put guardrails in place to make it harder for people to melt down the banking system or create a biological weapon, we should do that. So I
introduced Senate Bill 1047. SB 1047 sets out clear standards for developers of these extremely powerful AI models, ones that would be substantially more powerful than any system that exists today. The bill had pre-deployment safety testing requirements that would apply to the frontier models. What SB 1047 requires large AI companies to do is to create, publish, and follow your own safety and security protocol: it says how you are going to ensure that you take reasonable care so that there aren't catastrophic harms. There was an emergency shutdown provision; in the event that a model was about to cause a disaster, developers had to have the ability to shut it down. It's: have a safety plan, disclose it publicly, have an external audit of it. This is the minimum; this is the floor. We should at least have some transparency.

For a lot of the history of the debate, it was mostly extremely online nerds fighting with each other. The debate about SB 1047 was between libertarians who believe that the government should do almost nothing regarding almost everything, and another group of libertarians who believe the government should absolutely do far less than it's doing now in most areas, but that AI is a special exception. There's an ethic in Silicon Valley that's very sovereign citizen: DC is not going to tell me what to do, Sacramento is not going to tell me what to do. I learned a lot about the politics of artificial intelligence during this process, and it's a little bit like the Jets and the Sharks from West Side Story. You had the accelerationists and the altruists and these eternal fights, and it's actually a really important argument. You encountered a lot of people who were reacting to the bill as if they had never seen a law, never seen a regulation, never thought about what it would mean to try and design a regulation.

I always knew that this bill was going to be hard. I see you've brought refreshments with you; you think you're going to be
here a while? Just a little coffee. All right. At the beginning of this process, it was us figuring out: is this something that other senators will understand? Will they be willing to entertain these things that they've never really heard about and never really talked about, and how will they take this? And there are steps to making this happen. The bill has to go through the Senate and then the Assembly, but there are many committees between those two stages. In April it passed both of its policy committees in the Senate with very little opposition. SB 1047, Wiener. That bill is out. The work became more intense within the legislature, but the outside dialogue about the bill became more and more high-profile, more broad-based, more intense. A few of the big effective-accelerationist accounts started tweeting about it negatively, and then it quickly caught fire. Dude, what the hell was that bill? First off, written by this guy who's never built anything in his life, responding to all these fantastical fears of runaway genetic super weapons. The right people made noise about it all at the right time, so it finally broke through. The reaction was a barrage of misinformation, misinterpretation, lying. If you go back and scroll on Twitter, you'll find us getting into some of those Twitter fights and trying to explain to people that they're wrong, that they're kind of spouting BS.

There were three categories of critics. There were people who were just a hard no: I'm opposed to this, we shouldn't be regulating AI, and it was so dumb, it would so obviously stifle the golden age of democratized intelligence that's accessible to everyone. Then there were people who had what I would call fake constructive feedback. They would say things like: why 10^26 FLOPs? Why not 10^27? Why not 10^25? Things that are equivalent to: why is the speed limit 65, why not 68? You have to pick a number. Fundamentally they were just opposed to any type of regulation, but they couldn't bring
themselves to say it. There were warnings that this would destroy the AI industry, that companies would leave California, that this would be devastating, that everything was on the line. Then there were constructive critics who came to us and said, "We have concerns about the bill; here are some ideas." And that was frankly a good thing, because from the very beginning what I've wanted on this bill is for really smart people who love the bill or hate the bill to provide us with constructive feedback. Senate Bill 1047 by Senator Wiener. Senator Wiener, the floor is yours. In response to feedback from key stakeholders in the open source and startup communities, I continue to make a commitment to continue soliciting and incorporating constructive feedback. Ayes 32, noes 1. Measure passes. And all of a sudden, from there it was like an avalanche, because people were like: what, this passed?

The open-source ethos ended up being applied to SB 1047 by the community; we all had the opportunity to adversarially test the bill, just like we might red-team a model. I went on Twitter and offered my thoughts about the bill. In the original version of the bill, any model that met, let's say, GPT-4 level was going to be regulated no matter how much it cost to train. But the cost of compute is getting cheaper, and that means the bill was going to be covering lots and lots of models over time, not just the frontier. Then I made the suggestion that if they did want to cover just the biggest models, maybe they should be tying it to cost. He raised points online that were good points, that we looked at and talked about with the senator's legislative director, saying, "It seems like he's raising a fair point here." I want to say it was only a few weeks later that some of those concerns were incorporated into a new set of amendments to the bill. SB 1047 clarifies that the developers of these models, the ones that meet the bill's threshold, would currently pay more than $100 million to
train them. This open process has problems and challenges, and there are things that you don't like about it, but I do think that it is really valuable. Instead of catching things after they're in law, when it's much harder to change them, you catch them in the course of that open process. To me it's not about just winning the fight; I want to get this right, and I am so deeply grateful to the folks who have really worked hard to provide that feedback, including folks in the open source community. So, a lot of constituent AI founders in this audience; I'm eager to hear your questions. Hi Scott, Jeremy Nixon, founder and CEO of Omniscience. Would you be working to ensure that SB 1047 isn't onerous regulation? My name is Jeremy Nixon. SB 1047 was actually something that really bothered me, because the American companies would no longer be allowed to compete in open source, and that actually is a big part of why open source was a central concern of the community. I think the fundamental challenge with this bill is that if you were just regulating proprietary models, then it's relatively easy. The really hard question is: what do you do about open-weight models? We're releasing by far the most advanced open-source model, at 405 billion parameters. It's a big deal. It's going to be more affordable to run than similar closed models, and because it's open, you're going to be able to use it to fine-tune, distill, and train your own models of any size that you want. Meta can do whatever they want to add guardrails to the Llama models and program them to refuse to tell you how to make a bomb or hack into a computer or something, but then it's pretty easy for somebody else to come along, once the weights are available, fine-tune it for a much smaller amount of compute, and produce a model that will do the things that it was previously trained not to do. If you're going to have a bill that prevents large models
from having these capabilities, you pretty much can't have open-weight models. To the concern regarding a model entering the open-weight realm: what do you say to that potential liability? We made an amendment to the bill to make explicit what we had always intended: that once it's no longer in your possession, you are not responsible for ensuring that it can be shut down. My name is Garrison Lovely, and I was probably the only journalist covering SB 1047 full-time. Up until I started writing about it, there was a narrative that this was a bill that was just being pushed by the companies trying to do regulatory capture. SB 1047 did have an emergency shutdown provision, for the event that a model was about to cause a disaster, or was causing a disaster: developers had to have the ability to shut it down. That provision initially caused a lot of concern. The bill was amended to specifically address that concern and clarify that if you produce an open-weight model, you did not have to have the ability to shut it down. But despite that, the opponents of the bill took advantage of the ambiguity in earlier versions, or just the fact that the language changed, to basically make up a version of the bill that is terrifying and unreasonable, and then argue against that. The open source community, the entrepreneurial community, the public sector, academia actually will have our hands tied due to the bill. So in AI policymaking we really have to be careful; this is an early technology. Fei-Fei Li, the "godmother of AI," wrote an op-ed in Fortune arguing against SB 1047, and she made a number of arguments, but the biggest one was saying that SB 1047, through the kill switch, would effectively destroy the open-source development community. And this was just not true; it was an extremely dishonest reading of the bill. I don't know exactly where Li got her facts from, but Andreessen Horowitz, probably the largest venture capital firm in the world, was one of the biggest opponents of
the bill. They produced a 12- or 14-page letter, signed by their chief legal officer, and they made a bunch of arguments that lined up quite well with the claims made by Fei-Fei Li. Fei-Fei Li has received millions of dollars for her startup from Andreessen Horowitz, which was the lead investor. The idea that they were coordinating behind the scenes to kill SB 1047 is not inconsistent with the public information that we know. My perception is that the venture capitalists thought that the Appropriations Committee was one of the last chances to stop the bill; they timed Fei-Fei Li's op-ed to strike around that time. But still, it ultimately passed. The committee has moved bills to the Assembly floor, and this concludes our hearing. We got through that pretty easily, but then that day Congresswoman Lofgren and other California representatives weighed in with a letter to the governor saying, "You should veto SB 1047." That was sort of shocking. First of all, it is super unprecedented for members of Congress to tell state legislators what to do; I mean, this is just insane. We got that letter, and it was riddled with inaccuracies, and in some cases flat-out wrong. Zoe Lofgren is one of the most influential members of Congress; she represents parts of Silicon Valley, and her daughter works on Google's legal team, which I think presents a conflict of interest, given that Google opposed SB 1047 in multiple letters. So there was a letter from Zoe Lofgren, and then Friday evening we had wrapped up for the week, and we were like: the worst thing that could have happened has happened, it's all good, we're fine. And then that night the Pelosi letter dropped.

I thought her statement was unfortunate. She just said she was against it. She referred to the bill as, quote unquote, "ill-informed," said it would kill open source, this is regulatory capture, this would crush the AI industry. None of that was true. Pelosi is famously a trader of stocks, and her largest position by far is long Nvidia. So if she
was being told by various people that this bill was bad for AI in various ways, she might want to be protective of her investment. Nancy Pelosi's opposition, I think, was the single most important moment for the bill. And in terms of what motivated her: a lot of the earliest opposition to SB 1047 came from Andreessen Horowitz. Their chief legal officer published a long essay opposing the bill, making a number of arguments and claims about the bill that were not based on true things in the bill; there were misrepresentations. Fei-Fei Li repeats some of these false claims in an op-ed in Fortune, and then those false claims are cited in a letter by Zoe Lofgren. Lofgren is then joined by seven other members of Congress from California, citing Fei-Fei Li, and then, the day after that letter, Speaker Emerita Nancy Pelosi writes her own statement against SB 1047, very prominently citing Fei-Fei Li. And this gives credibility to these claims, which ultimately just were not true.

Most politicians can be influenced by tech lobbyists; there's a question of degree. But the lowest point, for me, was when OpenAI opposed the bill. They didn't need to do that; Microsoft didn't even oppose the bill. OpenAI was set up to be a more safety-conscious organization, but with them opposing the bill, we learned a lot of information about how financially conflicted many different players are. There was a Bloomberg article, and somebody from OpenAI implied to the journalists that they would maybe leave the state. This was one of the biggest arguments, or memes, about SB 1047: oh, this will just cause AI companies to leave California. The main reason this would not be likely to happen is that the bill would apply to any AI developer doing business in California, which is the fifth largest economy in the world. Anything about "oh, we're moving our headquarters out of California, this is going to make us leave," that's just theater, that's just negotiating leverage. It bears
no relationship to the actual content of the bill. Anthropic sent a very detailed letter with pages of ideas. We amended the bill significantly based on Anthropic's thoughtful feedback, and Anthropic then came forward and, while it didn't formally endorse the bill, sent a letter basically endorsing it. One of the main things that made Anthropic potentially not come out in full support is their need to fundraise from organizations such as Amazon and others. Amazon lobbied very hard against the bill in the last hour, because there were some KYC requirements; they really resisted that. Anthropic submitted their letter, and then you also saw Elon Musk supporting the bill. Elon Musk now coming out in support of a California bill to regulate the development of AI, a stark contrast to other big names, including OpenAI and Meta, both of which have come out against the bill, in his post on X writing, "This is a tough call and will make some people upset, but all things considered, I think California should probably pass the AI safety bill." Elon has a long history of supporting regulation on AI, and a long history of saying that AI is an existential risk and really threatens human extinction. It's not clear whether we'd be able to recover from some of these negative outcomes; in fact, you can certainly construct scenarios where recovery of human civilization does not occur. Elon supporting it was huge, actually. It helped build the coalition: Musk, who is on the right, being willing to support something that this progressive left Democrat is willing to do. And it was very useful for us to be able to say, look, we do have some industry buy-in. But for the Assembly vote, we were a lot more worried. File item 126, SB 1047. We were worried that maybe we weren't hearing what the lobbyists were doing, maybe we were missing something. Does it impose an unreasonable liability on AI developers? Tort law is vague. Does the bill ban open-source
models, because it's impossible for them to comply? The Assembly floor is where things like that happen; it's the end of session, and it's very chaotic, and there's a race against time to get things through. That's where a lot of shenanigans start to happen on bills, generally. Assembly Member Mathis, you are recognized. It's time that big tech plays by some kind of a rule. Not a lot, but something. We all crowded around the TV as they were voting and were collectively holding our breath. And with that, the clerk will open the roll and tally the votes. Ayes 41, noes 9. Measure passes. For a bill to pass out of both chambers and head to the governor's desk, that is a very meaningful step forward in the conversation and a very real thing that happens, and so we felt incredibly optimistic about it.

When the bill reached the governor's desk, part of the job was trying to build into the coalition all the best support we could, to convince the governor. Obviously that was a really significant task. This is a really, really big bill; we're going to need a lot of help. And so the co-sponsors are the ones who coordinate the behind-the-scenes work, along with the senator's team, because it's just way too much for them to do by themselves. So today we have two big superfans, myself and Sunny, in conversation with none other than Joseph Gordon-Levitt. We had previously collaborated with SAG-AFTRA on their campaign to ban deepfakes. SAG-AFTRA is the actors' guild; it's one of the most powerful unions in California, and they are the union that represents actors, a force of very powerful and connected people in Hollywood. How are other actors feeling about this? How are you feeling about this? Let's make sure that the technology that's dominating the world, and is about to dominate even harder, is something that's benefiting people, and not just benefiting the big powerful businessmen. Joseph Gordon-Levitt posted that video publicly, and it helped a lot. AI
companies should have to follow laws, because this is coming for everybody. Mr. Governor, please, I'm asking you: do the right thing, sign this bill for the good of the future. And all of these coming together really convinced other networks to activate in that space, to say: wow, okay, this is something we should care about and something we should be interested in. SB 1047, which is to regulate the explosion of AI into the world. SAG-AFTRA, the actors' union, Hollywood, getting into the SB 1047 debate was a surprise to me, and it felt to me like SAG-AFTRA getting involved really cheapened things, to be honest with you. It has nothing to do with protecting actors from being automated away, right? It doesn't protect you from Sora. SB 1047 is about catastrophic risk; it's not your issue. I love this criticism. Does that mean that you have to have a technical background in anything that you have a policy opinion on? Are you a doctor? Do you have an opinion on abortion? If you're not, then you probably shouldn't, right? According to this logic. That is so antithetical to how democracy and advocacy work. We had actors who literally came to us and said, "This is my understanding of the bill; is this correct?" They're talking about FLOPs, they're talking about auditing in 2027, really nitty-gritty stuff, and they're like, "No, we legitimately want to get this right, and so we don't want to just go out and say something about it." Hey Governor Newsom, it's Sean Astin. It was a pleasure to meet you in Chicago, at the DNC, the other day. Senate Bill 1047 is on your desk, as I'm sure you know. Please, please, please sign 1047. Sean Astin, who played Sam in the Lord of the Rings movies, has a master's degree. They got what the bill did; they understood it. The bill is pretty simple. The depth of our coalition was one of the highlights of this bill: you have Hollywood actors, lab employees, Nobel Prize-winning academics. So I felt like that spoke to the strength of the bill. Once we
explained what the bill was going to do, people got it. This is powerful technology, and we have seen before, in other instances, that just leaving it up to the people who stand to benefit from it is not such a good plan. We need to be creating some appropriate guardrails, and the "we" here is our elected officials; that's how we do things in a democracy.

Welcome, Governor. Thank you, thank you. What's been the most difficult thing that you've had to decide on? I mean, look, I think there's one bill, it's this SB 1047. It's created its own weather system. A lot of people have feelings about it because of the sort of outsized impact that legislation could have, and the chilling effect, particularly in the open source community, that legislation could have. The governor of California vetoing a sweeping AI safety bill. Governor Newsom, in his veto message, said that he agrees that the risk of AI is real and needs to be mitigated; however, he does not agree with the specifics. Obviously the first initial reaction is heavy, heavy disappointment. I was just angry, sad, disappointed. You do all this work, and ultimately it comes down to one person's decision, and then he gives an explanation that didn't actually make a lot of sense. Newsom rejected the bill in its current form, saying that it applied only to the largest and most expensive AI models, those that cost at least $100 million to train. The governor's statement that the bill did too much, and also that the bill didn't do enough, I thought was a completely incoherent and absurd thing to include in a veto message; I'm just being perfectly honest. Is there anything you could have done differently, do you think? Doing what differently? We announced it; we had a national media outlet. Change the bill enough before announcing it such that the key players would be happy about it from the get-go? I don't think intrinsically they're going to be happy about it, because it's not for them; it's
for the public. When you saw the decision from Governor Newsom, when you saw it on Twitter, how did you react to the veto? I immediately announced a massive veto party; nothing could stop me. I'm on the way. Oh, it's a party? Sorry. I was literally cranking away, and then I saw the feed, and I was like, "Oh, OMG, yes." To regulate something before it's had a chance to even be understood is to just give in to fear. I'm super happy. Again, I was kind of expecting Governor Newsom to do the veto; of course you never really know until it actually happens, but I was pretty confident he was going to veto. I mean, I love science fiction, right? I'm a huge fan of the Matrix series, the Terminator series, but I feel like some people watch that and think that's what AI is. I'm like, not really, right? I'm happy to say, "Hey, bookmark this conversation, play it 10 years from now, 20 years from now; you'll see who was right." I mean, right now we know we're right; it's just a matter of the rest of history seeing it that way. Last night I was at this party at AGI House, a party about the veto of the bill, and everyone was pro open source and pro technology and against regulation, right? But you're mostly leaning libertarian. I find these people bewildering. I'm like, I'm sorry, did we disturb you? You're creating, by your own admission, the most powerful technology in the history of mankind. We're so sorry to impose a few safety standards, so sorry to have interrupted your little party. You know, it's funny: the car manufacturers also, for the longest time, I believe a decade plus, were resisting seatbelt regulation. An overwhelming good; we've saved tens of thousands of lives thanks to this simple thing. Can we please put a seatbelt in the car? That, the car manufacturers were absolutely resisting. AI is not going away. Who decides the future of this technology? The people spoke out in support of this bill; every single
poll was well over 75%. So do we want a handful of very self-interested billionaires making those decisions? There's no political coalition in the world that could stop this train at this point, not until there's a Three Mile Island, and then maybe there would be a backlash. I think you can try to prevent harms before they happen. Humanity doesn't have much of a track record of doing that; it's usually hobbling along until disaster happens, and then you do something about it, and regulation gets written in blood. The difference between AI and atomic weapons is that anyone can noodle with machine learning. There are some critics who say, "I want an AI Chernobyl to happen before we decide to regulate," and I find that an insane thing to say.
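A minimal sketch of the coverage threshold debated above, using only the figures quoted in the transcript: the 10^26-FLOP compute floor ("why 10^26, why not 10^27") that amendments tied to a training cost above $100 million, so that cheaper models stay out of scope as compute prices fall. The `TrainingRun` type and function name are illustrative, not taken from the bill text:

```python
# Hypothetical sketch of SB 1047's "covered model" test as described in
# the transcript. Thresholds are the two figures quoted there; all names
# here are illustrative, not from the bill.
from dataclasses import dataclass

FLOP_THRESHOLD = 1e26         # training-compute floor debated in the transcript
COST_THRESHOLD = 100_000_000  # training-cost floor, in dollars, added by amendment

@dataclass
class TrainingRun:
    flops: float     # total training compute used
    cost_usd: float  # total training cost

def is_covered_model(run: TrainingRun) -> bool:
    """A model is in scope only if it clears BOTH floors. Tying compute
    to cost is the amendment credited to outside feedback: as compute
    gets cheaper, a fixed FLOP bar alone would sweep in ever more
    non-frontier models, but the cost bar keeps scope at the frontier."""
    return run.flops >= FLOP_THRESHOLD and run.cost_usd > COST_THRESHOLD

# A frontier-scale run clears both floors and is covered; a later, cheap
# replication of the same compute budget falls below the cost floor.
frontier = TrainingRun(flops=2e26, cost_usd=300_000_000)
cheap_replication = TrainingRun(flops=2e26, cost_usd=40_000_000)
```

The conjunction (both floors, not either) is what makes the scope shrink rather than grow over time, which is the point the open-source critics in the transcript were pressing.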
