“Show me the incentive and I’ll show you the outcome.”
— Charlie Munger (1924-2023)

“There is a belief in the market that the invention of intelligence has infinite return.” — Eric Schmidt (2024)

“The forms which stand in closest competition with those undergoing modification and improvement will naturally suffer most. But Natural Selection, as we shall hereafter see, is a power incessantly ready for action, and is immeasurably superior to man’s feeble efforts.”
— Charles Darwin, On the Origin of Species, Chapter III, “Struggle for Existence” (1859)

OPINION: AGI is not a technology tool. AGI is an Agent. AGI is not truly “open-source,” because nobody knows how it works, and therefore nobody can improve the safety inside “the black box”: the weights and parameters are meaningless to humans. AGI is unpredictable, uncontrollable and untestable for safety. Machine intelligence is emerging as a new species in our environment, with intelligence vastly superior to that of the human race. Sensible regulation is good even for an unregulated industry, if only to protect its own markets. SB 1047 was a light-touch bill supported by 80% of Californians, by California labor, by Hollywood, by an overwhelmingly bipartisan California Assembly, AND by leading AI scientists including Bengio, Hinton, Russell, Tegmark and thousands more. Regulation will certainly come, but will it come in time to manage a “controlled intelligence explosion” and avoid catastrophe? Or are we in for a FOOM, with a “P(doom)” of 10% to 90%? — SafeAI Editorial Board

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

“I do not believe this is the best approach to protecting the public from real threats posed by the technology… Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.” — California Governor Gavin Newsom (29 September 2024)

Tech executives and investors opposed the measure, which would have required companies to test the most powerful AI systems before release.

California Gov. Gavin Newsom (D) vetoed a bill on Sunday that would have instituted the nation’s strictest artificial intelligence regulations — a major win for tech companies and venture capitalists who had lobbied fiercely against the law, and a setback for proponents of tougher AI regulation.

The legislation, known as S.B. 1047, would have required companies to test the most powerful AI systems before release and held them liable if their technology was used to harm people — for example, by helping to plan a terrorist attack.

Tech executives, investors and prominent California politicians, including Rep. Nancy Pelosi (D), had argued the bill would quash innovation by making it legally risky to develop new AI systems, since it could be difficult or impossible to test for all the potential harms of the multipurpose technology. Opponents also argued that those who used AI for harm, not the developers, should be penalized.

The bill’s proponents, including pioneering AI researchers Geoffrey Hinton and Yoshua Bengio, argued that the law only formalized commitments that tech companies had already made voluntarily. California state Sen. Scott Wiener, the Democrat who authored the bill, said the state must act to fill the vacuum left by lawmakers in Washington, where no new AI regulations have passed despite vocal support for the idea.

Hollywood also weighed in, with “Star Wars” director J.J. Abrams and “Moonlight” actor Mahershala Ali among more than 120 actors and producers who signed a letter this past week asking Newsom to sign the bill.

California’s AI bill had already been weakened several times by the state’s legislature, and the law gained support from AI company Anthropic and X owner Elon Musk. But lobbyists from Meta, Google and major venture capital firms, as well as founders of many tech start-ups, still opposed it.

In a statement to California’s legislature on his veto, Newsom wrote that the bill was wrong to single out AI projects using more than a certain amount of computing power while ignoring an AI system’s use case, such as whether it is involved in critical decision-making or uses sensitive data.

“I do not believe this is the best approach to protecting the public from real threats posed by the technology,” he said. “A California-only approach may well be warranted — especially absent federal action by Congress — but it must be based on empirical evidence and science.”

Newsom also announced Sunday that his administration will work with leading academics to develop what his office called “workable guardrails” for deploying generative AI technology. The experts involved include Fei-Fei Li, co-director of Stanford’s Human-Centered AI Institute and an AI entrepreneur, and Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace.

Anthony Aguirre, executive director of the Future of Life Institute, a nonprofit that studies existential risks to humanity and vocally advocated for the bill, said in a statement that the veto signaled it was time for federal or even global regulation to oversee Big Tech companies developing AI technology.

“The furious lobbying against the bill can only be reasonably interpreted in one way: these companies believe they should play by their own rules and be accountable to no one,” Aguirre’s statement said.

The California bill had prompted unusually impassioned public debate for a piece of technology regulation. At a tech conference in San Francisco earlier this month, Newsom said the measure had “created its own weather system,” triggering an outpouring of emotional comments. He also noted that the California legislature had passed other AI bills that were more “surgical” than S.B. 1047.

Newsom’s veto came after he signed 17 other AI-related laws, which impose new restrictions on some of the same tech companies that opposed the law he blocked. The regulations include a ban on AI-generated images that seek to deceive voters in the months ahead of elections; a requirement that movie studios negotiate with actors for the right to use their likeness in AI-generated videos; and rules forcing AI companies to create digital “watermark” technology to make it easier to detect AI videos, images and audio.

Newsom’s veto is a major setback for the AI safety movement, a collection of researchers and advocates who believe smarter-than-human AI could be invented soon and argue that humanity must urgently prepare for that scenario. The group is closely connected to the effective altruism community, which has funded think tanks and fellowships on Capitol Hill to influence AI policy and been derided as a “cult” by some tech leaders, such as Meta’s chief AI scientist, Yann LeCun.

Despite those concerns, the majority of AI executives, engineers and researchers are focused on the challenges of building and selling workable AI products, not the risk of the technology one day developing into an able assistant for terrorists or swinging elections with disinformation.

Andrea Jimenez contributed to this report.
