California Governor Dismisses Suggested AI Safety Bill

# California’s Governor Gavin Newsom Rejects Controversial AI Regulation Measure SB-1047

In a decision that has sparked debate within the tech community and beyond, California Governor Gavin Newsom has vetoed **Senate Bill 1047 (SB-1047)**, a legislative proposal intended to regulate large-scale artificial intelligence (AI) systems. The bill, which was approved by the state Assembly in August 2024, would have required developers of large AI models to conduct safety evaluations and build in “kill switches” to mitigate potential risks. Newsom’s veto has raised questions about the future of AI governance in California and across the United States.

## The Veto and Newsom’s Rationale

In a statement made public on Sunday evening, Newsom explained his reasoning for vetoing SB-1047, arguing that the bill’s focus on large AI models was misplaced. He contended that the legislation could create a false sense of security by targeting only the largest and most expensive models, while smaller, specialized systems might pose equally serious dangers.

> “By concentrating solely on the priciest and largest models, SB-1047 establishes a regulatory structure that could mislead the public into thinking they have control over this rapidly evolving technology,” Newsom stated. “Smaller, specialized systems may arise as equally or even more perilous than those addressed by SB-1047—potentially at the cost of stifling the very innovation that propels progress for the public good.”

Newsom also emphasized that the rapidly evolving risks associated with AI—threats to democracy, the spread of misinformation, deepfakes, privacy intrusions, and disruptions to critical infrastructure—could be better addressed through more targeted regulations. California already has several AI laws on the books, and Newsom argued that a more precise approach would be more effective than the sweeping measures proposed in SB-1047.

## The Discourse Surrounding SB-1047

SB-1047 was co-sponsored by **State Senator Scott Wiener**, a prominent advocate of stronger AI oversight. The bill had received backing from notable figures in the AI field, including **Geoffrey Hinton**, a pioneer of deep learning, and **Yoshua Bengio**, another prominent AI researcher. Nevertheless, the bill also met considerable resistance from tech firms and industry leaders, who argued it was overly restrictive and could hinder innovation.

In his response to the veto, Wiener expressed his disappointment, describing it as a setback for those who see a need to oversee the large corporations building AI technologies. He argued that voluntary pledges from AI companies are inadequate to guarantee public safety.

> “Voluntary promises of safety from AI companies are insufficient,” Wiener stated in a social media announcement. “The absence of effective governmental regulation means we are all at greater risk due to the veto.”

### Advocates for SB-1047

Supporters of SB-1047 maintained that the legislation was a crucial first step in regulating AI, especially given the potential risks posed by advanced models. **Elon Musk**, the CEO of xAI and a vocal proponent of AI oversight, endorsed the bill, asserting that AI must be regulated “just like we manage any product/technology that poses a potential danger to the public.”

Furthermore, **SAG-AFTRA**, the influential actors’ union, championed the bill, citing worries about deepfakes and the unauthorized use of voices and likenesses. The union contended that SB-1047 would aid in safeguarding its members from these emerging threats.

### Opponents of SB-1047

Conversely, many in the tech industry criticized the bill as overly broad and potentially harmful to innovation. A group of California business leaders sent an open letter to Newsom urging him to veto the bill, calling it “fundamentally flawed” and asserting that it would impose burdensome compliance costs on companies. They argued that the bill focused on regulating model development rather than addressing the misuse of AI, which they believed should be the proper target of regulation.

**OpenAI Chief Strategy Officer Jason Kwon** also publicly urged Newsom to reject the bill, maintaining that federal regulation would be more effective than a “collection of state laws.” Kwon and others voiced concerns that the bill could impose legal responsibilities on developers of open-weight AI models, which might be exploited by others for malicious purposes.

## A Contentious Lobbying Struggle

The fight over SB-1047 was marked by vigorous lobbying from both sides. Major tech corporations, including **Google** and **Meta**, publicly opposed the bill, while some employees of these and other major tech firms voiced support for its passage. The debate highlighted the widening divide between those advocating stricter AI regulation and those worried that such rules could hinder innovation and competitiveness.

At the **2024 Dreamforce conference**, Newsom addressed the debate surrounding the bill, recognizing the potential “ch