California Passes Debated Law to Govern AI Model Training

# California’s SB 1047: A Pioneering Regulation for Generative AI

As the dialogue around the ethical considerations of generative AI develops, California has taken a notable step by enacting the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, commonly known as SB 1047. Recently approved by the California State Assembly and Senate, the legislation stands as one of the first comprehensive regulatory frameworks for artificial intelligence in the U.S.

## Understanding SB 1047

The newly passed bill requires AI firms operating in California to adopt a series of safety measures before they begin training a “sophisticated foundation model.” Among these, developers must be able to quickly deactivate an AI model if it is found to be unsafe. The legislation also mandates that language models be protected against “unsafe post-training modifications” that might result in “critical harm.” Senators backing the bill characterize it as an essential step to shield society from the potential misuse of AI technologies.

Notably, prominent figures in the AI field, including Professor Geoffrey Hinton, an AI pioneer who formerly worked at Google, have commended the bill for recognizing the serious risks posed by powerful AI systems. Hinton emphasized the importance of taking these risks seriously and the need for regulatory oversight.

## Responses from the Industry

Although the bill has received backing from some quarters, it has drawn criticism from a range of stakeholders, from leading AI corporations such as OpenAI to smaller independent developers. Critics argue that the legislation could impose considerable burdens on developers, especially those operating at a small scale. The possibility of criminal penalties for non-compliance has raised concerns about the bureaucratic hurdles indie developers might face, forcing them to divert resources toward legal support and compliance rather than innovation.

With the bill awaiting the signature of Governor Gavin Newsom, who has until the end of September to decide, the tech community is watching the situation closely. The outcome could set a precedent for how AI is regulated not only in California but across the United States.

## Wider Context: Voluntary AI Safety Guidelines

In a related move earlier this year, major tech companies, including Apple, Amazon, Google, Meta, and OpenAI, committed to a set of voluntary AI safety protocols introduced by the Biden administration. These guidelines aim to assess AI systems for discriminatory behavior and security flaws, with the findings to be shared with government entities and academia. Unlike SB 1047, however, these guidelines are not legally enforceable.

Apple, in particular, has a keen interest in these regulations as it prepares to launch its Apple Intelligence features, set to roll out with the upcoming iOS 18.1 and macOS Sequoia 15.1. These features require compatible hardware: an iPhone 15 Pro or later, or iPads and Macs equipped with the M1 chip or newer.

## Final Thoughts

California’s SB 1047 marks a pivotal moment in the ongoing discussion of AI regulation. As the state takes proactive measures to ensure the safety and security of AI technologies, the ramifications of this legislation are likely to be felt beyond its borders. Striking a balance between encouraging innovation and ensuring public safety remains a central challenge for lawmakers, developers, and society as a whole. As the AI landscape evolves, the results of these regulatory initiatives will be closely watched by stakeholders worldwide.