California Governor Gavin Newsom vetoed a significant bill on Sunday aimed at establishing extensive regulations on the rapidly growing artificial intelligence (AI) industry. The bill, known as SB 1047, was seen as a potential national framework for AI legislation due to its ambitious scope.
The California legislature had overwhelmingly passed SB 1047, which sought to hold tech companies legally accountable for any harm caused by their AI models. Among its key provisions, the bill would have mandated that companies include a “kill switch” for AI systems to deactivate them in case of misuse or malfunction.
In his veto message, Newsom described the legislation as “well-intentioned,” but expressed concern that its stringent regulations could be overly burdensome for California’s leading AI companies. He argued that the bill focused disproportionately on the largest AI models while overlooking the potential risks posed by smaller, specialized ones.
“Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047,” Newsom wrote. He cautioned that the bill’s stringent requirements might stifle innovation and hinder the advancement of technology that could ultimately serve the public good.
California Senator Scott Wiener, a co-author of the bill, voiced disappointment over Newsom’s decision, calling the veto a setback for accountability in the AI sector — particularly as Congress has yet to take meaningful action on tech regulation.
“This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions,” Wiener wrote on X, formerly Twitter. He also noted that the legislation would have required safety tests for powerful AI models, and that without such mandates the industry would be left to self-regulate.
“While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that the voluntary commitments from industry are not enforceable and rarely work out well for the public,” Wiener added.
Opposition to the bill came from several influential players in Silicon Valley. Major tech companies, venture capital firms such as Andreessen Horowitz, and industry groups representing giants like Google and Meta lobbied against SB 1047, arguing that its requirements could hinder AI development and stifle innovation for early-stage companies. OpenAI’s Chief Strategy Officer Jason Kwon warned that the bill could slow growth and lead engineers and entrepreneurs to relocate to more favorable business environments.
In contrast, a number of prominent figures in the tech world supported the bill, including Elon Musk and leading AI researchers like Geoffrey Hinton and Yoshua Bengio. These advocates argued for the necessity of rigorous testing and safeguards for powerful AI models, citing potential severe risks, including the misuse of AI for cyberattacks and the proliferation of biological weapons.
Hinton and other AI experts signed a letter urging Newsom to support the legislation, stating, “It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms and for these companies to implement reasonable safeguards against such risks.”
While California’s bill was vetoed, other states, including Colorado and Utah, have passed more narrowly tailored laws addressing specific concerns related to AI, such as bias in employment and healthcare decisions, as well as consumer protection measures.
Despite the lack of progress on comprehensive federal legislation, Newsom has recently signed over a dozen other bills related to AI. These include measures to combat the spread of deepfakes during elections and to protect actors from unauthorized replication of their likenesses by AI technologies.
As investment in AI continues to surge and its influence expands into various aspects of daily life, the need for legislative oversight becomes increasingly urgent. However, lawmakers in Washington have yet to advance any federal regulations aimed at mitigating potential harms associated with AI technologies or ensuring their responsible development.