California Governor Vetoes AI Safety Bill with Whistleblower Protections for Employees Who Report Dangers of the Technology
Thursday, October 3, 2024

California Governor Gavin Newsom has vetoed a bill that sought to ensure the safe development of artificial intelligence (AI), including by imposing whistleblower protections for developers’ employees who reported potential dangers. The veto comes amid concerns that the restrictions may stifle California’s leading role in developing this emerging technology.

Quick Hits

  • The bill would have required developers of large AI models made available to the public to take specific steps to ensure that the models did not cause critical harm. It would have imposed employment protections for developers’ employees who blew the whistle about potential dangers.
  • The legislation was one of several AI bills presented to the governor during the year’s regular legislative session as California looks to be a leader in developing and regulating this emerging technology.
  • It is likely that California will try again to enact AI safety legislation amid continuing concerns over the fast-growing technology.

On September 29, 2024, Governor Newsom vetoed Senate Bill (SB) No. 1047, known as the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.” In a veto message, Governor Newsom warned that though the bill sought to address some real concerns, it went too far in its restrictions.

“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making[,] or the use of sensitive data,” Governor Newsom stated in the veto message. “Instead, the bill applies stringent standards to even the most basic functions—so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

SB 1047, which California lawmakers passed in August 2024, would have implemented wide-ranging safety requirements for developing specific large AI models with the goal of preventing technology available to the public from being misappropriated to cause “critical harm,” such as to make weapons of mass destruction.

Governor Newsom, however, stated that SB 1047’s focus on regulating large and expensive AI models and their developers “could give the public a false sense of security about controlling this fast-moving technology.”

“Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047—at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good,” his veto message continued.

SB 1047’s Proposed Safety Protocols and Whistleblower Protections

The groundbreaking legislation would have imposed one of the strictest state regulatory regimes on the developers of large AI models. It would have made California perhaps the first jurisdiction to implement job protections for those working on AI models who blew the whistle on potential dangers related to the technology.

Specifically, SB 1047 would have required developers of certain covered large AI models and their derivatives to implement specific safety measures, including:

  • reasonable cybersecurity protections “to prevent unauthorized access to, misuse of, or unsafe post-training modifications” of the covered models;
  • a mechanism to “promptly enact a full shutdown” of a model; and
  • “a written and separate safety and security protocol.”

In addition, SB 1047 would have prohibited developers of large AI models from preventing employees, through terms and conditions of employment, “from disclosing information to the Attorney General or the Labor Commissioner” or from retaliating against employees who reported to the authorities their reasonable belief that a developer was not following the bill’s safety protocols or other laws or that an AI model “pose[d] an unreasonable risk of causing or materially enabling critical harm.”

Such “critical harm” would have included the “creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties,” “cyberattacks on critical infrastructure” causing mass casualties or at least $500 million in damage, or “[o]ther grave harms to public safety and security that are of comparable severity.”

Under SB 1047, developers would also have been required to provide notice to their employees of their rights and to establish internal reporting processes for employees and subcontractors to anonymously disclose information to the developers if they believed “in good faith that the information indicate[d] that the developer[s] ha[d] violated” the bill’s safety provisions.

Potential to Stifle Innovation

Governor Newsom’s veto came after he had signed into law seventeen recently passed AI-related bills addressing issues such as deepfakes, AI watermarking, the protection of children and workers, and AI-generated misinformation. The same day that Governor Newsom vetoed SB 1047, he announced new initiatives to advance safe and responsible AI that will focus on “developing an empirical, science-based trajectory analysis of frontier models and their capabilities and attendant risks.”

But California Senator Scott Wiener, SB 1047’s primary sponsor, posted a statement on social media calling the governor’s veto “a setback” for those who believe government oversight of AI is necessary.

“This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers,” Wiener said in the statement.

SB 1047 faced stiff opposition from the business community, which had raised concerns that the legislation would stifle innovation in California and cause developers to leave the state without providing a clear benefit for the public or a sound evidentiary reason for the restrictions.

Opponents of the bill also included U.S. Representative Nancy Pelosi (D-CA), who issued a statement against it, and eight other members of California’s congressional delegation, who signed a letter arguing that the bill “create[d] unnecessary risks for California’s economy with very little public safety benefit.” On the other side, more than one hundred Hollywood artists signed an open letter urging Governor Newsom to sign the bill into law.

Next Steps

While AI technology can potentially increase efficiency and improve decision-making, the unchecked exponential growth of AI may require some smart and practical regulation that can respond to the ever-changing landscape. Principles that appear in the AI laws of other jurisdictions include notice, consent, transparency, explainability, security, bias elimination, and auditing.

There remains a small chance that the California legislature will override Governor Newsom’s veto of SB 1047, which the Senate has taken under consideration. Regardless, California lawmakers are likely to take another stab at AI safety regulation. Moreover, the California Civil Rights Department is developing final regulations addressing the potential for employment discrimination in the use of AI and automated decision-making systems.
