California’s SB 1047, also known as the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” has passed the state’s Assembly and Senate and now sits on Governor Gavin Newsom’s desk. He must decide whether to sign or veto it by September 30.
The bill aims to regulate large AI models trained above certain compute and cost thresholds, holding developers liable for downstream use or modification of their models. Before training such a model, developers would need to show that it will not enable “hazardous capabilities” and implement a host of guardrails to protect against misuse. In short, the bill sets testing, safety, and enforcement standards for developing large artificial intelligence models.
Whistleblower Protections
The bill contains whistleblower protections for employees who raise concerns to the California Attorney General about risks of critical harm found in the models their companies are building and deploying.
Under the bill, a developer of a covered model, or a contractor working on one, may not prevent employees from disclosing information to the State Attorney General or Labor Commissioner, or retaliate against an employee for making such a disclosure. An employee who is retaliated against for blowing the whistle may petition a court for temporary or preliminary injunctive relief.
The AG or Labor Commissioner may publicly release, or provide to the Governor, any complaint if they determine that doing so will serve the public interest, with redactions of confidential or otherwise exempt information.
Developers must provide a reasonable internal process through which employees can anonymously disclose potential violations of the law, misleading statements about safety and security, or failures to disclose risks to employees. Developers must also inform all relevant individuals of their rights and responsibilities under this section, including the right to use the internal process to make protected disclosures. This information must be displayed in the workplace, and employees must acknowledge an annual written notice of those rights and responsibilities.
Scope of the Law
Models covered by the bill are:
- An artificial intelligence model trained using a quantity of computing power greater than 10^26 integer or floating-point operations, the cost of which exceeds one hundred million dollars ($100,000,000) when calculated using the average market price of cloud computing at the start of training, as reasonably assessed by the developer.
- An artificial intelligence model created by fine-tuning a covered model using a quantity of computing power equal to or greater than three times 10^25 integer or floating-point operations, the cost of which, as reasonably assessed by the developer, exceeds ten million dollars ($10,000,000) when calculated using the average market price of cloud computing at the start of fine-tuning.
- An artificial intelligence model trained using a quantity of computing power determined by the Government Operations Agency pursuant to Section 11547.6 of the Government Code, the cost of which exceeds one hundred million dollars ($100,000,000) when calculated using the average market price of cloud computing at the start of training, as reasonably assessed by the developer.
- An artificial intelligence model created by fine-tuning a covered model using a quantity of computing power that exceeds a threshold determined by the Government Operations Agency, the cost of which, as reasonably assessed by the developer, exceeds ten million dollars ($10,000,000) when calculated using the average market price of cloud computing at the start of fine-tuning.
Copies of covered models are also regulated by the bill.
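The initial (pre-Government Operations Agency) thresholds above reduce to a pair of compute-and-cost checks. The following is a minimal illustrative sketch, not a legal test: the threshold constants come from the bill's text, but the function names and the reduction of coverage to a boolean are simplifications of my own.

```python
# Illustrative sketch of SB 1047's initial coverage thresholds.
# Constants are taken from the bill; the functions are hypothetical helpers.

TRAIN_FLOP_THRESHOLD = 1e26           # greater than 10^26 operations
TRAIN_COST_THRESHOLD = 100_000_000    # cost exceeds $100M at start-of-training cloud prices

FINETUNE_FLOP_THRESHOLD = 3e25        # equal to or greater than 3 x 10^25 operations
FINETUNE_COST_THRESHOLD = 10_000_000  # cost exceeds $10M at start-of-fine-tuning cloud prices


def is_covered_model(train_flops: float, train_cost_usd: float) -> bool:
    """Original training run: both the compute and cost thresholds must be exceeded."""
    return train_flops > TRAIN_FLOP_THRESHOLD and train_cost_usd > TRAIN_COST_THRESHOLD


def is_covered_finetune(base_is_covered: bool, ft_flops: float, ft_cost_usd: float) -> bool:
    """Fine-tune of a covered model: compute >= 3x10^25 and cost over $10M."""
    return (base_is_covered
            and ft_flops >= FINETUNE_FLOP_THRESHOLD
            and ft_cost_usd > FINETUNE_COST_THRESHOLD)
```

Note the asymmetry in the bill's language: the training-compute threshold uses “greater than,” while the fine-tuning threshold uses “equal to or greater than.”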
Support for and Opposition to the Bill
More than 120 current and former employees of OpenAI, Anthropic, Google’s DeepMind, and Meta issued a statement in support of SB 1047, stating their belief that “the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure.”
They claim it is feasible and appropriate for frontier AI companies to test “whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks.”
A group of prominent academics in the field publicly supported the bill last week, stating that while it does not address every risk, it is a “solid step forward.”
Following changes to language and provisions, Anthropic has offered tepid support for the bill, while OpenAI opposes it, claiming it would “stifle innovation.”
U.S. Representative Nancy Pelosi (D-Calif.), San Francisco Mayor London Breed, the U.S. Chamber of Commerce, the Software & Information Industry Association, and other tech advocacy groups oppose the bill.