State: California
Full Name: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Introduced: February 7, 2024
Senate Voted: May 21, 2024 (32-1)
Sponsors: Scott Wiener
Governor: Gavin Newsom
Bill: SB 1047
Url: Bill Text
The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, is a 2024 California bill intended to "mitigate the risk of catastrophic harms from AI models so advanced that they are not yet known to exist".[1] Specifically, the bill would apply to models which cost more than $100 million to train and were trained using a quantity of computing power greater than 10²⁶ integer or floating-point operations.[2] SB 1047 would apply to all AI companies doing business in California—the location of the company does not matter.[3] The bill creates protections for whistleblowers[4] and requires developers to perform risk assessments of their models prior to release, under the supervision of the Government Operations Agency. It would also establish CalCompute, a University of California public cloud computing cluster for startups, researchers and community groups.
The rapid increase in capabilities of AI systems in the 2020s, including the release of ChatGPT in November 2022, caused some researchers and members of the public to express concern about the risks associated with increasingly powerful AI systems.[5]
Governor Newsom and President Biden issued executive orders on artificial intelligence in 2023.[6] [7] [8] Senator Wiener first proposed AI legislation for California through an intent bill called SB 294, the Safety in Artificial Intelligence Act, in September 2023.[9] [10] [11] SB 1047 was introduced in February 2024. Wiener says his bill draws heavily on the Biden executive order, and is motivated by the absence of federal legislation: "I would love to have one unified, federal law that effectively addresses AI safety. Congress has not passed such a law. Congress has not even come close to passing such a law."[12] Several technology companies have made voluntary commitments to conduct safety testing, for example at the AI Safety Summit and AI Seoul Summit.[13] [14]
The bill was significantly amended by Wiener on August 15, 2024, notably to preserve California's competitiveness. Amendments included adding clarifications, removing the proposed "Frontier Model Division", and eliminating the penalty of perjury.[15] [16]
Beginning January 1, 2028, SB 1047 would require developers to annually retain a third-party auditor to perform an independent audit of their compliance with the bill's requirements.
The Government Operations Agency would review the results of safety tests and incidents, and issue guidance, standards, and best practices. SB 1047 would create a public cloud computing cluster called CalCompute, associated with the University of California, to support startups, researchers, and community groups that lack large-scale computing resources.
SB 1047 covers AI models with training compute over 10²⁶ integer or floating-point operations and a training cost of over $100 million.[17] If a covered model is fine-tuned at a cost of more than $10 million, the resulting model is also covered.
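Read as a decision rule, the coverage criteria combine a compute threshold and a cost threshold. The following is a minimal sketch of that rule as described in this section; the function and parameter names are illustrative only and are not part of the bill or any official guidance:

```python
def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """Covered model test: more than 10^26 integer or floating-point
    operations of training compute and more than $100 million in
    training cost (thresholds as described above; names hypothetical)."""
    return training_ops > 1e26 and training_cost_usd > 100_000_000


def is_covered_fine_tune(base_model_is_covered: bool, fine_tune_cost_usd: float) -> bool:
    """A fine-tune of a covered model is itself covered if the
    fine-tuning cost exceeds $10 million."""
    return base_model_is_covered and fine_tune_cost_usd > 10_000_000
```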
Prior to model training, developers of covered models and their derivatives are required to submit a certification, subject to auditing, that the "reasonable" risk of "critical harms" from the covered model and its derivatives, including post-training modifications, has been mitigated. Critical harms are defined with respect to four categories:[18]

- Creation or use of a chemical, biological, radiological, or nuclear weapon resulting in mass casualties
- Mass casualties or at least $500 million of damage caused by cyberattacks on critical infrastructure
- Mass casualties or at least $500 million of damage caused by an AI model acting with limited human oversight, in a manner that would be a crime if committed by a human
- Other harms of comparable severity
Developers of covered models are also required to implement "reasonable" safeguards to reduce risk, including the ability to shut down the model. Whistleblowing provisions protect employees who report safety problems and incidents. What is "reasonable" would be defined by the California Frontier Model Division, which would also provide advice on jury instructions and on an "AI state of emergency."
The bill creates a Board of Frontier Models to supervise the application of the bill by the Government Operations Agency. It is composed of nine members.
Supporters of the bill include Turing Award recipients Yoshua Bengio[19] and Geoffrey Hinton,[20] as well as Kevin Esvelt,[21] former OpenAI employee Daniel Kokotajlo,[22] Lawrence Lessig,[23] Sneha Revanur,[24] Stuart Russell and Max Tegmark.[25] The Center for AI Safety, Economic Security California[26] and Encode Justice[27] are sponsors. Yoshua Bengio writes that the bill is a major step towards testing and safety measures for "AI systems beyond a certain level of capability [that] can pose meaningful risks to democracies and public safety." Max Tegmark likened the bill's focus on holding companies responsible for the harms caused by their models to the FDA requiring clinical trials before a company can release a drug to the market. He also argued that the opposition to the bill from some companies is "straight out of Big Tech's playbook."
Andrew Ng, Fei-Fei Li,[28] Ion Stoica, Jeremy Howard and Turing Award recipient Yann LeCun, along with U.S. Congressmembers Nancy Pelosi, Zoe Lofgren, Anna Eshoo, Ro Khanna, Scott Peters, Tony Cárdenas, Ami Bera, Nanette Barragán and Lou Correa, have come out against the legislation.[29] [30] [31] Andrew Ng argues specifically that there are better, more targeted regulatory approaches, such as targeting deepfake pornography, watermarking generated materials, and investing in red teaming and other security measures.[32] University of California and Caltech researchers have also written open letters in opposition.
The bill is opposed by industry trade associations including the California Chamber of Commerce, the Chamber of Progress, the Computer & Communications Industry Association and TechNet. Companies including Meta[33] and OpenAI are opposed to or have raised concerns about the bill, while Google, Microsoft and Anthropic have proposed substantial amendments.
Several organizations representing startup founders and venture capital firms oppose the bill, for example Y Combinator,[34] [35] Andreessen Horowitz,[36] [37] [38] Context Fund[39] [40] and Alliance for the Future.[41]
Critics have expressed concerns about the liability the bill would impose on open-source developers who use or improve existing freely available models. Yann LeCun, Chief AI Scientist of Meta, has suggested the bill would kill open-source AI models. As of July 2024, there are concerns in the open-source community that, due to the threat of legal liability, companies like Meta may choose not to make models (for example, Llama) freely available.[42] [43] The AI Alliance has written in opposition to the bill, among other open-source organizations.
The Artificial Intelligence Policy Institute, a pro-regulation AI think tank,[44] ran two polls of California respondents on whether they supported or opposed SB 1047.
| Date | Support | Oppose | Not sure | Margin of error |
|---|---|---|---|---|
| July 9, 2024[45] [46] | 59% | 20% | 22% | ±5.2% |
| August 4–5, 2024[47] [48] | 65% | 25% | 10% | ±4.9% |
A David Binder Research poll commissioned by the Center for AI Safety, a group focused on mitigating existential risk, found that 77% of Californians support a proposal to require companies to test AI models for safety risks, and 86% consider it an important priority for California to develop AI safety regulations.[49] [50] [51]