by István Kopácsi
Senate Bill 1047, titled the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,”[1] is a comprehensive legislative effort introduced by Senator Wiener and co-authors, which aims to regulate and ensure the safety of advanced artificial intelligence (AI) development, particularly in California. The bill responds to growing concerns about the potential harms that AI models could cause if not properly regulated, while at the same time aiming to preserve the benefits of AI innovation. The bill passed the Assembly Committee on Appropriations on August 15 and was ordered to third reading on August 20, but the legislative process is still ongoing.[2]
The scope of Senate Bill 1047 would be broad, covering advanced AI models and the entities responsible for developing and managing them. The bill specifically targets AI models that are trained using significant computing power, referred to as “covered models.” These are defined as models whose development requires an enormous amount of computing power: (i) models trained with more than 10^26 integer or floating-point operations (a higher threshold than the 10^25 floating-point operations used in the EU AI Act regarding a “general-purpose AI model with systemic risk”[3]) at a cost exceeding $100 million, and (ii) AI models created by fine-tuning a covered model using at least 3x10^25 integer or floating-point operations at a cost exceeding $10 million. Additionally, the bill would cover a “covered model derivative,” which is (i) an unmodified copy of a covered model or (ii) a copy of a covered model that has undergone post-training modifications distinct from fine-tuning. The definition of a covered model would also be subject to periodic updates by the Government Operations Agency.
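To make the two-pronged compute-and-cost test concrete, the sketch below encodes the thresholds described above as a simple check. It is purely illustrative: the numeric limits come from the bill's definitions, but the Python function, parameter names, and the assumption that training compute and cost are known up front are hypothetical and appear nowhere in the legislation.

```python
# Illustrative only: thresholds taken from SB 1047's "covered model" definition;
# the function and field names are hypothetical and not part of the bill.

COVERED_FLOP_THRESHOLD = 1e26          # more than 10^26 integer or floating-point operations
COVERED_COST_THRESHOLD = 100_000_000   # training cost exceeding $100 million
FINE_TUNE_FLOP_THRESHOLD = 3e25        # at least 3 x 10^25 operations for fine-tuning
FINE_TUNE_COST_THRESHOLD = 10_000_000  # fine-tuning cost exceeding $10 million


def is_covered_model(training_flops: float,
                     training_cost_usd: float,
                     fine_tuned_from_covered_model: bool = False) -> bool:
    """Rough reading of the bill's compute-and-cost test for a covered model."""
    if fine_tuned_from_covered_model:
        # Prong (ii): fine-tuning a covered model with >= 3e25 operations costing over $10M.
        return (training_flops >= FINE_TUNE_FLOP_THRESHOLD
                and training_cost_usd > FINE_TUNE_COST_THRESHOLD)
    # Prong (i): initial training with > 1e26 operations costing over $100M.
    return (training_flops > COVERED_FLOP_THRESHOLD
            and training_cost_usd > COVERED_COST_THRESHOLD)
```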
The bill would mandate that developers implement rigorous safety and security protocols before the initial training of covered models. These protocols would include the ability to promptly execute a “full shutdown” of the model in the event of a significant safety or security risk. A “full shutdown” refers to a developer's capacity to halt the model's operations to prevent further risk or harm. Developers would also be required to maintain comprehensive records of the model's training processes, usage, and any AI safety incidents. AI safety incidents are defined as any event that significantly raises the risk of critical harm, such as unauthorized access to model weights or an AI system autonomously engaging in unrequested actions.
Restrictions on developers feature prominently in the bill. Developers would be prohibited from publicly or commercially deploying any covered model or covered model derivative if it poses an unreasonable risk of enabling critical harm. Critical harm includes scenarios such as mass casualties or widespread infrastructure damage resulting from AI misuse. In this regard, developers would be restricted from allowing any use of their models that could contribute to the creation or use of weapons of mass destruction, enable large-scale cyberattacks, or allow the model to autonomously engage in conduct that would constitute a serious crime under California law if committed by a human.
Developers would also be responsible for reporting any AI safety incidents involving their models to the Attorney General, who would oversee the regulation and ensure that developers comply with safety measures. In addition to internal compliance mechanisms, developers would be required to retain independent third-party auditors who would annually review their adherence to these requirements and submit reports on their findings. These third-party audits are designed to add a further layer of scrutiny and ensure that developers maintain the standards set by the legislation.
The legislation would also introduce responsibilities for those operating computing clusters. Computing clusters, which would be defined as sets of machines connected by high-speed data networking with the capacity for large-scale AI training, would need to implement strict policies for assessing and verifying the intended use of their computing resources by prospective customers. If a customer intends to use the computing cluster for training a covered model, operators would need to ensure that the customer complies with all legal and regulatory obligations as outlined in the bill. This includes ensuring that the customer has the necessary safeguards to prevent the covered model from being misused in a way that could lead to critical harm.
The establishment of a public cloud computing cluster, CalCompute, further emphasizes the bill’s dual focus on regulation and innovation. CalCompute would be a state-run cloud platform aimed at facilitating safe AI research and innovation. Its goal is to ensure equitable access to AI resources for academic researchers, startups, and smaller organizations, counterbalancing the dominance of large corporations in AI development. CalCompute would serve as a research hub, advancing safe AI practices and supporting the development of AI technologies that would benefit society at large.
Enforcement of these regulations would be carried out through a combination of administrative oversight, civil penalties, and legal actions. Violations of the provisions set forth in the bill, such as failure to adhere to reporting requirements or allowing a covered model to be used in a manner that could lead to critical harm, would result in civil penalties. The Attorney General or the Labor Commissioner would be authorized to bring civil actions against violators. The bill would safeguard whistleblowers by preventing developers and their contractors from obstructing or retaliating against employees who report to the Attorney General or Labor Commissioner, provided the employees have reasonable grounds to believe that the developer is not meeting regulatory requirements or that the model presents a serious risk.
In summary, Senate Bill 1047 would seek to strike a balance between promoting AI innovation in California and ensuring that the development and use of frontier AI models would be governed by stringent safety and security protocols. It would cover a broad scope, addressing both the responsibilities of developers and the operators of computing clusters. Developers would be restricted from deploying AI models that could lead to critical harm and would be held responsible for ensuring the safety of their models throughout their lifecycle. Operators of computing clusters would be required to assess and verify the proper use of their resources. Enforcement would be handled through civil penalties and legal actions, with the Government Operations Agency playing a pivotal role in maintaining compliance and updating regulations as necessary. By fostering a regulatory environment that would prioritize both innovation and safety, the bill would aim to ensure that California remains at the forefront of AI research and development while minimizing the risks posed by these powerful technologies.
[1] https://leginfo.legislature.ca.gov/faces/billPdf.xhtml?bill_id=202320240SB1047&version=20230SB104790AMD
[2] https://leginfo.legislature.ca.gov/faces/billHistoryClient.xhtml?bill_id=202320240SB1047
[3] Article 51 AI Act: “2. A general-purpose AI model shall be presumed to have high impact capabilities pursuant to paragraph 1, point (a), when the cumulative amount of computation used for its training measured in FLOPs is greater than 10^25.”