A California Chamber of Commerce-opposed bill that creates confusing and infeasible requirements for artificial intelligence (AI) technology developers passed the Senate Judiciary Committee yesterday.
The bill, SB 1047 (Wiener; D-San Francisco), would enact the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act to require frontier AI developers to make a positive safety determination before initiating training of a covered model, among other things.
While the CalChamber agrees on the importance of ensuring the safe and responsible development of AI, it argues that the issue is appropriately being addressed at the federal level. The CalChamber is concerned that SB 1047 will add more confusion to the already-fragmenting AI regulatory landscape in the U.S.
In addition to potentially creating inconsistencies with federal regulations, SB 1047 demands compliance with vague and impractical, if not technically infeasible, requirements, and subjects developers to harsh penalties for noncompliance, including potential criminal liability.
The CalChamber opposes the proposal because it regulates AI technology rather than its high-risk applications, creates significant regulatory uncertainty and therefore high compliance costs, and exposes developers to significant liability for failing to foresee and block any conceivably harmful use of their models by others. All of this, the CalChamber argues, inevitably discourages economic and technological innovation.
In a letter sent to legislators, the CalChamber pointed out that by discouraging innovation and focusing almost exclusively on developer liability, SB 1047 does not better protect Californians. Instead, by hampering businesses’ ability to develop the very AI technologies that could protect Californians from dangerous models developed in jurisdictions beyond California’s control, the bill risks making them more vulnerable.
Below are some of the concerns the CalChamber outlines in the letter:
- SB 1047 fails to account for the AI value chain, impeding open source. The bill focuses almost exclusively on developer liability. Under SB 1047, developers must build full shutdown capabilities into their models and may be held liable for downstream uses over which they have no control, impeding their ability to open-source their models. Ultimately, the CalChamber argues, liability should rest with the user who intends to do harm, rather than defaulting automatically to the developer, who cannot foresee, let alone block, every conceivable harmful use of a model.
- SB 1047 sets unreasonable safety incident reporting requirements that are not only vague but also deter open-source development. Developers are required to report each AI safety incident upon learning of it, or upon learning facts that would lead to a reasonable belief that a safety incident has occurred. What counts as an “AI safety incident,” however, is vague. Among other things, it includes a covered model “autonomously engaging in a sustained sequence of unsafe behavior other than at the request of a user,” yet the bill fails to define what is considered “unsafe,” leaving developers to guess whether they must report an incident. At the same time, “AI safety incident” covers a range of circumstances that are incompatible with open source, because compliance would require monitoring all downstream uses and applications.
- SB 1047 establishes a new regulatory body with an ambiguous and ambitious purview. The new “Frontier Model Division” within the Department of Technology would be responsible for a sweeping array of AI-related regulatory duties, including developing novel safety tests and benchmarks, which could well create further inconsistencies with federal rules.
- SB 1047 imposes excessively harsh penalties, including potential criminal liability and model deletion. For instance, developers are required to submit certifications of positive safety determinations to the new Frontier Model Division under penalty of perjury, yet the certainty required for that assessment is impracticable, if not impossible, to obtain. Potential civil penalties include model deletion (in the face of imminent risk or threat to public safety) and “an amount not exceeding 10 percent of the cost, excluding labor cost, to develop the covered model for a first violation and in an amount not exceeding 30 percent of the cost, excluding labor cost, to develop the covered model for any subsequent violation.” “Considering the significant resources to train covered models, this sum could amount to many millions,” the CalChamber said.
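To make the quoted caps concrete, here is a brief worked example; the 10 percent and 30 percent rates come from the bill text quoted above, while the $100 million training cost is a purely hypothetical figure taken from neither the bill nor the letter.

```latex
% Illustrative arithmetic only. Rates (10%, 30%) are from the bill text
% quoted above; the $100 million training cost (excluding labor) is an
% assumed figure used solely for illustration.
\[
  0.10 \times \$100~\text{million} = \$10~\text{million} \quad \text{(cap, first violation)}
\]
\[
  0.30 \times \$100~\text{million} = \$30~\text{million} \quad \text{(cap, subsequent violations)}
\]
```

On that assumption, the penalty caps alone would reach the “many millions” the CalChamber describes.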
Staff Contact: Ronak Daylami