California legislators return to the Capitol today from summer recess and have just one month to pass bills to the Governor’s desk.
Most job killer bills identified by the California Chamber of Commerce this year have been stopped, but there are a number of problematic bills still active. These bills include a job killer bill imposing a tax on digital advertising; a job killer bill chilling employer speech regarding political matters; and several bills regulating artificial intelligence (AI).
Below are some of the priority bills the CalChamber is monitoring:
Job Killers
- SB 1327 (Glazer; D-Contra Costa): Implements a discriminatory 7.25% tax on the revenue generated from the sale of digital advertising. The bill targets taxpayers that generate at least $2.5 billion in annual revenue from these services.
- SB 399 (Wahab; D-Hayward): Chills employer speech regarding political matters, including unionization. Is likely unconstitutional under the First Amendment and preempted by the National Labor Relations Act.
AI, Privacy
- SB 1047 (Wiener; D-San Francisco): Requires frontier AI developers to make a positive safety determination before initiating training of a covered model, among other things, subject to harsh penalties, including criminal penalties. Creates significant uncertainty for businesses due to vague, overbroad, impractical, and at times infeasible standards, requirements, and definitions. Focuses almost exclusively on developer liability, creating liability for failing to foresee and block any and all conceivable harmful uses of a model, even if a third party jailbreaks the model. As a consequence, the bill deters open-source development and undermines technological innovation and the economy. Further imposes unreasonable requirements on operators of computing clusters, including requirements to predict whether a prospective customer “intends to utilize the computing cluster to deploy a covered model” and to implement a “kill switch” enabling a full shutdown in the event of an emergency. Establishes an entirely new regulatory body, the “Frontier Model Division” within the Department of Technology, with an ambiguous and ambitious purview.
- AB 2930 (Bauer-Kahan; D-Orinda): Requires developers and deployers of automated decision tools (ADTs) to perform specified impact assessments prior to first using an ADT and annually thereafter, affecting every industry and businesses of all sizes, in addition to public entities. For any ADT first used prior to January 1, 2025, the impact assessment must be conducted prior to January 1, 2026, and annually thereafter. Impact assessments must include, among other things, a statement of the ADT’s purpose, intended benefits, uses, and deployment contexts, as well as an analysis of any potential adverse impacts from the deployer’s use of the ADT based on protected classifications such as sex, race, ethnicity, or religion. Assessments must be provided to the Civil Rights Department (CRD) within 7 days of a request by the CRD, which is allowed to share them with public prosecutors. Enforceable by the CRD, Attorney General, and other public attorneys for significant statutory damages ranging from $10,000 (for administrative enforcement) to $25,000 (for civil enforcement) per violation, with each day constituting a separate violation.
- AB 3211 (Wicks; D-Oakland): Places very prescriptive and technologically infeasible requirements on AI developers, large online platforms, and camera/recording device manufacturers to incorporate a brand-new technology that is still developing; what this technology is capable of changes nearly every month. For example, only a few months ago no program could watermark text, making the bill’s requirement to do so impossible to comply with. Currently, one company appears closer than others to having that technology, but the technology is not yet fully reliable, raising serious competition concerns about entrenching market leaders. When violations invariably occur, companies face significant penalties under this bill.
- AB 2877 (Bauer-Kahan; D-Orinda): Amends the California Consumer Privacy Act (CCPA) to prohibit a developer, as defined, from using the personal information (PI) of a consumer less than 16 years of age, as specified, to train or “fine-tune” an AI system or service unless affirmative authorization is provided pursuant to the CCPA’s provisions providing opt-out/opt-in rights. Because another pending bill, AB 1949, would also amend the existing opt-out/opt-in rights for minors under that same provision, this prohibition potentially could apply to any consumer under the age of 18. Even if authorization is received, businesses would be prohibited from using the PI of minors unless they both deidentify and aggregate the data. By limiting inputs, this bill regulates the technology itself, hamstringing developers from appropriately training the technology. Realistically, it forces companies either to engage in age verification or to forgo using any PI to train any AI. Even if they are able to verify consumers’ ages, the unintended consequences are likely significant, because access to data specific to children and teens is essential to develop tools that provide them unique support for risks and challenges specific to their age groups.
- AB 2655 (Berman; D-Palo Alto): Based on the false assumption that online platforms definitively know whether content has been manipulated, requires large platforms to (1) block the posting or sending of materially deceptive and digitally modified or created content related to elections during specified periods before and after an election, (2) label certain additional content as inauthentic, fake, or false during specified periods before and after an election, and (3) develop procedures for Californians to report content that has not been blocked or labeled in compliance with the act. Authorizes candidates for elected office and the Attorney General, among others, to seek injunctive relief against a large online platform for noncompliance. Unlikely to have the desired outcomes, as it is underinclusive (excluding platforms such as Truth Social or Parler) and incorrectly presumes platforms are the appropriate arbiters of what constitutes election information. Instead, it will result in significant suppression of political speech out of fear of liability, in violation of the First Amendment, which affords the broadest protection to political speech – even protecting allegedly false statements about public officials and figures.
- AB 1949 (Wicks; D-Oakland): Prohibits businesses covered under the California Consumer Privacy Act from selling or sharing the personal information (PI) of anyone under the age of 18 unless the minor or the minor’s parent/guardian provides affirmative authorization (opting in), whereas the CCPA currently applies the opt-in right to minors under the age of 16, conditioned upon the business having actual knowledge that the minor is under 16. Further expands the opt-in right to also restrict the ability of businesses to collect, use, or disclose a minor’s PI or sensitive personal information unless the minor or the minor’s parent or guardian opts in. In neither scenario is actual knowledge required, effectively forcing businesses to engage in age verification for every consumer and obtain opt-in consent from those shown to be under 18.
- AB 3048 (Lowenthal; D-Long Beach): Prohibits businesses from developing or maintaining a browser that does not include a setting that enables consumers to send an opt-out preference signal to other businesses that the consumer interacts with through the browser, pursuant to regulations adopted by the California Privacy Protection Agency.
- AB 2481 (Lowenthal; D-Long Beach): Requires “large social media platforms” to create a process to verify an expansive list of individuals as “verified reporters,” including school principals and counselors, among others, which will result in over 146,000 verified reporters, each of whom can report a “social media related threat” or a violation of the platform’s terms of service that, in their opinion, poses a “severe risk” to the health and safety of a minor. A “social media related threat” is content that promotes, incites, facilitates, or perpetuates any one of 15 problems, many of which are entirely subjective (e.g., suicide, cyberbullying, harassment, academic dishonesty). Depending on its size, a platform must then respond to any report by a non-verified reporter within 10-21 days or, if the report is submitted by a verified reporter, within 24-72 hours. Violations are subject to a private right of action by any person who makes a report, or is unable to make a report, in violation of the bill, for relief including statutory damages of up to $10,000 per violation.
Labor and Employment
- AB 2421 (Low; D-Silicon Valley): Effectively creates a new, broad evidentiary privilege in the public sector that is one-sided and will preclude relevant evidence during litigation or workplace investigations.
- SB 1299 (Cortese; D-San Jose): Creates a workers’ compensation presumption that would require the Workers’ Compensation Appeals Board (WCAB) to adjudicate agricultural Cal/OSHA claims and would impose the presumption regardless of any causal link between the alleged occupational injury and a violation of any provision of the heat-related standards.
Legal Reform and Protection
- AB 2863 (Schiavo; D-Chatsworth): Adds a variety of new obligations for companies that use auto-renewing subscription contracts, including infeasible and vague requirements for answering phone lines and responding to inquiries.