Automated Decision-Making Tools
Existing Law Guards Against Algorithmic Discrimination
One of the biggest issues the California Legislature has taken on since it began regulating artificial intelligence (AI) several years ago is "algorithmic discrimination," that is, bias and discrimination in automated decision-making tools (ADMTs). While the business community shares the central goal underlying that legislation, it has been unable to support the bills due to the inclusion of impractical, if not infeasible, requirements that would not only make compliance unmanageable, but also undermine both the utility and the development of the technology.
Under existing law, businesses can be held responsible for algorithmic discrimination under anti-discrimination statutes such as the Fair Employment and Housing Act and the Unruh Civil Rights Act, which are rights-based and not technology-specific. The California Chamber of Commerce takes seriously the responsibility of California businesses not to discriminate and to avoid bias when making consequential decisions affecting people, including when deploying new technologies such as ADMTs. To be clear, however, risks of bias and discrimination exist whether decisions are human-made from start to finish or a byproduct of using or incorporating new technologies into the decision-making process. By the same token, the responsibility to avoid those outcomes exists regardless of whether decisions are made by a human employee or an AI tool.
The CalChamber would support reasonable legislation recognizing that existing anti-discrimination laws provide protections against algorithmic discrimination or seeking to ensure that developers and deployers take certain precautionary steps to identify and avoid biased or discriminatory outcomes in using ADMTs. That said, it is critical that such legislation be sufficiently narrow in scope, risk-based, and balanced, not only to ensure that the law can be operationalized as a practical matter, but also to avoid overregulation that could interfere with the responsible advancement of these tools, which have the potential to reduce, if not one day eliminate, human bias.
Generally, this will require that any legislation, at minimum:
• Have a well-defined, manageable, and reasonable scope in terms of the technology it captures, the types of decisions affected, the size of the businesses captured, and the range of industries implicated in any "one size fits all" approach;
• Provide sufficient confidentiality protections from public disclosure, both to protect trade secrets and to avoid other concerns, such as impact assessments being used as fodder for litigation, undermining the level of candor necessary for accurate assessments;
• Not include ancillary and unrelated obligations that are unnecessary to achieve the actual objective of ensuring that these tools are developed and deployed in a manner that is not discriminatory. Such obligations can include the enforcement of new consumer opt-out rights, individual notice requirements, rights to appeal, and more;
• Ensure enforcement by a single entity, without any private right of action, to promote uniform application of the law and, again, to encourage candor; and
• Include preemption not only of local jurisdictions, but also of state entities to avoid overregulation and to avoid increasingly fragmented regulatory schemes with conflicting or confusing requirements.
Risks of Bias, Discrimination, Overregulation
Although ADMTs can pose risks of bias and discrimination, and care must be taken to avoid such outcomes, these risks exist whether decisions are human-made from start to finish or a result of using or incorporating new technologies in the decision-making process. ADMTs also present significant benefits. To name just a few, this technology can: enable quick approvals and access to credit that would take much longer if decided solely by human processes; provide broader access to credit; protect consumers against fraudsters by helping identify uncharacteristic account activity; automate repetitive tasks (such as entering data in two places at once); and minimize errors by comparing current work to past work. Perhaps one of the most beneficial uses of ADMTs is their potential to reduce the instances and effects of human bias. Overregulation, however, can block all of those current and future benefits.
Starting with AB 331 in 2023 and AB 2930 in 2024, both introduced by Assemblymember Rebecca Bauer-Kahan (D-Orinda), the California Legislature has been considering legislation that would require impact assessments of ADMTs to help avoid bias and discrimination in the development and deployment of those tools. Although presented as simple legislation requiring businesses to conduct impact assessments of ADMTs making consequential decisions in order to avoid or reduce instances of bias and discrimination, the bills went much further. For example, AB 2930 also required businesses to do all of the following:
• Provide an opportunity to opt out of the use of the ADMT if "technically feasible," regardless of whether the business is in full compliance with AB 2930, even when the human alternative could be rife with more bias or when an opt-out would be technically feasible but completely unreasonable.
• Provide consumers specific notices, both pre- and post-use, including consumer-specific notices that in some cases would be completely impractical, if not infeasible, rendering the use of ADMTs pointless and slowing down any number of business processes.
• Establish, document, implement, and maintain governance programs, which is particularly difficult for smaller businesses that may have only one employee with human resources responsibilities and no experience establishing such programs for a sophisticated technology like ADMTs.
Minimum Requirements for Future Legislation
Although the problems with the bills were not limited to these issues, in opposing AB 331 and AB 2930, the CalChamber identified five major priorities for the business community that had to be addressed at a minimum. These priorities would not resolve many of the practical problems with the bills (such as redundancies, confusion between "developers" and "deployers," retroactivity, etc.), but they provide a good guide to the major elements of any compromise legislation moving forward, both for impact assessment-related legislation and for other AI-related legislation. For example, with all AI legislation, the CalChamber has taken the position that a proposal should not regulate the technology itself, but should take a risk-based approach, legislating high-risk use cases or applications of the technology where appropriate.
Specifically, the CalChamber will be looking to ensure that any legislation on these issues meets certain requirements:
• Have a well-defined, manageable, and reasonable scope, both in terms of the technology it captures (if overly broad, the law defeats the utility of these tools, with a direct impact on other consumer interests) and in terms of the types of decisions affected (that is, being sufficiently risk-based to differentiate between decisions that pose a low versus a high risk to a person's rights), as well as in the size of the businesses captured (especially as impact assessments may have to be outsourced to experts, and small businesses may not be able to absorb significant costs) and the industries implicated in any "one size fits all" approach.
• Provide sufficient confidentiality protections from public disclosure, both to protect trade secrets and to avoid other concerns, such as impact assessments being subject to Public Records Act requests and being used as fodder for litigation, which would ultimately undermine the candor necessary for accurate assessments.
• Be reasonably tailored to the objective and not involve ancillary and unrelated obligations that are unnecessary to ensure that these tools are developed and deployed in a manner that is not discriminatory.
For example, an impact assessment bill should be limited to impact assessments and not include other obligations, such as the enforcement of new consumer opt-out rights, which do nothing to make a tool less discriminatory. In fact, because these tools learn from the data fed into them, allowing consumers to opt out could skew that data and make the tools more discriminatory. In some instances, the right would be nonsensical, such as in an emergency room when treating a patient who requires lifesaving procedures.
Other examples include the pre-use and post-use notice requirements, which would significantly undermine the utility of these tools (for example, it would be impossible to explain to each person how the tool calculated their credit score, let alone enforce a nebulous right to correct information they believe is incorrect). Resolving such public policy issues is not necessary to conduct ADMT impact assessments.
• Ensure enforcement by a single entity, without any private right of action, to promote uniform application of the law and, again, to encourage candor. A single enforcer (the Attorney General) will promote consistent interpretation and application across the state.
• Include preemption not only of local jurisdictions, to avoid increasingly fragmented regulatory schemes with conflicting or confusing requirements, but also of state entities, to avoid overregulation and to prevent agencies from getting ahead of the Legislature and Governor on issues that could devastate the California economy. Two state entities are already conducting formal rulemakings on these same issues. These issues are too important to Californians across the state and to the struggling economy to delegate and defer significant policy decisions to unelected officials.
CalChamber Position
The CalChamber would support reasonable legislation that ensures the responsible development and deployment of ADMTs to help reduce, if not prevent, instances of algorithmic discrimination without creating confusion as to what is and is not unlawful. Legislation must not regulate the technology itself; rather, it must focus on specific, high-risk use cases and applications of the technology. It must be sufficiently risk-based, narrow in scope (the ADMTs, businesses, and types of decisions captured), and balanced, both to be operable and to avoid overregulation.
Other elements that must be included are: reasonably tailored obligations (no opt-out rights and no overly broad, cumbersome notice and right-to-correct requirements), confidentiality protections, no private right of action, a single enforcer, and preemption of localities and state agencies.
Keeping in mind that anti-discrimination laws are technology neutral and are not obviated by the use of these tools, the focus should be on practices that help develop and deploy the technology responsibly to avoid bias and discrimination, not on discouraging the advancement or application of the technology.
February 2025

Staff Contact
Ronak Daylami
Policy Advocate
Privacy and Cybersecurity