Smart Laws: Regulating Algorithms

2022 Business Issues Guide


Cultivating Innovation While Protecting Consumers

Across the globe and here at home, California is known as a leader in technological innovation and creative ingenuity. With this reputation, California has been able to attract some of the most influential businesses and talent from around the world. New digital technologies and constantly evolving computing systems are a cornerstone of California’s competitiveness, and the innovation economy has been fruitful for the state.

In recent years, however, California’s ability to maintain its lead has come into question as the state gains a reputation for being too tough on technology companies, particularly at a time when other states are welcoming new companies with exciting incentives. (See Deloitte, “States Use Credits and Incentives to Attract Startups and Technology Companies,” https://www2.deloitte.com/us/en/pages/tax/articles/states-use-credits-and-incentives-to-attract-startups-and-technology-companies.html.)

But California does not have to choose between innovation and consumer protection. The two are not mutually exclusive. A policy process that values the inclusion of industry expertise could yield much sweeter fruit for consumers and the state. One area where California can exercise this inclusive approach is in regulating algorithms in a way that cultivates innovation and protects intellectual property while still protecting consumer interests.

Background

At the outset, it is important to understand that algorithms by themselves are not inherently good or bad. They are simply instructions that provide a framework for carrying out tasks, akin to a series of “if-then” statements. But today, the prevalence of this technology, combined with widespread misinformation about surveillance and tracking, has led to a general fear of the unknown. In particular, people have come to fear the use of algorithms in more important decision-making processes, leading to concerns about fairness and privacy. But just as with any technology, a distinction must be made between the technology itself and the conduct of bad actors.
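
To make the “if-then” framing concrete, consider the minimal sketch below of a toy loan-prescreening algorithm. Every name, field, and threshold in it is invented for illustration; it is not drawn from any real system:

```python
# A minimal, hypothetical illustration of an algorithm as "if-then" rules.
# All field names and thresholds are invented for illustration only.

def loan_prescreen(income: float, debt: float, years_employed: int) -> str:
    """Return a routing decision based on simple, explicit rules."""
    if income <= 0:
        return "decline"          # no verifiable income
    if debt / income > 0.45:
        return "manual review"    # high debt-to-income ratio
    if years_employed < 1:
        return "manual review"    # short employment history
    return "approve"              # passed every rule

print(loan_prescreen(income=60_000, debt=21_000, years_employed=3))  # approve
```

The rules themselves are neither good nor bad; they simply encode, explicitly and inspectably, the criteria a human chose.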

The most cited concerns surrounding the use of algorithms relate to their use as tools to help make decisions that affect people, particularly members of protected classes. People legitimately fear the use of algorithms in areas such as criminal justice, financial lending, and even job hiring because of the concern that algorithms may inadvertently harm protected classes of people. But it is important to note that any potential bias in algorithms inherently stems from human bias. For example, if an algorithm is given criminal records that already are stained by decades of systemic racism, then that algorithm may have a disparate impact on people of color. Similarly, if an algorithm reviewing thousands of resumes is programmed to search for graduates of Harvard, it may replicate racial bias that exists in the college admissions process, even though the algorithm itself never sees the applicants’ race.
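
The resume example can be sketched in a few lines. The filter below never reads race, yet its facially neutral criterion (a hypothetical “elite schools” list) can still skew outcomes along racial lines wherever school attendance correlates with race. All data here is invented:

```python
# Hypothetical illustration: a facially neutral filter can still produce a
# disparate impact when its criterion correlates with a protected class.

ELITE_SCHOOLS = {"Harvard"}  # the facially neutral criterion

# (applicant id, school, race) -- invented data; race appears only so we
# can observe the outcome, and the filter itself never reads it.
applicants = [
    ("A1", "Harvard", "white"),
    ("A2", "Harvard", "white"),
    ("A3", "State U", "Black"),
    ("A4", "State U", "white"),
    ("A5", "State U", "Black"),
]

selected = [applicant_id for applicant_id, school, race in applicants
            if school in ELITE_SCHOOLS]
print(selected)  # ['A1', 'A2'] -- skewed by the proxy, not by any race check
```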

Consumer Protections Against Disparate Impacts

Under current law, Californians are protected by the ability to bring disparate impact claims under Title VII of the U.S. Civil Rights Act of 1964 and the California Fair Employment and Housing Act (FEHA). When it comes to algorithmic decision making, a disparate impact claim actually can be strengthened by the factual record an algorithm generates, since its inputs and outputs can be preserved and examined.

What people don’t realize is that eliminating algorithms from decision making removes a layer of accountability from the process. This is because algorithms can be audited for fairness, tested for results, updated, and improved. In stark contrast, with a traditional hiring scheme, a manager with a bias against members of a protected class cannot be audited, cannot easily be tested for results, and certainly cannot be updated to work better next time.
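
One way such an audit can work is to compare the algorithm’s selection rates across groups; U.S. employment analysis often uses the “four-fifths” rule of thumb, which flags any group whose selection rate falls below 80% of the highest group’s rate. Below is a minimal sketch, assuming (hypothetically) that the algorithm’s decisions have been logged alongside group labels for auditing purposes:

```python
from collections import defaultdict

def adverse_impact_ratios(decisions):
    """decisions: iterable of (group, selected: bool) pairs.
    Returns each group's selection rate divided by the highest group's rate."""
    totals, picks = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        picks[group] += selected
    rates = {g: picks[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Invented audit log: (group label, whether the algorithm selected them).
log = [("group_a", True), ("group_a", True), ("group_a", False),
       ("group_b", True), ("group_b", False), ("group_b", False)]

ratios = adverse_impact_ratios(log)
flagged = {g: r for g, r in ratios.items() if r < 0.8}  # four-fifths rule
print(ratios)   # {'group_a': 1.0, 'group_b': 0.5}
print(flagged)  # {'group_b': 0.5} -- a disparity worth investigating
```

A human manager’s decisions rarely leave a record this complete; an algorithm’s decisions can be replayed, measured, and corrected.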

In this way, algorithms are actually a tool for identifying bias in data sets (such as criminal records) and even eliminating it; indeed, algorithms commonly are used for exactly that purpose. Considering the benefits that algorithms provide to our world, most people would agree that the solution lies not in inhibiting or discouraging the use of the technology itself, but in policy that is smart enough to adapt and improve with the innovations that affect our world.

Regulating Algorithms

As questions continue to arise regarding the use of algorithms and their impact on people, the California Chamber of Commerce expects to see legislation on the issue.

In 2021, AB 13 (Chau; D-Monterey Park) sought to address concerns about discrimination arising from the use of algorithms. The bill failed to move out of the second fiscal committee that considered it.

In California, regardless of the use of an algorithm, FEHA precludes discrimination on the basis of a protected classification and permits individuals to pursue disparate impact claims. A disparate impact claim allows employees to challenge any adverse employment action that can be tied to facially neutral policies or practices, including the use of an algorithm, with no need to prove discriminatory intent. California law thus already addresses the misuse of an algorithm that can lead to a disparate impact on a protected class.

AB 13 sought to regulate algorithms and would have created significant additional requirements for the state procurement process that would have negatively affected businesses and the state. AB 13 included an impractically broad definition of “automated decision system” (ADS): any “computational process” that “facilitates human decision making, [sic] that impacts persons.” Under that definition, AB 13 applied to virtually any electronic tool used to help make decisions affecting people, including spreadsheets, standardized tests, and performance evaluations. This breadth probably was inadvertent, but it demonstrates a fundamental lack of understanding of these technologies, their real-world applications, and the negative consequences that overbroad legislation on these technologies will bring.

Legislative Activity 2022

The CalChamber expects some form of AB 13 to return in 2022. We also expect the California Privacy Protection Agency to advance regulations on algorithms and machine learning technologies, including reporting and auditing requirements, when such technologies are used in settings that could affect members of protected classes. Designing sound policy requires a thorough understanding of these technologies and how widespread they really are, as well as input from a diverse set of stakeholders.

CalChamber Position

The CalChamber encourages the continued growth of emerging technologies and innovation by tailoring statutes that regulate technologies to address specific, problematic behaviors by bad actors. Overbroad regulations that fail to isolate the problem will unnecessarily burden innovation in California and discourage further investment in our state.

January 2022


Staff Contact

Ronak Daylami
Privacy