Smart Laws: Regulating Algorithms
Cultivating Innovation While Protecting Consumers
Across the globe and here at home, California is known as a leader in technological innovation and creative ingenuity. With this reputation, California has been able to attract some of the most impactful businesses and talent from around the world. New digital technologies and constantly evolving computer systems are a cornerstone of California’s competitiveness, and the innovation economy has been fruitful for the state.
According to the California State Assembly’s Committee on Jobs, Economic Development and the Economy, California’s innovation economy continues to lead the nation:
• Intellectual Property: California ranks first in the nation in the total number of patents (39,139). California is second in the nation in patents granted to independent inventors per 1,000 workers, at over double the national average.
• Entrepreneurship: California ranks fourth in the nation in entrepreneurial activity with 33% more people starting a new business than the national average.
• Venture Capital: California is a leader in venture capital. More than half of all venture capital dollars invested in small businesses and startups during the first quarter of 2018 were earned by California-based companies.
• High Technology Jobs: California is 11th in the nation with 6% of all jobs in the state being in a high tech industry.
In recent years, however, California’s ability to maintain its lead has come into question as it gains a reputation for being too tough on technology companies, particularly when other states are welcoming new companies with exciting incentives. (States use credits and incentives to attract startups and technology companies. See Deloitte https://www2.deloitte.com/us/en/pages/tax/articles/states-use-credits-and-incentives-to-attract-startupsand-technology-companies.html.)
But California’s approach to regulating technology does not have to choose between innovation and consumer protection. The two are not mutually exclusive. A policy that values the inclusion of industry expertise could yield much sweeter fruit for consumers and the state. In particular, one area where California can exercise this inclusive approach is in regulating algorithms in a way that cultivates innovation and protects intellectual property while still protecting consumer interests.
Clarke’s third law states that any sufficiently advanced technology is indistinguishable from magic. Importantly, however, algorithmic computing is not magic. The development of algorithms dates to the ninth century CE, when the Persian mathematician Al-Khwarizmi, often called the father of algebra, developed a series of step-by-step computations that broke complex mathematics down into smaller, simpler steps. These step-by-step computations gave people the instructions necessary to carry out even more complex mathematics.
Algorithms, which take their name from Al-Khwarizmi, work in the same way: a series of step-by-step instructions that are carried out to accomplish larger, more complex tasks. The main distinction today is that these step-by-step instructions can be carried out by computers and therefore can accomplish extremely complex tasks with great efficiency. Broadly, when an algorithm (or series of algorithms) is efficient enough to perform tasks that can be associated with human intelligence (like finding you directions), it is called artificial intelligence (AI). When AI can learn new information and use that to improve its own performance, it is called machine learning (ML). These technologies have had a major impact on California’s innovation economy and play a major role in achieving California’s long-term goals in areas such as public health and environmental sustainability.
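To make the idea concrete, here is a minimal sketch, not drawn from any system mentioned in this article, of an algorithm in exactly this sense: Euclid’s method for finding the greatest common divisor of two numbers, a short list of unambiguous steps that a computer can repeat until the task is done.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a fixed recipe of simple steps.

    Step 1: if b is zero, a is the answer.
    Step 2: otherwise, replace (a, b) with (b, a mod b).
    Step 3: repeat from Step 1.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21
```

Each step is trivial on its own; the power comes from a machine executing the recipe quickly and without error, which is all that distinguishes modern algorithmic computing from Al-Khwarizmi’s hand calculations.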
According to the National Cancer Institute (NCI), “Integration of AI technology in cancer care could improve the accuracy and speed of diagnosis, aid clinical decision-making, and lead to better health outcomes.” (See https://www.cancer.gov/research/areas/diagnosis/artificial-intelligence.) As noted by the NCI, AI excels at recognizing patterns in large volumes of data and identifying characteristics in data (including images) that humans cannot perceive.
In one study, scientists at NCI helped develop an artificial intelligence approach to detecting precancerous lesions in cervical images. A gynecologist’s exam identified 69% of true positives; a Pap smear identified 71%; and the AI approach identified 91%, making it the most accurate of the methods compared. (See NCI, “Leveraging AI to Improve Detection of Cervical Precancer.”)
Similar use of AI in medicine has yielded real-world results. In 2016, researchers at the University of California, San Francisco piloted a new system that uses AI to detect sepsis, a life-threatening response to infection. The death rate fell by more than 12 percentage points, meaning patients whose treatment involved the artificial intelligence were 58% less likely to die in the intensive care unit (ICU). See BMJ Open Respiratory Research, “Effect of a machine learning-based severe sepsis prediction algorithm on patient survival and hospital length of stay: a randomised clinical trial” (2017); also NBC News, “How hospitals are using AI to save their sickest patients and curb ‘alarm fatigue,’” (July 27, 2019).
Beyond medicine, AI also plays a major role in addressing climate change and environmental sustainability. Columbia University’s Earth Institute called artificial intelligence “a game changer for climate change and the environment.” In Norway, AI helped create a flexible and autonomous electric grid that integrates more renewable energy. AI has helped farmers in India achieve 30% higher groundnut yields per hectare by providing guidance on preparing the land, applying fertilizer and choosing sowing dates. AI has even helped researchers achieve 89% to 99% accuracy in identifying tropical cyclones, weather fronts and atmospheric rivers. See Columbia University, Earth Institute, State of the Planet blog (June 5, 2018).
Fears About Algorithm Use
What is important to understand is that algorithms by themselves are not inherently good or bad. They are simply instructions that provide a framework for carrying out tasks, akin to a series of “if-then” statements. But today, the prevalence of this technology combined with widespread misunderstanding about how it works has led to a general fear of the unknown.
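A purely illustrative sketch of that “if-then” character, with invented rules and thresholds rather than anything used by a real lender or employer, might look like this:

```python
def screen_applicant(credit_score: int, income: int, debt: int) -> str:
    """Hypothetical screening rules -- every branch is an explicit,
    inspectable instruction, not a hidden or mysterious judgment."""
    if credit_score < 600:
        return "decline"
    if debt > income * 0.5:
        return "manual review"
    return "approve"

print(screen_applicant(720, 80_000, 10_000))  # approve
```

Nothing in such a framework is good or bad in itself; the outcomes depend entirely on the rules and data people put into it, which is where scrutiny belongs.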
In particular, people have come to fear the use of algorithms in more important decision-making processes, leading to concerns about fairness and privacy. But just as with any technology, a distinction must be made between the technology itself and the conduct of bad actors.
The most-cited concerns surrounding the use of algorithms relate to their use as tools to help make decisions that affect people, particularly protected classes. People legitimately fear the use of algorithms in areas such as criminal justice, financial lending, and even job hiring because of the concern that algorithms may inadvertently disadvantage protected classes of people.
For example, if an algorithm is given criminal records already stained by decades of systemic racism, then that algorithm may have a disparate impact on Black and Hispanic people. Just the same, if an algorithm reviewing thousands of resumes is programmed to search for graduates of Harvard University, it may inadvertently reproduce racial bias in the college admissions process, even though the algorithm itself does not know the race of any applicant.
It is important to note that discrimination through disparate impact likely is already prohibited under Title VII of the Civil Rights Act of 1964 and California’s Fair Employment and Housing Act (FEHA). Under either state or federal law, an employee likely can challenge the use of a facially neutral policy or test, including an algorithm, if the policy or test has a disparate impact on a protected classification such as race or gender.
Tool for Eliminating Bias
Eliminating algorithms from decision making could actually remove a layer of accountability from the process. This is because algorithms can be audited for fairness, tested for results, updated and improved. In stark contrast, with a traditional hiring scheme, a manager with a bias against members of a protected class cannot easily be audited, tested for results, or updated to perform better next time. In this way, algorithms can serve as a tool for eliminating bias and for identifying bias in data sets (such as criminal records), and indeed they commonly are used to do exactly that. Considering the benefits that algorithms provide our world, most people would agree that the solution lies not in inhibiting or discouraging the use of the technology itself, but in policy that is smart enough to adapt and improve with the innovations that affect our world.
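Because an algorithm’s decisions can be logged and re-run, an audit can be as simple as comparing selection rates across groups. Here is a hedged sketch using invented data and the federal “four-fifths” rule of thumb (a group’s selection rate below 80% of the highest group’s rate is flagged for closer review); the group labels and log are hypothetical:

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected_bool) pairs from an audit log."""
    totals, picked = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            picked[group] += 1
    return {g: picked[g] / totals[g] for g in totals}

def four_fifths_flags(rates):
    """Flag any group selected at less than 80% of the top group's rate."""
    best = max(rates.values())
    return {g: rate / best < 0.8 for g, rate in rates.items()}

# Hypothetical log: group A selected 60 of 100 times, group B 35 of 100.
log = ([("A", True)] * 60 + [("A", False)] * 40
       + [("B", True)] * 35 + [("B", False)] * 65)
rates = selection_rates(log)
print(rates)                     # {'A': 0.6, 'B': 0.35}
print(four_fifths_flags(rates))  # {'A': False, 'B': True}
```

No comparable few-line test exists for a human manager’s gut decisions, which is the accountability point made above.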
As questions continue to arise regarding the use of algorithms and their impact on people, the California Chamber of Commerce expects to see legislation on the issue. In 2020, two bills, AB 2269 (Chau; D-Monterey Park) and SB 1241 (Lena Gonzalez; D-Long Beach), sought to address the concern of discrimination in the use of algorithms. Neither bill moved or even received a policy hearing, in part due to the pandemic, but the CalChamber expects both proposals to return in some form in 2021.
The CalChamber encourages promoting the continued growth of emerging technologies and innovation by tailoring statutes that regulate technologies to address specific, problematic behaviors by bad actors. Overbroad regulations that fail to isolate the problem will unnecessarily burden innovation in California and discourage further investment in the state.