Content Moderation and Internet Safety

2022 Business Issues Guide

Legislation Must Consider First and 14th Amendment Concerns

Few things capture the imagination more than our ability to connect with each other using cheap, commonplace digital services. We enjoy a world where we can see our loved ones instantly, in real time, when we may be too distant to see them in person. Out of the shadows of a pandemic-induced shutdown also came the realization that these services were not just tools for interaction, but tools that could help us continue to thrive and connect economically and socially. But the increased use and usefulness of online tools have also led to more questions about harmful content on the internet and how best to keep the internet safe for everyone.

Surely, there is no shortage of stories in which the use of digital technology has helped highlight the best in humanity. But there is a growing concern that the use of digital tools for harmful or otherwise negative ends warrants some measure of protection. Most people would agree that the internet should be safe and secure for everyone. But reasonable minds differ about how best to achieve that shared goal.

Constitutional Challenges with Content Moderation

One of the difficulties of drafting legislation related to content moderation is the risk of running afoul of First and 14th Amendment protections. The First Amendment deals with free speech and restricts the government’s power to control the speech of private actors. The 14th Amendment deals with equal protection under the law.

First Amendment Considerations

The First Amendment generally prohibits states from interfering with speech, the press, religious beliefs, or the right to assemble. Relevant here is that the First Amendment does not apply to private actors, and thus restricts only the state’s ability to tell its citizens what they can and cannot express.

In the context of content moderation, whether legislation would survive a judicial challenge hinges on whether it is content-based or content-neutral. This distinction is important because it determines the level of review the court will apply: strict scrutiny or intermediate scrutiny.

A content-based law or regulation discriminates against expression based on what that expression communicates. Typical examples are restrictions on specific ideological viewpoints. Courts review content-based restrictions under strict scrutiny.

Strict scrutiny is the most exacting level of judicial examination and requires the government to demonstrate that the statute in question serves a compelling state interest in a manner that imposes the least possible burden on expression.

By contrast, a content-neutral restriction may regulate how expression occurs, but it does not restrict what is expressed. Content-neutral restrictions generally are those that impose reasonable time, place, or manner limits on expression, or that address the secondary effects of expression. Examples include requirements that demonstrators comply with safety protocols when gathering on public property.

Courts review content-neutral restrictions under intermediate scrutiny, a lower standard of review than strict scrutiny. To survive intermediate scrutiny, the state must demonstrate that a challenged law furthers an important government interest and does so by means that are substantially related to that interest.

14th Amendment Considerations

Content moderation legislation also may be subject to challenges under the 14th Amendment’s Equal Protection Clause. The purpose of the Equal Protection Clause is to secure every person against arbitrary discrimination. Under the 14th Amendment, the standard of review applied by the court hinges on the characteristics a statute uses to draw lines between groups.

Statutes that single out groups based on protected characteristics, such as race, are subject to strict scrutiny. Classifications based on quasi-suspect characteristics, such as gender, are subject to intermediate scrutiny. Purely arbitrary government classifications, even classifications consisting of just one person, are subject to rational basis review. Rational basis is a very deferential level of judicial review.

For online content moderation, the key question is to what extent the identification of general characteristics is relevant to the objective of the legislation. For example, if a bill seeks to regulate “social media companies,” it cannot define the term merely by reference to the number of users who visit a website. If that were the case, some of the most dangerous or radical platforms could go unregulated simply because they are not large enough to fall within the definition.

Similarly, if a bill defines “social media companies” by outlining a specific set of features, such as the ability to create a profile, interact with other users, or share user-generated content, bad actors could easily subvert the law by altering the features offered on their services to avoid its application.

In both examples, the arbitrary definition of who is covered by the law could violate the 14th Amendment, although such a challenge would be subject to a very deferential standard of review by the courts.

Feasibility of Compliance

In addition to difficulties with adequately defining the scope of content moderation legislation, feasibility of compliance also is a concern. Businesses are not opposed to investing in an internet that is safe for everyone, but imposing costly ongoing compliance requirements can be unnecessarily punitive. For years, businesses have invested billions of dollars, hired tens of thousands of workers, and developed scaled operations dedicated to transparency, safety, and security online.

By now, everyone has seen the rollout of increased privacy, security, and safety features on their favorite applications. Businesses treat information about how these features work as sensitive and proprietary because inadvertent disclosure or breach of these processes could compromise the systems designed to protect users. For example, details of how security warnings are triggered, or of the methods by which a system verifies a user, can be sensitive and should be protected against potential breach or disclosure. This is why bills that require businesses to create and submit detailed reports on content moderation practices, or to divulge sensitive content moderation information, significantly increase the risk that the information will be breached or otherwise compromised. Proposed legislation that creates content moderation reporting requirements should account for the data breach risks that overreporting or disclosure may create.

Legislative Efforts

Drafting legislation in this area that strikes the right balance between the twin objectives of promoting internet safety and maintaining First and 14th Amendment principles is challenging. The primary flaws of California Chamber of Commerce-opposed bills in this area last year related to their scope of application and feasibility of compliance:

• AB 613 (C. Garcia; D-Bell Gardens) Would have required platforms, as defined, to place labels on images, specifically those that depict humans and have been altered with regard to bodily figure or skin, and are posted for promotional or commercial purposes. The Assembly policy committee hearing on this bill was canceled at the author’s request.

• AB 587 (Gabriel; D-San Fernando Valley) Would have required all social media companies to make detailed disclosures on a quarterly basis describing content moderation practices and procedures, including details that could threaten the security and efficacy of content moderation practices currently in place. This bill passed the Assembly, but the Senate policy committee hearing was canceled at the author’s request.

• AB 1545 (Wicks; D-Oakland) Would have created restrictions on specific features and content that can be viewed by underage users online. This bill was held in the Assembly fiscal committee.

• SB 388 (Stern; D-Canoga Park) Would have required social media platforms to divulge detailed information relating to content management practices to the Department of Justice on an annual basis. The Senate policy committee hearing on this bill was canceled at the author’s request.

CalChamber Position

Social media platforms are unique in that their users create the content and express their own views. Forcing social media companies to censor, edit, or remove content creates constitutional challenges. The CalChamber supports efforts to protect consumers, but such efforts must balance constitutional protections and must not impose on companies unnecessary compliance requirements that provide no additional consumer safety protections.

January 2022

Staff Contact

Ronak Daylami
Privacy