The Internet has become an indispensable tool that many rely on for information, marketing, commerce, and connection. A quick Google search retrieves information that would otherwise take days to find in a library. Society has become deeply dependent on this access to information, which allows individuals to “make quicker, more-informed decisions” and “connect [with] anything or anyone at any given moment.” However, “[o]ur greatest strength can also be our greatest weakness, and our human relationship with technology is a classic testament to that.”
Social media platforms have grown immensely over the past decade, and many users now treat them as their primary source for current events and breaking news. A study conducted in 2020, at the height of the COVID-19 pandemic and the U.S. presidential election, revealed that a staggering 53% of adults in the United States get news from social media either “often” or “sometimes.” Facebook is the most popular such source: 36% of Americans regularly use the site to learn about news, out of the 68% of Americans who use Facebook at all. X (formerly known as Twitter) follows, with 15% of Americans regularly relying on the site for news, out of the 25% of Americans who use X at all.
Social media platforms are owned and operated by private entities that currently have full control over their algorithms and other content-moderation policies. Because of social media’s influential role, especially among younger generations, tension has grown over a state’s ability to regulate the interaction between platforms and their users through content-moderation rules. Platforms are resisting state intervention by asserting First Amendment claims, arguing that they have a right to free speech and that content-moderation decisions are themselves protected speech. This issue has produced a circuit split between the U.S. Courts of Appeals for the Fifth and Eleventh Circuits, which arose after Florida and Texas enacted statutes placing major restrictions on social media platforms’ ability to freely censor or moderate content. Specifically, both statutes include nondiscrimination provisions, in addition to disclosure provisions, that would prohibit platforms from censoring based on viewpoint. The key tension lies between the purported First Amendment rights of the private entities that run social media platforms and the ability of users to express and be exposed to diverse viewpoints through “one of the most important communications mediums used in [the] [s]tate[s].”
Both Florida and Texas argue that their statutes prohibiting viewpoint discrimination are constitutional because the statutes do not restrict protected speech, and, further, that platforms should be subject to common carrier obligations. The plaintiffs, representing large social media platforms, counter that content-moderation decisions require the exercise of editorial judgment, which past cases have treated as protected speech. The importance of placing meaningful limits on platforms’ censorship policies has become even more evident with the recent acquisition of X, which illustrates that a change in management of such widely relied-upon services could be devastating to access to information. Given the uncertainty over the state of the law and the importance of providing direction and uniformity in interpreting the Constitution, a petition for a writ of certiorari was filed by Texas and granted by the Supreme Court. Opening briefs were filed on November 30, 2023, and oral argument occurred on February 24, 2024.
One of the main difficulties in resolving this issue is the continuum of control and expression that platforms exert when moderating content. This Note will argue that whether moderation decisions rise to the level of protected speech is ultimately fact dependent. Platforms that lack a clear target audience and censor only objectively obscene content (rather than content conflicting with subjective beliefs) do not convey a message through their content moderation that amounts to protected speech. Most large platforms, such as X, Facebook, Instagram, and TikTok, fall within this category. Conversely, this Note will argue that platforms that clearly moderate content based on political or other personal beliefs, and communicate those choices to their users, enjoy First Amendment protection because the moderation expresses a message equivalent to speech. By conveying subjective viewpoints through its content moderation, a platform enables potential users to make informed decisions about whether to opt in to its services. Groups that fall within this second category include Vegan Forum, ProAmerica Only, and Democratic Hub.
Alternatively, with an eye toward consistency and the best overall policy outcome, there is an argument that Congress should designate social media platforms as common carriers and regulate this area much as it regulates the telecommunications industry. This Note primarily provides a doctrinal analysis of common carrier law and editorial judgment and applies that analysis to the conflicting arguments raised in the circuit-split cases. While the current debate is highly politicized, with the Florida and Texas statutes widely perceived as attempts to stop platforms from censoring conservative views, this Note argues that analyzing these issues through a neutral, doctrine-focused lens will yield a positive long-term solution.
Part I of this Note will describe an example of a current content-moderation policy exercised by a large social media platform. Part II will provide a doctrinal analysis of First Amendment law, focusing on the development and current state of “common carrier” and “editorial judgment” doctrine. Part III will identify the state and federal statutes that underlie the circuit-split litigation. Part IV will discuss the facts and the conflicting rationales of the circuit-split cases, highlighting the most persuasive arguments and their relationship to the doctrinal analysis of First Amendment law provided in Part II. Part V will then address the significance of resolving this issue and how its resolution will affect social media platforms, states, and the greater community. A conclusion will follow.