First Amendment Governance: Social Media, Power, and a Well-Functioning Speech Environment

Introduction

In Moody v. NetChoice, LLC,1Moody v. NetChoice LLC, 603 U.S. 707 (2024). the Supreme Court declared, in a majority opinion by Justice Kagan, that “it is critically important to have a well-functioning sphere of expression, in which citizens have access to information from many sources. That is the whole project of the First Amendment.”2Id. at 732–33. In Moody, social media platforms claimed that their expressive freedom had been violated by state laws mandating certain content-moderation policies.3Id. at 713–17. Although Moody was decided on the threshold question of what a facial challenge requires, it nonetheless provided some direction with respect to what the government can and cannot do vis-à-vis the First Amendment rights of social media platforms.4Id. at 717–19.

This decision also implicitly raises the question of what it means for a democracy to have a well-functioning political speech environment in the digital era. This question seems particularly urgent given the profound dilemma that social media poses for democratic theory and practice. On the one hand, social media democratizes communication and promotes egalitarianism by reducing the cost of speech.5See Eugene Volokh, Cheap Speech and What It Will Do, 104 Yale L.J. 1805 (1995); Eugene Volokh, What Cheap Speech Has Done: (Greater) Equality and Its Discontents, 54 U.C. Davis L. Rev. 2303, 2305 (2021). It provides new avenues for expression and association, thereby strengthening public discourse. It has also been harnessed to enable citizen participation in political decision-making.6See Hélène Landemore, Open Democracy and Digital Technologies, in Digital Technology and Democratic Theory 62, 66 (Lucy Bernholz et al. eds., 2021); Roberta Fischli & James Muldoon, Empowering Digital Democracy, 22 Persps. on Pol. 819, 819 (2024). On the other hand, social media can undermine democratic functioning, giving rise to various challenges such as disinformation, echo chambers, troll armies, bots, microtargeting, citizen distrust, and foreign election interference.7See, e.g., Cass R. Sunstein, #Republic: Divided Democracy in the Age of Social Media (2017); Nathaniel Persily, Can Democracy Survive the Internet?, 28 J. Democracy 63 (2017); Richard L. Hasen, Cheap Speech: How Disinformation Poisons Our Politics—and How to Cure It (2022). As various attempts at election subversion, including the attack on the Capitol, demonstrate, election disinformation can have damaging and destabilizing effects on democracy and can diminish the confidence that citizens have in elections. The ongoing stability of political institutions should not be taken for granted in our era of democratic decline.8See, e.g., Tom Ginsburg & Aziz Z. Huq, How to Save a Constitutional Democracy (2018); Steven Levitsky & Daniel Ziblatt, How Democracies Die (2018).

Although free speech has always posed this particular dilemma—both essential for, yet potentially injurious to, democracy—key features of the new digital era raise questions as to whether conventional regulatory approaches are sufficient to safeguard the public sphere. Social media platforms enjoy unprecedented asymmetries of wealth and power as compared to their users. These platforms play a crucial role in providing and regulating the online speech environment9See Jack M. Balkin, Free Speech Is a Triangle, 118 Colum. L. Rev. 2011, 2011 (2018). and, hence, in constructing a significant dimension of public discourse. Beyond their dominance, these powerful social media platforms were not created to provide a healthy expressive realm for democracy. Instead, they engage in “surveillance capitalism”—a behavioral advertising business model that monetizes users’ data for immense profits.10See Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power 16 (2019). This profit motive arguably renders the platforms unreliable as self-regulators.11See Abby K. Wood & Ann M. Ravel, Fool Me Once: Regulating “Fake News” and Other Online Advertising, 91 S. Cal. L. Rev. 1223, 1237, 1245 (2018). The outsized power of social media platforms to shape the expressive sphere, combined with their non-public-regarding orientation, raises genuine concerns about the ongoing health of the political marketplace of ideas.

While the overwhelming power of the state has always—and rightly—been viewed as particularly perilous for the freedom of speech, dominant private actors, particularly those who either control or have disproportionate access to the means of communication, can likewise pose a threat to free speech. Is it possible to address such asymmetries of power consistent with the First Amendment? Should social media platforms be regulated to provide for the type of speech environment necessary for democracy? What are the normative attributes of a well-functioning sphere of political expression? More generally, what should be done to protect listeners, a category of democratic actor that tends to receive less scholarly attention than speakers?

This Article offers a preliminary analysis of these issues. It is organized in three parts. Part I begins by providing a brief overview of First Amendment doctrine as it applies to speakers and listeners. In addition, it outlines the three principal values—democracy, autonomy, and truth-seeking—that animate the First Amendment. For the purposes of the ensuing analysis, this Article adopts the view that the First Amendment is geared to promoting democratic self-government. Part I then sets out a normative account of a healthy expressive realm. A well-functioning political speech environment for speakers and listeners, I suggest, is one that is free of domination and coercion and in which acute asymmetries in political and economic power do not distort the capacity of individuals to engage in self-government, principally with respect to three central activities: (1) informed voting; (2) discussion and deliberation; and (3) meaningful participation. I claim further that the speech environment ought to protect individuals’ liberty, equality, epistemic, and nondomination interests in order to foster a healthy sphere of expression for these self-governing activities.

While this Article sets out an admittedly idealized account of what a well-functioning political speech environment would entail, and while such an account may never be attained in full (or even in part), a normative theory provides, I suggest, a useful benchmark by which to assess current challenges and their possible regulatory solutions.12To be sure, the idealized account offered here does not on its own furnish a roadmap for reform efforts; its ambition is instead cabined to identifying normative objectives and the problematic features of the world to which such objectives apply, following what Jacob Levy has described as “a back and forth process between cases and principles, evils and ideals.” Jacob T. Levy, There Is No Such Thing as Ideal Theory, 33 Soc. Phil. & Pol’y 312, 328 (2016). To this end, Part I also identifies certain challenges posed by the digital public sphere, and, in addition, advances a claim of “digital exceptionalism”—the idea that the online world of expression has distinctive features that not only distinguish it from the non-digital world but that also pose unique and profound difficulties for the attainment of a well-functioning expressive realm.

Part II turns to First Amendment jurisprudence to see whether it enables the government to address the challenges posed by the digital world so as to provide for a well-functioning political speech environment. It begins by describing the positive conception of the First Amendment, under which the state is viewed as having an affirmative role in protecting the democratic public sphere from the distortive influence of powerful private entities. Part II then offers a snapshot view of the current law of public discourse, focusing in particular on campaign finance regulation and the Moody decision, to show that the Court has largely abandoned the positive conception in favor of an approach that prohibits the government from ensuring a greater diversity of expression.

While the Court’s approach protects listeners from the power of the state, it gives rise to the troubling conundrum that the political speech environment is left unprotected not only from the dominant power of private tech giants but also from the deficits of the digital public sphere. Neither the state nor the platforms protect listeners from the effects of acute asymmetries of private power. Indeed, many regulatory responses to the challenges of digital exceptionalism would likely fall afoul of the First Amendment. For this reason, the sizeable gap between the normative ideal of a well-functioning political speech environment and the often disheartening reality of the digital public sphere cannot be closed by contemporary First Amendment doctrine.

In response to this conundrum, Part III makes an argument for “countervailance,” which is, in essence, the idea that certain mechanisms could counter, or at least lessen, these asymmetries in power and their resulting deficits such that listeners’ interests are better protected, even if that protection does not rise to the level of establishing the kind of equality needed for self-governance. I briefly consider a suite of countervailing mechanisms—including disclosure and transparency rules, a narrow prohibition of false election speech, strategies to manage deepfakes, state-led incentive structures and norms, public jawboning, and civil society efforts—that can be deployed by public entities, social media platforms, and civil society institutions. Given First Amendment constraints, however, these measures are necessarily modest in their scope and cannot serve as full-blown solutions to the challenges of digital exceptionalism.

I. A Well-Functioning Speech Environment and Its Challenges

This Part sets out a normative account of a well-functioning political speech environment. It also argues for “digital exceptionalism”—the idea that the challenges faced by the digital public sphere are unique and may therefore require a tailored regulatory response. To ground the discussion, I begin with a brief overview of First Amendment values and doctrine as they apply to speakers and listeners.

A. Speakers, Listeners, and the First Amendment

In his philosophical examination of the freedom of expression, T.M. Scanlon identifies three groups of interests: those of participants, audiences, and bystanders.13See T.M. Scanlon, Jr., Freedom of Expression and Categories of Expression, 40 U. Pitt. L. Rev. 519, 520 (1979). Burt Neuborne’s Madisonian reading of the First Amendment likewise identifies a range of participants in a “neighborhood” of expressive freedom, including, most prominently, speakers and listeners.14See Burt Neuborne, Madison’s Music: On Reading the First Amendment 100 (2015). For Neuborne, listeners ought to be treated as equal partners, who, like speakers, require expressive freedom to develop their own identities and preferences.15See id. Speakers and listeners thus go hand in hand: the “free flow of ideas and information generated by autonomous speakers” is “essential to the ability of hearers to make the informed decisions on which the efficient functioning of choice-dependent institutions like democracy, markets, and scientific inquiry depend.”16Id. at 101.

In First Amendment doctrine, however, listener interests play a limited role; indeed, such interests are typically protected to the extent that they correspond to speaker interests.17See Derek E. Bambauer, The MacGuffin and the Net: Taking Internet Listeners Seriously, 90 U. Colo. L. Rev. 475, 477 (2019). To be sure, the underlying logic of the categorical approach to First Amendment jurisprudence—under which the Supreme Court has created tiers of speech based on the value of particular kinds of speech to public discourse—is implicitly oriented to the perspective of listeners.18See Elena Kagan, Private Speech, Public Purpose: The Role of Governmental Motive in First Amendment Doctrine, 63 U. Chi. L. Rev. 413, 476–77 (1996). For instance, political speech is afforded maximum protection because it provides indispensable information for citizens to fulfill their democratic roles, while libel is accorded no value because defamatory statements do not enhance reasoned discourse and indeed detract from it.

The Supreme Court has also recognized that under the First Amendment, listeners may enjoy a “right to know” or an “independent right to receive information.”19Neuborne, supra note 14, at 103–04; Lamont v. Postmaster Gen. of U.S., 381 U.S. 301, 308 (1965) (Brennan, J., concurring); Kleindienst v. Mandel, 408 U.S. 753, 762–63 (1972). Indeed, the right of listeners to receive a free flow of information has served as the basis of the First Amendment’s protection of commercial and corporate speech.20Va. State Bd. of Pharmacy v. Va. Citizens Consumer Council, 425 U.S. 748, 771–72 (1976). However, in the face of the Court’s increasingly deregulatory posture toward commercial speech, critics have argued that rather than protecting listener interests, the Court has subordinated them to corporate speech rights.21See Morgan N. Weiland, Expanding the Periphery and Threatening the Core: The Ascendant Libertarian Speech Tradition, 69 Stan. L. Rev. 1389, 1415 (2017). Although speaker interests usually trump listener interests in the event of a conflict, there are some circumstances outside of public discourse in which listener interests can prevail. As Helen Norton explains, when “listeners have less information or power than speakers,” the law can prohibit speakers from providing false information or can require truthful disclosures with respect to, for example, consumer products or professional speech.22See Helen Norton, Powerful Speakers and Their Listeners, 90 U. Colo. L. Rev. 441, 441–42, 453 (2019). The Supreme Court’s deregulatory turn on compelled professional speech,23Nat’l Inst. of Fam. & Life Advocs. v. Becerra, 585 U.S. 755, 755 (2018). however, has created uncertainty about the status of a broad range of consumer-protective regulations.24See Alan K. Chen, Compelled Speech and the Regulatory State, 97 Ind. L.J. 881, 912–13 (2022).

For both speakers and listeners, there are three principal values that animate the First Amendment: democratic self-government; autonomy or self-fulfillment; and truth seeking through the marketplace of ideas.25See Thomas I. Emerson, Toward a General Theory of the First Amendment, 72 Yale L.J. 877, 878–79 (1963). An additional value proposed by Vincent Blasi—checking the abuse of power—also seems particularly relevant for democratic self-government.26See Vincent Blasi, The Checking Value in First Amendment Theory, 2 Am. Bar Found. Rsch. J. 521, 527 (1977). On this view, the freedoms of speech, assembly, and a free press provide a crucial countervailing force for checking the abuse of power by public officials.

However, there is considerable debate as to which value is predominant. According to Alexander Meiklejohn’s influential theory, the First Amendment is exclusively geared to producing a democratic system of government; hence, “[w]hat is essential is not that everyone shall speak, but that everything worth saying shall be said.”27Alexander Meiklejohn, Free Speech and Its Relation to Self-Government 25 (1948). Owen Fiss likewise argues that the “purpose of free speech is not individual self-actualization, but rather the preservation of democracy, and the right of a people, as a people, to decide what kind of life it wishes to live.”28Owen M. Fiss, Free Speech and Social Structure, 71 Iowa L. Rev. 1405, 1409–10 (1986). On this view, individual autonomy is simply a means to achieve collective self-determination.29See id.

For Robert Post, however, the value of autonomy is inseparable from democratic self-government because democracy depends on the active participation of citizens.30See Robert Post, Meiklejohn’s Mistake: Individual Autonomy and the Reform of Public Discourse, 64 U. Colo. L. Rev. 1109, 1120–21 (1993). Public discourse and free public debate—and, by extension, the autonomy of speakers—must be protected in service of democratic government.31See Robert Post, Equality and Autonomy in First Amendment Jurisprudence, 95 Mich. L. Rev. 1517, 1526–27 (1997). Some scholars place primacy on individual autonomy or self-realization apart from self-government,32See Martin H. Redish, The Value of Free Speech, 130 U. Pa. L. Rev. 591, 593 (1982). on the basis that, following Kant, all individuals possess the right to be treated as ends in themselves.33See Charles Fried, Speech in the Welfare State—The New First Amendment Jurisprudence: A Threat to Liberty, 59 U. Chi. L. Rev. 225, 233 (1992). Finally, the value of truth seeking emphasizes the First Amendment’s role in protecting, and indeed maximizing, the free flow of information, in order for society to better pursue the truth. As stated by Justice Holmes, “the best test of truth is the power of the thought to get itself accepted in the competition of the market.”34Abrams v. United States, 250 U.S. 616, 630 (1919) (Holmes, J., dissenting).

This Article takes the view, as expressed by Cass Sunstein, that the First Amendment is “fundamentally aimed at protecting democratic self-government.”35Cass R. Sunstein, Free Speech Now, 59 U. Chi. L. Rev. 255, 263 (1992); see also Cass R. Sunstein, The First Amendment in Cyberspace, 104 Yale L.J. 1757, 1762–63 (1995) [hereinafter Sunstein, Cyberspace]. The other values—autonomy, truth seeking, and checking the abuse of power—will be treated as serving the democracy value.

A related, but conceptually distinct, question concerns the role of the democratic state: should the government regulate speech in order to promote the democracy value? There are two competing constellations of ideas, which correspond roughly with the libertarian and egalitarian approaches to speech. The libertarian approach asserts that state regulation of speech is particularly dangerous for democracy. Speech itself is a form of power: it enables citizens to hold leaders to account and check the abuse of official power. Given state incentives to stifle dissent and criticism, content-based regulations of speech are prohibited save for a few tightly circumscribed and justified exceptions for particularly disfavored speech such as obscenity or libel.36See Cass R. Sunstein, Democracy and the Problem of Free Speech 1–51 (1st Free Press Paperback ed. 1995). The overall posture is one of distrust of government,37See Helen Norton, Distrust, Negative First Amendment Theory, and the Regulation of Lies, 22-07 Knight First Amend. Inst. 3 (Oct. 19, 2022), https://knightcolumbia.org/content/distrust-negative-first-amendment-theory-and-the-regulation-of-lies [https://perma.cc/8F46-R2LH]. in keeping with what Vincent Blasi has termed the “pathological perspective,” whereby the First Amendment is “targeted for the worst of times.”38Vincent Blasi, The Pathological Perspective and the First Amendment, 85 Colum. L. Rev. 449, 449–50 (1985). Under the libertarian approach, expressive liberties are best served by minimizing state regulation, thereby enhancing the free flow of information in the marketplace of ideas. In general, this constellation of ideas is associated with a negative rights approach to the First Amendment, under which the role of the state is to refrain from interfering with citizens’ freedom of speech.

The second, and opposing, constellation of ideas holds that the primary value of a system of free expression is to enable citizens “to arrive at truth and make wise decisions, especially about matters of public import.”39Kagan, supra note 18, at 424. Under the egalitarian approach, listeners have an interest in being exposed to a wide range of competing views.40See id. at 423–25. However, due to certain factors, such as the cost of political advertising in the campaign finance context, the marketplace of ideas may be skewed toward elite viewpoints. Listeners would thus be deprived of hearing the full range of ideas and political preferences necessary to reach an informed decision. To ensure that listeners are fully informed, the government may have to impose restrictions in order for all points of view to have a roughly equal opportunity of being heard.41See id. As described in more detail below,42See infra text accompanying notes 94–103. this constellation of ideas is associated with a positive rights approach to the First Amendment, under which the government may have to take affirmative steps to protect individuals’ expressive freedoms.

B. A Normative Account of a Well-Functioning Speech Environment

As Justice Kagan observed, a “well-functioning sphere of expression” is “the whole project of the First Amendment.”43Moody v. NetChoice LLC, 603 U.S. 707, 732–33 (2024). But what does it mean to have such a sphere of expression?44For an alternative account of a well-functioning sphere of expression, see Joshua Cohen & Archon Fung, Democracy and the Digital Public Sphere, in Digital Technology and Democratic Theory (Lucy Bernholz et al. eds., 2021). Cohen and Fung offer an account of the informal public sphere (as opposed to formal political processes of elections and decision-making), which has five elements: rights to expression and association, fair opportunities to participate, access to information from reliable sources, a diversity of views, and the capacity for joint action arising from discussion. Id. at 29–30. This Article argues, as a normative matter, for the promotion of a well-functioning political speech environment for speakers and listeners, one that is free of domination and coercion, and in which acute asymmetries in political and economic power do not distort the capacity of individuals to engage in various self-governing activities, including the following:

(1) Informed Voting: individuals form opinions on public matters based on reliable information in both digital and non-digital mediums, with access to a wide array of competing viewpoints, thereby engaging in informed voting;

(2) Discussion and Deliberation: individuals engage in discussion and deliberation with other citizens whether online or in person as an integral and ongoing democratic practice necessary to self-governing activities, including but not limited to voting; and

(3) Meaningful Participation: individuals participate meaningfully in the democratic process through a variety of avenues, including voting, deliberating, associating with others whether online or in person, organizing events, consuming or producing political content online, petitioning, and the like, thereby ensuring governmental responsiveness and accountability.

The idea is that democratic citizens should be able to participate in the democratic process with full knowledge and equal freedom.

To foster a healthy expressive realm for these self-governing activities, I further claim that the speech environment ought to protect individuals’ liberty, equality, epistemic, and nondomination interests. The protection of these interests, I suggest, is required to ensure that public discourse is organized and conducted in a manner that serves the value of democratic self-government. To be sure, there will inevitably be conflicts among these interests that would require certain choices and tradeoffs to be made.45For an argument about how the conflicting values of equality and liberty should be instantiated in law, see Yasmin Dawood, Democracy and the Freedom of Speech: Rethinking the Conflict Between Liberty and Equality, 26 Canadian J.L. & Juris. 293 (2013). These interests may also overlap in various ways such that a given outcome could be described as involving, say, both equality and epistemic considerations. While it is beyond the scope of this Article to provide a full account of these interests and their possible conflicts, a few preliminary observations follow.

As described above with respect to the libertarian approach, individuals’ liberty interests are best served by the robust protection of their expressive and associational freedoms under the First Amendment.46See supra text accompanying notes 36–38. Speakers ought to be able to freely express their political opinions and policy preferences, while listeners’ right to know should likewise be shielded from government censorship. In addition to their liberty interests, citizens have equality interests in being exposed to speech that reflects a wide range of competing views, ideas, and political preferences. As described above with respect to the egalitarian approach, the government may have to take affirmative steps to protect listeners’ equality interests in hearing a wide range of viewpoints because the marketplace of ideas may be skewed in favor of elite viewpoints.47See supra text accompanying notes 39–42. For an argument about how the conflicting values of equality and liberty should be instantiated in law, see Yasmin Dawood, Democracy and the Freedom of Speech: Rethinking the Conflict Between Liberty and Equality, 26 Canadian J.L. & Juris. 293 (2013). The speech environment should also protect citizens’ epistemic interests in receiving accurate and reliable information, which is required for reaching good judgments. As Melissa Schwartzberg observes, these epistemic interests ought to also be understood to encompass the kinds of institutions and instruments needed to develop, inform, and assess such judgments.48See Melissa Schwartzberg, Epistemic Democracy and Its Challenges, 18 Ann. Rev. Pol. Sci. 187, 201 (2015). To be sure, epistemic interests may overlap with equality interests to the extent that good judgments depend upon an exposure to a wide range of viewpoints.

Finally, a healthy expressive environment should also protect democratic actors from domination or coercion. As Philip Pettit argues in his influential account of republican freedom, an individual has dominating power over another person to the extent that they have the capacity to interfere on an arbitrary basis in certain choices that the other is in a position to make.49See Philip Pettit, Republicanism: A Theory of Freedom and Government 52 (1997). An act of interference is arbitrary to the extent that the dominating agent is not forced to track the avowable or relevant interests of the victim but instead can interfere as their will or judgment dictates.50See id. at 55. Individuals’ nondomination interests broadly capture the idea that speakers and listeners ought to be protected from the capacity of powerful agents, whether public or private, to interfere arbitrarily in their choices.51For an elaboration of these ideas in the democratic context, see Yasmin Dawood, The Antidomination Model and the Judicial Oversight of Democracy, 96 Geo. L.J. 1411 (2008).

While these four interests—liberty, equality, epistemic, and nondomination—apply to all three self-governing activities, they take different forms depending on the context. In addition, the self-governing activities overlap in various ways: meaningful participation may require informed discussion, for example. The discussion below provides additional details for each self-governing activity.

  1. Informed Voting

Freedom of speech is a precondition for informed voting. As noted by the Supreme Court, the First Amendment has the objective of “securing . . . an informed and educated public opinion with respect to a matter which is of public concern.”52Thornhill v. Alabama, 310 U.S. 88, 104 (1940). Voters learn about the key issues at stake in the election, the differences among political candidates, and the main features of the platforms of various political parties. As Meiklejohn observes, the well-being of the political community depends on the wisdom of voters in making good decisions.53See Meiklejohn, supra note 27, at 24–25. For voters to make wise decisions, they must be aware, to the extent possible, of all the relevant facts, issues, considerations, and alternatives that bear upon their collective life.

Thus, a well-functioning political speech environment provides voters with epistemically reliable information on matters of public import from a wide range of competing sources and perspectives. For this to take place, speakers’ liberty interests must be fostered, and listeners’ equality, epistemic, and nondomination interests must be satisfied. Under these conditions, listeners as voters have access to the information they need to understand matters of public concern.

  2. Discussion and Deliberation

Discussion and deliberation are crucial activities for those individuals we formally deem to be speakers. However, listeners are also, at times, speakers. Listeners do not develop their views in a vacuum: the activities of discussion and deliberation require democratic listeners to engage with others as they evaluate matters of public importance. The idea here is one of active listening, which involves not just the passive receipt of information but requires discussion and debate. Informal conversations among listeners enable them to consider issues of public policy and to make up their minds about what is best for their common lives—activities that lie at the heart of self-government. The First Amendment is principally concerned with the “authority of the hearers to meet together, to discuss, and to hear discussed by speakers of their own choice, whatever they may deem worthy of their consideration.”54Alexander Meiklejohn, Political Freedom: The Constitutional Power of the People 119 (1966) (emphasis added).

As such, the normative account offered here departs in significant ways from Habermas’s formal account of ideal deliberation. Habermas’s theory of the “ideal speech situation” envisions a reasoned discussion among free and equal participants who aim for consensus by being persuaded by the force of the better argument.55See Jürgen Habermas, Discourse Ethics: Notes on a Program of Philosophical Justification, in Moral Consciousness and Communicative Action 89 (Christian Lenhardt & Shierry Weber Nicholsen, trans., 1990). Formal accounts of deliberative democracy, while differing in various respects, all tend to share a commitment to reaching collective decisions through public reasons, that is, reasons that are generally persuasive to all the participants in the deliberation.

However, in my view, this ideal form of deliberation is not necessary to achieve a well-functioning sphere of expression. Instead, as John Dryzek observes, deliberation can include informal discussion, humor, emotion, and storytelling.56See John S. Dryzek, Deliberative Democracy and Beyond: Liberals, Critics, Contestations 1 (2000). Rather than requiring consensus, we should focus on the values of mutual respect, reciprocity, cooperation, and compromise.57See Amy Gutmann & Dennis Thompson, Democracy and Disagreement 346 (1996); James Bohman, Public Deliberation: Pluralism, Complexity, and Democracy 238 (2000); Jane Mansbridge, James Bohman, Simone Chambers, David Estlund, Andrea Føllesdal, Archon Fung, Cristina Lafont, Bernard Manin & José Luis Martí, The Place of Self-Interest and the Role of Power in Deliberative Democracy, 18 J. Pol. Phil. 64, 94 (2010). That being said, a basic predicate of a well-functioning speech environment is that speakers and listeners can engage in discussion, debate, and deliberation free of coercion, harassment, and deception.

To be sure, deliberation has come under criticism for being exclusionary because it tends to favor advantaged citizens.58See Lynn M. Sanders, Against Deliberation, 25 Pol. Theory 347, 349 (1997). Critics have also charged that deliberation is simply unfeasible given the complexity of democratic institutions59See Ian Shapiro, Enough of Deliberation: Politics Is About Interests and Power, in Deliberative Politics: Essays on Democracy and Disagreement 28, 31 (Stephen Macedo ed., 1999). or is difficult to realize in practice given the realities of electoral campaigns.60See James A. Gardner, What are Campaigns For? The Role of Persuasion in Electoral Law and Politics 1, 86, 92–93, 115 (2009). In addition, deliberation may accentuate group polarization.61See Cass R. Sunstein, Why Societies Need Dissent 111–14 (2003). These criticisms underscore the need for a more capacious and inclusive understanding of deliberation.

  3. Meaningful Participation and Governmental Responsiveness

A well-functioning political speech environment must also facilitate meaningful participation by listeners and speakers. Participation can take many forms, including voting and deliberating, but can also include such activities as joining a political party, attending a town hall or a candidate rally, volunteering for a political cause, penning an op-ed, marching and protesting, organizing a petition, or running for office. Meaningful participation has online analogues, such as reading or posting messages on social media platforms, consuming or developing political content, reading or writing blogs, listening to podcasts, or running websites. Citizens engage in meaningful participation when they criticize public officials or government policies, when they join forces with like-minded others and vote for change, and when they organize to influence public policy and legislation. All of these activities depend upon a robust sphere of expressive freedom.

Meaningful participation could also be understood as requiring a relatively equal opportunity to influence the outcome of an election. On this view, listeners as voters would have a strong interest in ensuring a somewhat level electoral playing field.62See Burt Neuborne, The Status of the Hearer in Mr. Madison’s Neighborhood, 25 Wm. & Mary Bill Rts. J. 897, 906 (2017). Meaningful citizen participation is also crucial for ensuring governmental responsiveness and accountability. By communicating and associating with one another, citizens can join together to vote for new political leaders. The threat of being removed from office in the next election is one of the most effective mechanisms for ensuring governmental accountability. A well-functioning speech environment is thus indispensable to ensure that state power is responsive to the interests of citizens.

C. Digital Exceptionalism

Does the digital public sphere provide the conditions necessary to foster a well-functioning political speech environment? In what follows, I identify the central features of what I shall call “digital exceptionalism,” the idea that the digital public sphere has distinctive features that not only distinguish it from the non-digital world but that also pose unique challenges for the promotion of a healthy expressive realm.

A principal challenge is that social media platforms wield vast “asymmetries of knowledge and power” over their users.63See Jack M. Balkin, Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation, 51 U.C. Davis L. Rev. 1149, 1162 (2018). The platforms act as private governors of online speech—enacting, implementing, and enforcing the rules that govern online expression.64See id. at 1197; Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598, 1601–03 (2018). In addition, their power is remarkably concentrated: the digital public sphere is controlled in the main by three companies—Apple, Google, and Meta—that serve as the gatekeepers to online public discourse.65See Nikolas Guggenberger, Moderating Monopolies, 38 Berkeley Tech. L.J. 119, 121 (2023). To be sure, the media landscape in the pre-digital age was likewise highly concentrated: three networks shaped the news on television and a small handful of newspapers made up the national market.66See Henry Farrell & Melissa Schwartzberg, The Democratic Consequences of the New Public Sphere, in Digital Technology and Democratic Theory 198 (Lucy Bernholz et al. eds., 2021). This concentration of pre-digital media power was also problematic, for it undoubtedly reduced the plurality of differing points of view. However, certain mitigating features of the pre-digital public sphere are either absent, or greatly attenuated, in the digital world, and conversely, certain features unique to the digital world amplify the dangers posed by these power asymmetries. I briefly canvass a few of the relevant distinctions, noting, first, that these observations capture general trends and, second, that there are, of course, notable exceptions to each of these distinctions.

The first difference is that the pre-digital news media took a “strong gatekeeper” approach, as compared to the “weak gatekeeper” approach of social media platforms.67See id. at 192. The traditional news media is bound by journalistic standards of objectivity and factual reliability. By contrast, social media platforms impose far fewer gatekeeping controls: while they filter certain prohibited topics such as graphic violence and pornography and rank or label other sorts of disfavored messages, there is far less ex ante quality control. Indeed, as of this writing, Meta has announced that it will eliminate fact checkers in the U.S. and rely instead on a “community notes” system similar to that of X (formerly Twitter).68See Our Approach to Political Content, Meta (Jan. 7, 2025), https://transparency.meta.com/features/approach-to-political-content [https://web.archive.org/web/20250207231253/https://transparency.meta.com/features/approach-to-political-content]. Research suggests, however, that community-based fact-checking systems garner greater trust among users than professional fact-checking, in part because community notes provide additional information and context. See Chiara Patricia Drolsbach, Kirill Solovev & Nicholas Pröllochs, Community Notes Increase Trust in Fact-Checking in Social Media, 3 PNAS Nexus 1, 2, 9 (2024).

Second, as a result of this weak gatekeeping, there are said to be higher levels of misinformation on social media platforms. For example, Elon Musk’s false or misleading claims about elections accrued nearly 1.2 billion views on the social media platform X.69See David Ingram, Elon Musk’s Misleading Election Claims Have Accrued 1.2 Billion Views on X, New Analysis Says, NBC News (Aug. 8, 2024), https://www.nbcnews.com/tech/misinformation/elon-musk-misleading-election-claims-x-views-report-rcna165599 [https://perma.cc/7Q79-CYUH]. Recent empirical evidence suggests, however, that the degree of exposure to misinformation tends to be overstated with respect to the vast majority of users, at least in North America and Europe.70For an analysis of the empirical evidence, see Aziz Z. Huq, Islands of Algorithmic Integrity: Imagining a Democratic Digital Public Sphere, 98 S. Cal. L. Rev. 1287, 1297–98 (2025). Jurisdictions that rely heavily on social media may have different outcomes. For instance, digital misinformation has proved to be a serious challenge in Brazil, with 90% of Bolsonaro supporters believing at least one piece of fake news in 2018.71See Christopher Harden, Brazil Fell for Fake News: What to Do About It Now?, Wilson Ctr. (Feb. 21, 2019), https://www.wilsoncenter.org/blog-post/brazil-fell-for-fake-news-what-to-do-about-it-now [https://perma.cc/7Z6M-4GSH]. In addition, deepfake technology may pose significant challenges for public discourse in the future.72See Bobby Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Calif. L. Rev. 1753, 1786 (2019). This is particularly true as the capacity to generate deepfakes using generative AI will soon outstrip both the platforms’ and users’ ability to detect them.73See Commc’ns. Sec. Establishment, Cyber Threats to Canada’s Democratic Process 18 (2023). A counterpoint, however, is that AI was used extensively, reportedly in a largely successful manner, in India’s recent national election, wherein politicians connected with voters by including deepfake impersonations of candidates and deceased politicians in campaign materials.74See Vandinika Shukla & Bruce Schneier, Indian Election Was Awash in Deepfakes—But AI Was a Net Positive for Democracy, The Conversation (June 10, 2024), https://theconversation.com/indian-election-was-awash-in-deepfakes-but-ai-was-a-net-positive-for-democracy-231795 [https://perma.cc/JT4C-3HWN].

A third difference is that social media platforms erode epistemic trust. The decline in trust, rather than in truth, may ultimately prove more damaging to the public sphere. Experimental evidence suggests that while exposure to deepfakes did not mislead participants, it left them feeling uncertain about the truthfulness of content.75See Cristian Vaccari & Andrew Chadwick, Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News, 6 Soc. Media + Soc’y 1, 2 (2020). This uncertainty, in turn, led to lower levels of trust in news on social media. Researchers surmise that an increase in political deepfakes “will likely damage online civic culture by contributing to a climate of indeterminacy about truth and falsity that, in turn, diminishes trust in online news.”76Id. Epistemic distrust “can severely undermine a sense of democratic legitimacy among large parts of society.”77See Gilad Abiri & Johannes Buchheim, Beyond True and False: Fake News and the Digital Epistemic Divide, 29 Mich. Tech. L. Rev. 59, 65 (2022). The decay of trust also benefits leaders with authoritarian impulses.78See Chesney & Citron, supra note 72, at 1786. By contrast, in the pre-digital world, misinformation in public discourse was counteracted by civil society organizations, in particular the traditional news media, which maintained common standards for accuracy and objectivity, thereby instilling widespread trust in epistemic authorities.79See Abiri & Buchheim, supra note 77, at 65–66.

Fourth, social media platforms generate “epistemic fragmentation”—the idea that citizens no longer share a common set of facts and understandings about political life.80See id. at 66–67. Social media platforms tailor content for each user, leading to what Sunstein has dubbed “the Daily Me.”81Sunstein, supra note 7, at 2. Platforms also enable political campaigns to engage in microtargeting so that political advertising messages vary depending on the race and gender of the recipient. By contrast, citizens under the traditional news media paradigm were more likely to engage with the same news stories.82See Abiri & Buchheim, supra note 77, at 66–67. This fragmentation has compounded challenges to epistemic trust because “citizens no longer trust the same sources of information, and the reliability of the sources they do trust varies substantially.”83Farrell & Schwartzberg, supra note 66, at 192.

A fifth difference is that social media platforms rely on behind-the-scenes algorithms to do the vast majority of content filtering, in an effort to provide listeners with the kind of filtered experience that each user is seeking.84See Jane Bambauer, James Rollins & Vincent Yesue, Platforms: The First Amendment Misfits, 97 Ind. L.J. 1047, 1068 (2022); James Grimmelmann, Listeners’ Choices, 90 U. Colo. L. Rev. 365, 378–79 (2019). Because the predominant characteristic of the expressive environment online is the scarcity of listener attention, an important “means of controlling speech is targeting the bottleneck of listener attention, instead of speech itself.”85See Tim Wu, Is the First Amendment Obsolete?, Knight First Amend. Inst. at Colum. Univ. (Sept. 1, 2017), https://knightcolumbia.org/content/tim-wu-first-amendment-obsolete [https://perma.cc/Y5DM-BJUG]; Tim Wu, The Attention Merchants (2016). As a result of this algorithmic filtering, Erin Miller argues that media companies could exert “skewing power” over certain “consumers’ information pools in a way that prevents them from forming epistemically justified beliefs.”86Erin Miller, Media Power Through Epistemic Funnels, 20 Geo. J.L. & Pub. Pol’y 873, 901 (2022).
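To make the attention-bottleneck dynamic concrete, the following stylized sketch (in Python) illustrates how a ranking algorithm that optimizes solely for predicted engagement can crowd epistemically reliable content out of a user’s limited feed. It is a minimal hypothetical illustration only; the names, weights, and data are invented for exposition and do not depict any platform’s actual system.

    # A stylized sketch of engagement-based feed ranking. All names, weights,
    # and data are hypothetical; no actual platform system is depicted.
    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        predicted_engagement: float  # modeled clicks or dwell time (hypothetical)
        reliability: float           # source trustworthiness, 0 to 1 (hypothetical)

    def rank_feed(posts: list[Post], attention_budget: int) -> list[Post]:
        # The ranker selects for predicted engagement alone; epistemic
        # reliability plays no role in what fills the attention bottleneck.
        ranked = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
        return ranked[:attention_budget]

    posts = [
        Post("Sensational rumor about a candidate", 0.9, 0.2),
        Post("Fact-checked policy explainer", 0.3, 0.9),
        Post("Divisive hot take", 0.7, 0.4),
    ]

    # With only two slots of listener attention, the high-engagement but
    # low-reliability items crowd out the reliable explainer.
    for post in rank_feed(posts, attention_budget=2):
        print(post.text)

On this stylized account, an engagement-driven filter can “skew” a consumer’s information pool in Miller’s sense even without suppressing any particular message.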

Finally, social media platforms “were not created principally to serve democratic values and do not have as their lodestar the fostering of a well-informed and civically minded electorate.”87Persily, supra note 7, at 74. Instead, the platforms engage in “surveillance capitalism,” trading users’ behavioral data for vast profits.88See Zuboff, supra note 10, at 16. This behavioral advertising business model depends on maximizing the amount of time users engage with social media. A variety of deleterious phenomena are thus good for the bottom line, including addictive behavior, sensationalist and divisive content, and weakened privacy norms.89See Lina M. Khan & David E. Pozen, A Skeptical View of Information Fiduciaries, 133 Harv. L. Rev. 497, 505 (2019). Unlike the traditional news media, internet platforms “are not built to create a digital public sphere of common concern.”90Abiri & Buchheim, supra note 77, at 66–67. In addition, the platforms’ system of private governance threatens citizens’ opportunities to engage meaningfully in democratic participation, particularly in light of their lack of accountability to users.91See Klonick, supra note 64, at 1603.

These features of the digital public sphere, taken together, raise serious questions about whether the online speech market provides the conditions necessary to sustain a well-functioning political speech environment. As of this writing, the asymmetry of power between platforms and users has arguably been heightened by the intertwining of governmental and private tech interests. Because social media platforms exert asymmetrical power over users in a way that does not track the public interest, there is a genuine apprehension that listeners’ interests in nondomination are not satisfied. By contrast, selection intermediaries that act in public-regarding ways, such as a well-run national broadcasting corporation, do not pose the same degree of risk. To be sure, traditional media could also exert dominating power over their listeners to the extent they are not forced to track listeners’ avowable interests in a well-functioning public sphere. What matters is whether the selection intermediary upholds public-regarding standards such as the provision of accurate information and a diversity of competing viewpoints.

Digital exceptionalism does not mean that the government is required to intervene in a manner that differs from its regulation of traditional news media. Instead, the distinctive features of the digital public sphere suggest that a specialized and tailored set of regulatory responses may be warranted to foster a well-functioning speech environment. Jack Balkin’s distinction between the “old-school” speech regulation of the pre-digital world and the “new-school” speech regulation of digital intermediaries seems applicable.92See Jack M. Balkin, Old-School/New-School Speech Regulation, 127 Harv. L. Rev. 2296, 2306 (2014). Finally, the concerns raised here do not amount to a blanket condemnation of social media platforms. These platforms provide a range of goods such as entertainment, commerce, convenience, and connection that are rightly valued by consumers.

II. Law and the Speech Environment

To what extent is the normative account outlined in Part I reflected in First Amendment jurisprudence? Or to put the question another way: does the First Amendment offer any conceptual resources that would enable the government to respond to the challenges posed by digital exceptionalism? While it is beyond the scope of this Article to provide a comprehensive answer to these questions, this Part begins by briefly describing the positive conception of the First Amendment, under which the state’s role is to affirmatively protect the democratic public sphere from powerful private actors. Part II then offers a snapshot view of the current law of public discourse,93By “public discourse,” I mean speech that is relevant to the formation of public opinion and that deals with matters of public concern. See James Weinstein, Participatory Democracy as the Central Value of American Free Speech Doctrine, 97 Va. L. Rev. 491, 493 (2011). For an alternative interpretation of this concept, see Robert Post, Participatory Democracy and Free Speech, 97 Va. L. Rev. 477, 488 (2011) (arguing that the “boundaries of public discourse are inherently normative”). focusing in particular on campaign finance regulation and the Moody decision to show that the Supreme Court has for the most part abandoned the positive conception and, as a result, has significantly restricted the range of allowable regulatory responses to the deficits of digital exceptionalism.

A. The First Amendment as a Positive Right

A positive conception of the First Amendment, as mentioned above, holds that the government may have to take affirmative steps to protect expressive freedom from powerful private entities.94See supra text accompanying notes 39–42. Owen Fiss asserts, for instance, that “the impact that private aggregations of power have upon our freedom” means that “sometimes the state is needed simply to counteract these forces.”95Owen M. Fiss, The Irony of Free Speech 2–3 (1996). The state has a duty to “preserve the integrity of public debate” in order to “safeguard the conditions for true and free collective self-determination.”96Fiss, supra note 28, at 1416. In keeping with this duty, the state may have to intervene to protect the “robustness of public debate in circumstances where powers outside the state are stifling speech.”97Fiss, supra note 95, at 4. Sunstein argues for a “New Deal for speech,” under which supposed democratic interferences with the autonomy of private actors are not abridgments of speech; indeed, the autonomy of private actors is itself a product of law and may itself amount to an abridgment.98See Cass R. Sunstein, The Partial Constitution 202 (1993). As such, “what seems to be government regulation of speech might, in some circumstances, promote free speech, and should not be treated as an abridgment at all.”99Id. at 204.

As Genevieve Lakier observes, the Supreme Court understood the freedom of speech as having a positive dimension during the New Deal and Warren Court eras.100See Genevieve Lakier, The First Amendment’s Real Lochner Problem, 87 U. Chi. L. Rev. 1241, 1247 (2020). That is, the First Amendment did not only provide individuals with personal expressive freedom; it also provided them with the means for democratic self-government.101See id. at 1333. For example, in Red Lion Broadcasting Co. v. FCC, the Supreme Court upheld, against a First Amendment challenge, the FCC’s fairness doctrine, which required broadcasters to provide adequate and fair coverage to public issues in a way that accurately captured competing viewpoints.102Red Lion Broad. Co. v. FCC, 395 U.S. 367, 375 (1969). The FCC repealed the fairness doctrine in 1987. According to the Court, the fairness doctrine furthered the “First Amendment goal of producing an informed public capable of conducting its own affairs.”103Id. at 392. However, in the ensuing years, the Court has largely abandoned the positive conception of the First Amendment,104But see Turner Broad. Sys., Inc. v. FCC, 520 U.S. 180 (1997) (upholding against a First Amendment challenge must-carry rules requiring cable television networks to allocate some channels to local broadcast stations). including in the campaign finance context, as discussed below.

B. Public Discourse and Campaign Finance Regulation

The Supreme Court has interpreted the First Amendment as providing the highest possible protection to public discourse due to its centrality to self-government. One of the main ways in which public discourse—specifically electoral speech—is regulated is through campaign finance law.105The discussion that follows is drawn from Yasmin Dawood, The Theoretical Foundations of Campaign Finance Regulation, in The Oxford Handbook of American Election Law 817–42 (Eugene D. Mazo ed., 2024). In recent years, the Supreme Court has taken a deregulatory posture toward campaign finance law, striking down significant parts of the legal infrastructure governing money in politics. This skepticism was apparent in an early landmark case, Buckley v. Valeo,106Buckley v. Valeo, 424 U.S. 1 (1976). in which the Court struck down limits on campaign expenditures because they were not justified by the government’s interest in preventing the actuality and appearance of corruption. In Buckley, the Court explicitly rejected the egalitarian—or equalization—rationale, stating that “the concept that government may restrict the speech of some elements of our society in order to enhance the relative voice of others is wholly foreign to the First Amendment.”107Id. at 48–49. Hence, the “governmental interest in equalizing the relative ability of individuals and groups to influence the outcome of elections” did not justify expenditure limits.108See id. at 49. The Buckley Court found, however, that limits on campaign contributions were justified by the government’s interest in preventing corruption and its appearance. The provision of large contributions “to secure a political quid pro quo from current and potential office holders” undermined the integrity of representative democracy.109See id. at 26–27.

In a subsequent decision, Austin v. Michigan Chamber of Commerce,110Austin v. Mich. Chamber of Com., 494 U.S. 652 (1990), overruled by Citizens United v. FEC, 558 U.S. 310 (2010); see also FEC v. Mass. Citizens for Life, 479 U.S. 238, 257–58 (1986) (observing that the “corrosive influence of concentrated corporate wealth” may make “a corporation a formidable political presence, even though the power of the corporation may be no reflection of the power of its ideas”). the Supreme Court broadened the definition of corruption beyond quid pro quo corruption to encompass the concept of antidistortion, which arose from the “corrosive and distorting effects of immense aggregations of wealth that are accumulated with the help of the corporate form and that have little or no correlation to the public’s support for the corporation’s political ideas.”111Austin, 494 U.S. at 660. The antidistortion concept was ultimately based on an equality rationale.112See, e.g., Stephen E. Gottlieb, The Dilemma of Election Campaign Finance Reform, 18 Hofstra L. Rev. 213, 229 (1989); Kathleen M. Sullivan, Political Money and Freedom of Speech, 30 U.C. Davis L. Rev. 663, 679 (1997). Concentrated corporate wealth gives certain voices far greater political influence than others because speech is expensive.113See David Cole, First Amendment Antitrust: The End of Laissez-Faire in Campaign Finance, 9 Yale L. & Pol’y Rev. 236, 266 (1991). As a result of these inequities in speech capacities, listeners do not have access to the full range of views, which may affect their voting patterns and, hence, skew electoral outcomes. In McConnell v. FEC,114McConnell v. FEC, 540 U.S. 93 (2003) (quoting FEC v. Colo. Republican Fed. Campaign Comm., 533 U.S. 431, 441 (2001)), overruled by Citizens United v. FEC, 558 U.S. 310 (2010). the Court held that corruption also encompassed the “undue influence on an officeholder’s judgment, and the appearance of such influence.”115Id. at 95. Undue influence arises when political parties sell special access to federal candidates and officeholders, thereby creating the perception that money buys influence. The undue influence standard is concerned with the skew in legislative, rather than electoral, outcomes.

The Supreme Court’s decision in Citizens United v. FEC,116Citizens United v. FEC, 558 U.S. 310 (2010). however, marked a turning point, implicating listener interests in at least four ways. First, the Supreme Court rejected Austin’s antidistortion rationale on the basis that it was actually an equalization rationale in violation of Buckley’s central tenet that the First Amendment prevents the government from restricting the speech of some in order to enhance the voice of others. The Court held that preventing quid pro quo corruption or the appearance thereof was the only governmental interest strong enough to overcome First Amendment concerns. Listener interests in the maintenance of a relatively level electoral playing field were undercut by this decision. In other cases, the Court has rejected equality-based arguments on the grounds that leveling the electoral playing field is impermissible under the First Amendment.117Davis v. FEC, 554 U.S. 724 (2008) (striking down on First Amendment grounds a federal statute that raised contribution limits for non-self-financed candidates who were running against wealthy self-financed opponents); Ariz. Free Enter. Club’s Freedom Club PAC v. Bennett, 564 U.S. 721 (2011) (striking down on First Amendment grounds a state law that provided matching funds to publicly financed candidates in order to level the playing field by offsetting high levels of spending by privately funded opponents and independent committees).

Second, the Court held in Citizens United that corporations may spend unlimited sums from their general treasury funds on independent expenditures. According to the Court, independent expenditures do not give rise to the actuality or appearance of quid pro quo corruption. This reasoning paved the way for the emergence of Super PACs. In a subsequent case, SpeechNow.org v. FEC,118SpeechNow.org v. FEC, 599 F.3d 686 (D.C. Cir. 2010), cert. denied sub nom. Keating v. FEC, 562 U.S. 1003 (2010). a lower court struck down contribution limits on PACs that engaged exclusively in independent spending—entities that are now known as Super PACs. Super PACs can accept unlimited contributions from individuals, corporations, and labor unions to fund independent ads supporting or opposing federal candidates. Listener interests are arguably undermined by the phenomenon of Super PACs: these entities have changed the political landscape by flooding huge sums of money into elections.119See Michael S. Kang, The Year of the Super PAC, 81 Geo. Wash. L. Rev. 1902 (2013). Not only is coordination with candidates a reality,120See Richard Briffault, Super PACs, 96 Minn. L. Rev. 1644 (2012). For a contrary view, see Bradley A. Smith, Super PACs and the Role of “Coordination” in Campaign Finance Law, 49 Willamette L. Rev. 603, 635 (2013). but Super PACs lack accountability and transparency relative to political parties and candidates, thereby further decreasing the influence of individual listeners on the democratic process.

Some may argue, however, that the increases in corporate advertising, and hence in available information, are beneficial to listeners. Indeed, the Court majority in Citizens United took this position, stating that the “right of citizens to inquire, to hear, to speak, and to use information to reach consensus is a precondition to enlightened self-government and a necessary means to protect it.”121Citizens United, 558 U.S. at 339 (emphasis added). The Court also asserted that “it is inherent in the nature of the political process that voters must be free to obtain information from diverse sources in order to determine how to cast their votes.”122Id. at 341.

Third, Citizens United, and the deregulatory turn it ushered in, has broader implications for democracy. Money skews legislative priorities because it provides legislative access to large donors and lobbyists.123See Lawrence Lessig, Republic, Lost: How Money Corrupts Congress—and a Plan to Stop It 16 (2011); Christopher S. Elmendorf, Refining the Democracy Canon, 95 Cornell L. Rev. 1051, 1055 (2010) (arguing that “electoral systems should render elected bodies responsive to the interests and concerns of the normative electorate, i.e., the class of persons entitled to vote”). While access does not guarantee legislative outcomes, it is required to exert political influence. As such, officeholders are more responsive to the wishes of large donors than to those of other constituents.124See Nicholas O. Stephanopoulos, Aligning Election Law 240–46 (2024). Empirical studies have shown, for instance, that elected representatives are more responsive to the preferences of the affluent than to the preferences of low-income and middle-income individuals.125See, e.g., Larry M. Bartels, Unequal Democracy: The Political Economy of the New Gilded Age (2d ed. 2008); Martin Gilens, Affluence and Influence: Economic Inequality and Political Power in America (2012). It should be noted, however, that this research does not speak directly to the impact of campaign money on legislative decision-making. The emphasis on the donor class disproportionately impacts the participation and representation of people of color and ordinary citizens.126See Spencer Overton, The Donor Class: Campaign Finance, Democracy, and Participation, 153 U. Pa. L. Rev. 73 (2004). Empirical research has demonstrated that donors “are not only wealthy, they are almost all white.”127Abhay P. Aneja, Jacob M. Grumbach & Abby K. Wood, Financial Inclusion in Politics, 97 N.Y.U. L. Rev. 566, 569 (2022). This racial gap affects representation by shaping the electoral candidate pool and the behavior of legislators in office.128Id. at 630.

Finally, listener interests were at issue in the Court’s holding that disclosure and disclaimer requirements survived exacting scrutiny. The Court found that disclosure was “justified based on a governmental interest in ‘provid[ing] the electorate with information’ about the sources of election-related spending.”129Citizens United v. FEC, 558 U.S. 310, 368 (2010) (citing Buckley v. Valeo, 424 U.S. 1, 66 (1976)). The transparency resulting from disclosure “enables the electorate to make informed decisions and give proper weight to different speakers and messages.”130Id. at 371. Abby Wood argues that disclosure provides multiple informational benefits for voters.131See Abby K. Wood, Learning from Campaign Finance Information, 70 Emory L.J. 1091, 1102 (2021). By contrast, critics argue that disclosure rules violate privacy and raise the risk of retaliation. In a recent decision, Americans for Prosperity Foundation v. Bonta,132Ams. for Prosperity Found. v. Bonta, 594 U.S. 595 (2021). however, the Supreme Court made it easier for disclosure laws to be found unconstitutional.133Although Bonta is not a campaign finance case as it concerns disclosure by nonprofit organizations (and not candidates, parties, or PACs), it has clear implications for campaign finance disclosure laws. See Michael Kang, The Post-Trump Rightward Lurch in Election Law, 74 Stan. L. Rev. Online 55, 64–65 (2022); Abby K. Wood, Disclosure, in The Oxford Handbook of American Election Law 923, 924, 928–29 (Eugene D. Mazo ed., 2024).

C. Public Discourse and Social Media Platforms

In the campaign finance realm, listeners’ liberty interests in unrestricted access to the commercial speech market are protected. However, their equality interests in a relatively level electoral playing field are significantly undermined. A similar pattern is evident in the emerging law of social media platform regulation. Listeners’ liberty interests are largely protected on social media platforms given the sheer volume of information available, but their equality interests in a level electoral playing field, an open deliberative sphere, and access to competing viewpoints appear to be compromised in the online world. As described in Part I.C above, listeners’ epistemic and nondomination interests are likewise threatened as a result of the key features of digital exceptionalism.

In Moody v. NetChoice, LLC,134Moody v. NetChoice, LLC, 603 U.S. 707 (2024). the Court considered the constitutionality of state laws from Florida and Texas that restricted the ability of social media platforms to engage in content moderation. The laws required internet platforms to carry speech that might otherwise be demoted or removed under the platforms’ content moderation policies.135Id. at 713–22. The laws also required a platform to provide an individualized explanation to any user whose posts had been altered or removed.136Id. The states’ underlying concern was that the platforms were politically biased and were unfairly silencing the voices of conservative speakers.137Id. at 740–41; NetChoice, LLC v. Att’y Gen., Fla., 34 F.4th 1196, 1203 (11th Cir. 2022). NetChoice, an internet trade association, brought facial challenges to the laws. The U.S. Court of Appeals for the Eleventh Circuit upheld a preliminary injunction, finding that the Florida law likely violated the First Amendment.138NetChoice, LLC, 34 F.4th at 1227–28. The Court of Appeals for the Fifth Circuit, however, reversed a preliminary injunction of the Texas law, in part on the basis that the platforms’ content moderation activities did not amount to speech and hence that the law did not infringe the First Amendment.139NetChoice, LLC v. Paxton, 49 F.4th 439, 494 (5th Cir. 2022).

Writing for the Supreme Court in Moody, Justice Kagan vacated the lower court decisions and remanded the cases on the ground that the record was insufficient to sustain a facial challenge.140Moody, 603 U.S. at 713–18. While the Court was unanimous that NetChoice’s facial challenge had failed, Justice Kagan, speaking for a six-member majority,141Justice Kagan was joined by Chief Justice Roberts and Justices Sotomayor, Kavanaugh, and Barrett in full and by Justice Jackson in part. nonetheless proceeded to provide substantive guidance as to how the lower courts should conduct the facial analysis.

The Court majority’s central proposition was that the laws in question infringed the First Amendment rights of large social media platforms (specifically with respect to Facebook’s News Feed, YouTube’s homepage, and the like). Drawing an analogy to newspapers, the Court asserted that such platforms should be viewed as speakers with the right to compile and curate the speech of others. Justice Kagan relied on Miami Herald Publishing Company v. Tornillo,142Mia. Herald Publ’g Co. v. Tornillo, 418 U.S. 241, 258 (1974). in which the Court had struck down a right-of-reply law that required newspapers to print the reply of any political candidate who received critical coverage in their pages. In Tornillo, the Court held that the First Amendment protects newspaper editors in their “exercise of editorial control and judgment.”143Id. at 258. The Court majority drew upon additional cases—involving a private utility’s newsletter (Pacific Gas and Electric Co. v. Public Utilities Commission of California),144Pac. Gas & Elec. Co. v. Pub. Util. Comm’n of Cal., 475 U.S. 1 (1986). must-carry rules for cable operators (Turner Broadcasting System, Inc. v. FCC),145Turner Broad. Sys., Inc. v. FCC, 512 U.S. 622 (1994). In a later decision, the Court upheld the must-carry rules because they were necessary to protect local broadcasting. Turner Broad. Sys., Inc. v. FCC, 520 U.S. 180, 189–90 (1997). and regulations affecting parades (Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc.)146Hurley v. Irish-Am. Gay, Lesbian & Bisexual Grp. of Boston, Inc., 515 U.S. 557 (1995).—to find that the First Amendment prohibits the government from directing a private entity to include certain messages where that entity is curating the speech of others to create its own expressive product.147Moody v. NetChoice, LLC, 603 U.S. 707, 731–32, 742–43 (2024).

In the same way, the curating activity of social media platforms amounts to expressive activity protected by the First Amendment. Justice Kagan noted that Facebook’s News Feed and YouTube’s homepage use algorithms to create a personalized feed for each user.148Id. at 710. Their content moderation policies filter prohibited topics, such as pornography, hate speech, and certain categories of misinformation, and rank or label disfavored messages. In making these choices, social media platforms “produce their own distinctive compilations of expression.”149Id. at 716. The Moody majority thus appears to have resolved the debate as to whether platforms should be treated as publishers or as common carriers under the First Amendment (at least with respect to Facebook’s News Feed and the like).150See, e.g., Adam Candeub, Bargaining for Free Speech: Common Carriage, Network Neutrality, and Section 230, 22 Yale J.L. & Tech. 391 (2020); Eugene Volokh, Treating Social Media Platforms like Common Carriers?, 1 J. Free Speech L. 377 (2021); Ashutosh Bhagwat, Why Social Media Platforms Are Not Common Carriers, 2 J. Free Speech L. 127 (2022).

Consistent with the campaign finance context, the Court majority was adamant that the First Amendment prevents the state from interfering with “private actors’ speech to advance its own vision of ideological balance.”151Moody, 603 U.S. at 741. Government may not “decide what counts as the right balance of private expression,” and must instead “leave such judgments to speakers and their audiences.”152Id. at 719. This principle holds true even when there are credible concerns that certain private parties wield disproportionate expressive power in the marketplace of ideas. The majority noted that the regulations in Tornillo, PG&E, and Hurley “were thought to promote greater diversity of expression” and “counteract advantages some private parties possessed in controlling ‘enviable vehicle[s]’ for speech.”153Id. at 733 (citing Hurley v. Irish-Am. Gay, Lesbian & Bisexual Grp. of Boston, Inc., 515 U.S. 557, 577 (1995)). The Court also drew on its campaign finance jurisprudence, citing Buckley’s proposition that the government may not “restrict the speech of some elements of our society in order to enhance the relative voice of others.”154Id. at 742 (citing Buckley v. Valeo, 424 U.S. 1, 48–49 (1976)). Justice Kagan argued that “[h]owever imperfect the private marketplace of ideas, here was a worse proposal—the government itself deciding when speech was imbalanced, and then coercing speakers to provide more of some views or less of others.”155Id. at 733.

In a concurring judgment, Justice Alito (joined by Justices Thomas and Gorsuch) agreed with the majority that NetChoice’s facial challenges failed but took issue with the majority’s First Amendment analysis. Justice Alito argued that the states’ laws, at least in some of their applications, appeared to regulate passive carriers of third-party speech, which receive no protection under the First Amendment.156See id. at 788 (Alito, J., concurring). He criticized the majority for failing to address the states’ argument that Facebook and YouTube amount to common carriers,157See id. at 793–94 (Alito, J., concurring). as did Justice Thomas in a separate concurrence.158See id. at 751–52 (Thomas, J., concurring). Justice Alito also seemed more sympathetic to the states’ concerns, noting that the content moderation decisions of social media platforms can have “serious consequences,” including impairing “users’ ability to speak to, [and] learn from,” others; impairing a political candidate’s “efforts to reach constituents or voters”; compromising “the ability of voters to make a fully informed electoral choice”; and exerting “a substantial effect on popular views.”159Id. at 768 (Alito, J., concurring). He described the Florida law as an attempt “to prevent platforms from unfairly influencing elections or distorting public discourse,”160Id. at 770 (Alito, J., concurring). in a manner reminiscent of the very antidistortion arguments that were rejected by the conservative Justices in the campaign finance context.

III.  Possibilities for Countervailance

The Moody majority’s stance was consistent with a long line of precedent that has treated state control of speech with grave distrust. By “requir[ing] the platforms to carry and promote user speech that they would rather discard or downplay,”161Id. at 728. the states’ laws violated the central tenet that the government may not influence the content of speech. However, the Supreme Court’s interpretation of the First Amendment gives rise to a genuine conundrum: although this approach protects listeners from the power of the state, it does not protect the speech environment from the power of the platforms or from the deficits that ensue from digital exceptionalism. Indeed, state action that would effectively remedy the challenges of digital exceptionalism would very likely involve too great a governmental intrusion into expressive freedom. Hence, the gap between the ideal of a well-functioning speech environment and the challenges of digital exceptionalism cannot be resolved without dramatic changes to current First Amendment jurisprudence. As a result, there is a very narrow space for measures that might lessen the deleterious effects of digital exceptionalism without falling afoul of the First Amendment.

In light of this conundrum, this Part canvasses some possibilities for countervailance; that is, mechanisms that could lessen the deficits of the digital public sphere such that listeners’ interests are better protected, even if that protection does not rise to the level of establishing the kind of equality required for democratic self-governance. With respect to the challenge of disinformation in social media, I have argued elsewhere for a “multifaceted public-private approach that employs a suite of complementary tactics including: (1) disclosure and transparency laws; (2) content-based regulation and self-regulation; (3) norm-based strategies; and (4) civic education and media literacy efforts.”162Yasmin Dawood, Protecting Elections from Disinformation: A Multifaceted Public-Private Approach to Social Media and Democratic Speech, 16 Ohio State Tech. L.J. 639, 641 (2020). Using Canada as a case study, I suggested that the “combined and interactive effects of a multifaceted approach provide helpful protections against some of the harms of disinformation while still protecting the freedom of speech.”163Id. at 642.

A similar type of approach might be an appropriate way to think about countervailance. The idea is not that any one countervailing tactic will protect listener interests. Instead, the combined and interactive effects of a number of measures may serve as a countervailing force against the immense power of social media platforms. A caveat, however, is in order. These countervailing measures are imperfect, even deeply so, in terms of their ability to counter the challenges of digital exceptionalism. These measures will not on their own bring about a well-functioning speech environment; instead, they will bring such an environment closer to realization. Hence, the effect of this countervailance will no doubt be modest: listeners would still very much be at the mercy of the platforms. The objective would be to at least lessen the acuteness of the asymmetry and its resulting deficits.

Indeed, the majority opinion in Moody suggests that there are possibilities for regulation. Justice Kagan acknowledged, for instance, that “[i]n a better world, there would be fewer inequities in speech opportunities; and the government can take many steps to bring that world closer.”164Moody v. NetChoice, LLC, 603 U.S. 707, 741 (2024). Citing Turner I,165Turner Broad. Sys., Inc. v. FCC, 512 U.S. 622, 647 (1994) (protecting local broadcasting). Justice Kagan explicitly recognized that the “government can take varied measures, like enforcing competition laws, to protect th[e] access”166Moody, 603 U.S. at 732–33. to information from many sources. In recent years, the federal government has been pursuing antitrust cases against Google, Meta, and Amazon. The Court majority also noted that “[m]any possible interests relating to social media” can meet the First Amendment intermediate scrutiny test.167Id. at 711 (citing United States v. O’Brien, 391 U.S. 367, 377 (1968)). Under intermediate scrutiny, a law must advance a “substantial governmental interest” that is “unrelated to the suppression of free expression.” Id. The Court was pointed in its assertion that “nothing said here puts regulation of NetChoice’s members off-limits as to a whole array of subjects.”168Id. at 740.

In what follows, I briefly canvass an array of countervailing mechanisms, including disclosure and transparency rules; a narrow prohibition of false election speech; strategies to manage deepfakes; state-led incentive structures and norms, including mechanisms to provide listeners with increased choices and powers of their own; public jawboning; and civil society efforts. Each of these measures warrants a far more extensive treatment—particularly with respect to their advantages and disadvantages—than I am able to offer here. Although it is beyond the scope of this brief discussion to attempt anything more than a cursory analysis, I hope that it nonetheless provides some indication of the kinds of possibilities that merit attention.

A. Disclosure and Transparency

As described above, disclosure provides multiple informational benefits for voters, including not only the content of the disclosures but also their quality and the amount of information provided.169See Wood, supra note 131, at 1102. Disclosure and disclaimers with respect to online political advertising would help to facilitate counterspeech and deter disinformation.170See Abby K. Wood, Facilitating Accountability for Online Political Advertisements, 16 Ohio State Tech. L.J. 520, 523–24 (2020). Disclosure would also provide listeners with the context they need to assess political advertising. That being said, the disclosure regime in the campaign finance context is subject to various limitations, including structural barriers to connecting disclosures to voters and difficulties enforcing disclosure rules against violators.171See Jennifer A. Heerwig & Katherine Shaw, Through a Glass, Darkly: The Rhetoric and Reality of Campaign Finance Disclosure, 102 Geo. L.J. 1443, 1486, 1498 (2014). Disclosure rules have also been criticized for violating privacy, raising the risk of retaliation, chilling speech, and discouraging political participation.172See, e.g., Richard Briffault, Two Challenges for Campaign Finance Disclosure After Citizens United and Doe v. Reed, 19 Wm. & Mary Bill Rts. J. 983, 988–92, 1013–14 (2011).

Outside of the campaign finance context, online platforms could increase transparency about the content curation decisions they make. Transparency requirements are also an appropriate regulatory response to political disinformation.173See Wood, supra note 170, at 539–40. Compared to other regulatory responses, transparency laws have various benefits: they provide additional information to consumers, allow for public accountability, and nudge companies to make better decisions in anticipation of public disclosure.174See Eric Goldman, The Constitutionality of Mandating Editorial Transparency, 73 Hastings L.J. 1203, 1206 (2022). In his concurring opinion in Moody, Justice Alito remarked that the platforms are providing various disclosures under the European Union’s Digital Services Act, and that “complying with that law does not appear to have unduly burdened each platform’s speech in those countries.”175Moody v. NetChoice, LLC, 603 U.S. 707, 797–98 (2024) (Alito, J., concurring). Justice Alito further suggested that courts on remand should investigate whether such disclosures chilled the platforms’ speech.

B. False Election Speech

In general, falsehoods and lies are constitutionally protected speech.176See N.Y. Times Co. v. Sullivan, 376 U.S. 254, 279–83 (1964). As Sunstein observes, “[p]ublic officials should not be allowed to act as the truth police” because if they are empowered to “punish falsehoods, they will end up punishing dissent.”177Cass R. Sunstein, Liars: Falsehoods and Free Speech in an Age of Deception 3 (2021). There are, of course, a few narrow exceptions to the general rule that false statements are protected speech, such as regulations concerning defamation and false or misleading advertising.

The best response to false speech is not censorship but counterspeech. As the Supreme Court plurality noted in United States v. Alvarez, “[t]he remedy for speech that is false is speech that is true. This is the ordinary course in a free society.”178United States v. Alvarez, 567 U.S. 709, 727 (2012). Abby Wood observes that as a remedy for disinformation, counterspeech “fits well in the court’s ‘marketplace of ideas’ theory of the First Amendment.”179Wood, supra note 170, at 541. Lies stated by a candidate during an election campaign should likewise be addressed by the counterspeech of the candidate’s political opponent.180See Eugene Volokh, When Are Lies Constitutionally Protected?, 4 J. Free Speech L. 685, 704 (2024). That being said, counterspeech is often ineffective given the realities of echo chambers and the partisan divide in the news media.

Although restrictions on false speech are generally unconstitutional, a narrowly drawn prohibition of false election speech aimed at disenfranchising voters might survive constitutional scrutiny.181See Richard L. Hasen, Deep Fakes, Bots, and Siloed Justices: American Election Law in a “Post-Truth” World, 64 St. Louis U. L.J. 535, 548 (2020). Such a prohibition would target the mechanics of voting. Indeed, in Minnesota Voters Alliance v. Mansky, the Supreme Court indicated that false speech about when and how to vote could be banned by the government.182Minn. Voters All. v. Mansky, 585 U.S. 1 (2018). The government’s compelling interest in protecting the right to vote could serve as the justification for the law. An additional consideration is that false speech about the mechanics of voting would be difficult to redress with counterspeech, particularly in the few days leading up to an election.183See Volokh, supra note 180, at 707.

C.  Deepfakes and AI

Deepfake technology poses serious threats of harm to democracy, including by distorting public discourse, eroding citizens’ trust in news media, and manipulating elections.184See Chesney & Citron, supra note 72, at 1777. Several states have attempted to regulate deepfakes,185See Jack Langa, Deepfakes, Real Consequences: Crafting Legislation to Combat Threats Posed by Deepfakes, 101 B.U. L. Rev. 761, 786 (2021). including through legislation in California and Texas that prohibited the use of deepfakes within a designated pre-election period.186See Yinuo Geng, Comparing “Deepfake” Regulatory Regimes in the United States, the European Union, and China, 7 Geo. L. Tech. Rev. 157, 162–63 (2023). However, deepfakes are better regulated—by both public officials and private entities—through disclosure and counterspeech rather than by outright bans.187See Sunstein, supra note 177, at 117. Disclosure requirements could, for example, require that deepfakes be labeled as “altered.”188Hasen, supra note 7, at 27.

To be sure, there are real dangers to having the government determine what is true and false, which suggests that laws regulating deepfakes should be treated with caution. If platforms of their own accord institute deepfake bans, they should exempt parody, education, and art, and should provide accountability to users for any speech that is suppressed, including a meaningful opportunity to contest the decision.189See Chesney & Citron, supra note 72, at 1818. A growing challenge facing both public and private interventions, however, is that it will become increasingly difficult to detect deepfakes, particularly given the availability of generative AI.190See Communications Security Establishment, supra note 73, at 18. As the technology advances, the capacity to create deepfakes “will diffuse and democratize rapidly.”191Chesney & Citron, supra note 72, at 1762.

D. Incentives and Norms

The government can also use incentive structures to pressure platforms into making responsible choices about the democratic public sphere. For example, online platforms are protected from liability for hosting third-party content under Section 230 of the Communications Decency Act—a protection that arguably encourages platforms to moderate harmful speech and thereby perform a task that the government is not permitted to do.192See Erwin Chemerinsky & Alex Chemerinsky, The Golden Era of Free Speech, in Social Media, Freedom of Speech, and the Future of Our Democracy 92 (Lee C. Bollinger & Geoffrey R. Stone eds., 2022). Platforms may also be motivated to respond to harmful content out of a concern that the government could amend Section 230 if they fail to take action (although this eventuality is, of course, dependent on the priorities of the incumbent administration).193See Chesney & Citron, supra note 72, at 1813. The Digital Services Act promulgated by the European Union provides a more extensive regulatory model, one that is unlikely to be adopted in the U.S. It imposes several mandatory obligations on platforms, including transparency, notice-and-takedown systems, internal complaint-handling systems, deplatforming, and independent auditing.194Council Regulation 2022/2065, arts. 14, 16, 20, 23, 39, 2022 O.J. (L 277) 1 (EU).

The government could also create incentives for platforms to provide users with greater control over the content they receive. Many platforms already enable users to block or mute content they do not wish to see. However, they could take additional steps to enable users to actively moderate their own feeds.195See Bambauer, Rollins & Yesue, supra note 84, at 1069. In addition, the government could impose data interoperability requirements, thereby enabling users to easily move their data across platforms.196See Khan & Pozen, supra note 89, at 538–39. Platforms that violate users’ rights would then lose users to rival platforms with healthier environments.197See id. To be sure, greater user control could also lead to greater epistemic fragmentation if users choose to avoid competing viewpoints.

Public-regarding behavior could be indirectly encouraged by such mechanisms as digital charters.198See Dawood, supra note 162, at 663–65. These public-private norm-based initiatives “identify standards, best practices, and objectives to govern the digital world.”199Id. at 663. For example, the Declaration of Electoral Integrity, an initiative between the Canadian government and the major platforms, endorsed the values of integrity, transparency, and authenticity as the pillars of a healthy political discourse.200See id. at 663–64. Another initiative, the Digital Charter, identified ten principles, including universal access; safety and security; control and consent; transparency, portability, and interoperability; a level playing field; and strong enforcement and real accountability.201See id. at 665. Although these norm-based approaches were not legally binding, they identified democracy-enhancing norms that could serve as a “standard by which to judge actions taken or not taken.”202Id.

E. Public Jawboning

Can public jawboning play a salutary role as a countervailance mechanism? A recent Supreme Court decision, Murthy v. Missouri,203Murthy v. Missouri, 603 U.S. 43 (2024). involves what is colloquially referred to as “jawboning,” which takes place when the government pressures private actors to take certain actions without directly using its coercive power to do so. In Murthy, the record revealed that, over the last few years, White House and other federal officials had routinely communicated with social media platforms about misinformation related to COVID-19 vaccines and electoral processes. Some of these communications were public: government officials, in response to vaccine misinformation on the platforms, opined that reforms to antitrust laws and to Section 230 of the Communications Decency Act may be in order.204See id. at 51–52. Other communications were private: officials in the White House, CDC, FBI, and CISA “regularly spoke” with platforms about misinformation over several years.205See id. at 51. The District Court for the Western District of Louisiana had issued a preliminary injunction, which was affirmed by the Fifth Circuit, on the basis that government officials had “coerced or significantly encouraged” the platforms to censor disfavored speech in violation of the First Amendment.206Missouri v. Biden, 83 F.4th 350, 392 (5th Cir. 2023).

In a 6-3 majority opinion by Justice Barrett, the Supreme Court overturned the Fifth Circuit’s decision on standing grounds.207See Murthy, 603 U.S. at 58–62. Justice Barrett also rejected the plaintiffs’ “right to listen” theory—which asserted that the First Amendment protects the interest of social media users to engage with the content of other social media users—on the grounds that it provided a “startlingly broad” right to users to “sue over someone else’s censorship.” Id. at 74–75. Dissenting in Murthy, Justice Alito (joined by Justices Thomas and Gorsuch) asserted that the issue was whether the government engaged in “permissible persuasion” or “unconstitutional coercion.”208Id. at 98–100 (Alito, J., dissenting). While the government may inform and persuade, it is barred under the First Amendment from coercing a third party into suppressing another person’s speech.209See id. (Alito, J., dissenting) (citing Bantam Books, Inc. v. Sullivan, 372 U.S. 58, 67 (1963)). Drawing on the Court’s approach in National Rifle Association v. Vullo,210Nat’l Rifle Ass’n v. Vullo, 602 U.S. 175, 189–90 (2024). Justice Alito analyzed three factors—the authority of the government officials, the nature of the statements made by those officials, and the reactions of the third party alleged to have been coerced—to find that the government had engaged in coercion.211See Murthy, 603 U.S. at 100–07 (Alito, J., dissenting).

Ashutosh Bhagwat draws a helpful distinction between public jawboning and private jawboning: while public jawboning should rarely be considered coercive, in large part because government actors routinely hector corporations and often do so as part of their official responsibilities, private jawboning can sometimes amount to unconstitutional coercion.212See Ashutosh Bhagwat, The Bully Pulpit or Just Plain Bully: The Uses and Perils of Jawboning, 22 First Amend. L. Rev. 292, 306 (2024). However, “[d]etermining when private jawboning crosses the constitutional line . . . raises extremely difficult questions,” which require courts to engage in a highly contextual analysis.213Id. at 310. Justice Alito contended, for instance, that while the coercion in Murthy was “more subtle than the ham-handed censorship found to be unconstitutional in Vullo . . . it was no less coercive.”214Murthy, 603 U.S. at 80 (Alito, J., dissenting). The danger is that if “a coercive campaign is carried out with enough sophistication, it may get by.”215Id. Ilya Somin catalogues the various ways in which government agencies post-Murthy can ensure that their pressure tactics avoid judicial scrutiny.216See Ilya Somin, The Supreme Court’s Dangerous Standing Ruling in Murthy v. Missouri, Reason.com: The Volokh Conspiracy (June 26, 2024, 5:57 PM), https://reason.com/volokh/2024/06/26/the-supreme-courts-dangerous-standing-ruling-in-murthy-v-missouri [https://perma.cc/64XB-E7FV].

Despite these legitimate concerns, there may be a role for public, but not private, jawboning to serve as a countervailing force against the power of the tech giants. Helen Norton’s “transparency principle”—namely, “an insistence that the governmental source of a message be transparent to the public”—could serve as a guide.217See Helen Norton, The Government’s Speech and the Constitution 30 (2019). As Norton observes, the “government’s speech is most valuable and least dangerous to the public when its governmental source is apparent: only then is the government’s speech open to the public’s meaningful credibility and accountability checks.”218Id. In an August 2024 letter to Congress, Mark Zuckerberg was unequivocal that Meta would no longer compromise its content standards in response to government pressure.219See Letter from Mark Zuckerberg, Founder, Chairman & CEO, Meta Platforms, Inc., to the Hon. Jim Jordan, Chairman, Comm. on the Judiciary, United States House of Reps. (Aug. 26, 2024). Indeed, Meta later announced the adoption of a new content moderation protocol that, among other things, removed restrictions on topics such as immigration and gender identity. If other platforms follow Meta’s lead, the protection (or not) of listener interests would be even more subject to the platforms’ decisions. Provided that the government’s use of public jawboning does not violate Vullo’s standards for coercion, it may prove to be a useful measure to protect users from the overwhelming power of the platforms.

F. Civil Society and the State

Civil society can also play a countervailing role. Truth-finding institutions, such as journalists and political activists, can combat false statements in an iterative process akin to the scientific method.220See Volokh, supra note 180, at 696–98. Collaborations between platforms and outside researchers could also lead to better responses to online misinformation.221See Ceren Budak, Brendan Nyhan, David M. Rothschild, Emily Thorson & Duncan J. Watts, Misunderstanding the Harms of Online Misinformation, 630 Nature 45, 45 (2024). More generally, the concept of “knowledge institutions,” as developed by Vicki Jackson, captures the indispensable contribution of public and private entities, including universities, government agencies, libraries, and the press, to the collection and dissemination of knowledge needed for democratic self-governance.222See Vicki C. Jackson, Knowledge Institutions in Constitutional Democracies: Preliminary Reflections, 7 Canadian J. Compar. & Contemp. L. 156 (2021); see also Heidi Kitrosser, Protecting Public Knowledge Producers, 4 J. Free Speech L. 473 (2023).

The state can bolster the speech environment by supporting knowledge institutions. Over the last several decades, the federal government has fostered the public sphere by enacting legislation to support newspapers, establishing a system of broadcast licenses, regulating cable, and implementing antitrust laws.223See Martha Minow, Saving the News: Why the Constitution Calls for Government Action to Preserve Freedom of Speech 42–57 (2021). With respect to the threats currently facing private news organizations, Martha Minow argues that “[n]othing in the Constitution forecloses government action to regulate concentrated economic power . . . or strengthen public and private investments in the news functions presupposed by democratic governance.”224Martha Minow, Does the First Amendment Forbid, Permit, or Require Government Support of News Industries?, in Constitutionalism and a Right to Effective Government? 86 (Vicki C. Jackson & Yasmin Dawood eds., 2022). Minow further suggests that the “First Amendment’s presumption of an existing press may even support an affirmative obligation on the government to undertake reforms and regulations to ensure the viability of a news ecosystem.”225Minow, supra note 223, at 98. Emily Bazelon proposes that federal and state governments could create publicly funded TV or radio, in addition to funding nonprofit journalism.226See Emily Bazelon, The Disinformation Dilemma, in Social Media, Freedom of Speech, and the Future of Our Democracy 41, 49 (Lee C. Bollinger & Geoffrey R. Stone eds., 2022). To be sure, the independence of news organizations must be protected by various mechanisms so that the government cannot control the media it funds and supports.227See Minow, supra note 223, at 138–42.

Finally, community participation in regulating online platforms may also improve the speech environment. For example, Reddit is internally governed by volunteer moderators, who establish and enforce rules about what conduct is permitted or prohibited in each subcommunity.228See Ethan Zuckerman, The Case for Digital Public Infrastructure, Knight First Amend. Inst. at Colum. Univ. (Jan. 17, 2020), https://knightcolumbia.org/content/the-case-for-digital-public-infrastructure [https://perma.cc/F5EX-XTKV]. These moderators often put in “dozens of hours a week to ensure that content meets community standards and that participants understand why their content was permitted or banned.”229Id. Although Reddit is by no means perfect, it may be an example of what Aziz Huq has described as an “island of algorithmic integrity”; that is, a model of a well-functioning social media platform that acts in public-regarding ways and may thereby shift norms and expectations.230See Huq, supra note 70, at 1301–03.

Conclusion

This Article has offered a normative account of a well-functioning speech environment for speakers and listeners, under which individuals engage in three self-governing activities—informed voting; discussion and deliberation; and meaningful participation—while having their liberty, equality, epistemic, and nondomination interests satisfied. It also argued for digital exceptionalism—the idea that the expressive realm on social media platforms suffers from certain unique deficits that not only undermine the speech environment but that also pose challenges for regulation. The Article then turned to the law of public discourse, focusing on campaign finance regulation and the Moody decision, to find that First Amendment jurisprudence provides few conceptual resources to protect listeners’ equality, epistemic, and nondomination interests. Finally, the Article argued for countervailance, which is the idea that certain mechanisms could lessen the deficits of the online realm such that listener interests are better protected.

To be sure, there continues to be great uncertainty about how digital technologies will evolve over time and what new difficulties they will pose. The rapidly changing landscape of social media technology poses genuine challenges for regulation. While the Moody majority insisted that free speech principles do not change despite the challenges of applying them to evolving technology, the concurring Justices expressed reservations about how evolving algorithmic and AI technology would be covered by the First Amendment. For example, Justice Barrett queried whether there was a difference between an algorithm that does the curation on its own and an algorithm that is directed by humans.231Moody v. NetChoice, LLC, 603 U.S. 707, 745–48 (2024) (Barrett, J., concurring). Justice Alito noted that the vast majority of the content moderation on the platforms is performed by algorithms, and now that AI algorithms are being used, the platforms may not even know why a particular content moderation decision was reached.232See id. at 793–95 (Alito, J., concurring). He asked: “Are such decisions equally expressive as the decisions made by humans? Should we at least think about this?”233Id. (Alito, J., concurring); see also Toni M. Massaro & Helen Norton, Siri-ously? Free Speech Rights and Artificial Intelligence, 110 Nw. U. L. Rev. 1169, 1174 (2016) (arguing that AI speakers should be covered by the First Amendment due to the value of their speech to humans and the risk of government suppression). It is fair to say that much work remains to be done when considering how best to protect and promote a well-functioning political speech environment.

98 S. Cal. L. Rev. 1193

* Professor of Law and Political Science, and Canada Research Chair in Democracy, Constitutionalism, and Electoral Law, Faculty of Law, University of Toronto; J.D. Columbia Law School, Ph.D. (Political Science) University of Chicago. I am very grateful to Ashutosh Bhagwat, Daniel Browning, James Grimmelmann, Aziz Huq, Michael Kang, Heidi Kitrosser, Erin Miller, Helen Norton, Eugene Volokh, Abby Wood, and the participants at the Listener Interests Symposium at USC Gould School of Law and the Public Law Colloquium at Northwestern Pritzker School of Law for very helpful comments and conversations. Special thanks to David Niddam-Dent for excellent research assistance and to the editors of the Southern California Law Review for their valuable editorial work.

Listeners’ Choices Online

The most useful way to think about online speech intermediaries is structurally: a platform’s First Amendment treatment should depend on the patterns of speaker-listener connections that it enables. For any given type of platform, the ideal regulatory regime is the one that gives listeners the most effective control over the speech that they receive.

In particular, we should distinguish four functions that intermediaries can play: (1) broadcast, such as radio and television, transmits speech from one speaker to a large and undifferentiated group of listeners, who receive the speech automatically; (2) delivery, such as telephone, email, and broadband Internet, transmits speech from a single speaker to a single listener of the speaker’s choosing; (3) hosting, such as YouTube and Medium, allows an individual speaker to make their speech available to any listeners who seek it out; and (4) selection, such as search engines and feed recommendation algorithms, gives listeners suggestions about speech they might want to receive. Broadcast is relevant mostly as a (poor) historical analogue, but delivery, hosting, and selection are all fundamental on the Internet.

On the one hand, delivery and hosting intermediaries can sometimes be subject to access rules designed to give speakers the ability to use their platforms to reach listeners because doing so gives listeners more choices among speech. On the other hand, access rules are somewhere between counterproductive and nonsensical when applied to selection intermediaries because listeners rely on them precisely to make distinctions among competing speakers. Because speakers can use delivery media to target unwilling listeners, they can be subject to filtering rules designed to allow listeners to avoid unwanted speech. Hosting media, however, mostly do not face the same problem, because listeners are already able to decide which content to request. Selection media, for their part, are what enable listeners to make these filtering decisions about speech for themselves.

Introduction

This is an essay about listeners, the Internet, and the First Amendment. In it, I will argue that the most useful way to think about online speech intermediaries is structurally: a platform’s First Amendment treatment should depend on the patterns of speaker-listener connections that it enables. For any given type of platform, the ideal First Amendment regime is the one that gives listeners the most effective control over the speech that they receive.

This essay does not stand alone. In a previous article, Listeners’ Choices, I outlined a two-part theory of the First Amendment based on recognizing listeners’ choices about what speech to hear.1James Grimmelmann, Listeners’ Choices, 90 U. Colo. L. Rev. 365, 366–67 (2019). First, any free-speech principle that does not take listeners’ choices seriously is self-defeating. In a world where speakers pervasively compete for listeners’ attention—which is to say, in our world—listeners’ choices provide the only normatively appealing way to resolve the inevitable conflicts among speakers. Second, existing First Amendment doctrine regularly defers to listeners’ choices. Many cases that are seemingly about speakers’ rights snap into focus as soon as we pay attention to which listeners are willing and which listeners are not. Listeners’ choices among speakers are typically content- and viewpoint-based, but a legal rule that defers to those choices can be content-neutral.

The theory I presented in Listeners’ Choices was skeletal. Here, my purpose is to flesh out the listeners’-choice principle so that it does useful doctrinal and policy work in our modern media environment. I will analyze the role of listeners’ choices in four structurally different functions that media intermediaries can carry out:

  • Intermediaries carrying out a broadcast function, such as radio and television, connect one speaker to a large and undifferentiated group of listeners who receive the speech automatically;
  • Intermediaries carrying out a delivery function, such as telephone, email, and broadband Internet, transmit speech from a single speaker to a single listener of the speaker’s choosing;
  • Intermediaries carrying out a hosting function, such as YouTube and Medium, allow an individual speaker to make their speech available to any listeners who seek it out; and
  • Intermediaries carrying out a selection function, including search engines and feed recommendation algorithms, give listeners suggestions about speech they might want to receive.

Notice that I refer to distinct “functions,” because media and intermediaries are not monolithic. There is no set of First Amendment rules for “the Internet,” nor can there be. The Internet is too vast and variegated for that to work. Distinguishing among broadcast, delivery, hosting, and selection helps us see that these functions can be disaggregated. On the Internet, we are accustomed to thinking of hosting and selection as intertwined; the term “content moderation” encompasses them both. But they do not necessarily need to be: YouTube the hosting platform and YouTube the search engine are different and could be subjected to different legal rules.

The original sin of broadcast was that it inextricably combined selection and delivery into a single take-it-or-leave-it package, in a way that was uniquely disempowering to listeners. Bandwidth limitations mean that broadcast media present listeners with a limited array of speakers to choose among. And the fact that listeners receive broadcast speech as a group, rather than individually, means that it is hard to protect unwilling listeners from that speech without blocking willing listeners’ ability to receive it. The result is a body of doctrine and theory that purports to act in listeners’ interest but is primarily concerned with allocating scarce bandwidth among competing speakers.

In contrast, listeners can be far more empowered on the Internet than they were offline. Delivery, hosting, and selection are all more listener-friendly than broadcast. The individually targeted nature of delivery media means that media intermediaries can block unwanted communications to unwilling listeners without offending core free-speech values. The pinched kinds of choices that broadcast media needed to make among competing speakers were a poor proxy for the much broader kinds of choices that listeners can make for themselves on hosting media. And the recommendations that selection media provide to help listeners choose among competing speakers are fundamentally oriented towards facilitating listeners’ autonomy, not speakers’.

Turning to the specifics of how these different kinds of media should be regulated, there are two structurally different kinds of legal rules that can apply to them:

  • Access rules ensure that speakers are able to use a medium, even when an intermediary would prefer to exclude them.2 Access rules for listeners raise harder issues because speakers can have associational, privacy, and economic interests in restricting the audience for a communication to exclude willing listeners. An activist organizer’s mailing list might exclude political opponents; a copyright owner’s catalog might have a paywall with different prices for hobbyist and professional subscribers. A communications platform’s access policies for listeners are often inextricably bound up with speakers’ preferences about their audiences. These are subtle questions, and I do not discuss them in this essay.
  • Filtering rules ensure that listeners are able to avoid unwanted speech, even when speakers would prefer to subject them to it. Sometimes they empower an intermediary to reject that speech on behalf of listeners (i.e., they are the opposite of access rules), but sometimes they require speakers and intermediaries to structure their communications in a way that enables listeners themselves to reject the speech.

From a speaker’s point of view, access rules look like they promote free speech and filtering rules look like they inhibit it. But from a listener’s point of view, both types of rules can promote the values of the First Amendment.

For access rules, the key distinction is between rival and non-rival media. Delivery and hosting can be non-rival on the Internet, where bandwidth is immense and can be expanded as needed. Speakers who use delivery and hosting media mostly do not interfere with each other, and so an intermediary can treat most speakers identically. But selection is fundamentally rival: listeners rely on these intermediaries to help them distinguish among speakers, and so selection intermediaries must favor some speakers and disfavor others. As a result, delivery and hosting intermediaries can often be subjected to access rules requiring even-handed treatment of all interested speakers, but the First Amendment mostly forbids imposing access rules on selection intermediaries.

For filtering rules, the key distinction is that delivery situates the relevant choices among speaker-listener pairings upstream (closer to speakers) while hosting situates those choices downstream (closer to listeners). When listeners can make their own choices among speech (as on hosting intermediaries), filtering rules—whether imposed by intermediaries or by the legal system—have the effect of thwarting those choices. However, when speakers make those choices in the first instance (as on delivery intermediaries), sometimes filtering rules are necessary to empower listeners to make choices for themselves. Selection media, for their part, provide listeners the information they need to choose which content on hosting media to request, and which content on delivery media to receive.

In part, this essay is a love letter to selection media, written on behalf of listeners. Selection media play an utterly necessary role in an environment of extreme informational abundance, and they can be more responsive to listeners’ informational choices and needs than any other form of media.3This is a generalization of a point I have been making for decades about search engines. See generally James Grimmelmann, Don’t Censor Search, 117 Yale L.J. Pocket Part 48 (2007); James Grimmelmann, The Structure of Search Engine Law, 93 Iowa L. Rev. 1 (2007); James Grimmelmann, Information Policy for the Library of Babel, 3 J. Bus. & Tech. L. 29 (2008); James Grimmelmann, The Google Dilemma, 53 N.Y. L. Sch. L. Rev. 939 (2009); James Grimmelmann, Speech Engines, 98 Minn. L. Rev. 868 (2014) [hereinafter Grimmelmann, Speech Engines]. Access rules are often nonsensical when applied to them, and filtering rules must be applied with care, lest they trample on the filtering work that selection media are already doing.4See James Grimmelmann, Some Skepticism About Search Neutrality, in The Next Digital Decade: Essays on the Future of the Internet 435, 439–42 (Berin Szoka & Adam Marcus eds., 2010).

But the fact that selection media are often listener-friendly does not mean that they always are. I have argued previously that search engines can be regulated when they behave disloyally or dishonestly towards their users,5Grimmelmann, Speech Engines, supra note 3. and the same goes for selection media. More generally, I will argue here that structural regulation of selection media is often appropriate. For example, an intermediary could be forced to disaggregate its hosting and selection functions; the former can—and sometimes should—be regulated in ways that the latter cannot. Indeed, an intermediary might need to open its delivery or hosting platform up to competing selection intermediaries (so-called “middleware”) to give listeners broader and freer choice over the speech they receive.

Finally, a note on scope. This is an essay about intermediaries, not an essay about all forms of media. I am focusing on intermediaries’ roles in carrying third-party speech from speakers to listeners, not on their own first-party speech that they want to share with listeners. Different structural and First Amendment considerations apply to first-party speech. I will argue in places that solicitude for intermediaries’ speech interests should not prevent us from regulating them in ways that promote listeners’ speech interests. But this is not primarily an essay about intermediaries’ speech itself.6See generally Stuart Minor Benjamin, Transmitting, Editing, and Communicating: Determining What ‘The Freedom of Speech’ Encompasses, 60 Duke L.J. 1673 (2011) (discussing whether and when the First Amendment encompasses transmission of speech by intermediaries).

This essay has four substantive Parts. Part I provides a short review of the argument from Listeners’ Choices and can be skipped if you are familiar with it. Part II describes the structural differences among broadcast, delivery, hosting, and selection media, and explains how they relate to each other. Part III considers how access rules play out in these four types of media, and Part IV does the same for filtering rules. As we will see, the appropriate legal treatment of these different kinds of intermediaries and rules falls out naturally. First Amendment doctrine becomes radically simpler when we carve up media at their joints.

I. Listeners’ Choices: A Review

The starting point of Listeners’ Choices is that we can think about speech as a matching problem: in an environment where billions of people speak and billions of people listen, who speaks to whom? This way of thinking about speech is mostly content-neutral: it focuses on the network structure of connections between speakers and listeners, rather than on the content of the speech they exchange over those connections. I called an actual arrangement of speakers and listeners a “matching” to emphasize its mutuality and the fact that it is a collective property of speakers and listeners overall.

The possible structures of speaker-listener matching are shaped by two things: choices and scarcities. Regarding the former, speakers make choices about what to say and how, and listeners make choices about what to listen to and how. Not all their choices can be simultaneously honored, but the heart of this way of thinking about free speech is that speakers and listeners make choices among each other, and that these choices are in large part constitutive of the values that free expression serves. They are subjective, individual, and profoundly content- and viewpoint-based. Some conflicts among speakers’ and listeners’ choices arise simply from their diverging values and goals; I called these conflicts “internal” limits on possible speaker-listener matchings.

As for scarcities, another class of limits on speaker-listener matchings are what I called “structural” limits: some combinations of who speaks to whom are physically or practically impossible. In particular, three types of scarcity shape the patterns of speech everywhere and always: bandwidth, attention, and ignorance. Bandwidth limits, such as the limited range of the human voice or the limited number of very high frequency (“VHF”) television channels, restrict the ability of speakers’ messages even to reach listeners. Attention limits are hard-wired into human anatomy and psychology. Although speech consists of information, which is potentially infinitely replicable, each person can only pay attention to one or a few speakers at a time. Finally, ignorance about the content of speech can lead people to make choices about what to listen to—choices that they would not have made if they were fully aware of what the speech would be.

The upshot of these scarcities is that listeners’ choices among competing speakers provide a compelling way to decide among competing speech claims. Listeners’ choices are valuable in themselves because listening is an indispensable part of any communication, and listeners’ choices should be elevated over speakers’ choices because of the scarcity of attention; the capacity to listen is limited in a way that the capacity to speak is not. In order to tune in to a preferred speaker, a listener must be able to tune out other speakers, and a speech environment in which listeners cannot do so is one in which effective speech is impossible. From this general point, a few specific observations follow.

First, in one-to-many cases of conflicts between willing and unwilling listeners, willing listeners generally prevail. The “Fuck the Draft” jacket in Cohen v. California7Cohen v. California, 403 U.S. 15, 16 (1971). and the drive-in movie screen in Erznoznik v. City of Jacksonville8Erznoznik v. City of Jacksonville, 422 U.S. 205, 206 (1975). were seen by both willing and unwilling viewers. To censor these forms of expression at the insistence of the unwilling ones would deprive the willing ones of speech they were willing (and in Erznoznik, affirmatively choosing) to see. The unwilling ones are expected to avert their eyes or change the channel. This looks like a preference for speakers’ right of expression as against unwilling listeners, but really it is a preference for willing listeners over unwilling ones.

Second, in true one-to-one cases where a speaker addresses a single unwilling listener, the analysis is far less speaker-friendly. The Supreme Court has affirmed homeowners’ rights to literally and figuratively shut their doors to unwanted solicitors9Martin v. City of Struthers, 319 U.S. 141, 150 (1943). and mail.10Rowan v. U.S. Post Off. Dep’t, 397 U.S. 728, 736–37 (1970). A general ordinance prohibiting Jehovah’s Witnesses from going door-to-door11See Martin, 319 U.S. at 142. or prohibiting the mailing of communist literature would be unconstitutional,12Lamont v. Postmaster Gen., 381 U.S. 301, 307 (1965). because of the presence of potentially willing listeners among the audience. That concern drops away when the speaker can stop attempting to communicate with individual listeners who specifically object while still reaching those who do not. Listeners can choose not to pay attention, and speakers who attempt to overcome listeners’ defenses (for example, with amplified sound trucks) can be barred from doing so.13Kovacs v. Cooper, 336 U.S. 77, 89 (1949). The caselaw here is rich and context-sensitive; a rule that listeners always win would be as wrong as a rule that speakers always win. Instead, the cases grapple with the interests of speakers, willing listeners, unwilling listeners, and—importantly—undecided listeners, who cannot decide whether they want to hear what the speaker has to say unless the speaker at least has an initial chance to ask.14See, e.g., McCullen v. Coakley, 573 U.S. 464, 489 (2014) (holding that a state law establishing thirty-five-foot buffer zones around entrances to abortion facilities interfered with the right of anti-abortion advocates to engage in “consensual conversations” with people seeking abortions (emphasis added)).

Third, the general problem of sorting listeners into the willing and the unwilling involves what I called “separation costs”: the effort that willing listeners must take to hear, or that unwilling listeners must take to avoid hearing, or that speakers must take to distinguish between the two, or some combination of the above. The scale and distribution of separation costs can vary greatly based on the technological environment. I argued that the legal system, in a very rough way, seeks out the least-cost avoider of speech conflicts: when a party can take a simple and inexpensive action to resolve the conflict, the law often expects them to do so.

II. Four Media Functions

This Part reviews the structural differences among the four media functions: broadcast, delivery, hosting, and selection. Along with some examples of each type, I discuss the ways in which each of them is one-to-one or one-to-many.15Eugene Volokh, One-to-One Speech vs. One-to-Many Speech, Criminal Harassment Laws, and “Cyberstalking”, 107 Nw. U. L. Rev. 731 (2013). I defer discussion of scarcity and bandwidth constraints to the next Part, as these issues bear heavily on access rules.

A. Broadcast

Start with the wired and wireless mass media that dominated most of the twentieth century: radio, broadcast television, satellite television, and cable. These mass media are characterized by their extensive reach: they enable a single speaker to reach a large potential audience of listeners. They are, in Eugene Volokh’s taxonomy, one-to-many media.

To be clear, broadcast media collectively enable numerous speakers to reach large audiences; there are many TV stations, and each station broadcasts many different programs. When I say that broadcast is one-to-many, I mean that each individual speaker reaches a large and undifferentiated audience. Broadcast aggregates numerous such one-to-many communications, dividing them up by time (for example, WNBC-TV broadcasts the news at 7:00 and Access Hollywood at 7:30) and by intermediary (WNBC-TV and WABC-TV both broadcast their respective news programs at 7:00). The structural point is that WNBC-TV can only broadcast a single program at a time—such as Access Hollywood at 7:30—and when it does, it enables a one-to-many communication from Access Hollywood to its viewers.

B. Delivery

Next, consider delivery media like mail, telegraph, telephone, email, direct messaging, and Internet service. They all transmit speech from an individual speaker to an individual listener selected by the speaker, making them one-to-one media.16Id. at 742. More precisely, they are one-to-one with respect to individual communications from speaker to listener. In aggregate, they are many-to-many. The postal service delivers millions of letters, but each letter goes from a single sender to a single recipient. Delivery is therefore a kind of disaggregated broadcast: instead of sending a joint communication to all listeners at once, individual communications are sent to individual listeners at the speaker’s request.

Most delivery media use some form of medium-specific addresses for a sender to specify their chosen recipient. A letter goes to a specific postal address; a telephone call to a specific telephone number; an email to a specific email address; an Internet Protocol (“IP”) datagram to a specific IP address; and so on. A speaker can choose to send the same message to many listeners by sending many individual communications to different addresses. Conversely, by having an address, a listener makes themselves reachable by speakers and then can receive a mostly undifferentiated stream of communications from any speaker who wants to reach them.
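
To make the addressing structure concrete, here is a minimal sketch (in Python, with hypothetical addresses and a hypothetical local mail server) of how sending the same message to many listeners decomposes into separate one-to-one deliveries, each directed to a single address:

```python
import smtplib
from email.message import EmailMessage

# Hypothetical recipients chosen by the speaker; each address identifies
# exactly one listener on the delivery network.
RECIPIENTS = ["alice@example.com", "bob@example.com", "carol@example.com"]

def send_to_each(server_host: str = "localhost") -> None:
    with smtplib.SMTP(server_host) as smtp:
        for addr in RECIPIENTS:
            msg = EmailMessage()
            msg["From"] = "speaker@example.com"
            msg["To"] = addr  # one recipient per communication
            msg["Subject"] = "Same speech, separate deliveries"
            msg.set_content("An identical copy goes to each listener, but "
                            "each copy is its own one-to-one communication.")
            # Each call hands the carrier one discrete sender-to-recipient
            # transaction.
            smtp.send_message(msg)

if __name__ == "__main__":
    send_to_each()
```

The aggregate traffic looks many-to-many, but every unit of carriage the intermediary handles is a single addressed transaction, which is part of what makes content-neutral carriage administrable.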

Some delivery media—such as telephone and direct messaging—are interactive, but it still makes sense to talk of “the speaker” and “the listener.” First, at the beginning of a conversation, one user is trying to establish a connection with another: the phone rings, or an email appears in the inbox. The user trying to establish the connection is the one who chose to initiate the communication, chose when to do it, and most importantly, chose with whom to establish it. They are a speaker, and if the other user agrees, they receive the message and become a listener. Second, what we think of as “interactive” media are really bidirectional media. A telephone connection is “full duplex”: it requires two speech channels, one in each direction. The same is true for a Zoom call, an email conversation, or anything else that travels on the Internet. These interactive exchanges are made up of individual IP datagrams, each traveling from a sender to a recipient identified by IP address. Third, all delivery media are interactive on a long-enough time scale. Pen pals exchange letters, trading off the roles of speaker and listener. Each letter is still a discrete one-to-one communication carried by the postal service; mail is still a delivery medium.

C. Hosting

A third category of Internet media consists of hosting platforms. Third-party speakers send content to these intermediaries, which make the content available to listeners on request. For example, an artist uploads illustrations from her portfolio of work to a Squarespace site and individual fans visit the site to view the illustrations.

Other examples of hosting intermediaries include (1) bulk storage like Google Drive and Amazon S3; (2) content-delivery networks (“CDNs”) like Akamai and Cloudflare; (3) hosting functions of social-media platforms like YouTube and X; and (4) web-based self-publishing features of platforms like Medium and Substack. Structurally, online marketplaces are also hosting services as long as they (a) sell digital content instead of physical goods or services, and (b) feature speaker-submitted third-party content. Examples include app stores run by Apple and Google, e-book stores run by Barnes & Noble and Amazon, video-game stores like Steam and the Epic Games Store, and even Spotify as a distributor of podcasts and music.

Hosting is the mirror image of delivery. Both are one-to-one media; each individual communication goes from a single speaker to a single listener. The difference is that in delivery media, the speaker selects which listeners to speak to; in hosting media, the listener selects which speakers to listen to. Although hosting is usually thought of as a service offered by platforms to speakers, the listener’s request plays a crucial role in the process. Hosting is also a kind of disaggregated broadcast: instead of sending a joint communication to all listeners at once, individual communications are sent to individual listeners, this time at the listener’s request.

Hosting and delivery functions are often used in conjunction. A website host, for example, responds to a user’s request for a particular URL by sending a response with the contents of the page at that address. The request and the response are both made using delivery media—the Internet service providers (“ISPs”) along the delivery path between the host and the user. (So, for that matter, is the transmission from the speaker to the website host with the content the speaker wants to make available, and so is the website host’s acknowledgement that it has received the content.) But the host’s own activities—its responses to listeners’ requests for content—have the listener-selected nature of hosting, not the speaker-selected nature of delivery.
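
As a rough illustration of the listener-selected character of hosting, consider this minimal sketch of a web host built on Python’s standard library (the URL path and page contents are hypothetical). The host transmits nothing on its own initiative; it responds only when a listener requests a specific address:

```python
from http.server import HTTPServer, BaseHTTPRequestHandler

# Hypothetical speaker-uploaded content, keyed by the URL listeners request.
PAGES = {
    "/portfolio": b"<h1>Illustrations</h1><p>Uploaded by the artist.</p>",
}

class HostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The listener, not the speaker, initiates this communication
        # by asking for a particular address.
        body = PAGES.get(self.path)
        if body is None:
            self.send_error(404, "No speaker has placed content here")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)  # one response, to one requesting listener

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), HostHandler).serve_forever()
```

The delivery legs of each exchange (the request and the response) travel over ISPs, but the host’s own role is to wait, passively, for listeners’ choices.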

Some intermediaries offer both hosting and delivery. Substack is a good example: each post is both made available on Substack’s website and also mailed out to newsletter subscribers. Substack is a hosting service for listeners who read the post on the website, but it is a delivery service for listeners who read the post in their email inbox. Sometimes the distinction is irrelevant, but sometimes it matters. Substack allows newsletter authors to import a mailing list of subscribers, so it is not safe to assume that everyone who receives a Substack delivery has consented to it. For a user who objects to newsletter spam, Substack is a delivery intermediary, not a hosting intermediary.

Like delivery, hosting can be aggregated into a one-to-many medium. Indeed, this is typically the default on the Internet. Unless a host affirmatively restricts which listeners have access to a speaker’s content—for example, with a list of subscribers to a paywalled publication—anyone with an Internet connection can access it, and it is far easier to leave access unrestricted than to impose selective restrictions. Thus, from a speaker’s perspective, hosting can function like broadcast in that it allows a speaker to reach an indeterminately large audience with a single act of publication.

D. Selection

Finally, consider the selection function of some media, which consists of recommending particular content to users. Selection media include general search engines that index third-party sites, such as Google, Bing, Kagi, and DuckDuckGo, as well as site-specific search engines that index the content on a specific platform such as the search bars built into YouTube, TikTok, and X. They also include recommendation engines that may provide personalized results not explicitly tied to a user query, such as the feed algorithms on Facebook and TikTok or the watch-next suggestions on YouTube. The key feature of a selection platform is that it tells users about content, which they can then consume in full if they want.

Selection media are not strictly one-to-one or one-to-many in the same way that broadcast, delivery, and hosting are; they do not by themselves carry content from speakers to listeners. Instead, it is helpful to think of selection media as being many-to-one because they help individual listeners choose speech from a large variety of speakers. They turn an overwhelming volume of available content into a much smaller number of selections or recommendations that a listener can meaningfully experience, and they do so in ways that can be individuated for each specific listener.
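
A toy sketch can make the many-to-one structure vivid (the catalog, listeners, and scoring rule here are all hypothetical): one catalog yields different selections for different listeners, because selection individuates the available speech to fit each listener’s limited attention:

```python
# Hypothetical catalog of available items, tagged by topic.
CATALOG = [
    {"title": "Trade deadline recap", "topic": "sports"},
    {"title": "Fall runway review", "topic": "fashion"},
    {"title": "Sourdough basics", "topic": "cooking"},
    {"title": "Playoff preview", "topic": "sports"},
    {"title": "Street-style photos", "topic": "fashion"},
]

def recommend(prefs: dict[str, float], k: int = 2) -> list[str]:
    """Reduce a large catalog to the k items this listener is most
    likely to want: many speakers in, a few selections out."""
    ranked = sorted(CATALOG,
                    key=lambda item: prefs.get(item["topic"], 0.0),
                    reverse=True)
    return [item["title"] for item in ranked[:k]]

# Two listeners, one catalog, two different feeds.
print(recommend({"sports": 1.0, "cooking": 0.5}))
print(recommend({"fashion": 1.0}))
```

Nothing in the sketch carries the underlying content from speaker to listener; it only points listeners toward it, which is the defining feature of selection.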

Selection media are hardly new, but two features of the Internet make selection media particularly important online. First, the sheer scale of the Internet makes selection an absolute necessity. There is far more content on the Internet, or even on a single social-media platform or a not-especially-large website, than any one user can plausibly engage with. The shift from bandwidth to attention as the most salient bottleneck makes selection a crucial site of contestation.

Second, the Internet has often enabled selection to be disaggregated from delivery and hosting. The selection function of a television channel is obvious: because it can transmit so little compared with what it might, the choice of what to transmit does most of the work of selection. YouTube, by contrast, is both a content host and a content recommender: it can host a video without ever recommending that video to anyone. It is the difference between an album (selection bundled with hosting) and a playlist (selection by itself). This point cuts both ways—distinguishing the two functions takes some First Amendment pressure off of hosting, but piles more onto selection.

III. Access

A. Scarcity

One of the fundamental structural constraints on choices about speech is scarcity: limits on the number of communications that a given medium, or an intermediary using that medium, can carry. Scarcity forces choices among speakers to be made upstream by the intermediary or by regulators allocating the medium among speakers and intermediaries. In contrast, non-scarce media allow choices among speakers to be made downstream by listeners themselves. Unsurprisingly, there is a long history of scarcity arguments in telecommunications policy.

The standard story, as reflected in caselaw, points to the scarcity of broadcast spectrum as a justification for regulation. First, the available spectrum needs to be allocated to different users to prevent chaos and interference. Then, once it has been handed out, these users can be required to carry a reasonable diversity of speakers so that the intermediaries do not have undue power over speech. The usual citation for this form of argument is Red Lion Broadcasting Co. v. FCC, which used scarcity arguments to uphold the FCC’s fairness doctrine.17Red Lion Broad. Co. v. FCC, 395 U.S. 367, 400–01 (1969).

In contrast, other media are not thought of as scarce in the same way. There is room for many simultaneous speakers, which means there is no need for regulatory intervention. Intermediaries themselves can choose which speakers to carry, and there is less risk of having a handful of powerful intermediaries entirely control the speech environment. The usual citation for this form of argument is Miami Herald Publishing Co. v. Tornillo, which declined to extend Red Lion to newspapers.18Mia. Herald Publ’g Co. v. Tornillo, 418 U.S. 241, 257–58 (1974). Instead, the Supreme Court upheld newspapers’ First Amendment right to pick and choose what content they print.

Thus, goes the story, there is a spectrum from scarce media, like broadcast, to non-scarce media, like newspapers. The scarcer the medium, the more regulable it is. Other media fall somewhere in between. Cable television, for example, can carry a limited number of channels, but typically more than broadcast can. Thus, the scarcity rationale for regulating cable exists, but is weaker than for regulating broadcast. This tracks with the regulatory regime: cable operators are required to set aside some of their channels for local broadcasters and public-access channels, but cable channels are not regulated for content. It also tracks with judicial treatment: the Supreme Court held 5-4 that this regulatory regime was constitutional in Turner Broadcasting System, Inc. v. FCC, almost exactly halfway between the unanimous decisions in Red Lion and Miami Herald.19Turner Broad. Sys., Inc. v. FCC, 520 U.S. 180 (1997).

There are two problems with this story. The first is that it does not obviously explain why there are some media—such as telephone—that are even more regulated than broadcast. The telephone network has much higher capacity than broadcast does (it can carry millions of simultaneous conversations), but it is subject to a strict common-carriage regime. A naive scarcity argument would suggest the exact opposite: that because telephone capacity is effectively unlimited, there is no need for regulation.

The second problem is that even in cases that rely on scarcity arguments, those arguments do not always cut in the direction one would expect. In Miami Herald, it was the newspaper arguing that its editorial space was scarce—in the Supreme Court’s words, that it could not engage in “infinite expansion of its column space.”20Mia. Herald, 418 U.S. at 257. The Supreme Court accepted this argument as a rationale to uphold the newspaper’s First Amendment right to reject unwanted content—the exact opposite of what a naive scarcity argument would suggest.

The way out of these paradoxes is to recognize that there are two dimensions to scarcity. On one hand, there is what I call bandwidth scarcity: the limits on any one intermediary’s ability to carry the speech of multiple speakers. On the other hand, there is what I call entry scarcity: the limits on the number of intermediaries who can operate simultaneously. Entry scarcity cuts in favor of regulation: an intermediary is in a position to control who gets to speak, unconstrained by market forces and the threat of competition. But bandwidth scarcity cuts against regulation: it means that the intermediary necessarily exercises editorial judgment over which speakers have access, and it rules out simple common-carriage regimes that treat all speakers equally. It is the interplay between these two distinct forms of scarcity that determines whether a medium is regulable.

In particular, mapping the two dimensions of scarcity in a two-by-two diagram reveals the underlying pattern of scarcity arguments (summarized schematically in the sketch after this list):

  • In the top-right quadrant are print media, which are moderately bandwidth-scarce (it is possible to add pages to a newspaper or book, but at some expense and only by modifying its physical layout) and mostly not entry-scarce (physical printing is a commodity business). Thus, both scarcity considerations cut against regulation: there is no physical or economic need to allocate a limited ability to print among competing speakers, and imposing access rules comes at a real cost to a publisher’s ability to print the content it wants. Indeed, as Miami Herald illustrates, the Supreme Court’s solicitude for intermediaries’ speech is at its zenith here.
  • In the bottom-left quadrant are the classic common carriers. They are entry-scarce (the costs of running a second telephone network to every home were prohibitive), but they are not particularly bandwidth-scarce (carrying one more conversation or letter is a trivial burden for the phone network or the mails). Indeed, these are typically the most regulated communications intermediaries.
  • In the top-left quadrant are broadcast media. They are both entry-scarce (only thirteen VHF channels were allocated, and the practical number that could operate in any given area was invariably smaller) and bandwidth-scarce (each VHF television channel had 6 megahertz to carry a 525-line video signal at 30 frames per second). They are off-axis: their entry scarcity cuts in favor of regulation, but their bandwidth scarcity cuts against it. This is why they have historically been required to carry some diversity of content, but never with full common-carriage rules. They are more regulable than print, but less regulable than common-carriage networks.
  • In the bottom-right quadrant are media that are neither entry-scarce nor bandwidth-scarce. This is also an off-axis combination, but it is the opposite of the situation with broadcast, where access rules were both necessary (to give disfavored speakers access) and costly (because doing so came at the cost of other speech the broadcasters could have carried). Here, access rules do not have a speech cost: giving additional speakers the ability to use an intermediary does not require the intermediary to drop other speakers to make room. However, it is also not clear whether these rules are necessary in the first place, because ordinary market forces would likely suffice to provide all speakers with the ability to speak.
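
Here is that schematic summary: a short sketch (my own gloss on the quadrants above, not a doctrinal test) encoding how the two scarcity dimensions map onto the rough regulatory postures just described:

```python
def scarcity_posture(entry_scarce: bool, bandwidth_scarce: bool) -> str:
    """Map the two scarcity dimensions onto the rough regulatory
    pattern described above (a heuristic gloss, not a legal rule)."""
    if entry_scarce and bandwidth_scarce:
        # Broadcast: some diversity mandates, never full common carriage.
        return "limited access rules (broadcast)"
    if entry_scarce and not bandwidth_scarce:
        # Telephone, mail: carrying one more message is a trivial burden.
        return "common carriage plausible (telephone, mail)"
    if not entry_scarce and bandwidth_scarce:
        # Print: access mandates impose real speech costs on publishers.
        return "strong editorial protection (print)"
    # Neither scarce: access is cheap to grant, but competition may
    # already give speakers outlets, making mandates unnecessary.
    return "access rules costless but perhaps unnecessary"
```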

As we will see, this two-dimensional framing of scarcity is quite helpful in situating the speech claims for and against access to the four types of intermediaries discussed in this essay: broadcast, delivery, hosting, and selection. Entry scarcity provides the justification for access rules to ensure listeners the widest possible range of choices among speakers without artificial limits imposed by incumbent intermediaries. However, bandwidth scarcity, when it exists, counsels caution: access rules come at their own sharp cost, limiting intermediaries’ ability to select the speech they think their listeners will most want to choose among. Thus, as we will see, hosting and delivery media (which are not bandwidth-scarce) may appropriately be the subject of common-carriage regulation where there are real issues of entry scarcity. However, selection media (which are intrinsically bandwidth-scarce) mostly should not be the subject of regulation regardless of entry scarcity.

I should note that there are competing definitions of “scarcity,” and my intention is to be agnostic among them. At different times and places, scarcity has been used to describe physical constraints (such as the laws of physics that govern electromagnetic interference), economic constraints (such as the cost of building out the infrastructure to run a telephone network), and regulatory constraints (such as limits on the number of cable franchises that will be awarded in a geographic area). Some commentators use scarcity narrowly to include only physical constraints; others use it broadly to include economic and regulatory constraints. These varying uses often reflect different beliefs about what kinds of regulations are appropriate for scarce media.21See generally Richard R. John, Sound Policy: How the Federal Communications Commission Worked in the Age of Radio (2025) (unpublished manuscript) (on file with author) (discussing these debates in the early years of the FCC). My argument here is modular with respect to the definition of scarcity in use. If you, according to your preferred definition, believe that a medium is entry-scarce but not bandwidth-scarce, I hope you will agree with my arguments for why common carriage might be an appropriate regulatory regime.

With these observations about scarcity in mind, we can turn to how access rules play out for different types of media. The focus throughout will be on how different rules increase or limit the choices available to listeners.

B. Broadcast

Twentieth-century broadcast media had highly limited capacity and were both bandwidth- and entry-scarce. These limits were primarily physical and technological and secondarily economic and regulatory. The available techniques for modulating an audio or audiovisual signal into one that could be transmitted through the atmosphere (radio, television, and satellite) or through wires (cable) allowed only a small number of such signals to be transmitted simultaneously in any geographic region. This number expanded over time with developments in telecommunications engineering: from AM to FM radio broadcasting; from VHF (very high frequency) to UHF (ultra high frequency) television broadcasting; from coaxial to fiber-optic cables; and so on. The basic structure remained the same: a fixed, finite menu of channels transmitted simultaneously to all potential listeners.

In such a setting, speaker-listener matching arises from a two-stage process. First, a few speakers are chosen to have access to the available channels, and then each listener chooses from the speech that speakers make available on those channels. In the United States, the first-stage choice among speakers was (and is) made by the operator of the physical infrastructure—the transmitting equipment or physical cable network—subject to some regulatory limits. The second-stage choice was (and is) made by individuals: members of the public with appropriate receiving apparatus (restricted, in some cases such as cable and satellite, to those who have subscribed to the operator’s service). The phrase most commonly used to describe this second-stage choice—changing the “channel”—reflects the way in which the technological constraints of twentieth-century broadcast funneled speech into a small and finite number of options.

Consider a speaker who is denied access to a channel, or who receives less access than they want, or who is limited in how they are allowed to use it, or who is charged more than they want for their access. In each case, they are obviously aggrieved. It is harder, however, from a purely speaker-centric position to explain why they have been wronged. The challenge—and this is a recurring challenge for speaker-centric analyses—is the problem of symmetry among speakers. It is one thing to say that the lucky speaker who receives access is better off than the unlucky speaker who does not, but it is quite another to make them change places. Doing so simply swaps the problem of the network operator picking winners and losers with the problem of the government picking winners and losers. To give A access and deny it to B amounts to preferring A’s speech to B’s, and on most theories of free speech, this preference is an awkward one for the government to engage in.

Instead, rationales for broadcast content regulation tend to rely on the needs of listeners, rather than speakers. As many scholars have noted,22E.g., David A. Strauss, Rights and the System of Freedom of Expression, 1993 U. Chi. Legal F. 197, 202. this is the upshot of Alexander Meiklejohn’s famous phrase, “What is essential is not that everyone shall speak, but that everything worth saying shall be said.”23Alexander Meiklejohn, Free Speech and Its Relation to Self-Government 25 (1948). The basic idea of this regulatory paradigm is to give listeners either high-quality content, a wide range of options of content, or both—on the assumption that speakers and broadcasters, left to their own devices, will provide neither. As the Supreme Court put it in Red Lion’s famous phrasing, “It is the right of the viewers and listeners, not the right of the broadcasters, which is paramount.”24Red Lion Broad. Co. v. FCC, 395 U.S. 367, 390 (1969).

Ringing rhetoric aside, it is hard to find actual listeners in the resulting regulatory regime. In an environment of severe bandwidth constraints, it is impossible to solicit and honor all individual listeners’ choices; there are never enough channels to give each member of the audience what they personally want. Instead, they make their desires known only collectively and statistically by tuning in to channels and by paying for those channels or for the things advertised on them. Thus, as the long-running theme in media criticism goes, broadcast was a “vast wasteland” of boring, mediocre, and fundamentally majoritarian content.25Newton N. Minow, Television and the Public Interest, 55 Fed. Commc’n L.J. 395, 398 (2003) (reprinting Minow’s speech on May 9, 1961, before the National Association of Broadcasters). The larger the mass audience, the lower the common denominator.26See C. Edwin Baker, Media, Markets, and Democracy (2002) (arguing that mass media tend towards popular content to the exclusion of content of interest to smaller communities).

Consider some of the most notable examples of broadcast access regulations: the Mayflower doctrine27Mayflower Broad. Corp., 8 F.C.C. 333, 339–40 (1941). and its successor the fairness doctrine,28Rep. on Editorializing by Broadcast Licensees, 13 F.C.C. 1246, 1253 (1949). the right of reply,29Pers. Attacks; Pol. Eds., 32 Fed. Reg. 10303 (July 13, 1967); Red Lion Broad., 395 U.S. at 367 (upholding the constitutionality of the FCC’s right of reply rules). and the equal-time rule.3047 U.S.C. § 315. None of these were concerned with any specific listeners’ choices among speakers. Instead, they were all attempts to provide for listeners’ interests generically—by anticipating what groups of hypothetical listeners might want or need.

The few occasions on which broadcast media regulations have attempted to take account of actual listeners’ choices when setting access rules only show how hard it is to do so. The most striking example is format regulation. For years, the FCC interpreted the Communications Act of 1934’s requirement that broadcast licensees serve the “public convenience, interest, or necessity” to mean that it should consider stations’ formats in its licensing procedures.31Id. § 303. It would deny approval for new pop-music radio licenses, for example, if it felt that an existing market was adequately served by the radio stations already licensed to operate in the area.32Citizens Comm. to Pres. the Present Programming of the Voice of the Arts in Atlanta on WGKA-FM v. FCC, 436 F.2d 263, 270 (D.C. Cir. 1970). Indeed, a licensee seeking permission to change formats was required to petition the FCC for approval.33See Hartford Commc’ns Comm. v. FCC, 467 F.2d 408, 411–12 (D.C. Cir. 1972). These rules have long since gone by the wayside. The FCC now takes the position that broadcasters have a First Amendment right to broadcast any content format they want, and in FCC v. WNCN Listeners Guild, 450 U.S. 582, 595–96 (1981), the Supreme Court upheld the FCC’s policy decision not to consider formats in license renewal and transfer proceedings.

Format regulation was in theory a listener-based system, but the FCC seemed genuinely flummoxed when actual listeners showed up in licensing procedures demanding a voice in the first-stage choices of who got access to the airwaves and on what terms. In Office of Communication of United Church of Christ v. FCC, a group of civil-rights activists attempted to intervene in a license-renewal proceeding before the FCC, alleging that WLBT in Jackson, Mississippi, had aired only pro-segregation viewpoints.34Off. of Commc’n of United Church of Christ v. FCC, 359 F.2d 994, 997–98 (D.C. Cir. 1966). The FCC denied their request, arguing that these “representatives of the listening public”35Id. at 997. could “assert no greater interest or claim of injury than members of the general public.”36Id. at 999. The D.C. Circuit reversed and remanded for an evidentiary hearing, as listeners were “most directly concerned with and intimately affected by the performance of a licensee.”37Id. at 1002.

There followed a string of cases in which the FCC and the D.C. Circuit struggled with how to actually take listeners’ views into account.38E.g., Citizens Comm. to Pres. the Present Programming of the Voice of the Arts in Atlanta on WGKA-FM v. FCC, 436 F.2d 263, 270 (D.C. Cir. 1970); Hartford Commc’ns Comm. v. FCC, 467 F.2d 408, 414 (D.C. Cir. 1972); Lakewood Broad. Serv., Inc. v. FCC, 478 F.2d 919, 924 (D.C. Cir. 1973); Citizens Comm. to Keep Progressive Rock v. FCC, 478 F.2d 926, 929 (D.C. Cir. 1973). In Citizens Committee to Keep Progressive Rock v. FCC, for example, WGLN in Sylvania, Ohio, switched to an all-prog-rock format in late 1971, and then received FCC approval in 1972 to switch to “generally middle of the road music which may include some contemporary, folk and jazz.”39Citizens Comm. to Keep Progressive Rock, 478 F.2d at 928. The Citizens Committee to Keep Progressive Rock petitioned the FCC in opposition. The D.C. Circuit ordered a hearing on whether the Toledo metropolitan area was adequately served by prog-rock stations as compared with top-forty stations,40Id. at 932. and discussed such details as whether a “golden oldies” format was sufficiently distinct from “middle of the road.”41Id. at 928 n.5. “In essence, one man’s Bread is the next man’s Bach, Bacharach, or Buck Owens and the Buckeroos, and where ‘technically and economically feasible,’ it is in the public’s best interest to have all segments represented,” the opinion sagely intoned.42Id. at 929.

My point here is not that the FCC’s enterprise of supervising formats or of requiring balanced public-interest programming in the name of listener interests was ill-considered. Instead, I want to emphasize that these interventions were more about listeners’ interests than about listeners’ choices. Some of them were about giving listeners information thought important for them to have, and some of them were about moderately diversifying the menu of speech from which listeners could choose. But in an environment of severely limited bandwidth serving mass audiences, there was almost nothing more that could be done.

I make this point here because there are two misconceptions about listeners that are extraordinarily prevalent in the literature on access to the media. Both of them are direct consequences of inappropriately extending reasonable assumptions about the broadcast environment to other domains where they are much worse fits.

The first mistaken assumption is that speakers seeking access to media are necessarily good proxies for listeners. In 1967, Jerome Barron wrote, “It is to be hoped that an awareness of the listener’s interest in broadcasting will lead to an equivalent concern for the reader’s stake in the press, and that first amendment recognition will be given to a right of access for the protection of the reader, the listener, and the viewer.”43Jerome A. Barron, Access to the Press—A New First Amendment Right, 80 Harv. L. Rev. 1641, 1666 (1967) (emphasis added). In broadcast media, a strong right of access for diverse speakers may be a way to promote listeners’ practical ability to choose speech.

In other media, which are not characterized by the same combination of broad distribution and narrow bandwidth, there is much less reason to think of speakers as proxies for listeners. To give a simple example, many of the speakers most loudly demanding—and sometimes suing for—a right of access to Internet platforms are unrepentant spammers.44E.g., Cyber Promotions, Inc. v. Am. Online, Inc., 948 F. Supp. 436, 443–44 (E.D. Pa. 1996). Less charitably, the Republican National Committee. See Republican Nat’l Comm. v. Google, Inc., No. 2:22-cv-01904-DJC-JBP, 2023 U.S. Dist. LEXIS 149076, at *2–3 (E.D. Cal. Aug. 24, 2023). The access they seek is the access of pre-FCC unlicensed broadcast: the right to overwhelm media and listeners with high-volume speech that drowns out alternatives and reduces listeners’ practical ability to choose among speakers.

The second misconception about listeners’ choices that arises from seeing all media as broadcast media is the belief that nothing else can be done. Both the justifications for and many of the criticisms of regulations like the fairness doctrine and format review arise from thinking about speech environments in which listeners are fundamentally passive. The only controls they have—or can have—are the channel dial and the on-off switch. It seems to follow that the only useful regulatory interventions must happen upstream and that individual listeners themselves can have little involvement in the matching process. The entire model of media criticism that conceptualizes individuals as television viewers—numb, motionless, and mindless zombies or couch potatoes tuned in to the idiot box—is blind to the ways in which they engage with media that give listeners more agency and more choices.45Even in the case of television, it misses the way that fans engage. See generally Henry Jenkins, Textual Poachers: Television Fans and Participatory Culture (1992); Betsy Rosenblatt & Rebecca Tushnet, Transformative Works: Young Women’s Voices on Fandom and Fair Use, in eGirls, eCitizens 385 (Jane Bailey & Valerie Steeves eds., 2015). This is a different type of agency than the listener agency I discuss here. We will see many examples soon. For now, remember that the assumption of listener passivity is just that—an assumption.

C. Delivery

Delivery media are mostly not bandwidth-scarce, especially on the Internet. Any given delivery intermediary’s platform tends to face fewer capacity constraints than broadcast media did. Partly this is structural: delivery media solve a smaller problem because they only try to route a communication to one recipient, rather than many. Partly it is due to physical differences: the phone network could handle more simultaneous connections by running more wires in trunk lines, whereas cable could not increase the number of channels without reengineering every subscriber’s wiring and equipment. Partly it is due to the telecommunications engineering triumphs of the telephone system and the Internet, which have scaled up over many orders of magnitude in their lifetimes. And partly it is due to recognizing the limits of the possible: telegraph companies did not attempt to offer video service.

Whatever the reason, any given communication takes up a much smaller fraction of a delivery provider’s capacity than a corresponding communication would take up of a broadcaster’s capacity. Comcast as a cable operator can offer its subscribers a few hundred channels, while Comcast as an ISP can offer its subscribers delivery to and from millions of sites. The result is that Comcast’s Internet-service subscribers interfere with each other far less than the cable channels vying for transmission do. One more subscriber is trivial from Comcast’s perspective, and it has every economic incentive to sign up as many as it can. However, each cable carriage agreement is individually negotiated, and Comcast is ready to say “no” if the terms are not good enough because Comcast has to devote some of a sharply limited resource to each channel it offers.

Entry scarcity varies among delivery media. Some, such as email, are almost completely open to entrants: anyone can set up their own SMTP server and start exchanging emails. Others, such as telephone and Internet service, have limited competition among intermediaries who can serve any particular customer or region because the need to place physical infrastructure, such as fiber-optic cables or cell-phone towers, in particular locations creates economic and regulatory barriers to entry. The postal service is an extreme example: it has a statutory monopoly on the carriage of letters.4618 U.S.C. § 1694 (fining anyone who, in regular point-to-point service, “carries, otherwise than in the mail, any letters or packets”).

There is a long and robust tradition of speakers’ rights to access delivery media. Older delivery media, in particular, have frequently been subjected to common-carriage rules that require them to accept communications from all senders to all recipients, and forbid them from discriminating on the basis of the contents of those messages.47See Genevieve Lakier, The Non–First Amendment Law of Freedom of Speech, 134 Harv. L. Rev. 2299, 2316–30 (2021); Blake E. Reid, Uncommon Carriage, 76 Stan. L. Rev. 89, 110–13 (2024). The postal service “shall not . . . make any undue or unreasonable discrimination among users of the mails . . . .”4839 U.S.C. § 403. This statutory obligation is almost certainly a First Amendment rule.49See Blount v. Rizzi, 400 U.S. 410, 416 (1971) (“The United States may give up the Post Office when it sees fit, but while it carries it on the use of the mails is almost as much a part of free speech as the right to use our tongues . . . [P]rocedures designed to deny use of the mail . . . violate the First Amendment unless they include built-in safeguards against curtailment of constitutionally protected expression . . . .”). Similarly, the Communications Act prohibits “any unjust or unreasonable discrimination in charges, practices, classifications, regulations, facilities, or services” by telecommunications common carriers including telephone companies.5047 U.S.C. § 202(a). This is the modern continuation of a long tradition: laws in the nineteenth century required telegraph companies to “operate their respective telegraph lines so as to afford equal facilities to all, without discrimination in favor of or against any person, company, or corporation whatever.”51Telegraph Lines Act, ch. 772, 25 Stat. 382–83 (1888) (codified as amended at 47 U.S.C. § 10); see Lakier, supra note 47, at 2320–24 (surveying history of telegraph common-carrier laws). Indeed, the postal service,52See 39 U.S.C. § 101(a) (“The United States Postal Service shall be operated as a basic and fundamental service provided to the people by the Government of the United States . . . .”). telephone network,53See 47 U.S.C. § 254 (establishing universal service policy). and broadband Internet service54See generally FCC, Connecting America: The National Broadband Plan (2010). are all the subjects of universal-service policies that affirmatively attempt to provide access to all American residents.

On the other hand, it is an open doctrinal question whether government can require modern delivery providers—specifically email and broadband Internet—to provide uncensored access to speakers and listeners. The best and most prominent example is the FCC’s network neutrality rules that attempted to require broadband ISPs to carry traffic to and from all edge providers (that is, speakers) on a nondiscriminatory basis.55The most recent version was the Safeguarding and Securing the Open Internet Order of 2024, 89 Fed. Reg. 45404 (June 7, 2024). See 47 C.F.R. § 8.3(a) (2024) (ISPs “shall not block lawful content, applications, services, or non-harmful devices”); id. § 8.3(b) (ISPs shall not “impair or degrade lawful internet traffic on the basis of internet content, application, or service”); id. § 8.3(c)(1) (ISPs shall not “directly or indirectly favor some traffic over other traffic” for compensation); id. § 8.3(d)(1) (ISPs shall not “unreasonably interfere with or unreasonably disadvantage” users’ ability to access and edge providers’ ability to make available lawful content). That order was set aside by the Sixth Circuit. See Ohio Telecom Ass’n v. FCC, 124 F.4th 993 (6th Cir. 2025). It is unlikely that federal network-neutrality rules will be revived in the short run, although state-level counterparts remain in force. See, e.g., Cal. Civ. Code § 3100 (West 2024). The D.C. Circuit upheld one version of the FCC’s network neutrality rules against a First Amendment challenge in 2016.56See U.S. Telecom Ass’n v. FCC, 825 F.3d 674, 675 (D.C. Cir. 2016). Dissenting from denial of rehearing en banc, Judge Kavanaugh argued that ISPs exercise editorial discretion protected by the First Amendment.57See U.S. Telecom Ass’n v. FCC, 855 F.3d 381, 382 (D.C. Cir. 2017). There are also dicta in the Moody v. NetChoice majority opinion describing First Amendment protections for social-media companies’ “choices about the views they will, and will not, convey” that would seem to apply equally well to ISPs.58Moody v. NetChoice, LLC, 603 U.S. 707, 737 (2024).

Indeed, § 230 affirmatively shields Internet delivery media from liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”5947 U.S.C. § 230(c)(2)(A). The precise contours of what constitutes “good faith” are unsettled,60See, e.g., Darnaa, LLC v. Google, Inc., No. 15-cv-03221-RMW, 2016 U.S. Dist. LEXIS 152126, at *9 (N.D. Cal. Nov. 2, 2016). as is the scope of the “otherwise objectionable” catchall,61See, e.g., Enigma Software Grp. USA, LLC v. Malwarebytes, Inc., 946 F.3d 1040, 1047 (9th Cir. 2019). but the general result is to preempt any state attempts (by statute or common law) to impose access mandates.62See, e.g., Republican Nat’l Comm. v. Google, Inc., No. 2:22-cv-01904-DJC-JBP, 2023 U.S. Dist. LEXIS 149076, at *10–11 (E.D. Cal. Aug. 24, 2023).

It is also notable that many delivery media are governed by strict privacy rules that limit carriers’ ability even to determine the contents of a message. The USPS is legally prohibited from opening first-class mail without a search warrant.63See 39 U.S.C. § 404(c). Telephone carriers are restricted from listening to conversations by the Wiretap Act,64See 18 U.S.C. § 2511(1)(a) (prohibition on interception); id. § 2511(2)(a)(i) (describing limited exception to that prohibition for interceptions “necessary incident to the rendition of his service or to the protection of the rights or property of the provider of that service”). as are ISPs and email providers.65See, e.g., United States v. Councilman, 418 F.3d 67, 69 (1st Cir. 2005) (finding Wiretap Act interception by email provider). Even beyond legal limits, many delivery providers now use encryption systems that technologically prevent the provider from determining message contents; for example, Apple Messages and Signal are end-to-end encrypted so that only the designated recipient (and not any intermediary, including Apple or Signal) can decrypt a message. A fortiori, carriers who cannot even tell what a message says cannot discriminate on the basis of its contents.
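
To see why an end-to-end encrypting carrier is structurally incapable of content discrimination, consider a minimal sketch using the PyNaCl library, one widely used implementation of this style of public-key encryption (the parties and message here are hypothetical). Only the recipient’s private key can recover the plaintext, so any intermediary relaying the ciphertext handles only opaque bytes:

```python
from nacl.public import PrivateKey, SealedBox

# The listener generates a keypair; the public half is shared openly.
listener_key = PrivateKey.generate()

# The speaker encrypts to the listener's public key.
ciphertext = SealedBox(listener_key.public_key).encrypt(
    b"Meet me at the statehouse at noon."
)

# A carrier relaying `ciphertext` sees only random-looking bytes;
# it cannot read, rank, or filter by content.
assert b"statehouse" not in ciphertext

# Only the listener, holding the private key, can decrypt.
plaintext = SealedBox(listener_key).decrypt(ciphertext)
print(plaintext.decode())
```

The design choice matters for regulation: a carrier that has engineered itself out of the ability to inspect content has, by the same stroke, engineered itself out of the ability to discriminate on the basis of it.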

It is easy to justify common-carriage access rules for delivery media—old and new—in light of their structural characteristics. From the intermediary’s point of view, the weak bandwidth constraints mean that carrying any particular communication is not a substantial technical burden. In the aggregate, of course, communications add up, but that is primarily an economic problem—one to be addressed with appropriate pricing and funding.66See generally Brett Frischmann, Infrastructure: The Social Value of Shared Resources (2012). Where pricing is not available or insufficient, capacity limits on the volume of communications to or from a user are largely content-neutral ways of allocating bandwidth.67Similarly, communications that impair the network itself can be addressed through anti-abuse rules that target the harmful effects and only incidentally burden speech. See, e.g., 47 C.F.R. § 68.108 (2023) (allowing telephone providers to discontinue service to customers who attach equipment that harms the network); id. §§ 8.3(a), (b), (d)(2) (making exceptions to network neutrality rules for “reasonable network management”).

Carrying a communication is not a speech problem, except to the extent that the intermediary wants to make an expressive statement by carrying or refusing to carry particular messages. Historically, though, that argument has carried very little weight for traditional delivery media. This attitude is easy to justify by seeing delivery media from the perspective of speakers and listeners. Willing speakers and willing listeners have essentially the same interest in access to delivery media: communicating with each other, which is the core free speech interest.68Grimmelmann, supra note 1, at 382; Jovy Chan, Understanding Free Speech as a Two-Way Right, 1 Pol. Phil. 156, 164 (2024). If you want to send me an email and I want to receive it, we are both thwarted if your email provider deletes it.

An intermediary’s speech claims are weaker when they go up against those of matched speaker-listener pairs. The intermediary may not want to help the speaker and listener connect, but this is fundamentally an objection to their speech, not a claim about its own speech. It might prefer to deliver messages from other speakers it likes better; but when it does so, it forces listeners to receive messages from speakers they prefer less. As I argued in Listeners’ Choices, it is a core free-speech violation to make a listener listen to a speaker whose speech they do not want rather than listen to a speaker whose speech they want.69Grimmelmann, supra note 1, at 388. So while a delivery intermediary’s denial of access to a speaker or listener is not by itself a First Amendment violation, the First Amendment leaves ample room for government to require delivery intermediaries to provide access.

In general, both speakers and listeners have standing to challenge denials of access to a delivery platform. In Murthy v. Missouri, the Supreme Court held that listeners do not have standing to challenge restrictions on speakers unless “the listener has a concrete, specific connection to the speaker.”70Murthy v. Missouri, 603 U.S. 43, 75 (2024). In the case of a speaker attempting to send a message to a specific listener (as opposed to the hosting platforms at issue in Murthy itself), this connection seems clearly satisfied. And where it is the listener who has been excluded from a platform (for example, disconnected by their ISP over alleged copyright violations), the impact on their speech interests as a listener is equally obvious.

If there is a distinction between analog and digital delivery media, it cuts in favor of applying access rules to modern digital intermediaries, not against. As bandwidth constraints drop further and further away, intermediaries’ arguments that they have a technical or economic need to discriminate among users on the basis of their speech get weaker and weaker. Most arguments to the contrary rest on a confusion between delivery and selection media. Commentators project the strong expressive interests in an intermediary’s selection function (both the intermediary’s own and those of the listeners they serve) onto the intermediary’s delivery function, without stopping to consider whether these functions can be separated and distinguished.

D. Hosting

Common-carriage access rules for hosting media generally facilitate listener choice. There is an obvious argument in favor of access rules: the more speakers that are available through a hosting intermediary, the wider the range of choices it offers to listeners. The entire web was better than AOL’s walled garden; a streaming service with ten million tracks beats one with one million. The hosting intermediary might have self-interested reasons to limit access (for example, to favor its affiliated speakers or to extract more money from speakers through price discrimination), but the listeners who use the platform generally prefer that it offer the widest possible range of speakers and speech. To a first approximation, listeners either side with the speaker in a speaker-hosting platform dispute (if they want the speech) or are at most indifferent (if they do not want the speech).

Common arguments against access rules that apply to other forms of media mostly do not apply to hosting media. First, there is no scarcity of bandwidth compelling hosting intermediaries to pick and choose among speakers to carry. Bandwidth on the Internet is effectively infinite. Cloudflare could serve every user in the United States if it needed to. This is not to say that Cloudflare could, would, or should do so for free—this level of access would be quite expensive and a speaker wanting to support hundreds of millions of massive downloads would quite reasonably be expected to pay commensurately. It is just that Cloudflare could serve everything to everyone.

Second, there are generally no operational constraints that cause one speaker’s content to interfere with another’s. Common Internet hosting intermediaries are technically capable of carrying almost any item of content within a category: videos at a given resolution, files consisting of arbitrary bitstrings, and so on. These items of content may have different sizes—and might be subject to caps for short-run capacity or economic reasons—but from a technical perspective, the intermediary is entirely indifferent as to their content. A broadcast radio station must deal differently with a talk-show host in studio one, a live musical performance in studio two, and a recorded program coming via audio link from a remote location. However, in an important sense, all apps in an app store are the same. Offering speaker A’s app does not divert resources needed to offer speaker B’s.

Third, there is no scarcity of listeners’ attention compelling hosting providers to prioritize some content over others. A delivery platform can fill up a listener’s queue with unwanted speech, making it harder to receive the speech they want. If your telephone is ringing off the hook with telemarketers, your friends will get a busy signal every time they call. However, a hosting platform does not make any claims on a listener’s attention; it simply sits there passively until the user seeks out and requests the speech. No one is interested in all 100,000,000 tracks on Spotify; but for the most part, having access to an extra 99,900,000 does not take anything away from the 100,000 one might actually be interested in listening to.

To be sure, a hosting platform with 100,000,000 pieces of content is harder to browse than a platform with 100. But this should be understood as more of a selection problem than a hosting problem. Combining hosting and selection into a single platform function takes some of the control over speaker-listener matching away from listeners and vests it in the platform. A movie theater that shows 5 movies at a time offers far less listener choice than a streaming platform that gives listeners access to a catalog of 50,000. Give that same listener a list of 5 recommended hot new releases and they have all of the choice-related benefits of the movie theater and none of the drawbacks. The rise of Internet-scale hosting intermediaries creates its own need for equally useful selection intermediaries, but the first step towards facilitating their healthy development is recognizing that selection is distinct from hosting.

None of this is to say that access rules always actually enhance the choices available to listeners. The economics of multi-sided markets are complicated, and a badly designed access rule could undermine a pricing strategy that successfully attracts more speakers and more listeners to an intermediary. My goal here is narrower. I want to argue that rules that have the effect of increasing the range of speakers available on a hosting platform are pro-listener-choice, whether or not they are structured as open access rules. The actual creation of a regulatory regime involves difficult policy considerations and mechanism design. My point is only that this policy space ought to be available to regulators and not be foreclosed by the First Amendment.

Indeed, access rules are even easier to justify for commodity hosting platforms than they are for delivery platforms. As we have seen, filtering rules for delivery media frequently translate into corresponding exceptions to access rules. Spam-blocking, for example, might be a case of reasonable network management under network neutrality rules. This, in turn, means that regulators need to be cautious with imposing access rules, lest they inadvertently cut off filtering that listeners depend on. A must-carry rule for email, for example, would be a spammer’s dream.

To the extent that listeners do their own filtering in accessing a hosting platform, hosting platforms do not require the same degree of caution with access rules. If regulators require that Candy Crush be available in app stores, it does no harm to a user who does not enjoy match-three games. If you don’t want to play Candy Crush, don’t download it.

E. Selection

For decades, speakers have been demanding access to selection intermediaries. In the 2000s, the issue of the day was “search neutrality”: equal access to search engines’ rankings.71See generally Grimmelmann, supra note 4. More recently, speakers have complained about being “downranked” on social media—that is, not placed in other users’ algorithmic feeds. In both cases, the complaint is the same: their speech is theoretically available to users but not recommended in practice.

The fundamental challenge with giving a coherent account of access to selection is the baseline problem.72See generally Grimmelmann, supra note 4. It is nearly impossible to describe what “correct” or “neutral” rankings would look like. Different users have different preferences, and even the same user has different preferences in different contexts and at different times. My Facebook News Feed should not be identical to yours; we have different friends, and you like fashion while I like sports. My search results for “crab cakes” should be different from my search results for “crab canon,” and even my search for “Vikings” could refer to Scandinavian seafarers, a football team, Mars probes, a TV series, or kitchen appliances.73See Grimmelmann, Speech Engines, supra note 3, at 913 (discussing challenge of defining relevance). As a result, different selection media can quite reasonably make different choices about speakers. Indeed, for a regulator to prescribe what a selection platform should do is to become a selection platform itself.

Thus, selection stands in sharp contrast to delivery and hosting, both of which have a plausible neutral baseline: deliver or host everything. Selection is more like broadcast in this respect: choices must be made. However, the reason for the choices is very different. The need for choices in broadcast stems from bandwidth being scarce; not all speech can be made available at all. The need for choices in selection stems from attention being scarce; listeners must choose among the speech available to them. In broadcast, transmission and selection are inextricably linked. However, on the Internet, transmission (that is, hosting plus delivery) and selection can be distinct functions, one of which substantially overcomes the scarcity problem and the other of which confronts it full-force.

Access claims in the selection context are therefore effectively a zero-sum fight among speakers. To move speaker A up one place in a feed means pushing some other speaker B down one place. Platforms might make this choice for a variety of content-based reasons—profit, ideology, whimsy—but it is much harder to identify a legitimate reason for a regulator to prefer A to B or vice-versa. A neutrality rule in a delivery or hosting context works because the government can tell an ISP to deliver all IP datagrams with equal priority (network neutrality) or a cloud-hosting provider to host all lawful content (a must-carry regime); the baseline is content-neutral. But there is no simple corresponding neutrality rule for selection. To select is to choose on the basis of content.

I argued in Speech Engines for a more limited principle of relevance to search users: a search result is a search engine’s guess at what a user will find relevant to their query.74Grimmelmann, Speech Engines, supra note 3, at 913. The user’s goals are subjective, and the engine’s estimate of them is necessarily a guess; but whether the results the engine actually shows correspond to its own best guess is an objective fact. A regulator therefore has a principled basis to intervene when a search engine is disloyal to its users—and it is disloyal when it shows them results that (objectively) differ from the engine’s own (subjective) judgment about what the users are likely to find relevant. This does not mean the regulator can substitute its own relevance judgments for those of the user or the search engine, but it does mean that the regulator can prevent the search engine from lying to users, and it might be able to prevent certain conflicts of interest that might tempt the search engine away from its own best guess.
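To make the structure of this loyalty principle concrete, here is a minimal sketch in Python (all names, scores, and URLs are hypothetical illustrations, not any engine’s actual method). The relevance estimates are the engine’s own subjective guesses; what an outsider can check objectively is only whether the displayed ranking matches them.

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    relevance: float  # the engine's own (subjective) relevance estimate
    promoted: bool    # whether some non-relevance motive boosts this result

def best_guess(results):
    """The ranking implied by the engine's own relevance judgments."""
    return sorted(results, key=lambda r: r.relevance, reverse=True)

def is_loyal(displayed, results):
    """Objective check: does the displayed ranking match the engine's own
    best guess? The check never second-guesses the scores themselves,
    only the correspondence between guess and display."""
    return [r.url for r in displayed] == [r.url for r in best_guess(results)]

results = [
    Result("https://example.com/a", relevance=0.9, promoted=False),
    Result("https://example.com/b", relevance=0.4, promoted=True),
]
# A disloyal engine floats the promoted result despite its lower score.
displayed = sorted(results, key=lambda r: (r.promoted, r.relevance), reverse=True)
print(is_loyal(displayed, results))  # False: the display diverges from the guess
```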

This argument generalizes into a broader claim about selection intermediaries and listeners. A selection intermediary offers listeners a way to choose among speakers. To prohibit the intermediary from doing so, or to dictate how it makes the selection, is to interfere with listeners’ ability to choose. We should understand this as an interference with listeners’ First Amendment rights to listen (and not just the intermediary’s right to speak). At the same time, we should recognize that a selection intermediary that is dishonest or disloyal also interferes with listeners’ First Amendment interests. The dishonesty and disloyalty can provide a content-neutral basis for identifying problematic recommendations by selection intermediaries, even though those recommendations are themselves content-based.

  1. Moody v. NetChoice

The Supreme Court’s recent decision in Moody v. NetChoice was a missed opportunity to clarify these principles.75Moody v. NetChoice, LLC, 603 U.S. 707, 724–28 (2024). Texas and Florida passed content-moderation laws that, in various ways, prohibited major social-media platforms from restricting content on the basis of political viewpoint (Texas) or from restricting content from political candidates or journalistic enterprises (Florida). The actual holding in Moody was a nothingburger about the appropriate standards for facial challenges; but in dicta, a five-justice majority explained that the platforms’ “selection, ordering, and labeling of third-party posts” were protected expression.76Id. at 727.

This was a thoroughly speaker-oriented perspective. It treated the problem with the states’ laws as that “an entity engaging in expressive activity, including compiling and curating others’ speech, is directed to accommodate messages it would prefer to exclude.”77Id. at 731. This perspective makes perfect sense when the entity is a newspaper or a parade, both of which contribute to the marketplace of ideas by adding perspectives they think that readers or viewers will appreciate. And it is true, in a sense, for social media, where many platforms curate speech in ways that reflect specific viewpoints.

However, in another and more accurate sense, the value of selection algorithms on social media runs to users as listeners: the algorithms help them find speech that they consider interesting, valuable, and relevant to their diverse interests. A state mandate to insert some speech into a user’s feed or search results interferes with the user’s ability to listen to the speech that the user actually wants to hear. It is not just compelled speech as against the platform—it is also compelled listening as against the user. Put this way, the First Amendment problem is blindingly obvious.78See generally Brief of First Amendment and Internet Law Scholars as Amici Curiae Supporting Respondents, Moody v. NetChoice, LLC, 603 U.S. 707 (2024) (Nos. 22-277 and 22-555) (making this argument).

This shift in perspective—from speaker to listener, from platform to user—is important for two reasons. First, it gives a more convincing response to the states’ argument that the platforms are not really speaking in most of their selection decisions. Facebook does not really have an opinion on whether my cousin’s apple pie photos or my friend’s story about a long line at the grocery store is worthier speech, but I certainly do. There is a sense in which the speech value of Facebook’s ranking decisions is derivative of my speech interests.

This is a compelling response to Texas’s attempt to inject political speech into social-media feeds on a viewpoint-neutral basis. It is a bit uncomfortable for Facebook to argue that it has an expressive preference to discriminate on the basis of viewpoint, but it is perfectly natural for individual users to have expressive viewpoints and to prefer content on that basis. For listeners to choose speakers on the basis of viewpoint is not to interfere with the freedom of speech; it is an exercise of that freedom and the point of the whole enterprise. Subscribing to The Nation instead of National Review (or vice-versa) is viewpoint discrimination on the user’s part, and that is a good thing! Social-media users want feeds that reflect their divergent interests and viewpoints, and social-media platforms advance, rather than inhibit, First Amendment values when they cater to these listener preferences.

Second, the focus on listeners’ expressive interests in choosing what speech they receive on social-media platforms and on having platforms that can algorithmically make selections in accordance with those interests makes clearer that this is an argument only about selection and not necessarily about hosting. To the extent that states attempt to regulate platforms’ hosting functions with neutrality or must-carry mandates, those laws may rest on a firmer basis than their attempts to regulate platforms’ selection functions.79Eugene Volokh, Treating Social Media Platforms Like Common Carriers?, 1 J. Free Speech L. 377, 448 (2021). As I argued above, there is a plausible neutral baseline for hosting, and regulating hosting by itself does not interfere with listeners’ choices in the same way as regulating selection does.

In the actual Moody and Paxton cases, the platforms’ hosting and selection functions were closely related, and the most common content-moderation remedy they applied was to delete the content entirely.80See generally Eric Goldman, Content Moderation Remedies, 28 Mich. Tech. L. Rev. 1 (2021) (discussing much wider range of remedies available to platforms). Similarly, the states’ laws ran rules that sounded in hosting (“permanently delete or ban”) together with rules that sounded in selection (“post-prioritization” or “shadow ban”), as if all of these practices were entirely equivalent. However, it is possible to imagine future laws that more clearly require hosting of content on a viewpoint-neutral basis while leaving platforms greater discretion over selection. I think these laws pose genuinely harder questions. Moody’s majority opinion collapses these distinctions in an unhelpful way.

  2. Antitrust and Self-Preferencing

A listeners’-choice perspective also shows why antitrust regulation of selection intermediaries is broadly permissible, even when some of the anticompetitive conduct complained of involves the selection of speech.81See generally Hillary Greene, Muzzling Antitrust: Information Products, Innovation and Free Speech, 95 B.U. L. Rev. 35 (2015). The actual antitrust analysis is highly fact-specific and requires careful technological and economic reasoning about particular products and markets. See generally Erik Hovenkamp, Platform Exclusion of Competing Sellers, 49 J. Corp. L. 299 (2024); Erik Hovenkamp, The Antitrust Duty to Deal in the Age of Big Tech, 131 Yale L.J. 1483 (2022). My point here is only that in many circumstances, the First Amendment does not block a court from reaching the merits of an antitrust case involving a selection intermediary. Again, the key point is that although users have content- and viewpoint-based preferences among speech, the government can act neutrally in terms of content by taking those preferences into account, whatever they are. An app store that rejects fart apps because “the App Store has enough fart, burp, flashlight, fortune telling, dating, drinking games, and Kama Sutra apps, etc. already”82App Review Guidelines § 4.3 Spam, Apple Dev., https://developer.apple.com/app-store/review/guidelines [https://perma.cc/9FA3-N67R]. is certainly expressing a viewpoint. However, to the extent that users want fart apps and the app store is suppressing competing fart apps in favor of its own, promoting welfare-enhancing consumer choices is a perfectly legitimate government interest and the harm is cognizable under traditional antitrust principles.

Thus, rules against self-preferencing by selection intermediaries will generally be permissible under the First Amendment. This position may sound absurd if one sees only the First Amendment interests of the intermediary, and it is still difficult if one takes into account the interests of its competitors. However, it becomes entirely reasonable if one considers the interests of affected users. Indeed, there is a natural congruence between the interests of users as listeners (my argument in this essay) and the interests of users as consumers (the traditional stance of antitrust law).

More specifically, it would be permissible to have a rule that a pure selection intermediary must treat first-party content it produces itself evenhandedly with third-party content from competitors. The intermediary will have valid, expressive reasons to prefer some content over other content, and those decisions will mostly be off-limits to regulatory scrutiny, as discussed above. However, a regulator can make clear that the platform cannot prefer first-party content simply because it is first-party content. The platform can use any ranking rules it wants, but those rules must be applied evenhandedly to all—or at least, the platform must give users the option of disabling any self-preferencing.
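A minimal sketch, with hypothetical names and a toy scoring function, of what such an evenhandedness rule might look like in operation: the platform may use any content-based quality criteria it likes, and the only term the rule targets is the bonus keyed to first-party origin.

```python
def rank(items, self_preference_disabled=True, platform="AppCo"):
    """Rank items by the platform's own (freely chosen) quality score;
    an origin bonus applies only if the user has not disabled it."""
    def score(item):
        base = item["quality"]  # any content-based criteria the platform likes
        if not self_preference_disabled and item["publisher"] == platform:
            base += 1.0         # the first-party bonus the rule would police
        return base
    return sorted(items, key=score, reverse=True)

apps = [
    {"name": "RivalFart", "publisher": "RivalCo", "quality": 0.8},
    {"name": "HouseFart", "publisher": "AppCo",   "quality": 0.5},
]
print([a["name"] for a in rank(apps)])  # evenhanded: RivalFart ranks first
print([a["name"] for a in rank(apps, self_preference_disabled=False)])
# self-preferring: HouseFart jumps the queue despite its lower quality
```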

For similar reasons, disclosure of speech-selection intermediaries’ commercial ties is also generally permissible under traditional consumer-protection principles. Listeners can legitimately expect to know when a speaker has a financial incentive to tell them one thing rather than another, an expectation that applies to speech selection as well as to speech itself. At the moment, paid advertising in search results and in social-media feeds must be disclosed as such; however, a stronger rule that required selection platforms to disclose when recommended content is first-party, or when there are substantial financial ties between the platform and a speaker, would also be allowable for the same reasons.

Finally, full structural separation between hosting, delivery, and selection is a plausible antitrust remedy or regulatory mandate. In Part IV, I will discuss in more detail why this kind of separation might be appealing from a free-speech perspective. For now, I just want to note that the economic and technical separation of these functions is itself plausible from a First Amendment perspective, Moody notwithstanding. I have been arguing that hosting and delivery platforms could be subject to must-carry rules, but selection platforms generally cannot. Much of the gap between the two sides’ positions in Moody arose from the fact that the laws’ proponents generally cited caselaw about common carriage in hosting and delivery settings, while the laws’ opponents generally cited caselaw about expressive choices in selection settings.

The thing that made the Moody cases difficult to resolve was that the platforms combined both hosting and selection functions, and most of the briefing (and the opinions) ran these functions together. This would seem to open up an argument on the platforms’ part: Moody confirms they have full First Amendment protection when they engage in selection, so even a pure hosting platform is always allowed to engage in selection—i.e., there is a First Amendment right to combine these two functions. However, I think this does not follow from Moody; or to the extent that it does, Moody is wrong.

The thrust of the common-carriage cases is that the public provision of standardized service can be subject to nondiscrimination obligations.83There is a parallel tradition that these standardized services can be structurally separated from other services that involve more individualized offerings. This, for example, is what the Telecommunications Act of 1996 attempted to do with its distinction between “telecommunications service” (standardized and common-carriage) and “information service” (bespoke and unregulated). To the extent that this distinction is coherent (and I think that it is, much of the time), nondiscrimination obligations should apply to the standardized services and not to the individualized ones. Moody may have missed this distinction, but the Court’s opinion in 303 Creative LLC v. Elenis seems to hinge on it: requiring a designer to make a custom wedding website (“pure speech”) is compelled speech forbidden by the First Amendment, but it is perfectly permissible to require a merchant to sell a commodity product to all comers.84303 Creative LLC v. Elenis, 600 U.S. 570, 593–94 (2023); see also Dale Carpenter, How to Read 303 Creative v. Elenis, Volokh Conspiracy (July 3, 2023, 2:11 PM), https://reason.com/volokh/2023/07/03/how-to-read-303-creative-v-elenis [https://perma.cc/KVQ9-KD2N] (arguing that 303 Creative applies to products that are customized and expressive). In listener terms, listeners are paying attention to the intermediary’s own speech in individualized cases like selection, and to third-party speech in standardized cases like hosting.

  3. Unranked Feeds

An interesting special case—a partial separation of hosting from selection—is to require a provider to include an unranked or chronological feed for those users who want it. Facebook offers both “Top Posts” (algorithmically ranked) and “Most Recent” (chronological) feeds; Reddit offers “Best” and “Hot” (algorithmically ranked) but also “New” (chronological) sorting options.

What makes these options feasible is that there is a plausible objective baseline. A chronological feed on Facebook is “all posts from friends and pages I follow, sorted by recency.” This is workable in a way that “all posts I would be interested in” is not. The restriction to content from accounts that one follows is what makes the option to display everything tractable. A purely chronological feed of everything posted to X (the “firehose”) is not of interest to most users—it would be overwhelmingly vast—but a purely chronological feed of everything posted by those they follow is. For similar reasons, a non-algorithmic search engine is an oxymoron except in domains that are so small or simple as to barely require a search engine at all. Anything larger than “find on this webpage” requires contestable choices about ordering.
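The tractability of the chronological baseline is easy to see in code. A minimal sketch, with hypothetical field names: the only inputs are the follow list and timestamps, and no contestable ranking judgment enters at any point.

```python
from datetime import datetime, timezone

def chronological_feed(posts, following):
    """The objective baseline: every post from accounts the user follows,
    newest first. Nothing here requires a judgment about content."""
    return sorted(
        (p for p in posts if p["author"] in following),
        key=lambda p: p["posted_at"],
        reverse=True,
    )

posts = [
    {"author": "alice", "posted_at": datetime(2024, 5, 1, tzinfo=timezone.utc), "text": "pie"},
    {"author": "bob",   "posted_at": datetime(2024, 5, 2, tzinfo=timezone.utc), "text": "queue"},
    {"author": "carol", "posted_at": datetime(2024, 5, 3, tzinfo=timezone.utc), "text": "spam"},
]
for p in chronological_feed(posts, following={"alice", "bob"}):
    print(p["author"], p["text"])  # bob, then alice; carol is not followed
```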

A chronological-feed option is listener-choice enhancing. A chronological-feed mandate would not be. Facebook and other social-media platforms have extensive evidence showing that users stay on their sites longer and engage with more posts when they see non-chronological feeds. This is a legitimate user preference; given the limits of attention, the user benefits greatly from delegating the choice to Facebook.85I think it is more accurate to call this a “delegation” of choice rather than “choosing not to choose.” Cf. Cass R. Sunstein, Choosing Not to Choose, 64 Duke L.J. 1, 9 (2014). However, not every user wants algorithmic feeds. I, for example, only used chronological ordering on Twitter, and have stuck to that preference on federated platforms. This, too, is a legitimate user preference; a platform that forces algorithmic ordering on everyone when chronological ordering is feasible thwarts some listeners’ choices about speech selection.

This is another way in which Moody paints with too broad a brush. Seeing selection as purely a matter of platform speech makes the majority insensitive to listeners’ speech interests. Requiring a chronological option for social-media feeds in addition to a platform’s preferred algorithmic option looks like a restriction on the platform’s speech rights; indeed, to the majority it might even be compelled speech. However, a chronological feed option is also a way of respecting users-as-listeners’ choices about speech without forcing a platform to make ranking choices that it and its users would otherwise disagree with. Requiring a chronological option strictly increases the choices available to listeners, while not interfering with a platform’s ability to provide its preferred ordering to any listeners who are interested in hearing it.

IV. Filtering

Now consider media from the perspective of unwilling listeners. As we will see, there are really three different types of unwilling listeners in media regulation. In each case, it is helpful to distinguish between (1) downstream filtering infrastructure that empowers listeners themselves to avoid unwanted content, and (2) upstream filtering rules that prevent that content from reaching them in the first place.

First, there are listeners who are uninterested in or who actively dislike particular content: opera fans who loathe rap music or reality television fans who find scripted shows unbearably dull. Here, downstream filtering infrastructure is typically sufficient. As long as there is something they would rather watch (an access problem), as long as they are able to find out about it (a selection problem), and as long as they are actually able to switch to it (which is true for most media),86Exceptions typically involve being in public places, such as in an auto mechanic’s waiting room or on a subway car with someone having a loud video call. they can watch operas and reality shows, and ignore the rap and scripted dramas. It does not bother them, because they do not need to see it. Upstream filtering rules are unnecessary.

Second, there are listeners who are individually targeted with specific unwanted content that is hard for them to avoid. This is fundamentally a delivery problem; it does not arise with other types of media. Sometimes speakers target individual listeners, like a harassing telephone caller. Sometimes they target many listeners indiscriminately, like an email spammer. Either way, listeners can try to use self-help downstream filtering to avoid it, but if that fails, they may need upstream filtering to help prevent it from reaching them in the first place.

And third, there are minors. Sometimes, children want to avoid violent, sexual, disturbing, or other adult-themed content because it upsets them, but they come across it by accident and cannot look or flip away in time. Sometimes—perhaps more often—the problem is that children are willing to see this material, but their parents or guardians want to shield them from it. In both cases, the theory is that children are less capable of making choices for themselves as listeners than adults are, and therefore that some kind of upstream filtering rules are necessary because downstream ones will fail. Either the kids themselves will be less good at filtering than their parents would be, or the kids will affirmatively evade the filtering their parents try to impose.

Downstream filtering infrastructure also plays a crucial role in supporting (or undermining) the rationales for other kinds of media regulations. On the one hand, good downstream filtering plays a crucial role in making it possible for listeners to pick and choose among the superabundance of content that access rules try to make available. On the other, good downstream filtering can reduce the need for upstream filtering rules—in First Amendment terms, it is frequently a “less restrictive alternative.”

A. Broadcast

In broadcast media, unwilling listeners were typically expected simply to change the channel. They may not always have had many other broadcast options, but no one was forcing them to watch any particular broadcast. Even this limited measure of choice was sufficient to protect unwilling listeners from programs they despised. As the range of channels expanded (and with it, the range of choices), any one unwanted channel became less of an imposition on listeners—indeed, they became less and less likely to notice or care about it at all. Similarly, by their nature, very few broadcast programs were personally targeted at, or specifically harmful to, individual listeners. The local CBS affiliate simply did not care enough about Angela Johnson at 434 Oakview Terrace to preempt Murder, She Wrote with an hour-long special insulting Johnson and her life choices.

Instead, the filtering problems on broadcast media primarily concern minors. The theory of “just change the channel” does not work for them for two reasons. First, something offensive or shocking could come up unexpectedly when one is just flipping through channels. This was the case in FCC v. Pacifica Foundation, in which the Supreme Court upheld the FCC’s finding that a radio broadcast of George Carlin’s “seven dirty words” routine was indecent in violation of its regulations.87FCC v. Pacifica Found., 438 U.S. 726, 740–41 (1978). And it is the case with the FCC’s modern attempts to extend its obscenity-and-indecency rules to cover fleeting expletives and other sudden intrusions into otherwise family-friendly broadcasts, like Bono calling U2’s Best Original Song win at the Golden Globes “really, really, fucking brilliant” live on air, or the 2004 Super Bowl wardrobe malfunction.88See generally FCC v. Fox Television Stations, Inc., 567 U.S. 239, 248, 258 (2012) (finding the FCC’s rule unconstitutionally vague as applied to fleeting expletives). These are cases where a listener (here, a parent making choices on behalf of their child) cannot effectively make a choice not to receive the unwanted material because of the linear, real-time nature of broadcast audio and video. The character of the channel changes more quickly than the listener can flip away.

Second, sometimes children want to watch shows their parents do not want them to. Nominally, the theory here is that parents cannot constantly supervise their children’s TV viewing; stations have to do the filtering work that parents cannot.89See J.M. Balkin, Media Filters, the V-Chip, and the Foundations of Broadcast Regulation, 45 Duke L.J. 1131, 1136–38 (1996) (arguing persuasively that the difficulty of parental supervision is the real import of courts’ language that broadcast media are uniquely “pervasive”). This is why the FCC’s indecency regulations are confined to the hours from 6:00 AM to 10:00 PM each day: at night, when indecency regulations do not apply, kids are assumed to be in bed and not watching TV.9047 C.F.R. § 73.3999(b) (2023). Obscenity regulations, unlike indecency rules, apply at all hours of the day. Id. § 73.3999(a). The indecency rules are an incursion on adults’ abilities as listeners to choose what speech they want to receive. They are an exception to the normal rule that willing listeners beat unwilling listeners. The justification is simply the usual one offered so often in American law: protecting the supposed innocence of the young from the purportedly corrupting influence of being aware that sex is a thing that exists. The eight hours at night when indecency rules do not apply serve as a concession to adults’ interests as listeners.

I say that this is “nominally” the theory of broadcast indecency regulation because it only really makes sense in a world where the main audio and video media are broadcast—a world we have not lived in for decades. Cable, satellite, and other subscription services have never been subject to the indecency rules. Here, the theory is that parents can choose whether or not to subscribe, presumably in a different way than they could choose whether or not to have a TV. Thus, they have an upfront choice that they can use to prevent their children from receiving unwanted indecent material. If you do not want your kids to watch Skinemax late at night, do not get cable, or do not pay extra for premium channels. Similar laws and similar logic apply to “over-the-top” broadcast services on the Internet, like ESPN+’s live sports games. If you do not like it, do not subscribe.

At times, the government has tried to impose more stringent filtering rules on broadcasters. Listeners’ choices provide a simple and compelling explanation of where the doctrine has come to rest. Consider United States v. Playboy Entertainment Group, Inc., where § 505 of the 1996 Telecommunications Act required cable operators to “fully scramble or otherwise fully block”91Codified at 47 U.S.C. § 561(a). sexually explicit programs except between the hours from 10:00 PM to 6:00 AM the next day.92United States v. Playboy Ent. Grp., Inc., 529 U.S. 803, 806 (2000). Of course, most cable operators already scrambled sexually explicit channels for non-subscribers, and sexually explicit channels like Playboy Television were typically “premium” offerings sold à la carte, so only paying subscribers to these specific channels would have a converter box to descramble them.93See id. at 807. So far, this was simply a case of parental choice over what broadcast services to subscribe to.

The technological complication was “signal bleed”; the analog scrambling technologies available in the 1990s could not prevent portions of the audio and video from leaking through, albeit in somewhat garbled form.94Id. at 807–08. To Congress, signal bleed meant that existing scrambling by itself was insufficient, and so cable companies would need to “fully block” such content if they could not “fully scramble” it. However, the Supreme Court observed that there was a less-restrictive alternative to fully banning a channel—“block[ing] unwanted channels on a household-by-household basis.”95Id. at 815. Indeed, this capacity was already required of cable systems by § 504 of the Act,96Codified at 47 U.S.C. § 560. so the law contained its own less-restrictive alternative. In other words, a legal regime requiring upstream filtering for all listeners by broadcast intermediaries was unconstitutional because there was a downstream alternative that gave individual listeners a more granular choice.

A more technically complex broadcast filtering system is the “V-chip,” which the 1996 Telecommunications Act required in all televisions shipped through interstate commerce.9747 U.S.C. § 330(c)(1); see generally Balkin, supra note 89. The Act describes the V-chip bloodlessly as “a feature designed to enable viewers to block display of all programs with a common rating,”9847 U.S.C. § 303(x). but the intent and implementation were that the rating systems would flag programs with sexual, violent, or other adult content. While the V-chip is mandated by law, the ratings that it interprets are not. The TV Parental Guidelines, which include classic bangers like TV-14-LS (many parents would find the contents unsuitable for children under 14 because of crude language and sexual situations), are “voluntarily rated by broadcast and cable television networks, or program producers.”99Frequently Asked Questions, TV Parental Guidelines, http://tvguidelines.org/faqs.html [https://perma.cc/CMF3-PQWK]. Indeed, there is a strong argument that a mandatory rating system would constitute unconstitutional compelled speech. See Book People, Inc. v. Wong, 91 F.4th 318, 336–40 (5th Cir. 2024) (holding unconstitutional a mandatory self-applied age-rating system for websites). Overall use of the V-chip seems to have peaked at about 15 percent of parents.100Henry J. Kaiser Family Foundation, Parents, Children, & Media: A Kaiser Family Foundation Survey, KFF, https://www.kff.org/wp-content/uploads/2013/01/entmedia061907pres.pdf [https://web.archive.org/web/20250221161327/https://kff.org/wp-content/uploads/2013/01/entmedia061907pres.pdf].
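Functionally, the V-chip’s logic fits in a few lines. A minimal sketch, assuming a simplified rating vocabulary; the crucial point is that the blocked set is configured by the household, not by the regulator.

```python
# The household decides which voluntary ratings to block; the chip simply
# compares each program's transmitted rating against that configuration.
BLOCKED_RATINGS = {"TV-MA", "TV-14-LS"}  # set by the parent, not the FCC

def chip_allows(program_rating: str, blocked=BLOCKED_RATINGS) -> bool:
    """Downstream filtering: the signal still reaches every home, but each
    household chooses whether to render a rated program viewable."""
    return program_rating not in blocked

print(chip_allows("TV-G"))      # True
print(chip_allows("TV-14-LS"))  # False under this household's settings
```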

It is enlightening to consider the V-chip, like § 504, as a mechanism for creating listener choice under the choice-unfriendly conditions of broadcast. In both cases, signals are still transmitted indiscriminately to all listeners, but in both cases, listeners can individually choose whether to opt in or opt out of making those signals intelligible. Section 504 does so in a less granular way (entire channels), while the V-chip does so in a more granular way (individual programs), but the general idea is the same. It is not a coincidence that in both cases, the regulatory regime converged on a technical system that put more choices in the hands of individual households. This overall downstream movement of choices about speech—from speakers and intermediaries to listeners; from “push” media to “pull” media—is one of the most significant trends in recent media history.

B. Delivery

Now consider filtering rules that help unwilling listeners avoid unwanted deliveries. The First Amendment does not operate directly here; outside of some narrow contexts involving a “captive audience,” there is no First Amendment right not to be spoken to.101See Frisby v. Schultz, 487 U.S. 474, 487–88 (1988) (upholding an ordinance against residential picketing on the grounds that people are captive audiences in their own homes); Snyder v. Phelps, 562 U.S. 443, 459–60 (2011) (rejecting liability for funeral protests on the ground that the mourners were not a captive audience when the protesters “stayed well away from the memorial service”). Instead, laws designed to protect listeners from unwanted communications in delivery media are generally constitutional, provided that they are suitably tailored to the actual harms suffered by listeners who are genuinely unwilling.

The most obvious example is that anti-harassment laws have repeatedly been upheld when they involve one-to-one communications.102E.g., Lebo v. State, 474 S.W.3d 402, 407 (Tex. Ct. App. 2015) (upholding conviction for repeatedly sending threatening emails and telephone calls to victim). Repeated telephone calls or harassing emails can be the subject of valid restraining orders, civil judgments, or criminal convictions.103See, e.g., 47 U.S.C. § 223(a) (prohibiting telephone harassment). See also United States v. Lampley, 573 F.2d 783, 788 (3d Cir. 1978) (upholding constitutionality of § 223(a)); United States v. Darsey, 342 F. Supp. 311, 312–14 (E.D. Pa. 1972) (describing problems § 223(a) was meant to solve). See generally Genevieve Lakier & Evelyn Douek, The First Amendment Problem of Stalking: Counterman, Stevens, and the Limits of History and Tradition, 113 Calif. L. Rev. 143, 170–77 (2025) (discussing history of anti-stalking law). The key here, as I argued in Listeners’ Choices, is that these restrictions do not prevent speakers from addressing willing listeners.104Grimmelmann, supra note 1, at 392. They remain free to telephone anyone else they want; only one particular number is forbidden. The legal system can therefore protect the unwilling victims of harassment without interfering in the core First Amendment relationship between willing speaker and willing listener.105See generally Leslie Gielow Jacobs, Is There an Obligation to Listen?, 32 U. Mich. J.L. Reform 489 (1999). An order requiring a speaker to take down a blog post about the victim interferes with that relationship; an order requiring them to stop sending direct messages to the victim does not.106See Volokh, supra note 15, at 742–43 (making one-to-many vs. one-to-one distinction).

Listeners can opt out of unwanted one-to-one commercial speech. The Controlling the Assault of Non-Solicited Pornography and Marketing Act (“CAN-SPAM”) for email, the Telephone Consumer Protection Act (“TCPA”) for telephone and Short Message Service (“SMS”), Do-Not-Call for telephone, and the TCPA for faxes all broadly prohibit sending certain types of commercial solicitations to unwilling listeners. CAN-SPAM uses an opt-out system; a sender gets one bite at the apple but must refrain from further emails once a recipient objects.10715 U.S.C. § 7704(a)(3)(A)(i). With some exceptions, the TCPA prohibits the use of automated dialers and prerecorded messages (that is, bulk communications particularly unlikely to be of interest to individuals) unless the recipient affirmatively opts in.10847 U.S.C. § 227(b)(1)(B). Do-Not-Call bars all unsolicited commercial calls to numbers on the list,10915 U.S.C. § 6151; 16 C.F.R. § 310.4(b)(1)(iii)(B) (2024). and the TCPA bars all unsolicited commercial faxes.11047 U.S.C. § 227(b)(1)(C). All of these laws have been upheld against First Amendment challenges.111See generally Mainstream Mktg. Servs., Inc. v. FTC, 358 F.3d 1228 (10th Cir. 2004) (discussing Do-Not-Call); United States v. Smallwood, No. 3:09-CR-249-D(07), 2011 U.S. Dist. LEXIS 76880 (N.D. Tex. July 15, 2011) (discussing CAN-SPAM); Moser v. FCC, 46 F.3d 970 (9th Cir. 1995) (discussing telephone provisions of TCPA); Missouri ex rel. Nixon v. Am. Blast Fax, Inc., 323 F.3d 649 (8th Cir. 2003) (discussing fax provisions of TCPA).
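A minimal sketch of CAN-SPAM’s opt-out structure (the suppression list and function names are hypothetical illustrations, not the statute’s terms): the first commercial email is permitted, but an objection binds the sender from then on.

```python
suppression_list = set()  # recipients who have objected (hypothetical store)

def may_send(recipient: str) -> bool:
    """Opt-out regime: sending is lawful until this recipient objects."""
    return recipient not in suppression_list

def handle_unsubscribe(recipient: str) -> None:
    """An opt-out request permanently adds the recipient to the list."""
    suppression_list.add(recipient)

print(may_send("pat@example.com"))   # True: one bite at the apple
handle_unsubscribe("pat@example.com")
print(may_send("pat@example.com"))   # False: the listener has opted out
```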

The First Amendment rule for unwanted postal mail is even stronger. In Rowan v. United States Post Office Department, the Supreme Court upheld a law under which “a person may require that a mailer remove his name from its mailing lists and stop all future mailings to the householder.”112Rowan v. U.S. Post Off. Dep’t, 397 U.S. 728, 729 (1970). Although the law was framed in terms of allowing recipients to opt out of receiving “erotically arousing or sexually provocative” advertisements,113Id. at 730. it allowed recipients “complete and unfettered discretion in electing whether or not [they] desired to receive further material from a particular sender,”114Id. at 734. and the legislative history indicated that neither the postal service nor a reviewing court could “second-guess[]” the recipient’s decision.115Id. at 739 n.6. “Nothing in the Constitution compels us to listen to or view any unwanted communication,” wrote Chief Justice Burger for a unanimous Court.116Id. at 737. Compare Rowan with Bolger v. Youngs Drug Products Corp., in which the Court held a law prohibiting the mailing of contraceptive advertising unconstitutional:117Bolger v. Youngs Drug Prods. Corp., 463 U.S. 60, 72 (1983). that is, a prohibition on the use of mailings was constitutional when the prohibition was requested by the recipient (Rowan) but unconstitutional when the prohibition was imposed by the government (Bolger).

Although Rowan is sometimes discussed as a captive-audience case,118E.g., Snyder v. Phelps, 562 U.S. 443, 459–60 (2011). it is better understood as a case about delivery media. Consider Frisby v. Schultz, a true captive-audience case: there is nowhere to go to hide from protesters outside your door, so a law prohibiting residential picketing is constitutional.119Frisby v. Schultz, 487 U.S. 474, 487–88 (1988). By contrast, the Supreme Court has treated self-help as effective against unwanted mail. Bolger stated that the “short, though regular, journey from mail box to trash can is an acceptable burden, at least so far as the Constitution is concerned.”120Bolger, 463 U.S. at 72 (internal quotation omitted). The only way this Bolger dictum can be squared with Rowan is if the basis of Rowan’s holding is listeners’ rights against unwanted communications, rather than the householder’s status as a captive audience in the home against unwanted postal mail.

It is also widely accepted that there is no First Amendment problem if a delivery carrier implements some form of filtering or blocking at the request of a user. Wireless and landline telephone companies offer call blocking to their customers, which allows a user to block all further calls from a number. Indeed, FCC regulations explicitly permit providers to block calls that are likely to be unwanted based on “reasonable analytics”12147 C.F.R. § 64.1200(k)(3)(i) (2023). so long as the recipient has an opportunity to opt out of the blocking.122Id. § 64.1200(k)(3)(iii). Email filtering is also very widely deployed. Some users do the filtering themselves, manually or with an app, but many rely on the filtering (both explicit blacklists and machine-learning classifiers) offered by their email providers. Here again, § 230 plays a role: the most common reason that delivery media block “otherwise objectionable” communications is that their users object to them, and spam is the paradigm case.123See, e.g., Republican Nat’l Comm. v. Google, Inc., No. 2:22-cv-01904-DJC-JBP, 2023 U.S. Dist. LEXIS 149076, at *11 (E.D. Cal. Aug. 24, 2023).
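These regimes share a structure—filtering by the intermediary on by default, with the individual listener able to override it in either direction—that can be sketched as follows. The thresholds and field names are hypothetical, not drawn from the FCC’s rules.

```python
def should_deliver(call, prefs, spam_score):
    """Carrier-side blocking with listener overrides: the listener's own
    blocklist always controls, and analytics-based blocking applies only
    until the listener opts out of it."""
    if call["from"] in prefs["blocked_numbers"]:
        return False  # the listener's explicit, per-number choice
    if prefs["analytics_blocking"] and spam_score > 0.9:
        return False  # default blocking based on "reasonable analytics"
    return True

prefs = {"blocked_numbers": {"+15550001111"}, "analytics_blocking": True}
print(should_deliver({"from": "+15550001111"}, prefs, spam_score=0.1))   # False
print(should_deliver({"from": "+15552223333"}, prefs, spam_score=0.95))  # False
prefs["analytics_blocking"] = False  # the opt-out the regulations preserve
print(should_deliver({"from": "+15552223333"}, prefs, spam_score=0.95))  # True
```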

Finally, many laws require speakers to accurately identify themselves upstream when using delivery media so that listeners downstream can decide whether or not to receive their speech. CAN-SPAM prohibits false or misleading header information,12415 U.S.C. § 7704(a)(1). prohibits deceptive subject lines,125Id. § 7704(a)(2). and requires that advertisements be disclosed as such.126Id. § 7704(a)(5)(i). The Truth in Caller ID Act prohibits spoofing caller ID information “with the intent to defraud, cause harm, or wrongfully obtain anything of value.”12747 U.S.C. § 227(e)(1). The Junk Fax Prevention Act of 2005 (“JFPA”) requires clear “identification of the business, other entity, or individual sending the [fax] message.”128Id. § 227(d)(1)(B). Although there is a right to speak anonymously under many circumstances, there are limits on how far a speaker can go in lying about their identity to trick a listener into hearing them out. Importantly, some of these laws require delivery intermediaries to implement the infrastructure for accurate identification. The FCC, for example, requires telephone providers to implement a comprehensive framework against caller-ID spoofing known as “secure telephone identity revisited and signature-based handling of asserted information using tokens standards,” otherwise abbreviated as “STIR/SHAKEN.”12947 C.F.R. § 64.6300 (2023).

C. Hosting

Listener choices play a central role in the justifications for hosting providers’ First Amendment rights—and also in the justification for speakers’ access rights to hosting platforms. These justifications presume that listeners can voluntarily choose to engage with hosted content they want and to avoid hosted content they do not want. In the terminology of Listeners’ Choices, listeners can be asked to bear the necessary “separation costs” because they can easily and inexpensively choose where to click.130Grimmelmann, supra note 1, at 395–96. It follows, then, that unwilling listeners’ objections to content are not a sufficient reason to prevent it from being hosted for willing listeners.

The Supreme Court’s decision in Snyder v. Phelps is a nice example.131See generally Snyder v. Phelps, 562 U.S. 443 (2011). In addition to its funeral protests, the Westboro Baptist Church has a website that is, if anything, more offensive and upsetting. However, a website is even easier for an unwilling listener to avoid. The Church physically picketed at Albert Snyder’s son’s funeral, but he only found the website “during an Internet search for his son’s name.”132Id. at 449 n.1. Unsurprisingly, he pressed only the funeral-protest theory before the Supreme Court and abandoned his tort claims based on the website.133Id. The Court held that the First Amendment protected the Church’s picketing, and the argument is even stronger for the website.

Now consider whether hosting providers can have responsibilities to avoid carrying harmful-to-minors material. To simplify only slightly, the history of anti-indecency regulation is that some adults have tried to restrict minors’ access to sexually themed content by passing upstream filtering laws requiring speakers and hosting platforms to prevent the posting of such content. The courts have responded by invalidating these laws whenever listener-controlled downstream filtering is a plausible alternative. Indeed, it is striking how many contexts the same basic rationale has worked in.

Start with Sable Communications of California, Inc. v. FCC, in which federal law regulated “dial-a-porn” services by prohibiting the transmission of indecent interstate commercial telephone messages.134Sable Commc’ns of Cal., Inc. v. FCC, 492 U.S. 115, 118 (1989). While the prohibition might have been constitutional as to minors, adults have a constitutional right to view indecent but not obscene material. Because the statute prohibited transmission to adults as well, it restricted protected speech, and therefore was unconstitutional.

Put this way, Sable is a classic hosting case of both willing and unwilling listeners. The fact that the speech might reach some unwilling (minor) listeners does not mean that it can be prohibited entirely in such a way as to deprive willing (adult) listeners of it. Indeed, this first-cut explanation will apply perfectly well to almost all of the cases in this section. It is not wrong.

However, Sable is also a filtering case. The FCC had previously considered multiple technologies to block minors without blocking adults, including credit-card verification, access codes that would be provided only following an age-verification process, message scrambling requiring a descrambler that only adults would be able to purchase, and customer-premises blocking, in which subscribers could block their phones from being able to call entire exchanges (including the paid numbers over which Sable and other dial-a-porn operators provided their services). The Court specifically identified these technical schemes as plausible “less restrictive means, short of a total ban, to achieve the Government’s interest in protecting minors.”135Id. at 129.

These are all technologies to distinguish adults from minors, but they are also all filtering technologies. All four of them require a user to take an affirmative step to listen to particular speech. Indeed, the act of dialing a phone number itself is an affirmative step that these other mechanisms could piggyback on. This is why I describe Sable as a close cousin to a hosting case. To be sure, Sable Communications was delivering its own speech and not that of third parties, but it was fundamentally sending content to listeners on demand, and in such a way that they could predict the general outlines of the speech they were about to receive. (This fact alone is sufficient to distinguish FCC v. Pacifica Foundation and the other broadcast-indecency cases.136FCC v. Pacifica Found., 438 U.S. 726, 748–49 (1978).)

The same arc is visible in the Supreme Court’s caselaw on indecency on the Internet. The first stop was Reno v. American Civil Liberties Union.137See generally Reno v. Am. C.L. Union, 521 U.S. 844 (1997). The Communications Decency Act prohibited the transmission of indecent or sexual material to minors138Id. at 859–60.—including a good deal of material that was fully constitutional for adults to receive.139Id. at 870–76. The government tried to defend the statute by arguing that it only required intermediaries to refrain from sending such material to minors, while leaving them free to send it to adults.140Id. at 876–79. However, the Court held that “this premise is untenable”—that “existing technology did not include any effective method for a sender to prevent minors from obtaining access to its communications on the Internet without also denying access to adults.”141Id. at 876. In other words, the absence of effective age verification turned a de jure rule against sending indecent material to minors into a de facto rule against hosting it in general.142The Supreme Court is currently reconsidering the constitutional status of age-verification technology, in the context of numerous state laws requiring pornographic sites to implement age verification. See Free Speech Coal., Inc. v. Paxton, 95 F. 4th 263, 284 (5th Cir. 2024), cert. granted, 144 S. Ct. 2714 (2024).

Seven years later, in Ashcroft v. American Civil Liberties Union, the Supreme Court confronted a more narrowly drafted law, the Child Online Protection Act (“COPA”).143See generally Ashcroft v. Am. C.L. Union, 542 U.S. 656 (2004). Again, the statute prohibited sending to minors certain material that was constitutional for adults to receive.144Id. at 661–62. This time, however, the affirmative defenses were broader; providers were protected as long as they required a credit card, digital age verification, or any other “reasonable measures that are feasible under available technology.”145Id. at 662. The Court held that COPA was unconstitutional because “blocking and filtering software”—software operated and controlled by parents to limit the sites their children can access—was a less restrictive and more effective alternative.146Id. at 666–70.

As in Playboy Entertainment Group, the availability of more effective downstream filtering technologies meant that a law requiring upstream filtering was unconstitutional. However, unlike in Playboy Entertainment Group, the downstream filters were made available by third parties. The fact that parents could install their own filtering software meant that website hosts were under no duty to do their own filtering. This is a listener-choice-facilitating rule: Yes, it transfers some of the burdens of filtering from intermediaries to listeners, but it also means that each family can choose for itself how to tune its filters, if any.

In United States v. American Library Ass’n, the Supreme Court upheld the provisions of the Children’s Internet Protection Act (“CIPA”), which conditioned federal funding to schools and libraries on their installation of filtering software.147United States v. Am. Libr. Ass’n, Inc., 539 U.S. 194, 214 (2003). A four-Justice plurality held that the condition was a valid exercise of Congress’s Spending Clause power and that library Internet access was not a public forum.148Id. at 205–06. Meanwhile, Justice Kennedy’s and Justice Breyer’s concurrences in the judgment made nuanced arguments about listeners’ choices. Justice Kennedy’s argument rested on the government’s claim that “on the request of an adult user, a librarian will unblock filtered material or disable the Internet software filter without significant delay”—that is, CIPA allowed willing adult listeners to decide for themselves what sites to view.149Id. at 214. Justice Breyer made a similar point, arguing that an unblocking request was a “comparatively small burden.”150Id. at 220. Whether or not these claims are empirically accurate, the general principle is consistent with a deference to listener-controlled choices about filtering, subject only to the carve-out that minors are not regarded as having the autonomy to choose to view certain material that their elders regard as harmful to them.

D. Selection

I have argued that selection generally facilitates listener choices among speech, and that government attempts to alter platforms’ selection decisions interfere with listeners’ practical ability to find the content that they want. This is not to say that platforms’ selection decisions are ideal or give listeners the full degree of choices they might enjoy. Platforms will almost always get some users’ choices wrong some of the time. Every update you scroll past or search result you ignore is a mistake from your perspective. Platform-provided selection is better than the chaos of content without selection, but there is almost always room to improve.151See generally James Grimmelmann, The Virtues of Moderation, 17 Yale J.L. & Tech. 42 (2015) (discussing moderation in online communities).

It is helpful, then, to recognize that the bundling of hosting and selection on today’s social-media platforms may be a bug rather than a feature. The previous subsection argued that separation of hosting and selection could be permissible as a way for government to ensure that speakers are able to be heard by listeners who genuinely want to hear them (hosting) while not forcing their speech on listeners who do not (selection). However, there is another advantage to clearly separating the two functions, whether required by regulation or voluntarily adopted by a platform.

What would a world where social-media platforms separated hosting from selection look like? The short answer is that it would look much more like web search already does. Hosting providers make content available at speakers’ request, with stable URLs at reachable IP addresses, and transmit that content to listeners at listeners’ request. Meanwhile, search engines index the content and provide recommendations of relevant content to listeners, also at listeners’ request. Listeners have a choice of competing search engines to help them make their choice among competing speakers. The system is not perfect—Google has a dominant market share for general web search in the United States—but there is competition for those users who are willing to use other search engines. For example, Bing, DuckDuckGo, and Kagi are three highly creditable alternatives.

Several commentators have described a similar possible separation for social media. One proposal from a group of Stanford researchers is for “middleware,” defined as “software, provided by a third party and integrated into the dominant platforms, that would curate and order the content that users see.”152Francis Fukuyama, Barak Richman, Ashish Goel, Roberta R. Katz, A. Douglas Melamed & Marietje Schaake, Middleware for Dominant Social Platforms: A Technological Solution to A Threat to Democracy, Stan. Cyber Pol’y Ctr. (2021), https://fsi-live.s3.us-west-1.amazonaws.com/s3fs-public/cpc-middleware_ff_v2.pdf [https://perma.cc/SZ9Z-AW3P]; see also Francis Fukuyama, Richard Reisman, Daphne Keller, Aviv Ovadya, Luke Thorburn, Jonathan Stray & Shubhi Mathur, Shaping the Future of Social Media with Middleware, Found. for Am. Innovation (Dec. 2024), https://cdn.sanity.io/files/d8lrla4f/staging/1007ade8eb2f028f64631d23430ee834dac17f8e.pdf/Middleware [https://perma.cc/7TBA-UUR3]. Users on the platform would rely on the platform for hosting speakers’ content, but third-party middleware would do the selection. The first and most obvious virtue of middleware is that it introduces competition into the selection process, even when a platform is “dominant”; a monopoly on hosting does not automatically translate into a monopoly on selection.
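A minimal sketch of the middleware architecture, with hypothetical data and selectors: the hosting layer is shared and neutral, and each user chooses which third-party selection policy runs on top of it.

```python
def hosted_posts():
    """The platform's hosting function: everything lawfully posted."""
    return [
        {"author": "alice", "topic": "sports",  "text": "game recap"},
        {"author": "bob",   "topic": "fashion", "text": "runway notes"},
    ]

# Competing middleware providers, each with its own selection policy.
def sports_first(posts):
    return sorted(posts, key=lambda p: p["topic"] != "sports")

def fashion_first(posts):
    return sorted(posts, key=lambda p: p["topic"] != "fashion")

# Each user picks a selector; neither choice alters what is hosted.
my_feed = sports_first(hosted_posts())
your_feed = fashion_first(hosted_posts())
print(my_feed[0]["author"], your_feed[0]["author"])  # alice bob
```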

The authors of the Stanford proposal argue that middleware would “dilute[] the enormous control that dominant platforms have in organizing the news and opinion that consumers see.”153Fukuyama, Richman, Goel, Katz, Melamed & Schaake, supra note 152, at 6. This is entirely correct, but I would put the point differently. Middleware pushes control from a platform towards its users, specifically towards users as listeners. An integrated platform benefits from its position at the center of the two-sided market for hosting, even if its selection is disappointing to users. However, when selection is broken out, selection intermediaries will attract users precisely to the extent that they succeed in satisfying those users’ desire for useful advice about what speech to listen to. That is, middleware selection providers compete along the right axis.

A close relative of middleware—or perhaps a subset of it—is “user agents”: software controlled by the end user that takes the content from a platform and curates it. The difference between middleware and a user agent is that middleware is integrated with the platform and takes over the selection function, while a user agent starts from the content selected by the platform and performs a second round of selection on it. For example, an ad blocker integrated into a user’s browser takes the content selected by a website and curates it by removing the ads. I have argued that these user agents are important for user autonomy in deciding what software to run on their computers, and a similar argument applies to users’ autonomy over what speech they receive.154James Grimmelmann, Spyware vs. Spyware: Software Conflicts and User Autonomy, 16 Ohio St. Tech. L.J. 25 (2020).
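The contrast with middleware can likewise be sketched in a few lines (again with hypothetical names): a user agent does not replace the platform’s selection but runs a second, listener-controlled pass over the platform’s output, just as an ad blocker does.

```python
def platform_selection(items):
    """First round: the platform's own ranking, ads included."""
    return sorted(items, key=lambda i: i["engagement"], reverse=True)

def user_agent(selected, drop_ads=True):
    """Second round, run under the listener's control: re-filter whatever
    the platform chose to show. An ad blocker is the familiar example."""
    return [i for i in selected if not (drop_ads and i["is_ad"])]

feed = [
    {"title": "Cousin's apple pie",  "engagement": 0.7, "is_ad": False},
    {"title": "Sponsored: MegaCola", "engagement": 0.9, "is_ad": True},
]
for item in user_agent(platform_selection(feed)):
    print(item["title"])  # only the pie survives the second pass
```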

Ben Thompson, a technology and business analyst and journalist, offered a fascinating road-not-taken proposal for Twitter (prior to its transformation into X by Elon Musk).155Ben Thompson, Back to the Future of Twitter, Stratechery (Apr. 18, 2022), https://stratechery.com/2022/back-to-the-future-of-twitter [https://perma.cc/3P3G-94KG]. Thompson argued that Twitter should be split in two: TwitterServiceCo would be “the core Twitter service, including the social graph”; TwitterAppCo would be “all of the Twitter apps and the advertising business.”156Id. TwitterAppCo would pay TwitterServiceCo for application programming interface (“API”) access to post to timelines and read tweets, but so could other companies. As Thompson observes, this solution would “cut a whole host of Gordian Knots”: it would make it easier for new social-media entrants to compete on offering better clients or better content moderation; it would pull many controversial content-moderation decisions closer to the users they directly affect; and it would enable a far greater diversity of content moderation policies (both geographically and based on user preferences).157Id.
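
A toy rendering of Thompson’s split might look as follows. The class and method names are my own, not Thompson’s, and a real API boundary would be an authenticated network protocol rather than direct method calls; the sketch only illustrates how the social graph and post store can be decoupled from clients and their moderation policies.

```python
class ServiceCo:
    """Owns the social graph and the post store; sells API access."""
    def __init__(self):
        self.follows: dict[str, set[str]] = {}
        self.timelines: dict[str, list[str]] = {}

    # The API surface: any licensed client, not just an in-house app,
    # may call these methods.
    def api_follow(self, follower: str, followed: str) -> None:
        self.follows.setdefault(follower, set()).add(followed)

    def api_post(self, user: str, text: str) -> None:
        self.timelines.setdefault(user, []).append(text)

    def api_read(self, reader: str) -> list[str]:
        return [t for u in self.follows.get(reader, set())
                for t in self.timelines.get(u, [])]

class AppCo:
    """A client built on the API; moderation policy lives here."""
    def __init__(self, service: ServiceCo, banned_words: set[str]):
        self.service = service
        self.banned_words = banned_words

    def feed(self, reader: str) -> list[str]:
        # Competing clients can apply different moderation policies to
        # the same underlying graph and posts.
        return [t for t in self.service.api_read(reader)
                if not any(w in t.lower() for w in self.banned_words)]
```

Two clients built on the same ServiceCo could enforce very different banned_words sets, which is precisely the diversity of content-moderation policies Thompson envisioned.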

Needless to say, this was not the route that Musk followed after his acquisition of Twitter—but it is much closer to the route that many post-Twitter social-media services are following. In their own ways, Mastodon, Bluesky, and Threads have embraced a version of the middleware ideal, but with an interesting twist. All three of these systems have a “federated” approach to hosting. Users have a direct affiliation with a server or system; they upload their posts to it, and they read other users’ posts through it.

So far, so familiar. The difference is that these services all federate with other services providing similar functionality to their own users. They copy posts from other servers; they make their own users’ posts available for other servers to copy. The result is that content posted by a user anywhere is available to all users everywhere. As a consequence, any given server has less power over its users; they can migrate to a different server without cutting themselves off from their connections on the social graph. Mastodon, for example, has built-in migration functionality that allows users to change servers and have their contacts automatically update subscriptions to the new one.
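
A stylized sketch of this federated architecture follows, with all names invented for illustration. Real systems (Mastodon’s ActivityPub, Bluesky’s AT Protocol) involve signed messages, inboxes, and follower migration, none of which is modeled here; the sketch shows only why copying posts across servers loosens any one server’s grip on its users.

```python
class Server:
    """One federated server: hosts its own users, mirrors its peers."""
    def __init__(self, name: str):
        self.name = name
        self.peers: list["Server"] = []
        self.posts: list[tuple[str, str]] = []  # (author@server, text)

    def publish(self, author: str, text: str) -> None:
        item = (f"{author}@{self.name}", text)
        self.posts.append(item)
        for peer in self.peers:  # federate: push a copy to each peer
            if item not in peer.posts:
                peer.posts.append(item)

# Two servers that federate with each other:
alpha, beta = Server("alpha.example"), Server("beta.example")
alpha.peers.append(beta)
beta.peers.append(alpha)

alpha.publish("carol", "hello, fediverse")
# carol's post is readable on both servers, so if she later moves from
# alpha to beta she keeps her audience; the server she leaves has no
# hold over her connections on the social graph.
assert ("carol@alpha.example", "hello, fediverse") in beta.posts
```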

Federation also has substantial content-moderation benefits because, like middleware, it pushes content moderation closer to the listeners who are directly affected by it. Each federated server can have its own content-moderation policy—that is, each server can implement its own selection algorithm. This is not quite middleware as such, in that a server combines hosting and selection. However, it is much closer than a fully integrated platform would be. Indeed, once it meets a baseline of technical competence and reliability, a federated server’s principal differentiator is its moderation policy. So here, too, users who prefer a particular set of policies as listeners have the ability to choose on that basis. This, too, is speech-promoting.

The most careful theorization of this model is Mike Masnick’s Protocols, Not Platforms.158Mike Masnick, Protocols, Not Platforms: A Technological Approach to Free Speech, Knight First Amend. Inst. at Colum. Univ. (Aug. 21, 2019), https://knightcolumbia.org/content/protocols-not-platforms-a-technological-approach-to-free-speech [https://perma.cc/ET69-VQ4E]. Masnick argues that the key move is to separate a platform into a standardized open protocol and a particular proprietary implementation of that protocol. The interoperable nature of the protocol is what ensures that implementations are genuinely competing on the basis of users’ preferences over content, and not just based on the lock-in network effects of a single platform that has the largest userbase. That is, interoperability enables migration, which enables competition, which in turn promotes speech values. Masnick gives a detailed argument for why this model promotes diversity in users’ speech preferences. I would add only that this diversity is primarily diversity of users as listeners.
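
A minimal sketch of the protocol/implementation split may help. The JSON wire format below is invented for illustration and is far simpler than any real protocol; the point is that once the format is open and documented, any number of competing clients can produce and consume it, so users can switch implementations without abandoning the network.

```python
import json

# The open protocol: a documented wire format anyone can implement.
def encode_post(author: str, text: str, ts: float) -> str:
    return json.dumps({"author": author, "text": text, "ts": ts})

def decode_post(wire: str) -> dict:
    return json.loads(wire)

# Two competing "proprietary" implementations that nonetheless
# interoperate, because both speak the shared format:
class MinimalClient:
    def render(self, wire: str) -> str:
        p = decode_post(wire)
        return f"{p['author']}: {p['text']}"

class FancyClient:
    def render(self, wire: str) -> str:
        p = decode_post(wire)
        return f"[{p['ts']:.0f}] {p['author']} says: {p['text']!r}"

msg = encode_post("carol", "hello", 1700000000.0)
assert MinimalClient().render(msg) == "carol: hello"
```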

To finish, I would like to note a type of selection that can come closer to the middleware goal of facilitating listener choice, even within proprietary platforms. Shareable blocklists (a) allow users to make and share a list of users they do not want to see or receive any content from, and (b) allow other users to import and use another’s shared blocklist.159See generally R. Stuart Geiger, Bot-Based Collective Blocklists in Twitter: The Counterpublic Moderation of Harassment in a Networked Public Space, 19 Info. Commc’n & Soc’y 787 (2016). Blocking is a relatively crude form of selection; it does not necessarily work against abusers or spammers who change their identity or use sock puppet accounts, nor does it let through individual worthwhile posts from users who are otherwise blocked. Still, blocklists satisfy the key desideratum: they are listener-controlled filters. Shareable blocklists have been used for email, on Twitter (before X discontinued this feature), and for ad-blocking on the web, among other settings.
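
A sketch of the two operations the text describes, using a hypothetical JSON export format:

```python
import json

def export_blocklist(blocked: set[str]) -> str:
    # (a) a user makes a shareable list of accounts to filter out...
    return json.dumps(sorted(blocked))

def import_blocklist(shared: str, own: set[str]) -> set[str]:
    # (b) ...and another user merges it into their own filter.
    return own | set(json.loads(shared))

def filter_feed(feed: list[tuple[str, str]],
                blocked: set[str]) -> list[tuple[str, str]]:
    # Crude, as noted above: keyed on identity, so sock puppets evade
    # it, and worthwhile posts from blocked users are lost.
    return [(author, text) for author, text in feed
            if author not in blocked]

mine = import_blocklist(export_blocklist({"spammer1", "troll2"}), set())
```

The filter can run wherever the listener wants it to; the essential property is that the listener, not the platform, decides whose speech is screened out.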

Conclusion

Internet media come in different bundles of functions than pre-Internet media did. Offline, broadcast combined transmission and selection in a way that made it appear that there was a natural connection between speakers’ access to a platform and listeners’ interests, and that both were naturally opposed to media intermediaries’ own speech claims. All of this was true enough in that context, given the structural constraints of the broadcast medium.

However, the assumption that listeners and speakers are united against intermediaries is simply not true when applied beyond the broadcast context. Instead, we frequently find that intermediaries are listeners’ allies, providing them with useful assistance in finding and obtaining the speech of interest to them—and that they form a united front against speakers trying to push their speech on unwilling listeners. Applying the broadcast analogy in this context can result in making unwilling listeners into captive audiences, all while claiming that it is necessary in the Orwellian name of listeners’ rights.

Instead, I have argued that to think clearly about speech on the Internet, we must distinguish between the functions of delivering, hosting, and selecting content, and that we must see each of them from listeners’ point of view. In such a setting, carefully drafted neutrality rules on delivering and hosting can be genuinely speech-facilitating because they promote listeners’ choices. In contrast, most attempts to regulate selection interfere with listeners’ choices. There are a few exceptions—structural separation, interoperability and middleware, restrictions on self-preferencing, and chronological feed options—but all of them are about giving listeners genuine choice among selection intermediaries, or about ensuring loyalty within the intermediary-listener relationship. Beyond that, selection intermediaries should largely be free to select as they see fit, and listeners should largely be free to use them or not, as they see fit.

Seeing the Internet from listeners’ perspective is a radical leap. It requires making claims about the nature of speech and about where power lies online, which can seem counterintuitive if you are coming from the standard speaker-oriented First Amendment tradition. But once you have made that leap, and everything has snapped into focus again, it is impossible to unsee.160See Eugene Volokh, Cheap Speech and What It Will Do, 104 Yale L.J. 1805, 1834–36 (1995) (presciently arguing that the Internet will lead to an abundance of speech and shift control over that speech from speakers to listeners).

This is not to say that listeners should always get what they want, any more than speakers should. A democratic self-governance theory of the First Amendment might be acutely concerned that groups of like-minded listeners will wall themselves off inside echo chambers and filter bubbles. This is a powerful argument, and to refute it by appealing to a pure listeners’ choice principle is to beg the question. However, even if a shift to listeners’ perspective cannot resolve the debate between self-governance theories and individual-liberty theories—between collective needs and individual choices—such a shift can still clarify these debates. The fear of echo chambers and filter bubbles is fundamentally a concern about listeners’ choices, not one about speakers’ rights. Focusing on what listeners want, and on the consequences of giving it to them, makes clear what is really at stake. It also sheds light on the tradeoffs involved in adopting one media-policy regime as opposed to another.

Listeners online live in a world where countless chattering speakers vie for their attention using every dishonest and manipulative tactic they can—partisans, fraudsters, advertisers, and spammers of every stripe. Selection intermediaries are listeners’ best, and in some cases their only, line of defense against the cacophony; they can be the only way to tune out the racket and hear what listeners actually want to hear. This role gives intermediaries immense power over listeners, but what listeners need is to moderate that power and tip the balance more in their favor, not to eliminate the intermediaries entirely. Being more protective of platforms’ selection decisions gives us more room to be skeptical of their hosting and delivery decisions; it lets us better distinguish when speakers have legitimate claims against platforms and when they do not.

Listeners are at the center of the First Amendment, never more so than online. It is time for First Amendment theory and doctrine to get serious about listeners’ choices among speech on online platforms.

 


* Tessler Family Professor of Digital and Information Law, Cornell Law School and Cornell Tech. I presented an earlier version of this article at The First Amendment and Listener Interests symposium at the University of Southern California on November 8–9, 2024. My thanks to the participants and organizers, and to Aislinn Black, Jane Bambauer, Kat Geddes, Erin Miller, Blake Reid, Benjamin L.W. Sobel, and David Gray Widder. The final published version of this article will be available under a Creative Commons license.

Islands of Algorithmic Integrity: Imagining a Democratic Digital Public Sphere

Introduction

A class of digitally mediated online platforms plays a growing role as a primary source of Americans’ knowledge about current events and politics. Prominent examples include Facebook, Instagram, TikTok, and X (formerly known as Twitter). While only eighteen percent of Americans cited social media platforms as their preferred source of news in 2024, this number had risen by a striking six points since 2023.1Christopher St. Aubin & Jacob Liedke, News Platform Fact Sheet, Pew Rsch. Ctr. (Sept. 17, 2024), https://www.pewresearch.org/journalism/fact-sheet/news-platform-fact-sheet [https://perma.cc/SJ49-28W6]. These platforms also compete in “one of the most concentrated markets in the United States,”2Caitlin Chin-Rothmann, Meta’s Threads: Effects on Competition in Social Media Markets, Ctr. for Strategic & Int’l Stud. (July 19, 2023), https://www.csis.org/analysis/metas-threads-effects-competition-social-media-markets [https://perma.cc/2MQN-YSUR]. as a consequence of network effects and high barriers to entry.3Id. Current trends suggest that social media will soon outpace traditional news websites as the main source for a plurality of Americans’ understanding of what happens in the world.4St. Aubin & Liedke, supra note 1. Such platforms, which I will call “social platforms” here, are thus in practice a central plank of the political public sphere given their growing role in supplying so many people with news.

The role that social platforms play in public life has sparked a small avalanche of worries even before the extraordinary entanglement of big tech’s corporate leadership with the partisan policy projects of the second Trump administration.5This essay was completed in late 2024 and edited in early 2025. I have not tried here to account for the synergistic entanglement of Elon Musk and the Trump White House, nor for the ways in which the X social platform has changed as a result. It is, as I write, too early to say how this exorbitant display of codependency between partisan and technological projects will alter the American public sphere. The worries are diverse. Many commentators have aired concerns about the effects of social-platform use on mental health and sexual mores,6See, e.g., Surgeon General Issues New Advisory About Effects Social Media Use Has on Youth Mental Health, U.S. Dept. of Health & Human Servs. (May 23, 2023), https://www.hhs.gov/about/news/2023/05/23/surgeon-general-issues-new-advisory-about-effects-social-media-use-has-youth-mental-health.html (noting “ample indicators that social media can also pose a risk of harm to the mental health and well-being of children and adolescents”). or the extent of economic exploitation in this platform-based gig economy.7See, e.g., Veena Dubal, On Algorithmic Wage Discrimination, 123 Colum. L. Rev. 1929, 1944 (2023). These important cultural and economic worries are somewhat distinct from worries surrounding the political functions of the digital public sphere. It is the latter’s pathologies, and only those problems, that this essay—as well as the broader symposium on listeners’ rights in which it participates—concentrates on.

Even within the narrower compass of political speech defined in strict and demotic terms, the role of social platforms raises several distinct concerns. I take up three common lines of criticism and concern here. A first line of critique focuses on these platforms’ alleged harmful effects on a broad set of user beliefs and dispositions thought to be needful for democratic life. Social platforms, it is said, pull apart the electorate by feeding them fake news, fostering filter bubbles, and foreclosing dialogue—to the point where democratic dysfunction drives the nation toward a violent precipice. This first argument concerns platforms’ effects on the public at large.

A second common line of argument, by contrast, makes no claim about the median social platform user. It instead focuses on the “radicaliz[ing]” effect of social media engagement on a small handful of users at the ideological margin.8Steven Lee Myers & Stuart A. Thompson, Racist and Violent Ideas Jump from Web’s Fringes to Mainstream Sites, N.Y. Times (June 1, 2022), https://www.nytimes.com/2022/06/01/technology/fringe-mainstream-social-media.html [https://web.archive.org/web/20250219041047/https://www.nytimes.com/2022/06/01/technology/fringe-mainstream-social-media.html]. If even these few users resort to violence to advance their views, it might be said that social media has had a deadly effect.9Id. This is an argument not about social platforms’ effects on the mass of users, but upon the behavior of a small tail of participants in the online world.

Yet a third sort of argument against social platforms does not sound in a strictly consequentialist register. It does not lean, that is, on any empirical evidence as to how users are changed by their engagement. Rather, it is a moral argument that picks out objectionable features of the relationship between platforms and their users. This plainly asymmetrical arrangement, it is said, allows invidious manipulation, exploitation, or even a species of domination. Even if users’ behaviors do not change, these characteristics of the platform-user relationship are said to be insalubrious. Especially given the role that algorithmic design plays in shaping users’ online experiences, it is argued, a morally problematic imbalance emerges between ordinary people and the companies that manage social platforms. In the limiting case, in which there are few potential sources of information and in which those sources are controlled and even manipulated by their owners (usually men of a certain age who are disdainful of civility and truthfulness norms), an acute concern about domination arises.

If one accepts one of these arguments (and I will try to offer both their best versions and to explore their weaknesses in what follows), then there is some reason to think closely about the way social platforms are governed, and to look for regulatory interventions. Such governance might be supplied by platforms’ own endogenous rules, which are usually embodied in their contractual terms of service or other internal procedures (such as mechanisms to dispute a take-down or deplatforming decision). Alternatively, governance could be supplied by exogenous legislation or regulation promulgated by a state. Private governance and legal regulation, of course, are potential substitutes. They can both be used to achieve the same policy goals. But how? What should such governance efforts, whether private or public, aspire to? And which policy levers are available to achieve those aims?

Where a platform employs algorithmic tools to shape users’ experience by determining what they see, the range of potential interventions will be especially large. This is a result of the complexity of common computational architectures today. There are many ways to craft the algorithms on which many platforms run.10See Arvind Narayanan, Understanding Social Media Recommendation Algorithms, Knight First Amend. Inst. 9–12 (March 9, 2023), https://knightcolumbia.org/content/understanding-social-media-recommendation-algorithms [https://perma.cc/9WVD-7NJ6] (discussing common structural elements). And there are many technical choices about which instruments to use, how to calibrate them, and what parameter (engagement? a subset of engagement?) to optimize. Many of these decision points offer opportunities for unavoidably normative choices about the purpose and intended effects of social platforms. Resolving those choices in turn requires some account of what it means exactly to talk about a normatively desirable social platform: That is, what should a social platform do? And for whom?

Such questions take on greater weight given (1) recent regulatory moves by American states to control platforms’ content moderation decisions;11Tyler Breland Valeska, Speech Balkanization, 65 B.C. L. Rev. 903, 905 (2024) (“In 2021 and 2022 alone, state legislators from thirty-four states introduced more than one hundred laws seeking to regulate how platforms moderate user content.”). (2) a recent Supreme Court decision responding to those efforts;12Moody v. NetChoice, LLC, 603 U.S. 707 (2024); see infra text accompanying notes 124–26. and (3) the European Union’s Digital Services Act, a statute that takes yet a different and more indirect tack in modulating platform design and its ensuing costs.13Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and Amending Directive 2000/31/EC (Digital Services Act), 2022 O.J. (L 277) 3 [hereinafter “Digital Services Act”]. Or consider a 2025 U.S. Supreme Court decision, rendered on a tightly expedited schedule, to uphold federal legislation banning TikTok.14TikTok Inc. v. Garland, 145 S. Ct. 57, 72 (2025) (per curiam). The legislation in question is the Protecting Americans from Foreign Adversary Controlled Applications Act, Pub. L. No. 118–50, 138 Stat. 955 (2024). The decision makes the remarkable suggestion that legislative control over social platforms—exercised by reshaping (or cutting off) the ordinary market from corporate control (for example, by forcing or by restricting a sale)—raises only weak First Amendment concerns. Applied broadly, such an exception from close constitutional scrutiny might allow broad state control over social platforms.

My main aim in this essay is to offer a new and fruitful analytic lens for thinking about these problems as questions of democratic institutional design. This is a way of approaching the problem of institutional design, not a set of prescriptions for how to do such design. I do so by pointing to a model of a desirable platform, and then asking how we can move toward that aspiration, and how much movement might be impeded or even thwarted. My aspirational model is not conjured out of the ether; rather, I take inspiration from an idea found in the scholarly literatures in political science and sociology that evaluates pathways of economic development. The idea upon which I draw is that development policy should aim to seed “islands of integrity” into patrimonial or nepotistic state structures as a way of building foundations for a more robust—and hence public-regarding—state apparatus.15For examples of the term in recent studies, see Monica Prasad, Proto-Bureaucracies, 9 Socio. Sci. 374, 376 (2022); Eliška Drápalová & Fabrizio Di Mascio, Islands of Good Government: Explaining Successful Corruption Control in Two Spanish Cities, 8 Pol. & Governance 128, 128 (2020). For further discussion, see infra Part II. This literature focuses on how the state (or another interested party, such as a private foundation or an international organization) seeds and nurtures zones where public-regarding norms, not self-regarding or selfish motives, dominate as a means of generating public goods.

By analogy to the examples of effective public administration discussed in this literature, I will suggest here that we should think about public-regarding platforms as “islands of algorithmic integrity” that advance epistemic and deliberative public goods with due regard to the potential for either exploitation or manipulation inherent in the use of sophisticated computational tools. With that threshold understanding in mind, we should then focus on how to achieve that specific, affirmative model—and not simply on how to avoid narrowly-defined and specific platform-related harms. An affirmative ideal, that is, provides a baseline against which potential reform proposals can be evaluated.16I am hence not concerned here with the First Amendment as a template or limit to institutional design. The constitutional jurisprudence of free speech provides a different benchmark for reform. I largely bracket that body of precedent here in favor of an analytic focus on the question of what functionally might be most desirable.

To be very clear up front, this approach has limitations. It draws on the “island of integrity” literature here as a general source for inspiration, instead of a source for models that can be directly transposed. I do not think that there is any mechanical way of taking the lessons of development studies and applying them to the quite different virtual environment of social platforms. To the extent lessons emerge, they are at a high level of abstraction. Still, studies of islands of bureaucratic integrity in the wild can offer a useful set of analogies: they point toward the possibility of parallel formations in the online world. They also help us see that there are already significant web-based entities that exemplify certain ideals of algorithmic integrity in practice because they hew to the general lessons falling out of the islands of integrity literature. These studies can illuminate how a more democratically fruitful digital public sphere might begin to be built given our present situation, even if they cannot offer a full blueprint of its ultimate design.

It is worth noting that my analytic approach here rests on an important and controversial assumption. That is, I help myself to the premise that reform of the digital public sphere can proceed first by the cultivation of small-scale sites of healthy democratic engagement and that these can be scaled up. But this assumption may not hold. It may instead be necessary to start with a “big bang”: a dramatic and comprehensive sweep of extant arrangements followed by a completely new architecture of digital space. If, for example, you thought that the problem of social platforms began and ended in their concentrated ownership in the hands of a few bad-spirited people, then the creation of new, more democratic platforms would not necessarily lead to a comprehensive solution. Given disagreement about the basic diagnosis of social platforms’ malady, it is hard to know which of these approaches is more sensible. Therefore, there is some value to exploring a piecemeal reform approach of the sort illuminated here. But that does not rule out the thought that a more robust “big bang” approach is in truth needed.

Part I of this essay begins with a brief survey of the main normative (consequentialist and deontic) critiques that are commonly lodged against social platforms, focusing on the three listed above. In Part II, I introduce the “islands of integrity” lens—briefly summarizing relevant sociological and political science literature—as a means of thinking directly about social platform reforms. My aim in so doing is to provide a litmus test for thinking about social platform reform in the round. With that lens in hand, Part III critically considers the regulatory strategies pursued by the American states and the European Union to date. I suggest some reasons to worry that these are unlikely to advance islands of algorithmic integrity. I close by reflecting on some alternative regulatory tactics that might move us more quickly toward that goal.

I. The Case(s) Against Social Platforms

What is a social platform? Do all such platforms work in the same way and raise the same kinds of normative objections? Or are objections to platforms better understood as training on a subset of cases or applications? This Part sets some groundwork for answering these questions by defining the object of my inquiries and by offering some technical details about different kinds of platforms. I then taxonomize the three different objections that are commonly lodged against social platforms as they currently operate.

A. Defining Social Platforms and Their Algorithms

A “platform” is “a discrete and dynamic arrangement defined by a particular combination of socio-technical and capitalist business practices.”17Paul Langley & Andrew Leyshon, Platform Capitalism: The Intermediation and Capitalisation of Digital Economic Circulation, 3 Fin. & Soc’y 11, 13 (2017). A subset of platforms are understood by their users as distinctively “social” rather than “commercial” insofar as they provide a space for interpersonal interaction, intercalated with other activities such as “reading political news, watching media events, and browsing fashion lines.”18Lisa Rhee, Joseph B. Bayer, David S. Lee & Ozan Kuru, Social by Definition: How Users Define Social Platforms and Why It Matters, Telematics & Informatics, 1, 1 (2020). The leading “social platforms,” as I shall call them here, are Facebook, X, Instagram, and TikTok.19Id. I have added TikTok to the list in the cited text. I use the term “social platforms” because “social media platforms” is overly clunky and merely “platforms” is too vague.

Not all social platforms propagate content in the same way. There are two dominant kinds of system architecture. The first is the social network, where users see posts by other users whom they follow (or subscribe to) as well as posts those users choose to amplify.20Narayanan, supra note 10, at 10. When Facebook and Twitter allowed users to reshare or retweet posts, they enabled the emergence of networks of this sort.21Id. Note that before the affordances that allowed users to share content in these ways, these services had limited capacity to propagate content across the network. Here, what one sees depends on who one “knows.” Interconnected webs of users on a network can experience “information cascades” as information flows rapidly across the system.22Id. This is known colloquially as “going viral.” The possibility of virality depends not just on platform design but also on users’ behaviors. But in practice, a very small number of posts go viral on social networks.23Id. at 15. Attention is a scarce commodity. We cannot and do not absorb most of what’s posted online. Our inability to absorb much means that it is only possible for a few items to achieve virality.
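
A toy model of network propagation may make the mechanism concrete. The data structures below are invented for illustration; the point is that reach on a social network is determined by the follow graph plus users’ amplification decisions, not by any central ranking model.

```python
# follower -> the accounts they follow
follows = {"alice": {"bob"}, "bob": {"carol"}}
posts = {"carol": ["original post"]}
reshares = {"bob": [("carol", "original post")]}  # bob amplifies carol

def timeline(user: str) -> list[str]:
    seen = []
    for account in follows.get(user, set()):
        seen += posts.get(account, [])                            # direct
        seen += [text for _, text in reshares.get(account, [])]   # amplified
    return seen

# alice does not follow carol, yet sees carol's post via bob's reshare;
# long chains of such reshares are what produce cascades ("virality").
assert timeline("alice") == ["original post"]
```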

The second possible architecture is centered around an algorithm (or, more accurately, algorithms). On platforms of this sort, the stream of data observed by a user is largely shaped by a suite of complex algorithms, which are computational decisional tools that proceed through a series of steps to solve a problem. These algorithms, in the aggregate, are designed with certain goals in mind, such as maximizing the time users spend on the platform.24Id. at 10. Networks require both content processing tools (e.g., face recognition, transcription, and image filters) and also content propagation tools (e.g., search, recommendation, and content moderation). Id. at 8. I am largely concerned here with content propagation tools. TikTok’s “For You Page,” Google Discover, and YouTube all rely at least in part on algorithms.25Id. at 11.

In practice, what is for the sake of simplicity called “the algorithm” can be disaggregated into several different design elements, each of which is in truth a distinct algorithm or digital artifact. These include (1) the “surfaces of exposure” (that is, the visual interface encountered by users); (2) a primary ranking model (often a two-stage recommender system that combs through and filters potential posts); (3) peripheral models, which rank content that appears around the main surface of exposure (for example, ads); and (4) auxiliary models (for example, content moderation for illegal materials or posts that violate terms of service).26Kristian Lum & Tomo Lazovich, The Myth of the Algorithm: A System-Level View of Algorithmic Amplification, Knight First Amend. Inst. (Sept. 13, 2023), https://knightcolumbia.org/content/the-myth-of-the-algorithm-a-system-level-view-of-algorithmic-amplification [https://perma.cc/4WBQ-34WN]. For the sake of simplicity, I will refer to them together only as “the algorithm,” but it is worth keeping in mind that this is a simplification, and in fact there are multiple instruments at stake.
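
A schematic sketch of how these components might compose is below. Every function is a hypothetical placeholder: real systems run learned models over millions of candidates, not list comprehensions, and the numbers here are arbitrary.

```python
def candidate_generation(all_posts: list[dict], user: dict) -> list[dict]:
    # Stage one of the primary ranking model: cheaply narrow a huge
    # corpus to a few hundred plausible candidates.
    return [p for p in all_posts if p["topic"] in user["interests"]][:500]

def heavy_ranker(candidates: list[dict]) -> list[dict]:
    # Stage two: a more expensive model scores the survivors precisely.
    return sorted(candidates, key=lambda p: p["predicted_engagement"],
                  reverse=True)

def auxiliary_moderation(posts: list[dict]) -> list[dict]:
    # Auxiliary model: screen out illegal or policy-violating content.
    return [p for p in posts if not p.get("violates_policy", False)]

def surface(user: dict, all_posts: list[dict], ads: list[dict]) -> list[dict]:
    # The "surface of exposure" interleaves the main ranked feed with
    # peripherally ranked content such as ads.
    feed = auxiliary_moderation(
        heavy_ranker(candidate_generation(all_posts, user)))
    return feed[:3] + ads[:1] + feed[3:10]
```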

Algorithm design implicates many choices. At the top level, for example, an algorithmic model can be braided into a network model or integrated into a subscription-service model.27Narayanan, supra note 10, at 10–11 (“[N]o platform implements a purely algorithmic model . . . .”). At a more granular level, algorithms can be designed to optimize a broad range of parameters. These range from “meaningful social interactions” (Facebook’s measure at one point in time) to users’ watch time (YouTube’s measure) to a combination of liking, commenting, and watching frequencies (TikTok’s measure).28Id. at 19. The choice of parameter to optimize is important. Most common parameters quantify some element of users’ engagement with the platform, but they do so in different ways. Engagement measures are relevant from the platforms’ perspectives given their economic reliance on the revenue from advertising displayed to users.29For a useful account of the behavioral advertising industry, see generally Tim Hwang, Subprime Attention Crisis (2020). In theory, more engagement means more advertising revenue. But engagement on social platforms is surprisingly sparse. Somewhere between only one percent and five percent of posts on most social platforms generate any engagement at all.30Narayanan, supra note 10, at 28. And the movement from engagement to advertising is rarer still: most targeted online advertising is simply “ignored.”31Hwang, supra note 29, at 77; accord Narayanan, supra note 10, at 29.
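
The normative weight of the parameter choice is easy to see in code. The signal names and weights below are invented, and real objective functions are unpublished and far more complex; the point is that running the same candidates through different objectives yields different feeds, so the apparently technical choice of objective is where the value judgment lives.

```python
def score_watch_time(post: dict) -> float:
    return post["expected_watch_seconds"]      # a YouTube-style target

def score_interactions(post: dict) -> float:
    # A TikTok-style blend of liking, commenting, and watching signals.
    return 2.0 * post["p_like"] + 3.0 * post["p_comment"] + post["p_watch"]

def rank(posts: list[dict], objective) -> list[dict]:
    # Same candidates, different objective, different feed: the
    # normative choice lives in which function is plugged in here.
    return sorted(posts, key=objective, reverse=True)
```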

B. Consequentialist Critiques of Social Platforms

There are, as I read the literature, three clusters of normative concerns raised by social platforms that merit consideration as the most important and common criticisms made of those technologies.32I recognize that there are complaints beyond those that I adumbrate here. I have selected those that seem to me supported by evidence and a coherent moral theory. I have ignored those wanting in such necessary ballast. Two are consequentialist, in the sense of training on allegedly undesirable effects of social platforms. Of course, such arguments need some means of evaluating downstream effects as either desirable or undesirable. In practice, they rest on some account of democracy as an attractive—even ideal—political order. (Note that as is often the case in legal scholarship, the precise kind of “democracy” at work in these critiques is not always fully specified. This lack of specification is a gap that will prove relevant in the analysis that follows.)33For an illuminating recent discussion on the varieties of democratic theory, see generally Jason Brennan & Hélène Landemore, Debating Democracy: Do We Need More or Less? (2021). The other cluster is deontic, in the sense of picking out intrinsically unattractive qualities of social platforms. These accounts do not rely on a causal claim about the effects of social platforms; they instead assert the prima facie unacceptability of platforms in themselves.

Let’s begin with the two consequentialist arguments and then move on to the deontic critique.

A first view widely held in both the academic and non-academic public spheres is that social platforms cause political dysfunction in a democracy because of their effects on the dispositions and beliefs of the general public.34See, e.g., Helen Margetts, Rethinking Democracy with Social Media, 90 The Pol. Q., Jan. 2019, 107, at 107 (assigning blame to social media for “pollution of the democratic environment through fake news, junk science, computational propaganda and aggressive microtargeting and political advertising”; for “creating political filter bubbles”; and for “the rise of populism, . . . the end of democracy and ultimately, the death of democracy.”). Using social platforms, this argument goes, (1) drives a dynamic of “affective polarization” (negative emotional attitudes towards members of opposition parties), or (2) traps us in “echo chambers” or filter bubbles that are characterized by limited, biased information.35Jonathan Haidt, Yes, Social Media Really Is Undermining Democracy, The Atlantic (July 28, 2022), https://www.theatlantic.com/ideas/archive/2022/07/social-media-harm-facebook-meta-response/670975 [https://perma.cc/7FFV-QRPB]. Social media users are also said to be exposed to “fake news,” which is “fabricated information that mimics news media content in form but not in organizational process or intent.”36David M. J. Lazer, Matthew A. Baum, Yochai Benkler, Adam J. Berinsky, Kelly M. Greenhill, Filippo Menczer, Miriam J. Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild, Michael Schudson, Steven A. Sloman, Cass R. Sunstein, Emily A. Thorson, Duncan J. Watts & Jonathan L. Zittrain, The Science of Fake News: Addressing Fake News Requires a Multidisciplinary Effort, 359 Sci. 1094, 1094 (2018); see also Edson C. Tandoc Jr., The Facts of Fake News: A Research Review, Soc. Compass, July 25, 2019, at 1, 2 (“[Fake news] is intended to deceive people, and it does so by trying to look like real news.”). For examples, see Aziz Z. Huq, Militant Democracy Comes to the Metaverse?, 72 Emory L.J. 1105, 1118–19 (2023). The terms “misinformation” and “disinformation” are also used to describe fake news and its variants. I leave aside questions about how to exactly define and distinguish these terms. High levels of exposure are said to be driven by algorithmic amplification.37See, e.g., Haidt, supra note 35; Zeynep Tufekci, Algorithmic Harms Beyond Facebook and Google: Emergent Challenges of Computational Agency, 13 Colo. Tech. L.J. 203, 215 (2015) (criticizing Facebook for its power to “alter the U.S. electoral turnout” through algorithmic manipulation). Recent advances in deep-fake-creation tools have further spurred worries about an “information apocalypse” that destroys “public trust in information and the media.”38Mateusz Łabuz & Christopher Nehring, On the Way to Deep Fake Democracy? Deep Fakes in Election Campaigns in 2023, 23 Eur. Pol. Sci. 454, 457 (2024). Platforms, in this view, foster a world in which citizens lack a shared reservoir of mutual tolerance and factual beliefs about the world. Such deficiencies are said to render meaningful political debate on social platforms challenging—perhaps even impossible. As a result of these changes in people’s dispositions, the possibility of democratic life moves out of reach.

These arguments hence assume that democratic life requires the prevalence of certain attitudes and beliefs in order to be durably sustained (an assumption that may or may not be empirically justified). Another way in which these concerns can concretely be understood is to view them in light of the rise of anti-system parties,39Giovanni Capoccia, Anti-System Parties: A Conceptual Reassessment, 14 J. Theoretical Pol. 9, 10–11 (2002) (offering several different definitions of that term). which are characterized by their limited regard for democratic norms. Platforms might facilitate the growth of such anti-system candidates who disrupt or even undermine democratic norms such as broad trust in the state and in co-citizens. Through this indirect path, platforms have a detrimental effect on democracy’s prospects.

There are surprisingly few empirical studies that support the existence of a robust causal connection between social platforms and democratically necessary trust.40There is one experiment focused on search ranking that finds political effects, but the experiment is more than a decade old and focuses on how search results are displayed, not on the central issue of platform design today. Robert Epstein & Ronald E. Robertson, The Search Engine Manipulation Effect (SEME) and Its Possible Impact on the Outcomes of Elections, 112 Proc. Nat’l Acad. Sci. E4512, E4518–20 (2015). Yet some evidence for it can be found in the behaviors and beliefs of significant political actors. President Donald Trump, for example, declared in November 2016 that Facebook and Twitter had “helped him win” the 2016 U.S. presidential election.41Rich McCormick, Donald Trump Says Facebook and Twitter ‘Helped Him Win’, The Verge (Nov. 13, 2016, 7:02 PM PST), https://www.theverge.com/2016/11/13/13619148/trump-facebook-twitter-helped-win [https://perma.cc/5MUQ-7R73]. Since 2020, conservative donors such as the Bradley Impact Fund and the Conservative Partnership Fund have contributed millions to Republican-aligned groups combating efforts to “take a tougher line against misinformation online.”42Jim Rutenberg & Steven Lee Myers, How Trump’s Allies Are Winning the War Over Disinformation, N.Y. Times, https://www.nytimes.com/2024/03/17/us/politics/trump-disinformation-2024-social-media.html [https://web.archive.org/web/20250401001211/https://www.nytimes.com/2024/03/17/us/politics/trump-disinformation-2024-social-media.html]. Such significant financial investments by important political actors, which are more than cheap talk, suggest that social platforms do have predictable partisan effects for candidates and parties that have an arguable anti-systemic orientation.43A mea culpa: in previous work, I was too credulous in respect to claims of platform-related harms. Huq, supra note 36, at 1118–19. I should have been more cautious.

On the other hand, well-designed empirical studies have cast doubt on the negative, large-“N” effects of social platforms.44For a prescient popular argument to that effect, see Gideon Lewis-Kraus, How Harmful Is Social Media?, New Yorker (June 3, 2022), https://www.newyorker.com/culture/annals-of-inquiry/we-know-less-about-social-media-than-we-think [https://perma.cc/7FFV-QRPB]. Several studies are illustrative. A first well-designed randomized experiment, which tested the effect of platform deactivation for several weeks before the 2020 election, found no statistically significant effects of platform exposure on affective polarization, issue polarization, or vote choice.45The study found a non-significant pro-Trump effect from Facebook usage but cautioned against treating this finding as generalizable. Hunt Allcott, Matthew Gentzkow, Winter Mason, Arjun Wilkins, Pablo Barberá, Taylor Brown, Juan Carlos Cisneros, Adriana Crespo-Tenorio, Drew Dimmery, Deen Freelon, Sandra González-Bailón, Andrew M. Guess, Young Mie Kim, David Lazer, Neil Malhotra, Devra Moehler, Sameer Nair-Desai, Houda Nait El Barj, Brendan Nyhan, Ana Carolina Paixao de Queiroz, Jennifer Pan, Jaime Settle, Emily Thorson, Rebekah Tromble, Carlos Velasco Rivera, Benjamin Wittenbrink, Magdalena Wojcieszak, Saam Zahedian, Annie Franco, Chad Kiewiet de Jonge, Natalie Jomini Stroud & Joshua A. Tucker, The Effects of Facebook and Instagram on the 2020 Election: A Deactivation Experiment, 121 Proc. Nat’l Acad. Sci., 1, 8–9 (2024). A second randomized experiment focused on the difference between Facebook’s default algorithms and a reverse-chronological feed. Again, the study found no effect on affective polarization, issue polarization, or political knowledge after switching users from the default algorithmic feed to a reverse-chronological one, even though the reverse-chronological feed increased the amount of “untrustworthy” content seen.46Andrew M. Guess, Neil Malhotra, Jennifer Pan, Pablo Barberá, Hunt Allcott, Taylor Brown, Adriana Crespo-Tenorio, Drew Dimmery, Deen Freelon, Matthew Gentzkow, Sandra González-Bailón, Edward Kennedy, Young Mie Kim, David Lazer, Devra Moehler, Brendan Nyhan, Carlos Velasco Rivera, Jaime Settle, Daniel Robert Thomas, Emily Thorson, Rebekah Tromble, Arjun Wilkins, Magdalena Wojcieszak, Beixian Xiong, Chad Kiewiet de Jonge, Annie Franco, Winter Mason, Natalie Jomini Stroud & Joshua A. Tucker, How Do Social Media Feed Algorithms Affect Attitudes and Behavior in an Election Campaign?, 381 Sci. 398, 402 (2023). This null finding about algorithmic content propagation has been echoed in a separate study of YouTube.47Homa Hosseinmardi, Amir Ghasemian, Aaron Clauset, Markus Mobius, David M. Rothschild & Duncan J. Watts, Examining the Consumption of Radical Content on YouTube, 118 Proc. Nat’l Acad. Sci., 1, 1 (2021).

An empirical inquiry into exposure to fake news found only a very small positive effect on the vote share of populist candidates in European elections.48Michele Cantarella, Nicolò Fraccaroli & Roberto Volpe, Does Fake News Affect Voting Behaviour?, Rsch. Pol’y, Jan. 2023, at 1, 2. Another study of 1,500 users in each of three countries (France, the United Kingdom, and the United States) identified no correlation between social platform use and more extreme right-wing views; indeed, in the United States, they found a negative correlation.49Shelley Boulianne, Karolina Koc-Michalska & Bruce Bimber, Right-Wing Populism, Social Media and Echo Chambers in Western Democracies, 22 New Media & Soc’y 683, 695 (2020). The authors concluded that their “findings tend to exonerate the Internet generally and social media in particular, at least with respect to right-wing populism.”50Id. Finally, a 2017 study found that President Trump erred when he claimed that Facebook and Twitter helped him in the 2016 election; again, that study found a negative correlation between more extreme right-wing views and social platform usage.51Jacob Groshek & Karolina Koc-Michalska, Helping Populism Win? Social Media Use, Filter Bubbles, and Support for Populist Presidential Candidates in the 2016 US Election Campaign, 20 Info., Commc’n & Soc’y 1389, 1397 (2017) (“American voters who used social media to actively participate in politics by posting their own thoughts and sharing or commenting on social media were actually more likely to not support Trump as a candidate.”).

Summarizing the available research (including these studies) in a June 2024 issue of Nature, a team of respected scholars concluded that “exposure to misinformation is low as a percentage of people’s information diets” and further that “the existence of large algorithmic effects on people’s information diets and attitudes has not yet been established.”52Ceren Budak, Brendan Nyhan, David M. Rothschild, Emily Thorson & Duncan J. Watts, Misunderstanding the Harms of Online Misinformation, 630 Nature 45, 47–48 (2024); accord Sacha Altay, Manon Berriche & Alberto Acerbi, Misinformation on Misinformation: Conceptual and Methodological Challenges, Soc. Media + Soc’y, Jan.–Mar. 2023, at 1, 3 (“Misinformation receives little online attention compared to reliable news, and, in turn, reliable news receives little online attention compared to everything else that people do.”). The Nature team warned that the extent to which social platforms undermine political knowledge depends on the availability of other news sources. Where countries “lack reliable mainstream news outlets,” their negative knowledge-related spillovers may be greater.53Budak et al., supra note 52, at 49. I do not pursue that suggestion here, since it invites a bifurcated analysis that separately considers different national jurisdictions, depending on the robustness of their non-digital media ecosystems. What follows should be taken as parochially relevant to North American and European democracies (at least for now) but not the larger world beyond that.

A second view of social platforms’ harms identifies not their spillovers at scale, but rather their effects on certain narrow slices of the population—in particular, those at the tails of the ideological distribution. The intuition here is that engagement with social platforms may not change the dispositions or beliefs of most people, but there is a small subset of individuals who adopt dramatically divergent beliefs (and even behaviors) as consequences of their platform use. “Tail effects” of this sort may not be significant for democratic life under some circumstances, but of particular relevance, there is some evidence of increased support for political violence among Americans.54At least some surveys suggest rising levels of positive attitudes to violence. See Ashley Lopez, More Americans Say They Support Political Violence Ahead of the 2024 Election, NPR, https://www.npr.org/2023/10/25/1208373493/political-violence-democracy-2024-presidential-election-extremism [https://perma.cc/ZM4L-BRRV]. For other findings exhibiting a concentration of such support at the rightward end of the political spectrum, see Miles T. Armaly & Adam M. Enders, Who Supports Political Violence?, 22 Persp. on Pol. 427, 440 (2024). In this context, extremism at the tails may have profound consequences. At a moment when President Trump has (twice) faced near-assassination during the 2024 presidential election cycle, and considering how his supporters previously precipitated a deadly confrontation at a 2021 Joint Session of Congress meant to count Electoral College votes, it seems prudent to reckon with the risk that radicalized individuals—even if few in number—may be able to inflict disproportionate harms on institutions that are necessary for core democratic political processes.

This more narrowly gauged claim stands on firmer empirical ground than the critiques of social platforms’ large-“N” effects discussed above. A 2024 study of fake news’ circulation on Twitter found that 0.3 percent of users account for four-fifths of its fake news volume.55Sahar Baribi-Bartov, Briony Swire-Thompson & Nir Grinberg, Supersharers of Fake News on Twitter, 384 Sci. 979, 980 (2024). These “supersharers,” who tended to be older, female, and Republican, in turn reached a “sizable 5.2% of registered voters on the platform.”56Id. at 979. Note that this is not necessarily the population one would expect to engage in political violence. A different study published around the same time also found “asymmetric . . . political news segregation” with “far more homogenously conservative domains and URLs circulating on Facebook” and “a far larger share” of fake news on the political right.57Sandra González-Bailón, David Lazer, Pablo Barberá, Meiqing Zhang, Hunt Allcott, Taylor Brown, Adriana Crespo-Tenorio, Deen Freelon, Matthew Gentzkow, Andrew M. Guess, Shanto Iyengar, Young Mie Kim, Neil Malhotra, Devra Moehler, Brendan Nyhan, Jennifer Pan, Carlos Velasco Rivera, Jaime Settle, Emily Thorson, Rebekah Tromble, Arjun Wilkins, Magdalena Wojcieszak, Chad Kiewiet de Jonge, Annie Franco, Winter Mason, Natalie Jomini Stroud & Joshua A. Tucker, Asymmetric Ideological Segregation in Exposure to Political News on Facebook, 381 Sci. 392, 397 (2023).

Such findings are consistent with wider-angle studies of partisan polarization, which find different microfoundations on the political left and right.58Craig M. Rawlings, Becoming an Ideologue: Social Sorting and the Microfoundations of Polarization, 9 Socio. Sci. 313, 337 (2022). The Nature team mentioned above hence concluded that exposure to misinformation is “concentrated among a small minority.”59Budak et al., supra note 52, at 48. Those who consume false or otherwise potentially harmful content are already attuned to such information and actively seek such content out.60Id. Platforms, however, do not release “tail exposure metrics” that could help quantify the risk of harm from such online interactions.61Id. at 50; see also Vivian Ferrillo, r/The_Donald Had a Forum: How Socialization in Far-Right Social Media Communities Shapes Identity and Spreads Extreme Rhetoric, 52 Am. Pol. Rsch. 432, 443 (2024) (finding that users who engage often with a far-right community also use far-right vocabulary more frequently in other spaces on their platform, contributing to the spread and normalization of far-right rhetoric). As a result, it is hard to know how serious the problem may be.

What of the concern that social platforms conduce to “filter bubbles” that constrain the range of information sources users can access in problematic ways?62For an influential treatment of the topic, see generally Eli Pariser, The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think (2012). Once again, the evidence is at best inconclusive. A 2016 study found that social homogeneity of users predicted the emergence of echo chambers characterized by asymmetrical patterns of news sharing.63Michela Del Vicario, Alessandro Bessi, Fabiana Zollo, Fabio Petroni, Antonio Scala, Guido Caldarelli, H. Eugene Stanley & Walter Quattrociocchi, The Spreading of Misinformation Online, 113 Proc. Nat’l Acad. Sci. 554, 558 (2016). At the same time, the study offered no empirical evidence about the extent or effects of filter bubbles “in the wild,” so to speak. A 2021 review identified divergent results between studies surveying human users of social platforms and studies analyzing digital trace data, suggesting that findings of echo chambers are sensitive to methodology.64Ludovic Terren & Rosa Borge, Echo Chambers on Social Media: A Systematic Review of the Literature, 9 Rev. Commc’n Rsch. 99, 110 (2021) (reviewing fifty-five studies and finding only five yielding no evidence of echo chambers). A 2022 meta-study found that “most people have relatively diverse media diets,” and only “small minorities, often only a few percent, exclusively get news from partisan sources.”65Amy Ross Arguedas, Craig T. Robertson, Richard Fletcher & Rasmus K. Nielsen, Echo Chambers, Filter Bubbles, and Polarisation: A Literature Review 4 (2022), available at https://ora.ox.ac.uk/objects/uuid:6e357e97-7b16-450a-a827-a92c93729a08. Again, the empirical foundations of the normative worry here seem shaky.

Even if the evidence for filter bubbles were more robust, their existence would not necessarily be cause for concern. Concern about filter bubbles focuses on the asymmetric character of the information voters consume; this concern assumes that there is a counterfactual condition under which the voter might receive a “balanced” diet of information. But what does it mean to say that a person’s news inputs are balanced or symmetrical? Does it require equal shares of data that support Republican and Democratic talking points? What if one of those parties is more likely than the other to lean on false empirical claims? Should a balanced informational diet reflect or discount for such a lean? How are the problems of misinformation or distorted information to be addressed? Is it part of a balanced informational diet to receive a certain amount of “fake news”? These questions admit of no easy answers. Rather, they suggest that the concern with filter bubbles trades on a notion of balance that is hard to cash out in practice without difficult anterior ideological and political choices.

In brief, the available empirics suggest that consequentialist critiques of social platforms are better focused on tail effects than on the way platform engagement changes the median user or the mass of users. It is also worth underscoring a point that is somewhat obscured by the bottom-line results of these studies but implicit in what I have just set out. That is, the tail effects of social platforms arise from a complex and unpredictable mesh of interactions between technical design decisions and users’ decisions. The external political environment hence shapes platforms’ spillover effects, and when that environment is more polarized and more prone to panics or even violence, it seems likely that the tail risks of social platforms would correspondingly rise. When, by contrast, there is a plethora of accurate and easily accessible non-digital sources, the threat to democratic life from social platforms may well be far less acute.

C. Deontic Critiques of Social Platforms

Critiques of social platforms do not need to rest on evidence of their consequences. It is also possible to pick out features of the relationship between platforms and users as morally problematic even in the absence of any harm arising. Two particular strands of such “deontic” critique can be traced in existing literature.

First, social platforms (among other entities) gather data about their users and then use that data to target advertisements to those same users. For many, this circular pattern of data extraction and deployment constitutes a morally problematic exploitation. Such exploitation occurs when “one party to an ostensibly voluntary agreement intentionally takes advantage of a relevant and significant asymmetry of knowledge, power, or resources” to offer otherwise unacceptable contracting terms.66Claire Benn & Seth Lazar, What’s Wrong with Automated Influence, 52 Canadian J. Phil. 125, 135 (2022).

Shoshana Zuboff, who is perhaps the leading expositor of this view, argues that platforms have “scraped, torn, and taken for another century’s market project” the very stuff of “human nature.”67Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power 94 (2019). She condemns the “rendition” and “dispossession of human experience” through “datafication.”68Id. at 233–34. Zuboff’s critique of platform exploitation is nested in a broader set of concerns about the presently hegemonic form of “informational” or “financial” capitalism. Reviewing Zuboff’s book, Amy Kapczynski thus asserts that “informational capitalism brings a threat not merely to our individual subjectivities but to our ability to self-govern.”69Amy Kapczynski, The Law of Informational Capitalism, 129 Yale L.J. 1460, 1467 (2020). Similarly, danah boyd characterizes private firms’ use of digital power as a malign manifestation of “late-stage capitalism . . . driven by financialization.”70danah boyd, The Structuring Work of Algorithms, 152 Dædalus 236, 238 (2023). And as Katharina Pistor puts it, “[t]he real threat that emanates from Big Tech using big data is not just market dominance . . . [but] the power to transform free contracting and markets into a controlled space that gives a huge advantage to sellers over buyers.”71Katharina Pistor, Rule by Data: The End of Markets?, 83 Law & Contemp. Probs. 101, 117 (2020); accord Julie E. Cohen, Law for the Platform Economy, 51 U.C. Davis L. Rev. 133, 145–48 (2017). The structure of financial or quasi-financial transactions on social platforms, in this view, conduces systemically to users’ exploitation.

In an earlier piece, I expressed sharp skepticism about the empirical and normative arguments offered by Zuboff and Kapczynski.72Mariano-Florentino Cuéllar & Aziz Z. Huq, Economies of Surveillance, 133 Harv. L. Rev. 1280, 1298 (2020). Their concerns about exploitation seem to trade on imprecise and potentially misleading analogies to more familiar and normatively troubling forms of economic exploitation, despite meaningful differences in structure and immediate effect. Indeed, both analogies fail to take those differences seriously. More generally, their arguments borrow a suite of concerns associated with the larger structures of economic life labeled “neoliberalism,” which have developed since the 1970s. Such critiques of neoliberalism, however, concern aspects of economic life that have little to do with social platforms (for example, deregulation and financialization). One can have neoliberalism with or without social platforms. I see little analytic gain in combining these very different lines of argument respecting quite distinct targets, and no reason to invite confusion by mushing together distinct phenomena to achieve guilt by association.

Second, the concern about exploitation overlaps with a distinct worry about non-domination. Claire Benn and Seth Lazar capture this possibility in their argument that social platforms might compromise an intrinsic, non-instrumental "value of living in societies that are free and equal."73Benn & Lazar, supra note 66, at 133. They argue that the public is necessarily ignorant about "tech companies' control of the means of prediction" and so has "no viable way of legitimating these new power relations."74Id. at 137. But the empirical premise of this argument—widespread public ignorance about predictive tools—seems shaky: As the empirical studies of fake news and political distortion show, there is publicly available knowledge about many salient effects of social platforms. And to the extent that the public misconstrues those effects, it is likely by overestimating their magnitude.75See supra notes 35 and 37 for examples of such overestimation. I hardly think these critiques are secret.

Still, I think Benn and Lazar are on to something useful when they identify the fact of corporate control as morally salient. Social platforms stand in an asymmetrical relation to the general public because of (1) knowledge asymmetries enabled by the corporate form; (2) collective action problems implicit in the one-to-many relation of firms to consumers; and (3) ideological effects (for example, false beliefs in the necessity of unregulated digital markets for economic growth). As a consequence of these dynamics, social platforms exercise a certain kind of unilateral power over the public. Such power might be especially worrying if it is concentrated in the hands of a limited number of people—and if those people have close connections to those in high state office (with the Musk/Trump relationship offering an obvious, highly salient example). This slate of worries comes sharply into play whenever platforms comprise an important part of the democratic public sphere. Under those conditions, Benn and Lazar argue, platforms ought not merely to prevent negative consequences for democratic politics; they must also ensure "that content promotion is regulated by epistemic ideals."76Benn & Lazar, supra note 66, at 144. This entails, in their view, a measure of "epistemic paternalism."77Id. Such paternalism rests on platforms' unilateral, and effectively unconstrained, judgments about interface and algorithmic design.

This deontic argument can also be stated in the terms of Philip Pettit's influential theory of republican freedom. On Pettit's account, one person wields dominating power over another if the former has the capacity to interfere in certain choices of the latter on an arbitrary basis.78Philip Pettit, Republicanism: A Theory of Freedom and Government 52 (1997). This arbitrariness condition is satisfied when interference is subject only to the arbitrium—the will or judgment—of the interfering agent, who is not "forced to track the interests and ideas of the person suffering the interference."79Id. at 55. A person ranked by law as a slave, for example, remains unfree even if their master always acts with the slave's interests in mind. Even when an arbitrary legal relationship is exercised in a beneficent fashion, with the weaker party's interests in mind, Pettit suggests that there is a displacement of the subject's "involvement, leaving [them] subject to relatively predictable and perhaps even beneficial forms of power that nevertheless 'stifle' and 'stultify.' "80Patchen Markell, The Insufficiency of Non-Domination, 36 Pol. Theory 9, 12 (2008). To be clear, Markell here is criticizing and extending Pettit's account.

Yasmin Dawood has fruitfully deployed Pettit's framework in thinking about the abuse of public power in democratic contexts.81Yasmin Dawood, The Antidomination Model and the Judicial Oversight of Democracy, 96 Geo. L.J. 1411, 1431 (2008). Her conceptual framing, moreover, could be extended to private actors such as social platforms without much difficulty. For instance, one might view the exercise of extensive control over the online informational environment as a species of domination, whether it is exercised in a malign or a paternalistic direction. That idea can be rendered more precise by drawing on work by Moritz Hardt, Meena Jagadeesan, and Celestine Mendler-Dünner, which defines the "performative power" of an algorithmic instrument as a numerical parameter capturing "how much participants change in response to actions by the platform, such as updating a predictive model."82Moritz Hardt, Meena Jagadeesan & Celestine Mendler-Dünner, Performative Power, in Proc. of the 36th Int'l Conf. on Neural Info. Processing Sys. (NeurIPS '22) 2 (2022). This concept of "performative power" usefully captures the way that platforms "steer" populations.83Id. at 5–6. As such, it offers a way of understanding, and indeed measuring, "domination" by social platforms more precisely.
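The definition can be given a stylized formal statement. The rendering below is a simplified sketch in my own notation, not a quotation of Hardt, Jagadeesan, and Mendler-Dünner's exact formalism: performative power is, roughly, the largest average change in participants' data that a platform can induce through its choice of action.

```latex
% Stylized statement of performative power (my notation; a simplification
% of the formal definition in Hardt, Jagadeesan & Mendler-Dunner (2022)).
%   \mathcal{A}    : the set of actions available to the platform
%   z_i            : participant i's data or behavior under the status quo
%   z_i(a)         : participant i's data or behavior after action a
%   d(\cdot,\cdot) : a distance on the space of participant data
P \;=\; \sup_{a \in \mathcal{A}} \; \frac{1}{n} \sum_{i=1}^{n}
        \mathbb{E}\!\left[\, d\big(z_i(a),\, z_i\big) \,\right]
```

On this rendering, a platform whose P is near zero cannot move its participants at all, while a platform with large P can unilaterally "steer" them, which is the sense of domination at issue here.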

In setting out these two kinds of deontic critiques, I thus suggest that there are plausible grounds for worry about social platforms even absent robust empirical findings of spillovers onto users' beliefs and dispositions. I recognize that both the exploitation and the domination critiques rest on further moral premises, which I have neither spelled out in full nor tried to substantiate. But I set out both arguments here to display the minimally plausible non-consequentialist grounds for concern about the structure and operation of social platforms, and to allow readers to make their own judgments.

D. Making a Better Case Against Social Platforms

Social platforms have become scapegoats of sorts for many of the ills that democratic polities are now experiencing. But the available evidence suggests that many of these critiques miss the mark. For many people, platforms simply do not play a very large or dominant epistemic role (although this may well change in the near future). They also seem not to have the polarizing and epistemically distorting effects many bemoan.

That is not to say, however, that there is no reason for concern. Consequentialist worries about the behavior of users on the tails of the ideological distribution, as well as deontic worries about exploitation and domination, point toward the need for reform. These arguments, of course, might not all point in the same practical direction. But reforms that render platforms more responsive and responsible to epistemically grounded truths and to the interests of all their users (as well as the interests of the public at large) can plausibly be understood to answer all of the salient critiques discussed above.

II. Islands of Integrity—Real and Digital Examples

One way of thinking about how existing social platforms might be reformed is to identify an aspirational end-state, or a model, of how they might ideally work. With the best version of a social platform in view, it may be easier to evaluate extant reform strategies and to propose new ones. This inquiry might proceed at the retail level (focusing on what an "ideal" or a "better" platform might look like) or at a more general level (asking how the digital ecosystem overall should be designed). With the first of these paths in mind, I introduce in this Part a conceptual framework of "islands of integrity" developed in sociological and political science studies of development. While that literature has not yielded any simple or single formula for reaching the aspiration it names, it still offers a useful lens for starting to think about well-functioning social platforms. Or so I hope to show in what follows.

A. Building Islands of Integrity in the Real World

In recent decades, concern about the legality and the quality of governance has shaped the agenda of international development bodies such as the World Bank.84Aziz Z. Huq, The Rule of Law: A Very Short Introduction 75–78 (2024). One strategy identified to enhance the quality of public administration centers on the idea of "islands of integrity" or "pockets of effectiveness" in sociopolitical environments that are "otherwise dominated by patrimonialism, corruption, and bureaucratic dysfunction."85Prasad, supra note 15, at 376. An island of integrity has been defined as an entity or unit (generally of government) that is "reasonably effective in carrying out [its] functions and in serving some conception of the public good, despite operating in an environment in which most agencies are ineffective and subject to serious predation . . . ."86David K. Leonard, ‘Pockets’ of Effective Agencies in Weak Governance States: Where Are They Likely and Why Does It Matter?, 30 Pub. Admin. & Dev. 91, 91 (2010); see also Michael Roll, The State That Works: A ‘Pockets of Effectiveness’ Perspective on Nigeria and Beyond, in States at Work: Dynamics of African Bureaucracies 365, 367 (Thomas Bierschenk & Jean-Pierre Olivier de Sardan eds., 2014) (“A pocket of effectiveness (PoE) is defined as a public organisation that provides public services relatively effectively despite operating in an environment, in which public service delivery is the exception rather than the norm.”). The normative intuition is that it is possible to seed islands of integrity, despite pervasive corruption, as a starting point for larger-scale reforms.

There are by now a wide variety of case studies of islands of integrity. Monica Prasad, for example, points to the Indian Institutes of Technology ("IITs"), an archipelago of meritocratic, technology-focused colleges across the subcontinent, as an instance in which an educational mission has been successfully pursued against "a context of patrimonialism and corruption."87Prasad, supra note 15, at 380. The IITs' mission is preserved and protected from distortion through selection strategies of "meritocratic decoupling" that sort both students and teachers on academic merit, alongside efforts to show how the institution benefits those whom it excludes.88Id. at 382–83.

In a different case study, Eliška Drápalová and Fabrizio Di Mascio identify a pair of municipalities in Spain as "islands of integrity."89Drápalová & Di Mascio, supra note 15, at 128. They contend that the key move in creating them was the fashioning of a "fiduciary relationship between mayors and city managers," which allowed for the development of a bureaucratic structure shaped by professional (rather than patrimonial) norms.90Id. at 129–30, 135. City managers, they find, offer "accountability and responsiveness" to elected leaders without compromising the integrity of service-oriented institutions.91Id. at 135. Similarly, Michael Roll maps the emergence in Nigeria of well-run agencies addressing food and drug regulation, on the one hand, and human trafficking, on the other, to demonstrate that islands of integrity can emerge even in very difficult circumstances, given the right leadership.92Roll, supra note 86, at 370–73.

Most, but not all, of these case studies concern real-world public administration, often at the local level.93One article applies the concept to public broadcasters in developing countries, but does not do so with enough detail to be useful. Cherian George, Islands of Integrity in an Ocean of Commercial Compromises, 45 Media Asia 1, 1–2 (2018). The generalizations drawn by the literature are concededly fragile: The heterogeneity of cultural, political, and institutional contexts makes inference unstable, at least at a useful level of granularity.94Leonard compiles a number of general lessons, but these are pitched at a very high level of abstraction. Leonard, supra note 86, at 93. Still, a couple of regularities do tentatively emerge from a review of the available case studies in the development literature.

Crudely stated, the “islands of integrity” literature underscores the importance of institutional means and leadership motives for resisting patrimonial or corrupt political cultures. First, an island of integrity needs to internalize control over its own workings in order to “create a culture of meritocracy and commitment to the organization’s mission.”95Prasad, supra note 15, at 376. Underpinning this culture, it seems, must be a clear understanding of the public goods that the agency or body is supposed to produce. The truism that leadership is key seems to hold particularly strongly.96Leonard, supra note 86, at 94 (noting the importance of “leadership, personnel management, resource mobilisation and adaptability”). Autonomy over personnel choice is also crucial in order to maintain that culture.97Roll, supra note 86, at 379.

Second, there is a consistent institutional need for tools to resist demands from powerful external actors who would capture a body for immediate political or economic gains unrelated to the public-regarding goals of the institution.98Id. at 377–78 (noting the role of tools for "political management"). Tools by which to mitigate such threats to institutional autonomy vary. Indian universities, Prasad found, tout the local jobs they create in cleaning and maintenance—even as they maintain the separation of student and faculty selection from local pressures—as a way of deflecting local politicos.99Prasad, supra note 15, at 385. Spanish city managers, Drápalová and Di Mascio explain, promise improvements in top-line municipal services to mayors who threaten their autonomy.100Drápalová & Di Mascio, supra note 15, at 135. In effect, reform is purchased in both cases by strategic payoffs to those who threaten its progress.

Just as it is important to work out how to build public-regarding institutional spaces in the real world, so too is it important to identify how to create such spaces in the virtual, digitally mediated world. Just as the bodies in India, Spain, and Nigeria need the motive and the means to keep the corroding forces of their surrounding environments at bay, so too does a social platform that strives to be an island of integrity need the leadership, internal culture, and means to create a non-exploitative, non-dominating structure while managing tail risk better than existing platforms do. Taken as metaphor, therefore, "islands of integrity" offer a template for the desirable end goal of social platform reform, as well as some modest clues about how to get there. Still, it is important not to make too much of the metaphor. The claim that the "islands of integrity" literature can inspire social platform reform is, at bottom, an argument from analogy, and one that needs to be tested carefully in application.

B. Digital Islands of Integrity: Two Examples

The aforementioned analogy gains force when one realizes that there already exist examples of digital islands of integrity online. The two most prominent are Wikipedia and the British Broadcasting Corporation ("BBC"). To be clear, neither is a quintessential social platform as I have used that term here. Nor does either operate at the scale of X or Instagram. But I offer a brief discussion of both by way of proof of concept.

Wikipedia emerged from the wreckage of an attempted for-profit online encyclopedia called Nupedia.101Emiel Rijshouwer, Justus Uitermark & Willem de Koster, Wikipedia: A Self-Organizing Bureaucracy, 26 Info., Commc'n & Soc'y 1285, 1291 (2023). The latter's assets (for example, domain names, copyrights, and servers) were subsequently placed in an independent, charitable organization, the Wikimedia Foundation ("WMF").102Id. at 1293. At first, corporate governance "emerged" organically from the efforts of those building the site, rather than being imposed from above.103Id. at 1298 (explaining that "bureaucratization emerges from interactions among constituents" of Wikipedia). A group of founders then "transformed their charismatic community into a bureaucratic structure" in which "power was diffused and distributed" across "a sprawling bureaucracy with a wide range of formal positions" in response to the perceived mission-related needs of the organization.104Id. at 1294. The organization's trajectory has also been characterized by moments of greater centralization. For example, in the early 2010s, the WMF's chief executive led an effort to make the project "more inclusive and more open," somewhat to the chagrin of existing contributors.105Id. at 1296. Wikipedia's governance history, that is, centers on a choice of corporate form that insulates leadership from external profit-related pressures, the selection of strong leaders, and the exercise of leadership to broaden and empower the organization's constituencies (potentially blunting criticism of the organization) so as to generate a certain kind of "corporate culture."106Cf. Pasquale Gagliardi, The Creation and Change of Organizational Cultures: A Conceptual Framework, 7 Organizational Stud. 117, 121–26 (1986) (exploring the meaning of the term "organizational value" and culture).

Even more directly relevant is the web presence of the BBC. The BBC produces thousands of new pieces of content each day for dissemination over a range of sites, such as BBC News, BBC Sport, BBC Sounds, BBC iPlayer, and World Service.107Alessandro Piscopo, Anna McGovern, Lianne Kerlin, North Kuras, James Fletcher, Calum Wiggins & Megan Stamper, Recommenders with Values: Developing Recommendation Engines in a Public Service Organization, Knight First Amend. Inst. (Feb. 5, 2024), https://knightcolumbia.org/content/recommenders-with-values-developing-recommendation-engines-in-a-public-service-organization [https://perma.cc/APX5-T9T2]. The corporation's charter defines its mission as serving all audiences by providing "impartial, high-quality and distinctive output and services which inform, educate and entertain."108Id. Like Wikipedia, the BBC is organized into a corporate form that is relatively impermeable by law to commercial pressures. To advance its charter goals, the BBC uses machine-learning recommender algorithms created by multi-disciplinary teams of data scientists, editors, and product managers.109Id. Once a recommender system has been built,110Id. Public service broadcasters such as the BBC cannot rely on "off-the-shelf" recommenders because they optimize for very different goals. Jockum Hildén, The Public Service Approach to Recommender Systems: Filtering to Cultivate, 23 Television & New Media 777, 787 (2022). editorial staff can offer "continuous feedback" on the design and operation of the recommender systems, both to identify legal compliance questions and to ensure that "BBC values" are advanced.111Piscopo et al., supra note 107.

Available accounts of this process—while perhaps a touch self-serving, because they are written by insiders—suggest that the organization strives to cultivate a distinctive cultural identity. It then leverages that identity as a means of advancing its values via algorithmic design. Specifically, an important part of this recommender design process focuses on empowering users to make their own choices and to avoid behaviors that are undesirable (from the service's perspective). The BBC's recommender tools are designed to permit personalization, albeit only to the extent that doing so can "coexist with the BBC's mission and public service purposes."112Id. An insider informant speaking anonymously reported that the BBC understands itself as "'morally obliged' to provide their users with the possibility of tweaking their recommendations."113Hildén, supra note 110, at 786. In the same study, the employee of an unnamed European public broadcaster that managed a recommender system reported that their system proactively identified "users who consume narrow and one-sided media content and recommend[ed to] them more diverse content."114Id. at 788. That is, the system was designed to anticipate and mitigate, to an extent, the possibility of extremism at the tails of the user distribution, while also preserving users' influence over the content of their feeds. This stands in stark contrast to systems designed to maximize engagement even where doing so predictably drives users to more extreme (and even dangerous) content.
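To make concrete the kind of intervention the anonymous broadcaster describes, consider the following minimal sketch of a diversity-promoting recommender. Every element of it (the function names, the topic-share threshold, the data shapes) is an illustrative assumption of mine, not a description of the BBC's or any broadcaster's actual system.

```python
# Minimal sketch of a diversity-promoting recommender of the kind the
# anonymous broadcaster describes. All names, thresholds, and data
# shapes are illustrative assumptions, not any broadcaster's actual code.
from collections import Counter

def consumption_is_narrow(history, threshold=0.8):
    """Flag a user whose recent consumption is dominated by one topic."""
    counts = Counter(item["topic"] for item in history)
    _, top_count = counts.most_common(1)[0]
    return top_count / len(history) >= threshold

def recommend(history, catalog, k=5):
    """Return the top-k items by editorial score; for narrow consumers,
    draw only from topics the user has not already engaged with."""
    if consumption_is_narrow(history):
        seen = {item["topic"] for item in history}
        pool = [c for c in catalog if c["topic"] not in seen]
    else:
        pool = list(catalog)
    return sorted(pool, key=lambda c: c["score"], reverse=True)[:k]

history = [{"topic": "politics"}] * 9 + [{"topic": "sport"}]
catalog = [
    {"title": "Budget analysis", "topic": "politics", "score": 0.9},
    {"title": "Climate explainer", "topic": "science", "score": 0.7},
    {"title": "New play reviewed", "topic": "culture", "score": 0.6},
]
print([c["title"] for c in recommend(history, catalog, k=2)])
# ['Climate explainer', 'New play reviewed']
```

The point of the sketch is only that "preserving users' influence" and "mitigating narrowness" can be implemented as separate, inspectable steps in a ranking pipeline, rather than folded into an opaque engagement objective.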

I do not want to strain the parallels between the "islands of integrity" literature and these digital examples too far. Both examples nevertheless point to ways in which the means and the motives to sustain an "island of integrity" can be realized in an online world. Both services are, for example, explicitly oriented by their leadership toward a public service mission. Both also opted for corporate forms that provide some protection against potentially compromising market forces. Both have systems in place to preserve and transmit a valued internal culture, while buffering themselves somewhat against the risks of distorting external or internal pressure. Finally, both seem to have successfully cultivated durable public-service cultures by hard-wiring them into bureaucratic structures or, alternatively, into algorithmic designs.

III.  The Governance of Social Platforms: Aspiring to Build Islands of Algorithmic Integrity

With the general idea of "islands of integrity" in hand, along with the specific proofs of concept described in Section II.B, it is possible to ask how certain social platforms might be reformed with an ideal of islands of algorithmic integrity in mind. That is, how might we move toward alternative platform designs and operations that address the normative concerns outlined in Part I? What kind of private governance might be imagined that mitigates exploitation and domination concerns, while addressing the tail risk of extremism as best we can? Could legal regulation play a role? Again, it would be a mistake to frame these questions as mechanical applications of the "islands of integrity" literature. It is better to think of them as falling out of the same institutional design goal.

I approach this inquiry in two stages. I begin by critiquing leading regulatory strategies observed in the American states and the European Union from an "islands-of-algorithmic-integrity" standpoint. At bottom, these critiques draw out the ways in which those regulatory strategies treat social platforms as potential sources of harm, largely without an account of the positive role platforms could play. Second, I draw together a number of possible tactics by which public or private actors might help build islands of algorithmic integrity. My positive accounting here is concededly incomplete. My hope, however, is that the effort serves as initial evidence of the fruitfulness of an approach oriented toward the aspiration of islands of algorithmic integrity.

A. The Limits of Existing Platform Regulation Regimes

Since 2020, social platforms have become an object of regulatory attention on both sides of the Atlantic. Three main regulatory strategies can be observed: new state regulations purportedly targeting "censorship,"115Mary Ellen Klas, DeSantis Proposal Would Protect Candidates Like Trump from Being Banned on Social Media, Mia. Herald, https://www.miamiherald.com/news/politics-government/state-politics/article248952689.html [https://web.archive.org/web/20221017063802/https://www.miamiherald.com/news/politics-government/state-politics/article248952689.html] (quoting Florida governor Ron DeSantis). fresh efforts to extend common law tort liability to social platforms, and a risk-based regulatory regime promulgated by the European Union. Broadly speaking, all such legal intervention is premised on concern about platforms' society-wide effects on listeners, although deontic concerns may play a role too. The tools seized upon for these tasks, however, have been inadequate. Their shortfall can be traced to the way in which they focus exclusively on platform harms (missing the importance of benefits), misconstrue those harms, and fail to incentivize the formation of platforms with the means and the motive to mitigate documented harms while resisting exploitation or domination.

  1. Regulating Ex Ante for Harms

The 2022 Digital Services Act ("DSA") offers a first model of ex ante platform regulation. In important part, it trains on the potential harms of recommender systems without any account of their positive effects. It contains a suite of new legal obligations: Article 25, for example, prohibits any digital platform design that "deceives or manipulates the recipients of their service or in a way that otherwise materially distorts or impairs the ability of the recipients of their service to make free and informed decisions."116Digital Services Act, supra note 13, at art. 25 § (1). Article 38 provides a right to opt out of profiling-based recommender systems.117Id. at art. 38 (mandating "at least one option for each of their recommender systems which is not based on profiling as defined in Article 4, point (4), of Regulation (EU) 2016/679"). Articles 14 and 26 set out disclosure obligations for certain companies.118Id. at art. 14 § (1) and art. 26 § (1)(d). Most importantly for present purposes, Article 34 of the DSA requires "very large online platforms and . . . online search engines" to conduct an annual assessment of any systemic risks stemming from the design or functioning of their services, including negative effects on civic discourse, electoral processes, or fundamental rights.119Id. at art. 34. For a close reading of Article 34, see Neil Netanel, Applying Militant Democracy to Defend Against Social Media Harms, 45 Cardozo L. Rev. 489, 566 (2023).

At first blush, the DSA seems oriented toward the creation of islands of algorithmic integrity. But there are reasons to be skeptical of conceptualizing the project this way. To begin with, the Article 38 opt-out is unlikely to be exercised by the "supersharers" at the tails of the ideological distribution who are most responsible for the diffusion of fake news.120Baribi-Bartov et al., supra note 55, at 979. Self-help remedies never avail those already fixated on harming themselves and others. Moreover, Article 34 risk assessments impose no clear affirmative command to build epistemically robust speech environments.121But see Netanel, supra note 119, at 566–67 (proposing that platforms be required to make "recommender system modifications to improve the prominence of authoritative information, including news media content that independent third parties have identified as trustworthy"). Netanel, however, is proposing in this passage an extension of Article 34 rather than offering a gloss on it, so far as I can tell. In effect, the act offers no account of how social platforms could or should enable democratic life. Even more problematic, the DSA ultimately leans on platforms themselves to accurately document and remedy their own flaws. It does not seem excessively cynical to predict that profit-oriented companies will not fall over themselves to flag the negative externalities of their own products in publicly available documents, much less flagellate themselves over how to remedy them. The DSA, in short, is promising as theory. But it may fall substantially short in practice.

  2. Regulating Ex Ante for Balance

Both Florida and Texas have enacted statutes intended to limit platforms' ability to "deplatform" a person for violations of the platforms' terms of service.122Florida defines "deplatform" as "the action or practice by a social media platform to permanently delete or ban a user or to temporarily delete or ban a user from the social media platform for more than 14 days." Fla. Stat. § 501.2041(1)(c) (2021). Texas's law has a similar provision. See H.B. 20, 87th Leg., Reg. Sess. (Tex. 2021) (prohibiting social media platforms from censoring users or a user's expressions based on the viewpoint expressed in the content). The Florida statute, for example, prohibits platforms from "willfully deplatform[ing] a candidate for office who is known by the social media platform to be a candidate, beginning on the date of qualification and ending on the date of the election or the date the candidate ceases to be a candidate."123Fla. Stat. § 106.072(2) (2021). In its July 2024 decision in Moody v. NetChoice, the U.S. Supreme Court cast doubt on the constitutionality of such measures.124Moody v. NetChoice, LLC, 603 U.S. 707 (2024). While litigation remains ongoing as this essay goes to press, it seems likely that the deplatforming provisions of both statutes will not survive.

Relying on familiar doctrinal tools from the First Amendment toolkit, a majority of the Moody Court reached two conclusions relevant here. First, Justice Elena Kagan's majority opinion explained that when an entity "provide[s] a forum for someone else's views" and is thereby "engaged in its own expressive activity, which the mandated access would alter or disrupt," a First Amendment interest is implicated.125Id. at 728. Second, the Court held that the government has no constitutionally cognizable interest "in improving, or better balancing, the marketplace of ideas."126Id. at 732. This anti-distortion argument is familiar from the campaign finance context.127See, e.g., Citizens United v. FEC, 558 U.S. 310, 340–41 (2010) ("By taking the right to speak from some and giving it to others, the Government deprives the disadvantaged person or class of the right to use speech to strive to establish worth, standing, and respect for the speaker's voice."). There, however, the argument is generally deployed by conservative justices to resist governmental efforts to advance an equality interest in political speech, given such efforts' "dangerous[] and unacceptable" effects.128Id. at 351. In the Florida and Texas cases, by contrast, the argument was leveled against efforts by Republican state governments to enforce their understanding of balance on platform-based speech. The argument's ideological valence thus flipped from campaign finance to platform regulation.

Independent of these familiar constitutional logics, there are more empirically grounded reasons to conclude that Florida’s and Texas’s efforts to mitigate platforms’ curatorial capacity are likely to undermine, rather than promote, the emergence of islands of algorithmic integrity. These reasons run parallel to Justice Kagan’s reasoning, but are distinctive in character.

The first reason is banal and empirical. The available research suggests that conservative voices in the United States are asymmetrically responsible for the dissemination of fake news.129Baribi-Bartov et al., supra note 55, at 979 ("Supersharers had a significant overrepresentation of women, older adults, and registered Republicans."); González-Bailón et al., supra note 57, at 397 ("We also observe on the right a far larger share of the content labeled as false by Meta's 3PFC."). There is more to be said about the rhetorical use of "balance" claims in law and politics, and about their dynamic effects on the tendency of people to go to extremes. To the extent that Florida and Texas leaned on a conception of "balance" in the speech environment, they did so by culpably ignoring the platforms' interest in a generally reliable and trustworthy news environment. Enforcement of the Florida and Texas laws, to the contrary, seems likely to lead (all else being equal) to a decline in the quality of those platforms. That is to say, by a sort of Gresham's law for political speech, an increasing proportion of misleading speech on a platform will tend to drive out those concerned with truthfulness. Such an effect creates a vicious circle of sorts, one that is absent from the campaign finance context.

This argument might be supplemented by a further observation. As I show below, there are a number of fairly obvious affirmative measures that private and public actors can take if they are truly concerned with the creation of islands of algorithmic integrity.130See infra Part III.B. When a government fails to take these needful steps while affirmatively adopting counterproductive measures, there is some reason to doubt the integrity of its claim to be acting in the public interest. The islands-of-algorithmic-integrity frame can be put to work here as a lens through which to understand the gap between a state's professed interests and its actual ambitions.131Cf. Geoffrey R. Stone, Free Speech in the Twenty-First Century: Ten Lessons from the Twentieth Century, 36 Pepp. L. Rev. 273, 277 (2009) (noting that "government officials will often defend their restrictions of speech on grounds quite different from their real motivations for the suppression, which will often be to silence their critics and to suppress ideas they do not like"). If, as Justice Kagan once suggested in her academic role, First Amendment doctrine is best understood as "a series of tools to flush out illicit motives and to invalidate actions infected with them" and a "kind of motive-hunting,"132Elena Kagan, Private Speech, Public Purpose: The Role of Governmental Motive in First Amendment Doctrine, 63 U. Chi. L. Rev. 413, 414 (1996). then the failure to pick low-hanging fruit while making elaborate and far-fetched claims about one's integrity-related aims is telling. To the extent that it identifies some of that low-hanging fruit, the islands-of-algorithmic-integrity frame grafts on comfortably to advance those goals.

A second reason to be skeptical of measures such as Florida's and Texas's is conceptual in character: Balance-promoting measures of this ilk help themselves to the assumption that there is a neutral baseline that the platform's algorithm has disturbed. But "the most common choice of baseline fundamentally depends on the state of some components of the system," and assumes away the effects of past bias and amplification.133Lum & Lazovich, supra note 26. Accordingly, the Florida and Texas laws' presupposition of a neutral baseline of undistorted speech is misplaced; it is better to focus instead on the structural qualities associated with islands of integrity. Where a government asserts an interest in "neutrality" or "fairness" in the context of social platforms, its arguments should be viewed as pro tanto dubious, since it is striving to return to a status quo that, for technological reasons, is imaginary. A version of this baseline difficulty arises in the campaign finance context, albeit for different reasons.134For a nuanced account of the difficulty of curbing the "bad tendencies of democracy," see David A. Strauss, Corruption, Equality, and Campaign Finance Reform, 94 Colum. L. Rev. 1369, 1378–79 (1994). That version, however, lacks the sociotechnical foundation that is present in the platform context.

  3. Tort Liability for Harmful Algorithmic Design

The Texas and Florida statutes impose ex ante controls on social platforms. An alternative regulatory strategy involves the ex post use of tort liability to incentivize "better" (by some metric) behavior. Platforms benefit from a form of intermediate immunity from tort liability under Section 230 of the Communications Decency Act.13547 U.S.C. § 230; see also Zeran v. Am. Online, Inc., 129 F.3d 327, 328 (4th Cir. 1997) (holding that Section 230 immunized an online service provider from liability for content appearing on its site created by another party). Section 230 immunity is likely wider than the immunity from liability available under the First Amendment,136Cf. Note, Section 230 as First Amendment Rule, 131 Harv. L. Rev. 2027, 2030 (2018) (noting that "[j]udges and academics are nearly in consensus in assuming that the First Amendment does not require § 230"). although the scope of constitutionally permissible tort liability remains incompletely defined.137Jack M. Balkin, Free Speech Is a Triangle, 118 Colum. L. Rev. 2011, 2046 (2018).

Recent lawsuits have tried to pierce Section 230 immunity from various angles. Some have tried to exploit federal statutory liability for aiding and abetting political violence.138See, e.g., Twitter, Inc. v. Taamneh, 598 U.S. 471, 503 (2023) (rejecting that reading of federal statutory tort liability). Others lean on common law tort theories, contending that Section 230 does not extend to suits that turn on platforms' use of algorithmic controls to sequence and filter content. For example, in an August 2024 decision, a panel of the Third Circuit reversed a district court's dismissal of a common law tort complaint against TikTok for its promotion of content that played a role in the death of a minor.139Nylah Anderson watched a TikTok video on the "Blackout Challenge" and died imitating what she saw. Anderson v. TikTok, Inc., 116 F.4th 180, 181 (3d Cir. 2024). The circuit court held that Section 230 did not extend to a claim that TikTok's "algorithm was defectively designed because it 'recommended' and 'promoted' the Blackout Challenge."140Id. at 184. The promotion of the Blackout Challenge, said the panel, was "TikTok's own expressive activity," and as such fell outside Section 230's scope.141Id. This construction of Section 230 has been severely criticized.142See, e.g., Ryan Calo, Courts Should Hold Social Media Accountable—But Not By Ignoring Federal Law, Harv. L. Rev. Blog (Sept. 10, 2024), https://harvardlawreview.org/blog/2024/09/courts-should-hold-social-media-accountable-but-not-by-ignoring-federal-law [https://perma.cc/CFE6-3ZDZ]. It is, for one thing, far from clear how the ruling can be squared with Section 230's seemingly unambiguous command that no platform may "be treated as the publisher or speaker of any information provided by another information content provider."14347 U.S.C. § 230(c)(1) (emphasis added).

Reflection on the prospect of tort liability that is delimited in this fashion yet consistent with Section 230 (especially with the idea of "islands of algorithmic integrity" in mind) offers some further reasons for skepticism about the Third Circuit's decision, and about the consequences of tort liability for algorithmic design more generally. It is far from clear how algorithmic-design-based liability of the sort the Third Circuit embraced can be cabined. Every algorithmic decision changes the overall mix of content on the platform. It is therefore always the case that such decisions in some sense "cause" the appearance of objectionable content.144One might interpose here some notion of algorithmic proximate cause. That presents, to say the least, rather difficult questions of doctrinal design. Indeed, any mechanism imposed to limit one sort of harmful speech necessarily increases the likelihood that other sorts of speech (including other sorts of harmful speech) will feature prominently on the platform. For example, a decision to filter out speech endorsing political violence is (all else being equal) going to increase the volume of speech conducive to adolescent mental health problems. In this way, the Third Circuit's decision (at least as written) has the practical effect of carving all algorithmic content-moderation activity out of Section 230's scope. It is hard to imagine that this comports with Congress's enacting intent.

Indeed, tort liability for algorithmic decisions will inevitably push platforms to rely more on networks, rather than algorithms, as drivers of content. But the empirical evidence suggests that network-based platform designs are more, not less, likely to experience high levels of fake news, and that they are less amenable to technical fixes.145See supra text accompanying notes 44–65. Tort liability, at least as understood by the Third Circuit in the TikTok case, therefore pushes platforms away from socially desirable equilibria. Paradoxically, all else being equal, it is likely to increase, not decrease, the volume of deeply troublesome material of the sort at issue in that very case. More generally, it is again hard to see how liability for algorithmic design decisions, all else being equal, is socially desirable.

B. The Possible Vectors of Algorithmic Integrity

The fact that state and national governments have opted for partial or unwise regulatory strategies does not mean that there are no promising paths forward. To the contrary, the examples examined in Part II suggest a range of useful reforms. I briefly outline three here.

To begin with, the examples of Wikipedia and the BBC suggest that it may be possible to build at least small-scale islands of algorithmic integrity in either the private or the public sector. Those examples further suggest that, whether state or private in character, such an island needs mechanisms to shield itself from the pressure to maximize profits. An entity that is exposed to the market for corporate control is unlikely to be able to resist commercial pressures for long.

Corporate form hence matters. For example, social platforms' incentive to maximize engagement, and hence advertising revenue, has been "critical" to driving the dissemination of radicalizing and hateful speech.146Daron Acemoglu & Simon Johnson, Power and Progress 362 (2023). The transformation of Twitter into X after its purchase by Elon Musk, and the subsequent degradation and coarsening of discourse on the platform, offer an object lesson in the perils that an unfettered market poses for islands of algorithmic integrity.147There is some evidence that X systematically favored right-leaning posts in late 2024, suggesting a link between corporate control and political distortion. Timothy Graham & Mark Andrejevic, A Computational Analysis of Potential Algorithmic Bias on Platform X During the 2024 US Election (Queensland Univ. of Tech., Working Paper, 2024), https://eprints.qut.edu.au/253211. Yet the market for corporate control, viewed through the lens of the efficient capital markets hypothesis, is commonly treated as an unproblematic good.

One of the main lessons of the islands of integrity literature, however, is the need for well-motivated leadership of the sort that has been described at Wikipedia and the BBC. It is hard to see how such motivation survives under the shadow of potential corporate takeover.

Second, islands of integrity require the right means (or tools), as well as the right motive. The use of algorithmic tools to curate a platform creates such means in a way that reliance on network effects does not. It is thus a mistake to assume, as the Third Circuit seems to have done in the TikTok case, that an algorithmically managed platform is worse than a network-based one. As Part I illustrated, the empirical evidence suggests that algorithmically managed platforms are generally no more polluted by misinformation than ones driven by users' networks.148Budak et al., supra note 52, at 48; accord Hosseinmardi et al., supra note 47, at 1. Quite the contrary.

Moreover, a social platform built around an algorithm may have tools to improve its epistemic environment that a network-based platform lacks. For instance, a 2023 study found that certain “algorithmic deamplification” interventions had the potential to “reduce[] engagement with misinformation by more than [fifty] percent.”149Benjamin Kaiser & Jonathan Mayer, It’s the Algorithm: A Large-Scale Comparative Field Study of Misinformation Interventions, Knight First Amend. Inst. (Oct. 23, 2023), https://knightcolumbia.org/content/its-the-algorithm-a-large-scale-comparative-field-study-of-misinformation-interventions [https://perma.cc/Y4KU-76BY]. Another example of an instrument for epistemic integrity is, somewhat surprisingly, a feature of Facebook’s algorithm, which has baked in a preference for friends-and-family content that “appears to be an explicit attempt to fight the logic of engagement optimization.”150Narayanan, supra note 10, at 31.
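The mechanics of deamplification are simple to state. The sketch below is my own stylized illustration of the general technique (downweighting, rather than removing, content that a classifier flags); the field names, the classifier threshold, and the fifty percent penalty are assumptions chosen for illustration, not the design the cited study actually tested.

```python
# Minimal sketch of algorithmic deamplification in a feed ranker.
# Fields, threshold, and penalty are illustrative assumptions.

def rank_feed(items, deamp_penalty=0.5, misinfo_threshold=0.8):
    """Order candidates by engagement score, cutting the reach of items
    a classifier flags as likely misinformation instead of removing them."""
    def adjusted(item):
        score = item["engagement_score"]
        if item["p_misinfo"] >= misinfo_threshold:
            score *= (1 - deamp_penalty)  # halve predicted reach
        return score
    return sorted(items, key=adjusted, reverse=True)

candidates = [
    {"id": "viral-hoax", "engagement_score": 0.92, "p_misinfo": 0.95},
    {"id": "local-news", "engagement_score": 0.60, "p_misinfo": 0.05},
    {"id": "explainer", "engagement_score": 0.75, "p_misinfo": 0.10},
]
print([i["id"] for i in rank_feed(candidates)])
# ['explainer', 'local-news', 'viral-hoax']
```

Note what such an intervention does not require: no content is deleted and no user is banned; the platform simply declines to lend flagged content its amplification.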

Third, there is a range of tailored reforms that precisely target ways in which social platforms stand in asymmetrical relations of exploitation and dominance to their users. As a very general first step, Luca Belli and Marlena Wisniak have proposed the use of “nutrition labels,” detailing key parameters of platform operation as a way of enabling better informed consumer choice between platforms.151Luca Belli & Marlena Wisniak, What’s in an Algorithm? Empowering Users Through Nutrition Labels for Social Media Recommender Systems, Knight First Amend. Inst. (Aug. 22, 2023), https://knightcolumbia.org/content/whats-in-an-algorithm-empowering-users-through-nutrition-labels-for-social-media-recommender-systems [https://perma.cc/N7MW-SEVT]. This kind of notice-based strategy, while plausible to implement, assumes a measure of user choice over which platform to use. At present, such choice is largely illusory because of the market dominance of a small number of platforms.152Lina M. Khan, The Separation of Platforms and Commerce, 119 Colum. L. Rev. 973, 976 (2019) (“A handful of digital platforms exert increasing control over key arteries of American commerce and communications.”). It is also hard to see how consumers, particularly those already at the ideological margin, could be persuaded to make the right kind of choice. Inducing more competition, and hence more consumer choices, in social platforms would give notice-oriented measures more bite. Some work has been done on potential varieties of platform design,153For a recent survey of other possible models of “decentraliz[ed]” platform governance, see Ethan Zuckerman & Chand Rajendra-Nicolucci, From Community Governance to Customer Service and Back Again: Re-Examining Pre-Web Models of Online Governance to Address Platforms’ Crisis of Legitimacy, 9 Soc. Media + Soc’y, July–Sept. 2023, at 1, 7–9. but there remains ample room for inquiry and improvement. The basic point, though, is that some combination of increased competition and better consumer-facing notices would better allow certain users to select among different social platforms based on their own preferences—although it is hard to be confident that the right users, so to speak, will be those aided.
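As a concrete gloss on the notice idea, the following is a purely hypothetical sketch of the kinds of fields a recommender "nutrition label" might disclose. The schema and every field name are my own invention for illustration; Belli and Wisniak's proposal should be consulted for its actual contents.

```python
# Hypothetical "nutrition label" for a recommender system, expressed as
# a simple data structure. All field names and values are invented for
# illustration and are not Belli and Wisniak's actual schema.
from dataclasses import dataclass, field

@dataclass
class RecommenderNutritionLabel:
    optimization_target: str    # what the ranker maximizes
    uses_profiling: bool        # personalization from behavioral data
    data_retention_days: int    # how long behavioral data is kept
    misinfo_deamplified: bool   # whether flagged content loses reach
    user_controls: list = field(default_factory=list)  # knobs exposed to users

label = RecommenderNutritionLabel(
    optimization_target="predicted engagement",
    uses_profiling=True,
    data_retention_days=180,
    misinfo_deamplified=False,
    user_controls=["chronological feed", "topic mute", "reset history"],
)
```

The design intuition is that of a food label: a small, standardized set of disclosures that makes otherwise invisible operating parameters comparable across platforms at the point of consumer choice.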

There are also steps that a well-motivated platform manager can take. Within a platform, for example, the BBC's strategy of promoting personalization could be adopted and redeployed in a number of ways. For instance, bots, or "user-taught" agents, could be supplied to help individual users curate the shape of their feeds over time.154Kevin Feng, David McDonald & Amy Zhang, Teachable Agents for End-User Empowerment in Personalized Feed Curation, Knight First Amend. Inst. (Oct. 10, 2023), https://knightcolumbia.org/content/teachable-agents-for-end-user-empowerment-in-personalized-feed-curation [https://perma.cc/RAN8-QT7S]. These bots, however, might be constrained by an understanding of the platform's mission that excludes the normatively troublesome activity characterizing the tails of the ideological distribution.

Finally, another way of mitigating exploitation concerns focuses on advertisers rather than users. Firms advertising on platforms are often unaware their products or services are marketed next to fake news, despite having an aversion to that arrangement.155Wajeeha Ahmad, Ananya Sen, Charles Eesley & Erik Brynjolfsson, Companies Inadvertently Fund Online Misinformation Despite Consumer Backlash, 630 Nature 123, 125–28 (2024). They lack, however, information on when and how this occurs. Increased disclosure by platforms on “whether . . . advertisements appear on misinformation outlets,” as well as increased “transparency for consumers about which companies advertise” there, provides the potential to stimulate a collective shift to a more truthful equilibrium.156Id. at 129. Such disclosures help ensure that “the means of ensuring legibility [will not completely] fade into the background of the ordinary patterns of our li[ves],”157Henry Farrell & Marion Fourcade, The Moral Economy of High-Tech Modernism, 152 Dædalus 225, 228 (2023). as platform affordances become too banal to notice. Such disclosures, finally, might be mandated by law, potentially as a means of mitigating fraud concerns related to platform use.

Conclusion

In this essay, I have tried to offer an affirmative vision of social platform governance in the long run, or at least the seeds of such a vision. No doubt this vision is leagues away from the grubby, venal, and hateful reality of social platforms today. But one of the functions of scholarship is to generate plausible pathways away from a suboptimal institutional status quo. The articulation of alternatives is itself of value.

As I have suggested, drawing on sociological and political science literature on islands of integrity in public administration allows us to see some of the limits of existing regulatory strategies with respect to social platforms. Doing so opens up new opportunities for improved public and private governance. Of course, the model of islands of integrity in a public administrative context cannot be mechanically transposed over to the platform context. But by offering us a new North Star for reforming governance efforts, I hope it can advance our understanding of how to build platforms fit for our complex, yet (perhaps still) fragile democratic moment.

98 S. Cal. L. Rev. 1287


*  Frank and Bernice J. Greenberg Professor of Law, University of Chicago Law School, and associate professor, Department of Sociology. Thanks to Erin Miller for extensive and illuminating comments, and to participants in the symposium—in particular Yasmin Dawood—for terrific questions and conversation. The editors of the Southern California Law Review, in particular Michelle Solarczyk and Tyler Young, did exemplary work in making this essay better. The Frank J. Cicero Foundation provided support for this work.