First Amendment Governance: Social Media, Power, and a Well-Functioning Speech Environment

Introduction

In Moody v. NetChoice, LLC,1Moody v. NetChoice LLC, 603 U.S. 707 (2024). the Supreme Court declared, in a majority opinion by Justice Kagan, that “it is critically important to have a well-functioning sphere of expression, in which citizens have access to information from many sources. That is the whole project of the First Amendment.”2Id. at 732–33. In Moody, social media platforms claimed that their expressive freedom had been violated by state laws mandating certain content-moderation policies.3Id. at 713–17. Although Moody was decided on the criteria required to bring a facial challenge, it nonetheless provided some direction with respect to what the government can and cannot do vis-à-vis the First Amendment rights of social media platforms.4Id. at 717–19.

This decision also implicitly raises the question of what it means for a democracy to have a well-functioning political speech environment in the digital era. This question seems particularly urgent given the profound dilemma that social media poses for democratic theory and practice. On the one hand, social media democratizes communication and promotes egalitarianism by reducing the cost of speech.5See Eugene Volokh, Cheap Speech and What It Will Do, 104 Yale L.J. 1805 (1995); Eugene Volokh, What Cheap Speech Has Done: (Greater) Equality and Its Discontents, 54 U.C. Davis L. Rev. 2303, 2305 (2021). It provides new avenues for expression and association, thereby strengthening public discourse. It has also been harnessed to enable citizen participation in political decision-making.6See Hélène Landemore, Open Democracy and Digital Technologies, in Digital Technology and Democratic Theory 62, 66 (Lucy Bernholz et al. eds., 2021); Roberta Fischli & James Muldoon, Empowering Digital Democracy, 22 Persps. on Pol. 819, 819 (2024). On the other hand, social media can undermine democratic functioning, giving rise to various challenges such as disinformation, echo chambers, troll armies, bots, microtargeting, citizen distrust, and foreign election interference.7See, e.g., Cass R. Sunstein, #Republic: Divided Democracy in the Age of Social Media (2017); Nathaniel Persily, Can Democracy Survive the Internet?, 28 J. Democracy 63 (2017); Richard L. Hasen, Cheap Speech: How Disinformation Poisons Our Politics—and How to Cure It (2022). As various attempts at election subversion, including the attack on the Capitol, demonstrate, election disinformation can have damaging and destabilizing effects on democracy and can diminish the confidence that citizens have in elections. The ongoing stability of political institutions should not be taken for granted in our era of democratic decline.8See, e.g., Tom Ginsburg & Aziz Z. Huq, How to Save a Constitutional Democracy (2018); Steven Levitsky & Daniel Ziblatt, How Democracies Die (2018).

Although free speech has always posed this particular dilemma—both essential for, yet potentially injurious to, democracy—key features of the new digital era raise questions as to whether conventional regulatory approaches are sufficient to safeguard the public sphere. Social media platforms enjoy unprecedented asymmetries of wealth and power as compared to their users. These platforms play a crucial role in providing and regulating the online speech environment9See Jack M. Balkin, Free Speech Is a Triangle, 118 Colum. L. Rev. 2011, 2011 (2018). and, hence, in constructing a significant dimension of public discourse. Beyond their dominance, these powerful social media platforms were not created to provide a healthy expressive realm for democracy. Instead, they engage in “surveillance capitalism”—a behavioral advertising business model that sells users’ data for immense profits.10See Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power 16 (2019). This profit motive arguably renders the platforms unreliable as self-regulators.11See Abby K. Wood & Ann M. Ravel, Fool Me Once: Regulating “Fake News” and Other Online Advertising, 91 S. Cal. L. Rev. 1223, 1237, 1245 (2018). The outsized power of social media platforms to shape the expressive sphere, combined with their non-public-regarding orientation, raises genuine concerns about the ongoing health of the political marketplace of ideas.

While the overwhelming power of the state has always—and rightly—been viewed as particularly perilous for the freedom of speech, dominant private actors, particularly those who either control or have disproportionate access to the means of communication, can likewise pose a threat to free speech. Is it possible to address such asymmetries of power consistent with the First Amendment? Should social media platforms be regulated to provide for the type of speech environment necessary for democracy? What are the normative attributes of a well-functioning sphere of political expression? More generally, what should be done to protect listeners, a category of democratic actor that tends to receive less scholarly attention than speakers?

This Article offers a preliminary analysis of these issues. It is organized in three parts. Part I begins by providing a brief overview of First Amendment doctrine as it applies to speakers and listeners. In addition, it outlines the three principal values—democracy, autonomy, and truth-seeking—that animate the First Amendment. For the purposes of the ensuing analysis, this Article adopts the view that the First Amendment is geared to promoting democratic self-government. Part I then sets out a normative account of a healthy expressive realm. A well-functioning political speech environment for speakers and listeners, I suggest, is one that is free of domination and coercion and in which acute asymmetries in political and economic power do not distort the capacity of individuals to engage in self-government, principally with respect to three central activities: (1) informed voting; (2) discussion and deliberation; and (3) meaningful participation. I claim further that the speech environment ought to protect individuals’ liberty, equality, epistemic, and nondomination interests in order to foster a healthy sphere of expression for these self-governing activities.

While this Article sets out an admittedly idealized account of what a well-functioning political speech environment would entail, and while such an account may never be attained in full (or even in part), a normative theory provides, I suggest, a useful benchmark by which to assess current challenges and their possible regulatory solutions.12To be sure, the idealized account offered here does not on its own furnish a roadmap for reform efforts; its ambition is instead cabined to identifying normative objectives and the problematic features of the world to which such objectives apply, following what Jacob Levy has described as “a back and forth process between cases and principles, evils and ideals.” Jacob T. Levy, There Is No Such Thing as Ideal Theory, 33 Soc. Phil. & Pol’y 312, 328 (2016). To this end, Part I also identifies certain challenges posed by the digital public sphere, and, in addition, advances a claim of “digital exceptionalism”—the idea that the online world of expression has distinctive features that not only distinguish it from the non-digital world but that also pose unique and profound difficulties for the attainment of a well-functioning expressive realm.

Part II turns to First Amendment jurisprudence to see whether it enables the government to address the challenges posed by the digital world so as to provide for a well-functioning political speech environment. It begins by describing the positive conception of the First Amendment, under which the state is viewed as having an affirmative role in protecting the democratic public sphere from the distortive influence of powerful private entities. Part II then offers a snapshot view of the current law of public discourse, focusing in particular on campaign finance regulation and the Moody decision, to show that the Court has largely abandoned the positive conception in favor of an approach that prohibits the government from ensuring a greater diversity of expression.

While the Court’s approach protects listeners from the power of the state, it gives rise to the troubling conundrum that the political speech environment is left unprotected not only from the dominant power of private tech giants but also from the deficits of the digital public sphere. Neither the state nor the platforms protect listeners from the effects of acute asymmetries of private power. Indeed, many regulatory responses to the challenges of digital exceptionalism would likely fall afoul of the First Amendment. For this reason, the sizeable gap between the normative ideal of a well-functioning political speech environment and the often disheartening reality of the digital public sphere cannot be closed by contemporary First Amendment doctrine.

In response to this conundrum, Part III makes an argument for “countervailance,” which is, in essence, the idea that certain mechanisms could counter, or at least lessen, these asymmetries in power and their resulting deficits such that listeners’ interests are better protected, even if that protection does not rise to the level of establishing the kind of equality needed for self-governance. I briefly consider a suite of countervailing mechanisms—including disclosure and transparency rules, a narrow prohibition of false election speech, strategies to manage deepfakes, state-led incentive structures and norms, public jawboning, and civil society efforts—that can be deployed by public entities, social media platforms, and civil society institutions. Given First Amendment constraints, however, these measures are necessarily modest in their scope and cannot serve as full-blown solutions to the challenges of digital exceptionalism.

I. A Well-Functioning Speech Environment and Its Challenges

This Part sets out a normative account of a well-functioning political speech environment. It also argues for “digital exceptionalism”—the idea that the challenges faced by the digital public sphere are unique and may therefore require a tailored regulatory response. To ground the discussion, I begin with a brief overview of First Amendment values and doctrine as they apply to speakers and listeners.

A. Speakers, Listeners, and the First Amendment

In his philosophical examination of the freedom of expression, T.M. Scanlon identifies three groups of interests: those of participants, audiences, and bystanders.13See T.M. Scanlon, Jr., Freedom of Expression and Categories of Expression, 40 U. Pitt. L. Rev. 519, 520 (1979). Burt Neuborne’s Madisonian reading of the First Amendment likewise identifies a range of participants in a “neighborhood” of expressive freedom, including, most prominently, speakers and listeners.14See Burt Neuborne, Madison’s Music: On Reading the First Amendment 100 (2015). For Neuborne, listeners ought to be treated as equal partners, who, like speakers, require expressive freedom to develop their own identities and preferences.15See id. Speakers and listeners thus go hand in hand: the “free flow of ideas and information generated by autonomous speakers” is “essential to the ability of hearers to make the informed decisions on which the efficient functioning of choice-dependent institutions like democracy, markets, and scientific inquiry depend.”16Id. at 101.

In First Amendment doctrine, however, listener interests play a limited role; indeed, such interests are typically protected to the extent that they correspond to speaker interests.17See Derek E. Bambauer, The MacGuffin and the Net: Taking Internet Listeners Seriously, 90 U. Colo. L. Rev. 475, 477 (2019). To be sure, the underlying logic of the categorical approach to First Amendment jurisprudence—under which the Supreme Court has created tiers of speech based on the value of particular kinds of speech to public discourse—is implicitly oriented to the perspective of listeners.18See Elena Kagan, Private Speech, Public Purpose: The Role of Governmental Motive in First Amendment Doctrine, 63 U. Chi. L. Rev. 413, 476–77 (1996). For instance, political speech is afforded maximum protection because it provides indispensable information for citizens to fulfill their democratic roles, while libel is accorded no value because defamatory statements do not enhance, and indeed detract from, reasoned discourse.

The Supreme Court has also recognized that under the First Amendment, listeners may enjoy a “right to know” or an “independent right to receive information.”19Neuborne, supra note 14, at 103–04; Lamont v. Postmaster Gen. of U.S., 381 U.S. 301, 308 (1965) (Brennan, J., concurring); Kleindienst v. Mandel, 408 U.S. 753, 762–63 (1972). Indeed, the right of listeners to receive a free flow of information has served as the basis of the First Amendment’s protection of commercial and corporate speech.20Va. State Bd. of Pharmacy v. Va. Citizens Consumer Council, 425 U.S. 748, 771–72 (1976). However, in the face of the Court’s increasingly deregulatory posture toward commercial speech, critics have argued that rather than protecting listener interests, the Court has subordinated them to corporate speech rights.21See Morgan N. Weiland, Expanding the Periphery and Threatening the Core: The Ascendant Libertarian Speech Tradition, 69 Stan. L. Rev. 1389, 1415 (2017). Although speaker interests usually trump listener interests in the event of a conflict, there are some circumstances outside of public discourse in which listener interests can prevail. As Helen Norton explains, when “listeners have less information or power than speakers,” the law can prohibit speakers from providing false information or can require truthful disclosures with respect to, for example, consumer products or professional speech.22See Helen Norton, Powerful Speakers and Their Listeners, 90 U. Colo. L. Rev. 441, 441–42, 453 (2019). The Supreme Court’s deregulatory turn on compelled professional speech,23Nat’l Inst. of Fam. & Life Advocs. v. Becerra, 585 U.S. 755, 755 (2018). however, has created uncertainty about the status of a broad range of consumer-protective regulations.24See Alan K. Chen, Compelled Speech and the Regulatory State, 97 Ind. L.J. 881, 912–13 (2022).

For both speakers and listeners, there are three principal values that animate the First Amendment: democratic self-government; autonomy or self-fulfillment; and truth seeking through the marketplace of ideas.25See Thomas I. Emerson, Toward a General Theory of the First Amendment, 72 Yale L.J. 877, 878–79 (1963). An additional value proposed by Vincent Blasi—checking the abuse of power—also seems particularly relevant for democratic self-government.26See Vincent Blasi, The Checking Value in First Amendment Theory, 2 Am. Bar Found. Rsch. J. 521, 527 (1977). On this view, the freedoms of speech, assembly, and the press provide a crucial countervailing force for checking the abuse of power by public officials.

However, there is considerable debate as to which value is predominant. According to Alexander Meiklejohn’s influential theory, the First Amendment is exclusively geared to producing a democratic system of government; hence, “[w]hat is essential is not that everyone shall speak, but that everything worth saying shall be said.”27Alexander Meiklejohn, Free Speech and Its Relation to Self-Government 25 (1948). Owen Fiss likewise argues that the “purpose of free speech is not individual self-actualization, but rather the preservation of democracy, and the right of a people, as a people, to decide what kind of life it wishes to live.”28Owen M. Fiss, Free Speech and Social Structure, 71 Iowa L. Rev. 1405, 1409–10 (1986). On this view, individual autonomy is simply a means to achieve collective self-determination.29See id.

For Robert Post, however, the value of autonomy is inseparable from democratic self-government because democracy depends on the active participation of citizens.30See Robert Post, Meiklejohn’s Mistake: Individual Autonomy and the Reform of Public Discourse, 64 U. Colo. L. Rev. 1109, 1120–21 (1993). Public discourse and free public debate—and, by extension, the autonomy of speakers—must be protected in service of democratic government.31See Robert Post, Equality and Autonomy in First Amendment Jurisprudence, 95 Mich. L. Rev. 1517, 1526–27 (1997). Some scholars place primacy on individual autonomy or self-realization apart from self-government,32See Martin H. Redish, The Value of Free Speech, 130 U. Pa. L. Rev. 591, 593 (1982). on the basis that, following Kant, all individuals possess the right to be treated as ends in themselves.33See Charles Fried, Speech in the Welfare State—The New First Amendment Jurisprudence: A Threat to Liberty, 59 U. Chi. L. Rev. 225, 233 (1992). Finally, the value of truth seeking emphasizes the First Amendment’s role in protecting, and indeed maximizing, the free flow of information, in order for society to better pursue the truth. As stated by Justice Holmes, “the best test of truth is the power of the thought to get itself accepted in the competition of the market.”34Abrams v. United States, 250 U.S. 616, 630 (1919) (Holmes, J., dissenting).

This Article takes the view, as expressed by Cass Sunstein, that the First Amendment is “fundamentally aimed at protecting democratic self-government.”35Cass R. Sunstein, Free Speech Now, 59 U. Chi. L. Rev. 255, 263 (1992); see also Cass R. Sunstein, The First Amendment in Cyberspace, 104 Yale L.J. 1757, 1762–63 (1995) [hereinafter Sunstein, Cyberspace]. The other values—autonomy, truth seeking, and checking the abuse of power—will be treated as serving the democracy value.

A related, but conceptually distinct, question concerns the role of the democratic state: should the government regulate speech in order to promote the democracy value? There are two competing constellations of ideas, which correspond roughly with the libertarian and egalitarian approaches to speech. The libertarian approach asserts that state regulation of speech is particularly dangerous for democracy. Speech itself is a form of power: it enables citizens to hold leaders to account and check the abuse of official power. Given state incentives to stifle dissent and criticism, content-based regulations of speech are prohibited save for a few tightly circumscribed and justified exceptions for particularly disfavored speech such as obscenity or libel.36See Cass R. Sunstein, Democracy and the Problem of Free Speech 1–51 (1st Free Press Paperback ed. 1995). The overall posture is one of distrust of government,37See Helen Norton, Distrust, Negative First Amendment Theory, and the Regulation of Lies, 22-07 Knight First Amend. Inst. 3 (Oct. 19, 2022), https://knightcolumbia.org/content/distrust-negative-first-amendment-theory-and-the-regulation-of-lies [https://perma.cc/8F46-R2LH]. in keeping with what Vincent Blasi has termed the “pathological perspective,” whereby the First Amendment is “targeted for the worst of times.”38Vincent Blasi, The Pathological Perspective and the First Amendment, 85 Colum. L. Rev. 449, 449–50 (1985). Under the libertarian approach, expressive liberties are best served by minimizing state regulation, thereby enhancing the free flow of information in the marketplace of ideas. In general, this constellation of ideas is associated with a negative rights approach to the First Amendment, under which the role of the state is to refrain from interfering with citizens’ freedom of speech.

The second, and opposing, constellation of ideas holds that the primary value of a system of free expression is to enable citizens “to arrive at truth and make wise decisions, especially about matters of public import.”39Kagan, supra note 18, at 424. Under the egalitarian approach, listeners have an interest in being exposed to a wide range of competing views.40See id. at 423–25. However, due to certain factors, such as the cost of political advertising in the campaign finance context, the marketplace of ideas may be skewed toward elite viewpoints. Listeners would thus be deprived of hearing the full range of ideas and political preferences necessary to reach an informed decision. To ensure that listeners are fully informed, the government may have to impose restrictions in order for all points of view to have a roughly equal opportunity of being heard.41See id. As described in more detail below,42See infra text accompanying notes 94–103. this constellation of ideas is associated with a positive rights approach to the First Amendment, under which the government may have to take affirmative steps to protect individuals’ expressive freedoms.

B. A Normative Account of a Well-Functioning Speech Environment

As Justice Kagan observed, a “well-functioning sphere of expression” is “the whole project of the First Amendment.”43Moody v. NetChoice LLC, 603 U.S. 707, 732–33 (2024). But what does it mean to have such a sphere of expression?44For an alternative account of a well-functioning sphere of expression, see Joshua Cohen and Archon Fung, Democracy and the Digital Public Sphere, in Digital Technology and Democratic Theory (Lucy Bernholz et al. eds., 2021). Cohen and Fung offer an account of the informal public sphere (as opposed to formal political processes of elections and decision-making) which has five elements: rights to expression and association, fair opportunities to participate, access to information from reliable sources, a diversity of views, and the capacity for joint action arising from discussion. Id. at 29–30. This Article argues, as a normative matter, for the promotion of a well-functioning political speech environment for speakers and listeners, one that is free of domination and coercion, and in which acute asymmetries in political and economic power do not distort the capacity of individuals to engage in various self-governing activities, including the following:

(1) Informed Voting: individuals form opinions on public matters based on reliable information in both digital and non-digital mediums, with access to a wide array of competing viewpoints, thereby engaging in informed voting;

(2) Discussion and Deliberation: individuals engage in discussion and deliberation with other citizens whether online or in person as an integral and ongoing democratic practice necessary to self-governing activities, including but not limited to voting; and

(3) Meaningful Participation: individuals participate meaningfully in the democratic process through a variety of avenues, including voting, deliberating, associating with others whether online or in-person, organizing events, consuming or producing political content online, petitioning, and the like, thereby ensuring governmental responsiveness and accountability.

The idea is that democratic citizens should be able to participate in the democratic process with full knowledge and equal freedom.

To foster a healthy expressive realm for these self-governing activities, I further claim that the speech environment ought to protect individuals’ liberty, equality, epistemic, and nondomination interests. The protection of these interests, I suggest, is required to ensure that public discourse is organized and conducted in a manner that serves the value of democratic self-government. To be sure, there will inevitably be conflicts among these interests that would require certain choices and tradeoffs to be made.45For an argument about how the conflicting values of equality and liberty should be instantiated in law, see Yasmin Dawood, Democracy and the Freedom of Speech: Rethinking the Conflict Between Liberty and Equality, 26 Canadian J.L. & Juris. 293 (2013). These interests may also overlap in various ways such that a given outcome could be described as involving, say, both equality and epistemic considerations. While it is beyond the scope of this Article to provide a full account of these interests and their possible conflicts, a few preliminary observations follow.

As described above with respect to the libertarian approach, individuals’ liberty interests are best served by the robust protection of their expressive and associational freedoms under the First Amendment.46See supra text accompanying notes 36–38. Speakers ought to be able to freely express their political opinions and policy preferences, while listeners’ right to know should likewise be shielded from government censorship. In addition to their liberty interests, citizens have equality interests in being exposed to speech that reflects a wide range of competing views, ideas, and political preferences. As described above with respect to the egalitarian approach, the government may have to take affirmative steps to protect listeners’ equality interests in hearing a wide range of viewpoints because the marketplace of ideas may be skewed in favor of elite viewpoints.47See supra text accompanying notes 39–42. For an argument about how the conflicting values of equality and liberty should be instantiated in law, see Yasmin Dawood, Democracy and the Freedom of Speech: Rethinking the Conflict Between Liberty and Equality, 26 Canadian J.L. & Juris. 293 (2013). The speech environment should also protect citizens’ epistemic interests in receiving accurate and reliable information, which is required for reaching good judgments. As Melissa Schwartzberg observes, these epistemic interests ought also to be understood to encompass the kinds of institutions and instruments needed to develop, inform, and assess such judgments.48See Melissa Schwartzberg, Epistemic Democracy and Its Challenges, 18 Ann. Rev. Pol. Sci. 187, 201 (2015). To be sure, epistemic interests may overlap with equality interests to the extent that good judgments depend upon an exposure to a wide range of viewpoints.

Finally, a healthy expressive environment should also protect democratic actors from domination or coercion. As Philip Pettit argues in his influential account of republican freedom, an individual has dominating power over another person to the extent that they have the capacity to interfere on an arbitrary basis in certain choices that the other is in a position to make.49See Philip Pettit, Republicanism: A Theory of Freedom and Government 52 (1997). An act of interference is arbitrary to the extent that the dominating agent is not forced to track the avowable or relevant interests of the victim but instead can interfere as their will or judgment dictates.50See id. at 55. Individuals’ nondomination interests broadly capture the idea that speakers and listeners ought to be protected from the capacity of powerful agents, whether public or private, to interfere arbitrarily in their choices.51For an elaboration of these ideas in the democratic context, see Yasmin Dawood, The Antidomination Model and the Judicial Oversight of Democracy, 96 Geo. L.J. 1411 (2008).

While these four interests—liberty, equality, epistemic, and nondomination—apply to all three self-governing activities, they take different forms depending on the context. In addition, the self-governing activities overlap in various ways: meaningful participation may require informed discussion, for example. The discussion below provides additional details for each self-governing activity.

  1. Informed Voting

Freedom of speech is a precondition for informed voting. As noted by the Supreme Court, the First Amendment has the objective of “securing . . . an informed and educated public opinion with respect to a matter which is of public concern.”52Thornhill v. Alabama, 310 U.S. 88, 104 (1940). Voters learn about the key issues at stake in the election, the differences among political candidates, and the main features of the platforms of various political parties. As Meiklejohn observes, the well-being of the political community depends on the wisdom of voters to make good decisions.53See Meiklejohn, supra note 27, at 24–25. For voters to make wise decisions, they must be aware, to the extent possible, of all the relevant facts, issues, considerations, and alternatives that bear upon their collective life.

Thus, a well-functioning political speech environment provides voters with epistemically reliable information on matters of public import from a wide range of competing sources and perspectives. For this to take place, speakers’ liberty interests must be fostered, and listeners’ equality, epistemic, and nondomination interests must be satisfied. Under these conditions, listeners as voters have access to the information they need to understand matters of public concern.

  2. Discussion and Deliberation

Discussion and deliberation are crucial activities for those individuals we formally deem to be speakers. However, listeners are also, at times, speakers. Listeners do not develop their views in a vacuum: the activities of discussion and deliberation require democratic listeners to engage with others as they evaluate matters of public importance. The idea here is one of active listening, which involves not just the passive receipt of information but requires discussion and debate. Informal conversations among listeners enable them to consider issues of public policy and to make up their minds about what is best for their common lives—activities that lie at the heart of self-government. The First Amendment is principally concerned with the “authority of the hearers to meet together, to discuss, and to hear discussed by speakers of their own choice, whatever they may deem worthy of their consideration.”54Alexander Meiklejohn, Political Freedom: The Constitutional Power of the People 119 (1966) (emphasis added).

As such, the normative account offered here departs in significant ways from Habermas’s formal account of ideal deliberation. Habermas’s theory of the “ideal speech situation” envisions a reasoned discussion among free and equal participants who aim for consensus by being persuaded by the force of the better argument.55See Jürgen Habermas, Discourse Ethics: Notes on a Program of Philosophical Justification, in Moral Consciousness and Communicative Action 89 (Christian Lenhardt & Shierry Weber Nicholsen, trans., 1990). Formal accounts of deliberative democracy, while differing in various respects, all tend to share a commitment to reaching collective decisions through public reasons, that is, reasons that are generally persuasive to all the participants in the deliberation.

However, in my view, this ideal form of deliberation is not required to achieve a well-functioning sphere of expression. Instead, as John Dryzek observes, deliberation can include informal discussion, humor, emotion, and storytelling.56See John S. Dryzek, Deliberative Democracy and Beyond: Liberals, Critics, Contestations 1 (2000). Rather than requiring consensus, we should focus on the values of mutual respect, reciprocity, cooperation, and compromise.57See Amy Gutmann & Dennis Thompson, Democracy and Disagreement 346 (1996); James Bohman, Public Deliberation: Pluralism, Complexity, and Democracy 238 (2000); Jane Mansbridge, James Bohman, Simone Chambers, David Estlund, Andrea Føllesdal, Archon Fung, Cristina Lafont, Bernard Manin & José Luis Martí, The Place of Self-Interest and the Role of Power in Deliberative Democracy, 18 J. Pol. Phil. 64, 94 (2010). That being said, a basic predicate of a well-functioning speech environment is that speakers and listeners can engage in discussion, debate, and deliberation free of coercion, harassment, and deception.

To be sure, deliberation has come under criticism for being exclusionary because it tends to favor advantaged citizens.58See Lynn M. Sanders, Against Deliberation, 25 Pol. Theory 347, 349 (1997). Critics have also charged that deliberation is simply unfeasible given the complexity of democratic institutions59See Ian Shapiro, Enough of Deliberation: Politics Is About Interests and Power, in Deliberative Politics: Essays on Democracy and Disagreement 28, 31 (Stephen Macedo ed., 1999). or is difficult to realize in practice given the realities of electoral campaigns.60See James A. Gardner, What are Campaigns For? The Role of Persuasion in Electoral Law and Politics 1, 86, 92–93, 115 (2009). In addition, deliberation may accentuate group polarization.61See Cass R. Sunstein, Why Societies Need Dissent 111–14 (2003). These criticisms underscore the need for a more capacious and inclusive understanding of deliberation.

  3. Meaningful Participation and Governmental Responsiveness

A well-functioning political speech environment must also facilitate meaningful participation by listeners and speakers. Participation can take many forms, including voting and deliberating, but can also include such activities as joining a political party, attending a town hall or a candidate rally, volunteering for a political cause, penning an op-ed, marching and protesting, organizing a petition, or running for office. Meaningful participation has online analogues, such as reading or posting messages on social media platforms, consuming or developing political content, reading or writing blogs, listening to podcasts, or running websites. Citizens engage in meaningful participation when they criticize public officials or government policies, when they join forces with like-minded others and vote for change, or when they organize to influence public policy and legislation. All of these activities depend upon a robust sphere of expressive freedom.

Meaningful participation could also be understood as requiring a relatively equal opportunity to influence the outcome of an election. On this view, listeners as voters would have a strong interest in ensuring a somewhat level electoral playing field.62See Burt Neuborne, The Status of the Hearer in Mr. Madison’s Neighborhood, 25 Wm. & Mary Bill Rts. J. 897, 906 (2017). Meaningful citizen participation is also crucial for ensuring governmental responsiveness and accountability. By communicating and associating with one another, citizens can join together to vote for new political leaders. The threat of being removed from office in the next election is one of the most effective mechanisms for ensuring governmental accountability. A well-functioning speech environment is thus indispensable to ensure that state power is responsive to the interests of citizens.

C. Digital Exceptionalism

Does the digital public sphere provide the conditions necessary to foster a well-functioning political speech environment? In what follows, I identify the central features of what I shall call “digital exceptionalism,” the idea that the digital public sphere has distinctive features that not only distinguish it from the non-digital world but that also pose unique challenges for the promotion of a healthy expressive realm.

A principal challenge is that social media platforms wield vast “asymmetries of knowledge and power” over their users.63See Jack M. Balkin, Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation, 51 U.C. Davis L. Rev. 1149, 1162 (2018). The platforms act as private governors of online speech—enacting, implementing, and enforcing the rules that govern online expression.64See id. at 1197; Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598, 1601–03 (2018). In addition, their power is remarkably concentrated: the digital public sphere is controlled in the main by three companies—Apple, Google, and Meta—that serve as the gatekeepers to online public discourse.65See Nikolas Guggenberger, Moderating Monopolies, 38 Berkeley Tech. L.J. 119, 121 (2023). To be sure, the media landscape in the pre-digital age was likewise highly concentrated: three networks shaped the news on television and a small handful of newspapers made up the national market.66See Henry Farrell & Melissa Schwartzberg, The Democratic Consequences of the New Public Sphere, in Digital Technology and Democratic Theory 198 (Lucy Bernholz et al. eds., 2021). This concentration of pre-digital media power was also problematic, for it undoubtedly reduced the plurality of differing points of view. However, certain mitigating features of the pre-digital public sphere are either absent, or greatly attenuated, in the digital world, and conversely, certain features unique to the digital world amplify the dangers posed by these power asymmetries. I briefly canvass a few of the relevant distinctions, noting, first, that these observations capture general trends and, second, that there are, of course, notable exceptions to each of these distinctions.

The first difference is that the pre-digital news media exerted a “strong gatekeeper” approach as compared to the “weak gatekeeper” approach of social media platforms.67See id. at 192. The traditional news media is bound by journalistic standards of objectivity and factual reliability. By contrast, social media platforms impose far fewer gatekeeping controls: while they filter certain prohibited topics such as graphic violence and pornography and rank or label other sorts of disfavored messages, there is far less ex ante quality control. Indeed, as of this writing, Meta has announced that it will eliminate fact checkers in the U.S. and rely instead on a “community notes” system similar to X (formerly Twitter).68See Our Approach to Political Content, Meta (Jan. 7, 2025), https://transparency.meta.com/features/approach-to-political-content [https://web.archive.org/web/20250207231253/https://transparency.meta.com/features/approach-to-political-content]. Research suggests, however, that community-based fact checking systems garner greater trust among users than professional fact-checking, in part because community notes provide additional information and context. See Chiara Patricia Drolsbach, Kirill Solovev & Nicholas Pröllochs, Community Notes Increase Trust in Fact-Checking in Social Media, 3 PNAS Nexus 1, 2, 9 (2024).

Second, as a result of this weak gatekeeping, there are said to be higher levels of misinformation on social media platforms. For example, Elon Musk’s false or misleading claims about elections accrued nearly 1.2 billion views on the social media platform X.69See David Ingram, Elon Musk’s Misleading Election Claims Have Accrued 1.2 Billion Views on X, New Analysis Says, NBC News (Aug. 8, 2024), https://www.nbcnews.com/tech/misinformation/elon-musk-misleading-election-claims-x-views-report-rcna165599 [https://perma.cc/7Q79-CYUH]. Recent empirical evidence suggests, however, that the degree of exposure to misinformation tends to be overstated with respect to the vast majority of users, at least in North America and Europe.70For an analysis of the empirical evidence, see Aziz Z. Huq, Islands of Algorithmic Integrity: Imagining a Democratic Digital Public Sphere, 98 S. Cal. L. Rev. 1287, 1297–98 (2025). Jurisdictions that rely heavily on social media, however, may have different outcomes. For instance, digital misinformation has proved to be a serious challenge in Brazil, with 90% of Bolsonaro supporters believing at least one piece of fake news in 2018.71See Christopher Harden, Brazil Fell for Fake News: What to Do About It Now?, Wilson Ctr. (Feb. 21, 2019), https://www.wilsoncenter.org/blog-post/brazil-fell-for-fake-news-what-to-do-about-it-now [https://perma.cc/7Z6M-4GSH]. In addition, deepfake technology may pose significant challenges for public discourse in the future.72See Bobby Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Calif. L. Rev. 1753, 1786 (2019). This is particularly true as the capacity to generate deepfakes using generative AI will soon outstrip both the platforms’ and users’ ability to detect them.73See Commc’ns. Sec. Establishment, Cyber Threats to Canada’s Democratic Process 18 (2023). A counterpoint, however, is that AI was used extensively, reportedly in a largely successful manner, in India’s recent national election, wherein politicians connected with voters by including deepfake impersonations of candidates and deceased politicians in campaign materials.74See Vandinika Shukla & Bruce Schneier, Indian Election Was Awash in Deepfakes—But AI Was a Net Positive for Democracy, The Conversation (June 10, 2024), https://theconversation.com/indian-election-was-awash-in-deepfakes-but-ai-was-a-net-positive-for-democracy-231795 [https://perma.cc/JT4C-3HWN].

A third difference is that social media platforms contribute to a loss of epistemic trust. The decline in trust, rather than the decline in truth, may ultimately prove more damaging to the public sphere. Experimental evidence suggests that while exposure to deepfakes did not mislead participants, it left them feeling uncertain about the truthfulness of content.75See Cristian Vaccari & Andrew Chadwick, Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News, 6 Soc. Media + Soc’y 1, 2 (2020). This uncertainty, in turn, led to lower levels of trust with respect to news on social media. Researchers surmise that an increase in political deepfakes “will likely damage online civic culture by contributing to a climate of indeterminacy about truth and falsity that, in turn, diminishes trust in online news.”76Id. Epistemic distrust “can severely undermine a sense of democratic legitimacy among large parts of society.”77See Gilad Abiri & Johannes Buchheim, Beyond True and False: Fake News and the Digital Epistemic Divide, 29 Mich. Tech. L. Rev. 59, 65 (2022). The decay of trust also benefits leaders with authoritarian impulses.78See Chesney & Citron, supra note 72, at 1786. By contrast, in the pre-digital world, misinformation in public discourse was counteracted by civil society organizations, in particular the traditional news media, which maintained common standards for accuracy and objectivity, thereby instilling widespread trust in epistemic authorities.79See Abiri & Buchheim, supra note 77, at 65–66.

Fourth, social media platforms generate “epistemic fragmentation”—the idea that citizens no longer share a common set of facts and understandings about political life.80See id. at 66–67. Social media platforms tailor content for each user, leading to what Sunstein has dubbed “the Daily Me.”81Sunstein, supra note 7, at 2. Platforms also enable political campaigns to engage in microtargeting so that political advertising messages vary depending on the race and gender of the recipient. By contrast, citizens under the traditional news media paradigm were more likely to engage with the same news stories.82See Abiri & Buchheim, supra note 77, at 66–67. This fragmentation has compounded challenges to epistemic trust because “citizens no longer trust the same sources of information, and the reliability of the sources they do trust varies substantially.”83Farrell & Schwartzberg, supra note 66, at 192.

A fifth difference is that social media platforms rely on behind-the-scenes algorithms to do the vast majority of content filtering, in an effort to provide listeners with the kind of filtered experience that each user is seeking.84See Jane Bambauer, James Rollins & Vincent Yesue, Platforms: The First Amendment Misfits, 97 Ind. L.J. 1047, 1068 (2022); James Grimmelmann, Listeners’ Choices, 90 U. Colo. L. Rev. 365, 378–79 (2019). Because the predominant characteristic of the expressive environment online is the scarcity of listener attention, an important “means of controlling speech is targeting the bottleneck of listener attention, instead of speech itself.”85See Tim Wu, Is the First Amendment Obsolete? Knight First Amend. Inst. at Colum. Univ. (Sep. 1, 2017), https://knightcolumbia.org/content/tim-wu-first-amendment-obsolete [https://perma.cc/Y5DM-BJUG]; Tim Wu, The Attention Merchants (2016). As a result of this algorithmic filtering, Erin Miller argues that media companies could exert “skewing power” over certain “consumers’ information pools in a way that prevents them from forming epistemically justified beliefs.”86Erin Miller, Media Power Through Epistemic Funnels, 20 Geo. J.L. & Pub. Pol’y 873, 901 (2022).

Finally, social media platforms “were not created principally to serve democratic values and do not have as their lodestar the fostering of a well-informed and civically minded electorate.”87Persily, supra note 7, at 74. Instead, the platforms engage in “surveillance capitalism,” trading users’ behavioral data for vast profits.88See Zuboff, supra note 10, at 16. This behavioral advertising business model depends on maximizing the amount of time users engage with social media. A variety of deleterious phenomena are thus good for the bottom line, including addictive behavior, sensationalist and divisive content, and weakened privacy norms.89See Lina M. Khan & David E. Pozen, A Skeptical View of Information Fiduciaries, 133 Harv. L. Rev. 497, 505 (2019). Unlike the traditional news media, internet platforms “are not built to create a digital public sphere of common concern.”90Abiri & Buchheim, supra note 77, at 66–67. In addition, the platforms’ system of private governance threatens citizens’ opportunities to engage meaningfully in democratic participation, particularly in light of their lack of accountability to users.91See Klonick, supra note 64, at 1603.

These features of the digital public sphere, taken together, raise serious questions about whether the online speech market provides the conditions necessary to sustain a well-functioning political speech environment. As of this writing, the asymmetry of power between platforms and users has arguably been heightened by the intertwining of governmental and private tech interests. Because social media platforms exert asymmetrical power over users in a way that does not track the public interest, there is reason to fear that listeners’ nondomination interests are not satisfied. By contrast, selection intermediaries that act in public-regarding ways, such as a well-run national broadcasting corporation, do not pose the same degree of risk. To be sure, traditional media could also exert dominating power over their listeners to the extent they are not forced to track listeners’ avowable interests in a well-functioning public sphere. What matters is whether the selection intermediary is upholding public-regarding standards such as the provision of accurate information and a diversity of competing viewpoints.

Digital exceptionalism does not necessarily mean that the government must intervene in a way that differs from its regulation of traditional news media. Rather, the distinctive features of the digital public sphere suggest that a specialized and tailored set of regulatory responses may be warranted to foster a well-functioning speech environment. Jack Balkin’s distinction between the “old-school” speech regulation of the pre-digital world and the “new-school” speech regulation of digital intermediaries seems applicable.92See Jack M. Balkin, Old-School/New-School Speech Regulation, 127 Harv. L. Rev. 2296, 2306 (2014). Finally, the concerns raised here do not amount to a blanket condemnation of social media platforms. These platforms provide a range of goods such as entertainment, commerce, convenience, and connection that are rightly valued by consumers.

II. Law and the Speech Environment

To what extent is the normative account outlined in Part I reflected in First Amendment jurisprudence? Or to put the question another way: does the First Amendment offer any conceptual resources that would enable the government to respond to the challenges posed by digital exceptionalism? While it is beyond the scope of this Article to provide a comprehensive answer to these questions, this Part begins by briefly describing the positive conception of the First Amendment, under which the state’s role is to affirmatively protect the democratic public sphere from powerful private actors. Part II then offers a snapshot view of the current law of public discourse,93By “public discourse,” I mean speech that is relevant to the formation of public opinion and that deals with matters of public concern. See James Weinstein, Participatory Democracy as the Central Value of American Free Speech Doctrine, 97 Va. L. Rev. 491, 493 (2011). For an alternative interpretation of this concept, see Robert Post, Participatory Democracy and Free Speech, 97 Va. L. Rev. 477, 488 (2011) (arguing that the “boundaries of public discourse are inherently normative”). focusing in particular on campaign finance regulation and the Moody decision to show that the Supreme Court has for the most part abandoned the positive conception and, as a result, has significantly restricted the range of allowable regulatory responses to the deficits of digital exceptionalism.

A. The First Amendment as a Positive Right

A positive conception of the First Amendment, as mentioned above, holds that the government may have to take affirmative steps to protect expressive freedom from powerful private entities.94See supra text accompanying notes 39–42. Owen Fiss asserts, for instance, that “the impact that private aggregations of power have upon our freedom” means that “sometimes the state is needed simply to counteract these forces.”95Owen M. Fiss, The Irony of Free Speech 2–3 (1996). The state has a duty to “preserve the integrity of public debate” in order to “safeguard the conditions for true and free collective self-determination.”96Fiss, supra note 28, at 1416. In keeping with this duty, the state may have to intervene to protect the “robustness of public debate in circumstances where powers outside the state are stifling speech.”97Fiss, supra note 95, at 4. Sunstein argues for a “New Deal for speech,” under which democratic measures that supposedly interfere with the autonomy of private actors are not abridgments of speech; indeed, the autonomy of private actors is itself a product of law and may itself amount to an abridgment.98See Cass R. Sunstein, The Partial Constitution 202 (1993). As such, “what seems to be government regulation of speech might, in some circumstances, promote free speech, and should not be treated as an abridgment at all.”99Id. at 204.

As Genevieve Lakier observes, the Supreme Court understood the freedom of speech as having a positive dimension during the New Deal and Warren Court eras.100See Genevieve Lakier, The First Amendment’s Real Lochner Problem, 87 U. Chi. L. Rev. 1241, 1247 (2020). That is, the First Amendment did not only provide individuals with personal expressive freedom; it also provided them with the means for democratic self-government.101See id. at 1333. For example, in Red Lion Broadcasting Co. v. FCC, the Supreme Court upheld, against a First Amendment challenge, the FCC’s fairness doctrine, which required broadcasters to provide adequate and fair coverage to public issues in a way that accurately captured competing viewpoints.102Red Lion Broad. Co. v. FCC, 395 U.S. 367, 375 (1969). The FCC repealed the fairness doctrine in 1987. According to the Court, the fairness doctrine furthered the “First Amendment goal of producing an informed public capable of conducting its own affairs.”103Id. at 392. However, in the ensuing years, the Court has largely abandoned the positive conception of the First Amendment,104But see Turner Broad. Sys., Inc. v. FCC, 520 U.S. 180 (1997) (upholding, against a First Amendment challenge, must-carry rules requiring cable operators to dedicate some of their channels to local broadcast stations). including in the campaign finance context, as discussed below.

B. Public Discourse and Campaign Finance Regulation

The Supreme Court has interpreted the First Amendment as providing the highest possible protection to public discourse due to its centrality to self-government. One of the main ways in which public discourse—specifically electoral speech—is regulated is through campaign finance law.105The discussion that follows is drawn from Yasmin Dawood, The Theoretical Foundations of Campaign Finance Regulation, in The Oxford Handbook of American Election Law 817–42 (Eugene D. Mazo ed., 2024). In recent years, the Supreme Court has taken a deregulatory posture toward campaign finance law, striking down significant parts of the legal infrastructure governing money in politics. The roots of this skepticism were apparent in an early landmark case, Buckley v. Valeo,106Buckley v. Valeo, 424 U.S. 1 (1976). in which the Court struck down limits on campaign expenditures because they were not justified by the government’s interest in preventing the actuality and appearance of corruption. In Buckley, the Court explicitly rejected the egalitarian—or equalization—rationale, stating that “the concept that government may restrict the speech of some elements of our society in order to enhance the relative voice of others is wholly foreign to the First Amendment.”107Id. at 48–49. Hence, the “governmental interest in equalizing the relative ability of individuals and groups to influence the outcome of elections” did not justify expenditure limits.108See id. at 49. The Buckley Court found, however, that limits on campaign contributions were justified by the government’s interest in preventing corruption and its appearance. The provision of large contributions “to secure political quid pro quos from current and potential office holders” undermined the integrity of representative democracy.109See id. at 26–27.

In a subsequent decision, Austin v. Michigan Chamber of Commerce,110Austin v. Mich. Chamber of Com., 494 U.S. 652 (1990), overruled by Citizens United v. FEC, 558 U.S. 310 (2010); see also FEC v. Mass. Citizens for Life, 479 U.S. 238, 257–58 (1986) (observing that the “corrosive influence of concentrated corporate wealth” may make “a corporation a formidable political presence, even though the power of the corporation may be no reflection of the power of its ideas”). the Supreme Court broadened the definition of corruption beyond quid pro quo corruption to encompass the concept of antidistortion, which arose from the “corrosive and distorting effects of immense aggregations of wealth that are accumulated with the help of the corporate form and that have little or no correlation to the public’s support for the corporation’s political ideas.”111Austin, 494 U.S. at 660. The antidistortion concept was ultimately based on an equality rationale.112See, e.g., Stephen E. Gottlieb, The Dilemma of Election Campaign Finance Reform, 18 Hofstra L. Rev. 213, 229 (1989); Kathleen M. Sullivan, Political Money and Freedom of Speech, 30 U.C. Davis L. Rev. 663, 679 (1997). Concentrated corporate wealth gives certain voices far greater political influence than others because speech is expensive.113See David Cole, First Amendment Antitrust: The End of Laissez-Faire in Campaign Finance, 9 Yale L. & Pol’y Rev. 236, 266 (1991). As a result of these inequities in speech capacities, listeners do not have access to the full range of views, which may affect their voting patterns and, hence, skew electoral outcomes. In McConnell v. FEC,114McConnell v. FEC, 540 U.S. 93 (2003) (quoting FEC v. Colo. Republican Fed. Campaign Comm., 533 U.S. 431, 441 (2001)), overruled by Citizens United v. FEC, 558 U.S. 310 (2010). the Court held that corruption also encompassed the “undue influence on an officeholder’s judgment, and the appearance of such influence.”115Id. at 95. Undue influence arises when political parties sell special access to federal candidates and officeholders, thereby creating the perception that money buys influence. The undue influence standard is concerned with the skew in legislative, rather than electoral, outcomes.

The Supreme Court’s decision in Citizens United v. FEC,116Citizens United v. FEC, 558 U.S. 310 (2010). however, marked a turning point, implicating listener interests in at least four ways. First, the Supreme Court rejected Austin’s antidistortion rationale on the basis that it was actually an equalization rationale in violation of Buckley’s central tenet that the First Amendment prevents the government from restricting the speech of some in order to enhance the voice of others. The Court held that preventing quid pro quo corruption or the appearance thereof was the only governmental interest strong enough to overcome First Amendment concerns. Listener interests in the maintenance of a relatively level electoral playing field were undercut by this decision. In other cases, the Court has rejected equality-based arguments on the grounds that leveling the electoral playing field is impermissible under the First Amendment.117Davis v. FEC, 554 U.S. 724 (2008) (striking down on First Amendment grounds a federal statute that raised contribution limits for non-self-financed candidates who were running against wealthy self-financed opponents); Ariz. Free Enter. Club’s Freedom Club PAC v. Bennett, 564 U.S. 721 (2011) (striking down on First Amendment grounds a state law that provided matching funds to publicly financed candidates in order to level the playing field by offsetting high levels of spending by privately funded opponents and independent committees).

Second, the Court held in Citizens United that corporations may spend unlimited sums from their general treasury funds on independent expenditures. According to the Court, independent expenditures do not give rise to the actuality or appearance of quid pro quo corruption. This reasoning led to the emergence of Super PACs. In a subsequent case, SpeechNow.org v. FEC,118SpeechNow.org v. FEC, 599 F.3d 686 (D.C. Cir. 2010), cert. denied sub nom. Keating v. FEC, 562 U.S. 1003 (2010). a lower court struck down contribution limits on PACs that engaged exclusively in independent spending—entities that are now known as Super PACs. Super PACs can accept unlimited contributions from individuals, corporations, and labor unions to fund independent ads supporting or opposing federal candidates. Listener interests are arguably undermined by the phenomenon of Super PACs: these entities have changed the political landscape by flooding elections with huge sums of money.119See Michael S. Kang, The Year of the Super PAC, 81 Geo. Wash. L. Rev. 1902 (2013). Not only is coordination with candidates a reality,120See Richard Briffault, Super PACs, 96 Minn. L. Rev. 1644 (2012). For a contrary view, see Bradley A. Smith, Super PACs and the Role of “Coordination” in Campaign Finance Law, 49 Willamette L. Rev. 603, 635 (2013). but Super PACs lack accountability and transparency relative to political parties and candidates, thereby further decreasing the influence of individual listeners on the democratic process.

Some may argue, however, that the increases in corporate advertising, and hence in available information, are beneficial to listeners. Indeed, the Court majority in Citizens United took this position, stating that the “right of citizens to inquire, to hear, to speak, and to use information to reach consensus is a precondition to enlightened self-government and a necessary means to protect it.”121Citizens United, 558 U.S. at 339 (emphasis added). The Court also asserted that “it is inherent in the nature of the political process that voters must be free to obtain information from diverse sources in order to determine how to cast their votes.”122Id. at 341.

Third, Citizens United, and the deregulatory turn it ushered in, have broader implications for democracy. Money skews legislative priorities because it provides legislative access to large donors and lobbyists.123See Lawrence Lessig, Republic, Lost: How Money Corrupts Congress—and a Plan to Stop It 16 (2011); Christopher S. Elmendorf, Refining the Democracy Canon, 95 Cornell L. Rev. 1051, 1055 (2010) (arguing that “electoral systems should render elected bodies responsive to the interests and concerns of the normative electorate, i.e., the class of persons entitled to vote”). While access does not guarantee legislative outcomes, it is required to exert political influence. As a result, officeholders are more responsive to the wishes of large donors than to those of other constituents.124See Nicholas O. Stephanopoulos, Aligning Election Law 240–46 (2024). Empirical studies have shown, for instance, that elected representatives are more responsive to the preferences of the affluent than to the preferences of low-income and middle-income individuals.125See, e.g., Larry M. Bartels, Unequal Democracy: The Political Economy of the New Gilded Age (2d ed. 2008); Martin Gilens, Affluence and Influence: Economic Inequality and Political Power in America (2012). It should be noted, however, that this does not speak directly to the impact of campaign money on legislative decision-making. This emphasis on the donor class disproportionately impacts the participation and representation of people of color and ordinary citizens.126See Spencer Overton, The Donor Class: Campaign Finance, Democracy, and Participation, 153 U. Pa. L. Rev. 73 (2004). Empirical research has demonstrated that donors “are not only wealthy, they are almost all white.”127Abhay P. Aneja, Jacob M. Grumbach & Abby K. Wood, Financial Inclusion in Politics, 97 N.Y.U. L. Rev. 566, 569 (2022). This racial gap has an impact on representation by affecting the electoral candidate pool and the behavior of legislators in office.128Id. at 630.

Finally, listener interests were at issue in the Court’s holding that disclosure and disclaimer requirements survived exacting scrutiny. The Court found that disclosure was “justified based on a governmental interest in ‘provid[ing] the electorate with information’ about the sources of election-related spending.”129Citizens United v. FEC, 558 U.S. 310, 368 (2010) (citing Buckley v. Valeo, 424 U.S. 1, 66 (1976)). The transparency resulting from disclosure “enables the electorate to make informed decisions and give proper weight to different speakers and messages.”130Id. at 371. Abby Wood argues that disclosure provides multiple informational benefits for voters.131See Abby K. Wood, Learning from Campaign Finance Information, 70 Emory L.J. 1091, 1102 (2021). By contrast, critics argue that disclosure rules violate privacy and raise the risk of retaliation. In a recent decision, Americans for Prosperity Foundation v. Bonta,132Ams. for Prosperity Found. v. Bonta, 594 U.S. 595 (2021). however, the Supreme Court has made it easier for disclosure laws to be found unconstitutional.133Although Bonta is not a campaign finance case as it concerns disclosure by nonprofit organizations (and not candidates, parties, or PACs), it has clear implications for campaign finance disclosure laws. See Michael Kang, The Post-Trump Rightward Lurch in Election Law, 74 Stan. L. Rev. Online 55, 64–65 (2022); Abby K. Wood, Disclosure, in The Oxford Handbook of American Election Law 923, 924, 928–29 (Eugene D. Mazo ed., 2024).

C. Public Discourse and Social Media Platforms

In the campaign finance realm, listeners’ liberty interests in unrestricted access to the commercial speech market are protected. However, their equality interests in a relatively level electoral playing field are significantly undermined. A similar pattern is evident in the emerging law of social media platform regulation. Listeners’ liberty interests are largely protected on social media platforms given the sheer volume of information available, but their equality interests in a level electoral playing field, an open deliberative sphere, and access to competing viewpoints appear to be compromised in the online world. As described in Part I.C above, listeners’ epistemic and nondomination interests are likewise threatened as a result of the key features of digital exceptionalism.

In Moody v. NetChoice, LLC,134Moody v. NetChoice, LLC, 603 U.S. 707 (2024). the Court considered the constitutionality of state laws from Florida and Texas that restricted the ability of social media platforms to engage in content moderation. The laws required internet platforms to carry speech that might otherwise be demoted or removed due to the platforms’ content moderation policies.135Id. at 713–22. The laws also required a platform to provide an individualized explanation to any user whose posts had been altered or removed.136Id. The states’ underlying concern was that the platforms were politically biased and were unfairly silencing the voices of conservative speakers.137Id. at 740–41; NetChoice, LLC v. Att’y Gen., Fla., 34 F. 4th 1196, 1203 (11th Cir. 2022). NetChoice, an internet trade association, brought facial challenges to the laws. The U.S. Court of Appeals for the Eleventh Circuit upheld a preliminary injunction, finding that the Florida law likely violated the First Amendment.138NetChoice, LLC, 34 F. 4th at 1227–28. However, the Court of Appeals for the Fifth Circuit reversed a preliminary injunction of the Texas law, in part on the basis that the platforms’ content moderation activities did not amount to speech and hence did not infringe the First Amendment.139NetChoice, LLC v. Paxton, 49 F. 4th 439, 494 (5th Cir. 2022).

Writing for the Supreme Court in Moody, Justice Kagan vacated the lower court decisions and remanded the cases, on the grounds that there was an insufficient record to sustain a facial challenge.140Moody, 603 U.S. at 713–18. While the Court was unanimous that NetChoice’s facial challenge had failed, Justice Kagan, speaking for a six-member majority,141Justice Kagan was joined by Chief Justice Roberts and Justices Sotomayor, Kavanaugh and Barrett in full and Justice Jackson in part. nonetheless proceeded to provide substantive guidance as to how the lower courts should conduct the facial analysis.

The Court majority’s central proposition was that the laws in question infringed the First Amendment rights of large social media platforms (specifically with respect to Facebook’s News Feed, YouTube’s homepage, and the like). Drawing an analogy to newspapers, the Court asserted that such platforms should be viewed as speakers with the right to compile and curate the speech of others. Justice Kagan relied on Miami Herald Publishing Company v. Tornillo,142Mia. Herald Pub. Co. v. Tornillo, 418 U.S. 241, 258 (1974). in which the Court had struck down a right-of-reply law that required newspapers to print the reply of any political candidate who received critical coverage in their pages. In Tornillo, the Court held that the First Amendment protects newspaper editors in their “exercise of editorial control and judgment.”143Id. at 258. The Court majority drew upon additional cases—involving a private utility’s newsletter (Pacific Gas and Electric Co. v. Public Utilities Commission of California),144Pac. Gas & Elec. Co. v. Pub. Util. Comm’n of Cal., 475 U.S. 1 (1986). must-carry rules for cable operators (Turner Broadcasting System, Inc. v. FCC),145Turner Broad. Sys., Inc. v. FCC, 512 U.S. 622 (1994). The Court noted that in a later decision, the regulation was upheld because it was necessary to protect local broadcasting. Turner Broad. Sys., Inc. v. FCC, 520 U.S. 180, 189–90 (1997). and regulations affecting parades (Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc.)146Hurley v. Irish-Am. Gay, Lesbian & Bisexual Grp. of Boston, Inc., 515 U.S. 557 (1995).—to find that the First Amendment prohibits the government from directing a private entity to include certain messages where that entity is curating the speech of others to create its own expressive product.147Moody v. NetChoice, LLC, 603 U.S. 707, 731–32, 742–43 (2024).

In the same way, the curating activity of social media platforms amounts to expressive activity protected by the First Amendment. Justice Kagan noted that Facebook’s News Feed and YouTube’s homepage use algorithms to create a personalized feed for each user.148Id. at 710. Their content moderation policies filter prohibited topics, such as pornography, hate speech, and certain categories of misinformation, and rank or label disfavored messages. In making these choices, social media platforms “produce their own distinctive compilations of expression.”149Id. at 716. The Moody majority thus appears to have resolved the debate as to whether platforms should be treated as publishers or as common carriers under the First Amendment (at least with respect to Facebook’s News Feed and the like).150See, e.g., Adam Candeub, Bargaining for Free Speech: Common Carriage, Network Neutrality, and Section 230, 22 Yale J.L. & Tech. 391 (2020); Eugene Volokh, Treating Social Media Platforms like Common Carriers?, 1 J. Free Speech L. 377 (2021); Ashutosh Bhagwat, Why Social Media Platforms Are Not Common Carriers, 2 J. Free Speech L. 127 (2022).

Consistent with the campaign finance context, the Court majority was adamant that the First Amendment prevents the state from interfering with “private actors’ speech to advance its own vision of ideological balance.”151Moody, 603 U.S. at 741. Government may not “decide what counts as the right balance of private expression,” and must instead “leave such judgments to speakers and their audiences.”152Id. at 719. This principle holds true even when there are credible concerns that certain private parties wield disproportionate expressive power in the marketplace of ideas. The majority noted that the regulations in Tornillo, PG&E, and Hurley “were thought to promote greater diversity of expression” and “counteract advantages some private parties possessed in controlling ‘enviable vehicle[s]’ for speech.”153Id. at 733 (citing Hurley v. Irish-Am. Gay, Lesbian & Bisexual Grp. of Boston, Inc., 515 U.S. 557, 577 (1995)). The Court also drew on its campaign finance jurisprudence, citing Buckley’s proposition that the government may not “restrict the speech of some elements of our society in order to enhance the relative voice of others.”154Id. at 742 (citing Buckley v. Valeo, 424 U.S. 1, 48–49 (1976)). Justice Kagan argued that “[h]owever imperfect the private marketplace of ideas, here was a worse proposal—the government itself deciding when speech was imbalanced, and then coercing speakers to provide more of some views or less of others.”155Id. at 733.

Concurring in the judgment, Justice Alito (joined by Justices Thomas and Gorsuch) agreed that NetChoice’s facial challenges failed but took issue with the majority’s First Amendment analysis. Justice Alito argued that the states’ laws, at least in some of their applications, appeared to regulate passive carriers of third-party speech, which receive no protection under the First Amendment.156See id. at 788 (Alito, J., concurring). He criticized the majority for failing to address the states’ argument that Facebook and YouTube amount to common carriers,157See id. at 793–94 (Alito, J., concurring). as did Justice Thomas in a separate concurrence.158See id. at 751–52 (Thomas, J., concurring). Justice Alito also seemed more sympathetic to the states’ concerns, noting that the content moderation decisions of social media platforms can have “serious consequences,” including impairing “users’ ability to speak to, [and] learn from,” others; impairing a political candidate’s “efforts to reach constituents or voters”; compromising “the ability of voters to make a fully informed electoral choice”; and exerting “a substantial effect on popular views.”159Id. at 768 (Alito, J., concurring). He described the Florida law as an attempt “to prevent platforms from unfairly influencing elections or distorting public discourse,”160Id. at 770 (Alito, J., concurring). in a manner reminiscent of the very antidistortion arguments that were rejected by the conservative Justices in the campaign finance context.

III.  Possibilities for Countervailance

The Moody majority’s stance was consistent with a long line of precedent that has treated state control of speech with grave distrust. By “requir[ing] the platforms to carry and promote user speech that they would rather discard or downplay,”161Id. at 728. the states’ content moderation laws violated a central tenet that the government may not influence the content of speech. However, the Supreme Court’s interpretation of the First Amendment gives rise to a genuine conundrum: although this approach protects listeners from the power of the state, it does not protect the speech environment from the power of the platforms or from the deficits that ensue from digital exceptionalism. Indeed, state action that would effectively remedy the challenges of digital exceptionalism would very likely involve too great a governmental intrusion into expressive freedom. Hence, the gap between the ideal of a well-functioning speech environment and the challenges of digital exceptionalism cannot be resolved without dramatic changes to current First Amendment jurisprudence. As a result, there is a very narrow space for measures that might lessen the deleterious effects of digital exceptionalism without falling afoul of the First Amendment.

In light of this conundrum, this Part canvasses some possibilities for countervailance; that is, mechanisms that could lessen the deficits of the digital public sphere such that listeners’ interests are better protected, even if that protection does not rise to the level of establishing the kind of equality required for democratic self-governance. With respect to the challenge of disinformation in social media, I have argued elsewhere for a “multifaceted public-private approach that employs a suite of complementary tactics including: (1) disclosure and transparency laws; (2) content-based regulation and self-regulation; (3) norm-based strategies; and (4) civic education and media literacy efforts.”162Yasmin Dawood, Protecting Elections from Disinformation: A Multifaceted Public-Private Approach to Social Media and Democratic Speech, 16 Ohio State Tech. L.J. 639, 641 (2020). Using Canada as a case study, I suggested that the “combined and interactive effects of a multifaceted approach provide helpful protections against some of the harms of disinformation while still protecting the freedom of speech.”163Id. at 642.

A similar type of approach might be an appropriate way to think about countervailance. The idea is not that any one countervailing tactic will protect listener interests. Instead, the combined and interactive effects of a number of measures may serve as a countervailing force against the immense power of social media platforms. A caveat, however, is in order. These countervailing measures are imperfect, even deeply so, in terms of their ability to counter the challenges of digital exceptionalism. These measures will not on their own bring about a well-functioning speech environment; instead, they will bring such an environment closer to realization. Hence, the effect of this countervailance will no doubt be modest: listeners would still very much be at the mercy of the platforms. The objective would be to at least lessen the acuteness of the asymmetry and its resulting deficits.

Indeed, the majority opinion in Moody suggests that there are possibilities for regulation. Justice Kagan acknowledged, for instance, that “[i]n a better world, there would be fewer inequities in speech opportunities; and the government can take many steps to bring that world closer.”164Moody v. NetChoice, LLC, 603 U.S. 707, 741 (2024). Citing Turner I,165Turner Broad. Sys., Inc. v. FCC, 512 U.S. 622, 647 (1994) (protecting local broadcasting). Justice Kagan explicitly recognized that the “government can take varied measures, like enforcing competition laws, to protect th[e] access”166Moody, 603 U.S. at 732–33. to information from many sources. In recent years, the federal government has been pursuing antitrust cases against Google, Meta, and Amazon. The Court majority also noted that “[m]any possible interests relating to social media” can meet the First Amendment intermediate scrutiny test.167Id. at 711 (citing United States v. O’Brien, 391 U.S. 367, 377 (1968)). Under intermediate scrutiny, a law must advance a “substantial governmental interest” that is “unrelated to the suppression of free expression.” Id. The Court was pointed in its assertion that “nothing said here puts regulation of NetChoice’s members off-limits as to a whole array of subjects.”168Id. at 740.

In what follows, I briefly canvass an array of countervailing mechanisms, including disclosure and transparency rules; a narrow prohibition of false election speech; strategies to manage deepfakes; state-led incentive structures and norms, including mechanisms to provide listeners with increased choices and powers of their own; public jawboning; and civil society efforts. Each of these measures warrants a far more extensive treatment—particularly with respect to their advantages and disadvantages—than I am able to offer here. Although it is beyond the scope of this brief discussion to attempt anything more than a cursory analysis, I hope that it nonetheless provides some indication of the kinds of possibilities that merit attention.

A. Disclosure and Transparency

As described above, disclosure provides multiple informational benefits for voters, including not only the content of the disclosures but also their quality and the amount of information provided.169See Wood, supra note 131, at 1102. Disclosure and disclaimers with respect to online political advertising would help to facilitate counterspeech and deter disinformation.170See Abby K. Wood, Facilitating Accountability for Online Political Advertisements, 16 Ohio State Tech. L.J. 520, 523–24 (2020). Disclosure would also provide listeners with the context they need to assess political advertising. That being said, the disclosure regime in the campaign finance context is subject to various limitations, including structural barriers to connecting disclosures to voters and enforcing disclosure rules against violators.171See Jennifer A. Heerwig & Katherine Shaw, Through a Glass, Darkly: The Rhetoric and Reality of Campaign Finance Disclosure, 102 Geo. L.J. 1443, 1486, 1498 (2014). Disclosure rules have also been criticized for violating privacy, raising the risk of retaliation, chilling speech, and discouraging political participation.172See, e.g., Richard Briffault, Two Challenges for Campaign Finance Disclosure After Citizens United and Doe v. Reed, 19 Wm. & Mary Bill Rts. J. 983, 988–92, 1013–14 (2011).

Outside of the campaign finance context, online platforms could increase transparency about the content curation decisions they make. Transparency requirements are also an appropriate regulatory response to political disinformation.173See Wood, supra note 170, at 539–40. Compared to other regulatory responses, transparency laws have various benefits: they provide additional information to consumers, allow for public accountability, and nudge companies to make better decisions in anticipation of public disclosure.174See Eric Goldman, The Constitutionality of Mandating Editorial Transparency, 73 Hastings L.J. 1203, 1206 (2022). In his concurring opinion in Moody, Justice Alito remarked that the platforms are providing various disclosures under the European Union’s Digital Services Act, and that “complying with that law does not appear to have unduly burdened each platform’s speech in those countries.”175Moody v. NetChoice, LLC, 603 U.S. 707, 797–98 (2024) (Alito, J., concurring). Justice Alito further suggested that courts on remand should investigate whether such disclosures chilled the platforms’ speech.

B. False Election Speech

In general, falsehoods and lies are constitutionally protected speech.176See N.Y. Times Co. v. Sullivan, 376 U.S. 254, 279–83 (1964). As Sunstein observes, “[p]ublic officials should not be allowed to act as the truth police” because if they are empowered to “punish falsehoods, they will end up punishing dissent.”177Cass R. Sunstein, Liars: Falsehoods and Free Speech in an Age of Deception 3 (2021). There are, of course, a few narrow exceptions to the general rule that false statements are protected speech, such as regulations concerning defamation and false or misleading advertising.

The best response to false speech is not censorship but counterspeech. As the Supreme Court plurality noted in United States v. Alvarez, “[t]he remedy for speech that is false is speech that is true. This is the ordinary course in a free society.”178United States v. Alvarez, 567 U.S. 709, 727 (2012). Abby Wood observes that as a remedy for disinformation, counterspeech “fits well in the court’s ‘marketplace of ideas’ theory of the First Amendment.”179Wood, supra note 170, at 541. Lies stated by a candidate during an election campaign should likewise be addressed by the counterspeech of the candidate’s political opponent.180See Eugene Volokh, When Are Lies Constitutionally Protected?, 4 J. Free Speech L. 685, 704 (2024). That being said, counterspeech is often ineffective given the realities of echo chambers and the partisan divide in the news media.

Although restrictions on false speech are generally unconstitutional, a narrowly drawn prohibition of false election speech aimed at disenfranchising voters might survive constitutional scrutiny.181See Richard L. Hasen, Deep Fakes, Bots, and Siloed Justices: American Election Law in a “Post-Truth” World, 64 St. Louis U. L.J. 535, 548 (2020). Such a prohibition would target the mechanics of voting. Indeed, in Minnesota Voters Alliance v. Mansky, the Supreme Court indicated that false speech about when and how to vote could be banned by the government.182Minn. Voters All. v. Mansky, 585 U.S. 1 (2018). The government’s compelling interest in protecting the right to vote could serve as the justification for the law. An additional consideration is that false speech about the mechanics of voting would be difficult to redress with counterspeech, particularly in the few days leading up to an election.183See Volokh, supra note 180, at 707.

C. Deepfakes and AI

Deepfake technology poses serious threats to democracy, including by distorting public discourse, eroding citizens’ trust in news media, and manipulating elections.184See Chesney & Citron, supra note 72, at 1777. There have been several attempts by the states to regulate deepfakes,185See Jack Langa, Deepfakes, Real Consequences: Crafting Legislation to Combat Threats Posed by Deepfakes, 101 B.U. L. Rev. 761, 786 (2021). such as legislation in California and Texas that prohibited the use of deepfakes within a designated pre-election period.186See Yinuo Geng, Comparing “Deepfake” Regulatory Regimes in the United States, the European Union, and China, 7 Geo. L. Tech. Rev. 157, 162–63 (2023). However, deepfakes are better regulated—by both public officials and private entities—through disclosure and counterspeech rather than by outright bans.187See Sunstein, supra note 177, at 117. Disclosure rules could, for example, require that deepfakes be labeled as “altered.”188Hasen, supra note 7, at 27.

To be sure, there are real dangers to having the government determine what is true and false, which suggests that laws regulating deepfakes should be treated with caution. If platforms of their own accord institute deepfake bans, they should exempt parody, education, and art, and should provide accountability to users for any speech that is suppressed, including a meaningful opportunity to contest the decision.189See Chesney & Citron, supra note 72, at 1818. A growing challenge facing both public and private interventions, however, is that it will become increasingly difficult to detect deepfakes, particularly given the availability of generative AI.190See Communications Security Establishment, supra note 73, at 18. As the technology advances, the capacity to create deepfakes “will diffuse and democratize rapidly.”191Chesney & Citron, supra note 72, at 1762.

D. Incentives and Norms

The government can also use incentive structures to pressure platforms into making responsible choices about the democratic public sphere. For example, online platforms are protected from liability for hosting third-party content under Section 230 of the Communications Decency Act—a protection that arguably encourages platforms to moderate harmful speech and thereby perform a task that the government is not permitted to do.192See Erwin Chemerinsky & Alex Chemerinsky, The Golden Era of Free Speech, in Social Media, Freedom of Speech, and the Future of Our Democracy 92 (Lee C. Bollinger & Geoffrey R. Stone eds., 2022). Platforms may also be motivated to respond to harmful content out of a concern that the government could amend Section 230 if they fail to take action (although this eventuality is, of course, dependent on the priorities of the incumbent administration).193See Chesney & Citron, supra note 72, at 1813. The Digital Services Act promulgated by the European Union provides a more extensive regulatory model, one that is unlikely to be adopted in the U.S. It imposes several mandatory obligations on platforms, including transparency, notice-and-takedown systems, internal complaint handling systems, deplatforming, and independent auditing.194Council Regulation, 2022/2065, arts. 14, 16, 20, 23, 39, 2022 O.J. (L 277) 1 (EU).

The government could also create incentives for platforms to provide users with greater control over the content they receive. Many platforms already enable users to block or mute content they do not wish to see. However, they could take additional steps to enable users to actively moderate their own feeds.195See Bambauer, Rollins & Yesue, supra note 84, at 1069. In addition, the government could impose data interoperability requirements, thereby enabling users to easily move their data across platforms.196See Khan & Pozen, supra note 89, at 538–39. Platforms that violate users’ rights would then lose users to rival platforms with healthier environments.197See id. To be sure, greater user control could also lead to greater epistemic fragmentation if users choose to avoid competing viewpoints.

Public-regarding behavior could be indirectly encouraged by such mechanisms as digital charters.198See Dawood, supra note 162, at 663–65. These public-private norm-based initiatives “identify standards, best practices, and objectives to govern the digital world.”199Id. at 663. For example, the Declaration of Electoral Integrity, an initiative between the Canadian government and the major platforms, endorsed the values of integrity, transparency, and authenticity as the pillars of a healthy political discourse.200See id. at 663–64. Another initiative, the Digital Charter, identified ten principles, including universal access; safety and security; control and consent; transparency, portability and interoperability; a level playing field; strong enforcement and real accountability.201See id. at 665. Although these norm-based approaches were not legally binding, they identified democracy-enhancing norms that could serve as a “standard by which to judge actions taken or not taken.”202Id.

E. Public Jawboning

Can public jawboning play a salutary role as a countervailance mechanism? A recent Supreme Court decision, Murthy v. Missouri,203Murthy v. Missouri, 603 U.S. 43 (2024). involves what is colloquially referred to as “jawboning,” which takes place when the government pressures private actors to take certain actions without directly using its coercive power to do so. In Murthy, the record revealed that, over the last few years, White House and other federal officials had routinely communicated with social media platforms about misinformation related to COVID-19 vaccines and electoral processes. Some of these communications were public: government officials, in response to vaccine misinformation on the platforms, opined that reforms to antitrust laws and to Section 230 of the Communications Decency Act may be in order.204See id. at 51–52. Other communications were private: officials in the White House, CDC, FBI, and CISA “regularly spoke” with platforms about misinformation over several years.205See id. at 51. The District Court for the Western District of Louisiana had issued a preliminary injunction, which was affirmed by the Fifth Circuit, on the basis that government officials had “coerced or significantly encouraged” the platforms to censor disfavored speech in violation of the First Amendment.206Missouri v. Biden, 83 F. 4th 350, 392 (5th Cir. 2023).

In a 6-3 majority opinion by Justice Barrett, the Supreme Court overturned the Fifth Circuit’s decision on standing grounds.207See Murthy, 603 U.S. at 58–62. Justice Barrett also rejected the plaintiffs’ “right to listen” theory—which asserted that the First Amendment protects the interest of social media users to engage with the content of other social media users—on the grounds that it provided a “startlingly broad” right to users to “sue over someone else’s censorship.” Id. at 74–75. Dissenting in Murthy, Justice Alito (joined by Justices Thomas and Gorsuch) asserted that the issue was whether the government engaged in “permissible persuasion” or “unconstitutional coercion.”208Id. at 98–100 (Alito, J., dissenting). While the government may inform and persuade, it is barred under the First Amendment from coercing a third party into suppressing another person’s speech.209See id. (Alito, J., dissenting) (citing Bantam Books, Inc. v. Sullivan, 372 U.S. 58, 67 (1963)). Drawing on the Court’s approach in National Rifle Association v. Vullo,210Nat’l Rifle Ass’n v. Vullo, 602 U.S. 175, 189–90 (2024). Justice Alito analyzed three factors—the authority of the government officials; the nature of the statements made by those officials; and the reactions of the third party alleged to have been coerced—to find that the government had engaged in coercion.211See id. at 100–07 (Alito, J., dissenting).

Ashutosh Bhagwat draws a helpful distinction between public jawboning and private jawboning: while public jawboning should rarely be considered coercive, in large part because government actors routinely hector corporations and often do so as part of their official responsibilities, private jawboning can sometimes amount to unconstitutional coercion.212See Ashutosh Bhagwat, The Bully Pulpit or Just Plain Bully: The Uses and Perils of Jawboning, 22 First Amend. L. Rev. 292, 306 (2024). However, “[d]etermining when private jawboning crosses the constitutional line . . . raises extremely difficult questions,” which require courts to engage in a highly contextual analysis.213Id. at 310. Justice Alito contended, for instance, that while the coercion in Murthy was “more subtle than the ham-handed censorship found to be unconstitutional in Vullo . . . it was no less coercive.”214Murthy, 603 U.S. at 80 (Alito, J., dissenting). The danger is that if “a coercive campaign is carried out with enough sophistication, it may get by.”215Id. Ilya Somin catalogues the various ways in which government agencies post-Murthy can ensure that their pressure tactics avoid judicial scrutiny.216See Ilya Somin, The Supreme Court’s Dangerous Standing Ruling in Murthy v. Missouri, Reason.com: The Volokh Conspiracy (June 26, 2024, 5:57 PM), https://reason.com/volokh/2024/06/26/the-supreme-courts-dangerous-standing-ruling-in-murthy-v-missouri [https://perma.cc/64XB-E7FV].

Despite these legitimate concerns, there may be a role for public, but not private, jawboning to serve as a countervailing force against the power of the tech giants. Helen Norton’s “transparency principle”—namely, “an insistence that the governmental source of a message be transparent to the public”—could serve as a guide.217See Helen Norton, The Government’s Speech and the Constitution 30 (2019). As Norton observes, the “government’s speech is most valuable and least dangerous to the public when its governmental source is apparent: only then is the government’s speech open to the public’s meaningful credibility and accountability checks.”218Id. In an August 2024 letter to Congress, Mark Zuckerberg was unequivocal that Meta would no longer compromise its content standards in response to government pressure.219See Letter from Mark Zuckerberg, Founder, Chairman & CEO of Meta Platforms, Inc. to the Hon. Jim Jordan, Chairman, Comm. on the Judiciary, United States House of Reps. (Aug. 26, 2024). Indeed, Meta later announced the adoption of a new content moderation protocol that, among other things, removed restrictions on topics such as immigration and gender identity. If other platforms follow Meta’s lead, the protection (or not) of listener interests would depend even more heavily on the platforms’ own decisions. Provided that the government’s use of public jawboning does not violate Vullo’s standards for coercion, it may prove to be a useful measure to protect users from the overwhelming power of the platforms.

F. Civil Society and the State

Civil society can also play a countervailing role. Truth-finding institutions, such as journalists and political activists, can combat false statements in an iterative process akin to the scientific method.220See Volokh, supra note 180, at 696–98. Collaborations between platforms and outside researchers could also lead to better responses to online misinformation.221See Ceren Budak, Brendan Nyhan, David M. Rothschild, Emily Thorson & Duncan J. Watts, Misunderstanding the Harms of Online Misinformation, 630 Nature 45, 45 (2024). More generally, the concept of “knowledge institutions,” as developed by Vicki Jackson, captures the indispensable contribution of public and private entities, including universities, government agencies, libraries, and the press, to the collection and dissemination of knowledge needed for democratic self-governance.222See Vicki C. Jackson, Knowledge Institutions in Constitutional Democracies: Preliminary Reflections, 7 Canadian J. Compar. & Contemp. L. 156 (2021); see also Heidi Kitrosser, Protecting Public Knowledge Producers, 4 J. Free Speech L. 473 (2023).

The state can bolster the speech environment by supporting knowledge institutions. Over the last several decades, the federal government has fostered the public sphere by enacting legislation to support newspapers, establishing a system of broadcast licenses, regulating cable, and implementing antitrust laws.223See Martha Minow, Saving the News: Why the Constitution Calls for Government Action to Preserve Freedom of Speech 42–57 (2021). With respect to the threats currently facing private news organizations, Martha Minow argues that “[n]othing in the Constitution forecloses government action to regulate concentrated economic power . . . or strengthen public and private investments in the news functions presupposed by democratic governance.”224Martha Minow, Does the First Amendment Forbid, Permit, or Require Government Support of News Industries?, in Constitutionalism and a Right to Effective Government? 86 (Vicki C. Jackson & Yasmin Dawood eds., 2022). Minow further suggests that the “First Amendment’s presumption of an existing press may even support an affirmative obligation on the government to undertake reforms and regulations to ensure the viability of a news ecosystem.”225Minow, supra note 223, at 98. Emily Bazelon proposes that federal and state governments could create publicly funded TV or radio, in addition to funding nonprofit journalism.226See Emily Bazelon, The Disinformation Dilemma, in Social Media, Freedom of Speech, and the Future of Our Democracy 41, 49 (Lee C. Bollinger & Geoffrey R. Stone eds., 2022). To be sure, the independence of news organizations must be protected by various mechanisms so that the government cannot control the media it funds and supports.227See Minow, supra note 223, at 138–42.

Finally, community participation in regulating online platforms may also improve the speech environment. For example, Reddit is internally governed by volunteer moderators, who establish and enforce rules about what conduct is permitted or prohibited in each subcommunity.228See Ethan Zuckerman, The Case for Digital Public Infrastructure, Knight First Amend. Inst. at Colum. Univ. (Jan. 17, 2020), https://knightcolumbia.org/content/the-case-for-digital-public-infrastructure [https://perma.cc/F5EX-XTKV]. These moderators often put in “dozens of hours a week to ensure that content meets community standards and that participants understand why their content was permitted or banned.”229Id. Although Reddit is by no means perfect, it may be an example of what Aziz Huq has described as an “island of algorithmic integrity”; that is, a model of a well-functioning social media platform that acts in public-regarding ways and may thereby shift norms and expectations.230See Huq, supra note 70, at 1301–03.

Conclusion

This Article has offered a normative account of a well-functioning speech environment for speakers and listeners, under which individuals engage in three self-governing activities—informed voting; discussion and deliberation; and meaningful participation—while having their liberty, equality, epistemic, and nondomination interests satisfied. It also argued for digital exceptionalism—the idea that the expressive realm on social media platforms suffers from certain unique deficits that not only undermine the speech environment but that also pose challenges for regulation. The Article then turned to the law of public discourse, focusing on campaign finance regulation and the Moody decision, to find that First Amendment jurisprudence provides few conceptual resources to protect listeners’ equality, epistemic, and nondomination interests. Finally, the Article argued for countervailance, which is the idea that certain mechanisms could lessen the deficits of the online realm such that listener interests are better protected.

To be sure, there continues to be great uncertainty about how digital technologies will evolve over time and what new difficulties they will pose. The rapidly changing landscape of social media technology poses genuine challenges for regulation. While the Moody majority insisted that free speech principles do not change despite the challenges of applying them to evolving technology, the concurring Justices expressed reservations about how evolving algorithmic and AI technology would be covered by the First Amendment. For example, Justice Barrett queried whether there was a difference between an algorithm that did the curation on its own and an algorithm that was directed by humans.231Moody v. NetChoice, LLC, 603 U.S. 707, 745–48 (2024) (Barrett, J., concurring). Justice Alito noted that the vast majority of the content moderation on the platforms is performed by algorithms, and now that AI algorithms are being used, the platforms may not even know why a particular content moderation decision was reached.232See id. at 793–95 (Alito, J., concurring). He asked: “Are such decisions equally expressive as the decisions made by humans? Should we at least think about this?”233Id. (Alito, J., concurring); see also Toni M. Massaro & Helen Norton, Siri-ously? Free Speech Rights and Artificial Intelligence, 110 Nw. U. L. Rev. 1169, 1174 (2016) (arguing that AI speakers should be covered by the First Amendment due to the value of their speech to humans and the risk of government suppression). It is fair to say that much work remains to be done when considering how best to protect and promote a well-functioning political speech environment.

98 S. Cal. L. Rev. 1193


* Professor of Law and Political Science, and Canada Research Chair in Democracy, Constitutionalism, and Electoral Law, Faculty of Law, University of Toronto; J.D. Columbia Law School, Ph.D. (Political Science) University of Chicago. I am very grateful to Ashutosh Bhagwat, Daniel Browning, James Grimmelmann, Aziz Huq, Michael Kang, Heidi Kitrosser, Erin Miller, Helen Norton, Eugene Volokh, Abby Wood, and the participants at the Listener Interests Symposium at USC Gould School of Law and the Public Law Colloquium at Northwestern Pritzker School of Law for very helpful comments and conversations. Special thanks to David Niddam-Dent for excellent research assistance and to the editors of the Southern California Law Review for their valuable editorial work.

Listeners’ Choices Online

The most useful way to think about online speech intermediaries is structurally: a platform’s First Amendment treatment should depend on the patterns of speaker-listener connections that it enables. For any given type of platform, the ideal regulatory regime is the one that gives listeners the most effective control over the speech that they receive.

In particular, we should distinguish four functions that intermediaries can play: (1) broadcast, such as radio and television, transmits speech from one speaker to a large and undifferentiated group of listeners, who receive the speech automatically; (2) delivery, such as telephone, email, and broadband Internet, transmits speech from a single speaker to a single listener of the speaker’s choosing; (3) hosting, such as YouTube and Medium, allows an individual speaker to make their speech available to any listeners who seek it out; and (4) selection, such as search engines and feed recommendation algorithms, gives listeners suggestions about speech they might want to receive. Broadcast is relevant mostly as a (poor) historical analogue, but delivery, hosting, and selection are all fundamental on the Internet.

On the one hand, delivery and hosting intermediaries can sometimes be subject to access rules designed to give speakers the ability to use their platforms to reach listeners because doing so gives listeners more choices among speech. On the other hand, access rules are somewhere between counterproductive and nonsensical when applied to selection intermediaries because listeners rely on them precisely to make distinctions among competing speakers. Because speakers can use delivery media to target unwilling listeners, they can be subject to filtering rules designed to allow listeners to avoid unwanted speech. Hosting media, however, mostly do not face the same problem, because listeners are already able to decide which content to request. Selection media, for their part, are what enable listeners to make these filtering decisions about speech for themselves.

Introduction

This is an essay about listeners, the Internet, and the First Amendment. In it, I will argue that the most useful way to think about online speech intermediaries is structurally: a platform’s First Amendment treatment should depend on the patterns of speaker-listener connections that it enables. For any given type of platform, the ideal First Amendment regime is the one that gives listeners the most effective control over the speech that they receive.

This essay does not stand alone. In a previous article, Listeners’ Choices, I outlined a two-part theory of the First Amendment based on recognizing listeners’ choices about what speech to hear.1James Grimmelmann, Listeners’ Choices, 90 U. Colo. L. Rev. 365, 366–67 (2019). First, any free-speech principle that does not take listeners’ choices seriously is self-defeating. In a world where speakers pervasively compete for listeners’ attention—which is to say, in our world—listeners’ choices provide the only normatively appealing way to resolve the inevitable conflicts among speakers. Second, existing First Amendment doctrine regularly defers to listeners’ choices. Many cases that are seemingly about speakers’ rights snap into focus as soon as we pay attention to which listeners are willing and which listeners are not. Listeners’ choices among speakers are typically content- and viewpoint-based, but a legal rule that defers to those choices can be content-neutral.

The theory I presented in Listeners’ Choices was skeletal. Here, my purpose is to flesh out the listeners’-choice principle so that it does useful doctrinal and policy work in our modern media environment. I will analyze the role of listeners’ choices in four structurally different functions that media intermediaries can carry out:

  • Intermediaries carrying out a broadcast function, such as radio and television, connect one speaker to a large and undifferentiated group of listeners who receive the speech automatically;
  • Intermediaries carrying out a delivery function, such as telephone, email, and broadband Internet, transmit speech from a single speaker to a single listener of the speaker’s choosing;
  • Intermediaries carrying out a hosting function, such as YouTube and Medium, allow an individual speaker to make their speech available to any listeners who seek it out; and
  • Intermediaries carrying out a selection function, including search engines and feed recommendation algorithms, give listeners suggestions about speech they might want to receive.

Notice that I refer to distinct “functions,” because media and intermediaries are not monolithic. There is no set of First Amendment rules for “the Internet,” nor can there be. The Internet is too vast and variegated for that to work. Distinguishing among broadcast, delivery, hosting, and selection helps us see that these functions can be disaggregated. On the Internet, we are accustomed to thinking of hosting and selection as intertwined; the term “content moderation” encompasses them both. But they do not necessarily need to be: YouTube the hosting platform and YouTube the search engine are different and could be subjected to different legal rules.

The original sin of broadcast was that it inextricably combined selection and delivery into a single take-it-or-leave-it package, in a way that was uniquely disempowering to listeners. Bandwidth limitations mean that broadcast media present listeners with a limited array of speakers to choose among. And the fact that listeners receive broadcast speech as a group, rather than individually, means that it is hard to protect unwilling listeners from that speech without blocking willing listeners’ ability to receive it. The result is a body of doctrine and theory that purports to act in listeners’ interest but is primarily concerned with allocating scarce bandwidth among competing speakers.

In contrast, listeners can be far more empowered on the Internet than they were offline. Delivery, hosting, and selection are all more listener-friendly than broadcast. The individually targeted nature of delivery media means that media intermediaries can block unwanted communications to unwilling listeners without offending core free-speech values. The pinched kinds of choices that broadcast media needed to make among competing speakers were a poor proxy for the much broader kinds of choices that listeners can make for themselves on hosting media. And the recommendations that selection media provide to help listeners choose among competing speakers are fundamentally oriented towards facilitating listeners’ autonomy, not speakers’.

Turning to the specifics of how these different kinds of media should be regulated, there are two structurally different kinds of legal rules that can apply to them:

  • Access rules ensure that speakers are able to use a medium, even when an intermediary would prefer to exclude them.2 Access rules for listeners raise harder issues because speakers can have associational, privacy, and economic interests in restricting the audience for a communication to exclude willing listeners. An activist organizer’s mailing list might exclude political opponents; a copyright owner’s catalog might have a paywall with different prices for hobbyist and professional subscribers. A communications platform’s access policies for listeners are often inextricably bound up with speakers’ preferences about their audiences. These are subtle questions, and I do not discuss them in this essay.
  • Filtering rules ensure that listeners are able to avoid unwanted speech, even when speakers would prefer to subject them to it. Sometimes they empower an intermediary to reject that speech on behalf of listeners (i.e., they are the opposite of access rules), but sometimes they require speakers and intermediaries to structure their communications in a way that enables listeners themselves to reject the speech.

From a speaker’s point of view, access rules look like they promote free speech and filtering rules look like they inhibit it. But from a listener’s point of view, both types of rules can promote the values of the First Amendment.

For access rules, the key distinction is between rival and non-rival media. Delivery and hosting can be non-rival on the Internet, where bandwidth is immense and can be expanded as needed. Speakers who use delivery and hosting media mostly do not interfere with each other, and so an intermediary can treat most speakers identically. But selection is fundamentally rival: listeners rely on these intermediaries to help them distinguish among speakers, and so selection intermediaries must favor some speakers and disfavor others. As a result, delivery and hosting intermediaries can often be subjected to access rules requiring even-handed treatment of all interested speakers, but the First Amendment mostly forbids imposing access rules on selection intermediaries.

For filtering rules, the key distinction is that delivery situates the relevant choices among speaker-listener pairings upstream (closer to speakers) while hosting situates those choices downstream (closer to listeners). When listeners can make their own choices among speech (as on hosting intermediaries), filtering rules—whether imposed by intermediaries or by the legal system—have the effect of thwarting those choices. However, when speakers make those choices in the first instance (as on delivery intermediaries), sometimes filtering rules are necessary to empower listeners to make choices for themselves. Selection media, for their part, provide listeners the information they need to choose which content on hosting media to request, and which content on delivery media to receive.

In part, this essay is a love letter to selection media, written on behalf of listeners. Selection media play an utterly necessary role in an environment of extreme informational abundance, and they can be more responsive to listeners’ informational choices and needs than any other form of media.3This is a generalization of a point I have been making for decades about search engines. See generally James Grimmelmann, Don’t Censor Search, 117 Yale L.J. Pocket Pt. 48 (2007); James Grimmelmann, The Structure of Search Engine Law, 93 Iowa L. Rev. 1 (2007); James Grimmelmann, Information Policy for the Library of Babel, 3 J. Bus. & Tech. L. 29 (2008); James Grimmelmann, The Google Dilemma, 53 N.Y. L. Sch. L. Rev. 939 (2009); James Grimmelmann, Speech Engines, 98 Minn. L. Rev. 868 (2014) [hereinafter Grimmelmann, Speech Engines]. Access rules are often nonsensical when applied to them, and filtering rules must be applied with care, lest they trample on the filtering work that selection media are already doing.4See James Grimmelmann, Some Skepticism About Search Neutrality, in The Next Digital Decade: Essays on the Future of the Internet 435, 439–42 (Berin Szoka & Adam Marcus eds., 2010).

But the fact that selection media are often listener-friendly does not mean that they always are. I have argued previously that search engines can be regulated when they behave disloyally or dishonestly towards their users,5Grimmelmann, Speech Engines, supra note 3. and the same goes for selection media. More generally, I will argue here that structural regulation of selection media is often appropriate. For example, an intermediary could be forced to disaggregate its hosting and selection functions; the former can—and sometimes should—be regulated in ways that the latter cannot. Indeed, an intermediary might need to open its delivery or hosting platform up to competing selection intermediaries (so-called “middleware”) to give listeners broader and freer choice over the speech they receive.

Finally, a note on scope. This is an essay about intermediaries, not an essay about all forms of media. I am focusing on intermediaries’ roles in carrying third-party speech from speakers to listeners, not on their own first-party speech that they want to share with listeners. Different structural and First Amendment considerations apply to first-party speech. I will argue in places that solicitude for intermediaries’ speech interests should not prevent us from regulating them in ways that promote listeners’ speech interests. But this is not primarily an essay about intermediaries’ speech itself.6See generally Stuart Minor Benjamin, Transmitting, Editing, and Communicating: Determining What ‘The Freedom of Speech’ Encompasses, 60 Duke L.J. 1673 (2011) (discussing whether and when the First Amendment encompasses transmission of speech by intermediaries).

This essay has four substantive Parts. Part I provides a short review of the argument from Listeners’ Choices and can be skipped if you are familiar with it. Part II describes the structural differences among broadcast, delivery, hosting, and selection media, and explains how they relate to each other. Part III considers how access rules play out in these four types of media, and Part IV does the same for filtering rules. As we will see, the appropriate legal treatment of these different kinds of intermediaries and rules falls out naturally. First Amendment doctrine becomes radically simpler when we carve up media at their joints.

I. Listeners’ Choices: A Review

The starting point of Listeners’ Choices is that we can think about speech as a matching problem: in an environment where billions of people speak and billions of people listen, who speaks to whom? This way of thinking about speech is mostly content-neutral: it focuses on the network structure of connections between speakers and listeners, rather than on the content of the speech they exchange over those connections. I called an actual arrangement of speakers and listeners a “matching” to emphasize its mutuality and the fact that it is a collective property of speakers and listeners overall.

The possible structures of speaker-listener matching are shaped by two things: choices and scarcities. Start with the former: speakers make choices about what to say and how, and listeners make choices about what to listen to and how. Not all their choices can be simultaneously honored, but the heart of this way of thinking about free speech is that speakers and listeners make choices among each other, and that these choices are in large part constitutive of the values that free expression serves. They are subjective, individual, and profoundly content- and viewpoint-based. Some conflicts among speakers’ and listeners’ choices arise simply from their diverging values and goals; I called these conflicts “internal” limits on possible speaker-listener matchings.

As for scarcities, another class of limits on speaker-listener matchings is what I called “structural” limits: some combinations of who speaks to whom are physically or practically impossible. In particular, three types of scarcity shape the patterns of speech everywhere and always: bandwidth, attention, and ignorance. Bandwidth limits, such as the limited range of the human voice or the limited number of very high frequency (“VHF”) television channels, restrict the ability of speakers’ messages even to reach listeners. Attention limits are hard-wired into human anatomy and psychology. Although speech consists of information, which is potentially infinitely replicable, each person can only pay attention to one or a few speakers at a time. Finally, ignorance about the content of speech can lead people to make choices about what to listen to—choices that they would not have made if they were fully aware of what the speech would be.

The upshot of having these scarcities is that listeners’ choices among competing speakers provide a compelling way to decide among competing speech claims. Listeners’ choices are valuable in themselves because listening is an indispensable part of any communication, and listeners’ choices should be elevated over speakers’ choices because of the scarcity of attention; the capacity to listen is limited in a way that the capacity to speak is not. In order to tune into a preferred speaker, a listener must be able to tune out other speakers, and a speech environment in which listeners cannot do so is one in which effective speech is impossible. From this general point, a few specific observations follow.

First, in one-to-many cases of conflicts between willing and unwilling listeners, willing listeners generally prevail. The “Fuck the Draft” jacket in Cohen v. California7Cohen v. California, 403 U.S. 15, 16 (1971). and the drive-in movie screen in Erznoznik v. Jacksonville8Erznoznik v. Jacksonville, 422 U.S. 205, 206 (1975). were seen by both willing and unwilling viewers. To censor these forms of expression at the insistence of the unwilling ones would deprive the willing ones of speech they were willing (and in Erznoznik, affirmatively choosing) to see. The unwilling ones are expected to avert their eyes or change the channel. This looks like a preference for speakers’ right of expression as against unwilling listeners, but really it is a preference for willing listeners over unwilling ones.

Second, in true one-to-one cases where a speaker addresses a single unwilling listener, the analysis is far less speaker-friendly. The Supreme Court has affirmed homeowners’ rights to literally and figuratively shut their doors to unwanted solicitors9Martin v. City of Struthers, 319 U.S. 141, 150 (1943). and mail.10Rowan v. U.S. Post Off. Dep’t, 397 U.S. 728, 736–37 (1970). A general ordinance prohibiting Jehovah’s Witnesses from going door-to-door11See Martin, 319 U.S. at 142. or prohibiting the mailing of communist literature would be unconstitutional,12Lamont v. Postmaster Gen., 381 U.S. 301, 307 (1965). because of the presence of potentially willing listeners among the audience. That concern drops away when the speaker can stop attempting to communicate with individual listeners who specifically object while still reaching those who do not. Listeners can choose not to pay attention, and speakers who attempt to overcome listeners’ defenses (for example, with amplified sound trucks) can be barred from doing so.13Kovacs v. Cooper, 336 U.S. 77, 89 (1949). The caselaw here is rich and context-sensitive; a rule that listeners always win would be as wrong as a rule that speakers always win. Instead, the cases grapple with the interests of speakers, willing listeners, unwilling listeners, and—importantly—undecided listeners, who cannot decide whether they want to hear what the speaker has to say unless the speaker at least has an initial chance to ask.14See, e.g., McCullen v. Coakley, 573 U.S. 464, 489 (2014) (holding that a state law establishing thirty-five-foot buffer zones around the entrances to abortion facilities interfered with the right of anti-abortion advocates to engage in “consensual conversations” with people seeking abortions (emphasis added)).

Third, the general problem of sorting listeners into the willing and the unwilling involves what I call “separation costs”: the effort that willing listeners must expend to hear, that unwilling listeners must expend to avoid hearing, or that speakers must expend to distinguish between the two, or some combination of the above. The scale and distribution of separation costs can vary greatly based on the technological environment. I argue that the legal system, in a very rough way, seeks out the least-cost-avoider of speech conflicts: when a party can take a simple and inexpensive action to resolve the conflict, the law often expects them to do so.

II. Four Media Functions

This Part reviews the structural differences among the four media functions: broadcast, delivery, hosting, and selection. Along with some examples of each type, I discuss the ways in which each of them is one-to-one or one-to-many.15Eugene Volokh, One-to-One Speech vs. One-to-Many Speech, Criminal Harassment Laws, and “Cyberstalking”, 107 Nw. U. L. Rev. 731 (2013). I defer discussion of scarcity and bandwidth constraints to the next Part, as these issues bear heavily on access rules.

A. Broadcast

Start with the wired and wireless mass media that dominated most of the twentieth century: radio, broadcast television, satellite television, and cable. These mass media were characterized by their extensive reach: they enabled a single speaker to reach a large potential audience of listeners. They are, in Eugene Volokh’s taxonomy, one-to-many media.

To be clear, broadcast media collectively enable numerous speakers to reach large audiences; there are many TV stations, and each station broadcasts many different programs. When I say that broadcast is one-to-many, I mean instead that each individual speaker reaches a large and undifferentiated audience. Broadcast aggregates numerous such one-to-many communications, dividing them up by time (for example, WNBC-TV broadcasts the news at 7:00 and Access Hollywood at 7:30) and by intermediary (WNBC-TV and WABC-TV both broadcast their respective news programs at 7:00). The structural point is that WNBC-TV can only broadcast a single program at a time—such as Access Hollywood at 7:30—and when it does, it enables a one-to-many communication from Access Hollywood to its viewers.

B. Delivery

Next, consider delivery media like mail, telegraph, telephone, email, direct messaging, and Internet service. They all transmit speech from an individual speaker to an individual listener selected by the speaker, making them one-to-one media.16Id. at 742. More precisely, they are one-to-one with respect to individual communications from speaker to listener. In aggregate, they are many-to-many. The postal service delivers millions of letters, but each letter goes from a single sender to a single recipient. Delivery is therefore a kind of disaggregated broadcast: instead of sending a joint communication to all listeners at once, individual communications are sent to individual listeners at the speaker’s request.

Most delivery media use some form of medium-specific addresses for a sender to specify their chosen recipient. A letter goes to a specific postal address; a telephone call to a specific telephone number; an email to a specific email address; an Internet Protocol (“IP”) datagram to a specific IP address; and so on. A speaker can choose to send the same message to many listeners by sending many individual communications to different addresses. Conversely, by having an address, a listener makes themselves reachable by speakers and then can receive a mostly undifferentiated stream of communications from any speaker who wants to reach them.

Some delivery media—such as telephone and direct messaging—are interactive, but it still makes sense to talk of “the speaker” and “the listener.” First, at the beginning of a conversation, one user is trying to establish a connection with another: the phone rings, or an email appears in the inbox. The user trying to establish the connection is the one who chose to initiate the communication, chose when to do it, and most importantly, chose with whom to establish it. They are a speaker, and if the other user agrees, they receive the message and become a listener. Second, what we think of as “interactive” media are really bidirectional media. A telephone connection is “full duplex”: it requires two speech channels, one in each direction. The same is true for a Zoom call, an email conversation, or anything else that travels on the Internet. These interactive exchanges are made up of individual IP datagrams, each traveling from a sender to a recipient identified by IP address. Third, all delivery media are interactive on a long-enough time scale. Pen pals exchange letters, trading off the roles of speaker and listener. Each letter is still a discrete one-to-one communication carried by the postal service; mail is still a delivery medium.
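
To make the one-to-one structure concrete, here is a minimal, purely illustrative sketch (in Python; the address comes from a block reserved for documentation and does not identify any real listener) of what a single delivery-medium communication looks like at the network level: the speaker names one recipient address, and the network carries one message to that one address.

    # Illustrative sketch only: delivery is speaker-addressed.
    # 192.0.2.1 sits in a range reserved for documentation; it is not a real endpoint.
    import socket

    recipient = ("192.0.2.1", 9000)      # the speaker chooses exactly one recipient address
    message = b"A single one-to-one communication."

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, recipient)  # one datagram: one sender, one addressed recipient

Nothing in the sketch depends on the particular protocol; the same speaker-addressed structure describes a letter, a telephone call, or an email.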

C. Hosting

A third category of Internet media consists of hosting platforms. Third-party speakers send content to these intermediaries, which make the content available to listeners on request. For example, an artist uploads illustrations from her portfolio of work to a Squarespace site, and individual fans visit the site to view the illustrations.

Other examples of hosting intermediaries include (1) bulk storage like Google Drive and Amazon S3; (2) content-delivery networks (“CDNs”) like Akamai and Cloudflare; (3) hosting functions of social-media platforms like YouTube and X; and (4) web-based self-publishing features of platforms like Medium and Substack. Structurally, online marketplaces are also hosting services as long as they (a) sell digital content instead of physical goods or services, and (b) feature speaker-submitted third-party content. Examples include App Stores by Apple and Google, e-book stores by Barnes & Noble and Amazon, video game stores by Steam and Epic, and even Spotify as a distributor of podcasts and music.

Hosting is the mirror image of delivery. Both are one-to-one media; each individual communication goes from a single speaker to a single listener. The difference is that in delivery media, the speaker selects which listeners to speak to; in hosting media, the listener selects which speakers to listen to. Although hosting is usually thought of as a service offered by platforms to speakers, the listener’s request plays a crucial role in the process. Hosting is also a kind of disaggregated broadcast: instead of sending a joint communication to all listeners at once, individual communications are sent to individual listeners, this time at the listener’s request.

Hosting and delivery functions are often used in conjunction. A website host, for example, responds to a user’s request for a particular URL by sending a response with the contents of the page at that address. The request and the response are both made using delivery media—the Internet service providers (“ISPs”) along the delivery path between the host and the user. (So, for that matter, is the transmission from the speaker to the website host with the content the speaker wants to make available, and so is the website host’s acknowledgement that it has received the content.) But the host’s own activities—its responses to listeners’ requests for content—have the listener-selected nature of hosting, not the speaker-selected nature of delivery.
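
The same interplay can be sketched from the listener’s side. In this minimal, purely illustrative example (in Python; example.com is a domain reserved for documentation, not any particular speaker’s site), the transfer begins only when the listener asks for a URL, and the host answers with whatever content the speaker has placed there:

    # Illustrative sketch only: hosting is listener-requested.
    from urllib.request import urlopen

    with urlopen("https://example.com/") as response:  # the listener's request initiates the exchange
        page = response.read()                         # the host responds with the stored content

    print(len(page), "bytes received, at the listener's request")

Both the request and the response travel over delivery media; the host’s contribution is the response it makes to the listener’s choice.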

Some intermediaries offer both hosting and delivery. Substack is a good example: each post is both made available on Substack’s website and also mailed out to newsletter subscribers. Substack is a hosting service for listeners who read the post on the website, but it is a delivery service for listeners who read the post in their email inbox. Sometimes the distinction is irrelevant, but sometimes it matters. Substack allows newsletter authors to import a mailing list of subscribers, so it is not safe to assume that everyone who receives a Substack delivery has consented to it. For a user who objects to newsletter spam, Substack is a delivery intermediary, not a hosting intermediary.

Like delivery, hosting can be aggregated into a one-to-many medium. Indeed, this is typically the default on the Internet. Unless a host affirmatively restricts which listeners have access to a speaker’s content—for example, with a list of subscribers to a paywalled publication—anyone with an Internet connection can access it, and it is far easier to leave access unrestricted than to impose selective restrictions. Thus, from a speaker’s perspective, hosting can function like broadcast in that it allows a speaker to reach an indeterminately large audience with a single act of publication.

D. Selection

Finally, consider the selection function of some media, which consists of recommending some content to users. Selection media include general search engines that index third-party sites, such as Google, Bing, Kagi, and DuckDuckGo, as well as site-specific search engines that index the content on a specific platform, such as the search bars built into YouTube, TikTok, and X. They also include recommendation engines that may provide personalized results not explicitly tied to a user query, such as the feed algorithms on Facebook and TikTok or the watch-next suggestions on YouTube. The key feature of a selection platform is that it tells users about content, which they can then consume in full if they want.

Selection media are not strictly one-to-one or one-to-many in the same way that broadcast, delivery, and hosting are; they do not by themselves carry content from speakers to listeners. Instead, it is helpful to think of selection media as being many-to-one because they help individual listeners choose speech from a large variety of speakers. They turn an overwhelming volume of available content into a much smaller number of selections or recommendations that a listener can meaningfully experience, and they do so in ways that can be individuated for each specific listener.

Selection media are hardly new, but two features of the Internet make selection media particularly important online. First, the sheer scale of the Internet makes selection an absolute necessity. There is far more content on the Internet, or even on social-media platforms and not-especially large websites, than any one user can plausibly engage with. The shift from bandwidth to attention as the most salient bottleneck makes selection a crucial site of contestation.

Second, the Internet has often enabled selection to be disaggregated from delivery and hosting. The selection function of a television channel is obvious: because it can transmit so little compared with what it might, the choice of what to transmit does most of the work of selection. However, YouTube is both a content host and a content recommender: it can host a video without ever recommending that video to anyone. It is the difference between an album (selection bundled with hosting) and a playlist (selection by itself). This point cuts both ways—distinguishing the two functions takes some First Amendment pressure off of hosting, but piles more onto selection.

III. Access

A. Scarcity

One of the fundamental structural constraints on choices about speech is scarcity: limits on the number of communications that a given medium, or an intermediary using that medium, can carry. Scarcity forces choices among speakers to be made upstream by the intermediary or by regulators allocating the medium among speakers and intermediaries. In contrast, non-scarce media allow choices among speakers to be made downstream by listeners themselves. Unsurprisingly, there is a long history of scarcity arguments in telecommunications policy.

The standard story, as reflected in caselaw, points to the scarcity of broadcast spectrum as a justification for regulation. First, the available spectrum needs to be allocated to different users to prevent chaos and interference. Then, once it has been handed out, these users can be required to carry a reasonable diversity of speakers so that the intermediaries do not have undue power over speech. The usual citation for this form of argument is Red Lion Broadcasting Co. v. FCC, which used scarcity arguments to uphold the FCC’s fairness doctrine.17Red Lion Broad. Co. v. FCC, 395 U.S. 367, 400–01 (1969).

In contrast, other media are not thought of as scarce in the same way. There is room for many simultaneous speakers, which means there is no need for regulatory intervention. Intermediaries themselves can choose which speakers to carry, and there is less risk of having a handful of powerful intermediaries entirely control the speech environment. The usual citation for this form of argument is Miami Herald Publishing Co. v. Tornillo, which declined to extend Red Lion to newspapers.18Mia. Herald Publ’g Co. v. Tornillo, 418 U.S. 241, 257–58 (1974). Instead, the Supreme Court upheld newspapers’ First Amendment right to pick and choose what content they print.

Thus, goes the story, there is a spectrum from scarce media, like broadcast, to non-scarce media, like newspapers. The scarcer the medium, the more regulable it is. Other media fall somewhere in between. Cable television, for example, can carry a limited number of channels, but typically more than broadcast can. The scarcity rationale for regulating cable therefore exists, but it is weaker than the rationale for regulating broadcast. This tracks with the regulatory regime: cable operators are required to set aside some of their channels for local broadcasters and public-access channels, but cable channels are not regulated for content. It also tracks with judicial treatment: the Supreme Court held 5-4 that this regulatory regime was constitutional in Turner Broadcasting System, Inc. v. FCC, almost exactly halfway between the 9-0 decisions in Red Lion and Miami Herald.19Turner Broad. Sys., Inc. v. FCC, 520 U.S. 180 (1997).

There are two problems with this story. The first is that it does not obviously explain why there are some media—such as telephone—that are even more regulated than broadcast. The telephone network has much higher capacity than broadcast does (it can carry millions of simultaneous conversations), but it is subject to a strict common-carriage regime. A naive scarcity argument would suggest the exact opposite: that because telephone capacity is effectively unlimited, there is no need for regulation.

The second problem is that even in cases that rely on scarcity arguments, those arguments do not always cut in the direction one would expect. In Miami Herald, it was the newspaper arguing that its editorial space was scarce—in the Supreme Court’s words, that it could not engage in “infinite expansion of its column space.”20Mia. Herald, 418 U.S. at 257. The Supreme Court accepted this argument as a rationale to uphold the newspaper’s First Amendment right to reject unwanted content—the exact opposite of what a naive scarcity argument would suggest.

The way out of these paradoxes is to recognize that there are two dimensions to scarcity. On one hand, there is what I call bandwidth scarcity: the limits on any one intermediary’s ability to carry the speech of multiple speakers. On the other hand, there is what I call entry scarcity: the limits on the number of intermediaries who can operate simultaneously. Entry scarcity cuts in favor of regulation: an intermediary is in a position to control who gets to speak, unconstrained by market forces and the threat of competition. But bandwidth scarcity cuts against regulation: it means that the intermediary necessarily exercises editorial judgment over which speakers have access, and it rules out simple common-carriage regimes that treat all speakers equally. It is the interplay between these two distinct forms of scarcity that determines whether a medium is regulable.

In particular, mapping the two dimensions of scarcity in a two-by-two diagram reveals the underlying pattern of scarcity arguments:

  • In the top-right quadrant are print media, which are moderately bandwidth-scarce (it is possible to add pages to a newspaper or book, but at some expense and only by modifying its physical layout) and mostly not entry-scarce (physical printing is a commodity business). Thus, both scarcity considerations cut against regulation: there is no physical or economic need to allocate a limited ability to print among competing speakers, and imposing access rules comes at a real cost to a publisher’s ability to print the content it wants. Indeed, as Miami Herald illustrates, the Supreme Court’s solicitude for intermediaries’ speech is at its zenith here.
  • In the bottom-left quadrant are the classic common carriers. They are entry-scarce (the costs of running a second telephone network to every home were prohibitive), but they are not particularly bandwidth-scarce (carrying one more conversation or letter is a trivial burden for the phone network or the mails). Indeed, these are typically the most regulated communications intermediaries.
  • In the top-left quadrant are broadcast media. They are both entry-scarce (only thirteen VHF channels were allocated, and the practical number that could operate in any given area was invariably smaller) and bandwidth-scarce (each VHF television channel had 6 megahertz to carry a 525-line video signal at 30 frames per second). They are off-axis: their entry scarcity cuts in favor of regulation, but their bandwidth scarcity cuts against it. This is why they have historically been required to carry some diversity of content, but never with full common-carriage rules. They are more regulable than print, but less regulable than common-carriage networks.
  • In the bottom-right quadrant are media that are neither entry-scarce nor bandwidth-scarce. This is also an off-axis combination, but it is the opposite of the situation with broadcast, where access rules were both necessary (to give disfavored speakers access) and costly (because doing so comes at the cost of other speech the broadcasters could have carried). Here, access rules do not have a speech cost: giving additional speakers the ability to use an intermediary does not require the intermediary to drop other speakers to make room. However, it is also not clear whether these rules are necessary in the first place, because ordinary market forces would likely suffice to provide all speakers with the ability to speak.

As we will see, this two-dimensional framing of scarcity is quite helpful in situating the speech claims for and against access to the four types of intermediaries discussed in this essay: broadcast, delivery, hosting, and selection. Entry scarcity provides the justification for access rules to ensure listeners the widest possible range of choices among speakers without artificial limits imposed by incumbent intermediaries. However, bandwidth scarcity, when it exists, counsels caution: access rules come at their own sharp cost, limiting intermediaries’ ability to select the speech they think their listeners will most want to choose among. Thus, hosting and delivery media (which are not bandwidth-scarce) may appropriately be the subject of common-carriage regulation where there are real issues of entry scarcity. However, selection media (which are intrinsically bandwidth-scarce) mostly should not be the subject of access regulation, regardless of entry scarcity.

I should note that there are competing definitions of “scarcity,” and my intention is to be agnostic among them. At different times and places, scarcity has been used to describe physical constraints (such as the laws of physics that govern electromagnetic interference), economic constraints (such as the cost of building out the infrastructure to run a telephone network), and regulatory constraints (such as limits on the number of cable franchises that will be awarded in a geographic area). Some commentators use scarcity narrowly to include only physical constraints; others use it broadly to include economic and regulatory constraints. These varying uses often reflect different beliefs about what kinds of regulations are appropriate for scarce media.21See generally Richard R. John, Sound Policy: How the Federal Communications Commission Worked in the Age of Radio (2025) (unpublished manuscript) (on file with author) (discussing these debates in the early years of the FCC). My argument here is modular with respect to the definition of scarcity in use. If you, according to your preferred definition, believe that a medium is entry-scarce but not bandwidth-scarce, I hope you will agree with my arguments for why common carriage might be an appropriate regulatory regime.

With these observations about scarcity in mind, we can turn to how access rules play out for different types of media. The focus throughout will be on how different rules increase or limit the choices available to listeners.

B. Broadcast

Twentieth-century broadcast media had highly limited capacity and were both bandwidth- and entry-scarce. These limits were primarily physical and technological and secondarily economic and regulatory. The available techniques for modulating an audio or audiovisual signal into one that could be transmitted through the atmosphere (radio, television, and satellite) or through wires (cable) allowed only a small number of such signals to be transmitted simultaneously in any geographic region. This number expanded over time with developments in telecommunications engineering: from AM to FM radio broadcasting; from VHF (very high frequency) to UHF (ultra high frequency) television broadcasting; from coaxial to fiber-optic cables; and so on. The basic structure remained the same: a fixed, finite menu of channels transmitted simultaneously to all potential listeners.

In such a setting, speaker-listener matching arises from a two-stage process. First, a few speakers are chosen to have access to the available channels, and then each listener chooses from the speech that speakers make available on those channels. In the United States, the first-stage choice among speakers was (and is) made by the operator of the physical infrastructure—the transmitting equipment or physical cable network—subject to some regulatory limits. The second-stage choice was (and is) made by individuals: members of the public with appropriate receiving apparatus (restricted in some cases, such as cable and satellite, to those who have subscribed to the operator’s service). The phrase most commonly used to describe this second-stage choice—changing the “channel”—reflects the way in which the technological constraints of twentieth-century broadcast funneled speech into a small and finite number of options.

Consider a speaker who is denied access to a channel, or who receives less access than they want, or who is limited in how they are allowed to use it, or who is charged more than they want to pay for their access. In each case, they are obviously aggrieved. It is harder, however, from a purely speaker-centric position to explain why they have been wronged. The challenge—and this is a recurring challenge for speaker-centric analyses—is the problem of symmetry among speakers. It is one thing to say that the lucky speaker who receives access is better off than the unlucky speaker who does not, but it is quite another to make them change places. Doing so simply trades the problem of the network operator picking winners and losers for the problem of the government picking winners and losers. To give A access and deny it to B amounts to preferring A’s speech to B’s, and on most theories of free speech, this preference is an awkward one for the government to engage in.

Instead, rationales for broadcast content regulation tend to rely on the needs of listeners, rather than speakers. As many scholars have noted,22E.g., David A. Strauss, Rights and the System of Freedom of Expression, 1993 U. Chi. Legal F. 197, 202. this is the upshot of Alexander Meiklejohn’s famous phrase, “What is essential is not that everyone shall speak, but that everything worth saying shall be said.”23Alexander Meiklejohn, Free Speech and Its Relation to Self-Government 25 (1948). The basic idea of this regulatory paradigm is to give listeners either high-quality content, a wide range of content options, or both—on the assumption that speakers and broadcasters, left to their own devices, will provide neither. As the Supreme Court put it in Red Lion’s famous phrasing, “It is the right of the viewers and listeners, not the right of the broadcasters, which is paramount.”24Red Lion Broad. Co. v. FCC, 395 U.S. 367, 390 (1969).

Ringing rhetoric aside, it is hard to find actual listeners in the resulting regulatory regime. In an environment of severe bandwidth constraints, it is impossible to solicit and honor all individual listeners’ choices; there are never enough channels to give each member of the audience what they personally want. Instead, they make their desires known only collectively and statistically by tuning in to channels and by paying for those channels or for the things advertised on them. Thus, as the long-running theme in media criticism goes, broadcast was a “vast wasteland” of boring, mediocre, and fundamentally majoritarian content.25Newton N. Minow, Television and the Public Interest, 55 Fed. Commc’n L.J. 395, 398 (2003) (reprinting Minow’s speech on May 9, 1961, before the National Association of Broadcasters). The larger the mass audience, the lower the common denominator.26See C. Edwin Baker, Media, Markets, and Democracy (2002) (arguing that mass media tend towards popular content to the exclusion of content of interest to smaller communities).

Consider some of the most notable examples of broadcast access regulations: the Mayflower doctrine27Mayflower Broad. Corp., 8 F.C.C. 333, 339–40 (1941). and its successor the fairness doctrine,28Rep. on Editorializing by Broadcast Licensees, 13 F.C.C. 1246, 1253 (1949). the right of reply,29Pers. Attacks; Pol. Eds., 32 Fed. Reg. 10303 (July 13, 1967); Red Lion Broad., 395 U.S. at 367 (upholding the constitutionality of the FCC’s right of reply rules). and the equal-time rule.3047 U.S.C. § 315. None of these were concerned with any specific listeners’ choices among speakers. Instead, they were all attempts to provide for listeners’ interests generically—by anticipating what groups of hypothetical listeners might want or need.

The few occasions on which broadcast media regulations have attempted to take account of actual listeners’ choices when setting access rules only show how hard it is to do so. The most striking example is format regulation. For years, the FCC interpreted the Communications Act of 1934’s requirement that broadcast licensees serve the “public convenience, interest, or necessity” to mean that it should consider stations’ formats in its licensing procedures.31Id. § 303. It would deny approval for new pop-music radio licenses, for example, if it felt that an existing market was adequately served by the radio stations already licensed to operate in the area.32Citizens Comm. to Pres. the Present Programming of the Voice of the Arts in Atlanta on WGKA-FM v. FCC, 436 F.2d 263, 270 (D.C. Cir. 1970). Indeed, a licensee seeking permission to change formats was required to petition the FCC for approval.33See Hartford Commc’ns Comm. v. FCC, 467 F.2d 408, 411–12 (D.C. Cir. 1972). These rules have long since gone by the wayside. The FCC now takes the position that broadcasters have a First Amendment right to broadcast any content format they want. In FCC v. WNCN Listeners Guild, 450 U.S. 582, 595–96 (1981), the Supreme Court upheld the FCC’s policy decision not to consider formats in licensing renewal and transfer proceedings.

Format regulation was in theory a listener-based system, but the FCC seemed genuinely flummoxed when actual listeners showed up in licensing procedures demanding a voice in the first-stage choices of who got access to the airwaves and on what terms. In Office of Communication of United Church of Christ v. FCC, a group of civil-rights activists attempted to intervene in a license-renewal proceeding before the FCC, alleging that WLBT in Jackson, Mississippi, had aired only pro-segregation viewpoints.34Off. of Commc’n of United Church of Christ v. FCC, 359 F.2d 994, 997–98 (D.C. Cir. 1966). The FCC denied their request, arguing that these “representatives of the listening public”35Id. at 997. could “assert no greater interest or claim of injury than members of the general public.”36Id. at 999. The D.C. Circuit reversed and remanded for an evidentiary hearing, as listeners were “most directly concerned with and intimately affected by the performance of a licensee.”37Id. at 1002.

There followed a string of cases in which the FCC and the D.C. Circuit struggled with how to actually take listeners’ views into account.38E.g., Citizens Comm. to Pres. the Present Programming of the Voice of the Arts in Atlanta on WGKA-FM v. FCC, 436 F.2d 263, 270 (D.C. Cir. 1970); Hartford Commc’ns Comm. v. FCC, 467 F.2d 408, 414 (D.C. Cir. 1972); Lakewood Broad. Serv., Inc. v. FCC, 478 F.2d 919, 924 (D.C. Cir. 1973); Citizens Comm. to Keep Progressive Rock v. FCC, 478 F.2d 926, 929 (D.C. Cir. 1973). In Citizens Committee to Keep Progressive Rock v. FCC, for example, WGLN in Sylvania, Ohio, switched to an all-prog-rock format in late 1971, and then received FCC approval in 1972 to switch to “generally middle of the road music which may include some contemporary, folk and jazz.”39Citizens Comm. to Keep Progressive Rock, 478 F.2d at 928. The Citizens Committee to Keep Progressive Rock petitioned the FCC in protest. The D.C. Circuit ordered a hearing on whether the Toledo metropolitan area was adequately served by prog-rock stations as compared with top-forty stations,40Id. at 932. and discussed such details as whether a “golden oldies” format was sufficiently distinct from “middle of the road.”41Id. at 928 n.5. “In essence, one man’s Bread is the next man’s Bach, Bacharach, or Buck Owens and the Buckeroos, and where ‘technically and economically feasible,’ it is in the public’s best interest to have all segments represented,” the opinion sagely intoned.42Id. at 929.

My point here is not that the FCC’s enterprise of supervising formats or of requiring balanced public-interest programming in the name of listener interests was ill-considered. Instead, I want to emphasize that these interventions were more about listeners’ interests than about listeners’ choices. Some of them were about giving listeners information that was considered important for them to have, and some of them were about moderately diversifying the menu of speech from which listeners could choose. But in an environment of severely limited bandwidth serving mass audiences, there was almost nothing more that could be done.

I make this point here because there are two misconceptions about listeners that are extraordinarily prevalent in the literature on access to the media. Both of them are direct consequences of inappropriately extending reasonable assumptions about the broadcast environment to other domains where they are much worse fits.

The first mistaken assumption is that speakers seeking access to media are necessarily good proxies for listeners. In 1967, Jerome Barron wrote, “It is to be hoped that an awareness of the listener’s interest in broadcasting will lead to an equivalent concern for the reader’s stake in the press, and that first amendment recognition will be given to a right of access for the protection of the reader, the listener, and the viewer.”43Jerome A. Barron, Access to the Press—A New First Amendment Right, 80 Harv. L. Rev. 1641, 1666 (1967) (emphasis added). In broadcast media, a strong right of access for diverse speakers may be a way to promote listeners’ practical ability to choose speech.

In other media, which are not characterized by the same combination of broad distribution and narrow bandwidth, there is much less reason to think of speakers as proxies for listeners. To give a simple example, many of the speakers most loudly demanding—and sometimes suing for—a right of access to Internet platforms are unrepentant spammers.44E.g., Cyber Promotions, Inc. v. Am. Online, Inc., 948 F. Supp. 436, 443–44 (E.D. Pa. 1996). Less charitably, the Republican National Committee. See Republican Nat’l Comm. v. Google, Inc., No. 2:22-cv-01904-DJC-JBP, 2023 U.S. Dist. LEXIS 149076, at *2–3 (E.D. Cal. Aug. 24, 2023). The access they seek is the access of pre-FCC unlicensed broadcast: the right to overwhelm media and listeners with high-volume speech that drowns out alternatives and reduces listeners’ practical ability to choose among speakers.

The second misconception about listeners’ choices that arises from seeing all media as broadcast media is the belief that nothing else can be done. Both the justifications for and many of the criticisms of regulations like the fairness doctrine and format review arise from thinking about speech environments in which listeners are fundamentally passive. The only controls they have—or can have—are the channel dial and the on-off switch. It seems to follow that the only useful regulatory interventions must happen upstream and that individual listeners themselves can have little involvement in the matching process. The entire model of media criticism that conceptualizes individuals as television viewers—numb, motionless, and mindless zombies or couch potatoes tuned in to the idiot box—is blind to the ways in which they engage with media that give listeners more agency and more choices.45Even in the case of television, it misses the way that fans engage. See generally Henry Jenkins, Textual Poachers: Television Fans and Participatory Culture (1992); Betsy Rosenblatt & Rebecca Tushnet, Transformative Works: Young Women’s Voices on Fandom and Fair Use, in eGirls, eCitizens 385 (Jane Bailey & Valerie Steeves eds., 2015). This is a different type of agency than the agency I am discussing as listeners. We will see many examples soon. For now, remember that the assumption of listener passivity is just that—an assumption.

C. Delivery

Delivery media are mostly not bandwidth-scarce, especially on the Internet. Any given delivery intermediary’s platform tends to face fewer capacity constraints than broadcast media did. Partly this is structural: delivery media solve a smaller problem because they only try to route a communication to one recipient, rather than many. Partly it is due to physical differences: the phone network could handle more simultaneous connections by running more wires in trunk lines, whereas cable could not increase the number of channels without reengineering every subscriber’s wiring and equipment. Partly it is due to the telecommunications engineering triumphs of the telephone system and the Internet, which have scaled up over many orders of magnitude in their lifetimes. And partly it is due to recognizing the limits of the possible: telegraph companies did not attempt to offer video service.

Whatever the reason, any given communication takes up a much smaller fraction of a delivery provider’s capacity than a corresponding communication would take up of a broadcaster’s capacity. Comcast as a cable operator can offer its subscribers a few hundred channels, while Comcast as an ISP can offer its subscribers delivery to and from millions of sites. The result is that Comcast’s Internet-service subscribers interfere with each other far less than the cable channels vying for transmission do. One more subscriber is trivial from Comcast’s perspective, and it has every economic incentive to sign up as many as it can. However, each cable carriage agreement is individually negotiated, and Comcast is ready to say “no” if the terms are not good enough because Comcast has to devote some of a sharply limited resource to each channel it offers.

Entry scarcity varies among delivery media. Some, such as email, are almost completely open to entrants: anyone can set up their own SMTP server and start exchanging emails. Others, such as telephone and Internet service, have limited competition among intermediaries who can serve any particular customer or region because the need to place physical infrastructure, such as fiber-optic cables or cell-phone towers, in particular locations creates economic and regulatory barriers to entry. The postal service is an extreme example: it has a statutory monopoly on the carriage of letters.4618 U.S.C. § 1694 (fining anyone who, in regular point-to-point service, “carries, otherwise than in the mail, any letters or packets”).

There is a long and robust tradition of speakers’ rights to access delivery media. Older delivery media, in particular, have frequently been subjected to common-carriage rules that require them to accept communications from all senders and for all receivers, and forbid them from discriminating on the basis of the contents of those messages.47See Genevieve Lakier, The Non–First Amendment Law of Freedom of Speech, 134 Harv. L. Rev. 2299, 2316–30 (2021); Blake E. Reid, Uncommon Carriage, 76 Stan. L. Rev. 89, 110–13 (2024). The postal service “shall not . . . make any undue or unreasonable discrimination among users of the mails . . . .”4839 U.S.C. § 403. This statutory obligation is almost certainly a First Amendment rule.49See Blount v. Rizzi, 400 U.S. 410, 416 (1971) (“The United States may give up the Post Office when it sees fit, but while it carries it on the use of the mails is almost as much a part of free speech as the right to use our tongues . . . [P]rocedures designed to deny use of the mail . . . violate the First Amendment unless they include built-in safeguards against curtailment of constitutionally protected expression . . . .”). Similarly, the Communications Act prohibits “any unjust or unreasonable discrimination in charges, practices, classifications, regulations, facilities, or services” by telecommunications common carriers, including telephone companies.5047 U.S.C. § 202(a). This is the modern continuation of a long tradition: laws in the nineteenth century required telegraph companies to “so operate their respective telegraph lines as to afford equal facilities to all, without discrimination in favor of or against any person, company, or corporation whatever.”51Telegraph Lines Act, ch. 772, 25 Stat. 382–83 (1888) (codified as amended at 47 U.S.C. § 10); see Lakier, supra note 47, at 2320–24 (surveying history of telegraph common-carrier laws). Indeed, the postal service,52See 39 U.S.C. § 101(a) (“The United States Postal Service shall be operated as a basic and fundamental service provided to the people by the Government of the United States . . . .”). telephone network,53See 47 U.S.C. § 254 (establishing universal service policy). and broadband Internet service54See generally FCC, Connecting America: The National Broadband Plan (2010). are all the subjects of universal-service policies that affirmatively attempt to provide access to all American residents.

On the other hand, it is an open doctrinal question whether government can require modern delivery providers—specifically email and broadband Internet—to provide uncensored access to speakers and listeners. The best and most prominent example is the FCC’s network neutrality rules that attempted to require broadband ISPs to carry traffic to and from all edge providers (that is, speakers) on a nondiscriminatory basis.55The most recent version was the Safeguarding and Securing the Open Internet Order of 2024, 89 Fed. Reg. 45404 (June 7, 2024). See 47 C.F.R. § 8.3(a) (2024) (ISPs “shall not block lawful content, applications, services, or non-harmful devices”); id. § 8.3(b) (ISPs shall not “impair or degrade lawful internet traffic on the basis of internet content, application, or service”); id. § 8.3(c)(1) (ISPs shall not “directly or indirectly favor some traffic over other traffic” for compensation); id. § 8.3(d)(1) (ISPs shall not “unreasonably interfere with or unreasonably disadvantage” users’ ability to access and edge providers’ ability to make available lawful content). That order was set aside by the Sixth Circuit. See Ohio Telecom Ass’n v. FCC, 124 F.4th 993 (6th Cir. 2025). It is unlikely that federal network-neutrality rules will be revived in the short run, although state-level counterparts remain in force. See, e.g., Cal. Civ. Code § 3100 (West 2024). The D.C. Circuit upheld one version of the FCC’s network neutrality rules against a First Amendment challenge in 2016.56See U.S. Telecom Ass’n v. FCC, 825 F.3d 674, 675 (D.C. Cir. 2016). Dissenting from denial of rehearing en banc, Judge Kavanaugh argued that ISPs exercise editorial discretion protected by the First Amendment.57See U.S. Telecom Ass’n v. FCC, 855 F.3d 381, 382 (D.C. Cir. 2017). There are also dicta in the Moody v. NetChoice majority opinion describing First Amendment protections for social-media companies’ “choices about the views they will, and will not, convey” that would seem to apply equally well to ISPs.58Moody v. NetChoice, LLC, 603 U.S. 707, 737 (2024).

Indeed, § 230 affirmatively shields Internet delivery media from liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”5947 U.S.C. § 230(c)(2)(A). The precise contours of what constitutes “good faith” are unsettled,60See, e.g., Darnaa, LLC v. Google, Inc., No. 15-cv-03221-RMW, 2016 U.S. Dist. LEXIS 152126, at *9 (N.D. Cal. Nov. 2, 2016). as is the scope of the “otherwise objectionable” catchall,61See, e.g., Enigma Software Grp. USA, LLC v. Malwarebytes, Inc., 946 F.3d 1040, 1047 (9th Cir. 2019). but the general result is to preempt any state attempts (by statute or common law) to impose access mandates.62See, e.g., Republican Nat’l Comm. v. Google, Inc., No. 2:22-cv-01904-DJC-JBP, 2023 U.S. Dist. LEXIS 149076, at *10–11 (E.D. Cal. Aug. 24, 2023).

It is also notable that many delivery media are governed by strict privacy rules that limit carriers’ ability even to determine the contents of a message. The USPS is legally prohibited from opening first-class mail without a search warrant.63See 39 U.S.C. § 404(c). Telephone carriers are restricted from listening to conversations by the Wiretap Act,64See 18 U.S.C. § 2511(1)(a) (prohibition on interception); id. § 2511(2)(a)(i) (describing limited exception to that prohibition for interceptions “necessary incident to the rendition of his service or to the protection of the rights or property of the provider of that service”). as are ISPs and email providers.65See, e.g., United States v. Councilman, 418 F.3d 67, 69 (1st Cir. 2005) (finding Wiretap Act interception by email provider). Even beyond legal limits, many delivery providers now use encryption systems that technologically prevent the provider from determining message contents; for example, Apple Messages and Signal are end-to-end encrypted so that only the designated recipient (and not any intermediary, including Apple or Signal) can decrypt a message. A fortiori, carriers who cannot even tell what a message says cannot discriminate on the basis of its contents.

It is easy to justify common-carriage access rules for delivery media—old and new—in light of their structural characteristics. From the intermediary’s point of view, the weak bandwidth constraints mean that carrying any particular communication is not a substantial technical burden. In the aggregate, of course, communications add up, but that is primarily an economic problem—one to be addressed with appropriate pricing and funding.66See generally Brett Frischmann, Infrastructure: The Social Value of Shared Resources (2012). Where pricing is not available or insufficient, capacity limits on the volume of communications to or from a user are largely content-neutral ways of allocating bandwidth.67Similarly, communications that impair the network itself can be addressed through anti-abuse rules that target the harmful effects and only incidentally burden speech. See, e.g., 47 C.F.R. § 68.108 (2023) (allowing telephone providers to discontinue service to customers who attach equipment that harms the network); id. §§ 8.3(a), (b), (d)(2) (making exceptions to network neutrality rules for “reasonable network management”).

Carrying a communication is not a speech problem, except to the extent that the intermediary wants to make an expressive statement by carrying or refusing to carry particular messages. Historically, though, that argument has carried very little weight for traditional delivery media. This attitude is easy to justify by seeing delivery media from the perspective of speakers and listeners. Willing speakers and willing listeners have essentially the same interest in access to delivery media: the core free speech interest of communicating with each other.68Grimmelmann, supra note 1, at 382; Jovy Chan, Understanding Free Speech as a Two-Way Right, 1 Pol. Phil. 156, 164 (2024). If you want to send me an email and I want to receive it, we are both thwarted if your email provider deletes it.

An intermediary’s speech claims are weaker when they go up against those of matched speaker-listener pairs. The intermediary may not want to help the speaker and listener connect, but this is fundamentally an objection to their speech, not a claim about its own speech. It might prefer to deliver messages from other speakers it likes better; but when it does so, it forces listeners to receive messages from speakers they prefer less. As I argued in Listeners’ Choices, it is a core free-speech violation to make a listener listen to a speaker whose speech they do not want rather than listen to a speaker whose speech they want.69Grimmelmann, supra note 1, at 388. So while a delivery intermediary’s denial of access to a speaker or listener is not by itself a First-Amendment violation, the First Amendment leaves ample room for government to require delivery intermediaries to provide access.

In general, both speakers and listeners have standing to challenge denials of access to a delivery platform. In Murthy v. Missouri, the Supreme Court held that listeners do not have standing to challenge restrictions on speakers unless “the listener has a concrete, specific connection to the speaker.”70Murthy v. Missouri, 603 U.S. 43, 75 (2024). In the case of a speaker attempting to send a message to a specific listener (as opposed to the hosting platforms at issue in Murthy itself), this connection seems clearly satisfied. And where it is the listener who has been excluded from a platform (for example, disconnected by their ISP over alleged copyright violations), the impact on their speech interests as a listener is equally obvious.

If there is a distinction between analog and digital delivery media, it cuts in favor of applying access rules to modern digital intermediaries, not against. As bandwidth constraints drop further and further away, intermediaries’ arguments that they have a technical or economic need to discriminate among users on the basis of their speech get weaker and weaker. Most arguments to the contrary rest on a confusion between delivery and selection media. Commentators project the strong expressive interests in an intermediary’s selection function (both the intermediary’s own and those of the listeners they serve) onto the intermediary’s delivery function, without stopping to consider whether these functions can be separated and distinguished.

D. Hosting

Common-carriage access rules for hosting media generally facilitate listener choice. There is an obvious argument in favor of access rules: the more speakers that are available through a hosting intermediary, the wider the range of choices it offers to listeners. The entire web was better than AOL’s walled garden; a streaming service with ten million tracks beats one with one million. The hosting intermediary might have self-interested reasons to limit access (for example, to favor its affiliated speakers or to extract more money from speakers through price discrimination), but the listeners who use the platform generally prefer that it offer the widest possible range of speakers and speech. To a first approximation, listeners either side with the speaker in a dispute between a speaker and a hosting platform (if they want the speech) or are at most indifferent (if they do not want the speech).

Common arguments against access rules that apply to other forms of media mostly do not apply to hosting media. First, there is no scarcity of bandwidth compelling hosting intermediaries to pick and choose among speakers to carry. Bandwidth on the Internet is effectively infinite. Cloudflare could serve every user in the United States if it needed to. This is not to say that Cloudflare could, would, or should do so for free—this level of access would be quite expensive and a speaker wanting to support hundreds of millions of massive downloads would quite reasonably be expected to pay commensurately. It is just that Cloudflare could serve everything to everyone.

Second, there are generally no operational constraints that cause one speaker’s content to interfere with another’s. Common Internet hosting intermediaries are technically capable of carrying almost any item of content within a category: videos at a given resolution, files consisting of arbitrary bitstrings, and so on. These items of content may have different sizes—and might be subject to caps for short-run capacity or economic reasons—but from a technical perspective, the intermediary is entirely indifferent as to their content. A broadcast radio station must deal differently with a talk-show host in studio one, a live musical performance in studio two, and a recorded program coming via audio link from a remote location. However, in an important sense, all apps in an app store are the same. Offering speaker A’s app does not divert resources needed to offer speaker B’s.

Third, there is no scarcity of listeners’ attention compelling hosting providers to prioritize some content over others. A delivery platform can fill up a listener’s queue with unwanted speech, making it harder to receive the speech they want. If your telephone is ringing off the hook with telemarketers, your friends will get a busy signal every time they call. However, a hosting platform does not make any claims on a listener’s attention; it simply sits there passively until the user seeks out and requests the speech. No one is interested in all 100,000,000 tracks on Spotify; but for the most part, having access to an extra 99,900,000 does not take anything away from the 100,000 one might actually be interested in listening to.

To be sure, a hosting platform with 100,000,000 pieces of content is harder to browse than a platform with 100. But this should be understood as more of a selection problem than a hosting problem. Combining hosting and selection into a single platform function takes some of the control over speaker-listener matching away from listeners and vests it in the platform. A movie theater that shows 5 movies at a time offers far less listener choice than a streaming platform that gives listeners access to a catalog of 50,000. Give that same listener a list of 5 recommended hot new releases and they have all of the choice-related benefits of the movie theater and none of the drawbacks. The rise of Internet-scale hosting intermediaries creates its own need for equally useful selection intermediaries, but the first step towards facilitating their healthy development is recognizing that selection is distinct from hosting.

None of this is to say that access rules always actually enhance the choices available to listeners. The economics of multi-sided markets are complicated, and a badly designed access rule could undermine a pricing strategy that successfully attracts more speakers and more listeners to an intermediary. My goal here is narrower. I want to argue that rules that have the effect of increasing the range of speakers available on a hosting platform are pro-listener-choice, whether or not they are structured as open access rules. The actual creation of a regulatory regime involves difficult policy considerations and questions of mechanism design. My point is only that this policy space ought to be available to regulators and not be foreclosed by the First Amendment.

Indeed, access rules are even easier to justify for commodity hosting platforms than they are for delivery platforms. As we have seen, filtering rules for delivery media frequently translate into corresponding exceptions to access rules. Spam-blocking, for example, might be a case of reasonable network management under network neutrality rules. This, in turn, means that regulators need to be cautious with imposing access rules, lest they inadvertently cut off filtering that listeners depend on. A must-carry rule for email, for example, would be a spammer’s dream.

To the extent that listeners do their own filtering in accessing a hosting platform, hosting platforms do not require the same degree of caution with access rules. If regulators require that Candy Crush be available in app stores, it does no harm to a user who does not enjoy match-three games. If you don’t want to play Candy Crush, don’t download it.

E. Selection

For decades, speakers have been demanding access to selection intermediaries. In the 2000s, the issue of the day was “search neutrality”: equal access to search engines’ rankings.71See generally Grimmelmann, supra note 4. More recently, speakers have complained about being “downranked” on social media—that is, not placed in other users’ algorithmic feeds. In both cases, the complaint is the same: their speech is theoretically available to users but not recommended in practice.

The fundamental challenge with giving a coherent account of access to selection is the baseline problem.72See generally Grimmelmann, supra note 4. It is nearly impossible to describe what “correct” or “neutral” rankings would look like. Different users have different preferences, and even the same user has different preferences in different contexts and at different times. My Facebook News Feed should not be identical to yours; we have different friends and you like fashion while I like sports. My search results for “crab cakes” should be different than my search results for “crab canon,” and even my search for “Vikings” could be referring to Scandinavian seafarers, a football team, Mars probes, a TV series, or kitchen appliances.73See Grimmelmann, Speech Engines, supra note 3, at 913 (discussing challenge of defining relevance). As a result, different selection media can quite reasonably make different choices about speakers. Indeed, for a regulator to prescribe what a selection platform should do is to become a selection platform itself.

Thus, selection stands in sharp contrast to delivery and hosting, both of which have a plausible neutral baseline: deliver or host everything. Selection is more like broadcast in this respect: choices must be made. However, the reason for the choices is very different. The need for choices in broadcast stems from bandwidth being scarce; not all speech can be made available at all. The need for choices in selection stems from attention being scarce; listeners must choose among the speech available to them. In broadcast, transmission and selection are inextricably linked. However, on the Internet, transmission (that is, hosting plus delivery) and selection can be distinct functions, one of which substantially overcomes the scarcity problem and the other of which confronts it full-force.

Access claims in the selection context are therefore effectively a zero-sum fight among speakers. To move speaker A up one place in a feed means pushing some other speaker B down one place. Platforms might make this choice for a variety of content-based reasons—profit, ideology, whimsy—but it is much harder to identify a legitimate reason for a regulator to prefer A to B or vice-versa. A neutrality rule in a delivery or hosting context works because the government can tell an ISP to deliver all IP datagrams with equal priority (network neutrality) or a cloud-hosting provider to host all lawful content (a must-carry regime); the baseline is content-neutral. But there is no simple corresponding neutrality rule for selection. To select is to choose on the basis of content.

I argued in Speech Engines for a more limited principle of relevance to search users. That is, a search result is a search engine’s guess at what a user will find relevant to their query.74Grimmelmann, Speech Engines, supra note 3, at 913. The user’s goals are subjective, and the engine’s guess at what will satisfy them is necessarily a judgment call. But it is an objective fact whether the results the engine actually shows the user correspond to its own best guess. A regulator therefore has a principled basis to intervene when a search engine is disloyal to its users—and it is disloyal when it shows them results that (objectively) differ from the engine’s own (subjective) judgment about what the users are likely to find relevant. This does not mean the regulator can substitute its own relevance judgments for those of the user or the search engine, but it does mean that the regulator can prevent the search engine from lying to users, and it might be able to prevent certain conflicts of interest that would tempt the search engine into underplaying its hand.

This argument generalizes into a broader claim about selection intermediaries and listeners. A selection intermediary offers listeners a way to choose among speakers. To prohibit the intermediary from doing so, or to dictate how it makes the selection, is to interfere with listeners’ ability to choose. We should understand this as an interference with listeners’ First Amendment rights to listen (and not just the intermediary’s right to speak). At the same time, we should recognize that a selection intermediary that is dishonest or disloyal also interferes with listeners’ First Amendment interests. The dishonesty and disloyalty can provide a content-neutral basis for identifying problematic recommendations by selection intermediaries, even though those recommendations are themselves content-based.

  1. Moody v. NetChoice

The Supreme Court’s recent decision in Moody v. NetChoice was a missed opportunity to clarify these principles.75Moody v. NetChoice, LLC, 603 U.S. 707, 724–28 (2024). Texas and Florida passed content-moderation laws that, in various ways, prohibited major social-media platforms from restricting content on the basis of political viewpoint (Texas) or from restricting content from political candidates or journalistic enterprises (Florida). The actual holding in Moody was a nothingburger about the appropriate standards for facial challenges; but in dicta, a five-justice majority explained that the platforms’ “selection, ordering, and labeling of third-party posts” were protected expression.76Id. at 727.

This was a thoroughly speaker-oriented perspective. It located the problem with the states’ laws in the fact that “an entity engaging in expressive activity, including compiling and curating others’ speech, is directed to accommodate messages it would prefer to exclude.”77Id. at 731. This perspective makes perfect sense when the entity is a newspaper or a parade, both of which contribute to the marketplace of ideas by adding perspectives they think readers or viewers will appreciate. And it is true, in a sense, for social media, where many platforms curate speech in ways that reflect specific viewpoints.

However, in another more accurate sense, the value of selection algorithms on social media is to users as listeners: the selection algorithms help them find speech they find interesting, valuable, and relevant to their diverse interests. A state mandate to insert some speech into a user’s feed or search results interferes with the user’s ability to listen to the speech that the user actually wants to hear. It is not just compelled speech as against the platform—it is also compelled listening as against the user. Put this way, the First Amendment problem is blindingly obvious.78See generally Brief of First Amendment and Internet Law Scholars as Amici Curiae Supporting Respondents, Moody v. NetChoice, LLC, 603 U.S. 707 (2024) (Nos. 22-277 and 22-555) (making this argument).

This shift in perspective—from speaker to listener, from platform to user—is important for two reasons. First, it gives a more convincing response to the states’ argument that the platforms are not really speaking in most of their selection decisions. Facebook does not really have an opinion on whether my cousin’s apple pie photos or my friend’s story about a long line at the grocery store is worthier speech, but I certainly do. There is a sense in which the speech value of Facebook’s ranking decisions is derivative of my speech interests.

This is a compelling response to Texas’s attempt to inject political speech into social-media feeds on a viewpoint-neutral basis. It is a bit uncomfortable for Facebook to argue that it has an expressive preference to discriminate on the basis of viewpoint, but it is perfectly natural for individual users to have expressive viewpoints and to prefer content on that basis. For listeners to choose speakers on the basis of viewpoint is not to interfere with the freedom of speech; it is an exercise of that freedom and the point of the whole enterprise. Subscribing to The Nation instead of National Review (or vice-versa) is viewpoint discrimination on the user’s part, and that is a good thing! Social-media users want feeds that reflect their divergent interests and viewpoints, and social-media platforms advance, rather than inhibit, First Amendment values when they cater to these listener preferences.

Second, the focus on listeners’ expressive interests, both in choosing what speech they receive on social-media platforms and in having platforms that can algorithmically make selections in accordance with those interests, makes clearer that this is an argument only about selection and not necessarily about hosting. To the extent that states attempt to regulate platforms’ hosting functions with neutrality or must-carry mandates, those laws may rest on a firmer basis than their attempts to regulate platforms’ selection functions.79Eugene Volokh, Treating Social Media Platforms Like Common Carriers?, 1 J. Free Speech L. 377, 448 (2021). As I argued above, there is a plausible neutral baseline for hosting, and regulating hosting by itself does not interfere with listeners’ choices in the same way as regulating selection does.

In the actual Moody and Paxton cases, the platforms’ hosting and selection functions were closely related, and the most common content-moderation remedy they applied was to delete the content entirely.80See generally Eric Goldman, Content Moderation Remedies, 28 Mich. Tech. L. Rev. 1 (2021) (discussing much wider range of remedies available to platforms). Similarly, the states’ laws ran together rules that sounded in hosting (“permanently delete or ban”) with rules that sounded in selection (“post-prioritization” or “shadow ban”), as if all of these practices were entirely equivalent. However, it is possible to imagine future laws that more clearly require hosting of content on a viewpoint-neutral basis while leaving platforms greater discretion over selection. I think such laws would pose genuinely harder questions. Moody’s majority opinion collapses these distinctions in an unhelpful way.

  2. Antitrust and Self-Preferencing

A listeners’-choice perspective also shows why antitrust regulation of selection intermediaries is broadly permissible, even when some of the anticompetitive conduct complained of involves the selection of speech.81See generally Hillary Greene, Muzzling Antitrust: Information Products, Innovation and Free Speech, 95 B.U. L. Rev. 35 (2015). The actual antitrust analysis is highly fact-specific and requires careful technological and economic reasoning about particular products and markets. See generally Erik Hovenkamp, Platform Exclusion of Competing Sellers, 49 J. Corp. L. 299 (2024); Erik Hovenkamp, The Antitrust Duty to Deal in the Age of Big Tech, 131 Yale L.J. 1483 (2022). My point here is only that in many circumstances, the First Amendment does not block a court from reaching the merits of an antitrust case involving a selection intermediary. Again, the key point is that although users have content- and viewpoint-based preferences among speech, the government can act neutrally in terms of content by taking those preferences into account, whatever they are. An app store that rejects fart apps because “the App Store has enough fart, burp, flashlight, fortune telling, dating, drinking games, and Kama Sutra apps, etc. already”82App Review Guidelines § 4.3 Spam, Apple Dev., https://developer.apple.com/app-store/review/guidelines [https://perma.cc/9FA3-N67R]. is certainly expressing a viewpoint. However, to the extent that users want fart apps and the app store is suppressing competing fart apps in favor of its own, promoting welfare-enhancing consumer choices is a perfectly legitimate government interest and the harm is cognizable under traditional antitrust principles.

Thus, rules against self-preferencing by selection intermediaries will generally be permissible under the First Amendment. This position may sound absurd if one sees only the First Amendment interests of the intermediary, and it is still difficult if one takes into account the interests of its competitors. However, it becomes entirely reasonable if one considers the interests of affected users. Indeed, there is a natural congruence between the interests of users as listeners (my argument in this essay) and the interests of users as consumers (the traditional stance of antitrust law).

More specifically, it would be permissible to have a rule that a pure selection intermediary must treat first-party content that it itself produced evenhandedly with third-party content from competitors. The intermediary will have valid, expressive reasons to prefer some content over others, and these decisions will mostly be off-limits to regulatory scrutiny, as discussed above. However, a regulator can make clear that the platform cannot prefer first-party content simply because it is first-party content. The platform can use any ranking rules it wants, but those rules must be applied evenhandedly to all—or at least, the platform must give users the option of disabling any self-preferencing.
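
What such a rule might look like in practice can be sketched in a few lines of code. The following Python fragment is a minimal illustration, with hypothetical item names and an arbitrary boost value, of the idea that a single ranking rule applies to first- and third-party content alike and that any self-preference exists only as a user-controllable toggle; it is not a description of any actual platform’s ranking system.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    relevance: float   # the platform's own relevance estimate, however derived
    first_party: bool  # True if the platform itself produced the item

def rank(items, allow_self_preference):
    """Order items for display. The base score is the same for every item;
    a first-party boost applies only if the user has left it enabled."""
    def score(item):
        boost = 0.25 if (allow_self_preference and item.first_party) else 0.0
        return item.relevance + boost
    return sorted(items, key=score, reverse=True)

catalog = [
    Item("Third-party weather app", relevance=0.9, first_party=False),
    Item("Platform's own weather app", relevance=0.8, first_party=True),
]

# With the toggle off, relevance alone controls; with it on, the boost can
# change the order. Either way, the same rule applies to every item.
print([i.title for i in rank(catalog, allow_self_preference=False)])
print([i.title for i in rank(catalog, allow_self_preference=True)])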

For similar reasons, mandated disclosure of speech-selection intermediaries’ commercial ties is also generally permissible under traditional consumer-protection principles. Listeners can legitimately expect to know when a speaker has a financial incentive to tell them one thing rather than another, an expectation that applies to speech selection as well as to speech itself. At the moment, paid advertising in search results and in social-media feeds must be disclosed as such; however, a stronger rule that required selection platforms to disclose when recommended content is first-party, or when there are substantial financial ties between the platform and a speaker, would also be allowable for the same reasons.

Finally, full structural separation between hosting, delivery, and selection is a plausible antitrust remedy or regulatory mandate. In Part IV, I will discuss in more detail why this kind of separation might be appealing from a free-speech perspective. For now, I just want to note that the economic and technical separation of these functions is itself plausible from a First Amendment perspective, Moody notwithstanding. I have been arguing that hosting and delivery platforms could be subject to must-carry rules, but selection platforms generally cannot. Much of the gap between the two sides’ positions in Moody arose from the fact that the laws’ proponents generally cited caselaw about common carriage in hosting and delivery settings, while the laws’ opponents generally cited caselaw about expressive choices in selection settings.

The thing that made the Moody cases difficult to resolve was that the platforms combined both hosting and selection functions, and most of the briefing (and the opinions) ran these functions together. This would seem to open up an argument on the platforms’ part: Moody confirms they have full First Amendment protection when they engage in selection, so even a pure hosting platform is always allowed to engage in selection—i.e., there is a First Amendment right to combine these two functions. However, I think this does not follow from Moody; or to the extent that it does, Moody is wrong.

The thrust of the common-carriage cases is that the public provision of standardized service can be subject to nondiscrimination obligations.83There is a parallel tradition that these standardized services can be structurally separated from other services that involve more individualized offerings. This, for example, is what the Telecommunications Act of 1996 attempted to do with its distinction between “telecommunications service” (standardized and common-carriage) and “information service” (bespoke and unregulated). To the extent that this distinction is coherent (and I think that it is, much of the time), nondiscrimination obligations should apply to the standardized services and not to the individualized ones. Moody may have missed this distinction, but the Court’s opinion in 303 Creative LLC v. Elenis seems to hinge on it; that is, requiring a designer to create a custom wedding website (“pure speech”) is unconstitutional compelled speech, but it is perfectly permissible to require a merchant to sell a commodity product to all comers.84303 Creative LLC v. Elenis, 600 U.S. 570, 593–94 (2023); see also Dale Carpenter, How to Read 303 Creative v. Elenis, Volokh Conspiracy (July 3, 2023, 2:11 PM), https://reason.com/volokh/2023/07/03/how-to-read-303-creative-v-elenis [https://perma.cc/KVQ9-KD2N] (arguing that 303 Creative applies to products that are customized and expressive). In listener terms, listeners are paying attention to the intermediary’s own speech in individualized cases like selection, while paying attention to third-party speech in standardized cases like hosting.

  3. Unranked Feeds

An interesting partial and special case of separating hosting from selection is to require a provider to include an unranked or chronological feed for those users who want it. Facebook offers both “Top Posts” (algorithmically ranked) and “Most Recent” (chronological) feeds; Reddit offers “Best” and “Hot” (algorithmically ranked) but also “New” (chronological) sorting options.

What makes these options feasible is that there is a plausible objective baseline. A chronological feed on Facebook is “all posts from friends and pages I follow, sorted by recency.” This is workable in a way that “all posts I would be interested in” is not. The restriction to content from accounts that one follows is what makes the option to display everything tractable. A purely chronological feed of everything posted to X (the “firehose”) is not of interest to most users—it would be overwhelmingly vast—but a purely chronological feed of everything posted by those they follow is. For similar reasons, a non-algorithmic search engine is an oxymoron except in domains that are so small or simple as to barely require a search engine at all. Anything larger than “find on this webpage” requires contestable choices about ordering.
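
The tractability of this baseline can be stated concretely. The short Python sketch below (with hypothetical field names and invented sample data) shows that a chronological feed needs only two objective inputs, the follow list and the timestamps; by contrast, “all posts I would be interested in” would require a contestable model of the user’s interests.

from datetime import datetime, timezone

posts = [
    {"author": "alice", "text": "Apple pie photos",
     "posted_at": datetime(2025, 3, 1, 9, 0, tzinfo=timezone.utc)},
    {"author": "bob", "text": "Long line at the grocery store",
     "posted_at": datetime(2025, 3, 1, 10, 30, tzinfo=timezone.utc)},
    {"author": "carol", "text": "Post from an account the user does not follow",
     "posted_at": datetime(2025, 3, 1, 11, 0, tzinfo=timezone.utc)},
]

following = {"alice", "bob"}

def chronological_feed(posts, following):
    """All posts from followed accounts, newest first, and nothing more."""
    mine = [p for p in posts if p["author"] in following]
    return sorted(mine, key=lambda p: p["posted_at"], reverse=True)

for post in chronological_feed(posts, following):
    print(post["author"], "-", post["text"])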

A chronological-feed option is listener-choice enhancing. A chronological-feed mandate would not be. Facebook and other social-media platforms have extensive evidence showing that users stay on their sites longer and engage with more posts when they see non-chronological feeds. This is a legitimate user preference; given the limits of attention, the user benefits greatly from delegating the choice to Facebook.85I think it is more accurate to call this a “delegation” of choice rather than “choosing not to choose.” Cf. Cass R. Sunstein, Choosing Not to Choose, 64 Duke L.J. 1, 9 (2014). However, not every user wants algorithmic feeds. I, for example, only used chronological ordering on Twitter, and have stuck to that preference on federated platforms. This, too, is a legitimate user preference; a platform that forces algorithmic ordering on everyone when chronological ordering is feasible thwarts some listeners’ choices about speech selection.

This is another way in which Moody paints with too broad a brush. Seeing selection as purely a matter of platform speech makes the majority insensitive to listeners’ speech interests. Requiring social-media platforms to offer a chronological option in addition to their preferred algorithmic feeds looks like a restriction on the platforms’ speech rights; indeed, to the majority it might even be compelled speech. However, a chronological-feed option is also a way of respecting users-as-listeners’ choices about speech without forcing the platform to make any particular ranking choices that it or its users would object to. Requiring a chronological option strictly increases the choices available to listeners, while not interfering with a platform’s ability to provide its preferred ordering to any listeners who are interested in hearing it.

IV. Filtering

Now consider media from the perspective of unwilling listeners. As we will see, there are really three different types of unwilling listeners in media regulation. In each case, it is helpful to distinguish between (1) downstream filtering infrastructure that empowers listeners themselves to avoid unwanted content, and (2) upstream filtering rules that prevent that content from reaching them in the first place.

First, there are listeners who are uninterested in or who actively dislike particular content: opera fans who loathe rap music or reality television fans who find scripted shows unbearably dull. Here, downstream filtering infrastructure is typically sufficient. As long as there is something they would rather watch (an access problem), as long as they are able to find out about it (a selection problem), and as long as they are actually able to switch to it (which is true for most media),86Exceptions typically involve being in public places, such as in an auto mechanic’s waiting room or on a subway car with someone having a loud video call. they can watch operas and reality shows, and ignore the rap and scripted dramas. It does not bother them, because they do not need to see it. Upstream filtering rules are unnecessary.

Second, there are listeners who are individually targeted with specific unwanted content that is hard for them to avoid. This is fundamentally a delivery problem; it does not arise with other types of media. Sometimes speakers target individual listeners, like a harassing telephone caller. Sometimes they target many listeners indiscriminately, like an email spammer. Either way, listeners can try to use self-help downstream filtering to avoid it, but if that fails, they may need upstream filtering to help prevent it from reaching them in the first place.

And third, there are minors. Sometimes, children want to avoid violent, sexual, disturbing, or other adult-themed content because it upsets them, but they come across it by accident and cannot look or flip away in time. Sometimes—perhaps more often—the problem is that children are willing to see this material, but their parents or guardians want to shield them from it. In both cases, the theory is that children are less capable of making choices for themselves as listeners than adults are, and therefore that some kind of upstream filtering rule is necessary because downstream filtering will fail. Either the kids themselves will be less good at filtering than their parents would be, or the kids will affirmatively evade the filtering their parents try to impose.

Downstream filtering infrastructure also plays a crucial role in supporting (or undermining) the rationales for other kinds of media regulation. On the one hand, good downstream filtering makes it possible for listeners to pick and choose among the superabundance of content that access rules try to make available. On the other, good downstream filtering can reduce the need for upstream filtering rules—in First Amendment terms, it is frequently a “less restrictive alternative.”

A. Broadcast

In broadcast media, unwilling listeners were typically expected simply to change the channel. They may not always have had many other broadcast options, but no one was forcing them to watch any particular broadcast. Even this limited measure of choice was sufficient to protect unwilling listeners from programs they despised. As the range of channels expanded (and with it, the range of choices), any one unwanted channel became less of an imposition on listeners—indeed, they became less likely to notice or care about it at all. Similarly, by their nature, very few broadcast programs were personally targeted at, or specifically harmful to, individual listeners. The local CBS affiliate simply did not care enough about Angela Johnson at 434 Oakview Terrace to preempt Murder, She Wrote with an hour-long special insulting Johnson and her life choices.

Instead, the filtering problems on broadcast media primarily concern minors. The theory of “just change the channel” does not work for them for two reasons. First, something offensive or shocking could come up unexpectedly when one is just flipping through channels. This was the case in FCC v. Pacifica Foundation, in which the Supreme Court upheld the FCC’s finding that a radio broadcast of George Carlin’s “seven dirty words” routine was indecent in violation of its regulations.87FCC v. Pacifica Found., 438 U.S. 726, 740–41 (1978). And it is the case with the FCC’s modern attempts to extend its obscenity-and-indecency rules to cover fleeting expletives and other sudden intrusions into otherwise family-friendly broadcasts, like Bono calling U2’s Best Original Song win at the Golden Globes “really, really, fucking brilliant” live on air, or the 2004 Super Bowl wardrobe malfunction.88See generally FCC v. Fox Television Stations, Inc., 567 U.S. 239, 248, 258 (2012) (finding the FCC’s rule unconstitutionally vague as applied to fleeting expletives). These are cases where a listener (here, a parent making choices on behalf of their child) cannot effectively make a choice not to receive the unwanted material because of the linear, real-time nature of broadcast audio and video. The character of the channel changes more quickly than the listener can flip away.

Second, sometimes children want to watch shows their parents do not want them to. Nominally, the theory here is that parents cannot constantly supervise their children’s TV viewing; stations have to do the filtering work that parents cannot.89See J.M. Balkin, Media Filters, the V-Chip, and the Foundations of Broadcast Regulation, 45 Duke L.J. 1131, 1136–38 (1996) (arguing persuasively that the difficulty of parental supervision is the real import of courts’ language that broadcast media are uniquely “pervasive”). This is why the FCC’s indecency regulations are confined to only the hours from 6:00 AM to 10:00 PM each day: at night, when indecency regulations do not apply, kids are assumed to be in bed and not watching TV.9047 C.F.R. § 73.3999(b) (2023). In comparison with indecency rules, obscenity regulations apply at all hours of the day. Id. § 73.3999(a). The indecency rules are an incursion on adults’ abilities as listeners to choose what speech they want to receive. They are an exception to the normal rule that willing listeners beat unwilling listeners. The justification is simply the usual one offered so often in American law: protecting the supposed innocence of the young from the purportedly corrupting influence of being aware that sex is a thing that exists. The eight hours at night when indecency rules do not apply serve as a concession to adults’ interests as listeners.

I say that this is “nominally” the theory of broadcast indecency regulation because it only really makes sense in a world where the main audio and video media are broadcast—a world we have not lived in for decades. Cable, satellite, and other subscription services have never been subject to the indecency rules. Here, the theory is that parents can choose whether or not to subscribe, presumably in a different way than they could choose whether or not to have a TV. Thus, they have an upfront choice that they can use to prevent their children from receiving unwanted indecent material. If you do not want your kids to watch Skinemax late at night, do not get cable, or do not pay extra for premium channels. Similar laws and similar logic apply to “over-the-top” broadcast services on the Internet, like ESPN+’s live sports games. If you do not like it, do not subscribe.

At times, the government has tried to impose more stringent filtering rules on broadcasters. Listeners’ choices provide a simple and compelling explanation of where the doctrine has come to rest. Consider United States v. Playboy Entertainment Group, Inc., in which § 505 of the 1996 Telecommunications Act required cable operators to “fully scramble or otherwise fully block”91Codified at 47 U.S.C. § 561(a). sexually explicit programs except between the hours of 10:00 PM and 6:00 AM.92United States v. Playboy Ent. Grp., Inc., 529 U.S. 803, 806 (2000). Of course, most cable operators already scrambled sexually explicit channels for non-subscribers, and sexually explicit channels like Playboy Television were typically “premium” offerings sold à la carte, so only paying subscribers to these specific channels would have a converter box to descramble them.93See id. at 807. So far, this was simply a case of parental choice over what broadcast services to subscribe to.

The technological complication was “signal bleed”; the analog scrambling technologies available in the 1990s could not prevent portions of the audio and video from leaking through, albeit in somewhat garbled form.94Id. at 807–08. To Congress, signal bleed meant that existing scrambling by itself was insufficient, and so cable companies would need to “fully block” such content if they could not “fully scramble” it. However, the Supreme Court observed that there was a less-restrictive alternative to fully banning a channel—“block[ing] unwanted channels on a household-by-household basis.”95Id. at 815. Indeed, this capacity was already required of cable systems by § 504 of the Act,96Codified at 47 U.S.C. § 560. so the law contained its own less-restrictive alternative. In other words, a legal regime requiring upstream filtering for all listeners by broadcast intermediaries was unconstitutional because there was a downstream alternative that gave individual listeners a more granular choice.

A more technically complex broadcast filtering system is the “V-chip,” which the 1996 Telecommunications Act required in all televisions shipped through interstate commerce.9747 U.S.C. § 330(c)(1); see generally Balkin, supra note 89. The Act describes the V-chip bloodlessly as “a feature designed to enable viewers to block display of all programs with a common rating,”9847 U.S.C. § 303(x). but the intent and implementation were that the rating systems would flag programs with sexual, violent, or other types of adult content. While the V-chip is mandated by law, the ratings that it interprets are not. Under the TV Parental Guidelines, which include classic bangers like TV-14-LS (many parents would find the contents unsuitable for children under 14 because of crude language and sexual situations), programs are “voluntarily rated by broadcast and cable television networks, or program producers.”99Frequently Asked Questions, TV Parental Guidelines, http://tvguidelines.org/faqs.html [https://perma.cc/CMF3-PQWK]. Indeed, there is a strong argument that a mandatory rating system would constitute unconstitutional compelled speech. See Book People, Inc. v. Wong, 91 F.4th 318, 336–40 (5th Cir. 2024) (holding unconstitutional a mandatory self-applied age-rating system for websites). Overall use of the V-chip seems to have peaked at about 15 percent of parents.100Henry J. Kaiser Family Foundation, Parents, Children, & Media: A Kaiser Family Foundation Survey, KFF, https://www.kff.org/wp-content/uploads/2013/01/entmedia061907pres.pdf [https://web.archive.org/web/20250221161327/https://kff.org/wp-content/uploads/2013/01/entmedia061907pres.pdf].

It is enlightening to consider the V-chip, like § 504, as a mechanism for creating listener choice under the choice-unfriendly conditions of broadcast. In both cases, signals are still transmitted indiscriminately to all listeners, but listeners can individually choose whether to opt in to or out of making those signals intelligible. Section 504 does so in a less granular way (entire channels), while the V-chip does so in a more granular way (individual programs), but the general idea is the same. It is not a coincidence that both regulatory regimes converged on a technical system that put more choices in the hands of individual households. This overall downstream movement of choices about speech—from speakers and intermediaries to listeners; from “push” media to “pull” media—is one of the most significant trends in recent media history.
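
To make the mechanism concrete, here is a minimal sketch in Python (with hypothetical ratings and program data) of the household-side logic the V-chip embodies: the full schedule still arrives, and the household’s own settings, not the broadcaster or the government, decide which programs are displayed.

# Ratings the household has chosen to block; the law mandates the blocking
# feature, but the choice of which ratings to block belongs to the parent.
blocked_ratings = {"TV-MA", "TV-14-LS"}

def display(program):
    """Show a program unless its voluntary rating is on the household's list."""
    return program["rating"] not in blocked_ratings

schedule = [
    {"title": "Evening news", "rating": "TV-G"},
    {"title": "Late-night drama", "rating": "TV-MA"},
]

for program in schedule:
    print(program["title"], "shown" if display(program) else "blocked")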

B. Delivery

Now consider filtering rules that help unwilling listeners avoid unwanted deliveries. The First Amendment does not operate directly here; outside of some narrow contexts involving a “captive audience,” there is no First Amendment right not to be spoken to.101See Frisby v. Schultz, 487 U.S. 474, 487–88 (1988) (upholding an ordinance against residential picketing on the grounds that people are captive audiences in their own homes); Snyder v. Phelps, 562 U.S. 443, 459–60 (2011) (rejecting liability for funeral protests on the ground that the mourners were not a captive audience when the protesters “stayed well away from the memorial service”). Instead, laws designed to protect listeners from unwanted communications in delivery media are generally constitutional, provided that they are suitably tailored to the actual harms suffered by listeners who are genuinely unwilling.

The most obvious example is that anti-harassment laws have repeatedly been upheld when they involve one-to-one communications.102E.g., Lebo v. State, 474 S.W.3d 402, 407 (Tex. Ct. App. 2015) (upholding conviction for repeatedly sending threatening emails and telephone calls to victim). Repeated telephone calls or harassing emails can be the subject of valid restraining orders, civil judgments, or criminal convictions.103See, e.g., 47 U.S.C. § 223(a) (prohibiting telephone harassment). See also United States v. Lampley, 573 F.2d 783, 788 (3d Cir. 1978) (upholding constitutionality of § 223(a)); United States v. Darsey, 342 F. Supp. 311, 312–14 (E.D. Pa. 1972) (describing problems § 223(a) was meant to solve). See generally Genevieve Lakier & Evelyn Douek, The First Amendment Problem of Stalking: Counterman, Stevens, and the Limits of History and Tradition, 113 Calif. L. Rev. 143, 170–77 (2025) (discussing history of anti-stalking law). The key here, as I argued in Listeners’ Choices, is that these restrictions do not prevent speakers from addressing willing listeners.104Grimmelmann, supra note 1, at 392. They remain free to telephone anyone else they want; only one particular number is forbidden. The legal system can therefore protect the unwilling victims of harassment without interfering in the core First Amendment relationship between willing speaker and willing listener.105See generally Leslie Gielow Jacobs, Is There an Obligation to Listen?, 32 U. Mich. J.L. Reform 489 (1999). An order requiring a speaker to take down a blog post about the victim interferes with that relationship; an order requiring them to stop sending direct messages to the victim does not.106See Volokh, supra note 15, at 742–43 (making one-to-many vs. one-to-one distinction).

Listeners can opt out of unwanted one-to-one commercial speech. The Controlling the Assault of Non-Solicited Pornography and Marketing Act (“CAN-SPAM”) for email, the Telephone Consumer Protection Act (“TCPA”) for telephone and Short Message Service (“SMS”), Do-Not-Call for telephone, and the TCPA for faxes all broadly prohibit sending certain types of commercial solicitations to unwilling listeners. CAN-SPAM uses an opt-out system; a sender gets one bite at the apple but must refrain from further emails once a recipient objects.10715 U.S.C. § 7704(a)(3)(A)(i). With some exceptions, the TCPA prohibits the use of automated dialers and prerecorded messages (that is, bulk communications particularly unlikely to be of interest to individuals) unless the recipient has affirmatively opted in.10847 U.S.C. § 227(b)(1)(B). Do-Not-Call bars all unsolicited commercial calls to numbers on the list,10915 U.S.C. § 6151; 16 C.F.R. § 310.4(b)(1)(iii)(B) (2024). and the TCPA bars all unsolicited commercial faxes.11047 U.S.C. § 227(b)(1)(C). All of these laws have been upheld against First Amendment challenges.111See generally Mainstream Mktg. Servs., Inc. v. FTC, 358 F.3d 1228 (10th Cir. 2004) (discussing Do-Not-Call); United States v. Smallwood, No. 3:09-CR-249-D(07), 2011 U.S. Dist. LEXIS 76880 (N.D. Tex. July 15, 2011) (discussing CAN-SPAM); Moser v. FCC, 46 F.3d 970 (9th Cir. 1995) (discussing telephone provisions of TCPA); Missouri ex rel. Nixon v. Am. Blast Fax, Inc., 323 F.3d 649 (8th Cir. 2003) (discussing fax provisions of TCPA).

The First Amendment rule for unwanted postal mail is even stronger. In Rowan v. United States Post Office Department, the Supreme Court upheld a law under which “a person may require that a mailer remove his name from its mailing lists and stop all future mailings to the householder.”112Rowan v. U.S. Post Off. Dep’t, 397 U.S. 728, 729 (1970). Although the law was framed in terms of allowing recipients to opt out of receiving “erotically arousing or sexually provocative” advertisements,113Id. at 730. it allowed recipients “complete and unfettered discretion in electing whether or not [they] desired to receive further material from a particular sender,”114Id. at 734. and the legislative history indicated that neither the postal service nor a reviewing court could “second-guess[]” the recipient’s decision.115Id. at 739 n.6. “Nothing in the Constitution compels us to listen to or view any unwanted communication,” wrote Chief Justice Burger for a unanimous court.116Id. at 737. Compare Rowan with Bolger v. Youngs Drug Products Corp., in which the Court held a law prohibiting the mailing of contraceptive advertising unconstitutional:117Bolger v. Youngs Drug Prods. Corp., 463 U.S. 60, 72 (1983). that is, a prohibition on the use of mailings was constitutional when the prohibition was requested by the recipient (Rowan) but unconstitutional when the prohibition was imposed by the government (Bolger).

Although Rowan is sometimes discussed as a captive-audience case,118E.g., Snyder v. Phelps, 562 U.S. 443, 459–60 (2011). it is better understood as a case about delivery media. Consider Frisby v. Schultz, a true captive-audience case: there is nowhere to hide from protesters outside your door, so a law prohibiting residential picketing is constitutional.119Frisby v. Schultz, 487 U.S. 474, 487–88 (1988). By contrast, the Supreme Court has treated self-help as effective against unwanted mail. Bolger stated that the “short, though regular, journey from mail box to trash can is an acceptable burden, at least so far as the Constitution is concerned.”120Bolger, 463 U.S. at 72 (internal quotation omitted). The only way this Bolger dictum can be squared with Rowan is if the basis of Rowan’s holding is listeners’ rights against unwanted communications, rather than the householder’s status as a captive audience to unwanted postal mail in the home.

It is also widely accepted that there is no First Amendment problem if a delivery carrier implements some form of filtering or blocking at the request of a user. Wireless and landline telephone companies offer call blocking to their customers, which allows a user to block all further calls from a number. Indeed, FCC regulations explicitly permit providers to block calls that are likely to be unwanted based on “reasonable analytics”12147 C.F.R. § 64.1200(k)(3)(i) (2023). so long as the recipient has an opportunity to opt out of the blocking.122Id. § 64.1200(k)(3)(iii). Email filtering is also very widely deployed. Some users do the filtering themselves, manually or with an app, but many rely on the filtering (both explicit blacklists and machine-learning classifiers) offered by their email providers. Here again, § 230 plays a role: the most common reason that delivery media block “otherwise objectionable” communications is that their users object to them, and spam is a common example.123See, e.g., Republican Nat’l Comm. v. Google, Inc., No. 2:22-cv-01904-DJC-JBP, 2023 U.S. Dist. LEXIS 149076, at *11 (E.D. Cal. Aug. 24, 2023).
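
The structure of these rules can be expressed compactly. The Python sketch below (hypothetical numbers and a deliberately crude stand-in for a provider’s analytics) illustrates the layering the FCC regulations contemplate: the listener’s own blocks always apply, and provider-level blocking applies only so long as the listener has not opted out.

user_blocklist = {"+15550100"}      # numbers this recipient has blocked
provider_flagged = {"+15550199"}    # numbers the provider's analytics flag
provider_blocking_enabled = True    # the recipient may switch this off

def deliver(caller):
    """Return True if the call should be put through to the recipient."""
    if caller in user_blocklist:
        return False                # the listener's own choice always holds
    if provider_blocking_enabled and caller in provider_flagged:
        return False                # provider filtering, subject to opt-out
    return True

for caller in ["+15550100", "+15550199", "+15550123"]:
    print(caller, "delivered" if deliver(caller) else "blocked")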

Finally, many laws require speakers to accurately identify themselves upstream when using delivery media so that listeners downstream can decide whether or not to receive their speech. CAN-SPAM prohibits false or misleading header information,12415 U.S.C. § 7704(a)(1). prohibits deceptive subject lines,125Id. § 7704(a)(2). and requires that advertisements be disclosed as such.126Id. § 7704(a)(5)(i). The Truth in Caller ID Act prohibits spoofing caller ID information “with the intent to defraud, cause harm, or wrongfully obtain anything of value.”12747 U.S.C. § 227(e)(1). The Junk Fax Prevention Act of 2005 (“JFPA”) requires clear “identification of the business, other entity, or individual sending the [fax] message.”128Id. § 227(d)(1)(B). Although there is a right to speak anonymously under many circumstances, there are limits on how far a speaker can go in lying about their identity to trick a listener into hearing them out. Importantly, some of these laws require delivery intermediaries to implement the infrastructure for accurate identification. The FCC, for example, requires telephone providers to implement a comprehensive framework against caller-ID spoofing known as STIR/SHAKEN, short for “secure telephone identity revisited” and “signature-based handling of asserted information using tokens.”12947 C.F.R. § 64.6300 (2023).

C. Hosting

Listener choices play a central role in the justifications for hosting providers’ First Amendment rights—and also in the justification for speakers’ access rights to hosting platforms. These justifications presume that listeners can voluntarily choose to engage with hosted content they want and to avoid hosted content they do not want. In the terminology of Listeners’ Choices, listeners can be asked to bear the necessary “separation costs” because they can easily and inexpensively choose where to click.130Grimmelmann, supra note 1, at 395–96. It follows, then, that unwilling listeners’ objections to content are not a sufficient reason to prevent it from being hosted for willing listeners.

The Supreme Court’s decision in Snyder v. Phelps is a nice example.131See generally Snyder v. Phelps, 562 U.S. 443 (2011). In addition to its funeral protests, the Westboro Baptist Church has a website that is, if anything, more offensive and upsetting. However, a website is even easier for an unwilling listener to avoid. The Church physically picketed at Albert Snyder’s son’s funeral, but he only found the website “during an Internet search for his son’s name.”132Id. at 449 n.1. Unsurprisingly, he pressed only the funeral-protest theory before the Supreme Court and abandoned his tort claims based on the website.133Id. The Court held that the First Amendment protected the Church’s picketing, and the argument is even stronger for the website.

Now consider whether hosting providers can have responsibilities to avoid carrying harmful-to-minors material. To simplify only slightly, the history of anti-indecency regulation is that some adults have tried to restrict minors’ access to sexually themed content by passing upstream filtering laws requiring speakers and hosting platforms to prevent the posting of such content. The courts have responded by invalidating these laws whenever listener-controlled downstream filtering is a plausible alternative. Indeed, it is striking how many contexts the same basic rationale has worked in.

Start with Sable Communications of California, Inc. v. FCC, in which federal law regulated “dial-a-porn” services by prohibiting the transmission of indecent interstate commercial telephone messages.134Sable Commc’ns of Cal., Inc. v. FCC, 492 U.S. 115, 118 (1989). While the prohibition might have been constitutional as to minors, adults have a constitutional right to view indecent but not obscene material. Because the statute prohibited transmission to adults as well, it restricted protected speech, and therefore was unconstitutional.

Put this way, Sable is a classic hosting case of both willing and unwilling listeners. The fact that the speech might reach some unwilling (minor) listeners does not mean that it can be prohibited entirely in such a way as to deprive willing (adult) listeners. Indeed, this first-cut explanation will apply perfectly well to almost all of the cases in this section. It is not wrong.

However, Sable is also a filtering case. The FCC had previously considered multiple technologies to block minors without blocking adults, including credit-card verification, access codes that would be provided only following an age verification process, message scrambling requiring a descrambler that only adults would be able to purchase, and customer-premises blocking, in which subscribers could block their phones from being able to call entire exchanges (including the paid numbers over which Sable and other dial-a-porn operators provided their services). The Court specifically identified these technical schemes as plausible “less restrictive means, short of a total ban, to achieve the Government’s interest in protecting minors.”135Id. at 129.

These are all technologies to distinguish adults from minors, but they are also all filtering technologies. All four of them require a user to take an affirmative step to listen to particular speech. Indeed, the act of dialing a phone number itself is an affirmative step that these other mechanisms could piggyback on. This is why I describe Sable as a close cousin to a hosting case. To be sure, Sable Communications was delivering its own speech and not that of third parties, but it was fundamentally sending content to listeners on demand, and in such a way that they could predict the general outlines of the speech they were about to receive. (This fact alone is sufficient to distinguish FCC v. Pacifica Foundation and the other broadcast-indecency cases.136FCC v. Pacifica Found., 438 U.S. 726, 748–49 (1978).)

The same arc is visible in the Supreme Court’s caselaw on indecency on the Internet. The first stop was Reno v. American Civil Liberties Union.137See generally Reno v. Am. C.L. Union, 521 U.S. 844 (1997). The Communications Decency Act prohibited the transmission of indecent or sexual material to minors138Id. at 859–60.—including a good deal of material that was fully constitutional for adults to receive.139Id. at 870–76. The government tried to defend the statute by arguing that it only required intermediaries to refrain from sending such material to minors, while leaving them free to send it to adults.140Id. at 876–79. However, the Court held that “this premise is untenable”—that “existing technology did not include any effective method for a sender to prevent minors from obtaining access to its communications on the Internet without also denying access to adults.”141Id. at 876. In other words, the absence of effective age verification turned a de jure rule against sending indecent material to minors into a de facto rule against hosting it in general.142The Supreme Court is currently reconsidering the constitutional status of age-verification technology, in the context of numerous state laws requiring pornographic sites to implement age verification. See Free Speech Coal., Inc. v. Paxton, 95 F. 4th 263, 284 (5th Cir. 2024), cert. granted, 144 S. Ct. 2714 (2024).

Seven years later, in Ashcroft v. American Civil Liberties Union, the Supreme Court confronted a more narrowly drafted law, the Child Online Protection Act (“COPA”).143See generally Ashcroft v. Am. C.L. Union, 542 U.S. 656 (2004). Again, the statute prohibited sending to minors certain material that was constitutional for adults to receive.144Id. at 661–62. This time, however, the affirmative defenses were broader; providers were protected as long as they required a credit card, digital age verification, or any other “reasonable measures that are feasible under available technology.”145Id. at 662. The Court affirmed a preliminary injunction against COPA because “blocking and filtering software”—software operated and controlled by parents to limit the sites their children can access—was likely a less restrictive and more effective alternative.146Id. at 666–70.

As in Playboy Entertainment Group, the availability of more effective downstream filtering technologies meant that a law requiring upstream filtering was unconstitutional. However, unlike in Playboy Entertainment Group, the downstream filters were made available by third parties. The fact that parents could install their own filtering software meant that website hosts were under no duty to do their own filtering. This is a listener-choice-facilitating rule: Yes, it transfers some of the burdens of filtering from intermediaries to listeners, but it also means that each family can choose for itself how to tune its filters, if any.

In United States v. American Library Ass’n, the Supreme Court upheld the provisions of the Children’s Internet Protection Act (“CIPA”), which conditioned federal funding to schools and libraries on their installation of filtering software.147United States v. Am. Libr. Ass’n, Inc., 539 U.S. 194, 214 (2003). A four-Justice plurality held that the condition was a valid exercise of Congress’s Spending Clause power and that library Internet access was not a public forum.148Id. at 205–06. Meanwhile, Justice Kennedy’s and Justice Breyer’s concurrences in the judgment made nuanced arguments about listeners’ choices. Justice Kennedy’s argument rested on the government’s claim that “on the request of an adult user, a librarian will unblock filtered material or disable the Internet software filter without significant delay”—that is, CIPA allowed willing adult listeners to decide for themselves what sites to view.149Id. at 214. Justice Breyer made a similar point, arguing that an unblocking request was a “comparatively small burden.”150Id. at 220. Whether or not these claims are empirically accurate, the general principle is consistent with a deference to listener-controlled choices about filtering, subject only to the carve-out that minors are not regarded as having the autonomy to choose to view certain material that their elders regard as harmful to them.

D. Selection

I have argued that selection generally facilitates listener choices among speech, and that government attempts to alter platforms’ selection decisions interfere with listeners’ practical ability to find the content that they want. This is not to say that platforms’ selection decisions are ideal or give listeners the full degree of choices they might enjoy. Platforms will almost always get some users’ choices wrong some of the time. Every update you scroll past or search result you ignore is a mistake from your perspective. Platform-provided selection is better than the chaos of content without selection, but there is almost always room to improve.151See generally James Grimmelmann, The Virtues of Moderation, 17 Yale J.L. & Tech. 42 (2015) (discussing moderation in online communities).

It is helpful, then, to recognize that the bundling of hosting and selection on today’s social-media platforms may be a bug rather than a feature. I argued above that separation of hosting and selection could be permissible as a way for government to ensure that speakers are able to be heard by listeners who genuinely want to hear them (hosting) while not forcing their speech on listeners who do not (selection). However, there is another advantage to clearly separating the two functions, whether required by regulation or voluntarily adopted by a platform.

What would a world where social-media platforms separated hosting from selection look like? The short answer is that it would look much more like web search already does. Hosting providers make content available at speakers’ request, with stable URLs at reachable IP addresses, and transmit that content to listeners at listeners’ request. Meanwhile, search engines index the content and provide recommendations of relevant content to listeners, also at listeners’ request. Listeners have a choice of competing search engines to help them make their choice among competing speakers. The system is not perfect—Google has a dominant market share for general web search in the United States—but there is competition for those users who are willing to use other search engines. For example, Bing, DuckDuckGo, and Kagi are three highly creditable alternatives.

Several commentators have described a similar possible separation for social media. One proposal from a group of Stanford researchers is for “middleware,” defined as “software, provided by a third party and integrated into the dominant platforms, that would curate and order the content that users see.”152Francis Fukuyama, Barak Richman, Ashish Goel, Roberta R. Katz, A. Douglas Melamed & Marietje Schaake, Middleware for Dominant Social Platforms: A Technological Solution to A Threat to Democracy, Stan. Cyber Pol’y Ctr. (2021), https://fsi-live.s3.us-west-1.amazonaws.com/s3fs-public/cpc-middleware_ff_v2.pdf [https://perma.cc/SZ9Z-AW3P]; see also Francis Fukuyama, Richard Reisman, Daphne Keller, Aviv Ovadya, Luke Thorburn, Jonathan Stray & Shubhi Mathur, Shaping the Future of Social Media with Middleware, Found. for Am. Innovation (Dec. 2024), https://cdn.sanity.io/files/d8lrla4f/staging/1007ade8eb2f028f64631d23430ee834dac17f8e.pdf/Middleware [https://perma.cc/7TBA-UUR3]. Users on the platform would rely on the platform for hosting speakers’ content, but third-party middleware would do the selection. The first and most obvious virtue of middleware is that it introduces competition into the selection process, even when a platform is “dominant”; a monopoly on hosting does not automatically translate into a monopoly on selection.
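
The division of labor that middleware contemplates is easy to sketch. In the Python fragment below (hypothetical function names and toy data), the platform exposes hosted posts through an interface, and the user, not the platform, picks which third-party selector turns that pool into a feed.

def platform_fetch_posts(user):
    """Stand-in for the hosting platform's API: return the hosted posts from
    the user's network, unranked."""
    return [
        {"author": "alice", "text": "Local news roundup", "topic": "news"},
        {"author": "bob", "text": "Cat photo", "topic": "pets"},
        {"author": "carol", "text": "Election analysis", "topic": "politics"},
    ]

def news_first_middleware(posts):
    """One third-party selector: put news and politics ahead of everything else."""
    return sorted(posts, key=lambda p: p["topic"] not in {"news", "politics"})

def lighter_fare_middleware(posts):
    """A competing selector: show only non-political content."""
    return [p for p in posts if p["topic"] not in {"news", "politics"}]

chosen_selector = news_first_middleware   # the user's choice, not the platform's
feed = chosen_selector(platform_fetch_posts("me"))
print([p["text"] for p in feed])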

The authors of the Stanford proposal argue that middleware would “dilute[] the enormous control that dominant platforms have in organizing the news and opinion that consumers see.”153Fukuyama, Richman, Goel, Katz, Melamed & Schaake, supra note 152, at 6. This is entirely correct, but I would put the point differently. Middleware pushes control from a platform towards its users, specifically towards users as listeners. An integrated platform benefits from its position at the center of the two-sided market for hosting, even if its selection is disappointing to users. However, when selection is broken out, selection intermediaries will attract users precisely to the extent that they succeed in satisfying those users’ desire for useful advice about what speech to listen to. That is, middleware selection providers compete along the right axis.

A close relative of middleware—or perhaps a subset of it—is “user agents”: software controlled by the end user that takes the content from a platform and curates it. The difference between middleware and a user agent is that middleware is integrated with the platform and takes over the selection function, while a user agent starts from the content selected by the platform and performs a second round of selection on it. For example, an ad blocker integrated into a user’s browser takes the content selected by a website and curates it by removing the ads. I have argued that these user agents are important for user autonomy in deciding what software to run on their computers, and a similar argument applies to users’ autonomy over what speech they receive.154James Grimmelmann, Spyware vs. Spyware: Software Conflicts and User Autonomy, 16 Ohio St. Tech. L.J. 25 (2020).
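
A user agent’s second round of selection is even simpler to sketch. The Python fragment below (hypothetical item structure) runs entirely on the listener’s side: it takes the feed the platform has already assembled and, in the spirit of an ad blocker, strips out the categories the user has told it to drop.

platform_selected_feed = [
    {"kind": "post", "text": "Friend's vacation photos"},
    {"kind": "ad", "text": "Sponsored: buy a mattress"},
    {"kind": "post", "text": "Local event announcement"},
]

def user_agent_filter(feed, drop_kinds=frozenset({"ad"})):
    """Second-round selection on the client side, controlled by the user."""
    return [item for item in feed if item["kind"] not in drop_kinds]

print([item["text"] for item in user_agent_filter(platform_selected_feed)])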

Ben Thompson, a technology and business analyst and journalist, offered a fascinating road-not-taken proposal for Twitter (prior to its transformation into X by Elon Musk).155Ben Thompson, Back to the Future of Twitter, Stratechery (Apr. 18, 2022), https://stratechery.com/2022/back-to-the-future-of-twitter [https://perma.cc/3P3G-94KG]. Thompson argued that Twitter should be split in two: TwitterServiceCo would be “the core Twitter service, including the social graph”; TwitterAppCo would be “all of the Twitter apps and the advertising business.”156Id. TwitterAppCo would pay TwitterServiceCo for application programming interface (“API”) access to post to timelines and read tweets, but so could other companies. As Thompson observes, this solution would “cut a whole host of Gordian Knots”: it would make it easier for new social-media entrants to compete on offering better clients or better content moderation; it would pull many controversial content-moderation decisions closer to the users they directly affect; and it would enable a far greater diversity of content moderation policies (both geographically and based on user preferences).157Id.
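
Thompson’s split can also be sketched in code. The classes below track his hypothetical TwitterServiceCo and TwitterAppCo only loosely, and the methods are invented for illustration rather than drawn from his proposal: the service company holds the social graph and the posts, while competing clients differentiate themselves on presentation and moderation.

```python
from typing import Dict, List


class ServiceCo:
    """Toy stand-in for the hosting half of the split: the social graph plus the posts."""

    def __init__(self) -> None:
        self.posts: List[Dict[str, str]] = []
        self.follows: Dict[str, List[str]] = {}

    def post(self, author: str, text: str) -> None:
        self.posts.append({"author": author, "text": text})

    def follow(self, follower: str, followee: str) -> None:
        self.follows.setdefault(follower, []).append(followee)

    def read_timeline(self, user: str) -> List[Dict[str, str]]:
        followed = set(self.follows.get(user, []))
        return [p for p in self.posts if p["author"] in followed]


class StrictModerationClient:
    """One of many possible 'AppCo' clients, differentiated by its moderation policy."""

    BLOCKED_WORDS = {"scam"}

    def __init__(self, service: ServiceCo) -> None:
        self.service = service

    def timeline(self, user: str) -> List[Dict[str, str]]:
        return [p for p in self.service.read_timeline(user)
                if not any(w in p["text"].lower() for w in self.BLOCKED_WORDS)]


if __name__ == "__main__":
    svc = ServiceCo()
    svc.follow("reader", "writer")
    svc.post("writer", "hello world")
    svc.post("writer", "amazing crypto scam, click here")
    print(StrictModerationClient(svc).timeline("reader"))  # only the first post survives
```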

Needless to say, this was not the route that Musk followed after his acquisition of Twitter—but it is much closer to the route that many post-Twitter social-media services are following. In their own ways, Mastodon, Bluesky, and Threads have embraced a version of the middleware ideal, but with an interesting twist. All three of these systems have a “federated” approach to hosting. Users have a direct affiliation with a server or system; they upload their posts to it, and they read other users’ posts through it.

So far, so familiar. The difference is that these services all federate with other services providing similar functionality to their own users. They copy posts from other servers; they make their own users’ posts available for other servers to copy. The result is that content posted by a user anywhere is available to all users everywhere. As a consequence, any given server has less power over its users; they can migrate to a different server without cutting themselves off from their connections on the social graph. Mastodon, for example, has built-in migration functionality that allows users to change servers and have their contacts automatically update subscriptions to the new one.
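
The federated pattern can be illustrated with another hedged sketch; real systems such as Mastodon’s ActivityPub implementation are far more elaborate, and everything here (names, methods, the copying logic) is simplified for exposition. The key point is that each server combines its own users’ posts with copies drawn from its peers, so no single server controls what its users can reach.

```python
from typing import Dict, List, Set


class FederatedServer:
    """Toy federated server: hosts its own users' posts and copies its peers' posts."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.local_posts: List[Dict[str, str]] = []
        self.peers: Set["FederatedServer"] = set()

    def federate_with(self, other: "FederatedServer") -> None:
        self.peers.add(other)
        other.peers.add(self)

    def publish(self, author: str, text: str) -> None:
        self.local_posts.append({"author": author, "text": text, "home": self.name})

    def federated_feed(self) -> List[Dict[str, str]]:
        # Each server combines its own posts with copies drawn from its peers.
        feed = list(self.local_posts)
        for peer in self.peers:
            feed.extend(peer.local_posts)
        return feed


if __name__ == "__main__":
    a, b = FederatedServer("server-a"), FederatedServer("server-b")
    a.federate_with(b)
    a.publish("alice", "Hello from server A")
    b.publish("bob", "Hello from server B")
    # Both users can see both posts, whichever server they call home.
    print(len(a.federated_feed()), len(b.federated_feed()))  # 2 2
```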

Federation also has substantial content-moderation benefits because, like middleware, it pushes content moderation closer to the listeners who are directly affected by it. Each federated server can have its own content-moderation policy—that is, each server can implement its own selection algorithm. This is not quite middleware as such, in that a server combines hosting and selection. However, it is much closer than a fully integrated platform would be. Indeed, once it hits a basic baseline of technical competence and reliability, a federated server’s principal differentiator is its moderation policy. So here, too, users who prefer a particular set of policies as listeners have the ability to choose on that basis. This, too, is speech-promoting.

The most careful theorization of this model is Mike Masnick’s Protocols, Not Platforms.158Mike Masnick, Protocols, Not Platforms: A Technological Approach to Free Speech, Knight First Amend. Inst. at Colum. Univ. (Aug. 21, 2019), https://knightcolumbia.org/content/protocols-not-platforms-a-technological-approach-to-free-speech [https://perma.cc/ET69-VQ4E]. Masnick argues that the key move is to separate a platform into a standardized open protocol and a particular proprietary implementation of that protocol. The interoperable nature of the protocol is what ensures that implementations are genuinely competing on the basis of users’ preferences over content, and not just on the lock-in network effects of whichever single platform has the largest userbase. That is, interoperability enables migration, which enables competition, which in turn promotes speech values. Masnick gives a detailed argument for why this model promotes diversity in users’ speech preferences. I would add only that this diversity is primarily diversity of users as listeners.

To finish, I would like to note a type of selection that can come closer to the middleware goal of facilitating listener choice, even within proprietary platforms. Shareable blocklists (a) allow users to make and share a list of users they do not want to see or receive any content from, and (b) allow other users to import and use another’s shared blocklist.159See generally R. Stuart Geiger, Bot-Based Collective Blocklists in Twitter: The Counterpublic Moderation of Harassment in a Networked Public Space, 19 Info. Commc’n & Soc’y 787 (2016). Blocking is a relatively crude form of selection; it does not necessarily work against abusers or spammers who change their identity or use sock puppet accounts, nor does it let through individual worthwhile posts from users who are otherwise blocked. Still, blocklists satisfy the key desideratum: they are listener-controlled filters. Shareable blocklists have been used for email, on Twitter (before X discontinued this feature), and for ad-blocking on the web, among other settings.
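
As a final sketch, shareable blocklists are simple enough to capture in a few lines; the export and import format below (a JSON list of account names) is invented for illustration and does not track any particular platform’s feature. The two operations described in the text—share a list, adopt another’s—map onto the export and import methods.

```python
import json
from typing import Dict, List, Set


class BlocklistClient:
    """A listener-controlled filter: block accounts, export the list, import another's."""

    def __init__(self) -> None:
        self.blocked: Set[str] = set()

    def block(self, account: str) -> None:
        self.blocked.add(account)

    def export_blocklist(self) -> str:
        return json.dumps(sorted(self.blocked))

    def import_blocklist(self, shared: str) -> None:
        # Merge another listener's shared blocklist into our own.
        self.blocked |= set(json.loads(shared))

    def filter_feed(self, feed: List[Dict[str, str]]) -> List[Dict[str, str]]:
        return [post for post in feed if post["author"] not in self.blocked]


if __name__ == "__main__":
    alice, bob = BlocklistClient(), BlocklistClient()
    alice.block("spammer123")
    bob.import_blocklist(alice.export_blocklist())  # Bob adopts Alice's list.
    feed = [{"author": "spammer123", "text": "BUY NOW"}, {"author": "carol", "text": "hi"}]
    print(bob.filter_feed(feed))  # only carol's post remains
```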

Conclusion

Internet media come in different bundles of functions than pre-Internet media did. Offline, broadcast combined transmission and selection in a way that made it appear that there was a natural connection between speakers’ access to a platform and listeners’ interests, and that both were naturally opposed to media intermediaries’ own speech claims. All of this was true enough in that context, given the structural constraints of the broadcast medium.

However, the assumption that listeners and speakers are united against intermediaries is simply not true when applied beyond the broadcast context. Instead, we frequently find that intermediaries are listeners’ allies, providing them with useful assistance in finding and obtaining the speech of interest to them—and that they form a united front against speakers trying to push their speech on unwilling listeners. Applying the broadcast analogy in this context can result in making unwilling listeners into captive audiences, all while claiming that it is necessary in the Orwellian name of listeners’ rights.

Instead, I have argued that to think clearly about speech on the Internet, we must distinguish between the functions of delivering, hosting, and selecting content, and that we must see each of them from listeners’ point of view. In such a setting, carefully drafted neutrality rules on delivering and hosting can be genuinely speech-facilitating because they promote listeners’ choices. In contrast, most attempts to regulate selection interfere with listeners’ choices. There are a few exceptions—structural separation, interoperability and middleware, restrictions on self-preferencing, and chronological feed options—but all of them are about giving listeners genuine choice among selection intermediaries, or about ensuring loyalty within the intermediary-listener relationship. Beyond that, selection intermediaries should largely be free to select as they see fit, and listeners should largely be free to use them or not, as they see fit.

Seeing the Internet from listeners’ perspective is a radical leap. It requires making claims about the nature of speech and about where power lies online, which can seem counterintuitive if you are coming from the standard speaker-oriented First Amendment tradition. But once you have made that leap, and everything has snapped into focus again, it is impossible to unsee.160See Eugene Volokh, Cheap Speech and What It Will Do, 104 Yale L.J. 1805, 1834–36 (1995) (presciently arguing that the Internet will lead to an abundance of speech and shift control over that speech from speakers to listeners).

This is not to say that listeners should always get what they want, any more than speakers should. A democratic self-governance theory of the First Amendment might be acutely concerned that groups of like-minded listeners will wall themselves off inside echo chambers and filter bubbles. This is a powerful argument, and to refute it by appealing to a pure listeners’ choice principle is to beg the question. However, even if a shift to listeners’ perspective cannot resolve the debate between self-governance theories and individual-liberty theories—between collective needs and individual choices—such a shift can still clarify these debates. The fear of echo chambers and filter bubbles is fundamentally a concern about listeners’ choices, not one about speakers’ rights. Focusing on what listeners want, and on the consequences of giving it to them, makes clear what is really at stake. It also sheds light on the tradeoffs involved in adopting one media-policy regime as opposed to another.

Listeners online live in a world where countless chattering speakers vie for their attention using every dishonest and manipulative tactic they can—partisans, fraudsters, advertisers, and spammers of every stripe. Selection intermediaries are listeners’ best, and in some cases their only, line of defense against the cacophony; they can be the only way to tune out the racket and hear what listeners actually want to hear. That role gives intermediaries immense power over listeners, but what listeners need is to moderate that power and tip the balance more in their favor, not to eliminate the intermediaries entirely. Being more protective of platforms’ selection decisions gives us more room to be skeptical of their hosting and delivery decisions; it lets us better distinguish when speakers have legitimate claims against platforms and when they do not.

Listeners are at the center of the First Amendment and more so online than ever before. It is time for First Amendment theory and doctrine to get serious about listeners’ choices among speech on online platforms.

 

98 S. Cal. L. Rev. 1231

* Tessler Family Professor of Digital and Information Law, Cornell Law School and Cornell Tech. I presented an earlier version of this article at The First Amendment and Listener Interests symposium at the University of Southern California on November 8–9, 2024. My thanks to the participants and organizers, and to Aislinn Black, Jane Bambauer, Kat Geddes, Erin Miller, Blake Reid, Benjamin L.W. Sobel, and David Gray Widder. The final published version of this article will be available under a Creative Commons license.

Islands of Algorithmic Integrity: Imagining a Democratic Digital Public Sphere

Introduction

A class of digitally mediated online platforms plays a growing role as the primary sources of Americans’ knowledge about current events and politics. Prominent examples include Facebook, Instagram, TikTok, and X (formerly known as Twitter). While only eighteen percent of Americans cited social media platforms as their preferred source of news in 2024, this number had risen by a striking six points since 2023.1Christopher St. Aubin & Jacob Liedke, News Platform Fact Sheet, Pew Rsch. Ctr. (Sept. 17, 2024), https://www.pewresearch.org/journalism/fact-sheet/news-platform-fact-sheet [https://perma.cc/SJ49-28W6]. These platforms also compete in “one of the most concentrated markets in the United States,”2Caitlin Chin-Rothmann, Meta’s Threads: Effects on Competition in Social Media Markets, Ctr. for Strategic & Int’l Stud. (July 19, 2023), https://www.csis.org/analysis/metas-threads-effects-competition-social-media-markets [https://perma.cc/2MQN-YSUR]. as a consequence of network effects and high barriers to entry.3Id. Current trends suggest that social media will soon outpace traditional news websites as the main source for a plurality of Americans’ understanding of what happens in the world.4St. Aubin & Liedke, supra note 1. Such platforms, which I will call “social platforms” here, are thus in practice a central plank of the political public sphere given their growing role in supplying so many people with news.

The role that social platforms play in public life has sparked a small avalanche of worries even before the extraordinary entanglement of big tech’s corporate leadership with the partisan policy projects of the second Trump administration.5This essay was completed in late 2024 and edited in early 2025. I have not tried here to account for the synergistic entanglement of Elon Musk and the Trump White House, nor for the ways in which the X social platform has changed as a result. It is, as I write, too early to say how this exorbitant display of codependency between partisan and technological projects will alter the American public sphere. The worries are diverse. Many commentators have aired concerns about the effects of social-platform use on mental health and sexual mores,6See, e.g., Surgeon General Issues New Advisory About Effects Social Media Use Has on Youth Mental Health, U.S. Dept. of Health & Human Servs. (May 23, 2023), https://www.hhs.gov/about/news/2023/05/23/surgeon-general-issues-new-advisory-about-effects-social-media-use-has-youth-mental-health.html (noting “ample indicators that social media can also pose a risk of harm to the mental health and well-being of children and adolescents”). or the extent of economic exploitation in this platform-based gig economy.7See, e.g., Veena Dubal, On Algorithmic Wage Discrimination, 123 Colum. L. Rev. 1929, 1944 (2023). These important cultural and economic worries are somewhat distinct from worries surrounding the political functions of the digital public sphere. It is the latter’s pathologies, and only those problems, that this essay—as well as the broader symposium on listeners’ rights in which it participates—concentrates on.

Even within the narrower compass of political speech defined in strict and demotic terms, the role of social platforms raises several distinct concerns. I take up three common lines of criticism and concern here. A first line of critique focuses on these platforms’ alleged harmful effects on a broad set of user beliefs and dispositions thought to be needful for democratic life. Social platforms, it is said, pull apart the electorate by feeding them fake news, fostering filter bubbles, and foreclosing dialogue—to the point where democratic dysfunction drives the nation toward a violent precipice. This first argument concerns platforms’ effects on the public at large.

A second common line of argument, by contrast, makes no claim about the median social platform user. It instead focuses on the “radicaliz[ing]” effect of social media engagement on a small handful of users at the ideological margin.8Steven Lee Myers & Stuart A. Thompson, Racist and Violent Ideas Jump from Web’s Fringes to Mainstream Sites, N.Y. Times (June 1, 2022), https://www.nytimes.com/2022/06/01/technology/fringe-mainstream-social-media.html [https://web.archive.org/web/20250219041047/https://www.nytimes.com/2022/06/01/technology/fringe-mainstream-social-media.html]. If even these few users resort to violence to advance their views, it might be said that social media has had a deadly effect.9Id. This is an argument not about social platforms’ effects on the mass of users, but upon the behavior of a small tail of participants in the online world.

Yet a third sort of argument against social platforms does not sound in a strictly consequentialist register. It does not lean, that is, on any empirical evidence as to how users are changed by their engagement. Rather, it is a moral argument that picks out objectionable features of the relationship between platforms and their users. This plainly asymmetrical arrangement, it is said, allows invidious manipulation, exploitation, or even a species of domination. Even if users’ behaviors do not change, these characteristics of the platform-user relationship are said to be insalubrious. Especially given the role that algorithmic design plays in shaping users’ online experiences, it is argued, a morally problematic imbalance emerges between ordinary people and the companies that manage social platforms. In the limiting case, in which there are few potential sources of information and in which those sources are controlled and even manipulated by their owners (usually men of a certain age who are disdainful of civility and truthfulness norms), an acute concern about domination arises.

If one accepts one of these arguments (and I will try to offer both their best versions and to explore their weaknesses in what follows), then there is some reason to think closely about the way social platforms are governed, and to look for regulatory interventions. Such governance might be supplied by platforms’ own endogenous rules, which are usually embodied in their contractual terms of service or other internal procedures (such as mechanisms to dispute a take-down or deplatforming decision). Alternatively, governance could be supplied by exogenous legislation or regulation promulgated by a state. Private governance and legal regulation, of course, are potential substitutes. They can both be used to achieve the same policy goals. But how? What should such governance efforts, whether private or public, aspire to? And which policy levers are available to achieve those aspirations?

Where a platform employs algorithmic tools to shape users’ experience by determining what they see, the range of potential interventions will be especially large. This is a result of the complexity of common computational architectures today. There are many ways to craft the algorithms on which many platforms run.10See Arvind Narayanan, Understanding Social Media Recommendation Algorithms, Knight First Amend. Inst. 9–12 (March 9, 2023), https://knightcolumbia.org/content/understanding-social-media-recommendation-algorithms [https://perma.cc/9WVD-7NJ6] (discussing common structural elements). And there are many technical choices about which instruments to use, how to calibrate them, and what parameter (engagement? a subset of engagement?) to optimize. Many of these decision points offer opportunities for unavoidably normative choices about the purpose and intended effects of social platforms. Resolving those choices in turn requires some account of what it means exactly to talk about a normatively desirable social platform: That is, what should a social platform do? And for whom?

Such questions take on greater weight given (1) recent regulatory moves by American states to control platforms’ content moderation decisions;11Tyler Breland Valeska, Speech Balkanization, 65 B.C. L. Rev. 903, 905 (2024) (“In 2021 and 2022 alone, state legislators from thirty-four states introduced more than one hundred laws seeking to regulate how platforms moderate user content.”). (2) a recent Supreme Court decision responding to those efforts;12Moody v. NetChoice, LLC, 603 U.S. 707 (2024); see infra text accompanying notes 124–26. and (3) the European Union’s Digital Services Act, a statute that takes yet a different and more indirect tack in modulating platform design and its ensuing costs.13Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and Amending Directive 2000/31/EC (Digital Services Act), 2022 O.J. (L 277) 3 [hereinafter “Digital Services Act”]. Or consider a 2025 U.S. Supreme Court decision, rendered on a tightly expedited schedule, upholding federal legislation banning TikTok.14TikTok Inc. v. Garland, 145 S. Ct. 57, 72 (2025) (per curiam). The legislation in question is the Protecting Americans from Foreign Adversary Controlled Applications Act, Pub. L. No. 118–50, 138 Stat. 955 (2024). The decision makes the remarkable suggestion that legislative control over social platforms—exercised by reshaping (or cutting off) the ordinary market from corporate control (for example, by forcing or by restricting a sale)—raises only weak First Amendment concerns. Applied broadly, such an exception from close constitutional scrutiny might allow broad state control over social platforms.

My main aim in this essay is to offer a new and fruitful analytic lens for thinking about these problems as questions of democratic institutional design. This is a way of approaching the problem of institutional design, not a set of prescriptions for how to do such design. I do so by pointing to a model of a desirable platform, and then asking how we can move toward that aspiration, and how much movement might be impeded or even thwarted. My aspirational model is not conjured out of the ether; rather, I take inspiration from an idea found in the scholarly literatures in political science and sociology that evaluate pathways of economic development. The idea upon which I draw is that development policy should aim to seed “islands of integrity” into patrimonial or nepotistic state structures as a way of building foundations for a more robust—and hence public-regarding—state apparatus.15For examples of the term in recent studies, see Monica Prasad, Proto-Bureaucracies, 9 Socio. Sci. 374, 376 (2022); Eliška Drápalová & Fabrizio Di Mascio, Islands of Good Government: Explaining Successful Corruption Control in Two Spanish Cities, 8 Pol. & Governance 128, 128 (2020). For further discussion, see infra Part II. This literature focuses on the question of how the state (or another interested party, such as a private foundation or an international organization) seeds and nurtures zones where public-regarding norms, not self-regarding or selfish motives, dominate as a means of generating public goods.

By analogy to the examples of effective public administration discussed in this literature, I will suggest here that we should think about public-regarding platforms as “islands of algorithmic integrity” that advance epistemic and deliberative public goods with due regard to the potential for either exploitation or manipulation inherent in the use of sophisticated computational tools. With that threshold understanding in mind, we should then focus on how to achieve that specific, affirmative model—and not simply on how to avoid narrowly-defined and specific platform-related harms. An affirmative ideal, that is, provides a baseline against which potential reform proposals can be evaluated.16I am hence not concerned here with the First Amendment as a template or limit to institutional design. The constitutional jurisprudence of free speech provides a different benchmark for reform. I largely bracket that body of precedent here in favor of an analytic focus on the question of what functionally might be most desirable.

To be very clear up front, this approach has limitations. It draws on the “islands of integrity” literature as a general source of inspiration, not as a source of models that can be directly transposed. I do not think that there is any mechanical way of taking the lessons of development studies and applying them to the quite different virtual environment of social platforms. To the extent lessons emerge, they are at a high level of abstraction. Still, studies of islands of bureaucratic integrity in the wild can offer a useful set of analogies: they point toward the possibility of parallel formations in the online world. They also help us see that there are already significant web-based entities that exemplify certain ideals of algorithmic integrity in practice because they hew to the general lessons falling out of the islands of integrity literature. These studies can illuminate how a more democratically fruitful digital public sphere might begin to be built given our present situation, even if they cannot offer a full blueprint of its ultimate design.

It is worth noting that my analytic approach here rests on an important and controversial assumption. That is, I help myself to the premise that reform of the digital public sphere can proceed first by the cultivation of small-scale sites of healthy democratic engagement and that these can be scaled up. But this assumption may not hold. It may instead be necessary to start with a “big bang”: a dramatic and comprehensive sweep of extant arrangements followed by a completely new architecture of digital space. If, for example, you thought that the problem of social platforms began and ended in their concentrated ownership in the hands of a few bad-spirited people, then the creation of new, more democratic platforms would not necessarily lead to a comprehensive solution. Given disagreement about the basic diagnosis of social platforms’ malady, it is hard to know which of these approaches is more sensible. Therefore, there is some value to exploring a piecemeal reform approach of the sort illuminated here. But that does not rule out the thought that a more robust “big bang” approach is in truth needed.

Part I of this essay begins with a brief survey of the main normative (consequentialist and deontic) critiques that are commonly lodged against social platforms, focusing on the three listed above. In Part II, I introduce the “islands of integrity” lens—briefly summarizing relevant sociological and political science literature—as a means of thinking directly about social platform reforms. My aim in so doing is to provide a litmus test for thinking about social platform reform in the round. With that lens in hand, Part III critically considers the regulatory strategies pursued by the American states and the European Union to date. I suggest some reasons to worry that these are unlikely to advance islands of algorithmic integrity. I close by reflecting on some alternative regulatory tactics that might move us more quickly toward that goal.

I. The Case(s) Against Social Platforms

What is a social platform? Do all such platforms work in the same way and raise the same kinds of normative objections? Or are objections to platforms better understood as training on a subset of cases or applications? This Part lays some groundwork for answering these questions by defining the object of my inquiries and by offering some technical details about different kinds of platforms. I then taxonomize the three different objections that are commonly lodged against social platforms as they currently operate.

A. Defining Social Platforms and Their Algorithms

A “platform” is “a discrete and dynamic arrangement defined by a particular combination of socio-technical and capitalist business practices.”17Paul Langley & Andrew Leyshon, Platform Capitalism: The Intermediation and Capitalisation of Digital Economic Circulation, 3 Fin. & Soc’y 11, 13 (2017). A subset of platforms are understood by their users as distinctively “social” rather than “commercial” insofar as they provide a space for interpersonal interaction, intercalated with other activities such as “reading political news, watching media events, and browsing fashion lines.”18Lisa Rhee, Joseph B. Bayer, David S. Lee & Ozan Kuru, Social by Definition: How Users Define Social Platforms and Why It Matters, Telematics & Informatics, 1, 1 (2020). The leading “social platforms,” as I shall call them here, are Facebook, X, Instagram, and TikTok.19Id. I have added TikTok to the list in the cited text. I use the term “social platforms” because “social media platforms” is overly clunky and merely “platforms” is too vague.

Not all social platforms propagate content in the same way. There are two dominant kinds of system architecture. The first is the social network, where users see posts by other users whom they follow (or subscribe to) as well as posts those users chose to amplify.20Narayanan, supra note 10, at 10. When Facebook and Twitter allowed users to reshare or retweet posts, they enabled the emergence of networks of this sort.21Id. Note that before these sharing affordances existed, such services had only a limited capacity to propagate content across the network. Here, what one sees depends on who one “knows.” Interconnected webs of users on a network can experience “information cascades” as information flows rapidly across the system.22Id. This is known colloquially as “going viral.” The possibility of virality depends not just on platform design but also on users’ behaviors. But in practice, a very small number of posts go viral on social networks.23Id. at 15. Attention is a scarce commodity. We cannot and do not absorb most of what’s posted online, which means that only a few items can ever achieve virality.
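
A toy sketch, with invented accounts and posts, captures the network-propagation logic just described: what a user sees is determined by the follow graph, and amplification by a followed account can surface content from someone the user does not follow at all.

```python
from typing import Dict, List, Tuple

# Who follows whom (the social graph).
follows: Dict[str, List[str]] = {"dana": ["alice", "bob"]}

# Original posts and reshares: (author, text) and (resharer, original_author, text).
posts: List[Tuple[str, str]] = [("alice", "Budget vote tonight"), ("carol", "An obscure hot take")]
reshares: List[Tuple[str, str, str]] = [("bob", "carol", "An obscure hot take")]


def network_feed(user: str) -> List[str]:
    """A user sees posts from accounts they follow, plus posts those accounts reshared."""
    followed = set(follows.get(user, []))
    feed = [text for author, text in posts if author in followed]
    feed += [text for resharer, _, text in reshares if resharer in followed]
    return feed


# Dana does not follow carol, but sees carol's post because bob amplified it.
print(network_feed("dana"))
```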

The second possible architecture is centered around an algorithm (or, more accurately, algorithms). On platforms of this sort, the stream of data observed by a user is largely shaped by a suite of complex algorithms, which are computational decisional tools that proceed through a series of steps to solve a problem. These algorithms, in the aggregate, are designed with certain goals in mind, such as maximizing the time users spend on the platform.24Id. at 10. Networks require both content processing tools (e.g., face recognition, transcription, and image filters) and also content propagation tools (e.g., search, recommendation, and content moderation). Id. at 8. I am largely concerned here with content propagation tools. TikTok’s “For You Page,” Google Discover, and YouTube all rely at least in part on algorithms.25Id. at 11.

In practice, what is for the sake of simplicity called “the algorithm” can be disaggregated into several different design elements, each of which is in truth a distinct algorithm or digital artifact. These include (1) the “surfaces of exposure” (that is, the visual interface encountered by users); (2) a primary ranking model (often a two-stage recommender system that combs through and filters potential posts); (3) peripheral models, which rank content that appears around the main surface of exposure (for example, ads); and (4) auxiliary models (for example, content moderation for illegal materials or posts that violate terms of service).26Kristian Lum & Tomo Lazovich, The Myth of the Algorithm: A System-Level View of Algorithmic Amplification, Knight First Amend. Inst. (Sept. 13, 2023), https://knightcolumbia.org/content/the-myth-of-the-algorithm-a-system-level-view-of-algorithmic-amplification [https://perma.cc/4WBQ-34WN]. For the sake of simplicity, I will refer to them together only as “the algorithm,” but it is worth keeping in mind that this is a simplification, and in fact there are multiple instruments at stake.
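
The disaggregation can be made concrete with a schematic pipeline. Everything below is a deliberately toy rendering of the four elements just described—candidate generation and ranking standing in for the primary two-stage recommender, plus auxiliary moderation, peripheral ad ranking, and a final assembled surface—and none of it reflects any platform’s actual code.

```python
from typing import Dict, List


def candidate_generation(pool: List[Dict], k: int = 100) -> List[Dict]:
    # First stage of a two-stage recommender: cheaply narrow the full pool.
    return pool[:k]


def primary_ranking(candidates: List[Dict]) -> List[Dict]:
    # Second stage: score the survivors with a (here trivially simple) model.
    return sorted(candidates, key=lambda p: p["predicted_engagement"], reverse=True)


def auxiliary_moderation(ranked: List[Dict]) -> List[Dict]:
    # Auxiliary models: drop content flagged as violating platform rules.
    return [p for p in ranked if not p.get("flagged", False)]


def peripheral_ranking(ads: List[Dict]) -> List[Dict]:
    # Peripheral models: order the ads that appear around the main feed.
    return sorted(ads, key=lambda a: a["bid"], reverse=True)


def surface_of_exposure(ranked_posts: List[Dict], ranked_ads: List[Dict]) -> List[Dict]:
    # The visual surface: a few top-ranked posts interleaved with the top ad.
    return ranked_posts[:3] + ranked_ads[:1]


if __name__ == "__main__":
    pool = [{"id": i, "predicted_engagement": i % 7, "flagged": i == 3} for i in range(10)]
    ads = [{"id": "ad-1", "bid": 0.4}, {"id": "ad-2", "bid": 0.9}]
    ranked = auxiliary_moderation(primary_ranking(candidate_generation(pool)))
    print(surface_of_exposure(ranked, peripheral_ranking(ads)))
```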

Algorithm design implicates many choices. At the top level, for example, an algorithmic model can be braided into a network model or integrated into a subscription-service model.27Narayanan, supra note 10, at 10–11 (“[N]o platform implements a purely algorithmic model . . . .”). At a more granular level, algorithms can be designed to optimize a broad range of varied parameters. These range from “meaningful social interactions” (Facebook’s measure at one point in time) to users’ watch time (YouTube’s measure) to a combination of liking, commenting, and watching frequencies (TikTok’s measure).28Id. at 19. The choice of parameter to optimize is important. Most common parameters quantify some element of users’ engagement with the platform, but they do so in different ways. Engagement measures are relevant from the platforms’ perspectives given their economic reliance on the revenue from advertising displayed to users.29For a useful account of the behavioral advertising industry, see generally Tim Hwang, Subprime Attention Crisis (2020). In theory, more engagement means more advertising revenue. But engagement on social platforms is surprisingly sparse. Only somewhere between one percent and five percent of posts on most social platforms generate any engagement at all.30Narayanan, supra note 10, at 28. And the movement from engagement to advertising is rarer still: most targeted online advertising is simply “ignored.”31Hwang, supra note 29, at 77; accord Narayanan, supra note 10, at 29.
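
To see why the choice of optimization parameter matters, consider a toy example with invented engagement signals and invented weights: the same two posts rank in opposite orders depending on whether the system optimizes watch time or a weighted count of interactions.

```python
from typing import Dict, List

# Hypothetical per-post engagement signals (the numbers are invented).
posts: List[Dict[str, float]] = [
    {"id": 1, "likes": 50, "comments": 2, "watch_seconds": 10},
    {"id": 2, "likes": 5, "comments": 1, "watch_seconds": 300},
]


def score_watch_time(p: Dict[str, float]) -> float:
    return p["watch_seconds"]


def score_interactions(p: Dict[str, float]) -> float:
    # Invented weights: comments count double relative to likes.
    return 2.0 * p["comments"] + 1.0 * p["likes"]


# The same content pool, ranked differently depending on what is optimized.
print([p["id"] for p in sorted(posts, key=score_watch_time, reverse=True)])    # [2, 1]
print([p["id"] for p in sorted(posts, key=score_interactions, reverse=True)])  # [1, 2]
```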

B. Consequentialist Critiques of Social Platforms

There are, as I read the literature, three clusters of normative concerns raised by social platforms that merit consideration as the most important and common criticisms made of those technologies.32I recognize that there are complaints beyond those that I adumbrate here. I have selected those that seem to me supported by evidence and a coherent moral theory. I have ignored those wanting in such necessary ballast. Two are consequentialist, in the sense of training on allegedly undesirable effects of social platforms. Of course, such arguments need some means of evaluating downstream effects as either desirable or undesirable. In practice, they rest on some account of democracy as an attractive—even ideal—political order. (Note that as is often the case in legal scholarship, the precise kind of “democracy” at work in these critiques is not always fully specified. This lack of specification is a gap that will prove relevant in the analysis that follows.)33For an illuminating recent discussion on the varieties of democratic theory, see generally Jason Brennan & Hélène Landemore, Debating Democracy: Do We Need More or Less? (2021). The other cluster is deontic, in the sense of picking out intrinsically unattractive qualities of social platforms. These accounts do not rely on a causal claim about the effects of social platforms; they instead assert the prima facie unacceptability of platforms in themselves.

Let’s begin with the two consequentialist arguments and then move on to the deontic critique.

A first view widely held in both the academic and non-academic public spheres is that social platforms cause political dysfunction in a democracy because of their effects on the dispositions and beliefs of the general public.34See, e.g., Helen Margetts, Rethinking Democracy with Social Media, 90 The Pol. Q., Jan. 2019, 107, at 107 (assigning blame to social media for “pollution of the democratic environment through fake news, junk science, computational propaganda and aggressive microtargeting and political advertising”; for “creating political filter bubbles”; and for “the rise of populism, . . . the end of democracy and ultimately, the death of democracy.”). Using social platforms, this argument goes, either (1) drives a dynamic of “affective polarization” (negative emotional attitudes towards members of opposition parties) or (2) traps us in “echo chambers” or filter bubbles characterized by limited, biased information.35Jonathan Haidt, Yes, Social Media Really Is Undermining Democracy, The Atlantic (July 28, 2022), https://www.theatlantic.com/ideas/archive/2022/07/social-media-harm-facebook-meta-response/670975 [https://perma.cc/7FFV-QRPB]. Social media users are also said to be exposed to “fake news,” which is “fabricated information that mimics news media content in form but not in organizational process or intent.”36David M. J. Lazer, Matthew A. Baum, Yochai Benkler, Adam J. Berinsky, Kelly M. Greenhill, Filippo Menczer, Miriam J. Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild, Michael Schudson, Steven A. Sloman, Cass R. Sunstein, Emily A. Thorson, Duncan J. Watts & Jonathan L. Zittrain, The Science of Fake News: Addressing Fake News Requires a Multidisciplinary Effort, 359 Sci. 1094, 1094 (2018); see also Edson C. Tandoc Jr., The Facts of Fake News: A Research Review, Soc. Compass, July 25, 2019, at 1, 2 (“[Fake news] is intended to deceive people, and it does so by trying to look like real news.”). For examples, see Aziz Z. Huq, Militant Democracy Comes to the Metaverse?, 72 Emory L.J. 1105, 1118–19 (2023). The terms “misinformation” and “disinformation” are also used to describe fake news and its variants. I leave aside questions about how to exactly define and distinguish these terms. High levels of exposure are said to be driven by algorithmic amplification.37See, e.g., Haidt, supra note 35; Zeynep Tufekci, Algorithmic Harms Beyond Facebook and Google: Emergent Challenges of Computational Agency, 13 Colo. Tech. L.J. 203, 215 (2015) (criticizing Facebook for its power to “alter the U.S. electoral turnout” through algorithmic manipulation). Recent advances in deep-fake-creation tools have further spurred worries about an “information apocalypse” that destroys “public trust in information and the media.”38Mateusz Łabuz & Christopher Nehring, On the Way to Deep Fake Democracy? Deep Fakes in Election Campaigns in 2023, 23 Eur. Pol. Sci. 454, 457 (2024). Platforms, in this view, foster a world in which citizens lack a shared reservoir of mutual tolerance and factual beliefs about the world. Such deficiencies are said to render meaningful political debate on social platforms challenging—perhaps even impossible. As a result of these changes in people’s dispositions, the possibility of democratic life moves out of reach.

These arguments hence assume that democratic life requires the prevalence of certain attitudes and beliefs in order to be durably sustained (an assumption that may or may not be empirically justified). Another way in which these concerns can concretely be understood is to view them in light of the rise of anti-system parties,39Giovanni Capoccia, Anti-System Parties: A Conceptual Reassessment, 14 J. Theoretical Pol. 9, 10–11 (2002) (offering several different definitions of that term). which are characterized by their limited regard for democratic norms. Platforms might facilitate the growth of such anti-system candidates who disrupt or even undermine democratic norms such as broad trust in the state and in co-citizens. Through this indirect path, platforms have a detrimental effect on democracy’s prospects.

There are surprisingly few empirical studies that support the existence of a robust causal connection between social platforms and democratically necessary trust.40There is one experiment focused on search ranking that finds political effects, but the experiment is more than a decade old and focuses on how search results are displayed, not on the central issue of platform design today. Robert Epstein & Ronald E. Robertson, The Search Engine Manipulation Effect (SEME) and Its Possible Impact on the Outcomes of Elections, 112 Proc. Nat’l Acad. Sci. E4512, E4518–20 (2015). Yet some evidence for it can be found in the behaviors and beliefs of significant political actors. President Donald Trump, for example, declared in November 2016 that Facebook and Twitter had “helped him win” the 2016 U.S. presidential election.41Rich McCormick, Donald Trump Says Facebook and Twitter ‘Helped Him Win’, The Verge (Nov. 13, 2016, 7:02 PM PST), https://www.theverge.com/2016/11/13/13619148/trump-facebook-twitter-helped-win [https://perma.cc/5MUQ-7R73]. Since 2020, conservative donors such as the Bradley Impact Fund and the Conservative Partnership Fund have contributed millions to Republican-aligned groups combating efforts to “take a tougher line against misinformation online.”42Jim Rutenberg & Steven Lee Myers, How Trump’s Allies Are Winning the War Over Disinformation, N.Y. Times, https://www.nytimes.com/2024/03/17/us/politics/trump-disinformation-2024-social-media.html [https://web.archive.org/web/20250401001211/https://www.nytimes.com/2024/03/17/us/politics/trump-disinformation-2024-social-media.html]. Such significant financial investments by important political actors, which go beyond mere cheap talk, suggest that social platforms do have predictable partisan effects for candidates and parties that have an arguable anti-systemic orientation.43A mea culpa: in previous work, I was too credulous in respect to claims of platform-related harms. Huq, supra note 36, at 1118–19. I should have been more cautious.

On the other hand, well-designed empirical studies have cast doubt on the negative, large-“N” effects of social platforms.44For a prescient popular argument to that effect, see Gideon Lewis-Kraus, How Harmful Is Social Media?, New Yorker (June 3, 2022), https://www.newyorker.com/culture/annals-of-inquiry/we-know-less-about-social-media-than-we-think [https://perma.cc/7FFV-QRPB]. Several studies are illustrative. A first randomized experiment, which tested the effect of platform deactivation for several weeks before the 2020 election, found no statistically significant effects of platform exposure on affective polarization, issue polarization, or vote choice.45The study found a non-significant pro-Trump effect from Facebook usage but cautioned against treating this finding as generalizable. Hunt Allcott, Matthew Gentzkow, Winter Mason, Arjun Wilkins, Pablo Barberá, Taylor Brown, Juan Carlos Cisneros, Adriana Crespo-Tenorio, Drew Dimmery, Deen Freelon, Sandra González-Bailón, Andrew M. Guess, Young Mie Kim, David Lazer, Neil Malhotra, Devra Moehler, Sameer Nair-Desai, Houda Nait El Barj, Brendan Nyhan, Ana Carolina Paixao de Queiroz, Jennifer Pan, Jaime Settle, Emily Thorson, Rebekah Tromble, Carlos Velasco Rivera, Benjamin Wittenbrink, Magdalena Wojcieszak, Saam Zahedian, Annie Franco, Chad Kiewiet de Jonge, Natalie Jomini Stroud & Joshua A. Tucker, The Effects of Facebook and Instagram on the 2020 Election: A Deactivation Experiment, 121 Proc. Nat’l Acad. Sci., 1, 8–9 (2024). A second randomized experiment focused on the difference between Facebook’s default algorithmic feed and a reverse-chronological feed. Again, the study found no effect on affective polarization, issue polarization, or political knowledge after users were switched from the algorithmically ranked feed to a reverse-chronological one, even though the reverse-chronological feed increased the amount of “untrustworthy” content seen.46Andrew M. Guess, Neil Malhotra, Jennifer Pan, Pablo Barberá, Hunt Allcott, Taylor Brown, Adriana Crespo-Tenorio, Drew Dimmery, Deen Freelon, Matthew Gentzkow, Sandra González-Bailón, Edward Kennedy, Young Mie Kim, David Lazer, Devra Moehler, Brendan Nyhan, Carlos Velasco Rivera, Jaime Settle, Daniel Robert Thomas, Emily Thorson, Rebekah Tromble, Arjun Wilkins, Magdalena Wojcieszak, Beixian Xiong, Chad Kiewiet de Jonge, Annie Franco, Winter Mason, Natalie Jomini Stroud & Joshua A. Tucker, How Do Social Media Feed Algorithms Affect Attitudes and Behavior in an Election Campaign?, 381 Sci. 398, 402 (2023). This null finding about algorithmic content propagation has been replicated in a separate study of YouTube.47Homa Hosseinmardi, Amir Ghasemian, Aaron Clauset, Markus Mobius, David M. Rothschild & Duncan J. Watts, Examining the Consumption of Radical Content on YouTube, 118 Proc. Nat’l Acad. Sci., 1, 1 (2021).

A further empirical inquiry into exposure to fake news found only a very small positive effect on the vote share of populist candidates in European elections.48Michele Cantarella, Nicolò Fraccaroli & Roberto Volpe, Does Fake News Affect Voting Behaviour?, Rsch. Pol’y, Jan. 2023, at 1, 2. Another study of 1,500 users in each of three countries (France, the United Kingdom, and the United States) identified no correlation between social platform use and more extreme right-wing views; indeed, in the United States, it found a negative correlation.49Shelley Boulianne, Karolina Koc-Michalska & Bruce Bimber, Right-Wing Populism, Social Media and Echo Chambers in Western Democracies, 22 New Media & Soc’y 683, 695 (2020). The authors concluded that their “findings tend to exonerate the Internet generally and social media in particular, at least with respect to right-wing populism.”50Id. Finally, a 2017 study found that President Trump erred when he claimed that Facebook and Twitter helped him in the 2016 election; again, that study found a negative correlation between more extreme right-wing views and social platform usage.51Jacob Groshek & Karolina Koc-Michalska, Helping Populism Win? Social Media Use, Filter Bubbles, and Support for Populist Presidential Candidates in the 2016 US Election Campaign, 20 Info., Commc’n & Soc’y 1389, 1397 (2017) (“American voters who used social media to actively participate in politics by posting their own thoughts and sharing or commenting on social media were actually more likely to not support Trump as a candidate.”).

Summarizing the available research (including these studies) in a June 2024 issue of Nature, a team of respected scholars concluded that “exposure to misinformation is low as a percentage of people’s information diets” and further that “the existence of large algorithmic effects on people’s information diets and attitudes has not yet been established.”52Ceren Budak, Brendan Nyhan, David M. Rothschild, Emily Thorson & Duncan J. Watts, Misunderstanding the Harms of Online Misinformation, 630 Nature 45, 47–48 (2024); accord Sacha Altay, Manon Berriche & Alberto Acerbi, Misinformation on Misinformation: Conceptual and Methodological Challenges, Soc. Media + Soc’y, Jan.–Mar. 2023, at 1, 3 (“Misinformation receives little online attention compared to reliable news, and, in turn, reliable news receives little online attention compared to everything else that people do.”). The Nature team warned that the extent to which social platforms undermine political knowledge depends on the availability of other news sources. Where countries “lack reliable mainstream news outlets,” platforms’ negative knowledge-related spillovers may be greater.53Budak et al., supra note 52, at 49. I do not pursue that suggestion here, since it invites a bifurcated analysis that separately considers different national jurisdictions, depending on the robustness of their non-digital media ecosystems. What follows should be taken as parochially relevant to North American and European democracies (at least for now) but not the larger world beyond that.

A second view of social platforms’ harms identifies not their spillovers at scale, but rather their effects on certain narrow slices of the population—in particular, those at the tails of the ideological distribution. The intuition here is that engagement with social platforms may not change the dispositions or beliefs of most people, but there is a small subset of individuals who adopt dramatically divergent beliefs (and even behaviors) as consequences of their platform use. “Tail effects” of this sort may not be significant for democratic life under some circumstances, but they matter here because there is some evidence of increased support for political violence among Americans.54At least some surveys suggest rising levels of positive attitudes to violence. See Ashley Lopez, More Americans Say They Support Political Violence Ahead of the 2024 Election, NPR, https://www.npr.org/2023/10/25/1208373493/political-violence-democracy-2024-presidential-election-extremism [https://perma.cc/ZM4L-BRRV]. For other findings exhibiting a concentration of such support at the rightward end of the political spectrum, see Miles T. Armaly & Adam M. Enders, Who Supports Political Violence?, 22 Persp. on Pol. 427, 440 (2024). In this context, extremism at the tails may have profound consequences. At a moment when President Trump has (twice) faced near-assassination during the 2024 presidential election cycle, and considering how his supporters previously precipitated a deadly confrontation at a 2021 Joint Session of Congress meant to count Electoral College votes, it seems prudent to reckon with the risk that radicalized individuals—even if few in number—may be able to inflict disproportionate harms on institutions that are necessary for core democratic political processes.

This more narrowly gauged claim stands on firmer empirical ground than the critiques of social platforms’ large-“N” effects discussed above. A 2024 study of fake news’ circulation on Twitter found that 0.3 percent of users account for four-fifths of its fake news volume.55Sahar Baribi-Bartov, Briony Swire-Thompson & Nir Grinberg, Supersharers of Fake News on Twitter, 384 Sci. 979, 980 (2024). These “supersharers,” who tended to be older, female, and Republican, in turn reached a “sizable 5.2% of registered voters on the platform.”56Id. at 979. Note that this is not necessarily the population one would expect to engage in political violence. A different study published around the same time also found “asymmetric . . . political news segregation” with “far more homogenously conservative domains and URLs circulating on Facebook” and “a far larger share” of fake news on the political right.57Sandra González-Bailón, David Lazer, Pablo Barberá, Meiqing Zhang, Hunt Allcott, Taylor Brown, Adriana Crespo-Tenorio, Deen Freelon, Matthew Gentzkow, Andrew M. Guess, Shanto Iyengar, Young Mie Kim, Neil Malhotra, Devra Moehler, Brendan Nyhan, Jennifer Pan, Carlos Velasco Rivera, Jaime Settle, Emily Thorson, Rebekah Tromble, Arjun Wilkins, Magdalena Wojcieszak, Chad Kiewiet de Jonge, Annie Franco, Winter Mason, Natalie Jomini Stroud & Joshua A. Tucker, Asymmetric Ideological Segregation in Exposure to Political News on Facebook, 381 Sci. 392, 397 (2023).

Such findings are consistent with wider-angle studies of partisan polarization, which find different microfoundations on the political left and right.58Craig M. Rawlings, Becoming an Ideologue: Social Sorting and the Microfoundations of Polarization, 9 Socio. Sci. 313, 337 (2022). The Nature team mentioned above hence concluded that exposure to misinformation is “concentrated among a small minority.”59Budak et al., supra note 52, at 48. Those who consume false or otherwise potentially harmful content are already attuned to such information and actively seek such content out.60Id. Platforms, however, do not release “tail exposure metrics” that could help quantify the risk of harm from such online interactions.61Id. at 50; see also Vivian Ferrillo, r/The_Donald Had a Forum: How Socialization in Far-Right Social Media Communities Shapes Identity and Spreads Extreme Rhetoric, 52 Am. Pol. Rsch. 432, 443 (2024) (finding that users who engage often with a far-right community also use far-right vocabulary more frequently in other spaces on their platform, contributing to the spread and normalization of far-right rhetoric). As a result, it is hard to know how serious the problem may be.

What of the concern that social platforms conduce to “filter bubbles” that constrain the range of information sources users can access in problematic ways?62For an influential treatment of the topic, see generally Eli Pariser, The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think (2012). Once again, the evidence is at best inconclusive. A 2016 study found that social homogeneity of users predicted the emergence of echo chambers characterized by asymmetrical patterns of news sharing.63Michela Del Vicario, Alessandro Bessi, Fabiana Zollo, Fabio Petroni, Antonio Scala, Guido Caldarelli, H. Eugene Stanley & Walter Quattrociocchi, The Spreading of Misinformation Online, 113 Proc. Nat’l Acad. Sci. 554, 558 (2016). At the same time, the study offered no empirical evidence about the extent or effects of filter bubbles “in the wild,” so to speak. A 2021 review identified divergent results in studies surveying human users of social platforms or digital trace data; yet, it identified only a handful of studies substantiating the concern.64Ludovic Terren & Rosa Borge, Echo Chambers on Social Media: A Systematic Review of the Literature, 9 Rev. Commc’n Rsch. 99, 110 (2021) (reviewing fifty-five studies and finding only five yielding no evidence of echo chambers). A 2022 meta-study found that “most people have relatively diverse media diets,” and only “small minorities, often only a few percent, exclusively get news from partisan sources.”65Amy Ross Arguedas, Craig T. Robertson, Richard Fletcher & Rasmus K. Nielsen, Echo Chambers, Filter Bubbles, and Polarisation: A Literature Review 4 (2022), available at https://ora.ox.ac.uk/objects/uuid:6e357e97-7b16-450a-a827-a92c93729a08. Again, the empirical foundations of the normative worry here seem shaky.

Even if the evidence for filter bubbles were more robust, their existence would not necessarily be cause for concern. Concern about filter bubbles focuses on the asymmetric character of the information voters consume; this concern assumes that there is a counterfactual condition under which the voter might receive a “balanced” diet of information. But what does it mean to say that a person’s news inputs are balanced or symmetrical? Does it require equal shares of data that support Republican and Democratic talking points? What if one of those parties is more likely than the other to lean on false empirical claims? Should a balanced informational diet reflect or discount for such a lean? How are the problems of misinformation or distorted information to be addressed? Is it part of a balanced informational diet to receive a certain amount of “fake news”? These questions admit of no easy answers. Rather, they suggest that the concern with filter bubbles trades on a notion of balance that is hard to cash out in practice without difficult anterior ideological and political choices.

In brief, the available empirics suggest that consequentialist critiques of social platforms are better focused on tail effects than on the way platform engagement changes the median user or the mass of users. It is also worth underscoring a point that is somewhat obscured by the bottom-line results of these studies but implicit in what I have just set out. That is, the tail effects of social platforms arise from a complex and unpredictable mesh of interactions between technical design decisions and users’ decisions. The external political environment hence shapes platforms’ spillover effects, and when that environment is more polarized and more prone to panics or even violence, it seems likely that the tail risks of social platforms would correspondingly rise. When, by contrast, there is a plethora of reliable, accurate, and easily accessible non-digital sources, the threat to democratic life from social platforms may well be far less acute.

C. Deontic Critiques of Social Platforms

Critiques of social platforms do not need to rest on evidence of their consequences. It is also possible to pick out features of the relationship between platforms and users as morally problematic even in the absence of any harm arising. Two particular strands of such “deontic” critique can be traced in existing literature.

First, social platforms (among other entities) gather data about their users and then use that data to target advertisements to those same users. For many, this circular pattern of data extraction and deployment constitutes a morally problematic exploitation. Such exploitation occurs when “one party to an ostensibly voluntary agreement intentionally takes advantage of a relevant and significant asymmetry of knowledge, power, or resources” to offer otherwise unacceptable contracting terms.66Claire Benn & Seth Lazar, What’s Wrong with Automated Influence, 52 Canadian J. Phil. 125, 135 (2022).

Shoshana Zuboff, who is perhaps the leading expositor of this view, argues that platforms have “scraped, torn, and taken for another century’s market project” the very stuff of “human nature.”67Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power 94 (2019). She condemns the “rendition” and “dispossession of human experience” through “datafication.”68Id. at 233–34. Zuboff’s critique of platform exploitation is nested in a broader set of concerns about the presently hegemonic form of “informational” or “financial” capitalism. Reviewing Zuboff’s book, Amy Kapczynski thus asserts that “informational capitalism brings a threat not merely to our individual subjectivities but to our ability to self-govern.”69Amy Kapczynski, The Law of Informational Capitalism, 129 Yale L.J. 1460, 1467 (2020). Similarly, danah boyd characterizes private firms’ use of digital power as a malign manifestation of “late-stage capitalism . . . driven by financialization.”70danah boyd, The Structuring Work of Algorithms, 152 Dædalus 236, 238 (2023). And as Katharina Pistor puts it, “[t]he real threat that emanates from Big Tech using big data is not just market dominance . . . [but] the power to transform free contracting and markets into a controlled space that gives a huge advantage to sellers over buyers.”71Katharina Pistor, Rule by Data: The End of Markets?, 83 Law & Contemp. Probs. 101, 117 (2020); accord Julie E. Cohen, Law for the Platform Economy, 51 U.C. Davis L. Rev. 133, 145–48 (2017). The structure of financial or quasi-financial transactions on social platforms, in this view, conduces systemically to users’ exploitation.

In an earlier piece, I expressed sharp skepticism about the empirical and normative arguments offered by Zuboff and Kapczynski.72Mariano-Florentino Cuéllar & Aziz Z. Huq, Economies of Surveillance, 133 Harv. L. Rev. 1280, 1298 (2020). Their concerns about exploitation seem to trade on imprecise and potentially misleading analogies to more familiar and normatively troubling forms of economic exploitation, despite meaningful differences in structure and immediate effect. Indeed, these analogies fail to take those differences seriously. More generally, their arguments borrow a suite of concerns associated with the larger structures of economic life labeled “neoliberalism,” which have developed since the 1970s. Such critiques of neoliberalism, however, concern aspects of economic life that have little to do with social platforms (for example, deregulation and financialization). One can have neoliberalism with or without social platforms. I see little analytic gain in combining these very different lines of argument aimed at quite distinct targets, and no reason to invite confusion by mushing together distinct phenomena in pursuit of guilt by association.

Second, the concern about exploitation overlaps with a distinct worry grounded in the ideal of non-domination. Claire Benn and Seth Lazar capture this possibility in their argument that social platforms might compromise an intrinsic, non-instrumental "value of living in societies that are free and equal."73Benn & Lazar, supra note 66, at 133. They argue that the public is necessarily ignorant about "tech companies’ control of the means of prediction" and so has "no viable way of legitimating these new power relations."74Id. at 137. But the empirical premise of this argument—widespread public ignorance about predictive tools—seems shaky: As empirical studies of fake news and political distortion show, there is publicly available knowledge about many salient effects of social platforms. To the extent that the public misconstrues those effects, moreover, it likely overestimates their magnitude.75See supra notes 35 and 37 for examples of such overestimation. I hardly think these critiques are secret.

Still, I think Benn and Lazar are on to something useful when they identify the fact of corporate control as morally salient. Social platforms stand in an asymmetrical relation to the general public because of (1) knowledge asymmetries enabled by the corporate form; (2) collective action problems implicit in the one-to-many relation of firms to consumers; and (3) ideological effects (for example, false beliefs in the necessity of unregulated digital markets for economic growth). As a consequence of these dynamics, social platforms exercise a certain kind of unilateral power over the public. Such power might be especially worrying if it is concentrated in the hands of a limited number of people—and if these people have close connections to those in high state office (with the Musk/Trump relationship offering an obvious, highly salient example). This slate of worries comes sharply into play whenever platforms comprise an important part of the democratic public sphere. Under these conditions, Benn and Lazar argue, platforms ought not merely to prevent negative consequences for democratic politics; they must also ensure "that content promotion is regulated by epistemic ideals."76Benn & Lazar, supra note 66, at 144. This entails, in their view, a measure of "epistemic paternalism."77Id. At present, however, any such paternalism rests on platforms’ unilateral, and effectively unconstrained, judgments about interface and algorithmic design.

This deontic argument can also be stated in the terms of Philip Pettit’s influential theory of republican freedom. On Pettit’s account, an individual wields dominating power over another if the former has the capacity to interfere in certain choices of the latter on an arbitrary basis.78Philip Pettit, Republicanism: A Theory of Freedom and Government 52 (1997). Pettit asserts that this arbitrariness condition is satisfied when the interfering agent’s actions are subject only to that agent’s own arbitrium—its will or judgment—and when the agent is not "forced to track the interests and ideas of the person suffering the interference."79Id. at 55. For example, a person ranked by law as a slave remains unfree even if their master always acts with their interests in mind. Even when an arbitrary legal relationship is exercised in a beneficent fashion with the interest of the weaker party in mind, Pettit suggests that there is a displacement of the subject’s "involvement, leaving [them] subject to relatively predictable and perhaps even beneficial forms of power that nevertheless ‘stifle’ and ‘stultify.’ "80Patchen Markell, The Insufficiency of Non-Domination, 36 Pol. Theory 9, 12 (2008). To be clear, Markell here is criticizing and extending Pettit’s account.

Yasmin Dawood has fruitfully deployed Pettit’s framework for thinking about the abuse of public power in democratic contexts.81Yasmin Dawood, The Antidomination Model and the Judicial Oversight of Democracy, 96 Geo. L.J. 1411, 1431 (2008). Her conceptual framing, moreover, could be extended to private actors such as social platforms without too much difficulty. For instance, one might view the exercise of extensive control over the online informational environment as a species of domination, whether exercised in a malign or a paternalistic direction. That idea might be rendered more precise by drawing on work by Moritz Hardt, Meena Jagadeesan, and Celestine Mendler-Dünner that defines the "performative power" of an algorithmic instrument as a numerical parameter capturing "how much participants change in response to actions by the platform, such as updating a predictive model."82Moritz Hardt, Meena Jagadeesan & Celestine Mendler-Dünner, Performative Power, 2022 NIPS ’22: Proc. of the 36th Int’l Conf. on Neural Info. Processing Sys. 2. This concept of "performative power" usefully captures the way that platforms "steer" populations.83Id. at 5–6. As such, it offers a way of understanding and measuring "domination" on social platforms more precisely.
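
By way of illustration only, and not as a restatement of Hardt, Jagadeesan, and Mendler-Dünner's formal definition, the following sketch shows one crude way such a parameter could be estimated empirically: measure how much each user's behavior profile shifts after the platform changes its ranking model, and average those shifts. The function name, the toy data, and the choice of norm are all hypothetical.

```python
# Illustrative sketch (not the authors' formal definition): a crude empirical
# proxy for "performative power," i.e., the average amount by which users'
# behavior shifts in response to a platform's change to its recommender model.
# All names and numbers here are hypothetical.

import numpy as np

def performative_power_proxy(behavior_before: np.ndarray,
                             behavior_after: np.ndarray) -> float:
    """Mean per-user shift in behavior across a platform intervention.

    Each row is one user's behavior profile (e.g., shares of attention across
    content categories) measured before and after the platform updates its
    ranking model. Larger values suggest a greater capacity to "steer" users.
    """
    shifts = np.linalg.norm(behavior_after - behavior_before, axis=1)
    return float(shifts.mean())

# Toy example: three users, attention split across four content categories.
before = np.array([[0.40, 0.30, 0.20, 0.10],
                   [0.25, 0.25, 0.25, 0.25],
                   [0.10, 0.10, 0.40, 0.40]])
after = np.array([[0.60, 0.20, 0.10, 0.10],
                  [0.40, 0.20, 0.20, 0.20],
                  [0.20, 0.10, 0.35, 0.35]])
print(performative_power_proxy(before, after))  # larger => more steering
```

On this rough proxy, a larger number indicates a greater capacity to steer the user population, which is the sense of "domination" the text above has in view.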

In setting out these two kinds of deontic critiques of social platforms, I thus suggest that there are plausible grounds for worry about social platforms, even absent robust empirical findings of spillovers onto users’ beliefs and dispositions. I recognize that both the exploitation and the domination critiques rest on further moral premises that I have neither spelled out in full nor tried to substantiate. I sketch both deontic arguments here to show that there are minimally plausible non-consequentialist grounds for concern about the structure and operation of social platforms, and to allow readers to make their own judgments.

D. Making a Better Case Against Social Platforms

Social platforms have become scapegoats of sorts for many of the ills that democratic polities are now experiencing. But the available evidence suggests that many of these critiques miss the mark. For many people, platforms simply do not play a very large or dominant epistemic role (although this may well change in the near future). They also seem not to have the polarizing and epistemically distorting effects many bemoan.

That is not to say, however, that there is no reason for concern. Consequentialist worries about the behavior of users on the tails of the ideological distribution, as well as deontic worries about exploitation or domination, point toward the need for reform. Of course, these arguments might not all point in the same direction in terms of practical change. But reforms that render platforms more responsive and responsible to epistemically grounded truths, to the interests of all their users, and to the interests of the general public at large are plausibly understood to respond to all the salient critiques discussed above.

II. Islands of Integrity—Real and Digital Examples

One way of thinking about how existing social platforms might be reformed is to identify an aspirational end-state, or a model, of how they might ideally work. With an understanding of the best version of a social platform in view, it may be easier to evaluate extant reform strategies and to propose new ones. This inquiry might proceed at the retail level—focusing on what an “ideal” or a “better” platform might look like—or at a general level—asking how the digital ecosystem overall should be designed. With the first of these paths in mind, I introduce in this Part a conceptual framework for thinking about “islands of integrity” developed in the sociological and political science studies of development. While that literature has not yielded any simple or single formula for reaching that aspiration, it still offers a useful lens for starting to think about well-functioning social platforms. Or so I hope to show in what follows.

A. Building Islands of Integrity in the Real World

In recent decades, concern about the legality and the quality of governance has shaped the agenda of international development bodies such as the World Bank.84Aziz Z. Huq, The Rule of Law: A Very Short Introduction 75–78 (2024). One of the strategies identified to enhance the quality of public administration centers the idea of “islands of integrity” or “pockets of effectiveness” in sociopolitical environments that are “otherwise dominated by patrimonialism, corruption, and bureaucratic dysfunction.”85Prasad, supra note 15, at 376. An island of integrity has been defined as an entity or unit (generally of government) that is “reasonably effective in carrying out [its] functions and in serving some conception of the public good, despite operating in an environment in which most agencies are ineffective and subject to serious predation . . . .”86David K. Leonard, ‘Pockets’ of Effective Agencies in Weak Governance States: Where Are They Likely and Why Does It Matter?, 30 Pub. Admin. & Dev. 91, 91 (2010); see also Michael Roll, The State That Works: A ‘Pockets of Effectiveness’ Perspective on Nigeria and Beyond, in States at Work: Dynamics of African Bureaucracies 365, 367 (Thomas Bierschenk & Jean-Pierre Olivier de Sardan eds., 2014) (“A pocket of effectiveness (PoE) is defined as a public organisation that provides public services relatively effectively despite operating in an environment, in which public service delivery is the exception rather than the norm.”). The normative intuition is that it is possible to seed islands of integrity, despite pervasive corruption, as a starting point for more large-scale reforms.

There are by now a wide variety of case studies on islands of integrity. Monica Prasad, for example, points to the Indian Institutes of Technology (“IITs”), an archipelago of meritocratic, technology-focused colleges across the subcontinent, as an instance where an educational mission is successfully pursued against “a context of patrimonialism and corruption.”87Prasad, supra note 15, at 380. IITs’ mission is preserved and protected from distortion through the use of selection strategies of “meritocratic decoupling” that sort both students and teachers based on academic merit, alongside efforts to show how the institution benefited those who were excluded.88Id. at 382–83.

In a different case study, Eliška Drápalová and Fabrizio Di Mascio identify a pair of municipalities in Spain as “islands of integrity.”89Drápalová & Di Mascio, supra note 15, at 128. They contend that the key move in creating them was the fashioning of a “fiduciary relationship between mayors and city managers,” which allowed for the development of a bureaucratic structure shaped by professional (rather than patrimonial) norms.90Id. at 129–30, 135. City managers, they find, offer “accountability and responsiveness” to elected leaders without compromising the integrity of service-oriented institutions.91Id. at 135. Similarly, Michael Roll maps the emergence in Nigeria of well-run agencies regulating food and drugs, on the one hand, and combating human trafficking, on the other, to demonstrate that islands of integrity can emerge even under very difficult circumstances given the right leadership.92Roll, supra note 86, at 370–73.

Most, but not all, of these case studies on islands of integrity concern real-world public administration, often at a local level.93One article applies the concept to public broadcasters in developing countries, but does not do so with enough detail to be useful. Cherian George, Islands of Integrity in an Ocean of Commercial Compromises, 45 Media Asia 1, 1–2 (2018). The generalizations drawn by the literature are concededly fragile: The heterogeneity of cultural, political, and institutional contexts makes inference unstable, at least at a useful level of granularity.94Leonard compiles a number of general lessons, but these are pitched at a very high level of abstraction. Leonard, supra note 86, at 93. Still, a couple of regularities do tentatively emerge from a review of the available case studies in the development literature.

Crudely stated, the “islands of integrity” literature underscores the importance of institutional means and leadership motives for resisting patrimonial or corrupt political cultures. First, an island of integrity needs to internalize control over its own workings in order to “create a culture of meritocracy and commitment to the organization’s mission.”95Prasad, supra note 15, at 376. Underpinning this culture, it seems, must be a clear understanding of the public goods that the agency or body is supposed to produce. The truism that leadership is key seems to hold particularly strongly.96Leonard, supra note 86, at 94 (noting the importance of “leadership, personnel management, resource mobilisation and adaptability”). Autonomy over personnel choice is also crucial in order to maintain that culture.97Roll, supra note 86, at 379.

Second, there is a consistent institutional need for the creation of tools to resist demands from powerful external actors who try to capture a body for their immediate political or economic gains, which are unrelated to the public-regarding goals of the institution.98Id. at 377–78 (noting the role of tools for “political management”). Tools by which to mitigate such threats to institutional autonomy vary. Indian universities, Prasad found, tout the local jobs they create in cleaning and maintenance—even as they maintain the separation of student and faculty selection from local pressures—as a way of deflecting local politicos.99Prasad, supra note 15, at 385. Spanish city managers, Drápalová and Di Mascio explain, promise improvements in top-line municipal services to mayors who threaten their autonomy.100Drápalová and Di Mascio, supra note 15, at 135. In effect, reform is purchased in both cases by strategic payoffs to those who threaten its progress.

Just as it is important to work out how to build public-regarding institutional spaces in the real world, so too is it important to identify how to create such spaces in the virtual, digitally mediated world. Just as the bodies in India, Spain, and Nigeria need motive and means to keep the corrosive forces of their surrounding environments at bay, so too does a social platform that strives to be an island of integrity need leadership, an internal culture, and the means to create a non-exploitative, non-dominating structure while managing tail risk better than existing platforms do. Taken as metaphor, therefore, “islands of integrity” offer a template for the desirable end goal of social platform reform as well as some modest clues about how to get there. Still, it is important not to make too much of this metaphor. The claim that the “islands of integrity” literature can be an inspiration for social platform reform is, at bottom, an argument from analogy, and one that needs to be tested carefully in application.

B. Digital Islands of Integrity: Two Examples

The aforementioned analogy gains force when one realizes that there are already examples of digital islands of integrity online. The two most prominent examples are Wikipedia and the British Broadcasting Corporation (“BBC”). To be clear, neither is a quintessential social platform as I have used that term here. Nor do they operate at the same scale as X or Instagram. But I offer a brief discussion of both by way of proof of concept.

Wikipedia emerged from the wreckage of an attempted for-profit online encyclopedia called Nupedia.101Emiel Rijshouwer, Justus Uitermark & Willem de Koster, Wikipedia: A Self-Organizing Bureaucracy, 26 Info., Commc’n & Soc’y 1285, 1291 (2023). The latter’s assets (for example, domain names, copyrights, and servers) were subsequently placed in an independent, charitable organization, the Wikimedia Foundation (“WMF”).102Id. at 1293. At first, corporate governance “emerged” organically from the efforts of those building the site, rather than being imposed from above.103Id. at 1298 (explaining that “bureaucratization emerges from interactions among constituents” of Wikipedia). A group of founders then “transformed their charismatic community into a bureaucratic structure” in which “power was diffused and distributed” across “a sprawling bureaucracy with a wide range of formal positions” in response to the perceived mission-related needs of the organization.104Id. at 1294. The organization’s trajectory has also been characterized by moments of greater centralization. For example, in the early 2010s, the WMF’s chief executive led an effort to make the project “more inclusive and more open,” somewhat to the chagrin of the then-contributors.105Id. at 1296. That is, Wikipedia’s governance history centers on a choice of corporate form that insulates leadership from external profit-related pressures, the selection of strong leaders, and the exercise of that leadership to broaden and empower the organization’s constituencies (potentially mitigating criticism of the organization), thereby generating a certain kind of “corporate culture.”106Cf. Pasquale Gagliardi, The Creation and Change of Organizational Cultures: A Conceptual Framework, 7 Organizational Stud. 117, 121–26 (1986) (exploring the meaning of the term “organizational value” and culture).

Even more directly relevant is the web presence of the BBC. The BBC produces thousands of new pieces of content each day for dissemination over a range of sites, such as BBC News, BBC Sport, BBC Sounds, BBC iPlayer, and World Service.107Alessandro Piscopo, Anna McGovern, Lianne Kerlin, North Kuras, James Fletcher, Calum Wiggins & Megan Stamper, Recommenders with Values: Developing Recommendation Engines in a Public Service Organization, Knight First Amend. Inst. (Feb. 5, 2024), https://knightcolumbia.org/content/recommenders-with-values-developing-recommendation-engines-in-a-public-service-organization [https://perma.cc/APX5-T9T2]. The corporation’s charter defines its mission as serving all audiences by providing “impartial, high-quality and distinctive output and services which inform, educate and entertain.”108Id. Like Wikipedia, the BBC is organized into a corporate form that is relatively impermeable by law to commercial pressures. To advance its charter goals, the BBC uses machine-learning recommender algorithms created by multi-disciplinary teams of data scientists, editors, and product managers.109Id. Once a recommender system has been built,110Id. Public service broadcasters such as the BBC cannot rely on “off-the-shelf” recommenders because they optimize for very different goals. Jockum Hildén, The Public Service Approach to Recommender Systems: Filtering to Cultivate, 23 Television & New Media 777, 787 (2022). editorial staff can offer “continuous feedback” on the design and operation of recommendatory systems to identify legal compliance questions and to ensure “BBC values” are advanced.111Piscopo et al., supra note 107.

Available accounts of this process—while perhaps a touch self-serving because they are written by insiders—suggest that the organization strives to cultivate a distinctive cultural identity. It then leverages that identity as a means of advancing its values via algorithmic design. Specifically, an important part of this recommender design process focuses on empowering users to make their own choices and to avoid undesirable (from the service’s perspective) behaviors. The BBC’s recommender tools are designed to permit personalization, albeit only to the extent that doing so can “coexist with the BBC’s mission and public service purposes.”112Id. An insider informant speaking anonymously reported that the BBC understands itself “as ‘morally obliged’ to provide their users with the possibility of tweaking their recommendations.”113Hildén, supra note 110, at 786. In the same study, the employee of an unnamed European public broadcaster that managed a recommender system reported that their system proactively identified “users who consume narrow and one-sided media content and recommend[ed to] them more diverse content.”114Id. at 788. That is, the system was designed to anticipate and mitigate, to an extent, the possibility of extremism at the tails of the user distribution, while also preserving users’ influence over the content of their feeds. This stands in stark contrast to systems designed to maximize engagement, a goal that predictably entails driving users toward more extreme (and even dangerous) content.
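
To make the mechanism concrete, the following sketch (which is emphatically not the BBC's actual system) illustrates one way a public service recommender might blend predicted relevance with a diversity objective, boosting out-of-topic items only for users whose recent consumption is narrow. The item fields, the weighting scheme, and the thresholds are all assumptions offered for illustration.

```python
# A minimal sketch, under assumed data structures, of a "values-aware"
# re-ranker: it promotes topical diversity for users whose recent history is
# dominated by a single topic, while ranking mostly on relevance otherwise.

from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    relevance: float   # predicted relevance to this user (0 to 1)
    topic: str

def topic_concentration(history: list[str]) -> float:
    """Share of the user's recent history devoted to their single top topic."""
    if not history:
        return 0.0
    top = max(set(history), key=history.count)
    return history.count(top) / len(history)

def rerank(candidates: list[Item], history: list[str]) -> list[Item]:
    """Boost items outside the user's dominant topic when consumption is narrow."""
    concentration = topic_concentration(history)
    diversity_weight = 0.5 * concentration  # narrower history => stronger boost
    dominant = max(set(history), key=history.count) if history else None

    def score(item: Item) -> float:
        bonus = diversity_weight if item.topic != dominant else 0.0
        return (1 - diversity_weight) * item.relevance + bonus

    return sorted(candidates, key=score, reverse=True)

# Usage: a user who has read almost nothing but politics sees other topics
# promoted; a user with a varied history is ranked mostly on relevance.
items = [Item("a", 0.9, "politics"), Item("b", 0.7, "science"), Item("c", 0.6, "arts")]
print([i.item_id for i in rerank(items, ["politics"] * 9 + ["science"])])
```

The design choice worth noticing is that personalization is preserved: the re-ranking leans on relevance for most users and intervenes only where consumption has become one-sided.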

I do not want to strain the parallels between the “islands of integrity” literature and these digital examples too much. Both of the latter, nevertheless, point to ways in which the means and the motives to sustain an “island of integrity” can be imagined in an online world. Both services, for example, have leadership explicitly oriented toward a public service mission. Both also opted for corporate forms that allow for some protection against potentially compromising market forces. Both have systems in place to preserve and transmit a valued internal culture, while buffering themselves somewhat against the risks of distorting external or internal pressure. Finally, both seem to have cultivated durable cultures of public service by hard-wiring those cultures into bureaucratic structures or, alternatively, into algorithmic designs.

III.  The Governance of Social Platforms: Aspiring to Build Islands of Algorithmic Integrity

With the general idea of “islands of integrity” in hand, along with the specific proofs of concept described in Section II.B, it is possible to ask how certain social platforms might be reformed with an ideal of islands of algorithmic integrity in mind. That is, how might we move toward alternative platform designs and operations that address the normative concerns outlined in Part I? What kind of private governance might be imagined that mitigates exploitation and domination concerns, while addressing the tail risk of extremism as best we can? Could legal regulation play a role? Again, it would be a mistake to frame these questions as mechanical applications of the “islands of integrity” literature. It is better to think of them as falling out of the same institutional design goal.

I approach this inquiry in two stages. First, I critique leading regulatory strategies observed in the American states and the European Union from an “islands-of-algorithmic-integrity” standpoint. At bottom, these critiques draw out ways in which those regulatory strategies take social platforms as potential sources of harm, largely without an account of the positive role platforms could play. Second, I draw together a number of possible tactics for public or private actors to help build islands of algorithmic integrity. My positive accounting here is concededly incomplete. My hope, however, is that this effort serves as initial evidence of the fruitfulness of an approach oriented toward the aspiration of islands of algorithmic integrity.

A. The Limits of Existing Platform Regulation Regimes

Since 2020, social platforms have become an object of regulatory attention on both sides of the Atlantic. Three main regulatory strategies can be observed. These take the form of new state regulations purportedly targeting “censorship,”115Mary Ellen Klas, DeSantis Proposal Would Protect Candidates Like Trump from Being Banned on Social Media, Mia. Herald, https://www.miamiherald.com/news/politics-government/state-politics/article248952689.html [https://web.archive.org/web/20221017063802/https://www.miamiherald.com/news/politics-government/state-politics/article248952689.html] (quoting Florida governor Ron DeSantis). fresh efforts to extend common law tort liabilities to social platforms, and a risk-based regulatory regime promulgated by the European Union. Broadly speaking, all such legal intervention is premised on concern about platforms’ society-wide effects on listeners, although deontic concerns may play a role too. The tools chosen for those tasks, however, have been inadequate. Their shortfall can be traced to the way in which they focus exclusively on platform harms (missing the importance of benefits), misconstrue those harms, and then fail to incentivize the formation of platforms with the means and the motive to mitigate documented harms while resisting exploitation or domination.

  1. Regulating Ex Ante for Harms

The 2022 Digital Services Act (“DSA”) offers a first model of ex ante platform regulation. In important part, it trains its attention on the potential harms of recommender systems without any account of their positive effects. It contains a suite of new legal obligations: Article 25, for example, prohibits any digital platform design that “deceives or manipulates the recipients of their service or in a way that otherwise materially distorts or impairs the ability of the recipients of their service to make free and informed decisions.”116Digital Services Act, supra note 13, at art. 25 § (1). Article 38 requires very large platforms to offer at least one recommender option that is not based on profiling, in effect a right to opt out of personalized recommendation.117Id. at art. 38 (mandating “at least one option for each of their recommender systems which is not based on profiling as defined in Article 4, point (4), of Regulation (EU) 2016/679”). Articles 14 and 26 set out some disclosure obligations on certain companies.118Id. at art. 14 § (1) and art. 26 § (1)(d). Most importantly, for present purposes, Article 34 of the DSA requires “very large online platforms and . . . online search engines” to conduct an annual assessment of any systemic risks stemming from the design or functioning of their service, including negative effects on civic discourse, electoral processes, or fundamental rights.119Id. at art. 34. For a close reading of Article 34, see Neil Netanel, Applying Militant Democracy to Defend Against Social Media Harms, 45 Cardozo L. Rev. 489, 566 (2023).

At first blush, the DSA seems oriented toward the creation of islands of algorithmic integrity. But there are reasons for being skeptical of conceptualizing the project this way. To begin with, the Article 38 opt-out is unlikely to be exercised by those “supersharers” at the tails of the ideological distribution who are most responsible for the diffusion of fake news.120Baribi-Bartov et al., supra note 55, at 979. Self-help remedies never avail those already fixated on harming themselves and others. Moreover, Article 34 risk assessments impose no clear affirmative command to build epistemically robust speech environments.121But see Netanel, supra note 119, at 566–67 (proposing that platforms be required to make “recommender system modifications to improve the prominence of authoritative information, including news media content that independent third parties have identified as trustworthy”). Netanel, however, is proposing in this passage an extension of Article 34 rather than offering a gloss on it, so far as I can tell. In effect, the act offers no clear account of how social platforms could or should enable democratic life. Even more problematic, the DSA ultimately leans on platforms themselves to accurately document and remedy their own flaws. It does not seem excessively cynical to predict that profit-oriented companies will not be falling over themselves to flag the negative externalities of their own products in publicly available documents and flagellate themselves over how to remedy them. The DSA, in short, is promising as theory. But it may fall substantially short in practice.

  2. Regulating Ex Ante for Balance

Both Florida and Texas have enacted statutes intended to limit platforms’ abilities to “deplatform” a person because of their violation of terms of service.122Florida defines “deplatform” as “the action or practice by a social media platform to permanently delete or ban a user or to temporarily delete or ban a user from the social media platform for more than 14 days.” Fla. Stat. § 501.2041(1)(c) (2021). Texas’s law has a similar provision. See H.B. 20, 87th Leg., Reg. Sess. (Tex. 2021) (prohibiting social media platforms from censoring users or a user’s expressions based on the viewpoint expressed in the content). The Florida statute, for example, prohibits platforms from “willfully deplatform[ing] a candidate for office who is known by the social media platform to be a candidate, beginning on the date of qualification and ending on the date of the election or the date the candidate ceases to be a candidate.”123Fla. Stat. § 106.072(2) (2021). In its July 2024 decision in Moody v. NetChoice, the U.S. Supreme Court cast doubt on the constitutionality of such measures.124Moody v. NetChoice, LLC, 603 U.S. 707 (2024). While litigation is ongoing as this essay goes to press, it seems likely that the deplatforming elements of both statutes will not survive.

Relying on familiar doctrinal tools from the First Amendment toolkit, a majority of the Moody Court reached two conclusions that are relevant here. First, Justice Elena Kagan’s majority opinion explained that when an entity “provide[s] a forum for someone else’s views” and is thereby “engaged in its own expressive activity, which the mandated access would alter or disrupt,” a First Amendment interest is implicated.125Id. at 728. Second, the Court held that the government has no constitutionally cognizable interest “in improving, or better balancing, the marketplace of ideas.”126Id. at 732. This anti-distortion argument is familiar from the campaign finance context.127See, e.g., Citizens United v. FEC, 558 U.S. 310, 340–41 (2010) (“By taking the right to speak from some and giving it to others, the Government deprives the disadvantaged person or class of the right to use speech to strive to establish worth, standing, and respect for the speaker’s voice.”). There, however, the argument is generally deployed by conservative justices to resist governmental efforts to advance an equality interest in political speech given its “dangerous[] and unacceptable” effects.128Id. at 351. In the Florida and Texas cases, by contrast, the argument was enlisted against efforts by Republican state governments to enforce their understanding of balance on platform-based speech. The argument’s ideological valence thus flipped from campaign finance to platform regulation.

Independent of these familiar constitutional logics, there are more empirically grounded reasons to conclude that Florida’s and Texas’s efforts to mitigate platforms’ curatorial capacity are likely to undermine, rather than promote, the emergence of islands of algorithmic integrity. These reasons run parallel to Justice Kagan’s reasoning, but are distinctive in character.

The first reason is banal and empirical. The available research suggests that conservative voices in the United States are asymmetrically responsible for the dissemination of fake news.129Baribi-Bartov et al., supra note 55, at 979 (“Supersharers had a significant overrepresentation of women, older adults, and registered Republicans.”); González-Bailón et al., supra note 57, at 397 (“We also observe on the right a far larger share of the content labeled as false by Meta’s 3PFC.”). There is more to be said about the rhetorical use of “balance” claims in law and politics, and its dynamic effects upon the tendency of people to go to extremes. To the extent that Florida and Texas leaned on a conception of “balance” in the speech environment, they did so by culpably ignoring the platforms’ interest in a generally reliable and trustworthy news environment. Enforcement of the Florida and Texas laws, to the contrary, seems likely to lead (all else being equal) to a decline in the quality of those platforms. That is to say, by a sort of Gresham’s law for political speech, the increasing proportion of misleading speech on a platform will tend to drive out those concerned with truthfulness. Such an effect creates a vicious circle of sorts, one that is absent from the campaign finance context.

This argument might be supplemented by a further observation. As I show below, there are a number of fairly obvious affirmative measures that private and public actors can take if they are truly concerned with the creation of islands of algorithmic integrity.130See infra Part III.B. If we see a government failing to take these needful steps while affirmatively adopting counterproductive measures, there is some reason to doubt the integrity of its claim to be acting in the public interest. The islands of algorithmic integrity frame can be put to work here as a lens through which one may understand the gap between a state’s professed interests and its actual ambitions.131Cf. Geoffrey R. Stone, Free Speech in the Twenty-First Century: Ten Lessons from the Twentieth Century, 36 Pepp. L. Rev. 273, 277 (2009) (noting that “government officials will often defend their restrictions of speech on grounds quite different from their real motivations for the suppression, which will often be to silence their critics and to suppress ideas they do not like”). If, as Justice Kagan once suggested in her academic role, First Amendment doctrine is best understood as “a series of tools to flush out illicit motives and to invalidate actions infected with them” and a “kind of motive-hunting,”132Elena Kagan, Private Speech, Public Purpose: The Role of Governmental Motive in First Amendment Doctrine, 63 U. Chi. L. Rev. 413, 414 (1996). then the failure to pick low-hanging fruit while making elaborate and far-fetched claims about one’s integrity-related aims is a telling one. To the extent that it identifies some of those low-hanging fruit, the islands-of-algorithmic-integrity frame grafts comfortably onto that motive-hunting enterprise.

A second reason to be skeptical of measures such as Florida’s and Texas’s is conceptual in character: balance-promoting measures of their ilk help themselves to the assumption that there is a neutral baseline that has been disturbed by a platform’s algorithm. But “the most common choice of baseline fundamentally depends on the state of some components of the system,” and assumes away the effect of past bias and amplification.133Lum & Lazovich, supra note 26. Accordingly, the Florida and Texas laws’ presupposition of a neutral baseline of undistorted speech is misplaced; it is better instead to focus on the structural qualities associated with islands of integrity. Where a government asserts an interest in “neutrality” or “fairness” in the context of social platforms, its arguments should be viewed as pro tanto dubious since it is striving to return to a status quo that, for technological reasons, is imaginary. A version of this baseline difficulty arises in the campaign finance context, albeit for different reasons.134For a nuanced account of the difficulty of curbing the “bad tendencies of democracy,” see David A. Strauss, Corruption, Equality, and Campaign Finance Reform, 94 Colum. L. Rev. 1369, 1378–79 (1994). The campaign finance version, moreover, lacks the sociotechnical foundation that is present in the platform context.

  3. Tort Liability for Harmful Algorithmic Design

The Texas and Florida statutes impose ex ante controls on social platforms. An alternative regulatory strategy involves the ex post use of tort liability to incentivize “better” (by some metric) platform behavior. Platforms benefit from a form of intermediate immunity from tort liability under Section 230 of the Communications Decency Act.13547 U.S.C. § 230; see also Zeran v. Am. Online, Inc., 129 F.3d 327, 328 (4th Cir. 1997) (holding that Section 230 immunized an online service provider from liability for content appearing on its site created by another party). Section 230 immunity is likely wider than the immunity from liability available under the First Amendment,136Cf. Note, Section 230 as First Amendment Rule, 131 Harv. L. Rev. 2027, 2030 (2018) (noting that “[j]udges and academics are nearly in consensus in assuming that the First Amendment does not require § 230”). although the scope of constitutionally permissible tort liability remains incompletely defined.137Jack M. Balkin, Free Speech Is a Triangle, 118 Colum. L. Rev. 2011, 2046 (2018).

Recent lawsuits have tried to pierce Section 230 immunity from various angles. Some have tried to exploit federal statutory liability for aiding and abetting political violence.138See, e.g., Twitter, Inc. v. Taamneh, 598 U.S. 471, 503 (2023) (rejecting that reading of federal statutory tort liability). Others lean on common law tort theories, but contend that Section 230 does not extend to suits that turn on platforms’ use of algorithmic controls to sequence and filter content. For example, in an August 2024 decision, a panel of the Third Circuit reversed a district court’s dismissal of a common law tort complaint against TikTok for its promotion of content that played a role in the death of a minor.139Nylah Anderson watched a TikTok video on the “Blackout Challenge” and died imitating what she saw. Anderson v. TikTok, Inc., 116 F.4th 180, 181 (3d Cir. 2024). The circuit court held that Section 230 did not extend to a claim that TikTok’s “algorithm was defectively designed because it ‘recommended’ and ‘promoted’ the Blackout Challenge.”140Id. at 184. TikTok’s promotion of the Blackout Challenge, said the panel, was “TikTok’s own expressive activity,” and as such fell outside Section 230’s scope.141Id. This construction of Section 230 has been severely criticized.142See, e.g., Ryan Calo, Courts Should Hold Social Media Accountable—But Not By Ignoring Federal Law, Harv. L. Rev. Blog (Sept. 10, 2024), https://harvardlawreview.org/blog/2024/09/courts-should-hold-social-media-accountable-but-not-by-ignoring-federal-law [https://perma.cc/CFE6-3ZDZ]. It is far from clear, for instance, how this ruling can be squared with the seemingly unambiguous Section 230 command that no platform can “be treated as the publisher or speaker of any information provided by another information content provider.”14347 U.S.C. § 230(c)(1) (emphasis added).

Reflection on the prospect of tort liability delimited in this fashion and consistent with Section 230 (especially with the idea of “islands of algorithmic integrity” in mind) offers some further reasons for skepticism about the Third Circuit’s decision and about the consequences of tort liability for algorithmic design more generally. For it is far from clear how algorithmic-design-based liability of the sort that the Third Circuit embraced can be cabined. Every algorithmic decision changes the overall mix of content on the platform. So, it is always the case that such decisions in some sense “cause” the appearance of objectionable content.144One might interpose here some notion of algorithmic proximate cause. That presents, to say the least, rather difficult questions of doctrinal design. Indeed, one could argue that any mechanism imposed to limit one sort of harmful speech necessarily increases the likelihood that other sorts of speech (including other sorts of harmful speech) will feature prominently on the platform. For example, a decision to filter out speech endorsing political violence is (all else being equal) going to increase the prominence of speech conducive to adolescent mental health problems. In this way, the Third Circuit’s decision (at least as written) has the practical effect of carving out all algorithmic content-moderation activity from Section 230’s scope. It is hard to imagine that this comports with Congress’s enacting intent.

Indeed, tort liability for algorithmic decisions will inevitably push platforms to rely more on networks, rather than algorithms, as drivers of content. But the empirical evidence suggests that network-based platform designs are more, not less, likely to experience higher levels of fake news, and that they are less amenable to technical fixes.145See supra text accompanying notes 44–65. Tort liability, at least as understood by the Third Circuit in the TikTok case, therefore pushes platforms away from socially desirable equilibria. Paradoxically, all else being equal, it is likely to increase, and not decrease, the volume of deeply troublesome material of the sort at issue in that very case. More generally, it is hard to see how liability for algorithmic design decisions is socially desirable.

B. The Possible Vectors of Algorithmic Integrity

The fact that state and national governments opt for partial or unwise regulatory strategies does not mean that there are no promising paths forward. To the contrary, the examples examined in Part II suggest a range of useful reforms. I outline three here briefly.

To begin with, the examples of Wikipedia and the BBC suggest that it may be possible to build at least small-scale islands of algorithmic integrity either in the private or the public sector. Those examples further suggest that whether state or private in character, such an island needs mechanisms to shield itself from the pressure to maximize profits. An entity that is exposed to the market for corporate control is unlikely to be able to resist commercial pressures for long.

Corporate form hence matters. For example, social platforms’ incentive to maximize engagement, and hence advertising revenue, has been “critical” to driving the dissemination of radicalizing and hateful speech.146Daron Acemoglu & Simon Johnson, Power and Progress 362 (2023). The transformation of Twitter into X after its purchase by Elon Musk, and the subsequent degradation and coarsening of discourse on the platform, offer an object lesson in the perils that the unfettered free market poses for islands of algorithmic integrity.147There is some evidence that X systematically favored right-leaning posts in late 2024, suggesting a link between corporate control and political distortion. Timothy Graham & Mark Andrejevic, A Computational Analysis of Potential Algorithmic Bias on Platform X During the 2024 US Election (Queensland Univ. of Tech., Working Paper, 2024), https://eprints.qut.edu.au/253211. The market for corporate control is commonly viewed as an unproblematic good, its risks often glossed over in light of the efficient capital markets hypothesis.

One of the main lessons of the islands of integrity literature, however, is the need for well-motivated leadership of the sort that has been described at Wikipedia and the BBC. It is hard to see how such motivation survives under the shadow of potential corporate takeover.

Second, islands of integrity require the right means (or tools), as well as the right motive. The use of algorithmic tools to curate a platform creates such means in a way that reliance on network effects does not. It is thus a mistake to assume, as the Third Circuit seems to have done in the TikTok case, that an algorithmically managed platform is worse than a network-based one. As Part I illustrated, the empirical evidence suggests that algorithmically managed platforms are generally not more polluted by misinformation than ones driven by users’ networks.148Budak et al., supra note 52, at 48; accord Hosseinmardi et al., supra note 47, at 1. Quite the contrary.

Moreover, a social platform built around an algorithm may have tools to improve its epistemic environment that a network-based platform lacks. For instance, a 2023 study found that certain “algorithmic deamplification” interventions had the potential to “reduce[] engagement with misinformation by more than [fifty] percent.”149Benjamin Kaiser & Jonathan Mayer, It’s the Algorithm: A Large-Scale Comparative Field Study of Misinformation Interventions, Knight First Amend. Inst. (Oct. 23, 2023), https://knightcolumbia.org/content/its-the-algorithm-a-large-scale-comparative-field-study-of-misinformation-interventions [https://perma.cc/Y4KU-76BY]. Another example of an instrument for epistemic integrity is, somewhat surprisingly, a feature of Facebook’s algorithm, which has baked in a preference for friends-and-family content that “appears to be an explicit attempt to fight the logic of engagement optimization.”150Narayanan, supra note 10, at 31.
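
For readers who want a concrete picture, here is a minimal sketch of the kind of deamplification intervention described above, written under assumed data structures rather than as a description of any platform's production code: content flagged as likely misinformation is not removed, but its ranking score is discounted so that it surfaces less often.

```python
# A minimal sketch of "algorithmic deamplification" in a hypothetical feed
# pipeline. The flagging source and the discount factor are assumptions, not
# a description of any platform's actual intervention.

def deamplify(scored_items, flagged_ids, discount=0.25):
    """Return items re-sorted after multiplying flagged items' scores by `discount`.

    scored_items: list of (item_id, engagement_score) pairs.
    flagged_ids: set of item_ids labeled by fact-checkers or classifiers.
    """
    adjusted = [
        (item_id, score * discount if item_id in flagged_ids else score)
        for item_id, score in scored_items
    ]
    return sorted(adjusted, key=lambda pair: pair[1], reverse=True)

feed = [("post1", 0.92), ("post2", 0.80), ("post3", 0.75)]
print(deamplify(feed, flagged_ids={"post1"}))
# post1 drops from the top slot without being deleted outright.
```

A design of this kind illustrates why algorithmic curation supplies "means" that purely network-driven distribution lacks: the discount factor is a dial that a well-motivated platform can turn.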

Third, there is a range of tailored reforms that precisely target ways in which social platforms stand in asymmetrical relations of exploitation and dominance to their users. As a very general first step, Luca Belli and Marlena Wisniak have proposed the use of “nutrition labels,” detailing key parameters of platform operation as a way of enabling better informed consumer choice between platforms.151Luca Belli & Marlena Wisniak, What’s in an Algorithm? Empowering Users Through Nutrition Labels for Social Media Recommender Systems, Knight First Amend. Inst. (Aug. 22, 2023), https://knightcolumbia.org/content/whats-in-an-algorithm-empowering-users-through-nutrition-labels-for-social-media-recommender-systems [https://perma.cc/N7MW-SEVT]. This kind of notice-based strategy, while plausible to implement, assumes a measure of user choice over which platform to use. At present, such choice is largely illusory because of the market dominance of a small number of platforms.152Lina M. Khan, The Separation of Platforms and Commerce, 119 Colum. L. Rev. 973, 976 (2019) (“A handful of digital platforms exert increasing control over key arteries of American commerce and communications.”). It is also hard to see how consumers, particularly those already at the ideological margin, could be persuaded to make the right kind of choice. Inducing more competition, and hence more consumer choices, in social platforms would give notice-oriented measures more bite. Some work has been done on potential varieties of platform design,153For a recent survey of other possible models of “decentraliz[ed]” platform governance, see Ethan Zuckerman & Chand Rajendra-Nicolucci, From Community Governance to Customer Service and Back Again: Re-Examining Pre-Web Models of Online Governance to Address Platforms’ Crisis of Legitimacy, 9 Soc. Media + Soc’y, July–Sept. 2023, at 1, 7–9. but there remains ample room for inquiry and improvement. The basic point, though, is that some combination of increased competition and better consumer-facing notices would better allow certain users to select among different social platforms based on their own preferences—although it is hard to be confident that the right users, so to speak, will be those aided.
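
To illustrate the notice idea, the following sketch shows what a machine-readable "nutrition label" for a recommender system might contain. The fields are hypothetical examples of disclosable parameters, not a rendering of Belli and Wisniak's actual proposal.

```python
# A minimal sketch, under assumed field names, of a machine-readable
# "nutrition label" that a platform might publish for its recommender system.

from dataclasses import dataclass, asdict
import json

@dataclass
class RecommenderNutritionLabel:
    platform: str
    optimization_target: str          # e.g., "predicted watch time"
    uses_behavioral_profiling: bool
    profiling_opt_out_available: bool
    data_retained_days: int
    ads_personalized: bool
    human_editorial_oversight: bool

label = RecommenderNutritionLabel(
    platform="ExamplePlatform",
    optimization_target="predicted watch time",
    uses_behavioral_profiling=True,
    profiling_opt_out_available=True,
    data_retained_days=180,
    ads_personalized=True,
    human_editorial_oversight=False,
)
# What a consumer-facing (or regulator-facing) disclosure might contain:
print(json.dumps(asdict(label), indent=2))
```

Standardized fields of this sort would matter mainly in a market with real alternatives, which is why the competition point in the preceding paragraph does much of the work.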

There are also steps that can be taken by a well-motivated platform manager. Within a platform, for example, the BBC’s strategy of promoting personalization could be adopted and redeployed in a number of ways. For instance, bots, or “user-taught” agents, could be supplied to help individual users curate the shape of their feeds over time.154Kevin Feng, David McDonald & Amy Zhang, Teachable Agents for End-User Empowerment in Personalized Feed Curation, Knight First Amend. Inst. (Oct. 10, 2023), https://knightcolumbia.org/content/teachable-agents-for-end-user-empowerment-in-personalized-feed-curation [https://perma.cc/RAN8-QT7S]. These bots, however, might be constrained by an understanding of the platform’s mission that excludes the normatively troublesome activity characterizing the tails of the ideological distribution.

Finally, another way of mitigating exploitation concerns focuses on advertisers rather than users. Firms advertising on platforms are often unaware their products or services are marketed next to fake news, despite having an aversion to that arrangement.155Wajeeha Ahmad, Ananya Sen, Charles Eesley & Erik Brynjolfsson, Companies Inadvertently Fund Online Misinformation Despite Consumer Backlash, 630 Nature 123, 125–28 (2024). They lack, however, information on when and how this occurs. Increased disclosure by platforms on “whether . . . advertisements appear on misinformation outlets,” as well as increased “transparency for consumers about which companies advertise” there, provides the potential to stimulate a collective shift to a more truthful equilibrium.156Id. at 129. Such disclosures help ensure that “the means of ensuring legibility [will not completely] fade into the background of the ordinary patterns of our li[ves],”157Henry Farrell & Marion Fourcade, The Moral Economy of High-Tech Modernism, 152 Dædalus 225, 228 (2023). as platform affordances become too banal to notice. Such disclosures, finally, might be mandated by law, potentially as a means of mitigating fraud concerns related to platform use.
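
By way of illustration, and assuming hypothetical ad-server logs and a hypothetical list of misinformation outlets, the following sketch shows the kind of adjacency report such a disclosure regime might generate for advertisers.

```python
# A minimal sketch, under assumed data structures, of an advertiser-facing
# disclosure: how often each firm's ads were served alongside content from
# outlets on a (hypothetical) misinformation list.

from collections import Counter

def adjacency_report(ad_placements, misinformation_outlets):
    """ad_placements: list of (advertiser, outlet) pairs from ad-server logs."""
    flagged = Counter(
        advertiser
        for advertiser, outlet in ad_placements
        if outlet in misinformation_outlets
    )
    totals = Counter(advertiser for advertiser, _ in ad_placements)
    return {
        advertiser: {"flagged_impressions": flagged[advertiser],
                     "share": flagged[advertiser] / totals[advertiser]}
        for advertiser in totals
    }

placements = [("AcmeCo", "reliable-news.example"),
              ("AcmeCo", "fakenews.example"),
              ("BetaCorp", "reliable-news.example")]
print(adjacency_report(placements, {"fakenews.example"}))
```

Even a simple report of this kind would give advertisers the information they presently lack about when and how their spending subsidizes misinformation outlets.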

Conclusion

In this essay, I have tried to offer an affirmative vision of social platform governance in the long run, or at least the seeds of such a vision. No doubt this vision is leagues away from, and in stark contrast to, the grubby, venal, and hateful reality of social platforms today. But one of the functions of scholarship is to generate plausible pathways away from a suboptimal institutional status quo. The articulation of alternatives is itself of value.

As I have suggested, drawing on sociological and political science literature on islands of integrity in public administration allows us to see some of the limits of existing regulatory strategies with respect to social platforms. Doing so opens up new opportunities for improved public and private governance. Of course, the model of islands of integrity in a public administrative context cannot be mechanically transposed over to the platform context. But by offering us a new North Star for reforming governance efforts, I hope it can advance our understanding of how to build platforms fit for our complex, yet (perhaps still) fragile democratic moment.

98 S. Cal. L. Rev. 1287


*  Frank and Bernice J. Greenberg Professor of Law, University of Chicago Law School, and associate professor, Department of Sociology. Thanks to Erin Miller for extensive and illuminating comments, and to participants in the symposium—in particular Yasmin Dawood—for terrific questions and conversation. The editors of the Southern California Law Review, in particular Michelle Solarczyk and Tyler Young, did exemplary work in making this essay better. The Frank J. Cicero Foundation provided support for this work. 

Pluralism and Listeners’ Choices Online

“The plain, if at times disquieting, truth is that in our pluralistic society, constantly proliferating new and ingenious forms of expression, ‘we are inescapably captive audiences for many purposes.’ ”1Erznoznik v. City of Jacksonville, 422 U.S. 205, 210 (1975) (quoting Rowan v. U.S. Post Off. Dep’t, 397 U.S. 728, 736 (1970)).

The speech and technology world has changed dramatically, even unimaginably, since Justice Powell penned these words about drive-in movie theaters. In attempting to grapple with this quandary in the contemporary era, James Grimmelmann offers us the provocative and original paper, Listeners’ Choices Online.2James Grimmelmann, Listeners’ Choices Online, 98 S. Cal. L. Rev. 1231 (2025) [hereinafter, Listeners’ Choices Online]. His contribution to this Symposium builds on earlier work in which he argues for a theoretical approach to free speech that makes listeners’ interests the central focus of First Amendment doctrine.3James Grimmelmann, Listeners’ Choices, 90 U. Colo. L. Rev. 365, 365, 372–73 (2019). As he argues in the earlier paper, freedom of expression involves what he calls a “matching problem”—ideally lining up speakers with listeners who want to hear their expression, but not with listeners who do not.4Id. at 366.

The current paper is both too complex and too nuanced to summarize adequately in this brief Comment, but here are a few of his main points as I interpret them, which my comments will address.

  • Facilitating matching between willing speakers and willing listeners is the goal of a system of free speech. In that regard, “listeners’ choices matter more than speakers’. . . . A consistent commitment to protecting these willing speaker-listener pairs results in a system of First Amendment law that regularly defers to listeners’ choices.”5Id.

  • Applying that model resolves some of the important First Amendment questions arising from the regulation of contemporary electronic speech media.
  • It is useful to disaggregate communication media into four types, each of which presents distinct matching challenges: (1) Broadcast (television, radio, cable); (2) Delivery (telephone, email, messaging); (3) Hosting (providers of space for speech, but not engaged in speech themselves); and (4) Selection (directing listeners to specific content via algorithms based on the perception of listener preferences).6Listeners’ Choices Online, supra note 2, at 1249–64. Currently, hosting and selection functions are frequently combined, though that does not have to be so.7Id. at 1265.
  • Selection intermediaries play a key role in determining what listeners hear or see. This is an essential function because the sheer volume of speech available on the Internet creates otherwise insurmountable attention scarcity problems for listeners.8Id. at 1261–62.
  • This listeners’ choice model allows for limited regulatory interventions on the media’s selection functions that would not violate the First Amendment.
  • It would violate the First Amendment for regulators to prohibit intermediaries from offering listeners the ability to choose what speakers to listen to because that interferes with listeners’ right to listen.9Id. at 1265.
  • However, the government may permissibly intervene when a search engine (or, presumably, other selection intermediary) is dishonest or disloyal to its users, “when it shows them results that (objectively) differ from the engine’s own (subjective) judgment about what the users are likely to find relevant,”10Id. at 1261. because that also interferes with listeners’ interests.
  • It would also be permissible to have a rule requiring pure selection intermediaries to treat first-party content evenhandedly with content posted by third parties.11Id. at 1264–66.
  • “Seeing the Internet from listeners’ perspective is a radical leap. It requires making claims about the nature of speech and about where power lies online that seem counterintuitive if you are coming from the standard speaker-oriented First Amendment tradition. But once you have made that leap, and everything has snapped into focus again, it is impossible to unsee.”12Id. at 1282.

There is much to admire in Professor Grimmelmann’s paper. It makes a number of important and original contributions to thinking about the regulation of social media and is in many parts completely persuasive. First, consistent with the objective of this Symposium, it highlights listeners’ interests as a basis to evaluate the American system of freedom of expression. It is indisputable that the Supreme Court and legal scholars have underappreciated the role of listeners’ interests in articulating First Amendment doctrine.13But see Leslie Kendrick, Are Speech Rights for Speakers?, 103 Va. L. Rev. 1767, 1775–79 (2017) (observing that although much First Amendment doctrine is expressed in terms of protecting speaker interests, in many cases the resulting legal framework is ultimately designed with listeners in mind). That argument does not, of course, detract from the proposition that we have much to learn from focusing more explicitly on listeners’ interests. The primary context in which the Supreme Court expressly considers listener interests involves unwilling listeners as captive audiences, but those are the only cases that place listeners’ interests at center stage.14See, e.g., Erznoznik v. City of Jacksonville, 422 U.S. 205, 210 (1975); Cohen v. California, 403 U.S. 15, 21–22 (1971). The Court has upheld legal rules that bar speakers from imposing speech on unwilling listeners when the listeners’ “substantial privacy interests are being invaded in an essentially intolerable manner.”15Cohen, 403 U.S. at 21. Even in captive audience situations, as Grimmelmann points out, under current doctrine the interests of willing listeners will sometimes outweigh the rights of unwilling listeners, particularly if it is easy for the latter to avoid the speech.16Listeners’ Choices Online, supra note 2, at 1271–73.

Listeners’ Choices Online also offers us a way out of the ongoing effort to find the appropriate perspective through which to evaluate how First Amendment doctrine should apply to the contemporary media environment. Much recent scholarship has struggled with this question, with legal scholars sometimes seeking to find appropriate analogies from regulation of past communication technologies to justify a legal framework for thinking about the regulation of social media platforms.17See, e.g., Jack M. Balkin, How to Regulate (and Not Regulate) Social Media, 1 J. Free Speech L. 71, 89–96 (2021). Is cable television like traditional television and radio broadcast media? Does regulation of telephone services offer any insight into how we ought to regulate digital communications? Is Facebook more like a parade or a shopping mall? Can social media companies be treated like common carriers, subjecting them to greater regulatory constraints than would otherwise be permissible to impose on private companies engaged in speech?18See, e.g., Ashutosh Bhagwat, Why Social Media Platforms Are Not Common Carriers, 2 J. Free Speech L. 127, 151–56 (2022); Eugene Volokh, Treating Social Media Platforms Like Common Carriers?, 1 J. Free Speech L. 377, 454–62 (2021).

None of the analogies work perfectly, however, because each different electronic speech medium bears some distinguishing features that complicate the analysis.19See Gregory M. Dickinson, Beyond Social Media Analogues, 99 N.Y.U. L. Rev. 109, 116–23 (2024) (criticizing the analogy-based approach to establishing norms for regulating social media). Some, as the article points out, are mere vessels for delivery of content, while others engage in important speech-impacting selection decisions that help listeners sort through the onslaught of online content, but, in doing so, may affect listeners’ interests by providing them content they do not want to hear or directing them away from content they would welcome.20See Listeners’ Choices Online, supra note 2, at 1287–88.

The Supreme Court has only just dipped its toes in the water, in its dicta in last term’s Moody v. NetChoice, LLC, with the majority opinion stating unequivocally that “[l]ike the editors, cable operators, and parade organizers this Court has previously considered, the major social-media platforms are in the business, when curating their feeds, of combining ‘multifarious voices’ to create a distinctive expressive offering.”21Moody v. NetChoice, LLC, 144 S. Ct. 2383, 2405 (2024) (quoting Hurley v. Irish-Am. Gay, Lesbian & Bisexual Grp. of Bos., Inc., 515 U.S. 557, 569 (1995)). But as Grimmelmann points out, that is looking at the challenged state laws exclusively from the platforms’ perspective, and not the listeners’.22Listeners’ Choices Online, supra note 2, at 1262–64.

Rather than attempting to argue purely by analogy with past regulations of earlier media technologies, Grimmelmann elegantly uses listeners’ interests and choices as an organizing principle that cuts across these different media to create a coherent First Amendment model for evaluating media regulations. He suggests that focusing on these interests allows us to see more clearly the competing speech interests involved in ways that the purely analogical approach simply cannot. His listeners’ choice theory emphasizes matching speakers to willing listeners, which can be accomplished by structural designs, by some content-neutral government regulation, and, in part, by requiring the separation of hosting and selection functions in ways that maximize these speaker-listener connections.23Id. at 1232–37, 1265–67.

While Professor Grimmelmann’s model is intriguing and helps us think about media regulation in useful ways, I offer three modest thoughts, two focused on whether, in some circumstances, prioritizing listeners’ rights may come at the expense of other important First Amendment values, and one questioning whether there is a need for further promoting listeners’ choices on social media given the increasing market for niche social media sites.24I am also unconvinced that Grimmelmann’s model is generalizable beyond the electronic media context. However, that is not the ambition of his paper.

  1. Prioritizing Listeners’ Choices May Diminish Public Discourse

First, permitting limited regulation of selection intermediaries to protect listeners’ interests could, in some cases, have deleterious effects on public discourse. Even the modest regulatory interventions that Grimmelmann suggests would be permissible to advance listeners’ interests could be leveraged to challenge selection intermediaries’ decisions to offer a more balanced, fact-checked feed to their subscribers. Or, even if those effects do not come to fruition, the very existence of regulatory interventions might deter selection intermediaries from experimenting with innovations to promote delivery of a greater diversity of content that does not cater purely to listeners’ interests.

Consider a hypothetical new platform calling itself Balanced Social Media (“BSM”). Following Grimmelmann’s model, let us assume that a different company is the host for BSM, which exclusively serves a selection function. BSM designs an algorithm that, for the most part, favors listeners’ choices of content, but adds three specific features that veer from the default rule. First, it builds in its own fact-checking mechanism that flags content posted by third-party users that may be objectively false or come from sources that have proven unreliable or inaccurate in the past. The BSM algorithm will still direct the user to that content, but the content will be marked with a red flag that warns the user that the factual foundation of the material may not be valid, and provides a link to a source that disputes the factual validity of the original post.

Second, the algorithm is designed to monitor users’ feeds to determine if they are seeking content that is skewed entirely toward one particular ideology, for example, if a user reads only content posted by Fox News or MSNBC. If the algorithm identifies users who seek ideologically unbalanced content, it will occasionally feed such users some third-party content that comes from a dissimilar political perspective. This counter-ideological feed could come randomly or perhaps after the user has viewed ten consecutive stories from sources with their preferred ideological perspective.

Alternatively, BSM could offer a slightly less intrusive option under which, rather than posting counter-ideological content, BSM would give the user a warning or notice that the user has been reading content coming exclusively from sources with a specific political orientation and ask whether the user would like to see something from a different perspective. This might operate much like TikTok’s option for its users to set a daily screen time limit and be notified when they have reached that limit.25Screen Time, TikTok, https://support.tiktok.com/en/account-and-privacy/account-information/screen-time [https://perma.cc/5E64-3RTR]. Under my hypothetical, however, users would not be able to turn off this setting.

Third, BSM occasionally posts its own independent content on the platform that discusses issues regarding the responsible use of social media and the importance of ensuring that information is factually accurate before posting it. As with the counter-ideological posts, this content will periodically appear in all users’ feeds. BSM users cannot opt out of any of these functions, though, of course, they may decide not to use BSM at all. When users sign up to use BSM, they are fully informed about the algorithm’s functions, which they agree to as part of the Terms of Service (“TOS”). The TOS even says, “BSM offers a new vision of social media, one that will deliver content that you did not ask for, or even that you do not want to see (of course, we cannot make you read it; that is up to you!). The goal of our model is to expose all people to a range of ideologically diverse content.”
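
To make the hypothetical more concrete, the following is a minimal sketch of how BSM’s selection logic might be structured. Every name and threshold in it (Post, UserState, COUNTER_FEED_AFTER, FIRST_PARTY_EVERY, and so on) is an illustrative assumption of mine, not a description of any real platform or of Grimmelmann’s proposal.

```python
# Illustrative sketch only: hypothetical names and thresholds, not real code
# from any platform. It mirrors BSM's three departures from pure listener choice.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    source: str
    ideology: str            # e.g., "left", "right", or "none"
    disputed: bool = False   # flagged by BSM's hypothetical fact-checker
    first_party: bool = False

@dataclass
class UserState:
    consecutive_same: int = 0
    last_ideology: Optional[str] = None
    requests_seen: int = 0

COUNTER_FEED_AFTER = 10   # counter-ideological item after ten same-ideology stories
FIRST_PARTY_EVERY = 25    # hypothetical cadence for BSM's own posts

def select_next(requested: Post, state: UserState,
                counter_pool: list, bsm_posts: list) -> list:
    """Return the posts actually delivered in response to one requested post."""
    delivered = []
    state.requests_seen += 1

    # Feature 1: deliver the requested content, but attach a warning to disputed items.
    if requested.disputed:
        delivered.append(Post(source="BSM fact-check", ideology="none", first_party=True))
    delivered.append(requested)

    # Feature 2: track ideological streaks and occasionally feed opposing content.
    if requested.ideology != "none" and requested.ideology == state.last_ideology:
        state.consecutive_same += 1
    else:
        state.consecutive_same = 1
        state.last_ideology = requested.ideology
    if state.consecutive_same >= COUNTER_FEED_AFTER and counter_pool:
        delivered.append(counter_pool.pop(0))   # content from a dissimilar perspective
        state.consecutive_same = 0

    # Feature 3: periodically insert BSM's own first-party posts.
    if state.requests_seen % FIRST_PARTY_EVERY == 0 and bsm_posts:
        delivered.append(bsm_posts.pop(0))

    return delivered
```

On this sketch, the fact-check label, the counter-ideological insert, and the periodic first-party post are precisely the points at which the intermediary departs from pure listener choice, which is why they become candidates for regulation under a listener-centric model.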

Grimmelmann’s model seems to suggest that lawmakers might be able to forbid BSM to adopt these innovative features because they do not fully promote listeners’ choices. The fact-checking flags and counter-ideological feeds are content that many users may not wish to see; indeed, they may be viscerally repelled by these posts, particularly if this interferes with their ability to experience the emotional resonance associated with speech that highlights their own world views.26On the emotional value associated with the consumption of even false information, see Alan K. Chen, Free Speech, Rational Deliberation, and Some Truths About Lies, 62 Wm. & Mary L. Rev. 357, 423–24 (2020). Grimmelmann suggests that regulators may be able to restrict selection intermediaries’ use of such algorithms to the extent that “it shows [users] results that (objectively) differ from the engine’s own (subjective) judgment about what the users are likely to find relevant.”27Listeners’ Choices Online, supra note 2, at 1261. In fact, BSM’s model is designed to show users content they do not want to see. In Grimmelmann’s terms, the intermediary is being disloyal to its users (although because the algorithm’s functions are fully disclosed in the TOS, it can argue it is not being dishonest).28Id.

Moreover, the BSM-produced content (and maybe even the fact-checking posts) can be viewed as first-party content.29Another question worth considering is whether even paid advertising could be construed as first-party content. Even though it is produced by a third party, which pays the selection intermediary to distribute its content, it is being promoted by the intermediary without regard to listener interests. Surely, selection intermediaries cannot be forbidden to prioritize advertising content, or the entire economic model under which social media platforms operate would collapse. BSM is in some sense trying to compete in the social media market by offering a new way of delivering content. Would a pure listener-based approach result in such experiments being shut down by regulators because they are occasionally giving their first-party content priority over content posted by third parties?30Listeners’ Choices Online, supra note 2, at 1276–79. Grimmelmann qualifies this statement by saying this would apply only to pure selection intermediaries, so perhaps BSM would not be subject to regulation to the extent that it is holding itself out as a content producer as well as an intermediary. But even pure selection intermediaries might flag content with fact-checking warnings, and those posts presumably could be understood as promoting first-party content. That is, by feeding users first-party content in the form of sermons on the importance of truth in the responsible use of social media, has BSM interfered with listener choice? Because Moody holds that social media platforms are speakers when they make decisions about content moderation,31Moody v. NetChoice, LLC, 144 S. Ct. 2383, 2405–06 (2024). they are unquestionably speakers if they are producing their own content. How would Grimmelmann’s model address the tension between a regulation prohibiting BSM from prioritizing first-party content to protect listeners’ choice and the platform’s First Amendment speech rights?

To the disloyalty argument, Grimmelmann might respond that because BSM is transparent about its algorithm, it is not actually being disloyal or dishonest to its users.32That is, assuming all subscribers read and fully understand the TOS, which is highly unlikely. A 2017 study by Deloitte found that 91% of people consent to TOS agreements without reading them. For respondents aged 18–34, the percentage rose to 97%. See Jessica Guynn, What You Need to Know Before Clicking ‘I Agree’ on That Terms of Service Agreement or Privacy Policy, USA Today (Jan. 29, 2020, 2:21 PM), https://www.usatoday.com/story/tech/2020/01/28/not-reading-the-small-print-is-privacy-policy-fail/4565274002 [https://perma.cc/C2JQ-LHFQ]. Listeners who do not want this type of balanced approach can simply choose a different platform that better suits their listening tastes. However, while BSM is certainly giving listeners choice at the first level (platform selection), its model will inevitably result in some BSM users receiving speech at the second level (content selection) that they subjectively do not want to hear.

  2. Elevating Listeners’ Choices Could Encourage Information Silos

A closely related concern with a system of electronic media regulation focusing primarily on promoting listeners’ interests is whether such an emphasis could have the broader systemic effect of exacerbating ideological information silos even more than under the current system.33See, e.g., Dawn Carla Nunziato, The Marketplace of Ideas Online, 94 Notre Dame L. Rev. 1519, 1527 (2019). An important function of a system of free expression is, of course, promoting robust public discourse. Public discourse is inherently oppositional—speakers of different viewpoints must be able to engage each other for it to meaningfully occur.

In many cases, speakers desire to reach listeners whom they believe will be persuaded by their messages if those listeners only had an opportunity to hear them. Anti-abortion advocates may sincerely believe that if women considering abortions only had more information, they would make different choices. Protesters concerned about the humanitarian crisis associated with Israel’s military actions in Gaza would like to reach those who are unconditionally sympathetic to Israel’s right to defend itself because they think, with additional information, these listeners may modify their positions. On social media as well, speakers try to convince unwilling listeners of the virtues of their political positions. Preaching only to the converted does not facilitate healthy discourse.

Outside of the captive audience context, which is almost exclusively applied to unwanted speech in one’s home,34See Rowan v. U.S. Post Off. Dep’t, 397 U.S. 728, 738 (1970) (“That we are often ‘captives’ outside the sanctuary of the home and subject to objectionable speech and other sound does not mean we must be captives everywhere.” (quotation omitted)); But see FCC v. Pacifica Found., 438 U.S. 726, 730, 748 (1978) (upholding placement of Federal Communications Commission order indicating that licensed radio station “could have been the subject of administrative sanctions” for broadcasting program that violated FCC’s indecency regulations during daytime hours (quoting 56 F.C.C.2d 94, 99)); Lehman v. City of Shaker Heights, 418 U.S. 298, 302, 304 (1974) (holding that passengers on rapid transit street cars are captive audiences). Under Grimmelmann’s model (and in my view, as well), it would certainly seem that Pacifica was wrongly decided because favoring the unwilling listeners’ interests there meant cutting off speech to many willing listeners. Listeners’ Choices Online, supra note 2, at 1269–70. a key function of the First Amendment is served by advancing the interests of speakers to influence those who are not inclined to agree with them.35This is setting aside other narrow areas in which unwanted speech causes cognizable harms, such as with true threats. See Virginia v. Black, 538 U.S. 343, 359 (2003) (defining true threats, which are not protected under the First Amendment, as “statements where the speaker means to communicate a serious expression of an intent to commit an act of unlawful violence to a particular individual or group of individuals”). As the Supreme Court has recognized:

[Speech] may indeed best serve its high purpose when it induces a condition of unrest, creates dissatisfaction with conditions as they are, or even stirs people to anger. Speech is often provocative and challenging. It may strike at prejudices and preconceptions and have profound unsettling effects as it presses for acceptance of an idea.36Terminiello v. Chicago, 337 U.S. 1, 4 (1949).

These express values are in direct tension with a purely listener-based approach. This may be particularly true of speech on social media, which the Court has argued is one of the “most important places . . . for the exchange of views.”37Packingham v. North Carolina, 582 U.S. 98, 104 (2017).

Thus, a second concern I have with a model prioritizing listeners’ choices over speakers’ is that its application, in many contexts, may impede what we might describe as lawful, but uncomfortable, speech that is intended to persuade.38On the importance of persuasion as a free speech value, see generally David A. Strauss, Persuasion, Autonomy, and Freedom of Expression, 91 Colum. L. Rev. 334 (1991). If listeners can confine themselves only to speech they want to hear, even in the social media context, then prioritizing that interest can operate as a kind of quiet heckler’s veto. In a social media environment in which listeners’ choice prevails, it is hard to imagine how persuasion might work, either individually or collectively. Are there any situations involving such speech through media in which the default position is not valuing the listener over the speaker, and if so, how could that decision be implemented?

Perhaps our society is headed in this direction already, given that, as Grimmelmann observes, even in the absence of regulation of selection intermediaries, listeners might deploy a combination of pure hosting platforms with middleware, third-party software that allows them to customize their feeds at an even greater level of detail.39Listeners’ Choices Online, supra note 2, at 1279–81. While this, too, would benefit listeners’ choices, it would move us in the direction of a more atomized speech universe—which is not necessarily a good thing, but at least it would not be the product of government intervention.

  3. Market Responses Are Already Enhancing Listeners’ Choices

Finally, one could argue that market forces are already moving toward a listener-centric model with the proliferation of niche social media platforms, even in the absence of regulatory interventions.40Aisha Jones, The Rise of Niche Social Media Platforms: Opportunities for Community Building, Kubbco (Feb. 7, 2024), https://www.kubbco.com/blog/the-rise-of-niche-social-media-platforms-opportunities-for-community-building [https://perma.cc/V8ZP-NHWB]. There is some evidence that users are beginning to migrate from more general social media sites, such as X (formerly known as Twitter), to special-interest platforms where they can avoid the cacophony of hostile rhetoric and engage with a smaller cohort of people who share common interests. That development certainly enhances listener choice without risking the possible unintended consequences of regulations designed to promote listeners’ choice.

Especially during the 2024 election season, there seemed to be growing dissatisfaction with general social media sites because of the unavoidability of sometimes harsh political discourse. It was not uncommon to hear calls for platforms dedicated to only discussion of books, movies, music, gaming, and other mostly nonpolitical (or, at least, not primarily political) topics that listeners sought out to find some respite. Sports lovers initially were the exception to this rule, although even those users have now started fleeing X.41Compare Jesus Jiménez, As Users Abandon X, Sports Twitter Endures, N.Y. Times (Oct. 27, 2023), https://www.nytimes.com/2023/10/27/sports/sports-twitter-x-elon-musk.html [https://web.archive.org/web/20250127170503/https://www.nytimes.com/2023/10/27/sports/sports-twitter-x-elon-musk.html], with Will Leitch, The Slow, Painful Death of Sports Twitter, N.Y. Mag.: Intelligencer (Feb. 27, 2024), https://nymag.com/intelligencer/article/the-slow-painful-death-of-sports-twitter-under-elon-musk.html [https://web.archive.org/web/20240927124315/https://nymag.com/intelligencer/article/the-slow-painful-death-of-sports-twitter-under-elon-musk.html].

Available statistics suggest that the market has responded to this interest and is already enhancing listener choice by serving its own matching function. About 115,000 users deactivated their X accounts on the day after the November 2024 Presidential Election.42Kat Tenbarge & Kevin Collier, X Sees Largest User Exodus Since Elon Musk Takeover, NBC News (Nov. 13, 2024, 1:40 PM), https://www.nbcnews.com/tech/tech-news/x-sees-largest-user-exodus-musk-takeover-rcna179793 [https://perma.cc/FZ3E-3XKQ]. No matter how the total user base is measured, that is a very small percentage, which is unsurprising because network effects deter people from leaving even platforms with which they are dissatisfied. Of course, people can maintain active X accounts while still seeking out other outlets for speech. In comparison, niche social media platforms are still quite small. One of the largest, Goodreads, a platform to share book recommendations, had about 150 million users as of 2023.43Phil Stamper-Halpin, How to Reach More Readers on Goodreads, Penguin Random House: News for Authors (Sept. 2023), https://authornews.penguinrandomhouse.com/how-to-reach-more-readers-on-goodreads [https://perma.cc/4JP5-8D9C]. Houzz, a home design social media platform, reportedly has about 70 million users.44Terri Williams, 2025 Houzz Home Design Trends: These Are the Top 10 Predictions, Forbes (Oct. 31, 2024, 4:07 AM), https://www.forbes.com/sites/terriwilliams/2024/10/31/2025-houzz-home-design-trends-these-are-the-top-10-predictions [https://perma.cc/CCH3-42Z9]. A platform for movie lovers (especially indie) called Letterboxd now has about 17 million users.45Jill Goldsmith, Letterboxd, Indie Cinema’s Secret Weapon, Hit 17 Million Members—Here Are Their Top 2024 Films, Deadline (Jan. 8, 2025, 9:11 AM), https://deadline.com/2025/01/letterboxd-indie-films-members-surge-in-2024-favorite-films-1236251217 [https://perma.cc/U6Y7-EGP9]. Reddit, while open to a wide range of users, is well known for facilitating smaller communities to generate discussion of interest, and now has about 91 million daily active users.46David Curry, Reddit Revenue and Usage Statistics (2025), Business of Apps, https://www.businessofapps.com/data/reddit-statistics [https://perma.cc/3JLY-DYYF]. Finally, Substack, a platform for distributing individualized newsletters to both paid and unpaid subscribers, now has approximately 50 million subscribers.47Max Tani, Substack Wants to Do More Than Just Newsletters, Semafor (Oct. 6, 2024, 4:58 PM), https://www.semafor.com/article/10/06/2024/substack-wants-to-do-more-than-just-newsletters [https://perma.cc/SR96-WCPC]; A New Economic Engine for Culture, Substack, https://substack.com/about [https://web.archive.org/web/20250331060253/https://substack.com/about].

It may seem somewhat contradictory to fret about information silos while simultaneously celebrating the expansion of niche social media sites. To address this briefly, I would argue that the siloing problem is much more acute on the larger, omnibus social media platforms than on niche social media platforms. Political discourse is one of the main features of the larger platforms, so cutting off ideologically diverse speech truly undermines the opportunities for persuasion. In contrast, the niche social media sites mostly exclude posts about other topics not because of any ideological commitments, but rather to help filter out what they regard as irrelevant information. That is not to say that political discourse cannot arise in the context of these niche sites,48I would certainly be the last to argue that things such as art or music do not evoke important social and political meaning. See generally Mark V. Tushnet, Alan K. Chen & Joseph Blocher, Free Speech Beyond Words: The Surprising Reach of the First Amendment (2017). but it is at least less likely to do so. And, of course, these users may be walling themselves off from any political speech, which could be problematic for public discourse in the long run. But there is nothing to suggest that these users might not still engage in political discourse on other platforms or in their offline lives.

* * *

Notwithstanding my limited reservations and questions, I wholeheartedly welcome Professor Grimmelmann’s important and valuable contribution to thinking about the complex constitutional and social issues associated with regulation of electronic media in the current climate. Continued efforts to meaningfully apply standard First Amendment doctrine to new media allow us all to think critically about the best way forward.

98 S. Cal. L. Rev. 1387


* Thompson G. Marsh Law Alumni Professor, University of Denver Sturm College of Law. Thank you to Erin Miller and to the editors and staff of the Southern California Law Review, and especially Simone Chu, for their efforts in organizing this fantastic Symposium. Thanks also to Nina Christensen and Charlotte Rhoad for their helpful research.

Protecting Listeners From Unwanted One-to-One Speech

I. The Value of the One-to-One vs. One-to-Many Line

“[N]o one has a right to press even ‘good’ ideas on an unwilling recipient,” the Supreme Court held in Rowan v. United States Post Office Department.1Rowan v. U.S. Post Off. Dep’t, 397 U.S. 728, 738 (1970). At the same time, “[t]he fact that society may find speech offensive is not a sufficient reason for suppressing it. Indeed, if it is the speaker’s opinion that gives offense, that consequence is a reason for according it constitutional protection.”2Hustler Mag., Inc. v. Falwell, 485 U.S. 46, 55 (1988) (cleaned up). That is generally true even if the speaker’s opinion gives offense not just to “society” but to many of the speaker’s listeners.3Bolger v. Youngs Drug Prods. Corp., 463 U.S. 60, 72 (1983).

The best way to reconcile these principles, it seems to me, is to distinguish (1) one-to-one speech said to an unwilling listener from (2) one-to-many speech that reaches both potentially willing and unwilling listeners.4Eugene Volokh, One-to-One Speech vs. One-to-Many Speech, Criminal Harassment Laws, and “Cyberstalking”, 107 Nw. U. L. Rev. 731 (2013); Eugene Volokh, Freedom of Speech in Cyberspace from the Listener’s Perspective: Private Speech Restrictions, Libel, State Action, Harassment, and Sex, 1996 U. Chi. Legal F. 377, 421–23 (1996); Eugene Volokh, Thinking Ahead About Freedom of Speech and Hostile Work Environment Harassment, 17 Berkeley J. Emp. & Lab. L. 305, 311 (1996); Eugene Volokh, Freedom of Speech and Workplace Harassment, 39 UCLA L. Rev. 1791, 1863–67 (1992) (using the terms “directed” and “undirected” instead of “one-to-one” and “one-to-many”). Ashutosh Bhagwat well explains both the precedents and the policy arguments supporting the distinction. Most speech should generally be protected because it may persuade or inform some potentially willing listeners even if others are upset.5Ashutosh Bhagwat, Respecting Listeners’ Autonomy: The Right to be Left Alone, 98 S. Cal. L. Rev. 1129, 1145 (2025). But speech said solely to an unwilling listener, where it’s clear the listener is unwilling, is likely only to offend. The government can in many situations help protect listeners against such one-to-one speech, because that promotes the unwilling listener’s autonomy without interfering with communication to potentially willing listeners.6Id. at 1145–48.

And this helps explain the constitutionality of many common speech restrictions, including:

  1. telephone harassment laws,7Volokh, One-to-One Speech, supra note 4, at 740.
  2. do-not-call registries,8See, e.g., Patriotic Veterans, Inc. v. Zoeller, 845 F.3d 303, 306 (7th Cir. 2017).
  3. harassment restraining orders that forbid speech to the protected person,9Volokh, One-to-One Speech, supra note 4, at 741.
  4. application of university “hostile environment harassment” policies to people “following students around and yelling slurs or otherwise directing hostile speech at individual students who have demanded to be left alone,”10Bhagwat, supra note 5, at 1153.
  5. application of workplace harassment law to one-to-one insults, or one-to-one repeated unwanted romantic advances,11Volokh, Freedom of Speech and Workplace Harassment, supra note 4, at 1863–68.
  6. residential picketing laws,12Bhagwat, supra note 5, at 1144–45. and more.

II. Must Restrictions on Unwanted One-to-One Speech Be Content-Neutral?

This general conclusion, however, raises subsidiary questions. A particularly important one is whether restrictions on one-to-one speech must be content-neutral.

There is precedent suggesting this, as well as broader First Amendment principles supporting such a view. Frisby v. Schultz upheld a content-neutral residential picketing ban on the grounds that such picketing is essentially speech targeted to the unwilling listener in the home.13Frisby v. Schultz, 487 U.S. 474, 486, 488 (1988). But Carey v. Brown had earlier struck down a residential picketing ban that excluded labor picketing because that exclusion made the law content-based.14Carey v. Brown, 447 U.S. 455, 470–71 (1980). It was the content neutrality of the ban in Frisby that saved it.15Frisby, 487 U.S. at 481, 488.

We see something similar in Rowan v. United States Post Office Department.16Rowan v. U.S. Post Off. Dep’t, 397 U.S. 728, 738 (1970). Rowan upheld a statute that barred senders from sending material to householders, once the householder informed the post office that he “in his sole discretion believes [the mailings] to be erotically arousing or sexually provocative.”17Id. at 730. The statute was thus content-based on its face, but the Court stressed it was essentially content-neutral as enforced:

Both the absoluteness of the citizen’s right under [the statute] and its finality are essential; what may not be provocative to one person may well be to another. In operative effect the power of the householder under the statute is unlimited; he may prohibit the mailing of a dry goods catalog because he objects to the contents—or indeed the text of the language touting the merchandise. Congress provided this sweeping power not only to protect privacy but to avoid possible constitutional questions that might arise from vesting the power to make any discretionary evaluation of the material in a governmental official.18Id. at 737.

Yet if content neutrality is indeed required in such situations, then many restrictions on one-to-one speech would be hard to defend. Telephone harassment laws, for instance, often specially target lewd or indecent harassing calls.19See, e.g., Wash. Rev. Code Ann. § 9.61.230 (2024). Workplace harassment law ends up specially targeting one-to-one speech that is personally insulting.

Likewise, when various laws target one-to-one speech intended to “harass” or “abuse,” they must be treated as content-based. As the Court held in Reed v. Town of Gilbert, “[s]ome facial distinctions based on a message are obvious, defining regulated speech by particular subject matter, and others are more subtle, defining regulated speech by its function or purpose. Both are distinctions drawn based on the message a speaker conveys, and, therefore, are subject to strict scrutiny.”20Reed v. Town of Gilbert, 576 U.S. 155, 163–64 (2015). When the “regulated speech” is defined by a purpose to harass or abuse, that definition generally targets speech that has a harassing or abusive “message.” The definition is therefore content-based.

More broadly, when even a “generally applicable law” is “directed at [a speaker] because of what his speech communicated”—when the speaker violates the law “because of the offensive content of his particular message”—that too is treated as a “content-based regulation of speech.”21Holder v. Humanitarian L. Project, 561 U.S. 1, 28 (2010). This would cover most harassment laws, at least when speech is found to be harassing because of its offensiveness rather than because it’s too loud or ties up telephone lines.

Indeed, relatively few of these laws actually set up Rowan-like rules that (1) require the listener to first tell a speaker, “stop speaking to me,” but then (2) make that order binding regardless of what the speaker wants to say. The laws are indeed aimed at “address[ing] the ‘first blow’ of curse words spoken only once.”22Bhagwat, supra note 5, at 1154. At the same time, they aim to avoid giving someone an absolute veto on future communications: consider, for instance, workplace harassment, where the law can’t let employees categorically forbid any future communications (including on legitimate work-related topics) by coworkers.

Now perhaps that’s the wrong approach—perhaps the law should indeed insist on content neutrality even as to restrictions on unwanted one-to-one speech. Or perhaps content-based restrictions should indeed be subjected to strict scrutiny but might in some situations be upheld.

But I think it might be better to recognize that at least some such content-based restrictions are permissible when it comes to one-to-one speech, even if they wouldn’t be permissible as to one-to-many speech. The Court has acknowledged that content-based restrictions may be constitutional when “substantial privacy interests are being invaded in an essentially intolerable manner.”23Erznoznik v. City of Jacksonville, 422 U.S. 205, 209–10 (1975) (quoting Cohen v. California, 403 U.S. 15, 21 (1971)). Perhaps the “privacy interests” here should be read as not just focusing on privacy in the home, or true captivity of a sort where it is “impractical for the unwilling viewer or auditor to avoid exposure.”24Id. at 209. Rather, perhaps they should also be seen as including intrusions on the listener’s autonomy rights that Professor Bhagwat rightly identifies: the targeting of a particular likely unwilling listener for one-to-one speech may be what is “essentially intolerable.”

R.A.V. v. City of St. Paul25R.A.V. v. City of St. Paul, 505 U.S. 377 (1992). may provide a helpful framework for dealing with this. The Court in R.A.V. held that content-based restrictions must generally be subject to strict scrutiny even when they are limited to subsets of unprotected categories of speech. For instance, a ban on racist fighting words would be presumptively unconstitutional even though a ban on all fighting words would be valid.26Id. at 386. But the Court also held that this principle has certain exceptions, again where the content discrimination is entirely within an unprotected category; the relevant exceptions are:

  1. “[w]hen the basis for the content discrimination consists entirely of the very reason the entire class of speech at issue is proscribable,”27Id. at 388. for instance when the law restricts “only that obscenity which is the most patently offensive in its prurience,” or “only those threats” that are especially disruptive;28Id.
  2. when “a particular content-based subcategory of a proscribable class of speech” is “swept up incidentally within the reach of a statute directed at conduct rather than speech”;29Id. and
  3. when “the nature of the content discrimination is such that there is no realistic possibility that official suppression of ideas is afoot.”30Id. at 390.

The same might apply with regard to subcategories of likely unwanted one-to-one speech, if Professor Bhagwat and I are right that such speech is essentially constitutionally unprotected. Indecent harassing phone calls, for instance, may well be especially likely to be unwanted, and a restriction on such calls may indeed be unlikely to involve “official suppression of ideas.”

Likewise, a prohibition of one-to-one speech intended to abuse or harass might be justified on the same theory, and might also be “swept up incidentally within the reach of a statute directed at conduct rather than speech,” given that such harassment laws often do target nonspeech conduct (such as physical stalking) as well as speech. R.A.V. itself gave hostile environment harassment law as an example of a law that may “incidentally” “swe[ep] up” “sexually derogatory ‘fighting words,’ among other words,” because it bans a wide range of conduct as well as speech.31Id. at 389. Likewise, the law may incidentally sweep up derogatory unwanted one-to-one speech more broadly. (For reasons I explain elsewhere, this rationale does not extend to offensive one-to-many ideological expression, even when it’s viewed as sexist, racist, and the like.32Volokh, Freedom of Speech and Workplace Harassment, supra note 4, at 1848–55.)

III. When Must the Government Tolerate One-to-One Speech to Government Officials?

Though one-to-one speech to unwilling listeners may generally be forbidden, the analysis must be different when the speech is addressed to government employees on the job, especially public-facing employees. I agree with Professor Bhagwat that listeners generally have considerable autonomy interests in not hearing unwanted speech—interests that the government may protect. But when one works for the public,33Query whether the same principle should also apply to public-facing employees of some private companies. one must accept the risk of disapproving speech from the public:

[R]eceiving mail from disgruntled constituents is usual for a politician. A person “who decides to seek governmental office must accept certain necessary consequences of that involvement in public affairs . . . [and] runs the risk of closer public scrutiny than might otherwise be the case.” Here, given Michael’s status as a selectman and the content of the letters, it cannot be said that Michael’s “substantial privacy interests [were] invaded in an essentially intolerable manner.”34Commonwealth v. Bigelow, 59 N.E.3d 1105, 1113 (Mass. 2016) (citations omitted).

This is particularly clear for elected officials,35Id. at 1108, 1112 (town council member); U.S. Postal Serv. v. Hustler Mag., Inc., 630 F. Supp. 867 (D.D.C. 1986) (Congressman); Hicks v. Faris, No. 1:20-CV-680, 2024 WL 4011824, at *14 (S.D. Ohio Aug. 30, 2024) (county treasurer); see also United States v. Yung, 37 F.4th 70, 78–79 (3d Cir. 2022) (dictum) (city councilman). candidates for office,36State v. Drahota, 788 N.W.2d 796, 798, 804 (Neb. 2010) (candidate for state legislature); United States v. Sryniawski, 48 F.4th 583, 587 (8th Cir. 2022) (same). or high-level political appointees.37United States v. Popa, 187 F.3d 672, 673 (D.C. Cir. 1999) (U.S. Attorney). But it may be true for lower-level public-facing employees as well, such as police officers38State v. Fratzke, 446 N.W.2d 781, 782, 785 (Iowa 1989). or others.39State v. Golga, 239 N.E.3d 1165 (Ohio Ct. App.) (water department). Some cases do allow punishing offensive speech to such employees,40State v. White, No. 2024CA00044, 2025 WL 354802 (Ohio Ct. App. Jan. 29, 2025) (police officer); United States v. Waggy, 936 F.3d 1014, 1015 (9th Cir. 2019) (Veterans Administration employee). but I think they’re mistaken.41Cf. Hagedorn v. Cattani, 715 F. App’x 499, 507 (6th Cir. 2017) (viewing the Rowan principle as applicable to speech to a mayor’s personal email account because it is the “functional equivalent of a home mailbox”).

IV. The Borders of “One-to-One”

Finally, “one-to-one” and “one-to-many,” like many such useful general phrases, may not fully capture the legal principles that courts should and do apply. To give one example, say someone is speaking simultaneously to three listeners, all of whom have asked the speaker to stop bothering them. That’s technically one-to-three speech, not one-to-one speech. But it should be restrictable as tantamount to one-to-one speech, precisely because it is addressed solely at unwilling listeners.

Likewise, say Wendy Smith’s ex-husband Harry Smith posts a Facebook message on his own page saying, “My ex @WendySmith is a slimy trollop.” (This @ syntax is specifically designed to notify the Facebook user WendySmith about the post; Twitter and Instagram have the same feature.) It is thus more or less like an e-mail to Wendy (one-to-one speech), coupled with a post about her to the author’s friends (one-to-many speech). If Wendy gets a harassment restraining order barring further correspondence from Harry, it would be constitutionally permissible for that order to be interpreted as banning such mentions; Harry would still be able to communicate with his friends by posting the same item without the @ (“My ex Wendy Smith is a slimy trollop”).42See, e.g., ARM v. KJL, 995 N.W.2d 361, 368–69 (Mich. Ct. App. 2022).

The hardest question arises when speech appears to be largely aimed at a particular unwilling listener but also reaches some other listeners. This is what the Court faced in Frisby v. Schultz, where it reasoned that residential “picketing is narrowly directed at the household, not the public”:

The type of picketers banned by the Brookfield ordinance generally do not seek to disseminate a message to the general public, but to intrude upon the targeted resident, and to do so in an especially offensive way. Moreover, even if some such picketers have a broader communicative purpose, their activity nonetheless inherently and offensively intrudes on residential privacy. . . .

Because the picketing prohibited by the Brookfield ordinance is speech directed primarily at those who are presumptively unwilling to receive it, the State has a substantial and justifiable interest in banning it.43Frisby v. Schultz, 487 U.S. 474, 487–88 (1988).

Here the speech wasn’t “foisted (exclusively) upon unwilling listeners”44Bhagwat, supra note 5, at 1147.—presumably at least some residential picketers also want to reach the resident’s neighbors.45See Schultz v. Frisby, 807 F.2d 1339, 1341 (7th Cir. 1986), vacated, 818 F.2d 1284 (7th Cir. 1987). Rather, the Court says the speech was targeted “primarily” at the resident and acknowledges that it might have also had “a broader communicative purpose.”

Distinguishing the “primary” audience from the “secondary” is of course subjective, plus it’s not clear why even secondary audiences should be ignored. For instance, if animal rights protesters are picketing outside a fur store, is their speech “directed primarily” at buyers, who are likely “unwilling to receive” the message (especially if the message is framed harshly)? After all, fur buyers presumably know well where the fur comes from—and like it. Or is the speech directed at least equally to neighbors and passersby, or to the likely relatively rare ambivalent customer?

Likewise, most people who go to abortion clinics are likely unwilling to hear from anti-abortion protesters and counselors, but some might be open to their arguments.46See McCullen v. Coakley, 573 U.S. 464, 473 (2014) (“In unrefuted testimony, petitioners say they have collectively persuaded hundreds of women to forgo abortions.”). Most people who go to churches, synagogues, or mosques that are being picketed are unwilling to hear from protesters,47For cases upholding right to picket outside places of worship, see generally Survivors Network of Those Abused by Priests, Inc. v. Joyce, 779 F.3d 785 (8th Cir. 2015); Gerber v. Herskovitz, No. 22-1075, 2023 WL 2155050 (6th Cir. Feb. 22, 2023). but again some might be persuadable.

I’m not sure how this line is to be properly drawn. Perhaps courts should view Frisby as limited to “residential privacy,” given its reliance on the precedents saying that, “[a]lthough in many locations, we expect individuals simply to avoid speech they do not want to hear, the home is different.”48Frisby, 487 U.S. at 484 (citing Erznoznik v. City of Jacksonville, 422 U.S. 205, 210–11 (1975), and Cohen v. California, 403 U.S. 15, 21–22 (1971)). On the other hand, there will always be arguments for extending this sort of extra protection beyond the home to medical facilities,49Hill v. Colorado, 530 U.S. 703, 718 (2000). funeral homes,50Phelps-Roper v. Ricketts, 867 F.3d 883 (8th Cir. 2017); Phelps-Roper v. Strickland, 539 F.3d 356 (6th Cir. 2008). high schools,51Blythe v. City of San Diego, No. 24-CV-02211-GPC-DDL, 2025 WL 108185, at *4 (S.D. Cal. Jan. 14, 2025). places of worship,52Id. at *1. and more. Here, I just want to acknowledge the difficulty that this issue raises.

Conclusion

The one-to-one/one-to-many distinction is critical to understanding how and when unwilling listeners may be protected. I hope this short article has helpfully elaborated on a few questions the distinction raises.

98 S. Cal. L. Rev. 1427


* Thomas M. Siebel Senior Fellow, Hoover Institution (Stanford); Gary T. Schwartz Distinguished Professor of Law Emeritus, UCLA.

Filtered Dragnets and the Anti-Authoritarian Fourth Amendment

Filtered dragnets are digital searches that identify a suspect based on the details of a crime. They can be designed to withhold information from law enforcement unless and until there is a very high probability that the individual has committed the offense. Examples today include DNA matching, facial recognition from photographs or video of a crime, automated child sexual abuse material detection, and reverse geolocation (geofence) searches. More are sure to come, and their wide-scale use will prove irresistible as a way to improve the low rates of criminal detection that currently afflict many communities.

However, filtered dragnets imperil society precisely because they detect crime too well. Sudden increases in the detection of criminal conduct will intensify the pathologies of American criminal justice: namely, that too many marginally harmful acts are criminalized, crimes are punished too harshly, and police and prosecutors have too much discretion. If nearly everybody commits some technical violation of criminal law that can be easily detected and harshly punished, all Americans will be at the mercy of the constable’s pity.

These threats are not well constrained by current Fourth Amendment jurisprudence, based on privacy rights, because filtered dragnets detect crime without revealing irrelevant details. Thus, Fourth Amendment theory and doctrine must strengthen the anti-authoritarian objectives embedded in the Amendment’s roots. A search conducted with a filtered dragnet should be considered reasonable only if it is administered in an evenhanded manner, and a subsequent seizure of a person is reasonable only when the misconduct is abhorrent enough to justify arrest and imprisonment.

INTRODUCTION

Nearly forty years ago, Justice Brennan asked his colleagues, who had just given a constitutional stamp of approval to the drug-sniffing dog, to imagine a device “that, when aimed at a person, would detect instantaneously whether the person is carrying cocaine.”1United States v. Jacobsen, 466 U.S. 109, 138 (1984) (Brennan, J., dissenting). Justice Brennan went on to criticize the majority for ignoring not only the privacy interest that is intruded upon, but also the accuracy of the technique (or lack thereof) and “whether the surveillance technique is employed randomly or selectively.” Id. at 140. If the device could detect the presence of cocaine inside a building, “there would be no constitutional obstacle to the police cruising through a residential neighborhood and using the device to identify all homes in which the drug is present.”2Id. at 138. For a thoughtful discussion of this dissenting opinion, see Kiel Brennan-Marquez, Big Data Policing and the Redistribution of Anxiety, 15 Ohio State J. Crim. L. 487, 491–92 (2018). He believed the prospect of police having a tool of near-perfect detection presented a catastrophic threat that the courts have a duty to stop.

We are no longer far off from this scenario,3With the exception of conduct that takes place on the Internet and the geolocation of smart devices, the vast majority of human affairs still occurs outside the realm of digitized documentation. That said, sensor technologies, facial recognition, and biometric surveillance are beginning to convert more offline activities into tracked or trackable affairs. Perhaps the technologies in development most analogous to Justice Brennan’s cocaine device are quantum magnetometry sensors that are sensitive enough to detect materials through walls and underground. See Chris Jay Hoofnagle & Simson L. Garfinkel, Law and Policy for the Quantum Age 31–76 (2022). and some strategies already in use by law enforcement and intelligence agencies are similar to Brennan’s machine. Examples include DNA matching, facial recognition from photographs or video of a crime when it was in progress, automated child sexual abuse material detection, and reverse digital searches (where police use information known about the crime, such as location, timing, or special instrumentalities, to cross-check against service provider data in order to identify a suspect). Many more of these investigative techniques are sure to come, especially if or when the Internet of Things reaches its potential by placing increasingly powerful sensors on nearly every machine.

Twenty-first century policing will increasingly use data collected from tracking and sensing technologies to conduct investigations that work backwards. Law enforcement will use the particulars of a crime as a “fingerprint,” so to speak, to determine who should belong in the pool of suspects. Unlike the standard dragnet, which permits law enforcement to observe large amounts of data and to choose their targets, filtered dragnets force investigations to focus on the evidence of a crime. Computers will automatically scan through data without exposing it and will make a disclosure only when there is probable cause to believe that a person’s data matches the signature of the crime. Moreover, even when data is disclosed, filtered dragnet programs can be designed so that the only data revealed is potentially relevant data; extraneous details can be withheld.

When surveillance technologies meet all these benchmarks—that is, when (1) they are used to find an individual related to a crime (rather than to find a crime related to an individual), (2) they report details from an otherwise private database only after meeting a high threshold of confidence (e.g., probable cause or higher), and (3) they withhold details that are ex ante unlikely to be relevant to the current criminal investigation—the nature of that surveillance is different from other types of police work. Filtered dragnets, as I will call them, are structured to avoid many problems traditionally associated with mass surveillance.
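
As a rough structural illustration (and only that; none of the names, fields, or the numeric threshold below describe any deployed system), a filtered dragnet can be modeled as a matching routine that scans every record automatically, discloses a record only when its match to the crime’s signature clears a preset confidence threshold, and even then discloses only the signature-relevant fields.

```python
# Structural sketch of a filtered dragnet as described above. The threshold,
# field names, and scoring rule are assumptions for exposition, not a claim
# about how any real DNA, facial-recognition, or geofence system works.

from typing import Iterable, List, Dict

DISCLOSURE_THRESHOLD = 0.95  # hypothetical stand-in for "probable cause or higher"

def match_score(record: Dict, crime_signature: Dict) -> float:
    """Fraction of signature attributes (location, time window, profile, etc.)
    that this record matches."""
    hits = sum(1 for key, value in crime_signature.items() if record.get(key) == value)
    return hits / len(crime_signature) if crime_signature else 0.0

def filtered_dragnet(records: Iterable[Dict], crime_signature: Dict) -> List[Dict]:
    """Scan all records automatically; disclose only high-confidence matches,
    and only the fields tied to the crime's signature plus an identifier."""
    disclosures = []
    for record in records:
        if match_score(record, crime_signature) >= DISCLOSURE_THRESHOLD:
            disclosures.append({
                "subject_id": record.get("subject_id"),
                **{k: record[k] for k in crime_signature if k in record},
            })
        # Records below the threshold are never surfaced to investigators.
    return disclosures
```

The point of the sketch is the structure rather than the scoring: the scan is automated across the whole dataset, nothing below the threshold is ever revealed, and what is revealed is limited to the signature fields, tracking the three benchmarks just listed.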

Fourth Amendment theory and reasoning is just starting to find its legs in digital search cases,4See Carpenter v. United States, 138 S. Ct. 2206, 2209 (2018) (accessing several days’ worth of geolocation data constitutes a search that will ordinarily require a warrant); United States v. Jones, 565 U.S. 400, 413–15 (2012) (Sotomayor, J., concurring) (arguing that GPS tracking should be a search irrespective of whether a tracking device has physically intruded into a protected area). but filtered dragnets will destabilize criminal procedure law again. They will whittle down most of the privacy rationales for Fourth Amendment protection. Mounting a Fourth Amendment defense will require a litigant to convincingly argue that even though the defendant very likely committed a crime, and even though the police did not see or have discretionary access to data for any other persons and did not even have irrelevant data about the defendant for that matter, the search was nevertheless unreasonable. That sort of privacy über alles argument might work for crimes of questionable legitimacy—drug possession, for example—but it won’t work in the context of universally reviled conduct like murder.

What is more, filtered dragnets may reduce privacy intrusions on net, as compared with current investigation techniques, because they can remove many people from the scope of suspicion who would otherwise become targets of investigation. In other words, filtered dragnets break the privacy-security trade-off because they simultaneously increase criminal detection and privacy. As Bennett Capers has explained, they may be a useful tool to simultaneously tackle under-protection and over-policing problems.5I. Bennett Capers, Techno-Policing, 15 Ohio State J. Crim. L. 495, 496 (2018) (“The task is to reimagine Big Brother so that he not only watches us; he also watches over us—to reimagine Big Brother as protective, and as someone who will be there to tell our side of the story.”); I. Bennett Capers, Crime, Surveillance, and Communities, 40 Fordham Urb. L.J. 959, 989 (2013). For a discussion of the moral injuries when police cause indignities and abuse, see Eric J. Miller, The Moral Burdens of Police Wrongdoing, 97 Res Philosophica (2020). Outright bans of these technologies, as have been advocated in many corners,6See, e.g., Antoaneta Roussi, Resisting the Rise of Facial Recognition, 587 Nature 350, 352 (2020) (quoting Woodrow Hartzog, who described facial recognition technology as the “most dangerous ever to be invented”); Kate Conger, Richard Fausset & Serge F. Kovaleski, San Francisco Bans Facial Recognition Technology, N.Y. Times (May 14, 2019), https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-sanfrancisco [https://perma.cc/858W-&M6N] (quoting ACLU attorney Matt Cagle, praising the ban as “forward-looking and looks to prevent the unleashing of this dangerous technology against the public”); Matthew Guariglia, Geofence Warrants and Reverse Keyword Warrants Are So Invasive, Even Big Tech Wants to Ban Them, Elec. Frontier Found. (May 13, 2022), https://www.eff.org/deeplinks/2022/05/geofence-warrants-and-reverse-keyword-warrants-are-so-invasive-even-big-tech-wants [https://perma.cc/VG22-ENMH]. would be irresponsible.7Undeterred crime is oppressive and unequal, too. James Forman Jr., Locking Up Our Own: Crime and Punishment in Black America 96–99 (2018); Alexandra Natapoff, Underenforcement, 75 Fordham L. Rev. 1715, 1715 (2006).

Nevertheless, even if filtered dragnets detect crime and nothing else, they pose serious social risks that Fourth Amendment law and scholarship are ill equipped to handle: What happens to Fourth Amendment theory and the practice of criminal justice if nearly every crime could be detected?

In the late 1990s, Larry Lessig asked this very question.8Lawrence Lessig, Code and Other Laws of Cyberspace 18 (1999) (“This difference complicates the constitutional question. The [technology’s] behavior is like a generalized search in that it is a search without suspicion, but it is unlike the paradigm case of a generalized search in that it creates no disruption of ordinary life and finds only contraband. . . . Is [it] constitutional? That depends on your conception of what the Fourth Amendment protects. . . . The paradigm case cited by the framers does not distinguish between these two very different protections. It is we, instead, who must choose.”). He anticipated that digital technologies may create a wedge between the privacy and anti-authoritarian rationales for criminal procedure. But most Fourth Amendment scholars do not even recognize a schism between privacy and anti-authoritarian goals. Instead, they continue to focus on privacy as the key constraint on any police activity that leverages large amounts of personal data. The scholars who have recognized liberty and anti-authoritarianism as a Fourth Amendment lodestar have insisted that all technology-assisted surveillance is a tool of abusive state power per se.9Paul Ohm, The Fourth Amendment in a World Without Privacy, 81 Miss. L. J. 1309, 1334–38, 1346 (declaring that considerations of power seem to be “the amendment’s essence, not merely a proxy for something deeper,” but then equating abuses of state power with the ability to solve crimes faster); David Alan Sklansky, Too Much Information: How Not to Think About Privacy and the Fourth Amendment, 102 Calif. L. Rev. 1069, 1120 (2014) (advocating for Fourth Amendment protection against any electronic surveillance that fails to leave a sphere of refuge or autonomy for the individual); Andrew Guthrie Ferguson, Surveillance and the Tyrant Test, 110 Georgetown L. J. 205, 266 (2021). But see Richard M. Re, Imagining Perfect Surveillance, 64 UCLA L. Rev. Discourse 264, 274–276, 281–285 (2016). Re’s essay, set in the year 2026 and describing a fictitious tool of perfect surveillance and crime reporting, anticipates the need for courts to shift the focus of Fourth Amendment law to the substance of criminal law. As a result, Fourth Amendment scholars lump filtered dragnets with all other surveillance and advocate for the strictest access controls, guaranteeing the continuation of a low rate of criminal detection.

This is the wrong course. The threat from filtered dragnets is tyranny, and the Fourth Amendment will be more effective and coherent if we recognize that. Filtered dragnets will dramatically increase the detection of crime, and this will intensify existing pathologies in American criminal justice that have little to do with privacy. Namely, we have too many crimes, too much punishment, and too much police and prosecutorial discretion. These problems jointly produce the risk of authoritarian power. An overly expansive criminal code paired with harsh penalties ensures that nearly everybody could be subjected to incarceration.10Glenn Harlan Reynolds, Ham Sandwich Nation: Due Process When Everything Is a Crime, 113 Colum. L. Rev. Sidebar 102, 103–04 (2013). See generally Harvey A. Silverglate, Three Felonies a Day: How the Feds Target the Innocent (2011). When the state also has unchecked power to choose where and when to investigate within the ocean of criminal-but-typically-ignored conduct, the populace is at the mercy of the state’s will.11Filtered dragnets, like any tool that cheaply and accurately finds evidence of crime, will not necessarily cause the state to abuse its power, but they will certainly give legislatures, police, and prosecutors a mechanism to abuse power more efficiently if they so choose.

Today, the criminal justice equilibrium rests on an unspoken compromise. The state has broad substantive law, harsh punishment, and unchecked discretion, it is true, but the populace has privacy rights that nearly guarantee low detection, even when police are highly motivated. When filtered dragnets give police near-perfect detection, the bargain has to be renegotiated.

This Article proposes a new grand bargain for Fourth Amendment law: the Supreme Court should recognize filtered dragnets as a legitimate and even desirable tool for criminal investigations. But constitutional rules should guarantee that the substance of American criminal law will be limited to conduct that is commonly recognized as heinous, that the severity of the punishment fits the reprehensibility of the crime, and that the enforcement of criminal laws is equitable and nonarbitrary.12In other words, as described in detail infra Part III, reversing Smith v. Maryland, 442 U.S. 735 (1979) and the third party doctrine will be of minimal relevance to the just use of filtered dragnets. Instead, cases that permit carceral arrest for minor misconduct (Atwater v. City of Lago Vista, 532 U.S. 318 (2001)) and that give police unfettered discretion in investigation and enforcement decisions (Whren v. United States, 517 U.S. 806 (1996)) are of much greater consequence. See infra Part V. Without these civil rights, if the substance of criminal law is left as broad and vague as it is today,13On vagueness and overbreadth, see Silverglate, supra note 10, at XI–XVI. See generally Risa Goluboff, Vagrant Nation (2016); Kiel Brennan-Marquez, Extremely Broad Laws, 61 Ariz. L. Rev. 641 (2019). and if penalties and the impact of prison are as debilitating as they are now, filtered dragnets would give the government the means of exercising tyrannical control through the omnipresent threat of criminal enforcement and the power of discretionary clemency.

This Article proceeds as follows: Part I describes some filtered dragnets that are already in use and lays out the essential features that distinguish them from other investigation tools.

Part II describes the potential social benefits that can be gained from the responsible use of filtered dragnets.

Part III describes the scholarship and caselaw challenging the constitutionality of filtered dragnets on privacy grounds and explains why those challenges miss the mark. By most common-sense meanings of privacy, filtered dragnets are in fact far more privacy-protective than the sorts of investigations that routinely occur.

Part IV shows that the real threat of filtered dragnets is not an invasion of privacy but an invitation to tyranny. Perfect detection of crime in a system where criminal statutes are sprawling and criminal penalties are harsh will either create a country of convicts or give the government too much power to engage in selective leniency.

Part V reinterprets the Fourth Amendment prohibition of unreasonable searches and seizures to fit the criminal justice problems that emerging surveillance technologies will cause. The reasonableness of a seizure should depend on whether the defendant’s conduct truly warrants criminal liability and penalties. The reasonableness of a search should depend both on expectations of privacy and on evenhanded investigation practices.

Part VI explains why the Constitution, and the Fourth Amendment in particular, is well suited to carry out this shift even though it would mark a departure from twentieth-century precedent.

The agenda laid out in this Article is ambitious—almost embarrassingly so. What I propose here would require a seismic shift in Fourth Amendment principles that would cross the procedural/substantive divide.14Other scholars have advocated for a Fourth Amendment theoretical inquiry that breaks out of a purely procedural lane. Morgan Cloud, Pragmatism, Positivism, and Principles in Fourth Amendment Theory, 41 UCLA L. Rev. 199, 200 (1993) (“The fragmentation of constitutional theory in law school curricula and academic scholarship is nowhere more evident than in the isolation of the fourth amendment from broad currents of contemporary jurisprudence. . . . This isolation has impoverished both fourth amendment theory and general constitutional theory alike.”); William J. Stuntz, The Substantive Origins of Criminal Procedure, 105 Yale L.J. 393, 393–411 (1995). Given that, I take comfort in the fact that I am not painting on blank canvas. This project is a remix of themes developed by Bill Stuntz,15William J. Stuntz, The Collapse of American Criminal Justice (2011). Bennett Capers,16Capers, supra note 5. Elizabeth Joh,17Elizabeth E. Joh, Discretionless Policing: Technology and the Fourth Amendment, 95 Calif. L. Rev. 199 (2007). Bernard Harcourt and Tracey Meares,18Bernard E. Harcourt & Tracey L. Meares, Randomization and the Fourth Amendment, 78 U. Chi. L. Rev. 809 (2011). Chris Slobogin,19Christopher Slobogin, Government Data Mining and the Fourth Amendment, 75 U. Chi. L. Rev. 317 (2008). Mark Kleiman,20Mark A. R. Kleiman, When Brute Force Fails (2009). and many others. Even so, it is awfully presumptuous to suggest courts might start invalidating criminal laws or sentencing rules using a new-fangled conception of the Fourth Amendment. But I will suggest it anyway because it is the only desirable and realistic option. The criminal justice system needs to be transformed in a manner that accepts much greater levels of detection in exchange for many fewer criminal prohibitions and punishments. It is a trade that has to be executed simultaneously in order to avoid disastrous consequences.21Criminal liability and sentencing cannot be reduced unless and until the detection of serious crimes is improved. Otherwise, the inevitable crime wave will turn on the backlash machinery of increased sentences and bloated criminal codes. On the other hand, unleashing filtered dragnet technologies without fixing existing statutes and sentences will expose many more people to criminal liability than is justified and will create too many opportunities for biased or opportunistic enforcement. See infra Part V. No legislative or local government process could pull off a massive rights horse trade of the sort that is required. It can only be accomplished through the style of landmark constitutional cases that, every generation or so, help realign Fourth Amendment operational rules with the ultimate purpose of Fourth Amendment protection.22I am referring here to the transition the Fourth Amendment made from a protection of property interests to a protection of privacy following Katz v. United States, 389 U.S. 347 (1967). See discussion infra Part V.

I.  WHAT ARE FILTERED DRAGNETS?

The progenitors of filtered dragnets have been around for a while. Fingerprinting analysis is a well-known and time-honored method of backwards investigation where the facts from the scene of a crime (the fingerprint markings) are cross-checked against a large stockpile of information in order to make a fairly confident match to a particular suspect.23Davis v. Mississippi, 394 U.S. 721, 727 (1969). Police dogs are another example.24Illinois v. Caballes, 543 U.S. 405, 409 (2005). We know that the mind-boggling sensitivity of a dog’s nose is such that, if the dog could talk, it could reveal vast amounts of information about a person—what is inside their bag, the state of their health, whether they have recently been in contact with other people—details that are unobservable to us mere humans. In some sense, the mind of a police dog is a treasure trove of personal information that remains inaccessible to police most of the time. But when a dog is trained to alert to contraband or to specific scents sampled from a crime scene, the dog and its training combine to create a “binary search”—a mechanism that tells the police nothing unless there is probable cause that a crime is being committed.25Jane Bambauer, Defending the Dog, 91 Ore. L. Rev. 1203, 1203 (2013).

These crime-driven, quasi-filtered investigations are the outliers in a system of police investigation that relies much more heavily on witnesses, confessions, and physical searches.26Throughout this article, I will distinguish suspect-driven investigations from crime-driven searches. See Slobogin, supra note 19, at 322–23 (using the term “event-driven”); Jane Bambauer, Other People’s Papers, 94 Tex. L. Rev. 205, 208 (2015) (using the term “crime-out”). But we can expect the practice to rapidly expand because of the greater amounts and variability of data available for cross-checking the facts of a crime against data from the population of potential suspects.

This Part lays out the two required features of filtered dragnets that will cause an unprecedented shock to Fourth Amendment theory. It then surveys techniques already in use that either satisfy the definition of a filtered dragnet or soon will.

A.  Required Elements to Qualify as a Filtered Dragnet

Filtered dragnets provide a suspect’s data to police only if (a) their data matches uniquely criminal details such that there is a high probability they have engaged in criminal conduct; and (b) their data has been pared down to provide only relevant details about the suspected crime to the police. When combined, these features make filtered dragnets a qualitatively different style of police investigation.27Jack Balkin bristles when scholars describe “essential features” of a technology. Jack M. Balkin, The Path of Robotics Law, 6 Calif. L. Rev. Cir. 45, 45 (2015). Suffice it to say that I am defining here a techno-social application of data collection and processing. The same technology can be used in other ways, of course, but then those uses would not meet my definition of a “filtered dragnet.”

1.  Automated Matching of Uniquely Criminal Details

Filtered dragnet investigations will trawl through and process large amounts of data. There is no doubt that they are dragnets. But to qualify as a filtered dragnet, the filter of the dragnet must constrain the system’s ability to leak information. A filtered dragnet must be programmed to alert police only if an individual’s data matches a unique fingerprint of a crime.28David H. Kaye, Identification, Individualization and Uniqueness: What’s the Difference?, 8 L. Probability & Risk 85, 92 (2009). In other words, the system blinds the police until at least probable cause (and ideally an even higher level of suspicion) is established.

Filtered dragnets are a subset of the category of investigations that Christopher Slobogin calls “suspectless searches.”29Christopher Slobogin, Suspectless Searches, 83 Ohio State L.J. 953, 954 (2022) [hereinafter Slobogin, Suspectless Searches]; see Christopher Slobogin, Virtual Searches 127–48 (2022) [hereinafter Slobogin, Virtual Searches]. Slobogin describes many of the same techniques that I do here, but his analysis has less futurism and is more interested in the way the Fourth Amendment should handle suspectless searches right now, when many cannot or do not match to uniquely criminal profiles. But they are a narrow subset. Very few of the suspectless searches that Slobogin analyzes (many of which I describe below) have the potential to become filtered dragnets. As they are practiced today, they will not meet the heightened standards for filtered dragnets because they do not use unique signatures of criminal behavior. For example, geofencing and familial DNA-matching procedures often allow police today to access data about a handful of individuals, all but one of whom are necessarily innocent, in order to help the police create leads for traditional follow-up investigation. To find the Golden State Killer, the FBI found a genetic match to a family member, and then used traditional genealogy to trace from that family member to the suspect.30Paige St. John, The Untold Story of How the Golden State Killer Was Found: A Covert Operation and Private DNA, L.A. Times (Dec. 8, 2020), https://www.latimes.com/california/story/2020-12-08/man-in-the-window [https://perma.cc/7LZU-9JGQ]. The revelation of that family member’s identity would not qualify as matching to “uniquely criminal detail.”

Slobogin argues that even when a small number of people, some of whom are guaranteed not to be the perpetrator (such as somebody whose DNA only partially matches that of the sample from a crime scene), are identified to the police, the intrusion into privacy is fairly minimal and should be handled through Fourth Amendment doctrines that allow for warrantless searches and seizures, like checkpoints.31Slobogin, Suspectless Searches, supra note 29, at 955–56. I agree with nearly all of Slobogin’s proposals about how courts should interpret the Fourth Amendment with respect to these examples. But they still do not meet the criteria I am setting—criteria that, when met, challenge the most basic conceptions of Fourth Amendment privacy. To meet the definition of a filtered dragnet for my purposes, police must remain ignorant of details and identities until there is a high probability that the information identifies and pertains to the perpetrators and no one else.

2.  Nondisclosure of Irrelevant Details

The first requirement on its own ensures that filtered dragnets are analogous to “binary searches” like drug-sniffing dogs—the sort that alert only if there is probable cause of a crime. But there is an additional affordance that should be exploited: filtered dragnets must refine the information that is ultimately disclosed to police by filtering out personal, irrelevant details even about a suspect. This is equivalent to a drug-sniffing dog that could magically produce a suspect’s drugs without any of the rifling through cars and pockets that is necessary today. Thus, the suspect will retain privacy over details that are not relevant to the criminal investigation at hand.

To be clear, neither of these requirements is meant to be an absolute guarantee. All systems have error, and even if police are able to set very demanding thresholds for false positives, police will occasionally access licit, irrelevant details when a filtered dragnet falsely identifies a suspect who is then subjected to an arrest or probable cause–based search. But the requirements for disclosure in a filtered dragnet system can be calibrated to fit societal needs and expectations: the chance of false accusation error can be driven down to practically zero if we wish, so long as we are willing to tolerate the consequences: either more false negatives (more crimes that go undetected) or police departments needing to access more data in order to maintain the same level of detection.
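
To make the calibration point concrete, the following minimal sketch (written in Python, with invented match scores and invented thresholds) shows how raising the disclosure threshold trades false positives for false negatives; it is an illustration of the trade-off described above, not a description of any deployed system.

# Minimal sketch: how an alert threshold trades false positives for false negatives.
# The match scores and ground-truth labels below are invented for illustration only.

def error_rates(scored_population, threshold):
    """Return (false positive rate, false negative rate) at a given alert threshold."""
    fp = sum(1 for score, guilty in scored_population if score >= threshold and not guilty)
    fn = sum(1 for score, guilty in scored_population if score < threshold and guilty)
    innocents = sum(1 for _, guilty in scored_population if not guilty)
    offenders = sum(1 for _, guilty in scored_population if guilty)
    return fp / innocents, fn / offenders

# Hypothetical match scores (0.0 to 1.0) paired with ground truth (True = offender).
population = [(0.98, True), (0.91, True), (0.62, True), (0.97, True),
              (0.40, False), (0.55, False), (0.12, False), (0.85, False)]

for threshold in (0.5, 0.8, 0.99):
    fpr, fnr = error_rates(population, threshold)
    print(f"threshold={threshold}: false positives={fpr:.0%}, false negatives={fnr:.0%}")

As the threshold rises, the false positive rate falls toward zero while the false negative rate (undetected crime) climbs, which is precisely the societal choice described above.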

B.  Examples

Next, we will visit a set of backwards investigation techniques that are in use today. These use the particularities of a crime to lead police to a suspect. While most cannot meet the demanding definition of “filtered dragnet” formalized above, with time and additional data resources, they will surely get there.

1.  DNA Matching

DNA-matching investigations use parts (non-revelatory portions) of a DNA sequence produced from a sample collected at a crime scene or from a crime victim in order to identify a suspect using DNA databases. They are an obvious extension of fingerprinting analyses with some souped-up features. First, DNA matching can set a very high threshold of statistical probability of true match (or, in other words, a very low probability of a false match) because each DNA sequence has a large amount of data.32With enough of a sequence for matching, the investigator can have extremely high confidence that the combination of DNA markers will be unique to a single individual. Fingerprint analysis, by contrast, contains a natural limit on how confident an analyst can be that the patterns from prints left at a crime scene would be produced by just one person. Nevertheless, there are still opportunities for DNA matching to produce erroneous results. Erin E. Murphy, Inside the Cell: The Dark Side of Forensic DNA 29–83 (2015). Second, they can make use of popular commercial and ancestry databases for cross-checking and are therefore not limited to identifying individuals who have a history with the criminal justice system.
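
To illustrate why DNA matching can support such a demanding probability threshold, the following sketch (in Python) computes a random match probability under the standard simplifying assumptions of Hardy-Weinberg proportions at each locus and independence across loci; the allele frequencies are hypothetical and chosen only for illustration.

# Illustrative only: random match probability with hypothetical allele frequencies,
# assuming Hardy-Weinberg proportions at each locus and independence across loci.

def genotype_frequency(p, q=None):
    """Frequency of the observed genotype: p^2 if homozygous, 2pq if heterozygous."""
    return p * p if q is None else 2 * p * q

# Hypothetical frequencies of the alleles observed at eight STR loci
# (a None second allele marks a homozygous genotype).
loci = [(0.11, 0.07), (0.09, None), (0.15, 0.05), (0.08, 0.12),
        (0.10, None), (0.06, 0.14), (0.13, 0.09), (0.07, 0.11)]

random_match_probability = 1.0
for p, q in loci:
    random_match_probability *= genotype_frequency(p, q)

# With more loci, the chance that an unrelated person shares the full profile
# shrinks toward the vanishing point.
print(f"Estimated random match probability: {random_match_probability:.2e}")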

Third, familial or partial DNA matches are very useful for police investigations in a way that partial fingerprint matching is not. In familial DNA-matching investigations, such as the one that eventually led to the arrest of the Golden State Killer, police departments recover the identity not of the suspect but of one or more of the suspect’s genetic relatives.33David Lazer & Michelle N. Meyer, DNA and the Criminal Justice System: Consensus and Debate, in DNA and the Criminal Justice System: The Technology of Justice 907–08 (David Lazer ed., 2004) (describing “low-stringency” searches on DNA databases that will return results of individuals who are likely to be related to the person whose DNA was sequenced for the crime scene sample). This raises privacy concerns for the relatives whose identities are revealed to law enforcement in the course of finding the perpetrator.34Natalie Ram, Fortuity and Forensic Familial Identification, 63 Stan. L. Rev. 751, 791 (2011). So, as practiced today, familial DNA searches do not fit the definition of a filtered dragnet. They fail the second element (filtering out innocent and irrelevant details) by revealing identities and information about family members who are definitely not the perpetrator of the crime.35One might think these are relatively minor privacy intrusions (equivalent to a witness saying “the murderer was Moe’s cousin”). However, it is conceivable that in the future, if multiple databases can be accessed and triangulated, familial DNA matching can be part of a filtered dragnet system that automatically finds a familial match, trawls other data sources in order to identify the correct relative of the familial match (based on, e.g., age, location, or personal history of the relatives), and discloses the identity of the suspect and the relevant details only when and if there is sufficient confidence that the correct suspect has been identified.36This is not far-fetched: police already use statistical packages like a service called “What Are the Odds” in order to understand the closeness of the blood relationship between the suspect and the person whose DNA created a familial match, and then they use traditional methods of genealogy research (e.g., cross-checking with Census records and other public records) to find the suspect. Ellen M. Greytak, CeCe Moore & Steven L. Armentrout, Genetic Genealogy for Cold Case and Active Investigations, 299 Forensic Sci. Int’l. 103, 103–04, 107 (2019). All of this can be automated.
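
A minimal, entirely hypothetical sketch (in Python) of the automated disclosure logic just described may make the filtering architecture concrete; the data sources, names, scores, and confidence threshold are all invented, and nothing here reflects any actual product, database, or police practice.

# Hypothetical sketch of an automated familial-matching filter. All data is invented;
# the point is the disclosure rule, not the underlying genetics or genealogy.

CONFIDENCE_THRESHOLD = 0.99  # disclose a name only above this level of confidence

# Hypothetical relatives returned by a partial DNA match to the crime scene sample.
partial_match_relatives = ["relative_A", "relative_B"]

# Hypothetical family trees assembled from public records.
family_trees = {
    "relative_A": ["person_1", "person_2"],
    "relative_B": ["person_2", "person_3"],
}

# Hypothetical scores reflecting how well each candidate fits the non-genetic facts
# of the crime (age, location history, and so on).
consistency_scores = {"person_1": 0.42, "person_2": 0.995, "person_3": 0.10}

# Disclosure logic: reveal a name to police only if exactly one candidate clears the
# bar; otherwise the system reveals nothing, including the relatives' identities.
candidates = {person for r in partial_match_relatives for person in family_trees[r]}
cleared = [p for p in candidates if consistency_scores[p] >= CONFIDENCE_THRESHOLD]
print(cleared[0] if len(cleared) == 1 else "no disclosure")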

DNA evidence holds an esteemed place in criminal justice and public perception. DNA evidence is durable (as long as it is handled properly) and judges and juries can justifiably place a high degree of confidence in the reliability of DNA-matching investigations.37Lazer & Meyer, supra note 33, at 880–81. Other types of data beyond DNA can have these qualities, too, but they provoke much more suspicion and dissent. Distinguishing them from DNA matching will become increasingly untenable.

2.  Facial Recognition

Facial recognition uses large databases of identified photographs (often scraped from the public Internet) to discover the identity of a person who would otherwise be anonymous.38The procedure works by converting images of faces into “face prints”—maps of the contours of an individual’s face—and then cross-checking the maps against each other. Natasha Singer, Never Forgetting a Face, N.Y. Times (May 18, 2014), https://www.nytimes.com/2014/05/18/technology/never-forgetting-a-face [https://perma.cc/L2PZ-DWL3]. The technology can be used as a filtered dragnet when police departments deploy facial recognition on photographic evidence from the scene of the crime.39Facial recognition can also be used when police have already sought and received a warrant for a person’s arrest based on probable cause from other sources and are attempting to locate the suspect. This would also constitute a filtered dragnet. For example, law enforcement has used facial recognition to pin identities to individuals who appeared in surveillance footage from the Capitol on January 6, 2021, as well as to suspects captured on camera during robberies and street crimes.40Kashmir Hill, Your Face Is Not Your Own, N.Y. Times Mag. (Mar. 18, 2021), https://www.nytimes.com/interactive/2021/03/18/magazine/facial-recognition-clearview-ai [https://perma.cc/A2CC-GXGG]. Although facial recognition algorithms are less accurate for female and non-white faces,41Patrick Grother, Mei Ngan & Kayee Hanaoka, Nat’l Inst. of Standards and Tech., NISTIR 8280, Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects 48 (2019). industry members claim this is not the case for top-performing algorithms in active use.42Jake Parker & David Ray, What Science Really Says About Facial Recognition Accuracy and Bias Concerns, Sec. Indus. Ass’n (July 23, 2022), https://www.securityindustry.org/2022/07/23/what-science-really-says-about-facial-recognition-accuracy-and-bias-concerns [https://perma.cc/Z2Z2-ZZN6]; Hoan Ton-That, The Myth of Facial Recognition Bias, Clearview AI (Nov. 28, 2022), https://www.clearview.ai/post/the-myth-of-facial-recognition-bias [https://perma.cc/4WXT-65Y6].
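
A simplified sketch (in Python, with made-up “face print” vectors and a made-up similarity threshold) may help illustrate the matching step, in which a probe image from crime scene footage is compared against an identified gallery; real systems use learned embeddings and vastly larger databases.

# Illustrative sketch of face-print matching. Vectors and threshold are hypothetical.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical "face prints" (feature vectors) for identified gallery photos.
gallery = {
    "person_A": [0.12, 0.88, 0.45, 0.30],
    "person_B": [0.91, 0.10, 0.05, 0.77],
}

# Hypothetical face print extracted from crime scene footage.
probe = [0.90, 0.12, 0.07, 0.75]

MATCH_THRESHOLD = 0.95  # report a candidate only above this similarity score
scores = {name: cosine_similarity(vec, probe) for name, vec in gallery.items()}
best_name, best_score = max(scores.items(), key=lambda kv: kv[1])
print(best_name if best_score >= MATCH_THRESHOLD else "no match reported")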

3.  Automated CSAM Detection

In 2021, Apple unveiled a program that would automatically scan a user’s photos and cross-check them against a library of known child pornography when the images were uploaded to iCloud. Apple had planned to use a hashing technique to check all files sent from Apple devices to be stored on iCloud servers. Essentially, every image stored to iCloud from an Apple device would be converted to a code that corresponds to the visual image.43The hash is a deterministic transform, meaning that the hash function converts a given image into one particular string of numbers, and, as a practical matter, it is vanishingly unlikely that two different images would share the same code. This would allow Apple to check the hash of every image against a library of hashes that represent known child sexual abuse material (“CSAM”) in order to detect child pornography. However, those who traffic in CSAM would be alert to this and could make minor changes to an image to avoid exact matches. To prevent circumvention, Apple planned to use a form of perceptual hashing (called NeuralHash) that uses fuzzy matching to detect and alert to images that do not match exactly but are very likely depicting the same image. Apple, CSAM Detection: Technical Summary 4 (2021). When a person’s uploaded images produced ten matches, Apple employees would automatically be alerted and would share the information with authorities. Thus, while every image would be hashed and cross-checked against child pornography, only the images that matched could lead to a disclosure to law enforcement. Apple has since abandoned its plans in response to criticism,44Lily Hay Newman, Apple Kills Its Plan to Scan Your Photos for CSAM. Here’s What’s Next, Wired (Dec. 7, 2022, 11:11 PM), https://www.wired.com/story/apple-photo-scanning-csam-communication-safety-messages [https://perma.cc/G8SL-RE53]. but the technological capability still exists.
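
The disclosure logic Apple described can be approximated in a few lines of Python; the ten-match threshold follows the description above, but the hashes are invented bit strings and the fuzzy-matching step is reduced to a simple Hamming-distance comparison rather than Apple’s actual NeuralHash.

# Simplified sketch of hash-based CSAM matching. Hash values are invented; real systems
# use perceptual hashes (e.g., NeuralHash) rather than raw bit strings.

ALERT_THRESHOLD = 10      # matches required before any human review, per the plan above
MAX_HAMMING_DISTANCE = 3  # "fuzzy" tolerance for near-duplicate images

def hamming(a, b):
    """Count differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def is_match(image_hash, known_hashes):
    return any(hamming(image_hash, known) <= MAX_HAMMING_DISTANCE for known in known_hashes)

def count_matches(upload_hashes, known_hashes):
    """Count uploads that match the known library; nothing else about the account
    is examined unless the alert threshold is crossed."""
    return sum(1 for h in upload_hashes if is_match(h, known_hashes))

# Hypothetical 16-bit "hashes" of a user's uploads and of the known library.
uploads = ["1010110011010010"] * 4 + ["0000111100001111"] * 8
known_library = {"1010110011010110"}  # within distance 3 of the first upload hash

matches = count_matches(uploads, known_library)
print("alert for human review" if matches >= ALERT_THRESHOLD else "no disclosure")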

4.  Geofences and Other Reverse Searches

In 2019, a spate of arsons involving vehicles parked in commercial lots was committed in short succession.45In re Search Warrant Application for Geofence Location Data Stored at Google Concerning an Arson Investigation, 497 F. Supp. 3d 345, 351 (N.D. Ill. 2020). Based on the locations, surveillance footage, and similar modi operandi, police had reason to believe that a single set of co-conspirators was involved in all six arsons. When federal investigators requested that the court issue a warrant requiring Google to search its time-logged geolocation records for cellphones that were at or near the scenes of the arsons during the times that they were committed, a U.S. magistrate judge complied.46Id. at 364. This type of process—where police start with the location, approximate time, and other details of a crime and ask service providers to find a matching account—is known as a “geofence warrant,” and magistrate judges have issued orders authorizing their use under certain conditions. Judges have refused to issue warrants (without deciding whether warrants are actually necessary) when the request cast too wide a net—that is, if too many devices are likely to be identified as matching the search criteria.47E.g., In re Matter of Search of Info. Stored at Premises Controlled by Google, 481 F. Supp. 3d 730, 733 (N.D. Ill. 2020). For example, if police are investigating a crime that took place during a Beyoncé concert, even a geofence with a small radius, during a fairly precise window of time, will draw in too many false matches—too many phones of innocent bystanders. But this concern falls away if police can use multiple details or the intersection of several geofences in order to create search criteria that will be unique to the perpetrator.48The arson case would have been an ideal investigation to use intersecting geofences. Unfortunately, the government did not request records in that way, and the court did not address the difference between the union and intersection of geofences in its opinion. In re Search Warrant Application, 497 F. Supp. 3d at 345. For example, in one recent case, a perpetrator who was suspected to have cased the location of a murder on the day before he committed it was identified using overlapping geofences from the day before and the day of the murder.49Slobogin, Suspectless Searches, supra note 29, at 954 (citing Tyler Dukes, To Find Suspects, Raleigh Police Quietly Turn to Google, WRAL NEWS (July 13, 2018, 11:07 AM), https://www.wral.com/to-find-suspects-police-quietly-turn-to-google/17377435 [https://perma.cc/BU4W-2Z4Q]). License plate readers, drone footage, Internet of Things data, and satellite surveillance imaging could also be sources of geolocation information in the likely circumstance that criminals begin to leave their devices at home.50Id. at 954–55; Eldar Haber, The Wiretapping of Things, 53 UC Davis L. Rev. 733, 736 (2019).
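
The intersection idea can be sketched in a few lines of Python; the device identifiers and geofence results below are invented, and the point is only that each additional scene shrinks the candidate set toward the perpetrators and sheds innocent bystanders.

# Illustrative only: intersecting geofence results across multiple crime scenes.
# Device identifiers and query results are invented.

# Hypothetical sets of device IDs present during each arson's time-and-place window.
geofence_results = [
    {"device_1", "device_2", "device_3", "device_9"},  # scene 1
    {"device_2", "device_4", "device_9"},              # scene 2
    {"device_2", "device_5", "device_6", "device_9"},  # scene 3
    {"device_2", "device_7"},                          # scene 4
]

# Only devices present at every scene survive the intersection.
candidates = set.intersection(*geofence_results)
print(candidates)  # {'device_2'}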

Geolocation data can be combined with other types of information, too, to form a signature of crime that is more likely to be unique. As an illustration, US intelligence agencies located Osama bin Laden in part by looking for locations where they would expect to find Internet and cell service but in fact found none.51Peter Bergen, Did Torture Help Lead to Bin Laden?, CNN (Dec. 10, 2014, 12:26 PM), https://www.cnn.com/2014/12/10/opinion/bergen-torture-path-to-bin-laden/index.html [https://perma.cc/EJV6-FV6W]. There are data sources outside of location data that can create a signature for reverse searching. For example, while investigating an arson case, the Denver police department sought and received a “keyword warrant”—a court order requiring Google to reveal the account information of users who had recently searched for the address of the arson during a fifteen-day period leading up to the crime.52Celes Keene, Reverse Keyword Searches and Crime, Lexology (Aug. 11, 2022), https://www.lexology.com/library/detail.aspx?g=de2f5b21-a9b1-4650-a911-31dd1f39e671 [https://perma.cc/T8HH-RREJ]. Investigations into cyberstalking, child pornography, and many other online crimes have used forms of reverse searches in order to identify the accounts associated with IP addresses that were used to engage in those crimes.53See, e.g., United States v. Forrester, 512 F.3d 500, 505 (9th Cir. 2008); United States v. Hood, 920 F.3d 87, 89 (1st Cir. 2019); United States v. Contreras, 905 F.3d 853, 855–56 (5th Cir. 2018).

5.  Scanners, Sensors, Cameras, and Microphones

Red light cameras were one of the first ventures into automated policing and were also much despised.54Erin Mulvaney & Dug Begley, Opposition Putting a Stop to Red Light Cameras, Hous. Chron. (Apr. 25, 2013, 9:19 AM), https://www.houstonchronicle.com/news/houston-texas/houston/article/opposition-putting-a-stop-to-red-light-cameras-4461447.php [https://web.archive.org/web/20220708020423/https://www.houstonchronicle.com/news/houston-texas/houston/article/Opposition-putting-a-stop-to-red-light-cameras-4461447.php]. These systems used sensors to detect if a car entered an intersection after the light had turned red, took a photograph of the car, and later used the image of the car (and its license plate) to track down the owner and mail a ticket. These systems are not dragnets per se (they do not make use of pre-existing collections of data), but they set the stage for Automatic License Plate Readers that do capture an abundant amount of data in case some particular parts of it are useful later, as when police are searching for a stolen vehicle.55Slobogin, Suspectless Searches, supra note 29, at 955. Similarly, short-range communications technologies can reveal a car’s speed. Joh, supra note 17, at 200.

Patterns that are highly suggestive of crime can also be automatically detected using recording devices with cameras, microphones, or sensors that operate in “always on” mode.56Haber, supra note 50, at 735. One example in use today is ShotSpotter microphones that are constantly “listening” in a public setting but alert the police and save data long term only when the noises captured by the sensor match the sounds of gunshots.57ShotSpotter, ShotSpotter Frequently Asked Questions (2018), https://www.shotspotter.com/system/content-uploads/SST_FAQ_January_2018.pdf [https://perma.cc/3SD4-B2JU]. In theory, Alexa, which also constantly listens for wake words like “Alexa,”58Amazon, How Alexa Works: Wake Word (last visited Feb. 25, 2024), https://www.amazon.com/b?ie=UTF8&node=23608571011 [https://perma.cc/JXB3-246D]. could be designed to detect sounds that are particular to domestic violence or home invasion and automatically alert the authorities.
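
The “always on, alert only on match” design can be sketched as follows (in Python); the gunshot classifier is a toy stand-in, and the important feature is the retention rule, under which everything below the alert threshold is discarded immediately.

# Hypothetical sketch of an always-on acoustic sensor that discards audio unless a
# classifier flags a gunshot-like event. The classifier here is a toy stand-in.

ALERT_SCORE = 0.9  # hypothetical confidence required before alerting police

def gunshot_score(audio_frame):
    """Toy stand-in for an acoustic classifier: returns a confidence in [0, 1]."""
    return audio_frame.get("impulse_energy", 0.0)

def process_stream(audio_frames):
    retained = []
    for frame in audio_frames:
        if gunshot_score(frame) >= ALERT_SCORE:
            retained.append(frame)  # saved long term and reported to police
        # frames below the threshold are dropped and never disclosed
    return retained

# Invented frames: only the second one resembles a gunshot.
stream = [{"impulse_energy": 0.1}, {"impulse_energy": 0.97}, {"impulse_energy": 0.3}]
print(f"frames reported to police: {len(process_stream(stream))}")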

Other sensitive devices like terahertz scanners can detect when naturally occurring radiation is blocked by metal objects. When the blocking metal objects are gun shaped, the scanners can be programmed to alert.59I. Bennett Capers, Race, Policing, and Technology, 95 N.C. L. Rev. 1241, 1275–77 (2017) (arguing that these tools can lead us to “real reasonable suspicion”). But this is nothing compared to what quantum magnetometry will be able to do in the near future.60Dmitry Budker & Michael Romalis, Optical Magnetometry, 3 Nature Physics 227, 227 (2007). Quantum sensing is so sensitive to minute differences in magnetic fields that the sensors will be able to detect trace amounts of chemicals, even when they are concealed behind walls. So, Justice Brennan’s nightmare scenario is here: we will soon have contraband detection devices.

This survey of suspicionless searches and backwards investigations demonstrates that there is increasing viability of, and interest in, these types of techniques. The practices currently in use do not usually meet the two formal requirements for “filtered dragnets,” but it is useful to assume they eventually will. By assuming investigations will eventually meet the demanding definition of filtered dragnets, we will be able to state with more rigor precisely why we are nervous about these law enforcement technologies, and what the policy or constitutional response should be.

II.  THE ADVANTAGES OF FILTERED DRAGNETS

This Article will eventually explain why filtered dragnets impose serious risks on society that are not adequately (or even nominally) addressed in Fourth Amendment theory. But first, we will explore reasons to embrace, rather than resist, the integration of filtered dragnets into policing.

Filtered dragnets offer several advantages over the investigation practices in common use.61A police investigation strategy cannot be judged without comparison to its next best alternatives. See Tal Z. Zarsky, Governmental Data Mining and Its Alternatives, 116 Penn. St. L. Rev. 285 (2011). These include decreased exposure of innocent details, increased accuracy and efficacy of criminal investigations, increased detection and deterrence of crime, decreased discretion for suspect selection, and decreased risk to witnesses and victims. In combination, these advantages contribute such compelling benefits to society that courts and attorneys should feel a moral obligation to harness their powers as much as possible.

A.  Decreased Exposure of Innocent and Irrelevant Details

Filtered dragnets protect the privacy of innocent individuals, as well as the innocent-and-irrelevant details of a suspect. They protect innocent individuals whose data is scanned in the process by allowing police and courts to set a high standard for false match error. That is, filtered dragnets can be programmed to alert and reveal personal information only when the statistical probability that the person has engaged in crime is greater than 50%, or 80%, or 99%. This would ensure that the number of innocent individuals who are initially approached and investigated will be only a fraction of the number of criminals who are found.62I have called this “hassle”—the imposition of searches, seizures, or even the stress of becoming a person-of-interest, experienced by an innocent person who is targeted based on probable cause. Jane Bambauer, Hassle, 113 Mich. L. Rev. 461, 461 (2015).
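
A small Bayesian calculation (in Python, with hypothetical sensitivity, false positive rate, and prevalence figures) shows what it takes for an alert to carry a better-than-99% probability of guilt when the underlying crime is rare.

# Hypothetical numbers: the probability that an alerted person actually committed the
# crime depends on the false positive rate and on how rare offenders are.

def probability_guilty_given_alert(sensitivity, false_positive_rate, prevalence):
    true_alerts = sensitivity * prevalence
    false_alerts = false_positive_rate * (1 - prevalence)
    return true_alerts / (true_alerts + false_alerts)

# One offender per 10,000 people scanned, a detector that catches 95% of offenders,
# and two candidate false positive rates.
for fpr in (1e-3, 1e-6):
    ppv = probability_guilty_given_alert(sensitivity=0.95, false_positive_rate=fpr, prevalence=1e-4)
    print(f"false positive rate {fpr}: probability of guilt given an alert = {ppv:.3f}")

When the offense is rare, even a seemingly small false match rate swamps the true alerts, so the disclosure threshold has to be set against the base rate, not just the raw match score.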

Moreover, filtered dragnets limit the type of information that is revealed even about the proper subjects of investigation who have committed a crime. This is a game-changer. If police had always been able to search a house or a car in a manner that blinded them to everything except contraband or criminal evidence, the text and interpretation of the Constitution would probably differ from what we have today. The closest analogy we have to filtered dragnets, as I have mentioned before, is the drug-sniffing dog. Police dogs are allowed to sniff and alert based on the (mostly defensible) assumption that they will be trained well enough to have a low error rate.63Florida v. Harris, 568 U.S. 237, 238 (2013). The dog sniff and subsequent alert are, controversially, treated as a non-search in Fourth Amendment law unless the dog has trespassed into the home or curtilage of a resident.64Florida v. Jardines, 569 U.S. 1, 6–7 (2013). But once the dog alerts, the police have probable cause to perform an entire human-conducted unfiltered search of a person’s vehicle, home, or effects, thereby revealing intimate and innocent details while they look for contraband. Filtered surveillance is more privacy-protective than drug-sniffing dogs because it can restrict the sort of data that is revealed even as police are verifying that the alert is accurate.

I do not mean to suggest that filtered dragnets avoid all revelations about innocent people or activities. Relevant data disclosed to police as a result of a high probability match will frequently, maybe even usually, reveal information that is not directly tied to wrongdoing. For example, if in the future the police used a system that combines familial DNA matching with other records to identify a sexual assault offender, police may see and use the identity of the family member in order to confirm that the identification is sound and to show a jury how the case was solved. This could reveal the identity of estranged parents or children of the suspect or could uncover paternity that was not previously known.65Neil Richards, Why Privacy Matters 99 (2021). But this is a consequence of the fact that all successful investigations impose some irreducible privacy costs on the innocent. Even using traditional strategies, police will occasionally and appropriately question a spouse in a manner that reveals the suspect is having an affair or may make other similar sensitive revelations. If the revelations are in service of pursuing a probable cause–backed investigation, these will be innocent-but-relevant details.66Thus, I disagree with scholars like Neil Richards who suggest that familial DNA matching inevitably presents a risk of a free-for-all where police will routinely learn about paternity or about the genetic propensity for disease. See id. The advantage I describe here pertains to the shielding of innocent-and-irrelevant information.

B.  Increased Accuracy

By definition, filtered dragnets identify suspects and reveal information only when there is a high probability of crime. This is a form of increased accuracy—a reduction in false positive error. (In the next subsection, I will discuss the other form of increased accuracy—the reduction in false negative error—which would allow filtered dragnets, if deployed consistently, to solve more crimes and increase clearance rates.)

If filtered dragnets are held to higher probability standards than standard investigation techniques, they will cause proportionally fewer false starts and erroneous arrests and searches along the way.67Ram, supra note 34, at 788 (identifying the potential for exoneration as a reason to adopt familial DNA matching). A more accurate criminal justice system also reduces the potential for abuse, because it denies state agents the ability to credibly threaten the innocent. Dhammika Dharmapala, Nuno Garoupa & Richard H. McAdams, Punitive Police? Agency Costs, Law Enforcement, and Criminal Procedure, 45 J. Leg. Stud. 105, 111 (2016) (citing Keith N. Hylton & Vikramaditya S. Khanna, A Public Choice Theory of Criminal Procedure, 15 Sup. Ct. Econ. Rev. 61 (2007)). In time, a shift toward filtered dragnets should decrease the dangers and anxiety that come from false suspicion and conviction at every stage of criminal investigation. Indeed, facial recognition systems that identify a suspect based on photographs or surveillance footage from a crime already outperform the accuracy rates of average eyewitnesses and probable cause–based warranted searches by a large margin.68False match error rates for facial recognition algorithms are now under 1% in ideal conditions and under 10% when used in the field, and facial recognition services recommend law enforcement use a threshold of 95% confidence. William Crumpler, How Accurate Are Facial Recognition Systems—and Why Does It Matter?, Ctr. Strategic & Int’l Stud. (Apr. 14, 2020), https://www.csis.org/blogs/strategic-technologies-blog/how-accurate-are-facial-recognition-systems-and-why-does-it [https://perma.cc/3YQS-UM7C]. By comparison, eyewitness identification during a lineup has error rates of 20% or more. Gary L. Wells & John W. Turtle, Eyewitness Identification: The Importance of Lineup Models, 99 Psych. Bulletin 320, 320 (1986). The same is true for racial differences in error rates: while some facial recognition technologies were, at least for a time, more likely to produce false matches for photographs of Black faces, the gap in false match error has already been reduced. Stewart Baker, The Flawed Claims About Bias in Facial Recognition, Lawfare (Feb. 2, 2022, 12:57 PM), https://www.lawfaremedia.org/article/flawed-claims-about-bias-facial-recognition [https://perma.cc/E8TC-HV8A]. In any event, even if gaps persist, those gaps may be less bad than the differences in false match error from human systems of suspect identification. And unlike traditional policing methods, facial recognition technology can be calibrated to produce a match only when the risk of a false match is below a certain threshold regardless of the target’s race; in other words, alerts can be constrained to ensure equal false positive rates by race. Setting the false match rate to be equal is equivalent to ensuring that “probable cause” for Black suspects means the same thing it does for whites. For a full articulation of race-conscious analyses of error, see Sandra G. Mayson, Bias In, Bias Out, 128 Yale L.J. 2218 (2019).

Skeptics will have at least two critiques of my optimistic prediction: all systems have some error, and the sort of error that comes from a highly technical and data-driven system might be particularly worrisome since a falsely accused defendant will have to go up against a trusted and more accurate system.69See Andrea Roth, Trial by Machine, 104 Geo. L.J. 1245, 1281 (2016) (describing the “seduction of quantification” in machine processes).

It is true that no investigation tool is free from error, and it is also possible that police, prosecutors, and juries could be at risk of reflexively trusting the results of a filtered dragnet system because they are so reliable. But the premise of the critique might be plain wrong. When a filtered dragnet produces a spurious result, the error could very well be easier to catch than when an informant or witness makes a spurious identification. For example, when a man named Michael Usry was the target of an investigation based on his father’s partial genetic match to crime scene DNA, Usry was cleared as soon as his own DNA sample was collected and analyzed because it did not match the sample collected at the scene of the crime.70Jim Mustian, New Orleans Filmmaker Cleared in Cold-Case Murder; False Positive Highlights Limitations of Familial DNA Searching, NOLA.com (Mar. 12, 2015), https://www.nola.com/article_d58a3d17-c89b-543f-8365-a2619719f6f0.html?mode=comments [https://perma.cc/S3GZ-59DY]; Natalie Ram, Christi J. Guerrini & Amy L. McGuire, Genealogy Databases and the Future of Criminal Investigations: The Police Can Access Your Online Family-Tree Search and Use It to Investigate Your Relatives, 360 Science 1078, 1078 (2018). This should generalize: the more independent sources of data there are, the more protection there should be for the innocent.71See Joshua A.T. Fairfield & Erik Luna, Digital Innocence, 99 Cornell L. Rev. 981 (2014). A person wrongly identified by facial recognition is more likely to have a credible digital alibi (e.g., geolocation data that puts them in an entirely different state at the time of a crime) than a wrongly identified person who was accused by a confidential informant.

The facts of United States v. Chatrie72United States v. Chatrie, 590 F. Supp. 3d 901 (E.D. Va. 2022). illustrate the propensity for the erroneous targets of filtered dragnets to be cleared earlier and more easily than erroneous targets in traditional investigations. In that case, police used a geofence warrant to access the deidentified location data of individuals who were near the scene of a bank robbery during the hour that the crime took place.73Id. at 917–22. The geofence produced the deidentified location records of nineteen individuals, only one of whom was the perpetrator.74Id. at 920–21. These facts do not fit the requirements of a filtered dragnet because law enforcement accessed and manually examined information related to the eighteen individuals who were not the perpetrator, but we can think of these eighteen as stand-ins for those who are wrongly targeted by a filtered dragnet. One hour of anonymous geolocation data conclusively ruled out sixteen of them, and an additional hour ruled out the other two. None of the eighteen were identified (by name or other direct identifier) to the police, and none were questioned.75Id. at 921. By contrast, consider the experiences of two individuals who were briefly implicated in the investigation before the FBI used geofence technologies. Using traditional policing methods, the FBI first investigated the ex-boyfriend of a woman who saw news reports about the bank robbery and called the police to offer a false tip. They also investigated somebody who owned the same kind of car that was used as the getaway vehicle when a bank employee reported the possible tip, but that, too, was a dead end.76Id. at 917. It is not clear from the opinion what sorts of encounters and information-gathering the police used to rule out these two, but I suspect the anxiety and privacy burden absorbed by them was greater, by almost any measure, than the burden to the eighteen individuals whose approximate movements in public during one to two hours were disclosed in deidentified form. If this case is representative, the geofence warrant process should be a method of first resort, rather than last resort, because it is likely to lead more quickly to both the identification of the right suspect and the elimination of wrong ones.

A second skeptical critique is that I am describing the positive qualities of filtered dragnets under the assumption that the systems will be deployed as intended and will not be manipulated or tampered with. This is a legitimate concern to which the long history of flaws in forensic labs can attest.77Murphy, supra note 32, at 29–83; John Solomon, More Wrongdoing Found at FBI Crime Lab, Midland Daily News (Apr. 14, 2013), https://www.ourmidland.com/news/article/More-Wrongdoing-Found-at-FBI-Crime-Lab-7133820.php [https://perma.cc/D43V-8T9L]. The FBI has acknowledged that flawed forensics have affected dozens of death penalty cases. FBI Admits Flawed Forensic Testimony Affected at Least 32 Death Penalty Cases, Equal Just. Initiative (Apr. 29, 2015), https://eji.org/news/fbi-admits-flawed-forensic-testimony-in-32-death-penalty-cases/#:~:text=These%20FBI%20examiners%20trained%20500,those%20defendants%20have%20been%20executed [https://perma.cc/RNX9-KZTH]. But as a comparative matter, data-driven techniques of this sort might be more accountable and auditable than old-school forms of criminal investigation. When the same level of scrutiny and doubt is applied to traditional investigations that would have to continue in the absence of new technologies—the risks of error and manipulation present in eyewitness testimonies, suspect interrogation, or warrant affidavits78Lazer & Meyer, supra note 33, at 917. The Innocence Project found that half of the cases that they selected as being likely to be a false conviction did indeed lead to exoneration once DNA evidence was tested. How did they select these cases? By looking for convictions that were based on the traditional (and highly faulty) forms of evidence that are noisy signals of guilt: testimony from jailhouse snitches and eyewitnesses, the defendants’ confessions, and pseudo-scientific evidence (e.g., hair analysis). Id. at 898–99. Other factors include incompetent defense counsel and police or prosecutorial misconduct.—the prediction that filtered dragnets will be more corrupt and error-prone is hard to believe.79For example, one study found that more than 25% of sexual assault suspects are exonerated when DNA re-analysis becomes available. Peter Neufeld & Barry C. Scheck, Convicted by Juries, Exonerated by Science: Case Studies in the Use of DNA Evidence to Establish Innocence After Trial xxviii (1996). If this sample is typical, the findings imply that the quality of traditional police investigations leading to investigation, arrest, and conviction is rather shoddy.

C.  Increased Detection and Deterrence

The accuracy and efficiency of filtered dragnets can help tackle longstanding social problems of chronically unsolved crime, assuming filtered dragnets are used regularly.80Ram, supra note 34, at 788 (describing increased crime solving as an argument in favor of familial DNA searching). About twenty-five million Americans—8% of the population—suffer a violent felony or a felony-level theft each year.81Alexandra Thompson & Susannah N. Tapp, U.S. Dep’t. of Just., NCJ 305101, Criminal Victimization, 2021 2–3 (2022). These events are of course disproportionately likely to beset low-income households. While violent crime rates today remain well below the high-water marks of the 1980s and early 1990s,82In the U.S., crime rates are quite low in historical terms. Violent crimes have dropped by at least half since the early 1990s, and property crimes have dropped even more dramatically. John Gramlich, What the Data Says (and Doesn’t Say) About Crime in the United States, Pew Rsch. Ctr. (Nov. 20, 2020), https://www.pewresearch.org/short-reads/2020/11/20/facts-about-crime-in-the-u-s [https://perma.cc/R9A8-SDUH]; Rachel E. Morgan & Barbara A. Oudekerk, U.S. Dep’t. of Just., NCJ 253043, Criminal Victimization, 2018 1 (2019). Although crimes of all sorts (particularly murder) have skyrocketed during the COVID-19 pandemic, the pandemic-related stress on social and economic wellbeing makes the recent data difficult to interpret. Compare Paul G. Cassell, Explaining the Recent Homicide Spikes in U.S. Cities: The “Minneapolis Effect” and the Decline in Proactive Policing, 33 Fed. Sent’g Rep. 83 (2020) (finding under-policing and under-deterrence as a main cause), with Jeffrey Fagan & Daniel Richman, Understanding Recent Spikes and Longer Trends in American Murders, 117 Colum. L. Rev. 1235 (2017), and German Lopez, The Rise in Murders in the U.S., Explained, Vox (Dec. 2, 2020, 10:35 AM), https://www.vox.com/2020/8/3/21334149/murders-crime-shootings-protests-riots-trump-biden [https://perma.cc/9NZR-HBHC] (suggesting pandemic-related shocks are the primary driver of higher homicide rates). the statistics are still grim, particularly for communities of color. In the U.S., about five people in every 100,000 are murdered each year.83FBI Uniform Crime Report, Crime in the United States 2013, Expanded Homicide Data Table 6, U.S. Dep’t Just., Fed. Bureau Investigation (2013), https://ucr.fbi.gov/crime-in-the-u.s/2013/crime-in-the-u.s.-2013/offenses-known-to-law-enforcement/expanded-homicide/expanded_homicide_data_table_6_murder_race_and_sex_of_vicitm_by_race_and_sex_of_offender_2013.xls [https://perma.cc/W9H4-64BB]. For African-Americans, the rate is above six per 100,000.84Id. (By comparison, the rates in France and Italy are 1.28 and 0.52 per 100,000, respectively.)85Id. The United States, even in its lowest crime period, is still far more crime-ridden than other developed nations. For example, 5.4 out of every 100,000 Americans were killed by homicide in 2016, whereas in France the rate was 1.4 out of every 100,000. See Victims of Intentional Homicide, 1990–2018, United Nations Off. on Drugs and Crime, https://dataunodc.un.org/content/data/homicide/homicide-rate [https://perma.cc/NLL4-FNLL]. In addition to the trauma and losses to crime victims, society also absorbs a range of economic costs and psychological distress in the course of guarding against crime.86See, e.g., David Anderson, The Aggregate Burden of Crime, 42 J.L. & Econ 611, 629–30 (1999); Aaron Chalfin & Justin McCrary, Are U.S. Cities Under-Policed? Theory and Evidence, 100 Rev. Econ. & Stat. 167, 167 (2018); Kathryn E. McCollister, Michael T. French & Hai Fang, The Cost of Crime to Society: New Crime-Specific Estimates for Policy and Program Evaluation, 108 Drug & Alcohol Depend. 98, 98 (2010). It is all too easy for scholars, lawmakers, and others who live in safe neighborhoods to forget: serious crime is just awful.

The crime clearance rate (that is, the proportion of crimes actually reported to the police that have led to an arrest or otherwise been considered solved) is 42% for violent crime and under 15% for property crimes.87Crime Clearance Rate in the United States in 2020, by Type, Statista, https://www.statista.com/statistics/194213/crime-clearance-rate-by-type-in-the-us [https://perma.cc/XT5F-EHCQ]; Most Violent and Property Crimes in the U.S. Go Unsolved, Pew Rsch. Ctr. (2017) [hereinafter Pew Property Crimes], https://www.pewresearch.org/fact-tank/2017/03/01/most-violent-and-property-crimes-in-the-u-s-go-unsolved [https://perma.cc/XG8E-6FQ8]; What the Data Says (and Doesn’t Say) About Crime in the United States, Pew Rsch. Ctr. (2020), https://www.pewresearch.org/fact-tank/2020/11/20/facts-about-crime-in-the-u-s [https://perma.cc/92VY-8CGL]. Only about half of violent crimes and one-third of property crimes are ever reported to the police, and many arrests and convictions are erroneous. The low likelihood of reporting a crime, the low clearance rates, and the somewhat sizable chance of false arrest altogether mean that the probability a criminal will be prosecuted for any particular violent crime is probably under 20%.88Statista, supra note 87. The figure for property crime is 7%. Pew Property Crimes, supra note 87.
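
A rough back-of-the-envelope calculation, using only the reporting and clearance figures just cited, shows where the under-20% estimate comes from; the numbers are approximations, not precise measurements.

# Back-of-the-envelope using the approximate figures cited above (illustrative only).
violent_reported = 0.5  # roughly half of violent crimes are reported to police
violent_cleared = 0.42  # clearance rate for reported violent crime
print(f"at most {violent_reported * violent_cleared:.0%} of violent offenses are cleared")
# About 21% before accounting for erroneous arrests and cleared cases that never lead to
# prosecution of the actual offender, which pushes the effective probability below 20%.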

Clearance rates in black neighborhoods are even worse. The events over the last decade validate Bill Stuntz’s observation that “poor black neighborhoods see too little of the kinds of policing and criminal punishment that do the most good, and too much of the kinds that do the most harm.”89Stuntz, supra note 15, at 497; see also Randall Kennedy, Race, Crime, and the Law 19, 158–60 (1997). Dampening crime in lower-income black communities is a civil rights goal of longstanding stature.90Forman, supra note 7, at 11 (“African Americans have always viewed the protection of black lives as a civil rights issue, whether the threat comes from police officers or street criminals.”), 61 (recounting the editorials in journals that served black D.C. neighborhoods that demanded more law enforcement to ensure that black neighborhoods stay peaceful), 128. Bennett Capers described underenforcement as the criminal justice problem that gets short shrift,91Capers, Techno-Policing, supra note 5, at 497. and that was before George Floyd’s murder made police violence and over-policing an issue of pressing global salience. There is some squeamishness today in discussing crime in black neighborhoods (and certainly in referring to that crime as “black on black”), but it is foolish to expect criminal justice reform to be lasting and meaningful if it does not tackle both of the scourges afflicting inner-city neighborhoods: harsh policing and civilian violence.

The most obvious and natural way to curb future violent crime is to increase the detection of very serious crimes today.92Mark Kleiman’s work catalogued a set of “dynamic concentration” probation and drug treatment programs that were unusually successful at recidivism reduction. Kleiman, supra note 20, at 34–65. They depended on good detection. Id. at 164. Kleiman pointed out that predatory crimes—those that terrorize and corrupt communities the most—are also the hardest to observe. Id. at 165. I am suggesting here that technology may give us the opportunity to run Kleiman-style compassionate crime control programs at a much more ambitious scale. Some scholars, Tom Tyler chief among them, have made the case that in the long run, law-abiding behavior has less to do with criminal law enforcement tactics than with cultural, economic, community, and norms-based factors.93Tom Tyler, Why People Obey the Law 171 (2006). Occasionally, this insight has been oversimplified and distorted to leave the impression that law enforcement detection rates have nothing to do with crime rates.94Shaila Dewan, Refund the Police? Why It Might Not Reduce Crime, N.Y. Times (Nov. 8, 2021), https://www.nytimes.com/2021/11/08/us/police-crime.html [https://perma.cc/U56T-8EPP]. This is a mischaracterization of the evidence.95Even Tyler’s work demonstrates that belief that lawbreakers will be caught and punished has a sizable and statistically significant impact on behavior. Tyler, supra note 93, at 59. While there are multiple “root causes” of crime,96Crime rates are the result of many social and economic factors that fall outside the realm of criminal law enforcement, such as population demographics (when the population is disproportionately young, there is more crime), fluctuations in the black market for drugs and other vices, environmental toxins (some criminologists have associated lead poisoning to impulsive and criminal behavior), and changes in the access to guns. Forman, supra note 7, at 50. data and common sense confirm that holding other factors steady, criminal behavior is sensitive to the probability of law enforcement detection. The relevant criminology studies consistently find evidence that detection reduces the incidence of future crime.97See, e.g., Aaron Chalfin & Justin McCrary, Criminal Deterrence: A Review of the Literature, 55 J. Econ. Lit. 5, 13–15, 23–29 (2017) (finding abundant evidence that crime is reduced when police manpower and redeployments increase, and much less consensus in the literature on severe punishment); Steven N. Durlauf & Daniel S. Nagin, Imprisonment and Crime: Can Both Be Reduced?, 10 Crim. & Pub. Pol’y 9, 17 (2011); Daniel S. Nagin, Deterrence in the Twenty-First Century, 42 Crime & Just. 199, 201 (2013); Daniel S. Nagin, Deterrence: A Review of the Evidence by a Criminologist for Economists, 5 Ann. Rev. Econ. 83, 88 (2013); Jeffrey Grogger, Certainty vs. Severity of Punishment, 29 Econ. Inquiry 297, 307–09 (1991); Kleiman, supra note 20, at 74–78; Jennifer L. Doleac, How Do State Crime Policies Affect Other States? The Externalities of State DNA Database Laws 1–3 (Dec. 2016) (unpublished manuscript), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2892046 [https://perma.cc/2KP5-7FHJ]. There is also some evidence that the swiftness of enforcement—the “celerity”—makes a difference.98Chalfin & McCrary, supra note 97, at 10.

Increased detection of crime not only reduces crime rates but also improves other measures of social mobility and security. Greater crime detection increases the likelihood that offenders will seek and find employment, enroll in education, and live in a stable family environment, and it reduces school absenteeism in the community.99Anne Sofie Tegner Anker, Jennifer L. Doleac & Rasmus Landersø, The Effects of DNA Databases on the Deterrence and Detection of Offenders, 13 Am. Econ. J. Applied Econ. 194, 195 (2021). Indeed, given how dramatically detection increases pro-social behavior, it is not at all clear that law enforcement should even be distinguished from the so-called “root causes” of crime. Fear that crime will not be well controlled is a root of many of the root causes of crime.100“Safe streets are a necessary platform for neighborhood growth and prosperity. . . . [T]he notion that poverty is the mother of crime has been turned on its head.” Philip J. Cook, Assessing Urban Crime and Its Control: An Overview 3 (Nat’l Bureau of Econ. Rsch., Working Paper No. 13781, 2008). To be clear, there are plenty of independent reasons to endorse or adopt the rehabilitative programs that criminologists and criminal justice scholars propose. See, e.g., Rachel Elise Barkow, Prisoners of Politics 76–77 (2019), for an example of an argument in favor of focusing on rehabilitative programs. But scholars like Barkow do not discuss the possibility that greater detection of crime can reduce both crime rates and net punishment.

So, an enduring and well-documented fact is that an increased likelihood of detection and enforcement drives crime rates down. This is much less true, and possibly not true at all, for the severity of punishment, where increasing the length of prison sentences has been found to have no impact or even criminogenic effects.101Chalfin & McCrary, supra note 97, at 23–29. Thus, the state’s essential duty to protect its constituents from the violence and exploitation of others is well served by good detection. Unfortunately, crime rates are currently left to the American criminal justice system’s haphazard style of enforcement: occasional, error-prone, and harsh.102This critique, it should be noted, dates back to the eighteenth-century work of Jeremy Bentham and Cesare Beccaria. See generally Raymond Paternoster, How Much Do We Really Know About Criminal Deterrence?, 100 J. Crim. L. & Criminology 765 (2010).

D.  Decreased Discretion for Suspect Selection

Filtered dragnets are crime-driven rather than suspect-driven. In suspect-driven investigations, police have developed suspicion—or a hunch—around a particular individual and focus their observations in an attempt to develop a case.103Slobogin, supra note 19, at 322–23. Even Big Data–assisted suspect-driven investigations appear to perform poorly at identifying the individuals who actually committed a crime. John S. Hollywood, Kenneth N. McKay, Dulani Woods & Denis Agniel, RAND Corp., Real-Time Crime Centers in Chicago: Evaluation of the Chicago Police Department’s Strategic Decision Support Centers 36 (2019). Suspect-driven investigations are propelled by the theories of police officers and proceed within their discretionary control. Police also have some control over filtered dragnet investigations (e.g., over where and when to deploy them), but once they are put into service, police lose control over the results. If facial recognition or reverse searches identify a wealthy or politically connected individual as the suspect in a crime, it will be much more difficult for police and prosecutors to avoid pursuing investigation and prosecution than in cases where police use informants or witnesses as the main source of identification.

In later Parts, this Article describes the ways in which police can still exercise too much discretion by, for instance, using a filtered dragnet tool preferentially to solve some crimes and not using it on others that are substantially similar. But we should not lose sight of the ways filtered dragnets do constrain discretion. One of the greatest risks from mass surveillance (that is, dragnets) is its potential to create a resource for selecting the suspect first and then finding a crime, or for using legal but sensitive information to discredit political enemies and personal foes.104Consider, for example, the NSA’s strategy of revealing the pornography viewing habits of religious radical critics of the U.S. government. Conor Friedersdorf, The NSA’s Porn-Surveillance Program: Not Safe for Democracy, The Atlantic (Nov. 27, 2013), https://theatlantic.com/politics/archive/2013/11/the-nsas-porn-surveillance-program-not-safe-for-democracy/281914 [http://web.archive.org/web/20230323142324/https://www.theatlantic.com/politics/archive/2013/11/the-nsas-porn-surveillance-program-not-safe-for-democracy/281914]. Police cannot exert this type of control over filtered dragnets.105At least, they cannot exert control so easily. In Section IV.B, I will discuss how police units could still tamper with the process through the selection of crimes to solve or by avoiding or removing the analysis of a subset of constituents’ data.

The Supreme Court caselaw that has found fault with Big Data policing has involved digital searches in which the police first selected their target and then accessed long histories of their target’s whereabouts without a warrant.106Carpenter v. United States, 138 S. Ct. 2206, 2212 (2018) (accessing several days’ worth of geolocation data of a specific target); United States v. Jones, 565 U.S. 400, 403 (2012) (involving GPS tracking of a specific target). The Court is right to constrain investigations that permit police to access sensitive and detailed information without any justification or checking mechanism. Even when police have developed suspicion against a target, the low-tech factors that go into building up suspicion about a particular individual (e.g., testimony from an informant or presence in a “high crime neighborhood”) can impose an indirect racial tax on innocent minorities that could mostly be avoided with filtered surveillance programs that have very low error.107Kennedy, supra note 89, at 159; Ian Ayres & Jonathan Borowsky, ACLU of So. Cal., A Study of Racially Disparate Outcomes in the Los Angeles Police Department 27 (Oct. 2008), https://www.aclusocal.org/sites/default/files/wp-content/uploads/2015/09/11837125-LAPD-Racial-Profiling-Report-ACLU.pdf [https://perma.cc/U9GK-7BTU]; Floyd v. City of New York, 959 F. Supp. 2d 540, 556, 584 (S.D.N.Y. 2013). NYPD data showed that a substantial portion of the Terry stops (a.k.a. “stop-and-frisk”) had a predictably low chance of actually leading to the discovery of contraband based on the factors the police claimed were present. Sharad Goel, Maya Perelman, Ravi Shroff & David Alan Sklansky, Combatting Police Discrimination in the Age of Big Data, 20 New Crim. L. Rev. 181, 213 (2017).

Not all agree with this assessment. Kiel Brennan-Marquez has argued that “nothing about the logic or practice of data-driven law enforcement makes [] redistributive impulses necessary. On the contrary, they will be hard fought—and particularly in our current political climate, unlikely.”108Brennan-Marquez, supra note 2, at 490. I share a certain degree of Brennan-Marquez’s cynicism (I have wondered, for example, whether law enforcement’s sloth-like pace in adopting crime-driven investigation practices rather than suspect-based practices is related to the loss of control over defining the pool of suspects),109Police use most of these tools as a last resort, perhaps because self-preservation of police discretionary power and popular (if ill-conceived) public resentment toward big data policing happen to push in the same direction. but he goes too far. There already is some evidence that data-driven policing has redistributed the costs of law enforcement and will continue to do so. DNA-based exonerations, for example, have proven the innocence of disproportionately more minority convicts than white convicts.110Edwin Grimsley, What Wrongful Convictions Teach Us About Racial Inequality, Innocence Project (Sept. 26, 2012), https://innocenceproject.org/what-wrongful-convictions-teach-us-about-racial-inequality [https://perma.cc/V3U6-R4FQ]. This suggests that, going forward, DNA-based investigations will shift police focus not only toward the guilty, but also away from wrongfully accused Black and minority suspects.

E.  Decreased Risk to Victims, Witnesses, and Suspects

Police investigations cause a range of problems that are not captured by the variables I have discussed so far (privacy intrusions, erroneous arrests, et cetera). When police have to rely on old-school methods of case investigation, the system necessarily puts victims, witnesses, and suspects at risk of physical or economic harm.

Let us start with crime victims and witnesses. Cooperating with the government is a perilous activity for these individuals, as captured by the saying “snitches get stitches.”111Stuntz, supra note 15, at 4, 79–80. Drug and gun charges, by contrast, can be proven using physical evidence without any cooperating witnesses. On “snitches get stitches,” see Snitches Get Stitches—Meaning, Origin and Usage, English Grammar Lessons (Dec. 12, 2021), https://english-grammar-lessons.com/snitches-get-stitches-meaning [https://perma.cc/C242-MRDN]. By one theory, clearance rates for serious crimes are low in the U.S. because proving homicide or robbery cases requires victims and witnesses to testify and put themselves at risk.112In Washington, D.C., residents reported gunshots to 911 or police only 12% of the time as compared with the gunfire incidents detected by ShotSpotter technologies. The study found that crime is disproportionately underreported, and thus under-investigated, in minority and low-income neighborhoods. Jillian B. Carr & Jennifer L. Doleac, Brookings Inst., The Geography, Incidence, and Underreporting of Gun Violence: New Evidence Using ShotSpotter Data 2 (Apr. 2016), https://www.brookings.edu/wp-content/uploads/2016/07/Carr_Doleac_gunfire_underreporting.pdf [https://perma.cc/G7P6-3JBU]. Bill Stuntz hypothesized that police forces increased their focus on drug and gun possession charges because these crimes were “self-proving” once contraband was discovered, and therefore did not necessitate the cooperation of a victim or witness.113Stuntz, supra note 15, at 4. As a result, more serious crimes were harder to clear than low-level crimes. But, of course, those are the crimes that are more damaging to the community. If reverse searches, facial recognition, and other filtered dragnets could allow police to prove cases independently, without exposing victims and witnesses to the risk of social stigma and retaliation, they would contribute benefits to society that are not accounted for in the usual privacy-versus-security debates.

As for suspects, the manner in which traditional policing builds up cases leaves much to be desired. Police stops and searches are often vectors for bias and disrespect, where swearing, insults, unwarranted accusations and suspicion, and unjustified physical contact lead to demoralization and distrust.114Capers, supra note 59, at 1243–44 (referring to “hard surveillance” and distinguishing it from soft forms); Forman, supra note 7, at 171. Traditional investigations are costly in terms of time, fear, property damage, and general unpleasantness. A person who is pulled over for a secondary inspection when a police dog alerts to her car may very well have no recourse when the police slash open the seats of her car to try to find drugs. Home searches and interrogations cause additional physical, emotional, and economic strain to suspects, irrespective of what sort of private information is revealed. These costs will become more obvious and more salient when technology obviates the need for a government agent to tear open the upholstery of a suspect’s car, dishevel a dresser, and “grope[] and grab[] our children” at the airport.115As Senator Ron Paul colorfully puts it. Capers, supra note 59, at 1286.

***

In combination, these factors show that filtered dragnets should be part of any responsible law enforcement program. They extend the “Pareto frontier” by allowing privacy and crime detection to increase at the same time.116As Part IV argues, the fact that filtered dragnets can rapidly increase crime detection is also the source of their risk. It would be counterproductive for law to prohibit their use based on a formalistic or expansive notion of Fourth Amendment protection. And yet, as the next Part shows, there is some risk that courts and lawmakers may do just that.

III.  FILTERED DRAGNETS AND PRIVACY

Most of the courts, scholars, and civil society organizations that have considered the societal impact of filtered dragnets such as geofencing and reverse keyword searches have concluded that they pose serious threats to privacy.117See, e.g., Guariglia, supra note 6. Putting aside for a moment whether filtered dragnets are consistent with the full set of Fourth Amendment principles, this Part argues that filtered dragnets pose almost no threat to Fourth Amendment privacy. What I mean is, among all of the meanings and purposes that the right to privacy is meant to capture, the only ones that are meaningfully violated by filtered dragnets are related to abuses of power. The privacy expectations of the non-offender, which are the ones that predominate in Fourth Amendment analysis, suffer at most a technical violation. If we separate out the anti-authoritarian goals of privacy, nothing is left of the privacy critique of filtered dragnets.

This does not mean that filtered dragnets are harmless—to the contrary, as Part IV will argue, they pose significant dangers to civil liberties. But by ruling out privacy as the vector of abuse, courts can harvest the benefits of analytical precision and adjust Fourth Amendment law to better match the actual problems. This Part describes how courts and scholars have responded to filtered dragnets so far and then explains why Fourth Amendment privacy principles are so poorly suited to address the underlying concerns.

A.  Judicial Reactions to Filtered Dragnets

Courts are not prepared for the challenges that filtered surveillance poses to Fourth Amendment jurisprudence. Indeed, they are struggling as it is to find principled limits in more common and straightforward digital dragnet cases.118For example, Carpenter v. United States, 138 S. Ct. 2206 (2018), wherein the Supreme Court considered the government’s access to seven days’ worth of cell site geolocation data and reached a holding without a rule. The access to the records constituted a search requiring a warrant and probable cause, but the Court refused to say whether accessing data for a more limited amount of time would also be treated as a search. Id. at 2217 n.3.

So far, lower court opinions are surprisingly unfriendly to the technologies and practices that will be the predicates to filtered dragnets. For example, Baltimore tried to set up a program called Aerial Investigation Research (“AIR”) in which its police department collected and retained 45 days’ worth of aerial surveillance footage but would not be allowed to access the footage unless a violent crime occurred and was likely to have been caught on camera.119Slobogin, Suspectless Searches, supra note 29, at 962. Civil liberties organizations successfully challenged the program, arguing that the Fourth Amendment should prohibit the government from amassing data that can be used for longitudinal location tracking no matter how constrained the Baltimore Police Department’s access and use of the data might be.120Leaders of a Beautiful Struggle v. Balt. Police Dep’t, 2 F.4th 330, 346 (4th Cir. 2021) (en banc). The Fourth Circuit treated the theoretical possibility of government access to the information as a sufficient reason to find that a Fourth Amendment search of all Baltimore residents took place, regardless of the program’s design, practice, and risk of abuse.121Id. If this reasoning is adopted throughout the judiciary, law enforcement agencies will not be able to collect their own information for filtered dragnets and will have to rely on data that is collected and held by private industry.

Many courts have expressed similar reservations when the government asks a private company like Google to trawl through its data to conduct reverse searches.122United States v. Chatrie, 590 F. Supp. 3d 901, 927 (E.D. Va. 2022). But these opinions suggest that a warrant process that is sufficiently narrow and “particularized” to avoid disclosing the data of innocent bystanders to the police would satisfy Fourth Amendment requirements.123Id. at 927–32. This leaves an opening for filtered surveillance. It suggests that the automated scan that Google or another third party would perform of all its data in the process of identifying responsive records would not be a search in and of itself. In other words, the focus of the courts that have analyzed geofence warrants is not on the data that is scanned at all, but on the data that is ultimately revealed to police.
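The scanned-versus-revealed distinction these courts draw can be expressed as a simple filtering routine. The sketch below is purely illustrative: the record fields, the “fingerprint” parameters, and the function names are invented for exposition and do not describe any provider’s actual system. Its only point is that a mechanical scan can touch every record while the readout to law enforcement is limited to records matching the crime’s parameters.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, List


@dataclass(frozen=True)
class LocationRecord:
    """A single, pseudonymous location record held by the provider (hypothetical schema)."""
    account_id: str
    lat: float
    lon: float
    timestamp: datetime


@dataclass(frozen=True)
class CrimeFingerprint:
    """Hypothetical crime-derived filter: a bounding box and a time window."""
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float
    start: datetime
    end: datetime

    def matches(self, record: LocationRecord) -> bool:
        # A record is responsive only if it falls inside the geofence and time window.
        return (self.min_lat <= record.lat <= self.max_lat
                and self.min_lon <= record.lon <= self.max_lon
                and self.start <= record.timestamp <= self.end)


def filtered_scan(records: Iterable[LocationRecord],
                  fingerprint: CrimeFingerprint) -> List[LocationRecord]:
    """Mechanically scan every record, but return only those matching the
    crime's parameters; non-matching records are never disclosed to the
    requesting agency."""
    return [r for r in records if fingerprint.matches(r)]
```

On this stylized account, every record is mechanically processed, but the only information that leaves the provider is the short list that filtered_scan returns; that is why the courts analyzing geofence warrants have focused on the narrowness of the returned list rather than on the scan itself.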

Courts might begin to clamp down on third-party scanning for law enforcement purposes following the logic of the Fourth Circuit’s decision in the Baltimore AIR case. Many scholars are advocating for this, as I describe next. But it is still not clear that filtered dragnets will be understood to be a search at all, given that they are designed to alert only when probable cause of a crime has been established. Even if police use computing technologies to automatically scan through large amounts of personal data, the constitutionally relevant event is the revelation of information to, and its use by, the government agents who are making decisions.124It is tempting to think the aggregation and accumulation of data for potential eventual use is itself a form of risk or harm. This is the reasoning behind the “mosaic theory,” which captured the attention of some courts and scholars. United States v. Maynard, 615 F.3d 544, 562 (D.C. Cir. 2010); Priscilla J. Smith, Nabiha Syed, David Thaw & Albert Wong, When Machines Are Watching: How Warrantless Use of GPS Surveillance Technology Violates the Fourth Amendment Right Against Unreasonable Searches, 121 Yale L.J. Online 177, 201 (2011). Orin Kerr, who coined the term, is skeptical that courts can make it work. Orin Kerr, The Mosaic Theory of the Fourth Amendment, 111 Mich. L. Rev. 311, 346–47 (2012). It is worth noting that this theory does not comport with the attitudes of Americans. Matthew B. Kugler & Lior Jacob Strahilevitz, Actual Expectations of Privacy, Fourth Amendment Doctrine, and the Mosaic Theory, 2015 Sup. Ct. Rev. 205, 248 (2016).

This is best captured by the binary search doctrine—the rule establishing that, for example, a drug dog’s alert is not a search under the Fourth Amendment because it reveals only the presence of contraband and criminal wrongdoing. There is little reason to believe the Supreme Court will backpedal. The Court has suggested that a universal fingerprinting database, possibly even one that requires involuntary contributions of fingerprints by individuals who are not yet in the database, could be justified, given that fingerprinting is an “inherently more reliable and effective crime-solving tool than eyewitness identification or confessions.”125Davis v. Mississippi, 394 U.S. 721, 727–28 (1969). More recently, in Maryland v. King, the Supreme Court found that police can forcibly swab an arrestee and cross-check his DNA against the database of DNA samples from unsolved crimes.126Maryland v. King, 569 U.S. 435, 465 (2013). The opinion focused almost entirely on the physical act of swabbing and took for granted that the cross-checking of a DNA sample against a crime database would not be a search because it reveals either nothing at all or only a high-confidence match to a crime.127See id. at 445, 461–62.

That said, some of the Supreme Court decisions written by Justice Scalia in the early 2010s incorporated a strong property-based formalism. In United States v. Jones, the use of a GPS device was a search not because of the sensitivity of the information gathered, but because of the physical trespass on the suspect’s car.128United States v. Jones, 565 U.S. 400, 403 (2012). And in Florida v. Jardines, the use of a drug-sniffing dog on a front porch was a violation of the Fourth Amendment because the practice involved a trespass coupled with information gathering.129Florida v. Jardines, 569 U.S. 1, 5–6 (2013). The fact that the information gathering took the form of a binary search did not cure the flaw, according to the majority.130Id. at 10–11. If Scalia’s formalism for real and tangible property is extended to personal data, filtered dragnets could be considered a search of all individuals whose data is mechanically scanned in the process, irrespective of how trivial the invasion to them may be.

Even if courts come to agree that mechanically processing data is a Fourth Amendment search, this would still not guarantee the death of the filtered dragnet. Filtered dragnets might qualify as reasonable searches under the special needs or checkpoint doctrines.131See Mich. Dep’t of State Police v. Sitz, 496 U.S. 444, 449–50 (1990); Illinois v. Lidster, 540 U.S. 419, 426–27 (2004). In the context of checkpoints, bulk searches, and other dragnets, the Supreme Court has articulated the factors that it would use to determine whether the searches are “reasonable” despite a lack of individualized suspicion. These factors include the intrusiveness of the search, the public and government interest that is served by the dragnet, and the degree of oversight or limitations on discretion that are involved.132See Christopher Slobogin, Government Dragnets, 73 Law & Contemp. Probs. 107, 107–08, 127 (2010). The Court focused on constraints on agents’ ad hoc discretion in United States v. Martinez-Fuerte, 428 U.S. 543, 559 (1976) (with respect to the location of a border and customs checkpoint). Justice Brennan, in dissent, pointed out that agents retained considerable discretion with respect to whom to focus on during the primary and secondary inspections, further underscoring the importance of agent discretion. See id. at 576 (Brennan, J., dissenting).

Thus, judicial reasoning seems to be on a collision course between (a) cases that are eager to expand the recognition of privacy rights to cover all data subjects in large databases whose information is theoretically accessible to police and (b) cases that find highly probative “binary searches” are outside the ambit of Fourth Amendment prohibition.

B.  Scholarly Reactions to Filtered Dragnets

Lawrence Lessig saw this train wreck coming. In Code, he pointed out that the Internet and digital information technologies would allow police to identify a perpetrator with high confidence while remaining blind, by design, to the intimate details of the innocent. He explained that this would cause the privacy rationale for Fourth Amendment protection to lose relevance, at least when filtered dragnet investigations are possible. He expected these technologies to drive a wedge between the privacy and anti-authoritarian justifications for criminal procedure when, in the past, the two types of arguments had traveled together.

Fourth Amendment scholars have doubled down on privacy.133See generally Sklansky, supra note 9; Ohm, supra note 9 (each arguing for stronger and more capacious conceptions of privacy under Fourth Amendment law that will limit access to information no matter how or why it is sought). Even scholars like Andrew Ferguson and Neil Richards, who have focused on tyranny and power, have used those terms synonymously with surveillance capability. Ferguson, supra note 9, at 262–63, 266. They have lumped filtered dragnets together with all other digital surveillance in order to hinder police access. The scholarly treatment of dragnets of every sort, including the filtered sort, still suffers from analytical chaos because of value judgments and predictions that too often stay latent in the scholarship.134Christopher Slobogin took stock of the “analytical extremism” over a decade ago, and not much has changed. Slobogin, supra note 132, at 109. As a result, scholars are all over the map in terms of the proper treatment of digital dragnets, and none have focused on the right factors.

A few examples. Daphna Renan has argued that the collection and retention of data, and the theoretical capability of law enforcement to access it, are alone sufficient to constitute a privacy harm. In her view, consent or a warrant should be required before the government collects any privately held data, and even before it accesses or requests machine scanning of that data by third parties, irrespective of how limited and careful the readout is.135Daphna Renan, The Fourth Amendment as Administrative Governance, 68 Stan. L. Rev. 1039, 1042, 1054–55 (2016). Natalie Ram has approvingly held up Maryland’s law prohibiting law enforcement from using genomic databases to solve crimes unless it has received consent from all individuals whose data is in the genomic dataset.136Ram et al., supra note 70, at 1078–79. She has argued that Americans have a constitutional right, under the Carpenter decision, to the privacy of the genomic data held by a private third-party company and that unless consent to a law enforcement search is exhibited in some way, the police should not be able to ask or force the company to identify a match to a criminal sample. Natalie Ram, Genetic Privacy After Carpenter, 105 Va. L. Rev. 1357, 1366–67 (2019). More generally, these scholars treat access to data, rather than how it is used, as the sine qua non of Fourth Amendment analysis and ask why anybody should be under “lifetime surveillance.”137Lazer & Meyer, supra note 33, at 904 (summarizing what other scholars have asked with respect to including juveniles in DNA databases).

Scott Sundby and Nadine Strossen take the more moderate position that dragnets (of any sort) should be used only as a last resort,138Scott E. Sundby, A Return to Fourth Amendment Basics: Undoing the Mischief of Camara and Terry, 72 Minn. L. Rev. 383, 446 (1988); Nadine Strossen, The Fourth Amendment in the Balance: Accurately Setting the Scales Through the Least Intrusive Alternative Analysis, 63 N.Y.U. L. Rev. 1173, 1176, 1197 (1988) (suggesting a challenged investigation should be invalid if there is a less intrusive option, and finding mass searches are more intrusive than individualized ones). though it is not clear they would apply their conclusions to filtered dragnets in particular. Eldar Haber, in considering how the Internet of Things can become a rich source of police investigatory data for reverse searches, advocates for a warrant requirement that goes beyond the “super-warrant” requirements of the current Wiretap Act to create an “ultra-warrant” requirement.139Haber, supra note 50, at 785. Since the super-warrant requires police to exhaust all other means of investigating before securing a wiretap warrant, the effect and objective of Haber’s recommendation are similar to Sundby’s and Strossen’s—to ensure that the criminal justice system strongly disfavors use of Internet of Things data in investigations.14018 U.S.C. § 2518. Haber’s reasoning is also consistent with Justice O’Connor’s reasoning in a dissenting opinion, in which she argued that suspicionless inspections should be permitted only when law enforcement would not be effective using traditional police tactics that build up reasonable suspicion or probable cause before a search takes place. See Vernonia Sch. Dist. 47J v. Acton, 515 U.S. 646, 674 (1995) (O’Connor, J., dissenting).

Continuing down the spectrum, some scholars appreciate the potential benefits of filtered dragnets and have advocated for a style of restraint that differs from prohibition or probable cause–based warrant requirements. Stephen Henderson and Kiel Brennan-Marquez argue that police departments should have a budget for searches and seizures (including digital investigations that, at least right now, operate outside the formal definition of a Fourth Amendment search) so that they are incentivized to use the most efficacious practices rather than the most expedient ones.141Kiel Brennan-Marquez & Stephen Henderson, Search and Seizure Budgets, 13 U.C. Irvine L. Rev. 389, 396–97 (2023). In my opinion, it would make more sense to limit government power by imposing a “prison budget” so that the state is forced to reserve incarceration resources for their most effective uses. See Kleiman, supra note 20, at 785. Christopher Slobogin has explicitly called for a more nuanced understanding of dragnets and suspicionless surveillance. He would allow dragnets that meet a standard of “generalized reasonable suspicion,” where their efficacy outweighs the privacy intrusion enough to merit their use in criminal investigations.142Slobogin, supra note 132, at 139–40. Slobogin measures efficacy using the hit rate—the chance that an investigative technique will reveal relevant criminal evidence. Id. at 139. However, it is not entirely clear what he uses as the denominator in a hit rate. If courts are supposed to ask whether a person whose data is disclosed to police by a filtered dragnet is highly likely to be guilty of the investigated crime, filtered dragnets will always have high efficacy because they are defined to meet this standard. If the denominator comprises all individuals whose data is mechanically processed to find matches to the “fingerprint” of a crime, none of the filtered dragnets will meet the standard. Jeffrey Bellin recommends locating the Fourth Amendment interest in databases with the owner or holder of data, rather than the subject of the data searches, which would give a company the right either to consent to a search or to demand a warrant.143Jeffrey Bellin, Fourth Amendment Textualism, 118 Mich. L. Rev. 233, 270–72 (2019) (articulating an openness to considering some types of data and documents as personal to the consumer rather than owned and controlled by the third-party service provider, so context would play a role in edge cases under his proposal). Andrew Ferguson would allow the use of dragnets as long as the legislative branch explicitly authorizes it.144Ferguson, supra note 9, at 272.
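To make concrete why the choice of denominator in Slobogin’s hit-rate measure (see note 142) is decisive, consider a deliberately hypothetical illustration; the figures are invented for exposition and are not drawn from any study or deployment. Suppose a reverse search mechanically scans one million accounts and discloses a single account that matches the crime’s fingerprint with an estimated ninety-five percent probability of involvement:

\[
\text{hit rate per disclosed person} = \frac{0.95}{1} = 95\%,
\qquad
\text{hit rate per scanned person} = \frac{0.95}{1{,}000{,}000} \approx 0.0001\%.
\]

The same program looks nearly perfect under the first denominator and vanishingly ineffective under the second, which is why the standard cannot be applied to filtered dragnets without first settling that definitional choice.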

Reaching the other end of the spectrum, some scholars (myself included) see the use of filtered dragnets as a move toward justice rather than away from it.145See generally Bambauer, supra note 26. The prohibition of a highly reliable investigation tool is unethical when the prohibition would push police toward more invasive and less accurate investigation techniques and when serious crime would too often go undeterred. David Kaye and Michael Smith have made this argument with respect to DNA matching.146D.H. Kaye & Michael E. Smith, DNA Identification Databases: Legality, Legitimacy, and the Case for Population-Wide Coverage, 2003 Wis. L. Rev. 413.

Where does this leave us? Hopefully with an open mind and a hunger for reasoning from first principles.

C.  The Pointlessness of Fourth Amendment Privacy

Filtered dragnets will disrupt the equilibrium between the government, criminals, victims, and bystanders. That is obvious enough. Orin Kerr has made the descriptive and normative claim that courts intuitively adjust Fourth Amendment rules to strike a new balance between privacy and security whenever the government gains a significant new surveillance capability.147Orin S. Kerr, An Equilibrium-Adjustment Theory of the Fourth Amendment, 125 Harv. L. Rev. 476, 488–89 (2011). Filtered dragnets implicate only a few Fourth Amendment interests, and those few are not well served by the reasonable expectations of privacy test, by the warrant requirement, or even by intuitive adjustments. We are in new terrain in which a technology increases both privacy and crime control.

1.  Theoretical Dimensions of Fourth Amendment Privacy

Drawing on a rich literature that catalogues and elucidates the concept of privacy,148Some attempts to organize the privacy discourse use different stages of the information life cycle. See generally, e.g., Daniel J. Solove, A Taxonomy of Privacy, 154 U. Pa. L. Rev. 477 (2006); Jane Bambauer, The New Intrusion, 88 Notre Dame L. Rev. 205 (2012). For the purposes of this Article, I have focused more heavily on articles that discuss the various types of risks and harms that occur when privacy is violated. the discussion below focuses on the privacy interests that arise most frequently in the context of government intrusions and surveillance:

i.  Freedom from Embarrassing Revelations, Social Dislocation, and Harassment

Perhaps the most common and robust form of privacy is the recognition that everybody has some legitimate, pro-social reason to want to keep licit details about their lives away from at least a subset of people.149Sklansky, supra note 9, at 1107–10 (using the concept of refuge). They want the freedom that comes from relative obscurity,150See generally Woodrow Hartzog & Evan Selinger, Surveillance as Loss of Obscurity, 72 Wash. & Lee L. Rev. 1343 (2015). where their decisions and behavior are not under the scrutiny and judgment of others.151Julie E. Cohen, Examined Lives: Informational Privacy and the Subject as Object, 52 Stan. L. Rev. 1373, 1377 (2000); Danielle Keats Citron & Daniel J. Solove, Privacy Harms, 102 B.U. L. Rev. 793, 854 (2022); see also Jane Bambauer & Tal Zarsky, The Algorithm Game, 94 Notre Dame L. Rev. 1, 23 (2018); Danielle Keats Citron, The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age 55–57 (2022) (describing how governments around the world have used details about licit-but-scandalous love affairs or other sexual secrets to suppress dissent). Everybody deserves to be shielded, at least to some degree, from embarrassment over the things they have said or done that did not cause any lasting harm to others and that can be misunderstood.152See Citron & Solove, supra note 151, at 837 (discussing reputational harms).

The scope of this interest ranges from trivial embarrassments (the regrettable hairstyle, the piece of toilet paper stuck to a shoe) to the truly life-changing (the ostracism of an HIV diagnosis, the physical attack carried out with the help of location information).153See Richards, supra note 65, at 146–51, 157–62. Much of the time, the sensitivity of a piece of information will depend greatly on context,154See generally Helen Nissenbaum, Privacy in Context (2010). but the point is that “everyone has facts about themselves that they don’t want shared, disclosed, or broadcast indiscriminately.”155Richards, supra note 65, at 73. When information is permitted to leap from one context to another and to be used in unexpected ways, it will cause harm.156See Solove, supra note 148, at 487–88; Cohen, supra note 151, at 1377; Richards, supra note 65, at 134, 142–45.

Filtered dragnets relieve, rather than exacerbate, these concerns. By shielding data from police (and everyone else) unless and until they match the fingerprint of a crime, filtered dragnets keep as much information private as practically possible.157Relatedly, filtered dragnets, when used as designed, will mitigate problems related to the dissolving boundaries between the state, private industry, and society by greatly limiting disclosure and use by law enforcement. For a description of dissolving boundaries, see Bernard E. Harcourt, Exposed 187–216 (2015). Indeed, if more police investigations were conducted through filtered dragnets, members of the community would be much more obscure and unknown vis-à-vis the state as compared with programs that involve heavy use of interviews, street patrols, traffic stops, and home searches.

ii.  Freedom from Manipulation

An actor can exploit access to another person’s data by discovering their vulnerabilities or gaps in rationality and then using those to persuade, cajole, or threaten the data subject into doing something.158See Richards, supra note 65, at 151; Citron & Solove, supra note 151, at 846. Again, as with freedom from embarrassment, filtered dragnets present a lower, rather than higher, risk of this sort because law enforcement and other government actors are blinded to nonrelevant information. The only use to which the dragnet data are put is solving a crime.

iii.  Freedom from Indignity

The privacy literature prizes at least two forms of dignity that are not captured in other concepts on this list. First, privacy intrusions sometimes bring about an indignity from being singled out for suspicion.159One reason that courts have concluded that roadblock-style DUI checkpoints are reasonable under the Fourth Amendment is that all people are treated with equal indignity. This is borne out in public opinion surveys, where checkpoints and roadblocks are consistently rated as being a relatively low intrusion compared with other investigation techniques. See Christopher Slobogin & Joseph Schumacher, Reasonable Expectations of Privacy and Autonomy in Fourth Amendment Cases: An Empirical Look at ‘Understandings Recognized and Permitted by Society’, 42 Duke L.J. 727, 738 (1993). Dragnets, whatever their faults, do not single anyone out. Nearly everybody suffers the same indignity when bulk data is scanned, just as they do at TSA checkpoints and DUI roadblocks.160This may explain why survey research finds that respondents generally do not find roadblocks intrusive; only 24% believed that they violate a reasonable expectation of privacy. James W. Hazel & Christopher Slobogin, ‘A World of Difference’? Law Enforcement, Genetic Data, and the Fourth Amendment, 70 Duke L.J. 705, 745 (2021). Another form of dignity concerns being treated as a human rather than being processed as a faceless line of data. This has some overlap with the concept of “individualized suspicion,” which I will discuss below, and which (in my opinion) filtered dragnets more than adequately satisfy. Nonetheless, it is undeniable that filtered dragnets are entirely mechanical up until the point when a limited set of information is disclosed to police. Whether this should make a difference in the moral and legal status of filtered dragnets, though, is debatable.161See generally Frederick Schauer, Profiles, Probabilities, and Stereotypes (2006) (raising doubts about the differences between mechanical profiling and individualized consideration).

iv.  Freedom from Anxiety

A common theme throughout the discourse is the loss of control and the uncertainty and anxiety that arise from it.162See, e.g., Citron & Solove, supra note 151, at 841–42. When the government has personal information about a subject, the subject is uncertain how the information could be used and fears that it may be used against them. This fear is, in and of itself, a social cost. Kiel Brennan-Marquez has argued that new data-gathering technologies create, and to some extent have already created, an omnipresent low-level form of anxiety similar to the feeling one gets when seeing a patrol car in the rear-view mirror and “feeling your pulse quicken; awareness heightened and senses alert, as you try not to break any traffic rules.”163Brennan-Marquez, supra note 2, at 488.

A natural follow-up question is: What havoc can the government cause with data?164Although some would quibble, most privacy scholars at least implicitly recognize (and sometimes explicitly state) that privacy has primarily an instrumental value rather than an intrinsic one. See Richards, supra note 65, at 6. Richards later claims that “privacy is like other social goods, like public health or the environment,” id. at 97, but this seems incorrect to me. Personal and environmental health are both intrinsic goods—more of each is an end in itself, and there is no such thing as too much. The greatest risk posed by filtered dragnets is to offenders, and it is the risk that their offense (and nothing more) will be detected. Thus, for filtered dragnets, freedom from anxiety calls for a freedom from law enforcement itself. It vindicates the rights of the supposedly “guilty” rather than the innocent. Fourth Amendment privacy recognizes no such interest.

2.  Routine Compliance with Reasonable Expectations of Privacy

Data-driven policing has inspired a series of gloomy articles arguing that the Fourth Amendment’s reasonable expectations of privacy test has become irrelevant.165See, e.g., Ohm, supra note 9, at 1320; Kimberly N. Brown, Outsourcing, Data Insourcing, and the Irrelevant Constitution, 49 Ga. L. Rev. 607, 659–63 (2015). As long as the third-party doctrine stands, permitting police to access data held by third-party companies without justification or oversight, privacy will be insufficiently protected. I agree with these scholars.166Bambauer, supra note 26, at 209. But courts are already addressing this problem. Cases like Carpenter v. United States—in which the Supreme Court found that police access to several days’ worth of geolocation data constitutes a search that requires a warrant or an appropriate warrant exception—have proven that for suspect-driven searches, Fourth Amendment privacy is not yet irrelevant and is becoming more powerful by the day.167Carpenter v. United States, 138 S. Ct. 2206, 2209 (2018).

Nevertheless, the reasonable expectations of privacy test is very unlikely to impede the adoption of filtered dragnets. That test has repeatedly been interpreted to deny privacy interests of the guilty. “[A]ny interest in possessing contraband cannot be deemed ‘legitimate,’ and thus government conduct that only reveals the possession of contraband ‘compromises no legitimate privacy interest.’ ”168Illinois v. Caballes, 543 U.S. 405, 408 (2005). Jed Rubenfeld’s synthesis of Fourth Amendment caselaw seems to get it right: the Fourth Amendment aspires to support “a justified belief that if we do not break the law, our personal lives will remain our own.”169Jed Rubenfeld, The End of Privacy, 61 Stan. L. Rev. 101, 129 (2008) (differentiating the Fourth Amendment’s guarantee to security from a right to privacy). Filtered dragnets pass this test.170For binary searches, the reasonable expectations of privacy test adopts the “nothing to hide” attitude that privacy scholars very often condemn. See Richards, supra note 65, at 134. See generally Daniel J. Solove, Nothing to Hide: The False Trade-Off Between Privacy and Security (2011). Despite the scholarly criticism, it is an attitude that the general public shares with the Court. Public opinion surveys demonstrate that Americans’ taste for privacy is strongly influenced by whether they believe the person being searched has committed a crime or not. See Slobogin & Schumacher, supra note 159, at 759.

To be clear, there are reasons, independent of privacy, to protect law-violators-as-violators. These arguments, which I describe in depth in the next Part, are critical for understanding the threat from filtered dragnets. But they are only loosely related to “privacy” as the term is typically used, and they will not be incorporated into the reasonable expectations of privacy test unless that test is changed beyond all recognition.

3.  The Irrelevance of the Warrant Requirement

In United States v. Chatrie, the geofence case described earlier, the court suggested it would approve a geofence warrant process if a magistrate or court made a probable cause determination before the geolocation data of a target were de-anonymized.171United States v. Chatrie, 590 F. Supp. 3d 901, 927 (E.D. Va. 2022). Generalizing to other filtered dragnets, law enforcement would seek a warrant after the filtered dragnet system alerts, but before any identifying data is revealed.

This process might be a useful component of accountability and oversight, and it would help ensure that filtered dragnets are performing at or above the expected “hit rate,” but it is hard to imagine why such a warrant could ever be denied. A warrant is valid as long as it is issued by a neutral judge or magistrate, is based on probable cause, and states with sufficient particularity what is to be searched or seized.172California v. Acevedo, 500 U.S. 565, 569–72 (1991); Illinois v. Gates, 462 U.S. 213, 230 (1983). The standards for both probable cause and particularization will be met—more than met—given that the definition of filtered dragnets I am using requires them to withhold information until the probability that the target has engaged in the investigated crime meets a high standard. As for particularization, because the filtered dragnet procedure begins with the signatures of a crime and works backwards to find the perpetrator, the profile for matching (what I have been calling the “fingerprint” of the crime) is as particularized to a crime as it can be.173Emily Berman argues that one of the purposes of the individualization requirement of the Fourth Amendment is to provide an opportunity for a suspect to challenge the evidence and beliefs of a police officer who thought they had probable cause to make the stop or search. Emily Berman, Individualized Suspicion in the Age of Big Data, 105 Iowa L. Rev. 463, 467 (2020). In this example, the non-privacy goal can be reconciled and adapted to filtered dragnets by requiring law enforcement to review and understand the data that connect the suspect to a crime.

Privacy advocacy groups have argued that warrants issued for reverse searches are tantamount to general warrants because they do not identify (or even anticipate) a particular suspect before they are issued.174Guariglia, supra note 6. But the only similarity that geofence warrants have to general warrants from the Colonial Era is the lack of a named suspect. In every other way, geofence warrants restrict the information that is revealed to that which is closely linked to a particular crime. By comparison, general warrants authorized agents of the colonial government to look for stolen or untaxed goods anywhere the agent “[should] think convenient to search.”175Brennan-Marquez & Henderson, supra note 141, at 402 (citing William J. Cuddihy, The Fourth Amendment: Origins and Original Meaning 233 (2009)). The only manner in which the geofence warrant is unconstrained—by allowing police to discover who the suspect is rather than requiring police to come with a suspect in mind—is a feature of geofence warrants that should be praised, as it limits the discretion of the police to select their targets in advance. This is the critical distinction between filtered dragnets like geofence warrants or DNA searches and suspect-driven searches—one that scholars and commentators too frequently gloss over.176See generally, e.g., Ram, supra note 136 (comparing the suspect-driven search in Carpenter to the crime-driven searches in the DNA forensic setting without recognizing the categorical differences between the two).

Thus, a warrant requirement is largely irrelevant to the adoption of filtered dragnets, apart from the time, resources, and general system friction involved, because such warrants should routinely be granted.

***

Privacy scholars are courting disaster by lumping filtered dragnet techniques in with other types of dragnets and digital searches. Even if there are court victories in the short term, they will be pyrrhic. The very concept of “privacy” will become increasingly vulnerable to the “I have nothing to hide” argument that is loathed by the field (and rightly so).177See generally Solove, supra note 170. Courts might fail to sufficiently constrain unfiltered dragnets and suspect-driven investigations because of the utility and low harm of filtered dragnet techniques that happen to share the same Fourth Amendment bucket.

Arguments against mass surveillance often start with the observation that surveillance fundamentally shifts power from the surveilled to the surveillor.178“Privacy is about more than just keeping human information unknown or unknowable. . . . Put simply, privacy is about power.” Richards, supra note 65, at 3. Richards goes on to say, “we need to craft reasonable rules and protections so that we can maximize the good things about these technologies and minimize the bad things.” Id. at 5. This is true as far as it goes, but if the surveillor is constrained and can only see evidence of a crime, that power shift will often be a desirable one. In fact, assuming that the law is legitimate, the enforcement of a law is one of the most legitimate acts the government can do. The burden is therefore on surveillance scholars to explain why those who have violated the law may have justified interests in being protected from state detention and prosecution, even when their law-abiding conduct remains private. There are answers to this challenge, but they sound in tyranny rather than invasions of privacy. There is a virtue to being precise about the problems of filtered dragnets without reliance on capacious notions of privacy that would implicate nearly every law enforcement function.

IV.  FILTERED DRAGNETS AND TYRANNY

Filtered dragnets will provide a highly concentrated dose of criminal detection. Even though, in theory, the whole point of having law enforcement departments is to detect and prosecute crime, a drastic increase in criminal detection can have toxic effects on society. The other components of the criminal justice system came of age in a time of low detection, and their dynamics and interactions make sense only if detection continues to be difficult.

This Part begins by revisiting the interests, identified by privacy scholars, that would be affected by filtered dragnets. Each of them is really an anti-tyranny concern garbed in the language of privacy. If we are more explicit about the goals and analyze the risks of authoritarianism that filtered dragnets may drag along with them, the problems (and, therefore, the remedies) become much more obvious.

The true threats from filtered dragnets are these: (1) many Americans will confront a real risk of criminal liability based on our overbroad criminal codes; (2) prosecutions of those crimes could lead to life-altering detentions in our inhumane prison systems; and (3) without the shield of abysmally low detection rates, the only protection is lenity, which is no protection at all from a government that attempts to exert authoritarian power.

A.  Privacy as a Stalking Horse for Anti-Authoritarianism

Neil Richards claims that privacy is a necessary bulwark “if we want political freedom against the power of the state.”179Richards, supra note 65, at 7. But privacy is inadequate on its own to protect the broad range of liberty and equality interests that arise with abuse of power. Filtered dragnets prove it. They can be used to trample liberties and to serve the public unequally even though the government will not know any irrelevant details about licit activities.

Instead of trying to expand the meaning of “privacy” to tackle every possible state abuse, courts and criminal justice scholars alike should seize the moment and force constitutional theory to shift its focus from privacy to anti-authoritarian constraint. To be sure, courts should continue to refine the conception of Fourth Amendment privacy interests to address unfiltered digital dragnets. But if we have any hope of harnessing the great potential of filtered dragnets without creating a despot’s playground, the Supreme Court will need to simultaneously cultivate an anti-authoritarian strand of Fourth Amendment rules.

When surveillance scholars use the concept of privacy to curb abuses of power, they are concerned about unnecessary social control and abuses of discretion.180They are also concerned about illegal use of a tool by rogue agents. See, e.g., Lazer & Meyer, supra note 33, at 906 (misusing DNA databases to extract phenotypes). There is always a risk that the government will use surveillance tools in violation of constitutional rules, statutory restrictions, or its own internal policies, but compared to the opportunities individual officers have to abuse warrant or investigation practices in real space, filtered dragnets are more likely to be auditable.

1.  Unnecessary Social Control

Law enforcement serves the obvious and highly valued function of social control. As Kiel Brennan-Marquez explains, “we want people to worry about breaking the rules”181Brennan-Marquez, supra note 2, at 489.—at least, when the rules are good rules, and when the consequences for breaking rules are proportional and fair. However, Brennan-Marquez is concerned that data-driven policing tools will leave the police “awash in probable cause,” allowing them to stop, search, or arrest nearly anybody.182Id. at 491. This concern gets to the heart of the matter. But it is ultimately a critique of the substance of criminal law and the discretion of criminal justice decisionmakers. These are the same themes that Bill Stuntz repeatedly raised when he critiqued Fourth Amendment cases and scholars for allowing privacy to be a distraction from more pressing threats.183See generally Stuntz, supra note 15.

Let us return for a minute to Brennan-Marquez’s metaphorical driver who has just discovered a patrol car in the rearview mirror. If the government had done a massive purge of its penal codes, such that the only crimes left on the books were murder, rape, arson, armed robbery, and aggravated assault, and if false-positive police error were vanishingly small, would the driver feel anxiety? For a time after the change, yes, of course. There is always a short-term period of distrust and adjustment when technologies or rules change suddenly and dramatically.184People used to feel nervous about Caller ID, and at the advent of electricity, wealthy homeowners used to hire servants to turn on lights. Adam Thierer, Permissionless Innovation 70 (2016). But in the long run, anxiety would ebb under the pressure of persistent feedback of non-events and the absence of harm.

Public opinion surveys find that attitudes about privacy are mediated through attitudes about the substantive criminal law that is being enforced: a dog that sniffs for bombs is perceived as less privacy-invasive than a dog that sniffs for drugs even though the experience is identical for the investigation target (at least up until the moment the dog alerts).185Bambauer, supra note 25, at 1205. See also Slobogin & Schumacher, supra note 159, at 767 (speculating that the dangerousness of the investigated crime could explain some of their survey results). If assessments of privacy change not because of the revelations or techniques that are used but because of the crimes that are prosecuted, the concept of privacy is standing in for objections to the substance of the law.

The concern about unnecessary social control is better addressed by defining, as best we can, which types of antisocial conduct rise to the level of being worthy of criminal punishment and which do not. And the concern raises important questions about whether criminal violators are treated too harshly. Privacy is a blunt instrument for these purposes. It draws lines that have only a vague relationship to the distinctions we mean to draw.

2.  Selective Attention

Another serious concern is that police might make use of a system of surveillance to rifle through data for something to use against a specific person or group.186Dan Markel, Against Mercy, 88 Minn. L. Rev. 1421, 1476–77 (2003); Joh, supra note 17, at 200; Brennan-Marquez, supra note 2, at 490–92. Motivations could range from political persecution to racism to personal vengeance to simply wanting to make a quota or look good in performance metrics within a bureaucratized police department.

As with unjustified social control, the problem of discretion and selective attention is only indirectly related to privacy. Indeed, it is not even clear that privacy has any positive influence on police discretion. Privacy steers police toward information sources that disproportionately expose low-income and minority groups: if police cannot bring a drug-sniffing dog to a house, they will bring it to apartments and cars.187Bambauer, supra note 26, at 246. If police cannot search the full set of government and commercial DNA databases for a match to a crime scene sample, they will just use the government’s database of arrestee DNA data.188Ram et al., supra note 70, at 1078. At the same time, police can also engage in selective inattention by avoiding leads that could cause problems for friends or powerful people and by failing to give crimes perpetrated against low-status victims the same attention as the ones inflicted on high-status victims. When communities are under-protected, it is a form of too much privacy vis-à-vis the government.

The policy antidote to government discretion and bias is to directly limit discretion and bias. Filtered dragnets already do this, to some extent, because once they are employed, police lose control over who will ultimately be identified as a suspect. But law enforcement can still deploy filtered dragnets unfairly when selecting the neighborhoods or cases in which they will be used.189This is why Henderson’s and Brennan-Marquez’s proposal of search and seizure budgets seems inadequate to me: the concept of a budget does not guarantee that the budget will be spent wisely. See generally Brennan-Marquez & Henderson, supra note 141.

Thus, in the context of filtered dragnets, “privacy” concerns are attempting to capture and curb something bigger: too much social control at the discretion of the government.

B.  Filtered Dragnets and the Risks of Tyranny

An authoritarian regime thrives when it has unlimited discretion to issue stiff punishment based on criminal behavior that has negligible negative consequences (and possibly even positive consequences) to society. This threat is blunted if the state lacks the means to acquire evidence of criminal behavior, but with reliable surveillance mechanisms, law enforcement officials will be able to exert as much social control as they please, because nearly every person can be charged with a crime.190Kleiman, supra note 20, at 172–73.

Thus, filtered dragnets present risks that run along three vectors: (1) overbreadth of criminal law; (2) overly harsh punishment of criminals; and (3) overly discretionary investigations and enforcement. If these three forces remain unchecked, filtered dragnets could cause more harm than good. In the wrong hands, filtered dragnets could cause catastrophic risks of the sort that the Constitution is meant to prevent.

1.  Overbreadth of Criminal Law

A government that has the capacity to detect criminal behavior at very high rates must come under heightened standards of care when it promulgates or maintains its criminal laws. If we wince at the thought that everybody who commits a minor offense will get caught and will be prosecuted if they do not seem to qualify for a privilege or defense, this is a sign that the conduct is a poor fit for criminal law, and legislators must consider alternatives (e.g., warnings, civil fines, or positive incentives for pro-social conduct) instead.191Social stigma also provides a significant source of deterrence and self-control, often better than fear of punishment. Stuntz, supra note 15, at 52–53 (citing Daniel S. Nagin, Criminal Deterrence at the Outset of the Twenty-First Century, 23 Crime & Just. 1, 4–5 (1998)).

Right now, constitutional case law does very little to constrain the creation of criminal laws. Outside criminal statutes that would intrude upon specific individual liberties recognized in the Bill of Rights, the courts hold legislatures to very low standards of care (the rational basis test).192See generally Jeffrey D. Jackson, Classical Rational Basis and the Right to Be Free of Arbitrary Legislation, 14 Geo. J.L. & Pub. Pol’y 493 (2016). This latitude on substance has a curious relationship with the procedural restrictions imposed by the Fourth Amendment: as long as police have probable cause to believe that a person is violating or has violated a criminal law, police can make an arrest or initiate a search, no matter how trivial the offense. Thus, in Atwater v. Lago Vista, the Supreme Court found that the government acted within the bounds of the Constitution when a police officer arrested a woman who was driving with two small children for the violation of a seatbelt law.193Atwater v. Lago Vista, 532 U.S. 318, 323–24 (2001).

Even if the Court is reluctant to interfere with legislators’ management of criminal codes, common sense dictates that some crimes are much worse than others. The state’s attention should focus on conduct that causes serious harm to others. There is a reason, for example, that the states that have regulated familial DNA-matching programs have allowed their use only for serious offenses like murder and rape,194Ram, supra note 34, at 781. and Baltimore’s Aerial Investigation Research (“AIR”) system, before it was dismantled, was restricted to use in investigating a limited set of very serious crimes.195Slobogin, Suspectless Searches, supra note 29, at 962. It is the same reason that the federal Wiretap Act permits courts to issue wiretap orders only when there is probable cause to investigate one of the explicitly listed serious criminal offenses.19618 U.S.C. § 2516. The same impulse explains why there is scholarly criticism and public outrage when a surveillance system adopted for the purpose of detecting one set of serious criminal violations (like smuggling or terrorism) is simultaneously used to detect violations of drug laws.197Renan, supra note 135, at 1060–63 (describing slippage between “silos” of law enforcement). The unstated assumption is that some crimes should be detected as well as possible (terrorism, for instance) and some should not.198Craig Lerner, The Reasonableness of Probable Cause, 81 Tex. L. Rev. 951, 1019–22 (2003).

The fact that state and federal criminal law has dramatically expanded in quantity and complexity is not in dispute.199Silvergate, supra note 10, at 268. “All of this is to say, of course, that many of those prosecuted are not real criminals who engaged in real crimes defined by clear and reasonable laws.” Id. And yet, curiously, responses to the problem tend to focus on procedural rather than substantive limits.200See, e.g., Reynolds, supra note 10 (advocating for due process constraints on charging decisions). The unchecked growth of substantive criminal law ironically creates a problem for public safety because the fear of prosecution prompts a demand for privacy and law enforcement obstruction.201This is, in a nutshell, the reason that Paul Ohm and other privacy scholars use law enforcement efficiency as a measure of Fourth Amendment violations. Ohm, supra note 9, at 1346. As Mark Kleiman put it, “improved enforcement of a law that should not have been passed in the first place can be a loss rather than a gain.” Kleiman, supra note 20, at 172.

The first and most obvious reason to place limits on criminal liability is to reduce the opportunity for unnecessary social control. The relationship between the government and the governed changes profoundly when a crime has been committed. The defendant in Atwater should have put a seatbelt on her children, and the government has an interest in encouraging, even requiring, that behavior. But not through criminal law.202Josh Bowers has criticized the Atwater decision, arguing that the reasonableness requirement of a Fourth Amendment seizure should protect individuals from “pointless indignities.” Josh Bowers, Probable Cause, Constitutional Reasonableness, and the Unrecognized Point of a ‘Pointless Indignity’, 66 Stan. L. Rev. 987, 1010 (2014). Every arrest is an indignity, of course, so the power of Bowers’ observation is the pointlessness of Atwater’s arrest. A second reason to constrain the substance of criminal law is to increase compliance with the rules we care about most.203Bloated criminal codes reduce law-abiding conduct because they cause what Murat Mungan calls “stigma dilution.” Murat Mungan, Stigma Dilution and Over-Criminalization, 18 Am. L. & Econ. Rev. 88, 88 (2016). If functional and productive members of society are regularly engaged in violations of the criminal laws, the fact that a person has committed a crime (or has been convicted of it) loses its negative status signal. Overstuffed criminal codes also bleed into the problems of law enforcement discretion (discussed at greater length below) because the government has too much power to decide which members of the nation of criminals to send to prison.

Consider two examples that illuminate the problem through opposite ideological lenses. First, abortion will be criminalized in many states in light of Dobbs v. Jackson Women’s Health Organization.204Dobbs v. Jackson Women’s Health Org., 597 U.S. 215 (2022). Some states are considering criminal liability for women who seek out an abortion.205Andy Rose, Alabama Attorney General Says He Has Right to Prosecute People Who Facilitate Travel for Out-of-State Abortions, CNN (Aug. 31, 2023, 7:39 AM), https://www.cnn.com/2023/08/31/politics/alabama-attorney-general-abortion-prosecute [https://perma.cc/B7RP-ANNL]. For liberals and progressives, criminal liability for abortion-seekers represents an intolerable overreach of the state. To combat the substance of these laws, organizations such as the ACLU have already issued warnings about the risk that geofence searches could facilitate arrests and prosecutions under a law that a sizable portion of the state’s constituents believe is unjust.206Chad Marlow & Jennifer Stisa Granick, Celebrating an Important Victory in the Ongoing Fight Against Reverse Warrants, ACLU (Jan. 29, 2024), https://www.aclu.org/news/privacy-technology/fight-against-reverse-warrants-victory [https://perma.cc/C2PB-NGKH].

By contrast, conservatives might be concerned about overzealous enforcement of gun restrictions.207Several credit card networks now flag gun transactions automatically. Landon Mion, Visa Joins Mastercard, AmEx in Specifically Labeling Gun Store Sales, N.Y. Post (Sept. 11, 2022), https://nypost.com/2022/09/11/visa-joins-mastercard-amex-in-specifically-labeling-gun-store-sales [https://perma.cc/M554-C4L9]. Geolocation and credit card transaction data could be used to create a filtered dragnet that finds individuals without a gun license who cross state lines, attend a gun show, make a sizable purchase, and immediately return to their state.
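
To make the mechanics concrete, the gun-show example can be expressed as a simple boolean filter over joined records. The sketch below is purely illustrative: every data source, field name, and threshold (the $500 purchase floor, the one-day return window) is a hypothetical of my own, not a description of any real card network, geolocation vendor, or law enforcement system.

```python
# Purely illustrative: a toy version of the boolean filter described above.
# Every data source, field name, and threshold here is hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Trip:                 # hypothetical geolocation record
    person_id: str
    destination_state: str
    visited_gun_show: bool
    start: datetime
    end: datetime

@dataclass
class Transaction:          # hypothetical card-network record
    person_id: str
    merchant_category: str
    merchant_state: str
    amount: float
    time: datetime

def flag_candidates(home_state, licensed_ids, trips, transactions):
    """Return person_ids matching the filter sketched in the text: no license,
    an out-of-state gun-show trip, a sizable purchase on that trip, and a
    prompt return home."""
    tx_by_person = {}
    for tx in transactions:
        tx_by_person.setdefault(tx.person_id, []).append(tx)

    flagged = set()
    for trip in trips:
        if trip.person_id in licensed_ids:
            continue                                # filter: no gun license on record
        if trip.destination_state == home_state or not trip.visited_gun_show:
            continue                                # filter: out-of-state gun-show trip
        for tx in tx_by_person.get(trip.person_id, []):
            made_on_trip = trip.start <= tx.time <= trip.end
            sizable = tx.merchant_category == "gun_store" and tx.amount >= 500
            prompt_return = (trip.end - tx.time) <= timedelta(days=1)
            if made_on_trip and sizable and prompt_return:
                flagged.add(trip.person_id)         # every filter condition satisfied
    return flagged
```

The structural point, rather than the particular thresholds, is what matters: the dragnet’s output is fully determined by the filter criteria, so the pressure points for abuse are the choice of criteria and the choice of data pools rather than any individualized suspicion.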

In both cases, perceived flaws in the substance of the law would not be so troubling if the laws carried only modest punishments—warnings or fines, for example, rather than the incarceration and downstream labor and housing problems that inevitably follow conviction.208See generally James B. Jacobs, The Eternal Criminal Record (2015). But given the breadth and severity of criminal law, plus the mostly unchecked discretion that police departments have when deciding which among an ocean of technical criminal violations to investigate, the prospect of near-perfect detection takes on a more sinister character. Thus, when people have reservations about, for example, Alexa devices being used to detect the sounds of domestic violence, the reservations stem not from the specific use case but from the general capabilities. They wonder, for good reason, what mischief can be made from such a technology when the set of conduct that is forbidden and harshly punished is sprawling and unevenly enforced.209Jessica Bulman-Pozen & David E. Pozen, Uncivil Obedience, 115 Colum. L. Rev. 809 (2015) (illustrating that the set of legal rules operating on U.S. residents is often so unrealistic that fastidious obedience to them can annoy and frustrate law enforcement agents).

Criminal codes are often expanded when the state has not gotten a handle on crimes of violence and property theft. The criminalization of vice (alcohol and drugs) was supported by the community not necessarily out of concern about the harms the drugs themselves cause to users but because of the “unconscionable violence” that came along with trafficking and addiction.210Forman, supra note 7, at 129 (quoting Carl T. Rowan, Locking Up Thugs Is Not Vindictive, Washington Star (Apr. 23, 1976)). In other words, substantive criminal law is expanded to compensate for deficiencies in the detection and prosecution of crimes that were already on the books so that police could arrest for lower-level crimes and (stochastically) reduce the incidence of more serious crimes.211K. Jack Riley, Nancy Rodriguez, Greg Ridgeway, Dionne Barnes-Proby, Terry Fain, Nell Griffith Forge, Vincent Webb & Linda J. Demaine, Just Cause or Just Because?: Prosecution and Plea-Bargaining Resulting in Prison Sentences on Low-Level Drug Charges in California and Arizona 76 (2005). If detection of the serious crimes were more functional, this would relieve the need for sprawling criminal codes.

Hence the dilemma: better crime detection could help stop the pattern of an upward ratchet, but as long as the criminal codes are already sprawling, there will be resistance to increasing detection.

2.  Overly Harsh Punishment

On severity of punishment, the United States stands out among developed nations. We use incarceration intensively. In France and the U.K., a criminal who punches a person in the nose would be sentenced to less than six months in jail.212U.K. Parliament, Comparative Prison Sentences in the EU, House of Commons Library (2015), https://commonslibrary.parliament.uk/research-briefings/cbp-7218 [https://web.archive.org/web/20240510064827/https://commonslibrary.parliament.uk/research-briefings/cbp-7218/]. The same conduct in the U.S. would result in a sentence of about three years.213U.S. Sentencing Commission, Sourcebook of Federal Sentencing Statistics Table 15 (2020), https://www.ussc.gov/sites/default/files/pdf/research-and-publications/annual-reports-and-sourcebooks/2020/Table15.pdf [https://perma.cc/33WN-APC8]. Note, though, that the differences for non-violent offenses like theft appear to be smaller (fewer than 6 months in the U.K. compared to a median of 8 months in the U.S.). Id. Moreover, no outsider would mistake our prisons for institutions of rehabilitation: the entire sentence is usually carried out in a facility that is punishing, with drab quarters, humiliating toilet and bathroom facilities, and rancid food.214Craig Haney, Criminality in Context 335–44 (2020). Once released, the negative consequences continue as the housing and labor markets penalize criminal convicts.215Forman, supra note 7, at 219. See generally Michelle Alexander, The New Jim Crow: Mass Incarceration in the Age of Colorblindness (2012). Long sentences also create risks of abuse by giving police officers and other state agents leverage to extract bribes, pleas, and false confessions.216Dharmapala et al., supra note 67, at 111 (citing David Friedman, Why Not Hang Them All?: The Virtues of Inefficient Punishment, 107 J. Pol. Econ. S259 (1999)).

The harshness of our sentences is the byproduct of a low detection rate. Communities that at various times have been disfigured by crime waves tend to demand more and harsher criminal penalties.217James Forman Jr.’s book Locking Up Our Own documents the set of factors and conditions that led communities of color to make entirely understandable demands for greater punishment, even though the results of those efforts have not had their intended effects. Forman, supra note 7, at 124. The intuitive appeal of using long prison sentences to make up for low detection rates became the explicit policy of federal and local governments following the landmark work of Gary Becker. Becker modeled crime with a simple formula determined by the probability of conviction and the severity of punishment.218Gary S. Becker, Crime and Punishment: An Economic Approach, 76 J. Polit. Econ. 169, 170 (1968). See also A. Mitchell Polinsky & Steven Shavell, The Theory of Public Enforcement of Law, in Handbook of Law and Economics 421 (2007). Because it is much easier and cheaper for the state to ratchet up punishment than to catch more perpetrators, his work persuaded many politicians to manage crime through tough sentencing.219Cass R. Sunstein, David Schkade & Daniel Kahneman, Do People Want Optimal Deterrence?, 29 J. Legal Studs. 237 (2000).
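
For readers who want the formula on the page, a stylized version of Becker’s expected-utility condition can be written as follows. This is a simplified rendering in my own notation, not a quotation of Becker’s model.

```latex
% Stylized rendering of the deterrence logic in Becker (1968), in my notation.
% Y = gain from the offense, p = probability of detection and conviction,
% f = severity of punishment, U = the offender's utility function.
\[
  \mathbb{E}[U] \;=\; p\,U(Y - f) \;+\; (1 - p)\,U(Y),
  \qquad\text{which for a risk-neutral offender reduces to}\qquad
  \mathbb{E}[U] \;=\; Y - p f .
\]
```

In the risk-neutral case only the product of probability and severity matters, so a legislature can hold expected punishment constant by trading certainty for severity, and severity is by far the cheaper input for the state to supply.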

The sparseness of Becker’s model for crime rates leaves much to be desired for anybody looking for a comprehensive explanation for crime—crime, of course, has a range of social and economic causes220These are the levers most directly under the control of politically accountable legislators, mayors, police departments, and prosecutors, but there are of course other factors. See generally Stephen J. Schoenthaler & Ian D. Bier, The Effect of Vitamin-Mineral Supplementation on Juvenile Delinquency Among American Schoolchildren: A Randomized, Double-Blind Placebo-Controlled Trial, 6 J. Alt. & Complementary Med. 7 (2000) (discussing malnutrition as a factor in crime); Civic Research Institute, The Science, Treatment, and Prevention of Antisocial Behaviors (Diana H. Fishbein ed., 1999) (reviewing evidence of the impact of alcoholism, drug use, sexual abuse, cognitive and genetic factors, and family/gender role factors); Clifford R. Shaw & Henry D. McKay, Juvenile Delinquency and Urban Areas (1942) (discussing the effect of weakened or disorganized social institutions on crime; this work planted the roots of what would become the “broken windows” theory).—but as Part II explained, there is little doubt that detection has a significant influence over the amount of crime in a given community.221Executive Office of the President, Economic Perspectives on Incarceration and the Criminal Justice System 36–40 (2016) (citing to the empirical literature finding that increased incarceration reduces crime, but less effectively than equivalent increased spending on police); Andrew von Hirsch, Doing Justice: The Choice of Punishments 62–65 (1976). See generally Raymond Paternoster, The Deterrent Effect of the Perceived Certainty and Severity of Punishment: A Review of the Evidence and Issues, 4 Just. Q. 173 (1987); Beau Kilmer, Nancy Nicosia, Paul Heaton & Greg Midgette, Efficacy of Frequent Monitoring with Swift, Certain, and Modest Sanctions for Violations: Insights from South Dakota’s 24/7 Sobriety Project, 103 Am. J. Pub. Health e37 (2013); Lawrence W. Sherman, Police Crackdowns: Initial and Residual Deterrence, 12 Crime & Just. 1 (1990). Punishment, by contrast, seems to have a U-shaped relationship to recidivism, where no punishment and long, harsh punishment both tend to increase the odds that a perpetrator will recidivate.222Amanda Y. Agan, Jennifer L. Doleac & Anna Harvey, Misdemeanor Prosecution (Nat’l Bureau Econ. Rsch., Working Paper No. 28600, 2021).

I do not want to overstate the case for reducing prison time. Roughly half of the inmates in prison are individuals with such consistent sociopathic and antisocial behaviors that for those inmates, long-term incapacitation has positive externalities. Not only does incapacitation prevent these particular individuals from committing additional crimes, but their families and particularly their children may benefit from having less, rather than more, exposure to them.223See generally Samuel Norris, Matthew Pecenco & Jeffrey Weaver, The Effects of Parental and Sibling Incarceration: Evidence from Ohio, 111 Am. Econ. Rev. 2926 (2021); Sara R. Jaffee, Terrie E. Moffitt, Avshalom Caspi & Alan Taylor, Life with (or Without) Father: The Benefits of Living with Two Biological Parents Depends on the Father’s Antisocial Behavior, 74 Child Dev. 109 (2003). Nevertheless, harsh punishment does not appear to buy additional deterrence, and its social costs do not seem justified outside the context of heinous or repeated criminal activity.

Over-punishment and criminal detection are inextricably connected. We cannot expect to find a political will to reduce punishment unless the police have—and use—new means to detect and root out crime. Filtered dragnets can jolt and resettle the criminal justice system in a new equilibrium where detection, rather than harsh punishment, is the key mechanism for crime control.

3.  Discretionary Application

Once the police have committed to investigating a particular crime, filtered dragnets take away the police’s discretion to drive the investigation. But there are other points, before and after a filtered dragnet is used, at which government agents can exert control over the process:

i.  Selective Protection

When it comes to serious crimes of violence and theft, American police forces have a troubling history of systematically ignoring the suffering of minority communities. Police once actively conspired to deprive former slaves of their right to protection by joining the murderous mobs.224Stuntz, supra note 15, at 104–05. Over the subsequent century, police started to exhibit a more passive form of selection by simply not investigating and pursuing crimes committed against African-Americans as zealously as crimes committed against whites.225This trend can be seen in studies finding that models predicting enforcement and sentencing often include a large and statistically significant effect for the race of the victim (with white victims receiving better protection). John J. Donohue III, An Empirical Evaluation of the Connecticut Death Penalty System Since 1973: Are There Unlawful Racial, Gender, and Geographic Disparities?, 11 J. Empirical Legal Studs. 637, 640 (2014). This is a form of inequality that is not adequately addressed in constitutional caselaw.226In fact, in the context of capital sentencing, the Supreme Court has explicitly said that there is no constitutional guarantee that would prevent discretionary leniency from being exercised arbitrarily. McCleskey v. Kemp, 481 U.S. 279, 292 (1987). Thus, courts must prevent police from using filtered dragnets to solve crimes committed against one set of privileged crime victims while failing to use the same tools to solve comparable (and comparably detectable) crimes committed against others.

ii.  Selective Crackdowns

Police also decide which crimes to target,227Mila Sohoni, Crackdowns, 103 Va. L. Rev. 31, 33–34 (2017). and when and where to focus their resources.228See generally Jeffrey Fagan, Garth Davies & Adam Carlis, Race and Selective Enforcement in Public Housing, 9 J. Empirical Legal Studs. 697 (2012) (describing selective enforcement of criminal trespass by race or public housing status). For example, police will decide which crime scene images should be subjected to facial recognition. There is no guarantee that they will pursue arrest and prosecution of violent or destructive participants at Black Lives Matter protests or at pro-Trump rallies with the same vigor.

iii.  Controlling the Data

Whether police use government-held data or data held by private companies to operate a filtered dragnet, they can exert some influence over the process if they are allowed to run the dragnet over only a subset of the available information.229Indeed, this is one counterintuitive reason it may be better to have police access data from third-party companies rather than collecting it themselves, so that private industry may serve as a source of public information and whistleblowing. Farhang Heydari, Hoover Inst., Aegis Series Paper No. 2106, Understanding Police Reliance on Private Data 6 (2021). For example, if the government were able to limit DNA-matching to the data collected from ex-convicts only, or if a geofence warrant could direct a service provider to look for matching records only among customers who live in a certain precinct, the police could do an end run around the discretion-reducing function of filtered dragnets.
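
The mechanism is easy to see in code. In the sketch below (illustrative only; the function and record names are hypothetical, not drawn from any real system), the matching routine itself is neutral, and all of the discretion lives in the choice of which pool the operator is permitted to feed it.

```python
# Illustrative only: the matching step is neutral; discretion re-enters
# through the operator's choice of search pool.
def run_filtered_dragnet(crime_scene_evidence, pool, matches):
    """Return the identifier of every record in `pool` that the supplied
    `matches` predicate links to the crime-scene evidence."""
    return {record["person_id"] for record in pool
            if matches(crime_scene_evidence, record)}

# full_pool:     every accessible record (e.g., all service-provider customers,
#                all databases reachable through lawful process).
# narrowed_pool: the same data restricted in advance (e.g., arrestees only,
#                or customers of a single precinct).
# Running the identical query over narrowed_pool predetermines which
# populations can and cannot be identified -- the end run described above.
```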

iv.  Downstream Decisions

After a suspect is identified by a filtered dragnet, police and prosecutors still have unchecked power to use leniency and to simply not pursue the leads that they do not like.230Discretion among judges at the point of sentencing seems to reduce racial disparities or, at least, make them no worse. See Drug Arrests Stayed High Even as Imprisonment Fell From 2009 to 2019, Pew Charitable Trs. (Feb. 15, 2022) https://www.pewtrusts.org/en/research-and-analysis/issue-briefs/2022/02/drug-arrests-stayed-high-even-as-imprisonment-fell-from-2009-to-2019 [https://perma.cc/Z65C-26JF]. It is possible that institutional and cultural influences downstream have started to change the risks of disparate racial impact over time. See generally Joshua B. Fischman & Max M. Schanzenbach, Racial Disparities Under the Federal Sentencing Guidelines: The Role of Judicial Discretion and Mandatory Minimums, 9 J. Empirical Legal Studs. 729 (2012).

The unifying theme across these decision-making practices is that the Supreme Court has avoided interfering with law enforcement discretion any time it has a plausible connection to judgments about the best use of resources. In Whren v. United States, the Supreme Court rejected a constitutional challenge by a criminal defendant who was pulled over for turning without signaling. The defendant argued that the police would not have pulled over a white person, or any person about whom the police did not have a pre-existing “hunch,” under similar circumstances.231Whren v. United States, 517 U.S. 806, 809 (1996). The Court believed that the defendant’s theory of unequal enforcement of minor traffic infractions was irrelevant and unworkable.232Id. at 815. At the time, it probably was.233In individual cases, it would have been difficult to prove that race was a but-for cause of a police officer’s decision to conduct a seizure. However, even at the time, some argued that the fact that race clearly played a role systemically should have been sufficient for the Court to decide that pretextual stops violated the Fourth Amendment. See Tracey Maclin, Race and the Fourth Amendment, 51 Vand. L. Rev. 333, 375 (1998). But it is no longer, and it will be even less so in the future. Today, a defendant bringing a case like Whren might have the data, thanks to GPS tracking of police and civilian cars, to demonstrate that police pull over only a small fraction of the illegal U-turns and other traffic infractions that they observe, and that the enforcement disproportionately targets minority drivers (if this is so).234Christopher Slobogin has characterized law enforcement use of pretextual stops as a species of general warrant. Slobogin, Virtual Searches, supra note 29, at 102.
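
A sketch of the kind of computation such data would support appears below. It is purely illustrative: the records, field names, and numbers are hypothetical, and nothing here describes an actual dataset or litigation workflow.

```python
# Illustrative only: computing per-group enforcement rates from hypothetical
# records of observed infractions (e.g., inferred from GPS co-location of
# patrol and civilian vehicles) and whether each observation led to a stop.
from collections import defaultdict

def enforcement_rates(observed_infractions):
    """observed_infractions: iterable of (driver_group, was_stopped) pairs,
    one per infraction an officer was positioned to observe.
    Returns the stop rate for each group."""
    seen = defaultdict(int)
    stopped = defaultdict(int)
    for group, was_stopped in observed_infractions:
        seen[group] += 1
        if was_stopped:
            stopped[group] += 1
    return {group: stopped[group] / seen[group] for group in seen}

# If officers observe thousands of identical infractions, stop only a small
# fraction of them overall, but stop one group at several times that rate,
# the theory Whren treated as unworkable becomes an ordinary statistical claim.
example = [("group_a", True), ("group_a", False), ("group_b", False), ("group_b", False)]
rates = enforcement_rates(example)   # {'group_a': 0.5, 'group_b': 0.0}
```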

If police are able to use filtered surveillance to solve crimes at minimal expense, there will be even less need for discretion. So, if police have a filtered dragnet, courts must make sure they have an acceptable response to the question: “Why did you enforce the criminal law here and not there?”235See generally Harcourt & Meares, supra note 18 (recommending that the degree of suspicion and the evenhandedness of a search program should be of utmost Fourth Amendment importance).

In summary, a government that has the capacity to detect criminal behavior at very high rates must come under heightened standards of care with respect to the promulgation of criminal laws, the use of incarceration and punishment, and the application of detection tools.

V.  THE ANTI-AUTHORITARIAN FOURTH AMENDMENT

Anti-authoritarianism, rather than privacy, should be the benchmark for the Fourth Amendment when police develop cases using filtered dragnets. What makes facial recognition or a geofence or some other form of filtered dragnet “reasonable” is not that the privacy of the innocent is protected—they will all do that. Rather, an “unreasonable” use of these technologies means the state is misusing its power to punish and control.

The current trajectory of Fourth Amendment caselaw suggests that we are headed for one of two suboptimal endpoints: either the state will be able to use filtered dragnets with little to protect its citizens from the perils of broad criminal laws, harsh criminal sentences, and selective enforcement, or the state will effectively be prohibited from using filtered dragnets, leaving a criminal justice status quo that nobody would devise and few would defend.236Barkow, supra note 100, at 5 (“One could say our approach to crime is a failed government program on an epic scale, except for the fact it is not a program at all. It is the cumulative effect of many isolated decisions to pursue tough policies without analyzing them to consider whether they work or, even worse, are harmful.”). But if the courts start to take seriously the fundamental differences between filtered dragnets and other investigation techniques—if they recognize that technology can explode longstanding assumptions about the nature of risk when police increase the detection of crime—courts can harness the disruptive technology and help society land in a better equilibrium.

Thus, the Fourth Amendment must evolve to demand “reasonableness” when detection is easy. The thrust of my proposal is that the phrase “reasonable searches and seizures” should be understood as a more expansive and robust guarantee of reasonableness.237To some extent, this builds on the constitutional case law and scholarship that give the “reasonableness” phrase pride of place in Fourth Amendment interpretation. See Akhil Reed Amar, The Constitution and Criminal Procedure: First Principles 35 (1997); Miriam H. Baer, Law Enforcement’s Lochner, 105 Minn. L. Rev. 1667, 1730 (2021); Renan, supra note 135, at 1044, 1081–82. Specifically, the requirement of “reasonable” seizures should guarantee that the consequences of a seizure (e.g., carceral arrest and a possible prison sentence) are fitting and proportionate to the gravity of the suspected crime. The requirement of “reasonable” searches should guarantee not only that the search is conducted based on probable cause and in line with established warrant requirements, but also that the decision to search or not search is reasonable and non-arbitrary. The former ensures that the criminal law being enforced is serious enough to justify the loss of rights that comes along with an arrest or a long sentence. The latter ensures that criminal detection tools are used in an even-handed manner.

A.  Reasonable Seizing—Restricting the Substantive Criminal Law

The prospect of near-perfect detection requires more care in defining a reasonable seizure. In order for a carceral seizure of a person to be reasonable, the state’s use of force and coercion must be justified by the harm that the arrestee has imposed on society. “Freedom from unreasonable . . . seizures” should be interpreted to protect the interests of individuals who have engaged in conduct that is technically illegal but not morally reprehensible.238See generally Robert M. Cover, Violence and the Word, 95 Yale L.J. 1601, 1608 (1986) (reminding readers that all prison sentences are backed by the credible threat of state violence). Again, my argument is similar to Bill Stuntz’s work suggesting that the physical intrusion and coercion of the policing process are the main source of trouble. William J. Stuntz, Privacy’s Problem and the Law of Criminal Procedure, 93 Mich. L. Rev. 1016, 1026 (1995). Thomas Jefferson’s unfinished vision laid out in the Declaration of the Rights of Man and of the Citizen provides the blueprint. Article 4 states, “Liberty consists in the power to do anything that does not injure others”; Article 5 states, “The law has the right to forbid only such actions as are injurious to society”; and Article 8 states, “The law ought to establish only penalties that are strictly and obviously necessary.”239Declaration of the Rights of Man and of the Citizen (France 1789), https://avalon.law.yale.edu/18th_century/rightsof.asp [https://perma.cc/VZF7-CZ6G].

A seizure should only be reasonable if the underlying criminal conduct and the resulting punishment are also reasonable. While substantive due process rights and the Eighth Amendment provide some absolute constitutional limits against unreasonable criminal codes or punishments, these rights must be bolstered in the face of near-perfect detection. An analysis of reasonable seizures in light of filtered dragnets has two aspects to it: (1) whether the behavior is sufficiently blameworthy to belong in the criminal code at all, and (2) if so, whether the punishment fits the risks and harms of the crime.

Is the conduct crime-worthy? The first inquiry asks whether the suspect’s conduct is bad enough to justify arrest and incarceration at all.240Given the public interest in having the state intermediate misdemeanor and civil infractions as well, non-carceral short-term seizures should not require judicial scrutiny of the substance of the law. See Rachel A. Harmon, Why Arrest?, 115 Mich. L. Rev. 307, 359 (2016). This is a threshold issue. A criminal conviction needs to mark conduct that is blameworthy and to carry stigma. Defining what sort of conduct is “blameworthy” raises deep philosophical questions, but there is an aspect of the question that is empirical: it needs to be rare. If the conduct captured by the scope of the criminal codes is commonplace, the actor’s community evidently has not incorporated restraint deeply into its moral fabric.241A useful methodology may be the sort of surveys of past behavior that Tom Tyler relied on in his seminal work, Why People Obey the Law. One survey of Chicago residents suggested that there might be a natural breakpoint between minor traffic violations and neighborhood infractions, in which survey respondents sometimes engaged (even if rarely), and the conduct in which over 90% of respondents state they have never engaged (e.g., theft). Tyler, supra note 93, at 41. In those cases, government intervention short of criminal liability (including expressive law, civil fines, or positive reinforcement for its opposite) should be used.242To increase cultural legitimacy, the law should rely more on reputation and relationship consequences than on punishment. Stuntz, supra note 15, at 30–31. One broad category of criminal laws that may deserve constitutional scrutiny is laws that criminalize the possession or sale of contraband items to adults. These are acts that are transactional. Kleiman, supra note 20, at 154–55.

This is at odds with cases like Atwater, where the Court refused to second-guess a local government’s decision to criminalize a minor driving infraction,243Atwater v. Lago Vista, 532 U.S. 318, 323–24 (2001). but Fourth Amendment case law does occasionally break ranks with Atwater and peek at the substance of the criminal violation in order to gauge the reasonableness of a procedure. For example, when analyzing whether a warrantless traffic checkpoint is constitutional as a reasonable warrantless seizure, the Supreme Court explicitly considers “the gravity of the public concerns served by the seizure” as one of the factors.244Illinois v. Lidster, 540 U.S. 419, 427 (2004) (quoting Brown v. Texas, 443 U.S. 47, 51 (1979)). And the Court has refused to allow exigent circumstances to excuse the failure to secure a warrant for a home search and arrest when the underlying crime is a minor offense.245Welsh v. Wisconsin, 466 U.S. 740, 750 (1984) (citing McDonald v. United States, 335 U.S. 451, 459–60 (Jackson, J., concurring)). And Atwater is ahistorical: a quick tour of the notorious cases the Crown directed against colonists, cases that inspired the Bill of Rights, shows that they were offensive, in large part, because of the substance of the crimes. These included crimes such as writing or publishing “gross and scandalous reflections and invectives upon his majesty’s government” or the crimes of illegal trade and inadequate record-keeping.246Laura K. Donohue, The Original Fourth Amendment, 83 U. Chi. L. Rev. 1181, 1197 (quoting Entick v. Carrington, 19 Howell’s State Trials 1029, 1034 (CP 1765)), 1199 (publishing criticism), 1243 (illegal trade and recordkeeping), 1247 (same) (2016). Moreover, Donohue describes the limits in eighteenth-century England to the meaning of the term “felon” or “felony,” which included only the most morally reprehensible crimes such as murder, theft, suicide, rape, and arson. Id. at 1222–23.

Is the punishment too harsh? If the suspect’s conduct is reprehensible enough to pass the initial threshold test, a post-conviction seizure could still be unreasonable if the quality and length of detention are disproportionately harsh.247Andrew von Hirsch, Doing Justice: The Choice of Punishments 66–83 (1976). The sentences for many crimes, even violent crimes, could probably be reduced to weeks or days, or even converted to non-carceral forms of punishment (like public service or surveillance-enabled supervised release) without increasing crime rates if detection rates were much higher than they currently are. Long-term prison sentences can be reserved for murder, treason, severe sexual assault, severe child abuse, and for the incapacitation of repeat criminals.248See generally Eric Helland & Alexander Tabarrok, Does Three Strikes Deter?: A Nonparametric Estimation, 42 J. Hum. Res. 309 (2007) (finding significant deterrent effect, and not just incapacitation effect, from three strikes laws). For other crimes, detection through filtered dragnets, rather than a small chance of very harsh punishment, can be the doorstop that halts the metaphorical revolving door of recidivism.

B.  Reasonable Searching—Minimizing Discretion

A police department’s use of filtered dragnets will be fair if it avoids both gaps in protection from crime and selective pockets of leniency in enforcement.

1.  Duty to Search

All cases of reported or otherwise known crimes that are equally suitable for filtered dragnets should be investigated.249At the very least, they should be investigated randomly rather than haphazardly. See Harcourt & Meares, supra note 18, at 851–54. For example, if a police department can use filtered dragnets to detect gun violence or robberies, and it fails to investigate daytime violence and robberies taking place near low-income schools even though it investigates every daytime robbery or assault that takes place near high-income schools,250Forman, supra note 7, at 125. the uneven use of filtered dragnets would render it an unreasonable search. As a practical matter, while it would make more sense for a constitutional challenge to come in the form of a § 1983 claim brought by a resident who is harmed by a detectable or deterrable crime, the challenge is more likely to emerge when a criminal defendant brings a claim similar to the claim brought in Whren (arguing that although they committed an offense, the crime is unequally enforced).251Whren v. United States, 517 U.S. 806, 810 (1996). Courts should be open to a claim and evidentiary proof of this sort.

2.  Duty to Cast a Large Dragnet

Law enforcement should not have undue control over defining the search pool that a filtered dragnet will use. The database cross-checked against the facts of a crime should include everyone whose data is accessible and whose participation in the crime is not impossible. This reduces the risk of arbitrariness or bias that could result if police search for potential leads and matches in one population while ignoring another.

By this standard, facial recognition systems like Clearview AI are more legitimate (in the sense of being less susceptible to bias or discretion, at least) when they match surveillance footage at a crime scene against the largest possible set of publicly available portraits on the open web. Contrast this with DNA filtered dragnets: it is increasingly common and popular to restrict local law enforcement running DNA searches to CODIS, the federally maintained database of arrestee or convict DNA samples.252Kaye & Smith, supra note 146, at 414–15; Ram, supra note 34, at 789 (arguing that it is not fair to subject relatives of people who are in the CODIS database to more police scrutiny than relatives of those who are not). Local police departments have expanded their DNA databases by choosing to include “exclusion samples” (that is, DNA samples collected from suspects or victims) and juvenile defendants. Lazer & Meyer, supra note 33, at 904. Whatever rationale might justify subjecting convicts to greater likelihood of being caught in their own future crimes, the logic does not extend to arrestees or to individuals whose crimes are detected through familial DNA.253Lazer & Meyer, supra note 33, at 909–11. Commentators have noted the race disparities in likelihood of detection that result from using arrestee DNA only. Ram, supra note 34, at 789.

The principle of evenhanded enforcement is consonant with what Bennett Capers meant when he argued that equitable policing may require “redistributing privacy.”254Bennett Capers, supra note 59, at 1243–45 (“In exchange for a reduction in hard surveillance of people of color, it will require an increase in soft surveillance of everyone.”). But it may require courts to enforce subpoenas or issue warrants in order to pierce through corporate policies that resist law enforcement access.255See generally Yan Fang, Internet Technology Companies as Evidence Intermediaries, 110 Va. L. Rev. (forthcoming 2024). These policies are already in place at some companies.256Ancestry, Ancestry Privacy Statement (Aug. 11, 2020), https://www.ancestry.com/c/legal/privacystatement_2020_8_11#:~:text=In%20the%20interest%20of%20transparency,data%20across%20all%20our%20sites.&text=We%20may%20share%20your%20Personal,(e.g.%2C%20subpoenas%2C%20warrants)%3B [https://perma.cc/Y8NN-FSXJ]. Of course, there may be times when law enforcement resources really are constrained so that investigating every trackable crime or casting the widest possible dragnet will not be possible, but the police should be able to offer some reasonable explanation. And an explanation that would not be reasonable is that too many individuals would be caught: if the availability of filtered dragnets forces law enforcement to confront the problem that there are too many criminal acts, the proper government response is to revisit and narrow or purge some of the substantive criminal laws.

C.  Police Culture: The Era of the Nerdy Police Force

The adoption of filtered dragnets will require law enforcement agencies to become more technocratic. Much of the initial investigation work is likely to be centralized in the hands of upper management working at desks, and compliance with Fourth Amendment restrictions will require competence, if not expertise, in statistical methods and data auditing procedures. To some extent, this change in operations is already happening with the gradual introduction of DNA forensic labs, facial recognition, and now, reverse searches. With clear Fourth Amendment guidance for filtered dragnets, police forces could rapidly adopt them and divest somewhat from traditional techniques. Police operations would shift away from self-initiated patrols and field-based investigation toward data-driven initiation and investigation. This will change who is qualified for and attracted to a policing job. Police investigators who are used to solving cases through interrogations and informants will begin to feel like the baseball scouts who still visit high school and college teams looking for “good legs” while their younger, nerdier, and (eventually) better-paid colleagues use Bill James-style statistics to prioritize the team’s recruiting efforts.257See generally Michael Lewis, Moneyball (2003).

This may prove to be a feature—a way to achieve the reform of police culture by working backwards from shared ends that are appealing to both suburban families and Black Lives Matter activists (lowering crime, reducing false convictions, and achieving even-handed enforcement). The cultural shift can provide counterpressure to a problem that currently plagues police recruitment—that the people most interested in working for law enforcement have stronger-than-average preferences for meting out punishment.258Dharmapala et al., supra note 67, at 107. All the more reason civil liberties organizations should reconsider their instinctive negative reactions to filtered dragnets.

The criminal defense bar may be transformed, too. Andrew Ferguson has made the case that law enforcement data-collection and data-mining practices can be inverted to discover negligent or abusive practices within police departments.259Andrew Guthrie Ferguson, The Exclusionary Rule in the Age of Blue Data, 72 Vand. L. Rev. 561, 600–08 (2019). Defendants can make use of “blue data” to prove, for example, that law enforcement used an unreasonably narrow dragnet.260Id. To be fully effective, blue data investigations may require increased transparency and access to police programs. See generally Hannah Bloch-Wehba, Visible Policing: Technology, Transparency, and Democratic Control, 109 Calif. L. Rev. 917 (2021). This may offend a police department’s sense of agency and self-determination, but this is a reasonable price to pay for the power and efficiency of filtered dragnets.261Some will no doubt be concerned that filtered dragnets are a progression of the sort of bureaucratization of policing that has already caused dysfunction—the Compstat meetings, bulk, assembly-line adjudication, et cetera. Stuntz, supra note 15, at 57. But it is not clear that there are viable alternatives to a bureaucratic police force.

VI.  ADDRESSING FRIENDLY OBJECTIONS

Some readers will no doubt disagree with my description of the looming opportunities and problems that will arise with filtered dragnets, and as a result will reject the policy solutions offered in Part V. I addressed doubts about the upsides of filtered surveillance and the downsides of near-perfect detection as best I could in those earlier Parts. Whatever disagreements about the policy implications remain will have to be aired in other fora. Here, I address some objections that will be raised even by readers who agree that the policies advanced in this Article are sound.

“Friendly” critics will wonder why it is necessary to constitutionalize these policies rather than advocating for a legislative response. The answer, in brief, is that constitutional protections are the only viable tools when several criminal justice rules must be changed at the same time.

Friendly critics may also wonder why the Fourth Amendment is the right vehicle for course correction even if all agree that constitutional law must be pressed into service. On this question, I am more neutral. If the Eighth Amendment and Due Process clauses can be interpreted to reach the same anti-authoritarian objectives, there is little reason to insist on the Fourth Amendment as the primary source of these rights. But since filtered dragnets will inevitably cause seismic activity in Fourth Amendment law, and since highly efficient searches are the reason that the threat of government tyranny will become more pronounced, it is at least fair to say that the Fourth Amendment could be the right constitutional source for the anti-authoritarian rights described in Part V.

A.  Why the Courts? (Or, Why Not the Legislature?)

Not every problem in law enforcement needs to be solved through the Constitution, but this one does. The political process is exceedingly unlikely to get us out of our criminal justice rut, where low detection rates are messily compensated for through criminal liability for minor infractions. Political winds swing from too much lenity to authoritarian severity,262Stuntz, supra note 15, at 34–35. and as a result, surveillance restrictions and decriminalization usually rise and fall together depending on whether the mood is pro-rights or anti-crime. Political institutions do not have the tools to break surveillance and substantive criminal law apart and to work out a criminal justice horse trade. But a horse trade is what we need: we simultaneously need the police to detect more violent crime while also ensuring that no person who is caught with a $10 baggie of drugs could ever be in a position to go to prison for the rest of their life.263Forman, supra note 7, at 121 (describing a former client in this position). Even the more probable outcome—a five-year sentence, say, id. at 122—is vastly over-punitive compared to the risk of harm posed to the community. See generally Jane Bambauer & Andrea Roth, From Damage Caps to Decarceration: Extending Tort Law Safeguards to Criminal Sentencing, 101 B.U. L. Rev. 1667 (2021).

This trade—reduced criminal liability in exchange for greater detection—can only be accomplished through constitutional adjustment. If criminal liability and punishment are reduced without a simultaneous increase in detection, crime rates will rise and the ballot box consequences for political actors will be harsh. If detection capacity is increased without any change to the criminal codes, the political actors’ constituents will be justifiably nervous about how the newfound power of detection will be used. But if the two reforms happen at the same time—if the state is constrained by constitutional interpretation from detaining or imprisoning individuals based on minor infractions, or from levying long sentences for anything other than the most serious and violent offenses—surveillance is defanged because the threat of unjust prosecution is reduced.264See generally Bambauer & Roth, supra note 263 (using a new empirical approach to measure just sentences and finding that criminal sentences are disproportionate to the social harm the crimes caused).

Put another way, the political pressure to limit or ban surveillance tools might make sense as a second-best solution if decriminalization and reduced sentencing are politically infeasible, but the risk is that the strategy can lock out the first-best solution—the low-penalty, high-detection solution. Indeed, in the wake of rising murder rates, the decriminalization and police reform movements are already more politically controversial than they were just a couple of years ago. If crime rates continue to rise while detection is capped or suppressed through new legal constraints on technology, politically accountable decisionmakers will continue to use mass incarceration to manage crime.

To be fair, many luminaries in the field of criminal justice have seen roughly the same patterns of dysfunction and technological disruption that I have recounted and have recommended solutions in the form of legislation, administrative regulation, and restoring the role of local government. Bill Stuntz, for example, argued that many of the abuses of power in the criminal justice system would be avoided if local governments (rather than states) were the primary promulgators of criminal law and if juries (rather than prosecutors) were the decisionmakers who most often determined whether a defendant should be convicted or serve time.265Stuntz, supra note 15, at 8, 39. See generally Wayne A. Logan, Fourth Amendment Localism, 93 Ind. L.J. 369 (2018). Chris Slobogin, Barry Friedman, Maria Ponomarenko, Catherine Crump, and Andrew Ferguson have argued that legislatures and regulatory agencies should be more active in structuring how (non-filtered) dragnet and surveillance technologies should and should not be used in the field.266Ferguson, supra note 9, at 272. See generally Christopher Slobogin, Panvasive Surveillance, Political Process Theory, and the Nondelegation Doctrine, 102 Geo. L.J. 1721 (2014); Barry Friedman & Maria Ponomarenko, Democratic Policing, 90 N.Y.U. L. Rev. 1827 (2015); Catherine Crump, Surveillance Policy Making by Procurement, 91 Wash. L. Rev. 1595 (2016). But they also acknowledge that politically accountable bodies always run the risk that their decisions will disproportionately benefit the politically powerful and will be relatively indifferent to problems of under-protection and prejudiced enforcement.267Slobogin, supra note 132, at 134.

Daphna Renan has argued, convincingly in my opinion, that political processes alone cannot be expected to produce the sort of basic rights and counter-majoritarian protections that the Constitution should guarantee.268See generally Renan, supra note 135. Our agreement ends there, though, because Renan advocates for a Fourth Amendment superstructure, or set of principles, that would set requirements and boundaries on administrative agencies (such as the Privacy and Civil Liberties Oversight Board) tasked with creating law enforcement surveillance programs.269Id. at 1108–25. Again, Renan is primarily (though not exclusively) analyzing surveillance technologies that are not the crime-driven, filtered type of tool that I focus on here. But no board, no matter how independent, could actually make the grand maneuver that I’m asking readers to consider here—where filtered dragnets are permitted, but in exchange for protection from bad laws, harsh punishment, and discretionary application. Renan’s proposal may be a good second-best solution, but a dramatic reorientation of constitutional priorities can only be done by the Supreme Court. It is time for constitutional renewal in search of a better equilibrium.270Jack M. Balkin, The Cycles of Constitutional Time 44–65 (2020) (describing cycles of constitutional “rot,” where the accretion of rules and exceptions has permitted authoritarian practices to fester, and “renewal,” where constitutional theory and courts correct course).

B.  Why the Fourth Amendment?

The harder question, and I confess this is where I am on shakier ground, is why the anti-authoritarian principles that I claim are so important during this inflection point are the responsibility of the Fourth Amendment to solve rather than other parts of the Bill of Rights or notions of substantive due process.271Christopher Slobogin, A Defense of Privacy as the Central Value Protected by the Fourth Amendment’s Prohibition on Unreasonable Searches, 48 Tex. Tech. L. Rev. 143, 155 (2015). The case is somewhat easier for the principle that reasonable searching requires evenhandedness. At the founding, the Fourth and Fifth Amendments were meant to prevent the government from being able to rummage through a disfavored target’s things looking for evidence of a crime, so equal and non-arbitrary treatment was always a goal.272Stuntz, supra note 15, at 72.

The case for using the Fourth Amendment to put constraints on substantive criminal law and sentencing is a bit harder. After all, the Supreme Court has repeatedly authorized law enforcement agencies to execute stops, searches, and arrests, no matter how trivial the law-violating behavior may be to overall public safety.273See discussion of Atwater and Whren, supra Part V. As early as Boyd v. United States, decided in 1886, the Court found that Fourth Amendment protections do not apply to those who have committed a public offense, and courts have declined to second-guess whether the public offense was valid in the course of a Fourth Amendment analysis.274Boyd v. United States, 116 U.S. 616, 630 (1886). The Fourth Amendment protects rights that have “never been forfeited by his conviction of some public offence.” Id. And one may reasonably think that if courts are going to invalidate an overly harsh prison sentence on constitutional grounds, as I argue they should under the guise of protecting against unreasonable seizures, they would have already imposed these limits under the Eighth Amendment’s cruel and unusual punishment clause.275Harmelin v. Michigan, 501 U.S. 957, 997 (1991) (while the Eighth Amendment prohibits “grossly disproportionate” mandatory sentences, noncapital sentences would almost never be found to be grossly disproportionate).

Perhaps it would make as much sense to make Eighth Amendment or Due Process protections more robust to ensure that criminal liability is not overbroad and sentences are not overlong.276Note, though, that the Court has already stated a reluctance to expand substantive due process if other parts of the Bill of Rights are relevant to the claim. Sacramento v. Lewis, 523 U.S. 833, 842 (1998). But a long view of the Fourth Amendment can support a shift from protecting the property, privacy, and autonomy of non-offenders to protecting those same interests for people who are innocent in the more platonic sense: those whose conduct is technically illegal but not blameworthy.

In many ways, the history of Fourth Amendment caselaw shows a faltering and incoherent attempt to get to the main point: to make sure the state does not have too much power to enforce silly crimes and scare its constituents into submission.277Cloud, supra note 14, at 202. Cloud also notes that early Fourth Amendment case law was designed to constrain discretion (or “autonomy”) of law enforcement and the judiciary. Id. at 276–284. Silly crimes have been at the center of the original construction of the Fourth Amendment and each of its major reforms. Shortly after the American Revolution, sedition laws motivated creative lawyers like Alexander Hamilton to use procedure in order to correct flaws in the substantive criminal law that were not, at that time, adequately constrained by the First Amendment.278Stuntz, supra note 15, at 71–72. It is particularly strange that the attack required procedural rather than substantive challenges because prosecutions for the crime of seditious libel conducted by the British Crown was a major motivating force behind the Bill of Rights. Thomas P. Crocker, The Political Fourth Amendment, 88 Wash. U. L. Rev. 303, 309, 346 (2010). In the context of that time, when states had nearly full rein to search for physical evidence and when prosecutions were proved primarily using witnesses, the thought that constitutional protections could get in the way of convicting rapists and murderers would have been preposterous.279Tracey Maclin, The Supreme Court and the Fourth Amendment’s Exclusionary Rule 83–100 (2013); Stuntz, supra note 15, at 71–72. After all, the founders did not expect the Fourth Amendment to constrain how local law enforcement investigated crimes, and group searches executed without particularized warrants were tolerated.280Slobogin, Virtual Searches, supra note 29 at 103. Prior to the 1960s, state courts interpreted their constitutional guarantees of freedom from unreasonable searches and seizures to be very permissive. The investigation strategies that police departments adopted were generally considered reasonable. Stuntz, supra note 15 at 68–69. Thus, at that time, the buildup of procedure to help protect against crimes of belief and thought had little cost to the control of more conventional crimes.

Courts again increased Fourth Amendment procedural protections during two subsequent periods when the substance of criminal law was directed at questionable, arguably victimless vice crimes like gambling, alcohol (during Prohibition), obscenity, and recreational drugs.281Stuntz, supra note 15, at 110. In the twentieth century, new information technologies changed the nature of police investigation by enabling wiretapping and forms of long-term tracking of suspects without reliance on trespass or witness cooperation. The standard story is that these technologies unsettled the balance between conflicting societal goals related to police investigations, which is true enough. But another important factor is that the test cases involved the detection and enforcement of gambling, bootlegging, and drug distribution crimes. Katz v. United States, the Fourth Amendment case that developed the reasonable expectation of privacy test, involved bugging a phone a bookmaker was using.282Katz v. United States, 389 U.S. 347, 348 (1967). And it followed the logic of Justice Brandeis’s dissent in an earlier case, Olmstead v. United States,283Olmstead v. United States, 277 U.S. 438, 471 (1928) (Brandeis, J., dissenting). which involved the wiretapping of a bootlegger.284Katz, 389 U.S. at 361 (Harlan, J., concurring). Katz marked the end of a primarily property-based conception of Fourth Amendment rights and ushered in the privacy phase. When the test cases making their way to the Supreme Court involved more serious crimes, like stalking, the Court avoided finding a privacy violation.285Smith v. Maryland, 442 U.S. 735, 745–46 (1979). Bill Stuntz critiqued the privacy turn, noting that Fourth Amendment litigation became much too focused on privacy and failed to ameliorate problems of physical security (especially bodily security) when suspects were routinely frisked and thrown to the ground. Stuntz, supra note 15, at 37. See also Michael Klarman, Rethinking the Civil Rights and Civil Liberties Revolutions, 82 Va. L. Rev. 1 (1996).

To be clear, there are other reasons, separate from the substance of the criminal law being enforced, that justify a focus on privacy. Twentieth-century surveillance capabilities certainly left Americans—criminals and the innocent alike—at greater risk of unwanted observation of licit activities. But there is also a clear pattern: courts have used criminal procedure to frustrate the enforcement of controversial criminal statutes that cover activities in which a sizable proportion of Americans willingly participate.286The converse is also true: when crime rates spike among the crimes that are most important to a well-functioning society, such as crimes of violence, Fourth Amendment procedural protections are tuned down. Yale Kamisar, The Warren Court and Criminal Justice: A Quarter-Century Retrospective, 31 Tulsa L.J. 1, 2–3 (1995). Once privacy posed a significant obstacle to police investigations, procedural rights became the default defense against a tyrannical state. There was less need to press the Constitution into service to challenge whether conduct should even be considered criminal in the first place or whether the police were protecting communities fairly. For better or worse, the Fourth Amendment privacy rule created a tractor beam for public defenders and civil liberties organizations to concentrate their anti-authoritarian efforts.

Scholars have occasionally attempted to refocus the Fourth Amendment on a more general purpose: to create a constraint on power.287Or to create a “constraint on the power of the sovereign, not merely on some of its agents.” Arizona v. Evans, 514 U.S. 1, 18 (1995) (Stevens, J., dissenting). With gratitude to Tom Crocker for highlighting this passage. Crocker, supra note 278, at 335 n.188. Bill Stuntz faulted the Fourth Amendment’s turn to privacy because it “tend[ed] to obscure more serious harms that attend police misconduct.”288William J. Stuntz, Privacy’s Problem and the Law of Criminal Procedure, 93 Mich. L. Rev. 1016, 1020 (1995). More recently, Thomas Crocker has argued that the Fourth Amendment should be understood as a substantive right, not just a procedural one, that follows in the vision of the First, Second, and Ninth Amendments.289As well as the Fifth Amendment’s takings clause. Crocker, supra note 278, at 309–10, 343. But ultimately, Crocker advocates for the use of this substantive right to argue for a more thorough protection against surveillance.290Id. at 311. Naturally, I think this misses the point. A citizen whose government makes nearly all conduct and action illegal will never feel secure no matter how many restrictions on surveillance are in place. And conversely, a government that is rigidly constrained from expanding its criminal laws beyond the conduct that is nearly universally reviled will be limited in its ability to threaten a citizen’s sense of liberty no matter how much surveillance is in place.

The happenstance of technology provides another reason to prefer the Fourth Amendment over other constitutional sources to redress the problems of overcriminalization and uneven protection. The privacy of the innocent has mediated the clash between the American values of freedom and security. Increasing use of filtered dragnets will make this arrangement untenable. If we expect the role of the Fourth Amendment to be meaningful—to be something other than a brief paperwork requirement in the process of securing warrants for filtered dragnets—it is both necessary and appropriate that Fourth Amendment caselaw start to look for its root function and embrace its substantive as well as its procedural dimensions.

CONCLUSION

In 1967, Alan Westin, a leading light among privacy scholars, said that “the modern totalitarian state relies on secrecy for the regime, but high surveillance and disclosure for all other groups.”291Alan Westin, Privacy and Freedom 23 (1967). This is probably true, but it is highly incomplete. Surveillance is a necessary condition for authoritarian control, but not sufficient on its own. Indeed, surveillance is necessary for all states, not just despotic ones: modern systems of taxation, public benefits distribution, medical services, and public health could not function without copious amounts of personal data. Moreover, surveillance is no more unique to totalitarianism than are weapons, prisons, and the other tools the state must use to carry out its most basic obligations to maintain social order and security.

The tools that live exclusively in the toolbox of despots are repressive substantive criminal laws, harsh punishment, and the discretion to choose when to enforce the law. Even in Nineteen Eighty-Four, George Orwell’s dark depiction of totalitarian rule, Big Brother was oppressive partly because of the substance of the law: the wrong thought could land a person in jail.292See generally George Orwell, Nineteen Eighty-Four (1949).

Against this threat of uncontrolled surveillance, many privacy scholars recommend dismantling the surveillance apparatus. This Article has focused instead on the “uncontrolled” quality of uncontrolled surveillance. Filtered dragnets are highly controlled dragnets that reveal only criminal violations. Thus, they are only as threatening to society as the criminal statutes that they enforce and the discretion of the government agents who use them. With the right alignment of Fourth Amendment rules to authoritarian threats, the state can be made to heel—to detect crimes fairly without burdening any community with under-protection or over-punishment. This will require some intrusion of the traditionally procedural domain of the Fourth Amendment into the substantive realm of criminal law and punishment. If the state can suddenly detect every violation, prison must be reserved for truly awful behavior, and law enforcement should have less latitude to seek out or to avoid investigating members of certain groups.

These are radical proposals. They go well beyond the privacy framework that has dominated Fourth Amendment theory for over half a century. But they respond to a radical tool that will shock a criminal justice system that is already in crisis and deserves rescue.

97 S. Cal. L. Rev. 571


* University of Arizona James E. Rogers College of Law. The author is grateful for the advice and invaluable feedback from Jordan Blair Woods, Tracey Maclin, Farhang Heydari, Toni Massaro, Tammi Walker, John Villasenor, Andrew Woods, Lilla Montagnani, Kiel Brennan-Marquez, Jeffrey Fagan, Christopher Slobogin, Derek Bambauer, Mark Verstraete, Xiaoqian Hu, Andrew Coan, Niva Elkin-Koren, Uri Hcohen, and Tal Zarsky.

Secondary Trading Crypto Fraud and the Propriety of Securities Class Actions

Traders participating in secondary crypto asset markets risk significant loss. Some trading loss will arise simply because of market dynamics, including inherently volatile crypto asset prices. But secondary crypto asset traders also risk considerable monetary injury resulting from fraudulent statements or acts by crypto asset sponsors or others occurring in connection with their secondary transactions. If subjected to such fraud, the affected crypto asset traders may turn to a Rule 10b-5 class action for redress.

Crypto asset traders’ reliance on Rule 10b-5 class actions implicates important doctrinal and public policy questions. This Article analyzes two of these questions—one doctrinal and another in the domain of public policy. In its doctrinal analysis, the Article evaluates issues pertinent to the threshold definitional question of when an exchange-traded crypto asset will constitute an investment contract and therefore fall within the definitional perimeter of a security. The Article proposes a slight generalization of the horizontal commonality test that renders the test suitable for use in both primary transaction and secondary transaction cases, and also addresses aspects of Howey’s efforts of others prong that are relevant to Howey’s application in the crypto asset context.

With respect to the public policy question, the Article evaluates whether the public policy justification for crypto asset-based Rule 10b-5 class actions is significantly weaker than for stock-based Rule 10b-5 class actions. The Article’s public policy determinations break in different directions and are in some respects preliminary, but the analysis does not justify limiting the availability of crypto asset-based Rule 10b-5 class actions any more than that of stock-based Rule 10b-5 class actions.

INTRODUCTION

Asset digitization through distributed ledger technology has transformed trading markets. Traders in the United States now routinely trade hundreds of crypto assets on various crypto exchanges, and the pool of tradable assets is growing.1By crypto assets, this Article means any digital asset that relies on a distributed ledger. The Article focuses on exchange-traded crypto assets but refers to those assets simply as crypto assets rather than exchange-traded crypto assets when the context is clear. Likewise, the Article’s references to stock should be understood to mean exchange-traded stock. Through these secondary transactions, crypto asset traders have seen both financial gain and financial loss, which at times have been substantial.

Recent events have amplified the prospect of secondary crypto asset traders incurring significant monetary loss through incidents of fraud. Misconduct was commonplace in the 2017 to 2019 time period, when a large share of crypto asset initial offerings were riddled with fraud, causing investors to lose substantial amounts.2See, e.g., Shane Shifflett & Coulter Jones, Buyer Beware: Hundreds of Bitcoin Wannabes Show Hallmarks of Fraud, Wall St. J. (May 17, 2018, 12:05 PM), https://www.wsj.com/articles/buyer-beware-hundreds-of-bitcoin-wannabes-show-hallmarks-of-fraud-1526573115 [https://web.archive.org/web/20180612095414/https://www.wsj.com/articles/buyer-beware-hundreds-of-bitcoin-wannabes-show-hallmarks-of-fraud-1526573115?mod=WSJ_Currencies_LEFTTopNews&tesla=y].
Now, traders transacting in secondary crypto asset markets risk being subject to fraud by crypto asset sponsors or others occurring in connection with their secondary transactions, which the Article refers to as secondary trading crypto asset fraud.3Secondary crypto asset traders may be subject to other forms of fraud or some other type of misconduct such as market manipulation or hacking. While important, those other sources of secondary crypto asset trader harm are not the subject of this Article and their examination awaits future work. The injurious effects of secondary trading crypto asset fraud extend beyond the defrauded traders. Such fraud, in combination with other types of misconduct, has the potential to fully undermine the legitimacy of the entire crypto asset ecosphere, including causing collateral damage to the reputation of economically scrupulous actors, and to strengthen the calls by some that the sector be subject to intense regulatory scrutiny.

To take an example that mirrors allegations from a recent fraud suit, suppose that a crypto asset sponsor develops a novel blockchain protocol and an accompanying crypto asset that serves as the blockchain’s native token.4See SEC v. Terraform Labs Pte. Ltd., No. 23-cv-1346, 2023 U.S. Dist. LEXIS 132046 (S.D.N.Y. July 31, 2023) (SEC complaint against crypto asset sponsors for fraud occurring in connection with two exchange-traded crypto assets, LUNA and UST). Suppose that the crypto asset goes on to trade on one or more crypto exchanges after its initial offering. At some later point, the crypto asset sponsor falsely represents that a payment provider has adopted the developed blockchain to process payments. Because the fraudulent statement is understood to evidence a new and potentially monetizable use value for both the blockchain and the associated crypto asset, secondary traders update their valuation of the crypto asset, which causes additional trading activity resulting in the crypto asset’s price appreciating on the secondary markets in which it trades. Traders who purchase the crypto asset at the resulting higher price will suffer financial harm once the market becomes aware of the falsity of the sponsor’s fraudulent statement and the crypto asset’s price falls in response. Depending on the magnitude and nature of the fraud, traders’ losses may be substantial.5See, e.g., Tom Hussey, Cryptocurrency Crash Sees Man Loses $650k Life Savings, News.com.au (May 16, 2022, 5:23 PM), https://www.news.com.au/finance/markets/world-markets/cryptocurrency-crash-sees-man-loses-650k-life-savings/news-story/183fef63537f24376a1e465021687df9 [https://perma.cc/QH26-RHAR] (reporting on investors’ significant losses caused by the precipitous drop in LUNA’s price).

Additional regulation of the crypto asset space may diminish the prospect of fraud ex ante, but defrauded crypto asset traders may seek ex post relief in the form of private litigation. Traders sustaining losses in connection with secondary transactions of stock and other more conventional assets routinely seek class-wide relief under Rule 10b-5,617 C.F.R. § 240.10b-5 (2023). which serves as the workhorse of federal securities laws’ antifraud prohibitions. Given the prominence of Rule 10b-5 class actions in modern securities litigation, defrauded crypto asset traders likewise may turn to Rule 10b-5 class relief to recover their secondary trading losses.

These observations raise an important question: Should defrauded crypto asset traders be able to rely on Rule 10b-5 class actions to recover their secondary trading losses, both as a doctrinal matter and as a matter of public policy? A host of considerations bear on this question, and this Article focuses on two leading considerations, one doctrinal and one public policy related.

A primary consideration pertinent to the doctrinal propriety of secondary crypto asset traders relying on Rule 10b-5 class actions is the fundamental question: under what conditions will an exchange-traded crypto asset be within the definitional scope of a security because it is an investment contract under the multipronged test enunciated by the Supreme Court in Howey?7SEC v. W.J. Howey Co., 328 U.S. 293, 298–99 (1946). This Article follows the conventional approach and articulates the Howey question as an inquiry into whether the at-issue crypto asset is an investment contract. This articulation should be understood as a shorthand formulation adopted for expositional ease. In a securities case predicated on a set of crypto asset transactions, the relevant Howey question is not whether the crypto asset itself meets Howey’s prongs, but instead whether the universe of circumstances pertinent to the crypto asset transactions at issue satisfies Howey’s prongs. Courts in crypto asset cases recognize that distinction. See, e.g., SEC v. Telegram Grp. Inc., 448 F. Supp. 3d 352, 379 (S.D.N.Y. 2020) (“While helpful as a shorthand reference, the security in this case is not simply the [crypto asset], which is little more than alphanumeric cryptographic sequence . . . This case presents a ‘scheme’ to be evaluated under Howey that consists of the full set of contracts, expectations, and understandings centered on the sales and distribution of the [crypto asset]. Howey requires an examination of the entirety of the parties’ understandings and expectations.”). Also, various cases discussed in this Article are well-known securities cases that academics and practitioners refer to almost exclusively by their short name. As such, in-text references to these cases—including Howey, Omnicare, and others—will follow this naming style and full case names and citations are provided in footnotes.
Scholars have dedicated considerable attention to this definitional inquiry but not with a specific focus on exchange-traded crypto assets.8See infra note 59.

The Article evaluates the investment contract issue as it relates to exchange-traded crypto assets with an emphasis on Howey’s “common enterprise” and “efforts of others” prongs. The contours of these and Howey’s other prongs have been shaped by courts in primary transaction cases, that is, cases in which investors directly or indirectly transacted with the enterprise’s promoter. In a secondary transaction case—such as a case involving an exchange-traded crypto asset—investors will have transacted with their trading counterparties, perhaps with the involvement of one or more intermediaries, and those counterparties ordinarily will not have been the enterprise’s promoter. Unlike crypto assets, the instruments in earlier investment contract cases arising in connection with primary transactions did not readily lend themselves to secondary trading, so courts have had little occasion to consider the operation of Howey in the secondary transaction context.

In many instances, the investment contract rules that courts have developed in primary transaction cases have been articulated in a manner that allows them to be sensibly applied to secondary transaction cases. That is not the case for the horizontal commonality test, one of the three tests that courts use to assess Howey’s common enterprise prong. As the Article explains, because of its pooling requirement, that test is ill-suited for use in secondary transaction cases and thus requires reorientation.

The Article proposes a slight generalization of the horizontal commonality test that renders the test suitable for use in both secondary transaction and primary transaction cases. The generalized test recognizes that pooling is but one method by which investors’ financial interests in the underlying enterprise can become intertwined in the manner that horizontal commonality requires. Under the generalized test, horizontal commonality will be present if there is some mechanism, pooling or otherwise, that ties investors’ fortunes to one another and makes them dependent on the enterprise in which they are invested.

The generalized test reasonably broadens the scope of instruments for which horizontal commonality would be found. As relevant to the Article’s question of interest, even if there were no pooling of secondary crypto asset investors’ purchase amounts, the generalized horizontal commonality test may still be satisfied because the asset’s trading price can serve as a non-pooling mechanism that causes the pecuniary interests of the crypto asset’s traders to be linked to one another and dependent on the success of the crypto asset and any of its associated applications. For the crypto asset’s price to actually have served that non-pooling role for purposes of the generalized horizontal commonality test, the crypto asset’s price must generally respond to material, public information in a directionally appropriate way. As part of its analysis, the Article also explains why certain facts that are present in the investment contract cases that courts have analyzed to date—such as the presence of a contract between the investment contract’s promoter and the investors—simply represent common factual features shared by the decided cases, rather than elements of the pertinent legal rule.

The Article also addresses two aspects of Howey’s efforts of others prong relevant to application of Howey to exchange-traded crypto assets. First, the Article explains that Howey’s efforts of others prong should not be understood as requiring the presence of a centralized body that exerts the requisite entrepreneurial or managerial efforts. Instead, Howey’s efforts of others prong is better understood as requiring investors to have reasonably believed that their profits were significantly determined by the entrepreneurial or managerial efforts of those other than the investors themselves, whether or not those “others” constituted a centralized group.

Second, as the Article explains, investors’ expectations concerning the use of their sales proceeds are doctrinally irrelevant to Howey’s efforts of others analysis, which instead focuses on investors’ expectations concerning whose entrepreneurial or managerial efforts significantly determined their expected profits. Thus, the fact that investors’ sales proceeds in a secondary crypto asset transaction case may not have flowed to the crypto asset’s sponsors would not itself prevent Howey’s efforts of others prong from being met. This and the Article’s other Howey-related conclusions are not limited to the specific context of a Rule 10b-5 class action and are also applicable to other securities claims involving secondary crypto asset transactions.

The Article’s public policy analysis is prompted by the observation that stock-based Rule 10b-5 class actions have been the subject of academic criticism, intense at times. Supported by two longstanding primary critiques known as the circularity critique and the diversification critique, prominent voices have argued that stock-based Rule 10b-5 class actions fail to properly advance their intended public policy objectives of deterrence and compensation. Other scholars have disputed the relevance of the circularity and the diversification critiques and also have identified theories that provide alternate public policy justifications for stock-based Rule 10b-5 class actions, with the leading example being a corporate governance justification for stock-based Rule 10b-5 class actions.

A normative inquiry into whether defrauded crypto asset traders should be able to rely on Rule 10b-5 class actions implicates a range of subsidiary questions. One constituent question is whether the public policy justification for crypto asset-based Rule 10b-5 class actions is significantly weaker than for stock-based Rule 10b-5 class actions. If so, then that would support legal change that limits the availability of crypto asset-based Rule 10b-5 class actions, relative to stock-based Rule 10b-5 class actions, such as the adoption of prophylactic steps in the form of legislative action or doctrinal modification that would curb crypto asset-based Rule 10b-5 class actions before they become as commonplace as stock-based Rule 10b-5 class actions have become. The Article evaluates that specific public policy question in terms of the circularity and diversification critiques and the corporate governance justification.

While its public policy determinations are mixed and in part preliminary, the Article’s analysis does not lend support to the notion that the public policy justification for crypto asset-based Rule 10b-5 class actions is significantly weaker than the public policy justification for stock-based Rule 10b-5 class actions. As reflected in the discussion below, the circularity critique has significantly less relevance in the crypto asset context than in the stock context. While the diversification critique may be more or less relevant in the crypto asset context than in the stock context, nothing in the analysis indicates that it is significantly more relevant in the crypto asset context than in the stock context. An offsetting consideration is that the corporate governance justification loses its relevance in the crypto asset context.

The Article is organized as follows. Part I provides a high-level summary of three key features of crypto assets that are pertinent to the Article’s substantive analysis. Part II addresses the investment contract question, while Part III provides the public policy analysis.

I.  FEATURES OF EXCHANGE-TRADED CRYPTO ASSETS

While exchange-traded crypto assets vary in their characteristics and features, they share key points of commonality relevant to an inquiry into the propriety of defrauded secondary crypto asset traders relying on Rule 10b-5 class actions as a means of redress. Three points of commonality are discussed below: operational decentralization, the absence of cash flow, and significant price volatility.

A.  Operational Decentralization

As a rough approximation, a crypto asset’s lifecycle will have three stages. The first stage is the period preceding the asset’s initial offering, during which the crypto asset’s sponsors develop the asset and any associated applications.9The Article uses the terms “sponsors” and “application” broadly. The term “sponsors” is intended to refer to the class of persons or entities that develops, promotes, or initially sells the crypto asset, while the term “application” is intended to refer to any product or service that is directly facilitated by the crypto asset. The second stage of a crypto asset’s lifecycle is the asset’s offering period. During this stage, the asset’s sponsors first offer and sell the crypto asset, or rights to the future delivery of the crypto asset, to the public and others.10Most crypto asset offerings have involved the immediate sale of the offered crypto asset. However, some crypto asset offerings instead have involved the sale of a right to the future delivery of the crypto asset via an instrument referred to as a Simple Agreement for Future Tokens (“SAFT”). See infra note 23 and accompanying text. Historically, crypto asset offerings have been unregistered offerings, with very limited exceptions.11In many instances, crypto asset sponsors do not register their offerings because they consider the offerings to be outside the scope of the Securities Act’s registration requirement, on the belief that the offered crypto assets do not constitute “securities” in the definitional sense. This has generated a string of enforcement actions by the SEC, in which the SEC contends that an unregistered crypto asset offering violated Section 5’s registration requirement, based on the SEC’s contrary position that the offered crypto assets were securities. See cases cited infra note 65. In limited instances, crypto asset sponsors have initially offered a crypto asset pursuant to a registration exemption. See infra note 23 and accompanying text (conducting crypto asset offerings pursuant to Regulation D). See also Daniel Payne, Blockstack Token Offering Establishes Reg A+ Prototype, Law360 (Aug. 12, 2019), https://www.law360.com/articles/1186166 [https://perma.cc/QSQ8-ZDJJ] (describing an offering pursuant to Regulation A). There appears to be just one instance of a registered crypto asset offering. See INX Ltd., Registration Statement Under the Securities Act of 1933 (Form F-1) (Aug. 19, 2019), https://www.sec.gov/Archives/edgar/data/1725882/000121390019016285/ff12019_inxlimited.htm [http://web.archive.org/web/20230324012325/https://www.sec.gov/Archives/edgar/data/1725882/000121390019016285/ff12019_inxlimited.htm] (showing INX Ltd.’s registered offering of its INX crypto asset). The third and final stage is the period following the crypto asset’s offering, or the period after the crypto asset is delivered to those who previously purchased rights to its delivery, during which the asset trades on one or more crypto exchanges.12A crypto asset’s sponsors may conduct multiple offerings before the crypto asset begins trading on a crypto exchange. See infra note 23 and accompanying text. In some instances, a crypto asset may have a fourth stage when it is delisted from the crypto exchanges on which it trades and then ceases all secondary trading.13See Francisco Memoria, Dead Coin Walking: BitConnect Set to Be Delisted from Last Crypto Exchange, Yahoo News (Aug. 13, 2018), https://www.yahoo.com/news/dead-coin-walking-bitconnect-set-213336558.html [https://perma.cc/QYF5-BPEW].

At some point in this lifecycle, the development, operation, management, and promotion of a crypto asset and any associated applications may move from a small group of sponsors to a significantly larger group of stakeholders. This process can be referred to as operational decentralization, and the resulting set of designated decision-makers ordinarily will include the crypto asset’s holders. The modifier “operational” reflects the fact that other aspects of a crypto asset or its application may be decentralized, but in ways not directly relevant to the securities law definitional question discussed in Part II below.14For a discussion of the different ways that the term decentralization is used in the crypto asset context and an argument for precision in use of that term, especially when it is used to make legal determinations, see Angela Walch, Deconstructing “Decentralization,” in Cryptoassets: Legal, Regulatory, and Monetary Perspectives 39 (Chris Brummer ed., 2019).

As an example of these observations, consider the application Filecoin, which is an innovative blockchain-based data storage network that enables those needing computing storage to remotely use others’ idle computing storage.15Filecoin, https://filecoin.io [https://perma.cc/Q7GF-MJTS]. So, for instance, a large data center or an individual maintaining unused computing storage space can have that dormant storage incorporated in Filecoin’s storage network, thereby allowing other Filecoin users to access its idle storage in exchange for payment.16See Get Started, Filecoin, https://filecoin.io/provide/#get-started [https://perma.cc/5DDE-76ZF]. In Filecoin’s parlance, network participants who provide storage are referred to as “miners,” while network participants who use available storage are referred to as “clients.” See A Guide to Filecoin Storage Mining, Filecoin (July 7, 2020), https://filecoin.io/blog/posts/a-guide-to-filecoin-storage-mining [https://perma.cc/WPN4-J93P]. Filecoin generates economic benefit by facilitating mutually beneficial transactions, allowing unused computing storage space to be put to productive use.

The crypto asset “FIL” is associated with and facilitates Filecoin’s storage network. Transactions on the Filecoin network are conducted in FIL, in that users of Filecoin’s storage network pay storage providers in FIL rather than fiat currency.17See Store Data, Filecoin, https://docs.filecoin.io/get-started/store-and-retrieve/store-data [https://perma.cc/5GCN-ZRSK]; Retrieve Data, Filecoin, https://lotus.filecoin.io/tutorials/lotus/retrieve-data [https://perma.cc/SR9M-8LR3]. Filecoin’s users who want to acquire or sell their FIL holdings can do so on various crypto exchanges.18See Filecoin Markets, CoinMarketCap, https://coinmarketcap.com/currencies/filecoin/markets [https://web.archive.org/web/20230314173713/https://coinmarketcap.com/currencies/filecoin/markets]. As reflected in publicly available information, FIL’s holders do not buy and sell the crypto asset purely for its use value on the Filecoin network, but also, or perhaps primarily, trade the asset for investment purposes, seeking financial gain from appreciations in the crypto asset’s price.19See, e.g., r/filecoin, Reddit, https://www.reddit.com/r/filecoin [https://web.archive.org/web/20230604222832/https://www.reddit.com/r/filecoin] (showing posts by FIL holders discussing the asset’s investment value). FIL presently has a market capitalization near $2.9 billion and its 24-hour transaction volume ordinarily exceeds $200 million.20Filecoin, CoinMarketCap, https://coinmarketcap.com/currencies/filecoin [https://perma.cc/4PF7-RFC3] (last visited Feb. 16, 2024).

Protocol Labs, an innovative research and development company founded in 2014, developed both Filecoin and FIL.21About, Protocol Labs, https://protocol.ai/about [https://perma.cc/LFZ9-SK7T]. In 2017, Protocol Labs conducted two Reg D offerings through which it sold accredited investors the rights to the future delivery of FIL22See Protocol Labs, Notice of Exempt Offering of Securities (Form D) (Aug. 25, 2017), https://www.sec.gov/Archives/edgar/data/1675225/000167522517000004/xslFormDX01/primary_doc.xml [https://web.archive.org/web/20230704192549/https://www.sec.gov/Archives/edgar/data/1675225/000167522517000004/xslFormDX01/primary_doc.xml]; Protocol Labs, Amendment to Notice of Exempt Offering of Securities (Form D) (Aug. 25, 2017), https://www.sec.gov/Archives/edgar/data/1675225/000167522517000002/xslFormDX01/primary_doc.xml [https://web.archive.org/web/20230704193217/https://www.sec.gov/Archives/edgar/data/1675225/000167522517000002/xslFormDX01/primary_doc.xml]. and raised over $200 million.23Filecoin Sale Completed, Protocol Labs (Sept. 13, 2017), https://protocol.ai/blog/filecoin-sale-completed [https://perma.cc/2NJM-RCL7]. In 2020, Filecoin became fully operational, and Protocol Labs distributed FIL to the accredited investors who had purchased the future delivery rights to FIL in the two 2017 Reg D offerings.24See FAQ: The Filecoin Network, Filecoin (Oct. 2020), https://filecoin.io/saft-delivery-faqs [https://perma.cc/Y5QY-H3HR]. The crypto asset thereafter began trading on a number of crypto exchanges.25See Filecoin (FIL) Trading Begins October 15, Kraken: Blog (Oct. 12, 2020), https://blog.kraken.com/post/6522/filecoin-fil-trading-begins-october-15/#:~:text=We%20are%20pleased%20to%20announce,are%20enabled%20on%20the%20network [https://perma.cc/YN8F-NEBF].

FIL and its associated application Filecoin exhibit features of the operational decentralization discussed above. In the years following FIL’s initial offering in 2017, Protocol Labs continued to develop Filecoin and FIL but continuously expanded the ability of other stakeholders, including the general public, to contribute to Filecoin and FIL’s development. In the immediate period following FIL’s initial offering, the public’s role in facilitating Filecoin and FIL’s development was limited to referring potential employees and early users to Protocol Labs and suggesting improvements to the underlying protocol.26See Filecoin 2017 Q4 Update: Community Updates, How You Can Help, Filecoin Blog, and More, Filecoin (Jan. 1, 2017), https://filecoin.io/blog/posts/filecoin-2017-q4-update [https://perma.cc/YS7R-B9AC]. Subsequently, but before Filecoin became fully operational and FIL started trading in secondary markets, Protocol Labs made several key aspects of Filecoin and FIL’s software code available to the public for review and comment.27See Opening the Filecoin Project Repos, Filecoin (Feb. 14, 2019), https://filecoin.io/blog/posts/opening-the-filecoin-project-repos [https://perma.cc/B5UZ-EWUQ]. This important milestone provided the public with an indirect way to guide Filecoin and FIL’s development, but ultimate authority remained vested in Protocol Labs.

Protocol Labs’ current decision-making authority over Filecoin and FIL is much more attenuated than before. Now, while Protocol Labs remains actively involved in Filecoin and FIL’s development28See, e.g., Senior Engineering Leadership, Filecoin Saturn, Protocol Labs, https://boards.greenhouse.io/protocollabs/jobs/4800583004 [https://perma.cc/4TQZ-8R2P] (Protocol Labs job posting for Engineering Lead for Filecoin Saturn, a decentralized content delivery network for Filecoin). and may still maintain significant holdings of FIL,29See PL’s Participation in the Filecoin Economy, Protocol Labs (Oct. 19, 2020), https://protocol.ai/blog/pl-participation-in-the-filecoin-economy [https://perma.cc/VCT6-55JW]. Protocol Labs does not have sole decision-making authority over the crypto asset or its associated application. First, another centralized body, Filecoin Foundation, facilitates governance of the Filecoin network.30Filecoin Found., https://fil.org [https://perma.cc/Q9DF-EKEA]. Moreover, any person can influence Filecoin’s governance by submitting a Filecoin Improvement Proposal.31See Governance, Filecoin Found., https://fil.org/governance [https://perma.cc/JR9V-ECX7]; Filecoin Improvement Protocol, GitHub, https://github.com/filecoin-project/FIPs/blob/master/README.md [https://perma.cc/M2Y7-G3E5]. Filecoin’s many stakeholders, including FIL holders and Filecoin’s developers, determine whether to adopt the proposal.32See Governance, Filecoin Found., supra note 32. Modifications and improvements to Filecoin’s technical features are undertaken through a similarly decentralized process, with any individual able to propose a technical change and then Filecoin’s many stakeholders deciding whether to adopt the technical modification.33See, e.g., GitHub, supra note 32 (discussing Filecoin Technical Proposals).

B.  Absence of Cash Flow

A specific crypto asset may provide its holders with a range of benefits. In addition to investment gain, some crypto assets also may be used as methods of payment for conventional goods and services, while others may enable their holders to use an associated application or exercise governance rights with respect to the crypto asset or an associated application.34See supra Section I.A (discussing FIL).

Despite these benefits, a crypto asset ordinarily will not provide its holders with dividends or cash flow in any form, realized or expected. Even if there exists a centralized body with some involvement in the crypto asset’s development and operation, the crypto asset’s holders usually will not be entitled to any income from the profits of that centralized body. In contrast, a public company’s common shareholders may receive cash flow, at the board’s discretion, in the form of dividends paid from the company’s net income.

More generally, a crypto asset’s holders usually will not be entitled to income from any entity or individual involved in the development and operation of the crypto asset and any associated applications. Holders of some crypto assets may earn income through staking, which is the process through which a crypto asset holder agrees to lock up their assets to facilitate the validation of transactions on a blockchain that uses a proof-of-stake consensus mechanism.35See, e.g., Hannah Lang & Elizabeth Howcroft, Explainer: What Is “Staking,” the Cryptocurrency Practice in Regulators’ Crosshairs?, Reuters (Feb. 10, 2023, 10:55 AM), https://www.reuters.com/business/finance/what-is-staking-cryptocurrency-practice-regulators-crosshairs-2023-02-10 [https://perma.cc/GEZ3-27P6]. But staking is an optional process that requires the holder to forgo transacting the staked assets.36See id. While it is theoretically possible for a crypto asset to entitle its holders to cash flow, very few crypto assets with this feature have actually been implemented to date.37For instance, the crypto asset “INX” entitles its holders to a pro rata distribution of forty percent of the adjusted net cash flow from operating activities from the company INX Ltd., which seeks to develop a regulated crypto asset trading platform. See INX Ltd., Report of Foreign Private Issuer (Form 6-K) (May 16, 2022), https://www.sec.gov/Archives/edgar/data/1725882/000121390022027375/ea160089-6k_inxlimited.htm [https://perma.cc/HBL7-B92J]; INX Ltd., Annual Report (Form 20-F) (May 2, 2022), https://www.sec.gov/Archives/edgar/data/1725882/000121390022023077/f20f2021_inxlimited.htm [https://web.archive.org/web/20230627041213/https://www.sec.gov/Archives/edgar/data/1725882/000121390022023077/f20f2021_inxlimited.htm].

C.  Significant Price Volatility

Crypto assets exhibit significant price volatility. Crypto asset prices can change markedly, even in relatively short periods of time. Take, for instance, “SOL,” the crypto asset associated with the Solana blockchain. On July 1, 2022, SOL traded at $32.80, according to CoinMarketCap’s calculated average price on a group of crypto exchanges.38Solana Historical Data, CoinMarketCap, https://coinmarketcap.com/currencies/solana/historical-data [http://web.archive.org/web/20230627040245/https://coinmarketcap.com/currencies/solana/historical-data]. On August 1, 2022, and September 1, 2022, SOL traded at $41.79 and $31.59, respectively, according to CoinMarketCap’s calculated average price.39Id. So, within one month, the price of SOL appreciated by more than 27%, but then dropped by more than 24% the next month. Crypto asset prices can swing dramatically even over shorter durations, such as weeks or days.
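As a quick arithmetic check of those figures, one can apply the ordinary month-over-month percentage-change formula to the CoinMarketCap prices quoted above (this is simple back-of-the-envelope math, not any additional data):

\[
\frac{41.79 - 32.80}{32.80} \approx 0.274 \;(+27.4\%), \qquad \frac{31.59 - 41.79}{41.79} \approx -0.244 \;(-24.4\%),
\]

which is consistent with the “more than 27%” and “more than 24%” figures stated in the text.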

Statistical analysis shows that crypto asset prices can be much more volatile than stock prices. For instance, Liu and Tsyvinski examined the returns of over 1,700 crypto assets between January 1, 2011, and December 31, 2018.40Yukun Liu & Aleh Tsyvinski, Risks and Returns of Cryptocurrency, 34 Rev. Fin. Stud. 2689, 2690 (2021). The authors created an index of the crypto assets in their sample and found that over the sample period, the standard deviation of daily returns of the index was 5.46%, which was more than five times higher than the standard deviation of daily stock returns over the sample period.41See id. at 2698 tbl.1 (showing that the returns of the constructed crypto asset index had a standard deviation of 5.46%, while stock returns instead had a 0.95% standard deviation over the sample period). The authors also found that crypto asset returns over the sample period yielded extreme losses and gains with high probability.42See id. at 2690. According to their findings, a trader who held the constructed index over the sample period would have experienced an extreme daily loss of 20% or more with a probability of 0.48% and an extreme daily gain of 20% or more with a probability of 0.89%.43See id.
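To make concrete what the volatility and extreme-return figures measure, the following minimal Python sketch computes daily returns, their sample standard deviation, and the share of days with a 20% move, from a short, made-up price series. The prices and resulting numbers are illustrative assumptions only; they are not SOL prices and are not the data or methodology of the Liu and Tsyvinski study.

import numpy as np

# Made-up daily closing prices for a hypothetical crypto asset
# (illustrative only; not SOL prices and not the study's data).
prices = np.array([32.80, 34.10, 33.25, 36.40, 28.50, 41.79, 38.60, 31.59])

# Simple daily returns: r_t = (P_t - P_{t-1}) / P_{t-1}
returns = np.diff(prices) / prices[:-1]

# Sample standard deviation of daily returns, the volatility measure
# referenced in the text (reported there as 5.46% for the crypto index
# versus 0.95% for stocks over the study's sample period).
daily_volatility = returns.std(ddof=1)

# Fraction of days with a move of 20% or more in either direction,
# analogous to the extreme-return probabilities reported in the study.
extreme_loss_share = np.mean(returns <= -0.20)
extreme_gain_share = np.mean(returns >= 0.20)

print(f"daily return volatility: {daily_volatility:.2%}")
print(f"share of days with a loss of 20% or more: {extreme_loss_share:.2%}")
print(f"share of days with a gain of 20% or more: {extreme_gain_share:.2%}")

On a real price history, the same few lines would produce the kind of daily-return volatility and tail-frequency statistics the study reports, though the study's own estimates rest on a far larger sample and its own index construction.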

Though crypto asset prices may be more volatile than stock prices, some crypto assets may exhibit significantly less price volatility than others.44See, e.g., Dirk G. Baur & Thomas Dimpfl, Asymmetric Volatility in Cryptocurrencies, 173 Econ. Letters 148, 149 tbl.1 (2018). The volatility of some crypto assets may be closer to that of stocks. Additionally, there is some empirical evidence showing that crypto asset volatility decreases over time. For instance, returning to the study discussed above, the authors found that the standard deviation of the index’s returns diminished over the sample period.45See Liu & Tsyvinski, supra note 41, at 2719 (“We find that the standard deviation of coin market returns decreased significantly from the first half to the second half of the sample period. The figure in the Internet Appendix shows a significant decrease in the volatility of the coin market returns over time.”).

II.  THE DOCTRINAL PROPRIETY OF CRYPTO ASSET-BASED RULE 10B-5 CLASS ACTIONS

The propriety of crypto asset traders using Rule 10b-5 class actions as a means of recovering losses caused by secondary crypto asset fraud implicates a set of important doctrinal and public policy considerations. In the discussion below, the Article focuses on the leading doctrinal question of when secondary trading crypto asset fraud constitutes securities fraud and so is properly within the scope of Rule 10b-5. The pertinent issue is whether the exchange-traded crypto asset on which the Rule 10b-5 claim is predicated is definitionally a security because it is an investment contract.46 As noted above, the relevant issue is articulated as an inquiry into whether the relevant crypto asset is an investment contract to simplify the exposition. See supra note 7. To better frame the issue, it is helpful to first provide some observations on the nature of secondary trading crypto asset fraud and Rule 10b-5 relief.

A.  The Nature of Secondary Trading Crypto Asset Fraud and Rule 10b-5 Relief

Secondary trading crypto asset fraud can inflict trader harm by altering the prices at which traders transact. The motivating hypothetical from the Article’s Introduction involved the sponsor of an exchange-traded crypto asset making misrepresentations about a new and potentially monetizable use value for the crypto asset. Defrauded crypto asset traders who purchased at the resulting inflated prices may seek relief through a Rule 10b-5 class action. Their ability to viably do so requires, among other things, that (1) the at-issue crypto asset satisfies Howey’s four-part test for an investment contract—the focus of the discussion in the next Section; (2) the substantive elements of Rule 10b-5 are met; and (3) the pertinent elements of Rule 23 are met.

Different variants of the Introduction’s hypothetical may cause the case to turn more heavily on one of the necessary legal determinations. For instance, suppose that the false or misleading statement instead was made by a person of notoriety whom the crypto asset sponsor had monetarily incentivized to provide promotional services, but all other facts of the hypothetical were unchanged. In this case, if the plaintiffs asserted their Rule 10b-5 claim against the influencer, greater focus may fall on the materiality of the statement than if it were made directly by the crypto asset sponsor as in the baseline hypothetical. Depending on the circumstances, such as the identity of the influencer and other background considerations, a reasonable person may not consider the misrepresentation important to their trading decision, in which case it would not be material,47See TSC Indus., Inc. v. Northway, Inc., 426 U.S. 438, 449 (1976) (providing materiality standard). whereas they might consider it important had it instead been made by the crypto asset’s sponsor.48If asserting a claim under subsection (b) of Rule 10b-5, the plaintiffs may also face difficulties prevailing under the rule in Janus, which would require that the influencer had ultimate authority over the allegedly false or misleading statement. See Janus Cap. Grp., Inc. v. First Derivative Traders, 564 U.S. 135, 142 (2011). Depending on the factual circumstances, it may instead be that the crypto asset’s sponsor, rather than the influencer, had ultimate authority over the misrepresentation. See id. (“One who prepares or publishes a statement on behalf of another is not its maker.”). Or consider a statement by an influencer opining about a crypto asset’s expected future price. In addition to the statement potentially being immaterial, it may be a nonactionable opinion statement under the rule in Omnicare.49See Omnicare, Inc. v. Laborers Dist. Council Constr. Indus. Pension Fund, 575 U.S. 175, 189–90 (2015).

Some crypto assets may be more amenable to secondary crypto asset fraud than others. In the hypothetical from the Introduction, the associated crypto asset had potential use value, in that its associated blockchain could be used to facilitate economically meaningful activity. That is not the case for all crypto assets. Consider meme coins, which are crypto assets based on an Internet meme or joke. These assets often have no use value, though they vigorously trade on crypto exchanges and can have significant market capitalization. For such assets, the body of statements that investors may consider important to their trading decisions may be circumscribed. For instance, if a meme coin has no intended use value, and traders understand that fact, then they may not consider a statement about a potential use value for the crypto asset to be relevant to their trading decision.50This may not necessarily be the case, however, since some meme coins have gone on to have a use value, such as being accepted as forms of payment for some goods and services. See, e.g., Tesla Starts Accepting Once-Joke Cryptocurrency Dogecoin, BBC (Jan. 15, 2022), https://www.bbc.com/news/business-60001144 [https://perma.cc/6MAL-5RWV].

The alleged fraud in each of these examples is an instance of statement-based fraud. Secondary trading crypto asset fraud can also take the form of deceptive schemes. Returning to the hypothetical in the Introduction, suppose that the crypto asset’s sponsor and the payment provider instead had devised a clandestine scheme that caused the crypto asset’s traders to believe that the payment provider would begin using the crypto asset’s blockchain to process payments. Traders who purchased the crypto asset at the resulting higher prices would suffer financial injury, just as in the baseline hypothetical in which the fraud was in the form of a false statement by the crypto asset’s sponsor.

Finally, crypto asset traders’ ability to rely on Rule 10b-5 class actions to recover losses sustained in connection with secondary crypto asset transactions raises doctrinal issues beyond the definitional one addressed below. For instance, putting Affiliated Ute to the side,51Affiliated Ute Citizens of Utah v. United States, 406 U.S. 128, 153–54 (1972) (holding that a plaintiff asserting a Rule 10b-5 claim need not prove reliance if the claim primarily involves material omissions and the defendant owes the plaintiff a duty to disclose). secondary market crypto asset traders will only be able to litigate their Rule 10b-5 claims as a class if they are able to avail themselves of fraud on the market.52Without the doctrine’s rebuttable presumption of reliance, individual issues of reliance would predominate over common issues of reliance, in contravention of Rule 23(b)(3). See Fed. R. Civ. P. 23(b)(3). The question thus arises whether fraud on the market properly extends to the crypto asset context. Or, to take another example, a private plaintiff’s Rule 10b-5 claim only reaches transactions that are within the extraterritorial reach of the securities laws as defined by Morrison.53Morrison holds that the federal securities laws apply only to “transactions in securities listed on domestic exchanges” and “domestic transactions in other securities.” Morrison v. Nat’l Austl. Bank, Ltd., 561 U.S. 247, 267 (2010). But suppose the crypto exchange on which the at-issue transactions occurred is not a registered exchange and maintains no trading operations in the United States.54This factual circumstance aligns with the allegations in Anderson v. Binance, No. 20-cv-2803, 2022 U.S. Dist. LEXIS 60703 (S.D.N.Y. Mar. 31, 2022). In that case, secondary crypto asset traders sued a major crypto exchange for violation of Section 12(a)(1) of the Securities Act of 1933 and Section 29(b) of the Securities Exchange Act of 1934. Id. at *5. The complaint acknowledged that the exchange was not a registered exchange and alleged no U.S. trading operations. See Defendant’s Reply Memorandum of Law in Further Support of Their Motion to Dismiss at 8, Anderson v. Binance, No. 20-cv-2803 (S.D.N.Y. Mar. 31, 2022). The court dismissed the complaint on Morrison grounds, concluding that the crypto exchange was not a “domestic exchange” and that the pertinent transactions were not “domestic transactions” as Morrison requires. See Anderson, 2022 U.S. Dist. LEXIS 60703, at *10–14. The Second Circuit recently reversed that decision. See Williams v. Binance, No. 22-972, 2024 U.S. App. LEXIS 5616 (2d Cir. Mar. 8, 2024). This scenario raises the doctrinal question of whether those secondary crypto asset transactions fall outside the reach of a private Rule 10b-5 suit because they do not satisfy Morrison’s requirements.55Courts have evaluated the extraterritoriality question in the context of crypto asset offerings and have come to differing conclusions. Compare Anderson, 2022 U.S. Dist. LEXIS 60703, at *10–14 (relevant crypto asset transactions did not satisfy Morrison), with In re Tezos Secs. Litig., No. 17-cv-06779, 2018 U.S. Dist. LEXIS 157247, at *23–25 (N.D. Cal. Aug. 7, 2018) (relevant crypto asset transactions satisfied Morrison). While some academic focus has been directed at these non-definitional doctrinal questions, additional research is necessary.56For an analysis of the fraud on the market issue, see Menesh S. Patel, Fraud on the Crypto Market, 36 Harv. J.L. & Tech. 171 (2022). There does not yet appear to be any published academic work evaluating the extraterritoriality issue as it relates to crypto asset transactions occurring on a crypto exchange.

B.  Is Secondary Trading Crypto Asset Fraud Securities Fraud?

If secondary crypto asset traders incur trading loss because of fraud, they will be able to pursue Rule 10b-5 relief based on those secondary transactions only if the exchange-traded crypto asset at issue is an investment contract under Howey’s multipronged test.57Traders may have other forms of relief available. As most relevant to this Section, if the underlying secondary crypto asset transactions do not constitute securities transactions, but do constitute commodities transactions, then the traders may have a claim under Commodity Futures Trading Commission (“CFTC”) Rule 180.1 based on those secondary transactions. See 17 C.F.R. § 180.1 (2014). While the present caselaw is limited, courts have taken a broad view of the Commodity Exchange Act’s definition of a commodity in the crypto asset context. See Commodity Futures Trading Comm’n v. My Big Coin Pay, Inc., 334 F. Supp. 3d 492, 497 (D. Mass. 2018); Commodity Futures Trading Comm’n v. McDonnell, 287 F. Supp. 3d 213, 225–26 (E.D.N.Y. 2018). Many issues pertinent to that definitional inquiry will be the same as those relevant to an assessment of whether a crypto asset at its offering stage satisfies Howey’s definition of an investment contract.58Legal scholarship includes significant discussion of the application of Howey in the crypto asset context. For a sample of this scholarship, see, e.g., James J. Park, When Are Tokens Securities? Some Questions from the Perplexed (2018); Jonathan Rohr & Aaron Wright, Blockchain-Based Token Sales, Initial Coin Offerings, and the Democratization of Public Capital Markets, 70 Hastings L.J. 463, 488–502 (2019); M. Todd Henderson & Max Raskin, A Regulatory Classification of Digital Assets: Toward an Operational Howey Test for Cryptocurrencies, ICOs, and Other Digital Assets, 2019 Colum. Bus. L. Rev. 443, 455 (2019); J.S. Nelson, Cryptocommunity Currencies, 105 Cornell L. Rev. 909, 939–53 (2020); Carol Goforth & Yuliya Guseva, Regulation of Cryptoassets 263–327 (2d ed. 2022). However, these and other prior works do not focus on the definitional issue as it relates specifically to exchange-traded crypto assets. For instance, if an exchange-traded crypto asset is promoted for its use value because it enables its holders to use an associated application, and if the asset’s holders in fact hold the asset primarily for that purpose rather than its investment value, then Howey’s “expectation of profit” prong would not be met under Forman’s investment/consumption distinction.59United Hous. Found., Inc. v. Forman, 421 U.S. 837, 852–53 (1975) (“[W]hen a purchaser is motivated by a desire to use or consume the item purchased . . . the securities laws do not apply.”). This very issue has been litigated in cases in which a crypto asset was alleged to have been an investment contract at its offering stage.60For instance, in the SEC’s Section 5 action against LBRY, the court rejected LBRY’s argument that Howey’s expectation of profit prong was not met because some purchasers acquired the at-issue crypto asset for its use value. See SEC v. LBRY, Inc., 639 F. Supp. 3d 211, 220–21 (D.N.H. 2022).

But there are issues pertinent to the application of Howey in the context of exchange-traded crypto assets that are not present, or are much less salient, in the context of crypto assets at their offering stage. This Section explores a set of such issues relating to Howey’s common enterprise and efforts of others prongs.

1.  Exchange-Traded Crypto Assets and Common Enterprise

Doctrinal development of Howey’s common enterprise prong, as with all other parts of Howey’s test, has occurred through investment contract cases involving a primary transaction, that is, a transaction in which investors purchased the instrument when it was first offered for sale directly or indirectly from the enterprise’s promoter.61For a thorough doctrinal evaluation of Howey’s common enterprise prong, see James D. Gordon III, Common Enterprise and Multiple Investors: A Contractual Theory for Defining Investment Contracts and Notes, 1988 Colum. Bus. L. Rev. 635, 636–59 (1988). That was the case in Howey, for instance. The other investment contract cases to date have similarly involved primary transactions and include such varied examples as sale-and-leasebacks,62See, e.g., SEC v. Edwards, 540 U.S. 389 (2004). annuities,63See, e.g., SEC v. United Benefit Life Ins. Co., 387 U.S. 202 (1967). and crypto assets.64See, e.g., SEC v. Terraform Labs Pte. Ltd., No. 23-cv-1346, 2023 U.S. Dist. LEXIS 132046 (S.D.N.Y. July 31, 2023); SEC v. Ripple Labs, Inc., No. 20-cv-10832, 2023 U.S. Dist. LEXIS 120486 (S.D.N.Y. July 13, 2023); SEC v. LBRY, Inc., 639 F. Supp. 3d 211, 220–21 (D.N.H. 2022); SEC v. Telegram Grp. Inc., 448 F. Supp. 3d 352, 381 (S.D.N.Y. 2020); SEC v. Kik Interactive Inc., 492 F. Supp. 3d 169 (S.D.N.Y. 2020). There are virtually no investment contract cases concerning secondary transactions, in which investors purchased the putative investment contract from other investors.65The only non-crypto asset investment contract case that appears to have involved a secondary transaction is Hocking v. Dubois, 885 F.2d 1449 (9th Cir. 1989) (en banc). With respect to crypto asset-based investment contract cases, the SEC’s ongoing Section 5 actions against Coinbase, SEC v. Coinbase, No. 23-cv-04738 (S.D.N.Y. filed June 6, 2023), and Binance, SEC v. Binance, No. 1:23-cv-01599 (D.D.C. filed June 5, 2023), both involve the application of Howey to crypto assets that trade in secondary markets, but as of this Article’s writing, neither court has issued a decision concerning the investment contract question. The issue also was present in the crypto asset insider trading case discussed below, see infra note 137. The court in that case very recently granted the SEC’s motion for default judgment as to one of the three defendants and, in that opinion, concluded that the pertinent secondary market traded crypto assets were investment contracts. See SEC v. Wahi, No. 22-cv-01009, 2024 U.S. Dist. LEXIS 36788 (W.D. Wash. Mar. 1, 2024).

The factual orientation of the body of investment contract cases naturally has resulted in courts shaping investment contract doctrine around primary transactions. But a Rule 10b-5 case involving an exchange-traded crypto asset will involve secondary transactions, rather than primary transactions, and the two types of transactions differ in important ways. As noted, in a primary transaction, investors transact directly or indirectly with the promoter. In a secondary transaction, investors transact with their trading counterparties, perhaps with the involvement of one or more intermediaries, and those counterparties ordinarily will not be the promoter.66In certain limited cases, an investor’s counterparty in a secondary transaction may have been the promoter. For instance, crypto asset sponsors sometimes seek to buy back their assets through open market transactions. See, e.g., Nexo Commits Additional $50 Million to Long-Standing Buyback Initiative, Nexo (Aug. 30, 2022), https://nexo.com/media-center/nexo-commits-additional-50-million-to-long-standing-buyback-initiative [https://perma.cc/7VLR-XA2L] (announcing allocation of additional funds for a crypto asset repurchase in the open market). Also, depending on the circumstances, it may be that when a secondary transaction occurs, the promoter who facilitated the instrument’s initial offering no longer has any meaningful involvement in the underlying enterprise, though there may be other non-investors who facilitate the enterprise.

In many instances, the legal rules that courts have developed in primary transaction cases concerning the investment contract question are equally sensible in secondary transaction cases. Take, for instance, the rule that Howey’s “investment of money” prong does not require a cash payment and instead is satisfied when any form of consideration is provided.67See, e.g., Uselton v. Com. Lovelace Motor Freight, Inc., 940 F.2d 564, 574 (10th Cir. 1991) (“[I]n spite of Howey’s reference to an ‘investment of money,’ it is well established that cash is not the only form of contribution or investment that will create an investment contract. Instead, the ‘investment’ may take the form of ‘goods and services,’ or some other ‘exchange of value.’ ”) (citation omitted). That rule is as sensible in the secondary transaction context as the primary transaction context, as consideration in either context may involve cash or noncash payment. That is not the case for the horizontal commonality test, one of the three commonality tests that courts have developed in primary transaction cases to evaluate the presence of common enterprise.68Howey does not define common enterprise or explain how its presence should be evaluated in a given case or how it was present in the case at bar. Lower courts have developed three tests to assess the presence of common enterprise: horizontal commonality and two versions of vertical commonality, broad vertical commonality and strict vertical commonality. See, e.g., Gordon, supra note 62, at 640–41 (discussing the three commonality tests). The circuit courts of appeals are fractured as to which of these tests may be used to assess the presence of common enterprise. See James D. Gordon III, Defining a Common Enterprise in Investment Contracts, 72 Ohio St. L.J. 59, 68 (2011) (“The circuit courts of appeal are profoundly divided over the definition of a common enterprise.”). As discussed below, the horizontal commonality test, as it is presently articulated, is analytically ill-suited for use in secondary transaction cases because of the test’s requirement that investors’ assets be pooled.

i.  Secondary Transactions, Horizontal Commonality, and the Pooling Requirement

The horizontal commonality test evaluates relationships among the investment contract’s investors69See, e.g., SEC v. Infinity Grp. Co., 212 F.3d 180, 187 n.8 (3d Cir. 2000) (“ ‘[H]orizontal commonality’ examines the relationship among investors in a given transaction . . . .”). and inquires whether the investors’ fortunes are intertwined and collectively dependent on the success of the enterprise in which they are invested.70See, e.g., Revak v. SEC Realty Corp., 18 F.3d 81, 87 (2d Cir. 1994) (“In a common enterprise marked by horizontal commonality, the fortunes of each investor depend upon the profitability of the enterprise as a whole . . . .”). Some circuit courts recognize horizontal commonality as the only means of assessing Howey’s common enterprise prong. See, e.g., SEC v. SG Ltd., 265 F.3d 42, 49 (1st Cir. 2001) (identifying appellate cases where the courts demanded a showing of horizontal commonality). The test usually is defined in relation to a pooling requirement, which requires investors’ assets be combined and comingled in a manner that causes investors’ fortunes associated with the enterprise to be codetermined. Specifically, in the primary market transaction cases in which the test was developed, courts usually find horizontal commonality only when there is “the tying of each individual investor’s fortunes to the fortunes of the other investors by the pooling of assets.”71Revak, 18 F.3d at 87. See also Union Planters Nat’l Bank v. Com. Credit Bus. Loans, Inc., 651 F.2d 1174, 1183 (6th Cir. 1981) (“[A] finding of horizontal commonality requires a sharing or pooling of funds.”). Some courts may also require a pro rata distribution of profits for the test to be met. See, e.g., Revak, 18 F.3d at 87. Finally, while pooling for horizontal commonality purposes usually means the pooling of investors’ assets, see Gordon, supra note 62, at 645 n.72 (“By pooling their assets and giving up their claims to any profit or loss attributable to their particular investments, investors make their collective fortunes dependent on the success of a single common enterprise.”) (citing Hocking v. Dubois, 839 F.2d 560, 566 (9th Cir. 1988)), some courts articulate the pooling requirement as the pooling of risk and investments, rather than a pooling of the investors’ assets. See, e.g., Hart v. Pulte Homes of Mich. Corp., 735 F.2d 1001, 1005 (6th Cir. 1984) (“Nothing in the complaint intimates a pooling of risks and investments among these purchasers.”).

A good description of the pooling requirement comes from the court in Savino v. E.F. Hutton:72Savino v. E. F. Hutton & Co., 507 F. Supp. 1225, 1236 (S.D.N.Y. 1981).

“Pooling” has been interpreted to refer to an arrangement whereby the account constitutes a single unit of a larger investment enterprise in which units are sold to different investors and the profitability of each unit depends on the profitability of the investment enterprise as a whole. Thus, an example of horizontal commonality involving brokerage accounts would be a “commodity pool,” in which investors’ funds are placed in a single account and transactions are executed on behalf of the entire account rather than being attributed to any particular subsidiary account. The profit or loss shown by the account as a whole is ultimately allocated to each investor according to the relative size of his or her contribution to the fund. Each investor’s rate of return is thus entirely a function of the rate of return shown by the entire account.73Id. (citation omitted).

In other words, pooling can be understood as the usual mechanism in a primary transaction case that causes investors’ fortunes in the enterprise to be interconnected and dependent on the enterprise’s success. Consider, for instance, the Seventh Circuit’s decision in Milnarik v. M-S Commodities.74Milnarik v. M-S Commodities, Inc., 457 F.2d 274 (7th Cir. 1972). There, the plaintiff opened a discretionary trading account in commodities futures with a broker.75Id. at 275. Many other investors also had opened their own discretionary trading accounts with the same broker.76Id. at 276. The plaintiff’s account sustained losses, and the plaintiff sued for violation of Section 5’s registration requirement, on the theory that the discretionary trading account contract was an investment contract.77Id. at 275. The Seventh Circuit rejected that claim because it found no pooling and thus no investment contract under Howey.78See id. at 278–79.

The absence of the pooling of investors’ funds unsurprisingly led to the court’s conclusion in Milnarik that the investors’ fortunes were not intertwined and mutually dependent on the success of their collective trading accounts.79See id. at 277. Because investors’ accounts were separately maintained and their funds not combined, the value of any given investor’s trading account was independent of the value of any other investor’s trading account.80See id. This would not have been the case had the arrangement instead involved the defendant combining the various investors’ funds in a single account, executing trades with respect to that single account, and then distributing any profits to the investors. If this had been the case, then every investor would have been made financially better off as the account became more profitable and financially worse off as its value dropped. In other words, the aggregation of investors’ funds would have caused the investors’ individual financial interests in the combined account to be tethered together and dependent on the underlying enterprise.
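
The arithmetic of pooling can be made concrete with a minimal, hypothetical sketch that contrasts the separately maintained accounts in Milnarik with a single pooled account; all of the account figures and trading results below are invented solely for illustration.

```python
# Minimal, hypothetical sketch contrasting separate discretionary accounts
# (as in Milnarik) with a single pooled account. All figures are invented
# for illustration only.

contributions = {"A": 10_000, "B": 20_000, "C": 70_000}

# Separate accounts: each account's trading result is independent, so one
# investor can profit while another loses.
separate_results = {"A": +1_500, "B": -2_000, "C": +4_000}

# Pooled account: trades are executed for the pool as a whole, and the
# pool's single result is allocated pro rata by contribution, so every
# investor's return moves with the pool's return.
pool_result = +3_500
total = sum(contributions.values())
pooled_allocations = {
    name: pool_result * amount / total for name, amount in contributions.items()
}

for name in contributions:
    print(
        f"{name}: separate P/L = {separate_results[name]:+,}, "
        f"pooled P/L = {pooled_allocations[name]:+,.0f}"
    )
# In the pooled case, each investor's rate of return equals the pool's
# rate of return (+3.5%), tying their fortunes together; in the separate
# case, returns diverge (+15%, -10%, roughly +5.7%).
```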

But pooling is not an analytically meaningful way of evaluating the presence of horizontal commonality in an investment contract case involving secondary transactions. In primary market transactions, like the ones in Howey and Milnarik, investors will have transacted directly or indirectly with the promoter. In such cases, the promoter may have pooled investors’ assets in a manner that caused investors’ fortunes in the enterprise to rise or fall together, as horizontal commonality requires.

On the other hand, secondary market investors will have transacted with trading counterparties. If those trading counterparties were separate persons or economic entities, then those counterparties would have no reason to aggregate the amounts they received from their sales, except in rare and idiosyncratic circumstances. If, alternatively, the trading counterparties included one or more persons or entities who sold to multiple traders, then it is possible that the counterparty aggregated the amounts it received for its sales, because it may have some business or other reason for doing so. Nonetheless, the counterparty’s aggregation of secondary investors’ assets, unlike the promoter’s aggregation of primary market investors’ assets, will usually not create a linkage between the secondary investors’ financial interests in the enterprise because the success of the underlying enterprise will not turn on whether the counterparty aggregated the sales proceeds it received or how it used any aggregated amounts. Simply put, there is no analytical justification for the horizontal commonality question in a secondary transaction case to turn on the pooling requirement.

An evaluation of horizontal commonality in a secondary transaction case using the lens of pooling can be both underinclusive and overinclusive. First, in a secondary transaction case, investors’ financial interests in the underlying endeavor may still be interdependent even if the investors’ sales proceeds were not aggregated. To see this, suppose that in Howey, each of the primary market investors had sold their interests to another, later stage investor. Those secondary investors’ purchase amounts presumably will not have been pooled. The secondary investors purchased from the primary market investors, rather than the promoters, and those primary market investors would ordinarily have no reason to aggregate their individual sales proceeds. Nonetheless, horizontal commonality would be present with respect to the secondary investors because those investors’ profits would have been intertwined and dependent on the success of the enterprise. If, for instance, there was a poor harvest because of the promoters’ neglect or malfeasance, each of the secondary investors would have seen their profits fall.

Second, just as the absence of an aggregation of investors’ assets does not demonstrate a lack of horizontal commonality, the presence of asset aggregation, by itself, may not necessarily establish horizontal commonality in a secondary transaction case. In the example in the previous paragraph, suppose that the primary market investors in fact had aggregated the proceeds from their resales because, for instance, they wanted to collectively invest in a new venture. That pooling of the secondary investors’ assets by the primary market investors itself has no bearing on whether the secondary purchasers’ profits associated with the orange orchard enterprise would have moved in tandem as required by horizontal commonality.

Imposing a pooling requirement in secondary transaction cases not only would be analytically infirm but also would prevent nearly all investment contracts that arise in connection with secondary transactions from satisfying the horizontal commonality test.81The exception would be if the secondary investors’ assets were pooled and that pooling created linkages between the secondary investors’ individual pecuniary interests in the underlying enterprise. That would effectively cause those transactions to be categorically excluded from the investment contract category in those jurisdictions in which horizontal commonality is the only recognized test for common enterprise.82See supra note 71. Such a limitation finds no basis in logic or public policy and also runs roughshod over the Supreme Court’s directive that the term security be interpreted in fidelity to economic reality and not hindered by rigid formalities.83See, e.g., United Hous. Found., Inc. v. Forman, 421 U.S. 837, 848 (1975) (“[I]n searching for the meaning and scope of the word ‘security’ in the Act(s), form should be disregarded for substance and the emphasis should be on economic reality.”) (quoting Tcherepnin v. Knight, 389 U.S. 332, 336 (1967)).

ii.  Generalization of the Horizontal Commonality Test

Because it is logically inapt in secondary transaction cases, the pooling requirement renders the horizontal commonality test ill-suited for use in those cases. Hence, the test must be rearticulated so that it makes sense both in secondary transaction cases and in the primary transaction cases in which it and Howey’s other rules have been developed. As discussed below, the necessary reformulation requires only a slight generalization of the test from its present form.

As an initial observation, recall that pooling is neither necessary nor sufficient for investors’ profits to be intertwined and mutually dependent on the success of the underlying enterprise as doctrinally required. Instead, as discussed above, pooling is the usual way that the requisite financial linkages arise in a primary transaction case. In other words, pooling is the usual path to interrelated investor profits in a subset of investment contract cases. An appropriately generalized articulation of the horizontal commonality test must recognize pooling as just one possible mechanism that ties investors’ financial interests in the enterprise together.

To have a sensible analytical meaning in both primary transaction cases and secondary transaction cases, the horizontal commonality test must be framed so that it is met whenever the pooling of investors’ assets or some other non-pooling mechanism causes investors’ fortunes to be tied to one another and dependent on the success of the enterprise in which they are invested. In other words, the horizontal commonality rule must be articulated so that it accurately reflects that pooling is but one mechanism that results in investors’ profits being intertwined, not the only mechanism. Note that the generalized test does not merely require that pooling or some other mechanism caused investors’ fortunes to be tied together but, consistent with the test’s analytical underpinning, also requires their fortunes to be dependent on the underlying enterprise.84See, e.g., Revak v. SEC Realty Corp., 18 F.3d 81, 87 (2d Cir. 1994) (horizontal commonality defined with reference to each investor’s fortunes being dependent on the profitability of the enterprise). See also Curran v. Merrill Lynch, Pierce, Fenner & Smith, Inc., 622 F.2d 216, 223–24 (6th Cir. 1980), aff’d, 456 U.S. 353 (1982) (“[N]o horizontal common enterprise can exist unless there also exists . . . some relationship which ties the fortunes of each investor to the success of the overall venture.”).

The generalized test is consistent with Howey, in that there is nothing in the opinion indicating that the Court sought to impose a pooling requirement, even in primary market cases. In fact, it is difficult to support a conclusion that there was a pooling of investors’ assets in Howey, and thus difficult to conclude that horizontal commonality was present under the test’s present formulation. In Howey, the promoters sold each investor their own tract of land and an individual service contract.85See SEC v. W.J. Howey Co., 328 U.S. 293, 295–96 (1946) (each prospective investor was offered their own land sales contract by W.J. Howey Company and their own service contract by Howey-in-the-Hills Service, Inc.). The promoter did not aggregate investors’ purchase amounts and then use that aggregated amount to sell investors a single tract of land serviced by the promoter in which each investor maintained a fractional interest, as the usual definition of pooling would require.86See supra note 72 and accompanying text. As Gordon has explained:

The investment contracts in Howey indisputably involved vertical commonality. However, horizontal commonality was not present because each investor individually owned a separate tract of land. The Court did note that there was ordinarily no right to specific fruit, and that the produce was “pooled,” which probably meant that the fruit was put together for marketing. However, this is not what is usually meant by “pooling” in the horizontal commonality test.

Gordon, supra note 62, at 645 (footnotes omitted). See also Gordon, supra note 69, at 73 n.96 (citing sources noting there was no pooling in Howey).

The proposed generalization is superior to the present articulation, which implicitly assumes that pooling is the only path to interdependence among investors’ fortunes. First, a primary transaction case in which a court would find horizontal commonality under the present test would continue to satisfy the generalized test outlined above. The presence of pooling necessary for a finding of horizontal commonality under the current test would also cause the generalized test to be met.

Second, the generalized test does not excessively broaden the scope of horizontal commonality in primary transaction cases. If a primary transaction case would not satisfy the horizontal commonality test as it is presently articulated because of a lack of pooling, the generalized test would admit a finding of horizontal commonality only if there was some other mechanism that caused investors’ profits to be intertwined and dependent on the success of the underlying enterprise. For instance, returning to Milnarik, there are no facts in the opinion suggesting that there was some non-pooling mechanism that caused investors’ profits to be intertwined.87See Milnarik v. M-S Commodities, Inc., 457 F.2d 274, 277 (7th Cir. 1972) (“Each contract creating this relationship is unitary in nature and each will be a success or failure without regard to the others. Some may show a profit, some a loss, but they are independent of each other.”).

The generalized formulation would admit a broader array of investment contracts in primary transaction cases than under the current formulation, but these would be sensible additions. For instance, suppose in Milnarik, the broker’s policy and practice was to execute identical transactions for each of the accounts over which it had discretionary authority. In this case, while there would be no pooling of the investors’ assets,88See Savino v. E. F. Hutton & Co., 507 F. Supp. 1225, 1237 (S.D.N.Y. 1981) (in a case involving six discretionary trading accounts, holding that the investment manager’s practice of employing a similar investment strategy across the six accounts was insufficient to satisfy the pooling requirement). there would be horizontal commonality under the generalized test, as the value of investors’ portfolios would move in unison because of the broker’s trading policy and practice. The investors in this example can be understood to be in a common enterprise with one another because the value of each of their accounts is dictated by the same trading practice, even though their funds were not pooled.

Unlike the present restrictive formulation, the generalized formulation would result in investment contracts that arise in connection with secondary transactions satisfying the horizontal commonality test even in the absence of pooling, so long as there was some non-pooling mechanism that met the doctrinal requirement that investors’ profits were interrelated and dependent on the success of the underlying enterprise. The generalized test nonetheless is sufficiently circumscribed, and not every putative investment contract arising in connection with secondary transactions will meet it. For instance, suppose that the investors in Milnarik had sold their interests in their accounts to other investors, with all other facts the same. In addition to an absence of pooling, there would be no other mechanism connecting the profits of those later investors to one another and thus no finding of horizontal commonality as to those secondary transactions under the generalized test.

a.  Application to Exchange-Traded Crypto Assets

Investors in a crypto asset offering ordinarily will have the proceeds from their purchases pooled by the crypto asset’s sponsors to facilitate the asset and any associated applications.89See, e.g., SEC v. Telegram Grp. Inc., 448 F. Supp. 3d 352, 369–70 (S.D.N.Y. 2020) (in a case involving a crypto asset offering, finding that the horizontal commonality test was met in part because the sponsor pooled the proceeds received from the initial purchasers). That may not be the case for secondary crypto asset traders who transact on crypto exchanges, as those transactions would have occurred with trading counterparties and those trading counterparties, in turn, may have had no reason to pool the amounts they received. Despite any lack of pooling of the secondary investors’ purchase amounts, the crypto asset may still meet the generalized horizontal commonality test through its price, which can serve as a potential non-pooling mechanism that causes the pecuniary interests of the crypto asset’s traders to be linked and dependent on the success of the underlying enterprise, that is, the crypto asset and any associated applications.

Start first with the requirement that secondary traders’ fortunes in the crypto asset are linked. A given exchange-traded crypto asset can trade on multiple exchanges,90See, e.g., Solana: Markets, CoinMarketCap, https://coinmarketcap.com/currencies/solana/markets [http://web.archive.org/web/20230627040928/https://coinmarketcap.com/currencies/solana/#Markets] (listing crypto exchanges on which Solana trades). which may either be centralized or decentralized. A centralized crypto exchange will involve an intermediary to facilitate transactions, while a decentralized crypto exchange will not. The two types of exchanges also may differ in their pricing mechanism. A centralized crypto exchange will use a limit order book to match buyers and sellers, and therefore the exchange’s prices will be set directly by traders’ submitted orders.91See, e.g., Coinbase Trading Rules, Coinbase, https://www.coinbase.com/legal/trading_rules [https://perma.cc/V3C2-ZADH] (“Coinbase operates a Central Order Book trading platform . . . .”). Rather than relying on a limit order book, a decentralized exchange may facilitate transactions using an automated market maker, in which prices are set through a pricing algorithm.92See, e.g., The Uniswap Protocol, Uniswap Docs, https://docs.uniswap.org/concepts/uniswap-protocol [https://perma.cc/U63X-E8S6] (“The Uniswap protocol takes a different approach, using an Automated Market Maker (AMM), sometimes referred to as a Constant Function Market Maker, in place of an order book. At a very high level, an AMM replaces the buy and sell orders in an order book market with a liquidity pool of two assets, both valued relative to each other.”).
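
To make the automated market maker mechanism concrete, the sketch below illustrates one common design, the constant-product formula described in the Uniswap documentation cited above, in which the product of the pool’s two reserves is held constant across trades. The reserve figures are hypothetical and trading fees are omitted; the sketch is illustrative only and does not purport to describe the pricing of any particular exchange.

```python
# A rough sketch of constant-product automated-market-maker pricing of the
# kind described in the Uniswap documentation cited above (x * y = k).
# Reserve figures are hypothetical and fees are ignored for simplicity.

token_reserve = 1_000_000.0   # units of the crypto asset in the pool
usdc_reserve = 20_000_000.0   # units of the quote asset in the pool
k = token_reserve * usdc_reserve

def spot_price() -> float:
    """Marginal price of the token implied by the current reserves."""
    return usdc_reserve / token_reserve

def buy_token(usdc_in: float) -> float:
    """Swap usdc_in for tokens, holding x * y = k constant; returns tokens out."""
    global token_reserve, usdc_reserve
    new_usdc = usdc_reserve + usdc_in
    new_token = k / new_usdc
    tokens_out = token_reserve - new_token
    token_reserve, usdc_reserve = new_token, new_usdc
    return tokens_out

print(f"price before trade: {spot_price():.4f}")   # 20.0000
bought = buy_token(100_000)                        # a buy order raises the price
print(f"tokens received:    {bought:,.2f}")
print(f"price after trade:  {spot_price():.4f}")   # roughly 20.2005
```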

Whether a crypto exchange uses a limit order book or an automated market maker, the exchange’s pricing mechanism will generate, for a given crypto asset, a single price at which any trader can transact, holding fixed other traders’ transactions. That single trading price links together the financial wellbeing of all the crypto asset’s secondary investors. Every investor holding the crypto asset is made financially better off as the crypto asset’s price on the exchange rises and each is made worse off as the price drops. The fact that a crypto asset trades on multiple exchanges does not break the linkages between the financial wellbeing of traders on different exchanges since arbitrage causes crypto asset prices across different exchanges to closely align.93Within a given country, a crypto asset’s price difference across the exchanges on which it trades usually will be modest. See, e.g., Igor Makarov & Antoinette Schoar, Trading and Arbitrage in Cryptocurrency Markets, 135 J. Fin. Econ. 293, 294 (2020).
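
The arbitrage mechanism can be illustrated with a short, hypothetical calculation: when the same asset quotes at different prices on two venues, buying on the cheaper venue and selling on the dearer one is profitable until the spread narrows to roughly the round-trip trading cost. The prices and fee rate below are invented for illustration.

```python
# Hypothetical illustration of the cross-exchange arbitrage described above.
# If the same asset quotes at different prices on two exchanges, buying on
# the cheaper venue and selling on the dearer one is profitable until the
# gap shrinks to roughly the round-trip trading cost, which keeps the
# venues' prices closely aligned.

price_exchange_a = 99.40    # hypothetical price on exchange A
price_exchange_b = 100.10   # hypothetical price on exchange B
fee_rate = 0.001            # assumed 0.1% fee per leg

gross_spread = price_exchange_b - price_exchange_a
round_trip_cost = fee_rate * (price_exchange_a + price_exchange_b)
net_profit_per_unit = gross_spread - round_trip_cost

print(f"gross spread:     {gross_spread:.2f}")         # 0.70
print(f"round-trip cost:  {round_trip_cost:.2f}")      # about 0.20
print(f"arbitrage profit: {net_profit_per_unit:.2f}")  # about 0.50 per unit
# Arbitrageurs buying on A and selling on B push A's price up and B's
# price down until the net profit is competed away.
```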

A crypto asset’s trading price thus provides a mechanism that links together its secondary investors’ financial interests. To be sure, a crypto asset’s trading price will be influenced by market fluctuations, but the doctrinal relevance of that observation is better understood as concerning Howey’s efforts of others prong, which is discussed below, rather than the common enterprise prong.94See infra Section II.B.2.ii.a.

A crypto asset’s price also may provide the doctrinally necessary linkage between the financial interests of the crypto asset’s secondary traders and success of the underlying enterprise. Empirical studies show that the prices of exchange-traded crypto assets generally respond in the directionally appropriate way to material, public information.95See Patel, supra note 57, at 109–111. In other words, empirical studies show that crypto asset prices generally rise when the market becomes aware of positive, material information pertinent to the crypto asset and generally decrease when the market becomes aware of negative, material information pertinent to the crypto asset. See id. For this reason, as a general matter, the financial interests of a crypto asset’s holders will be dependent on the success of the crypto asset and any associated applications. If, for instance, the crypto asset undergoes some value-enhancing change, then once that change is publicly known, the crypto asset’s price would be expected to increase, because of the directionally appropriate responsiveness of crypto asset prices to material, public information as a general matter.

Nonetheless, it is possible that while the prices of crypto assets—as an asset class—generally respond in a directionally appropriate way to material, public information, that is not the case for any given exchange-traded crypto asset. If the specific crypto asset being evaluated as a potential investment contract lacks that requisite informational responsiveness, then the crypto asset’s price would not connect the financial interests of the crypto asset’s secondary traders with the success of the underlying enterprise. For instance, if the crypto asset underwent some value-reducing change, but the asset’s price was either impervious to material, public information or moved in a directionally inappropriate way in response to that information, then the value-reducing change would either have generated no change to the crypto asset’s price (and thus would have made the crypto asset’s holders neither better nor worse off) or increased the crypto asset’s price (and thus would have made the crypto asset’s holders better, not worse, off).

Accordingly, a crypto asset’s price can serve the role of a non-pooling mechanism that satisfies the requirements of the generalized horizontal commonality test only if the crypto asset’s price generally responds to material, public information in a directionally appropriate way. If the plaintiffs in a crypto asset case implicating the Howey question rely on the asset’s price to serve that non-pooling role, then the generalized horizontal commonality test demands that there be a showing of the necessary price responsiveness. The plaintiffs can make that showing using an event study that demonstrates that the crypto asset’s price generally responds to material, public information in a directionally appropriate way.
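
A simplified version of such an event study is sketched below, using a standard market-model approach: estimate the asset’s relationship to a market index on non-event days, compute abnormal returns on the event days, and test whether those abnormal returns are directionally appropriate. The returns, event dates, injected effect sizes, and index in the sketch are placeholders; an actual analysis would require careful event selection and attention to the methodological issues discussed below.

```python
# A simplified market-model event study of the kind described above, testing
# whether a crypto asset's price responds to material, public information.
# All data, event dates, and effect sizes are placeholders.

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
days = pd.date_range("2023-01-01", periods=250, freq="D")
market = pd.Series(rng.normal(0, 0.03, len(days)), index=days)
asset = 1.2 * market + pd.Series(rng.normal(0, 0.04, len(days)), index=days)

# Hypothetical dates on which material information became public, signed by
# whether the news was positive (+1) or negative (-1).
events = {
    pd.Timestamp("2023-02-10"): +1, pd.Timestamp("2023-03-18"): -1,
    pd.Timestamp("2023-04-15"): +1, pd.Timestamp("2023-06-02"): -1,
    pd.Timestamp("2023-07-21"): +1, pd.Timestamp("2023-08-30"): -1,
}

# Inject a hypothetical +/-6% move on the event days so the synthetic series
# actually contains a response for the study to detect.
for d, sign in events.items():
    asset.loc[d] += sign * 0.06

# Estimate the market model (alpha, beta) on non-event days.
est_days = asset.index.difference(list(events))
beta, alpha = np.polyfit(market.loc[est_days], asset.loc[est_days], 1)

# Abnormal return on each event day, signed so that "directionally
# appropriate" responses are positive.
signed_ar = [
    sign * (asset.loc[d] - (alpha + beta * market.loc[d]))
    for d, sign in events.items()
]
t_stat, p_value = stats.ttest_1samp(signed_ar, popmean=0.0)
print(f"mean signed abnormal return: {np.mean(signed_ar):+.4f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```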

If the plaintiffs cannot establish the necessary price responsiveness of the crypto asset, then the asset’s price cannot serve the role of a non-pooling mechanism that satisfies the requirements of the generalized horizontal commonality test, because, in that circumstance, the plaintiffs will not have established that the asset’s price connects the secondary investors’ pecuniary interests to the success of the enterprise in which they are invested. In this case, the generalized horizontal commonality test will be met with respect to the at-issue crypto asset only if there was pooling of the secondary traders’ purchase amounts or there was some non-pooling mechanism other than the crypto asset’s price that caused the pecuniary interests of the crypto asset’s traders to be linked and dependent on the success of the crypto asset and any associated applications.

b.  Other Reformulations of the Horizontal Commonality Test

In addition to generalizing Howey’s horizontal commonality test in the manner discussed above, there are other sensible ways to reformulate the test so that it is suitable for use in both secondary transaction and primary transaction cases. One possibility is to broaden the test so that it is also met in secondary transaction cases if (1) there was pooling of the primary market investors’ assets, and (2) the primary market investors purchased the instrument only because they reasonably expected the ability to resell their interests to secondary investors. If these two conditions are met, then the secondary investors can be understood to have effectively pooled their assets, in the sense that the reasonable expectation of eventual resales to secondary investors was a necessary condition to the primary market investors engaging in the transactions that resulted in their assets being pooled. This type of pooling by the secondary market investors can be referred to as effective pooling.

Finally, unlike the horizontal commonality test, the two vertical commonality tests do not require reformulation to be analytically workable notions in secondary transaction cases. Strict vertical commonality is met when “the fortunes of investors [are] tied to the fortunes of the promoter” and broad vertical commonality is met when “the fortunes of the investors [are] linked . . . to the efforts of the promoter.”96Revak v. SEC Realty Corp., 18 F.3d 81, 87–88 (2d Cir. 1994). It is worth observing that the role of the promoter in secondary transaction cases will be different than in primary transaction cases. In a primary transaction case, the promoter ordinarily will have facilitated the enterprise in part by soliciting investors. In a secondary transaction case, the promoter likely will not have engaged in any such solicitation because it usually will not have been an active participant in the secondary markets, though the promoter may have directed other efforts to facilitate the enterprise.

c.  The Irrelevance of a Contractual Relationship

Finally, while a primary transaction case ordinarily will involve contracts between the promoter and the investors, that usually will not be the case in a secondary transaction case, because secondary market traders will not have transacted with the promoter, except in rare circumstances.97Even in these rare circumstances, there may not have been any contract between the promoter and the secondary market trader. Consider, for instance, the circumstance in which a crypto asset sponsor engaged in a buyback of the asset in the open market. See supra note 67. Nonetheless, the absence of a contractual relationship between the promoter and investors—whether those investors were secondary market traders or purchasers in a primary market transaction—does not provide a proper basis for defeating a finding of an investment contract. In Howey, the Supreme Court did not limit the investment contract category to just formal contractual arrangements between the promoter and the investors. Instead, the Court articulated the definitional category more expansively so that, in addition to contractual arrangements, the investment contract category also encompasses “transactions” and “schemes.”98SEC v. W.J. Howey Co., 328 U.S. 293, 298–99 (1946) (“[A]n investment contract . . . means a contract, transaction or scheme.”) (emphasis added). See also Hocking v. Dubois, 885 F.2d 1449, 1457 (9th Cir. 1989) (“In defining the term investment contract, Howey itself uses the terms ‘contract, transaction or scheme,’ leaving open the possibility that the security not be formed of one neat, tidy certificate, but a general ‘scheme’ of profit seeking activities.”) (citation omitted). Courts in recent crypto asset cases have rejected the argument that Howey requires the presence of a contractual arrangement. See, e.g., SEC v. Kik Interactive Inc., 492 F. Supp. 3d 169, 178–79 (S.D.N.Y. 2020) (in a case involving the initial offering of a crypto asset, rejecting argument that Howey requires an ongoing contractual obligation). Though the Court did not define the term “scheme,” had it meant for “scheme” simply to mean a series of contractual arrangements, it would have used the term “contracts” instead.

Howey’s lack of a contract requirement is sensible. As a matter of public policy, the investor protection objectives of the securities laws are not weakened simply because the relevant transactions were not undertaken pursuant to a formal contract.99For example, suppose that in Howey the land sales contract was not in writing and therefore unenforceable because of the statute of frauds. The public policy goals of the securities laws would not be met if an investment contract were not found in this circumstance even though the economic nature of the subject transaction is the same as the circumstance in which the land sale contract had been enforceable. And while Howey and the other Supreme Court’s investment contract cases to date have involved contractual arrangements between the promoter and the investors, this common factual feature has not become a part of the Court’s enunciated rule.100The same is true for the state law cases the Supreme Court cited in Howey. To determine the contours of the investment contract category, the Supreme Court relied on state court cases interpreting state securities laws, that is, state blue sky laws. See Howey, 328 U.S. at 298. While these state cases involved contractual arrangements between the promoter and the investors, the investment contract rule fashioned by the courts in those cases did not mandate a contractual relationship. For example, Howey’s leading state court citation is to State v. Gopher Tire & Rubber Co., 177 N.W. 937 (Minn. 1920). See Howey, 328 U.S. at 298. However, in that case, the Minnesota Supreme Court defined investment contract without reference to a contractual arrangement. See Gopher Tire, 177 N.W. at 938 (“No case has been called to our attention defining the term ‘investment contract.’ The placing of capital or laying out of money in a way intended to secure income or profit from its employment is an ‘investment’ as that word is commonly used and understood.”). The Supreme Court’s description of these state cases did not characterize them as requiring a contractual relationship between the promoter and investors and instead described those cases as admitting schemes. See Howey, 328 U.S. at 298 (“The term ‘investment contract’ is undefined by the Securities Act or by relevant legislative reports. But the term was common in many state ‘blue sky’ laws in existence . . . An investment contract thus came to mean a contract or scheme for ‘the placing of capital or laying out of money in a way intended to secure income or profit from its employment.’ ”) (emphasis added) (quoting Gopher Tire, 177 N. W. at 938). For a careful historical account of blue sky laws, see Jonathan R. Macey & Geoffrey P. Miller, Origin of the Blue Sky Laws, 70 Tex. L. Rev. 347 (1991). Instead, the Supreme Court’s post-Howey investment contract cases have consistently invoked Howey’s articulation of the investment contract category as encompassing schemes.101See, e.g., SEC v. Edwards, 540 U.S. 389, 393 (2004) (“The test for whether a particular scheme is an investment contract was established in our decision in [Howey]. We look to ‘whether the scheme involves an investment of money in a common enterprise with profits to come solely from the efforts of others.’ ”) (emphasis added) (quoting Howey, 328 U.S. at 301); Int’l Bhd. of Teamsters, Chauffeurs, Warehousemen & Helpers of Am. v. Daniel, 439 U.S. 
551, 558 (1979) (“To determine whether a particular financial relationship constitutes an investment contract, ‘[the] test is whether the scheme involves an investment of money in a common enterprise with profits to come solely from the efforts of others.’ ”) (emphasis added) (quoting Howey, 328 U.S. at 301); United Hous. Found., Inc. v. Forman, 421 U.S. 837, 852 (1975) (“[T]he basic test for distinguishing the transaction from other commercial dealings is ‘whether the scheme involves an investment of money in a common enterprise with profits to come solely from the efforts of others.’ ”) (emphasis added) (quoting Howey, 328 U.S. at 301); Tcherepnin v. Knight, 389 U.S. 332, 338 (1967) (“ ‘The test [for an investment contract] is whether the scheme involves an investment of money in a common enterprise with profits to come solely from the efforts of others.’ ”) (emphasis added) (quoting Howey, 328 U.S. at 301); cf. Marine Bank v. Weaver, 455 U.S. 551, 556 (1982) (“[The statutory definition of a security under the Securities Exchange Act] includes ordinary stocks and bonds, along with the ‘countless and variable schemes devised by those who seek the use of the money of others on the promise of profits.’ ”) (emphasis added) (quoting Howey, 328 U.S. at 299).

Stated differently, simply because a set of cases share a common factual predicate does not mean that the factual predicate necessarily becomes a component of the pertinent rule of law. As another example of this somewhat unremarkable observation, note that the profits that investors received in the Supreme Court’s investment contract cases arose through income generated by a business enterprise that was organized and facilitated by the promoter. But the fact that these cases share this common factual predicate does not mean that the factual predicate is part of the operative rule. As the cases recognize, investors’ “profits” for purposes of the Howey determination are not limited to proceeds from an investment in a business enterprise and instead include capital appreciation more generally.102See, e.g., United Hous. Found., Inc. v. Forman, 421 U.S. 837, 852 (1975) (“By profits, the Court has meant either capital appreciation resulting from the development of the initial investment . . . or a participation in earnings resulting from the use of investors’ funds . . . .”); SEC v. Edwards, 540 U.S. 389, 394 (2004) (explaining that “profits” for Howey’s purposes means “income or return, [that] include[s], for example, dividends, other periodic payments, or the increased value of the investment”). See also Kik Interactive, 492 F. Supp. 3d at 179–80 (for purposes of Howey, investors’ profits arose through an increase in the value of the crypto asset relative to its purchase price). This observation is especially relevant to the crypto asset context because, as noted in Section I.B above, a crypto asset’s holders ordinarily do not receive and are not entitled to any income arising from development and operation of the crypto asset or any associated applications.

2.  Exchange-Traded Crypto Assets and Efforts of Others

For a given instrument to be an investment contract, it must also satisfy Howey’s efforts of others prong. In the context of an exchange-traded crypto asset, that requirement will be met if investors reasonably expected the crypto asset’s value to be significantly determined by the entrepreneurial or managerial efforts of others.103Howey requires that investors reasonably expected their profits “to be derived from the entrepreneurial or managerial efforts of others.” United Hous. Found., Inc. v. Forman, 421 U.S. 837, 852 (1975). While Howey stated that those profits must come “solely” from the efforts of others, see Howey, 328 U.S. at 301, courts have not construed the word “solely” literally and instead have only required that the entrepreneurial or managerial efforts of those other than the investors are the ones that significantly determine the enterprise’s success. See, e.g., SEC v. Glenn W. Turner Enters., Inc., 474 F.2d 476, 482 (9th Cir. 1973) (Howey’s efforts of others prong is met if “the efforts made by those other than the investor are the undeniably significant ones, those essential managerial efforts which affect the failure or success of the enterprise”). Whether this requirement is met will depend on the at-issue crypto asset’s specific features, including the extent of its operational decentralization. This subpart explores issues pertinent to application of Howey’s efforts of others prong in the secondary trading crypto asset context.

The discussion below makes two points regarding Howey’s efforts of others prong. First, it explains why operational decentralization, by itself, is not a per se bar to satisfying that prong, though specific factual features may prevent a particular exchange-traded crypto asset from doing so. Second, it explains why investors’ expectations concerning the use of their sales proceeds are doctrinally irrelevant.

i.  Why Operational Decentralization Is Not a Per Se Bar

The first issue to consider is whether a crypto asset’s operational decentralization should preclude satisfaction of Howey’s efforts of others prong. To structure the analysis, consider two possibilities. The first possibility is that the exchange-traded crypto asset has achieved some operational decentralization but a centralized third party continues to direct some entrepreneurial or managerial efforts toward the crypto asset’s success. The second possibility is that the crypto asset has achieved complete operational decentralization, in the sense that no centralized third party directs entrepreneurial or managerial efforts toward the success of the crypto asset; instead, those efforts are undertaken by a decentralized group of unaffiliated persons.104There is also the possibility that the crypto asset and any of its associated applications no longer require any entrepreneurial or managerial efforts to be viable. Howey’s efforts of others prong would not be met in this circumstance.

a.  Continued Involvement by Sponsors or Other Centralized Third Party

If the crypto asset’s sponsors or some other centralized third party continue to exert entrepreneurial or managerial efforts such that investors reasonably expect those efforts to significantly determine the crypto asset’s value, as usually embodied by its trading price, then Howey’s efforts of others prong will be met.105Under Howey, the requisite efforts need not be undertaken by the crypto asset’s sponsors and instead the efforts of other non-investors are included in the analysis. See Howey, 328 U.S. at 298–99 (test requires that profits are reasonably expected from “the efforts of the promoter or a third party”). See also Cont’l Mktg. Corp. v. SEC, 387 F.2d 466, 470 (10th Cir. 1967) (rejecting the argument that Howey’s requisite entrepreneurial or managerial efforts must be undertaken by the security’s seller or a third-party owned or controlled by the seller). This observation is reflected in courts’ determinations of the Howey question as it pertains to crypto assets at their offering stage,106See cases cited supra note 65. As noted, no court has yet rendered a decision concerning the Howey question as it relates to secondary crypto asset transactions. See supra note 66. which have found the efforts of others prong to have been satisfied because the crypto asset’s investors reasonably expected their profits to arise from the sponsor’s entrepreneurial or managerial efforts.107For instance, in granting the SEC’s motion for a preliminary injunction in the SEC’s Section 5 claim against Telegram, the court found that the SEC had shown a substantial likelihood of success of proving that a reasonable initial purchaser of the at-issue crypto asset would have expected the asset’s resale price to increase because of the sponsor’s entrepreneurial and managerial efforts. See SEC v. Telegram Grp. Inc., 448 F. Supp. 3d 352, 375–78 (S.D.N.Y. 2020).

Presently, nearly all crypto assets appear to be associated with one or more centralized bodies that have at least some involvement in facilitating their success, including through developing, operating, managing, and promoting the crypto assets and any associated applications.108See, e.g., id. (in a case involving a crypto asset’s initial offering, granting the SEC’s motion for preliminary injunction and finding that the SEC had shown a substantial likelihood of establishing Howey’s efforts of others prong because of the activities of two centralized bodies). While the importance of such centralized bodies’ efforts to a given crypto asset’s success may ebb as the crypto asset matures and becomes the subject of additional secondary trading, those efforts may remain instrumental to the crypto asset’s success. Even crypto assets like ether that have experienced significant operational decentralization have at times benefited from the focused efforts of a collective group of developers.109See, e.g., Walch, supra note 14, at 56–57 (discussing the role of developers in the 2016 hard fork of the Ethereum blockchain). See also Park, supra note 59, at 6 (“[T]here are questions about whether the Ethereum project is truly independent of its founders.”). Furthermore, the mere fact that a crypto asset relies on a distributed ledger and therefore has its relevant data spread across a network with a multitude of sites or nodes does not resolve the efforts of others question, since, for instance, a centralized body could still have significant involvement in managing the network.

Whether the presence and activities of these centralized groups are sufficient to satisfy Howey’s efforts of others prong will hinge on the nature of the centralized third party’s involvement. A series of issues awaits judicial determination. For instance, a crypto asset or its associated applications, if any, ordinarily will have a presence on software code repositories and messaging platforms, where the crypto asset’s developers, investors, and others come together and communicate to improve the asset or its associated applications.110See, e.g., Solana, Github, https://github.com/solana-labs/solana [https://perma.cc/3KEZ-2KGL] (Github code repository for the Solana blockchain managed by Solana Labs); Solana Community, Discord, https://discord.com/invite/solana-community-926762104667648000 (last visited Sept. 6, 2023) (an unofficial Solana-related Discord channel organized by the Solana community). Some of these activities may be managed by the crypto asset’s sponsors rather than investors.111See, e.g., Solana, Github, supra note 111. If those managerial efforts are important to the viability of the crypto asset or any associated applications, then that would militate in favor of a finding that Howey’s efforts of others prong was met.112In addition to a presence on message platforms and software code repositories, a crypto asset or its associated application may have an active presence on discussion sites like Reddit and social media sites like X. If the crypto asset’s sponsor undertakes activity on those sites that facilitates the success of the crypto asset or any associated applications, then that activity also would militate in favor of Howey’s efforts of others prong being met. See, e.g., SEC v. LBRY, Inc., 639 F. Supp. 3d 211, 217–18 (D.N.H. 2022) (evaluating Howey’s efforts of others prong in part using the crypto asset sponsor’s communications on Reddit).

The availability of pricing data opens the possibility of using empirical techniques to assess Howey’s efforts of others prong in investment contract cases involving an exchange-traded crypto asset. An assessment of whether a crypto asset’s trading price was influenced by the activities of a centralized body is relevant to the efforts of others question, which demands a determination whether reasonable investors would expect the asset’s value, as ordinarily measured by its price, to be significantly determined by the entrepreneurial or managerial efforts of the centralized body. If a crypto asset’s price was influenced by the efforts of a centralized body, then the crypto asset’s price would be expected to move in a directionally appropriate way once value-relevant activity by the centralized body became known to the market. For instance, an announced improvement in a crypto asset’s associated application by the centralized body would be expected to cause the crypto asset’s price to increase, assuming that Howey’s efforts of others prong was met.

An event study therefore could be used to assess the extent to which the at-issue crypto asset does or does not respond to potentially value-relevant activities of a centralized body.113In connection with its Motion for Summary Judgment in its action against Ripple, the SEC sought to use an event study to show that the crypto asset’s price responded to the sponsor’s value-relevant activity. See Amended Expert Rep. of Albert Metz, SEC v. Ripple Labs, Inc., No. 20-cv-10832 (S.D.N.Y. Mar. 11, 2022), ECF No. 439, Exhibit B. However, the use of event studies in that context should be undertaken with care. First, there are important methodological considerations, such as the issue of low power, which are amplified in the crypto asset context because of high crypto asset price volatility.114See infra Section III.D. Second, the event study may be underinclusive in that it would not capture the effects of a centralized body’s ongoing influence on a crypto asset’s price and instead would be limited to analysis of how episodic events associated with the centralized body affected the asset’s price. Finally, even if the event study showed that the crypto asset’s price responds to value-relevant activities of a centralized body, that finding would not fully resolve the pertinent question of whether investors reasonably expected the crypto asset’s price to be significantly determined by the centralized body’s entrepreneurial or managerial efforts, though it would be one important determinant in that inquiry.
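
The low-power concern noted above can be illustrated with a rough simulation, set out below, in which a genuine abnormal return of fixed size is detected far less often as the asset’s idiosyncratic daily volatility increases. The effect size, volatility levels, number of events, and significance threshold are invented solely to illustrate the point.

```python
# A rough simulation of the low-power concern noted above: with highly
# volatile daily returns, an event study over a handful of event dates can
# easily fail to detect a genuine price response. Parameters are invented
# solely to illustrate the point.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def detection_rate(daily_vol: float, true_effect: float = 0.03,
                   n_events: int = 8, trials: int = 5_000) -> float:
    """Share of simulated studies that detect a real abnormal return of
    `true_effect` at the 5% level, given idiosyncratic volatility `daily_vol`."""
    hits = 0
    for _ in range(trials):
        abnormal = true_effect + rng.normal(0, daily_vol, n_events)
        _, p = stats.ttest_1samp(abnormal, popmean=0.0)
        hits += p < 0.05
    return hits / trials

print(f"power at 2% daily volatility: {detection_rate(0.02):.0%}")
print(f"power at 8% daily volatility: {detection_rate(0.08):.0%}")
# Holding the true effect fixed, higher volatility sharply reduces the
# chance that the study detects it.
```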

b.  Absence of Any Centralized Third Party

Now, suppose instead that the crypto asset is fully decentralized in that there is no centralized third party that directs entrepreneurial or managerial efforts toward the crypto asset’s success; instead, those efforts are undertaken by a decentralized group of unaffiliated persons. The prospect of full decentralization raises the question of whether Howey’s efforts of others prong requires the existence of one or more centralized third parties whose entrepreneurial or managerial efforts significantly affect the investment contract’s success. If such centralized third parties in fact are necessary, then sufficient decentralization would by itself preclude satisfaction of Howey’s efforts of others prong.

SEC staff guidance concerning the application of Howey in the crypto asset context can be reasonably interpreted to envision the presence of one or more such centralized third parties for purposes of evaluating Howey’s efforts of others prong.115See Framework for “Investment Contract” Analysis of Digital Assets, SEC, https://www.sec.gov/corpfin/framework-investment-contract-analysis-digital-assets [https://perma.cc/G2M5-P3C2]. That guidance defines an “Active Participant” as “a promoter, sponsor, or other third party (or affiliated group of third parties)” and then goes on to explain that Howey’s efforts of others prong in the crypto asset context requires an inquiry into whether “the purchaser reasonably expect[s] to rely on the efforts of an [Active Participant]” and the nature of those efforts.116See id. In other words, the SEC staff’s definition of an Active Participant could be read to exclude the efforts of a decentralized group of unaffiliated third parties from meeting Howey’s efforts of others prong. Scholars also have proposed tests for assessing Howey’s efforts of others prong in the crypto asset context that similarly appear to hinge on the presence of one or more centralized third parties, such as the crypto asset’s sponsors.117See, e.g., Henderson & Raskin, supra note 59, at 461 (proposing a test for evaluating the applicability of Howey to the crypto asset context, where the test specifies that “if the instrument is a decentralized one that is not controlled by a single entity, then it is not a security”).

The well-publicized 2018 speech by the SEC’s then-Director of Corporation Finance, Bill Hinman, can also be interpreted as implicitly adopting the notion that Howey’s efforts of others prong requires the presence of a centralized third party. In that speech, then-Director Hinman observed that increasing operational decentralization during a crypto asset’s lifecycle could cause a crypto asset that previously satisfied Howey’s test of an investment contract to no longer satisfy that test because no centralized group is tasked with the crypto asset’s entrepreneurial or managerial functions.118As Hinman observed:

[T]his also points the way to when a digital asset transaction may no longer represent a security offering. If the network on which the token or coin is to function is sufficiently decentralized—where purchasers would no longer reasonably expect a person or group to carry out essential managerial or entrepreneurial efforts—the assets may not represent an investment contract. . . . What are some of the factors to consider in assessing whether a digital asset is offered as an investment contract and is thus a security? Primarily, consider whether a third party—be it a person, entity or coordinated group of actors—drives the expectation of a return.

William Hinman, Dir., SEC Div. of Corp. Fin., Digital Asset Transactions: When Howey Met Gary (Plastic) (June 14, 2018).
That proposition has been featured prominently in crypto asset litigation that implicates the Howey question119See Defendant’s Opposition to Plaintiff’s Motion for Summary Judgment at 48–50, SEC v. Ripple Labs, Inc., No. 20-cv-10832 (S.D.N.Y. June 16, 2023). and has been the subject of academic inquiry.120See, e.g., Park, supra note 59; Henderson & Raskin, supra note 59.

Howey should not be read as requiring the presence of one or more centralized third parties for purposes of its efforts of others prong. There is nothing in the language or reasoning of Howey suggesting that the requisite entrepreneurial or managerial efforts must be undertaken by a centralized third party.121While the requisite entrepreneurial or managerial efforts in Howey were undertaken by centralized third parties (namely, W.J. Howey Company and Howey-in-the-Hills Service, Inc.), the Court’s reasoning was not grounded on the fact of that centralization. Howey’s efforts of others prong instead is better understood as requiring investors to have reasonably expected their profits to have been significantly determined by the entrepreneurial or managerial efforts of those other than the investors themselves, whether or not those “others” constituted a centralized group.122As a separate point, most courts also evaluate the promoter’s pre-purchase activities when determining whether Howey’s efforts of others prong was met. See, e.g., SEC v. Mut. Benefits Corp., 408 F.3d 737, 743–45 (11th Cir. 2005) (holding that the promoter’s pre-purchase activities are included in an evaluation of Howey’s efforts of others prong). Under this line of cases, regardless of whether the secondary transaction investment contract case involved a centralized group at the time of sale, the pre-purchase efforts of the promoter would be considered in the efforts of others analysis.

Compared with a formulation of Howey’s efforts of others prong that requires the presence of a value-enhancing centralized party, an advantage of a formulation that permits the prong to be satisfied even in the absence of a centralized party is that it better focuses the analysis on an essential feature of an investment: delegation of entrepreneurial or managerial efforts to those outside of the investor class. So long as investors are sufficiently passive, in the sense that they have ceded sufficient entrepreneurial and managerial efforts to others, the putative investment contract will bear this indicium, independent of the degree of centralization of the group to whom those efforts were delegated. The investment contract cases addressing whether investors’ managerial involvement in the enterprise defeats Howey’s efforts of others prong embody this observation. Those cases evaluate the efforts of others prong by focusing on the extent of investors’ passivity.123Consider, for instance, U.S. v. Leonard, 529 F.3d 83 (2d Cir. 2008), in which the Second Circuit evaluated whether the district court erred in concluding that the LLC interests at issue were investment contracts under Howey. The defendants argued that Howey’s efforts of others prong was not met because the purchasers of the LLC interests had been contractually delegated some managerial involvement in the enterprise. Id. at 88. The Second Circuit rejected that argument. Id. at 89–91. The court first distinguished between circumstances in which investors are passive and circumstances in which they maintain significant investor control. Id. at 89–90. It then held that when investors maintain or are delegated some control over the investment, Howey’s efforts of others prong may still be met so long as the investors were unable to exercise meaningful control and thus were effectively passive. Id. at 90–91. See also Steinhardt Grp. Inc. v. Citicorp, 126 F.3d 144 (3d Cir. 1997) (in a case involving a limited partnership interest, concluding that Howey’s efforts of others prong was not met because the limited partner was not sufficiently passive).

Because Howey’s efforts of others prong should not be understood as mandating the presence of a value-generating centralized body, the prong may be met even if a crypto asset has undergone substantial operational decentralization such that there is no centralized third party that exerts entrepreneurial or managerial efforts influencing the crypto asset’s value. The relevant inquiry is whether the crypto asset’s investors reasonably believed the asset’s value was significantly determined by the entrepreneurial or managerial efforts of individuals or entities other than the investors themselves. If the asset’s investors had those reasonable expectations, then Howey’s efforts of others prong would be met even if the pertinent efforts were undertaken by a dispersed and large number of unaffiliated individuals or entities.

Not all exchange-traded crypto assets will satisfy Howey’s efforts of others prong. First, if the putative investment contract is such that it requires no ongoing entrepreneurial or managerial efforts to succeed, then Howey’s efforts of others prong would not be met. Mining, the energy-intensive process of validating transactions on proof-of-work blockchains,124See, e.g., Andrew Gazdecki, Proof-Of-Work and Proof-of-Stake: How Blockchain Reaches Consensus, Forbes (Jan. 28, 2019, 9:00 AM), https://www.forbes.com/sites/forbestechcouncil/2019/01/28/proof-of-work-and-proof-of-stake-how-blockchain-reaches-consensus/?sh=5a105eca68c8 [https://perma.cc/8JZV-5UZQ]. should be considered ministerial rather than entrepreneurial or managerial.125Efforts that are not entrepreneurial or managerial in nature are not credited in an analysis of Howey’s efforts of others prong. See, e.g., SEC v. Life Partners, Inc., 87 F.3d 536, 545 (D.C. Cir. 1996). Second, if the investors were the ones who significantly directed the entrepreneurial or managerial efforts pertinent to the investment contract’s success, then Howey’s efforts of others prong also will not be met.126See supra note 104; Fargo Partners v. Dain Corp., 540 F.2d 912, 914–15 (8th Cir. 1976) (finding that Howey’s efforts of others prong was not met because of the investor’s significant involvement in the alleged investment contract). See also id. at 914–15 (“Where the investors’ duties were nominal and insignificant, their roles were perfunctory or ministerial, or they lacked any real control over the operation of the enterprise, the courts have found investment contracts.”). This may be the case if the crypto asset provides investors with extensive governance rights that they can readily exercise.

Additionally, Howey does not admit as investment contracts instruments whose value is driven almost entirely by market forces. In such a circumstance, it would not be reasonable for the putative investment contract’s investors to believe that its value is significantly determined by any person’s entrepreneurial or managerial efforts.127See, e.g., Noa v. Key Futures, Inc., 638 F.2d 77, 79 (9th Cir. 1980) (concluding that Howey’s efforts of others prong was not met with respect to silver bars because investors’ profits depended on market-wide price fluctuations of silver, not managerial efforts). That is the case, for instance, for such varied tradeable items as gold, baseball cards, and bitcoin, which are all understood to have their value driven almost entirely by market forces rather than by any person or persons’ entrepreneurial or managerial efforts. At the same time, even if the crypto asset’s price is determined in part by market forces—for instance, if its price moves in part because of price changes of another crypto asset such as bitcoin—investors may still reasonably expect the asset’s price to be significantly determined by the entrepreneurial or managerial efforts of others, in which case Howey’s efforts of others prong will be met.128Of course, in this circumstance, it may be that other prongs of Howey are not met. Consider, for example, tickets to a popular concert. Suppose that the tickets can be resold on a secondary market and that the secondary market price is significantly higher than the initial purchase price. Because of the higher secondary market price, initial purchasers profited from their purchase, in the sense that the current value of their tickets exceeds the purchase price, but did their initial ticket purchases constitute an investment contract under Howey? One possibility is that the high secondary market price was driven by the relatively high willingness to pay of those who wanted to attend the concert but were unable to obtain tickets during the initial sale. Because the purchasers’ profits were the result of market forces, Howey’s efforts of others prong would not have been met. See supra note 128 and accompanying text. But suppose instead that the elevated secondary market price was because of the entrepreneurial or managerial efforts of the performer and others, for instance, through heightened promotion and marketing of the concert. While Howey’s efforts of others prong may have been met in this circumstance, this does not necessarily mean that the initial ticket purchases constituted an investment contract. If, for instance, the initial ticket purchasers purchased their tickets primarily to attend the concert instead of seeking profits through a resale, then Howey’s expectation of profits prong would not have been satisfied because of Forman’s investment/consumption distinction. See supra note 60 and accompanying text.

ii.  The Irrelevance of Investors’ Expectations Concerning the Use of Their Sales Proceeds

In a primary transaction case, investors’ sales proceeds ultimately will flow to the promoter, who then is expected to use the proceeds to facilitate the enterprise in which the purchasers are invested. That will not be the case in a secondary transaction case. In this circumstance, investors’ sales proceeds instead will flow to the trading counterparties, who ordinarily will not be the enterprise’s promoter and also will not direct the sales proceeds to the promoter. For instance, in a secondary crypto asset transaction, the purchasers’ proceeds usually will not flow to the crypto asset’s sponsors and instead will be retained by the trading counterparties. For this reason, while investors in a primary transaction case may reasonably expect that their sales proceeds will be used by the promoter to facilitate the enterprise in which they are invested, investors in a secondary transaction case generally will not have that expectation, because their sales proceeds flow directly to trading counterparties who usually are not the promoter. Only in certain circumstances will secondary investors reasonably hold such an expectation.129For instance, suppose that the promoter was able to conduct the offering only because the initial purchasers expected to resell the instrument to secondary investors. Suppose further that the secondary investors knew, or reasonably should have known, of the initial purchasers’ expectation and necessity of resale. In this case, it may have been reasonable for the secondary investors to have expected their sales proceeds to have effectively been used by the promoter to facilitate the enterprise, with the initial purchasers merely serving as a conduit of those proceeds.

The fact that investors in a secondary transaction case may not reasonably believe that their sales proceeds will be used by the promoter to facilitate the enterprise is doctrinally irrelevant to Howey’s efforts of others prong. Howey’s efforts of others prong requires that investors reasonably expected their profits to have been significantly determined by others’ entrepreneurial or managerial efforts, and the operative rule makes no mention of investors’ expectations concerning the use of their sales proceeds.130See, e.g., United Hous. Found., Inc. v. Forman, 421 U.S. 837, 852 (1975) (Howey requires “a reasonable expectation of profits to be derived from the entrepreneurial or managerial efforts of others”). So, for example, while investors’ sales proceeds in a secondary crypto asset transaction case may not have flowed to the crypto asset’s sponsors, Howey’s efforts of others prong will still have been met so long as traders reasonably expected the crypto asset’s value to have been significantly determined by the entrepreneurial or managerial efforts of others, such as the sponsor.131Nonetheless, in its recent summary judgment decision, the court in the SEC’s Section 5 action against Ripple implicitly adopted the rule that Howey’s efforts of others prong cannot be met if investors do not reasonably expect their sales proceeds to be used by the sponsor to facilitate the underlying enterprise. See SEC v. Ripple Labs, Inc., No. 20-cv-10832, 2023 U.S. Dist. LEXIS 120486, at *35–37 (S.D.N.Y. July 13, 2023). In that case, the crypto asset sponsor initially sold the crypto asset directly to certain counterparties using as conduits crypto exchanges in which secondary transactions of the crypto asset were already occurring. Id. at *8. The court concluded that because the class of investors who purchased the initially offered crypto asset on those crypto exchanges could not have known whether their sales proceeds flowed to the crypto asset’s sponsor or instead to a trading counterparty, they could not have reasonably expected that the sponsor would use their sales proceeds to increase the crypto asset’s value, thus defeating a finding of Howey’s efforts of others prong. See id. at *35–36. The case remains pending as of this Article’s writing, with the court recently denying the SEC’s motion to certify interlocutory appeal of the court’s summary judgment decision. See Order Denying Motion for Leave to Appeal, SEC v. Ripple Labs, Inc., No. 20-cv-10832 (S.D.N.Y. Oct. 3, 2023). In other words, the appropriate focus of Howey’s efforts of others prong is on investors’ beliefs about whose entrepreneurial or managerial efforts significantly determined their expected profits, not investors’ beliefs about how their sales proceeds specifically would be put to use.132Howey’s efforts of others prong also does not require that the promoter itself, as opposed to some other non-investor, undertake the requisite entrepreneurial or managerial efforts. See supra note 106.

There is no public policy justification for limiting the investment contract category to only those circumstances in which investors reasonably expected the promoter to use their funds to facilitate the enterprise in which they are invested. First, the adoption of that limiting rule would permit instruments that otherwise would be investment contracts to be offered publicly without registration, so long as the offering were structured so that investors could not readily discern whether their proceeds would flow to the sponsor.133Others have made a similar point. See, e.g., John Coffee, The Next Big Case in the Crypto Wars, N.Y.L.J. (Sept. 20, 2023), https://www.law.com/newyorklawjournal/2023/09/20/the-next-big-case-in-the-crypto-wars/?slreturn=20231020000848 [https://perma.cc/3V68-JZBQ] (explaining that linking Howey’s efforts of others prong to investors’ knowledge of the use of their sales proceeds “creates a dangerous incentive for issuers to structure offerings so as to hide critical facts” and leads to “[t]he perverse result . . . that the less the investor knows, the safer the issuer becomes”). For example, if a promoter simultaneously undertook multiple investment projects, the promoter could pool all investors’ funds, which may result in investors of any given project not knowing whether the promoter specifically used their funds to finance their project, even though there was no question that the investors’ profits would be significantly determined by the promoter’s entrepreneurial or managerial efforts.

Second, limiting the investment contract category so that it only encompasses circumstances in which investors reasonably expected the promoter to use their funds to facilitate the enterprise would exclude an expansive swath of secondary transaction investment contract cases from the scope of federal securities law. This near wholesale carveout of an entire transaction class from the reach of the securities laws would serve no public policy goal and instead would undermine the investor protection objectives that the securities laws seek to promote.

C.  The Value of Additional Definitional Clarity

Crypto asset sponsors and crypto exchanges sometimes criticize Howey’s investment contract analysis when applied to the crypto asset context as unreasonably uncertain.134See, e.g., Coinbase, Petition for Rulemaking: Digital Asset Securities Regulation (July 21, 2022), at 8, https://www.sec.gov/files/rules/petitions/2022/petn4-789.pdf [https://web.archive.org/web/20231119200747/https://www.sec.gov/files/rules/petitions/2022/petn4-789.pdf] (“Applying the Howey test[] piecemeal to an entire market sector has proven itself to be an unworkable solution.”).
Unless an exemption applies, any offering of securities must be registered, and any exchange that facilitates securities transactions must itself register. It is thus important to crypto asset sponsors and exchanges that they have clear guidance on which of the crypto assets they may offer or list are securities under federal securities law. Crypto asset sponsors and crypto exchanges contend that Howey fails to clearly inform them which crypto assets may be securities, and thus subject them to federal securities law, including its robust registration requirements.135See, e.g., id. at 5 (“Although Coinbase, and other digital asset trading venues, have identified a number of digital assets that are clearly not securities, and therefore may trade without SEC registration, there are other assets that are harder to classify relying on the SEC’s application of the Howey and Reves tests. Many of the questions we ask [in this petition] highlight the challenge of identifying which of these digital assets, if any, fall within the Commission’s jurisdiction . . . .”). Some scholars have expressed discontent over the lack of definitional clarity. See, e.g., Goforth & Guseva, supra note 59, at 314 (“Cryptoassets do not act like traditional securities, and they do not always fit well with the existing framework. The lack of regulatory clarity remains a serious impediment to safe and compliant development of cryptoasset markets.”). The effect of Howey’s uncertainty on crypto asset sponsors and exchanges is heightened because the pertinent transactions are not one-off or episodic transactions but instead are the foundations of those market participants’ business models.

The discussion in the previous Section shows that the effects of any uncertainty in Howey’s application in the crypto asset context extend beyond crypto asset sponsors and exchanges and also encompass crypto asset traders. Crypto asset traders who are subject to secondary crypto asset trading fraud, or other forms of misconduct prohibited by the federal securities laws such as market manipulation, may seek to recover through claims asserted under the securities laws only to find their claims dismissed on grounds that the pertinent transactions did not involve securities.136Crypto asset traders may also unknowingly be swept within securities law’s various prohibitions, such as insider trading. In SEC v. Wahi, No. 22-cv-01009, 2023 U.S. Dist. LEXIS 89067 (W.D. Wash. May 22, 2023), the defendant traders who were alleged by the SEC to have unlawfully engaged in insider trading argued that due process prohibits the SEC from enforcing its position that the at-issue crypto assets were securities because market participants, such as the defendants in the case, lacked fair notice about the scope of the investment contract category. Defendants’ Motion to Dismiss at 38–39, SEC v. Wahi, No. 22-cv-01009 (W.D. Wash. May 22, 2023).

As the case law grows and matures, crypto asset market participants’ uncertainty about Howey’s analysis in the crypto asset context should abate.137As cases are litigated, doctrinal fissures will arise, but the appellate process provides a mechanism for resolution of those fissures. For example, the court in the SEC’s case against Terra issued a decision in which it rejected the reasoning of the Ripple court’s decision discussed above concerning Howey’s efforts of others prong. See SEC v. Terraform Labs Pte. Ltd., No. 23-cv-1346, 2023 U.S. Dist. LEXIS 132046, at *44–46 (S.D.N.Y. July 31, 2023) (rejecting the reasoning of the Ripple decision concerning Howey’s efforts of others prong); supra note 132 (describing the Ripple decision). The Second Circuit should have the opportunity to resolve this intra-circuit split at the appropriate time. The opinions courts have authored to date in crypto asset cases concerning the investment contract question have been detailed and reasoned (even if one disagrees with their reasoning or conclusions).138See supra note 65. Future opinions at that level of care should provide market participants with a clearer understanding of when crypto asset transactions are within the scope of securities law. SEC staff may also offer additional guidance on crypto assets and the definitional question.139As noted, SEC staff has already issued some guidance on the definitional question, see supra note 104 and accompanying text, but some have questioned the clarity and value of that guidance in ameliorating market participants’ legal uncertainty. See, e.g., Carol R. Goforth, Regulation by Enforcement: Problems with the SEC’s Approach to Cryptoasset Regulation, 82 Md. L. Rev. 107, 143–48 (2022).

The pace of such doctrinal development may be slower than market participants prefer, especially crypto asset sponsors and exchanges.140In addition to calling for legislative change, some crypto asset participants have also called on the SEC to engage in rulemaking to clarify when crypto assets are securities. See, e.g., Coinbase, Petition for Rulemaking, supra note 135. Some scholars and market participants further argue that in the absence of rulemaking, the SEC is improperly “regulating by enforcement.” See Goforth, supra note 140, at 143–48. But see Chris Brummer, Yesha Yadav & David Zaring, Regulation by Enforcement, 96 S. Cal. L. Rev. (forthcoming 2024) (concluding that regulators generally have latitude as to whether to make policy through rulemaking, adjudication, or by filing a suit, though documenting some exceptions to that general principle). Further clarity may come in the form of legislation that seeks to articulate with more specificity the circumstances when a given crypto asset will be within the scope of securities law. Some of the introduced or contemplated bills would define a large class of crypto assets as commodities rather than securities.141See Alexander C. Drylewski, David Meister, Daniel Michael, Chad E. Silverman, Daniel Merzel & Jon Concepción, New Senate Crypto Bill Would Limit SEC Regulatory Role in Favor of CFTC, Skadden (July 20, 2023), https://www.skadden.com/insights/publications/2023/07/new-senate-crypto-bill-would-limit-sec-regulatory-role [https://perma.cc/W5ZR-VJWZ]. To the extent a crypto asset is deemed to be a commodity rather than a security, traders sustaining losses from secondary trading crypto asset fraud could seek recovery through a CFTC Rule 180.1 class action rather than a Rule 10b-5 class action.142See supra note 50. If the substantive claim underlying secondary trading crypto asset fraud class actions were to shift to Rule 180.1, the public policy discussion in Part III below would also apply in that context. 

Finally, it is worth observing that certain aspects of the securities laws’ registration and post-offering disclosure requirements are not especially well-suited for the crypto asset context. With respect to the registration process, scholars have observed that because the disclosures required by registration were developed with an eye to offerings of more conventional securities like stocks and bonds, they do not always align well with crypto asset offerings.143According to Brummer:

[T]he base layer disclosure documents for securities law fail to anticipate the particular technological features of decentralized technologies and infrastructures. Instead, they assume and inquire only into governance, technology, and other operational features inherent to industrial economies, and which are often different, or altogether absent in digital and blockchain-based economies. As a result, securities forms—including Form S-1, the document initial issuers of securities file with the SEC to disclose key facts about their business—fail to anticipate decentralized architectures, and are both over- and under-inclusive in terms of the disclosure requirements that one would expect of issuers of blockchain-based securities.

Chris Brummer, Disclosure, Dapps, and DeFi, 5.2 Stan. J. of Blockchain L. & Pol’y 137, 146–47 (2022) (footnotes omitted).
This point about incongruity also applies to the regulatorily mandated post-offering disclosures. For instance, suppose that a crypto asset sponsor conducts a registered offering of the crypto asset. Through section 15(d) of the Securities Exchange Act,14415 U.S.C. § 78o(d). the sponsor becomes subject to the ongoing reporting requirements of section 13(a) of the Exchange Act, such as the requirement to prepare and file an annual report.145See id. (issuer that conducts a registered offering becomes subject to the ongoing reporting requirements of Section 13(a) of the Securities Exchange Act, 15 U.S.C. § 78m); 15 U.S.C. § 78m(a) (ongoing reporting requirements). Suppose that, at some point, the crypto asset undergoes complete operational decentralization such that the crypto asset sponsor ceases to be involved in any aspect of the crypto asset and instead the development, operation, management, and promotion of the crypto asset and any associated applications are undertaken by a decentralized group of other stakeholders.

In this case, should the sponsor, as the crypto asset’s issuer, still be obligated to make the required ongoing disclosures, on the ground that section 13(a) obligates the “issuer” to make those disclosures?146See 15 U.S.C. § 78m(a) (requirements directed at the registered security’s “issuer”). Alternatively, if the ongoing reporting obligations instead were to somehow apply to the decentralized non-issuer group, then how, as a practical matter, could such a diffuse group prepare the necessary periodic and current reports? There is also the question of whether the information called for by the required post-offering disclosures is meaningful and appropriate for the crypto asset context. These questions demonstrate that some regulatory effort should be directed at reformulating the post-offering disclosure requirements so that they are better suited for the crypto asset context.147For a proposal to revise the Securities Act’s disclosure regime so that it is better suited for crypto asset initial offerings, see Chris Brummer, Trevor I. Kiviat & Jai Massari, What Should Be Disclosed in an Initial Coin Offering?, in Cryptoassets: Legal, Regulatory, and Monetary Perspectives 157 (Chris Brummer ed., 2019).

III.  PUBLIC POLICY CONSIDERATIONS PERTINENT TO CRYPTO ASSET-BASED RULE 10B-5 CLASS ACTIONS

In addition to the doctrinal propriety of defrauded crypto asset traders relying on Rule 10b-5 class actions, there is the normative question of whether defrauded traders should be able to rely on Rule 10b-5 class relief as a matter of public policy. That issue arises in part because of the considerable skepticism that some legal scholars have expressed about the use of Rule 10b-5 class actions in stock-based cases as effective compensation and deterrence mechanisms.

The assault on stock-based Rule 10b-5 class actions has primarily been through two longstanding critiques—the circularity and diversification critiques.148See, e.g., James Cameron Spindler, We Have a Consensus on Fraud on the Market—And It’s Wrong, 7 Harv. Bus. L. Rev. 67, 77 (2017) (“As the assault on fraud on the market has progressed, two of the primary weapons have been the circularity and diversification critiques.”). Cox is understood to have first identified the circularity critique in 1997, with Coffee later enshrining the concept in the literature. See James D. Cox, Making Securities Fraud Class Actions Virtuous, 39 Ariz. L. Rev. 497, 509 (1997); John C. Coffee, Jr., Reforming the Securities Class Action: An Essay on Deterrence and Its Implementation, 106 Colum. L. Rev. 1534, 1558 (2006). The diversification critique traces its roots to a 1985 article by Easterbrook and Fischel and a 1992 article by Mahoney. See Spindler, supra, at 77–82 (discussing Frank H. Easterbrook & Daniel R. Fischel, Optimal Damages in Securities Cases, 52 U. Chi. L. Rev. 611 (1985) and Paul G. Mahoney, Precaution Costs and the Law of Fraud in Impersonal Markets, 78 Va. L. Rev. 623 (1992)). For a discussion of some of the objections to Rule 10b-5 stock-based class actions other than the circularity and diversification critiques, see Coffee, supra, at 1538–56. More recently, some scholars have challenged the relevance of those critiques,149See Spindler, supra note 149. while others have articulated theories that provide alternate public policy justifications for stock-based Rule 10b-5 class actions, the leading example being a corporate governance justification.150The corporate law justification was developed by Fox. See Merritt B. Fox, Why Civil Liability for Disclosure Violations When Issuers Do Not Trade?, 2009 Wis. L. Rev. 297 (2009). Despite lingering skepticism among some academics about whether stock-based Rule 10b-5 class actions achieve their public policy objectives, those actions remain a core fixture of securities practice.

If the public policy justification for crypto asset-based Rule 10b-5 class actions is significantly weaker than that for stock-based Rule 10b-5 class actions, then we may want a preemptive curtailment of those litigations through legislative action or doctrinal reorientation before they become as commonplace as stock-based Rule 10b-5 class actions have become. More generally, if the public policy justifications are significantly weaker for crypto asset-based Rule 10b-5 class actions than for stock-based ones, that would justify different legal treatment of the two types of class actions. This Part of the Article evaluates that particular public policy question viewed through the lens of the circularity and diversification critiques and the corporate governance justification.

The public policy determinations below are mixed and preliminary in part, but do not lend support to the notion that the public policy justification for crypto asset-based Rule 10b-5 class actions is significantly weaker than the public policy justification for stock-based Rule 10b-5 class actions. First, the circularity critique—the leading critique in the stock-based Rule 10b-5 context—is significantly attenuated in the crypto asset context because the principal factors supporting the circularity critique in the stock context are substantially absent in the crypto asset context. There are countervailing reasons why the diversification critique may be more or less relevant in the crypto asset context than in the stock context, but no reason to expect that the diversification critique has significantly more force in the crypto asset context than in the stock context. On the other hand, the corporate governance justification loses relevance in the crypto asset context.

Sections A, B, and C below address the circularity critique, the diversification critique, and the corporate governance justification, respectively. Section D provides a few comments concerning the issue of frivolous litigation.

A.  The Circularity Critique

The key critique against Rule 10b-5 stock-based class actions is circularity, which is the idea that when class actions settle, as nearly all do, the settlement is ultimately paid for by the company’s shareholders.151See, e.g., Spindler, supra note 149, at 69 (“The circularity critique holds that shareholder class actions amount to shareholders suing themselves.”) (quotation marks omitted). This serves to undermine both the deterrence and compensatory features of the class action process. Because of its centrality to public policy analysis of securities class actions, it is valuable to work through some of the details of the circularity critique before turning to its applicability in the crypto asset context.152Both the circularity critique and the diversification critique have been subjected to considerable academic inquiry. See id. at 91 (“The circularity and diversification critiques have been remarkably successful. Academic adherents are legion and comprise a veritable who’s who of securities law. . . . It appears most legal academics who propose significant securities class action reform have adopted some form of these arguments.”). Many academic articles have evaluated the circularity critique and, to a lesser extent, the diversification critique. For a partial list, see id. at 91 nn.114–31.

1.  Circularity in the Stock Context

Circularity arises in the stock context for two reasons. The first driver of the circularity critique is that individually named directors and officers usually will not directly pay any of the settlement amount because of D&O insurance and indemnification. A study by Klausner, Hegland, and Goforth, for instance, evaluated a sample of over two hundred and fifty securities class actions that had settled and found that directors and officers did not make any payments in 98% of those cases.153Michael Klausner, Jason Hegland & Matthew Goforth, How Protective Is D&O Insurance in Securities Class Actions? An Update, PLUS J., May 2013, at 1, 3. Directors did not make payments in any of those settled cases and corporate officers made payments in 2% of the evaluated cases. Id. That number is not surprising given that nearly all public companies purchase D&O insurance.154See Sean J. Griffith, Uncovering a Gatekeeper: Why the SEC Should Mandate Disclosure of Details Concerning Directors’ and Officers’ Liability Insurance Policies, 154 U. Pa. L. Rev. 1147, 1168 n.66 (2006). Empirical studies also indicate that directors and officers may not pay any reputational penalty when they are accused of fraud.155See, e.g., Eric Helland, Reputational Penalties and the Merits of Class-Action Securities Litigation, 49 J.L. & Econ. 365 (2006). The lack of director and officer liability thus mitigates the deterrence effect of securities class actions on director and officer conduct.

The second driver of the circularity critique is the relationship between shareholders and the company’s net income. Because individually-named defendants ordinarily do not contribute to stock-based securities class action settlements, settlements instead are paid for by the company, either directly or through the company’s D&O insurance, or some combination of the two.156The study discussed above determined that of the settlements in the sample, the insurer paid the entire settlement amount in 57% of the settlements, the insurer paid for just a part of the settlement in 28% of the cases, and the insurer paid for none of the settlement in the remaining 15% of cases. See Klausner et al., supra note 154, at 1. Accordingly, settlement of a Rule 10b-5 class action against an issuer and its directors and officers usually will be funded by the issuer directly or indirectly through the cost of the D&O insurance that the issuer has purchased. Because shareholders are the company’s residual claimants, these corporate expenditures associated with settlement payments are ultimately borne by shareholders in the form of diminished cash flow.

One group of shareholders bearing the cost of settlement will be the same ones who were injured by the fraud (assuming they did not sell their shares). Because these shareholders will be partially footing their own recovery, full compensation will not be achieved. The other group of the firm’s current shareholders bearing the settlement cost will be those who were not class plaintiffs. These shareholders have no direct responsibility for the fraud but will be paying for the injured shareholders’ recovery, which implicates fairness considerations.

The circularity critique can be more formally illustrated through a simple model that embodies these observations. Consider a stock-based Rule 10b-5 class action in which the subject company has N shares outstanding that were trading at a pre-fraud price of P0 per share. Assume there was a fraudulent material misrepresentation attributed to the issuer and its directors and officers that increased the stock’s price to P1, which eventually returned to the pre-fraud level of P0 once the market became aware of the fraudulent statement. 

Suppose that the class of the company’s shareholders who purchased shares at the inflated price bring a Rule 10b-5 class action against the company and its directors and officers. For simplicity, assume these injured shareholders do not sell their shares. Of the company’s N shares outstanding, suppose that n shares are represented by the litigating class. So, if π is the fraction of the company’s outstanding shares represented by the litigating class, then π = n/N. The case settles and then pays s dollars per share to each of the n shares purchased during the class period, for a total settlement payment of s*n. Given the discussion above regarding corporate obligations for class action settlements, the company will pay a fraction α of the settlement, where α is between 0 and 1, which ultimately will be borne by the firm’s shareholders holding the N shares. In discussions of the circularity critique it is ordinarily assumed, either expressly or implicitly, that the company directly or indirectly pays the entirety of the settlement, which corresponds to the circumstance in which α = 1.

Given this setup, first consider the post-settlement welfare of the shareholders who were injured by the fraud because they paid the inflated price for the company’s stock. For expositional simplicity, consider a shareholder who is a member of the class and who purchased just a single share of the company’s stock. The value of the share that the shareholder maintains is P0, but they purchased the share for P1, which means that the net value of their portfolio is P0 – P1. The shareholder receives a settlement payment of s but because shareholders ultimately bear the company’s settlement expenditure of α(s*n), each of the firm’s shareholders bears a per share settlement expense equal to α(s*n)/N, or α(s*π). Thus, a class plaintiff receives a per-share net settlement amount of s – α(s*π). Collecting terms, the per-share post-settlement welfare of a class plaintiff is:

          P0 – P1 + s(1 – α*π)                                                         (1)

Even in the hypothetical but unrealistic world in which there are no litigation costs and no plaintiffs’ attorney fee awards,157Those fees ordinarily account for nearly one quarter of the settlement amount in securities class actions. See Lynn A. Baker, Michael A. Perino & Charles Silver, Is the Price Right? An Empirical Study of Fee-Setting in Securities Class Actions, 115 Colum. L. Rev. 1371, 1389 tbl.1 (2015). However, the percentages are somewhat smaller for the largest settlements. See Stephen J. Choi, Jessica Erickson & A.C. Pritchard, Working Hard or Making Work? Plaintiffs’ Attorneys Fees in Securities Fraud Class Actions, 17 J. Empirical Legal Stud. 438, 449 tbl.2 (2020) (attorney fees were 18.5% of the settlement among the top decile of settlements in the sample). and even if the settlement were to compensate defrauded shareholders for the full amount of their overcharge, a settlement would not make the injured shareholders whole so long as the corporation pays at least some portion of the settlement. That is evident in the model above. To see this, suppose there are no litigation costs or plaintiffs’ attorney fee awards and the settlement fully pays the overcharge—that is, s = P1 – P0. In this case, the post-settlement welfare of the injured shareholder discussed above who holds one share of the stock is – α*π(P1 – P0),158Using equation (1), the per-share post-settlement welfare of the injured shareholder under consideration is P0 – P1 + (P1 – P0)*(1 – α*π), which equals – α*π(P1 – P0). which is negative whenever the corporation pays at least some portion of the settlement, that is, whenever α is greater than 0.

In other words, while the settlement makes class shareholders whole in the first instance, they ultimately are not fully compensated because they each pay a portion of the settlement amount equal to α(s*π) per share. Each of the firm’s other shareholders also pays a per-share amount equal to α(s*π) to finance the settlement. As this example shows, the circularity critique supports the position of those who argue that stock-based Rule 10b-5 class actions fail to meet compensation and deterrence objectives and implicate fairness concerns.159For a summary of the arguments, see Spindler, supra note 149, at 86–91. Spindler does not agree that circularity poses an issue in stock-based Rule 10b-5 class actions. He uses the informational efficiency of stock prices to develop a model similar to the one above showing that circularity will not arise because the stock’s price fully adjusts to the expected settlement amount. See id. at 93–95.
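A short numerical sketch of equation (1) may help fix ideas. The parameter values below are purely hypothetical and are chosen only to illustrate the algebra.

```python
# Illustrative calculation of per-share post-settlement welfare, equation (1).
# All parameter values are hypothetical.
P0, P1 = 10.0, 12.0   # pre-fraud and fraud-inflated prices
s = P1 - P0           # settlement fully pays the per-share overcharge
alpha = 1.0           # the company (or its insurer) pays the entire settlement
pi = 0.3              # fraction of outstanding shares held by the class

class_welfare = P0 - P1 + s * (1 - alpha * pi)   # equation (1)
other_holder_cost = alpha * s * pi               # per-share cost to non-class holders

print(class_welfare)      # -0.6, i.e., -alpha*pi*(P1 - P0): class members are not made whole
print(other_holder_cost)  #  0.6: non-class shareholders help finance the recovery
```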

2.  Circularity in the Crypto Asset Context

Circularity is a significantly attenuated consideration for Rule 10b-5 crypto asset class actions because the drivers of the critique discussed above are substantially absent in the crypto asset context. To start, individual defendants in crypto asset Rule 10b-5 class actions are much less likely to be able to rely on insurance or indemnification as a shield from personal liability, relative to the stock-based context. First, because of the operational decentralization discussed in Section I.A above, an individual wrongdoer may not be associated with any entity such as a corporate body that provides indemnification rights or insurance coverage. Second, while publicly available data is lacking, D&O coverage appears very limited in the crypto asset context because D&O carriers have avoided the crypto space and because of high premiums and unfavorable terms.160See Noor Zainab Hussain & Carolyn Cohn, Insurers Denying Coverage to FTX-Linked Crypto Firms as Contagion Risk Mounts, Ins. J. (Dec. 19, 2022), https://www.insurancejournal.com/news/international/2022/12/19/699978.htm [https://perma.cc/VME7-JJG3] (“Insurers were already reluctant [prior to the collapse of the crypto exchange FTX] to underwrite asset and directors and officers (D&O) protection policies for crypto companies because of scant market regulation and the volatile prices of Bitcoin and other cryptocurrencies. Now, the collapse of FTX . . . has amplified concerns.”); Josh Liberatore, Crypto Winter Raises Host of D&O Coverage Issues, Law360 (Feb. 10, 2023, 9:38 PM), https://www.law360.com/articles/1575237 [https://perma.cc/FLC9-2XG9] (quoting a D&O lawyer for the observation that “[m]ost D&O underwriters view crypto firms as toxic in today’s environment, so the availability of D&O insurance for those firms is quite limited . . . . Even when available, the insurance is expensive and somewhat limited in scope of coverage”).
So, even if an individual wrongdoer is affiliated with a centralized entity, the individual may have no D&O coverage, or only very limited coverage, relative to an individual defendant in a stock-based Rule 10b-5 action. Furthermore, the apparent rarity of D&O coverage presumably would make indemnification a rarity as well, as a crypto asset entity would not be readily able to purchase Side B coverage to cover its indemnification expenses.161See Tom Baker & Sean J. Griffith, The Missing Monitor in Corporate Governance: The Directors’ & Officers’ Liability Insurer, 95 Geo. L.J. 1795, 1802 (2007) (“[Side B] coverage protects the corporation itself from losses resulting from its indemnification obligations to individual directors and officers . . . . ”).

The absence of crypto asset holders’ cash flow rights further diminishes the relevance of the circularity critique in the crypto asset context. As discussed in Section I.B above, except in very rare circumstances, a crypto asset’s holders will not be the recipients of any profit distributions resulting from their crypto asset holdings. So, if a Rule 10b-5 crypto asset class action settles, then the crypto asset’s holders may not bear any of the settlement’s cost, as they would in the stock context.

For instance, suppose the defendant set in a Rule 10b-5 crypto class action includes an entity involved in developing the crypto asset and the entity’s directors or officers. Suppose that the class action settles for s dollars per asset purchased during the class period. None of the settlement amount will be borne by the crypto asset’s holders (other than any defendant who may be a holder). Even if only some of the settlement is paid by the individual defendants, leaving some of the settlement to be paid by the named entity, that expenditure will not be passed down to the class plaintiffs or any of the crypto asset’s other traders because none have cash flow rights in the named entity.

With respect to the stylized model above, the named entity defendant may pay a fraction α of the settlement but because that amount is not borne by the crypto asset’s traders, the class plaintiffs’ welfare after the settlement is P0 – P1 + s for each unit of the crypto asset purchased during the class period. Putting aside any litigation costs or attorney fee awards, this supports the feasibility of complete compensation if the settlement amount is set equal to the overcharge.162As noted, plaintiffs’ attorney fees can be large in stock-based cases. See supra note 158. However, there is no reason to believe that this issue is significantly heightened in the crypto asset context. Furthermore, to the extent the market for plaintiffs’ lawyers is competitive, those fees should accurately reflect the cost of litigation and thus are a necessary ingredient to the private enforcement of the securities laws. Finally, if the fee awards were significantly higher in crypto asset Rule 10b-5 cases than in stock-based Rule 10b-5 cases, plaintiffs’ attorneys would be expected to substitute from the latter to the former, thus equalizing the fee awards in the two types of cases. One countervailing consideration is that, to the extent the defendant is actively involved in developing or supporting the crypto asset or any associated applications, a settlement payment by the defendant may impede its ability to effectively engage in those facilitating efforts. By decreasing the perceived value of the crypto asset or any associated applications, the settlement may lower the crypto asset’s price, which would adversely affect the crypto asset’s holders, including class plaintiffs.
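Using the same hypothetical numbers, the contrast with the stock context can be sketched in a single line: because crypto asset traders hold no cash flow rights in the settling entity, no α*π term is subtracted from the class's recovery.

```python
# Same hypothetical values as above, but none of the settlement cost is passed
# back to holders because they have no cash flow rights in the named entity.
P0, P1 = 10.0, 12.0
s = P1 - P0                         # settlement fully pays the per-unit overcharge

crypto_class_welfare = P0 - P1 + s  # no alpha*pi offset
print(crypto_class_welfare)         # 0.0: full compensation is feasible (before fees and costs)
```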

In addition to the possibility of full compensation, because crypto asset traders outside of the class are not paying for the settlement of the class plaintiffs, the fairness concerns noted above are ameliorated in the crypto asset context. A related implication of the circularity critique in the stock-based context is that putting litigation costs to the side, litigation is zero-sum, in that shareholders’ aggregate wealth is unchanged after a settlement or judgment.163This requires the assumption that the company in the stock-based context directly or indirectly pays for the entire settlement. In this case, every dollar paid to a class plaintiff comes from the company, and therefore the company’s shareholders, and is thus a mere intra-shareholder transfer that leaves shareholders’ aggregate wealth unaffected. That is not the case in the crypto asset context. Because the cost of a settlement is not borne by the crypto asset’s traders, their aggregate welfare will increase after a settlement, putting aside the point above about a settlement potentially having adverse effects on development of the crypto asset or any associated applications. Finally, deterrence is heightened relative to the stock context because of the significantly greater likelihood that the individual defendants responsible for the fraud will incur monetary liability and thus be better incentivized to avoid that conduct in the first instance.

B.  The Diversification Critique

Diversification is another leading critique lodged in the literature against stock-based Rule 10b-5 class actions. While circularity focuses on compensation and deterrence considerations in a single securities class action, the diversification critique peers with a broader lens. It inquires how a shareholder’s entire portfolio is affected by fraud and concludes that the cost of fraud can be diversified away, thereby nullifying the role of Rule 10b‑5 class actions as a remedial mechanism.164The labeling of this critique as the diversification critique is from Spindler. See Spindler, supra note 149. Sometimes the diversification critique is considered a component of the circularity critique. See, e.g., Jill E. Fisch, Confronting the Circularity Problem in Private Securities Litigation, 2009 Wis. L. Rev. 333, 346 (2009) (“The theory behind the circularity argument is that the market consists primarily of diversified investors for whom the gains and losses from securities fraud net out.”).

The key features of the diversification critique can be seen through a simplified model. Suppose that there are N publicly traded firms and a single investor. There are two time periods, period one and period two. In period one, the investor decides, for each one of the N firms, whether or not to purchase a single share of the firm’s common stock. So, in the first period, the investor can purchase up to N shares—one share of each of the N firms—but may invest in just a subset of the N firms. In the second period, the investor sells all of the shares that they purchased in the first period.

Suppose further that each of the N firms will be the target of fraud, the effect of which will be to artificially and temporarily inflate the firm’s stock price. Assume, for further simplicity, that all firms have the same fundamental, that is, non-fraud, share price and that the fraud will have the same price-inflating effect on each firm’s stock. For any given firm, there are two possibilities of the timing of the fraud. One possibility (which can be referred to as scenario one) is that the fraud occurred immediately before period one and is revealed to the market between period one and period two. The second possibility (which can be referred to as scenario two) is that the fraud occurred immediately after period one and is revealed to the market after period two. Firms are randomly assigned to the two scenarios with equal probability and the firms’ assignments are uncorrelated.

This setup illuminates the two key tenets of the diversification critique. First, the diversification critique postulates that, for any given issuer, every shareholder of the firm ex ante is as likely to be a victim of fraud as a beneficiary. This can be seen in the model above. For any firm in which the investor became a shareholder in period one, the investor’s likelihood of being in scenario one (in which case the investor will have purchased at the fraud-inflated price and sold at the lower, fundamental price) is the same as the likelihood of being in scenario two (in which case the investor will have purchased at the fundamental price and sold at the higher, fraud-inflated price). This means that even without a compensatory scheme in place, the expected cost of fraud to the investor for any given stock in their portfolio is zero: the likelihood that a shareholder will incur the cost of fraud is the same as the likelihood that they benefit, and the costs and gains are equal in magnitude. But note that while the expected cost to the shareholder from fraud directed at any given firm in which the shareholder is invested is zero, fraud still affects the variability of the shareholder’s portfolio, since half the time the trader will be a victim of fraud and the other half of the time, a beneficiary.

The second key tenet of the diversification critique is that investors can diversify away the risk that fraud injects into their portfolio. In the stylized model above, that diversification occurs through the investor taking positions in a greater number of firms. In the context of that model, while fraud will have the same (zero) effect on the expected value of a portfolio comprised of the shares of a single firm and a portfolio comprised of the shares of many firms, fraud will result in the latter portfolio being less risky than the former portfolio. If stock traders are sufficiently diversified, then fraud will not only impose zero expected cost on their portfolios but also will cause traders’ portfolios to be exposed to only limited additional risk.165Spindler traces the historical development of the diversification critique, culminating in its modern form, which is discussed in the text above and embodied by Grundfest’s articulation. See Spindler, supra note 149, at 77–86; Joseph A. Grundfest, Damages and Reliance Under Section 10(b) of the Exchange Act, 69 Bus. Law. 307, 313–14 (2014) (“[B]ecause aftermarket transactors are both purchasers and sellers over time, and because the probability of profiting by selling into an aftermarket fraud is the same as the probability of suffering a loss as a consequence of buying into an aftermarket fraud, the aggregate risk created by aftermarket fraud can be viewed as diversifiable. Indeed, on average and over time, the risk of being harmed by aftermarket securities fraud (at least as measured exclusively by stock prices) averages to zero for investors who purchase and sell with equal frequency.”). Note that in Grundfest’s articulation, investors’ risk mitigation occurs through investors making numerous buy-sell decisions over time, while in the stylized model in the text above, the risk mitigation occurs through investors increasing the number of firms in which they maintain an equity position.
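A small simulation can illustrate both tenets. The sketch below uses hypothetical parameters and randomly assigns each firm in the portfolio to one of the two fraud-timing scenarios; the expected per-position cost of fraud stays near zero regardless of portfolio size, while the variability of that cost shrinks as the number of holdings grows.

```python
# Monte Carlo sketch of the diversification critique (hypothetical parameters).
import numpy as np

rng = np.random.default_rng(1)
inflation = 2.0       # per-share price inflation caused by the fraud
n_trials = 20_000     # number of simulated portfolios per portfolio size

def per_position_fraud_outcome(n_firms):
    """Average per-position gain or loss from fraud across a portfolio of n_firms.

    Scenario one (buy inflated, sell at fundamental): lose `inflation`.
    Scenario two (buy at fundamental, sell inflated): gain `inflation`.
    Each firm is assigned to a scenario independently with equal probability.
    """
    scenarios = rng.integers(0, 2, size=(n_trials, n_firms))  # 0 = victim, 1 = beneficiary
    gains = np.where(scenarios == 0, -inflation, inflation)
    return gains.mean(axis=1)

for n_firms in (1, 10, 100):
    outcomes = per_position_fraud_outcome(n_firms)
    print(n_firms, round(outcomes.mean(), 3), round(outcomes.std(), 3))
# The mean stays near zero; the standard deviation falls roughly as 1/sqrt(n_firms).
```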

As this discussion indicates, the strength of the diversification critique as a basis for concluding that fraud has no ex ante adverse effect on shareholder welfare turns primarily on two things. First, the theory’s strength depends on the extent to which shareholders are diversified. If shareholders are not well-diversified, then even though fraud will not affect the expected value of shareholders’ portfolios, it will increase their portfolios’ riskiness, which will undermine the welfare of risk averse shareholders. Second, the critique’s strength turns on the extent of shareholder risk aversion. If shareholders are strongly risk averse, then the effects of fraud on shareholder welfare through increased portfolio volatility will be more pronounced than if they were less risk averse, all else equal. The reason is that a more risk averse shareholder experiences greater disutility from an increase in portfolio risk than a less risk averse shareholder, all else equal.166Apart from an absence of sufficient diversification and the presence of sufficiently risk-averse traders, there may be other reasons why the diversification critique does not fully support the eradication of legal sanction for fraud. For instance, the critique assumes that shareholders’ portfolios are such that shareholders have an equal likelihood of being the beneficiaries of fraud as victims. However, some trader types may be more likely to be the victims of fraud than beneficiaries. See, e.g., Fisch, supra note 165, at 347 (“Informed traders are more likely to suffer net losses from securities fraud . . . because they trade on information, including fraudulent information.”). See also Spindler, supra note 149, at 102–13 (providing a game theoretic argument against the diversification critique based on precaution costs).

These observations show that an assessment of whether the diversification critique is more or less pronounced in the crypto asset context than in the stock context should focus, at least in the first instance, on comparing the extent of stock traders’ diversification and risk aversion with the extent of crypto asset traders’ diversification and risk aversion.167For simplicity, the discussion in this Section assumes that stock traders are distinct from crypto asset traders. Of course, some traders trade both stock and crypto assets. For those traders, the discussion in this Section can be understood as relating separately to the equity portion of their portfolio and the crypto asset portion of their portfolio. Empirical work is needed to competently assess how crypto asset investors’ extent of diversification and degree of risk aversion compare to those of stock traders.

Though strong conclusions are not possible in the absence of this empirical analysis, the diversification and risk aversion considerations likely break in different directions, and nothing suggests that the diversification critique has significantly greater relevance in the crypto asset context than in the stock context. Turning first to trader diversification, it is likely that stock traders are better diversified than crypto asset traders. Through the widespread availability of index funds, index-based exchange-traded funds (“ETFs”), and managed funds, equity traders can readily and cheaply diversify their stock portfolios. The prominence of those instruments suggests that many equity traders do maintain diversified stock portfolios. That likely is not the case for crypto asset investors: the means to easily diversify crypto asset holdings, such as tokenized index funds that track a broad basket of crypto assets, are not commonplace, and crypto asset investors appear to prefer purchasing and selling individual crypto assets rather than funds.

To the extent crypto asset traders are less diversified than stock traders, this would translate into the diversification critique having less relevance in the crypto asset context than in the stock context. On the other hand, it is reasonable to expect the risk aversion consideration to work in the other direction, because crypto asset traders may be less risk averse than stock traders. As discussed in Section I.C above, crypto asset prices are very volatile as a general matter and more volatile than stock prices as a relative matter. That crypto asset traders are willing to trade in the face of such volatile prices may reflect a greater willingness to bear risk than stock traders have. To the extent that is correct, this would provide a mechanism for the diversification critique to have more, not less, relevance in the crypto asset context than in the stock context.

C.  The Corporate Governance Justification

The circularity and diversification critiques have been the primary arguments asserted against stock-based Rule 10b-5 class actions. One rejoinder to those critiques is a corporate governance justification that posits that stock-based Rule 10b-5 class actions advance public policy through improvements in corporate governance.168See Fox, supra note 151 (developing the corporate governance justification). For an extension of Fox’s argument, see Fisch, supra note 165, at 345–49.

The corporate governance justification focuses on securities law’s disclosure regime. The justification is based on the notion that more accurate disclosures by companies subject to the disclosure regime translate into improvements to legal and nonlegal channels of corporate governance. These improved corporate governance mechanisms, in turn, incentivize managers to be better focused on share value maximization, which results in economic gain. For example, the corporate governance justification posits that more accurate corporate disclosures increase the disciplinary power of a hostile takeover. The underlying reasoning is that more accurate company disclosures enable potential acquirers to more readily identify managerial deviations from share value maximization, and the threat of such a takeover better incentivizes managers to maximize share value in the first instance.169See Fox, supra note 151, at 311–12. The corporate governance justification concludes that securities class actions work alongside public enforcement to improve the accuracy of company disclosures, which serves to facilitate these and other forms of economic gain.170See id. at 318–28. The justification also posits that accurate public company disclosures generate economic gain through an increase in liquidity. Id. at 311–12 (“Disclosure also enhances efficiency by increasing the liquidity of an issuer’s stock through the reduction in the bid/ask spread demanded by the makers of the markets for these shares.”). The corporate governance justification assumes that private enforcement of the securities laws deters misconduct and therefore results in more accurate disclosures. As a deterrence-based theory, it is subject to that aspect of the circularity critique that argues that D&O insurance and indemnification undermine, if not eliminate, Rule 10b-5’s ability to deter corporate directors and officers. See supra Section III.A.1.

The corporate governance justification loses relevance in the crypto asset context. The primary reason is that crypto asset sponsors are not reporting companies and thus are not subject to the securities laws’ ongoing disclosure obligations, at least under current law and practice.171This is not surprising. First, crypto asset sponsors are not reporting companies under section 15(d) of the Securities Exchange Act other than in the rarest of cases because crypto asset offerings are almost never registered. See supra note 11. Second, because crypto asset exchanges presently do not register as national securities exchanges, crypto asset sponsors are not reporting companies through section 12(b) of the Securities Exchange Act. Finally, even if a crypto asset sponsor is an entity with a class of “equity security,” it could stay under the triggering thresholds of section 12(g) of the Securities Exchange Act. Crypto asset sponsors also do not voluntarily furnish the market with information that is substantively similar to the disclosures provided by public companies.172See Dirk A. Zetzsche, Ross P. Buckley, Douglas W. Arner & Linus Föhr, The ICO Gold Rush: It’s a Scam, It’s a Bubble, It’s a Super Challenge for Regulators, 60 Harv. Int’l L.J. 267 (2019) (reviewing over 1,000 white papers associated with crypto asset initial offerings and concluding that most included inadequate disclosures). So, it is not meaningful to ask whether crypto asset-based Rule 10b-5 class actions generate disclosure improvements.

A second reason is that the various channels of corporate governance that the corporate governance justification posits to be improved by stock-based Rule 10b-5 class actions have little or no applicability in the crypto asset context. For example, crypto asset sponsors are not publicly traded companies and so cannot be the subject of a takeover effort. Even if a party were to acquire significant amounts of a crypto asset, that would not allow the acquirer to exercise control over the crypto asset’s sponsor or to replace its management, as may be the case with the acquisition of sufficient voting shares of a publicly traded company.

D.  Price Volatility and Frivolous Litigation

The analysis above, when aggregated, does not provide a basis for concluding that the public policy justification for crypto asset-based Rule 10b-5 class actions is substantially weaker than the public policy justification for stock-based Rule 10b-5 class actions. The circularity critique is significantly less relevant in the crypto asset context than in the stock context, and the diversification critique may be more or less relevant in the crypto asset context than in the stock context, but nothing indicates that it is significantly more relevant. An offsetting consideration is that the corporate governance justification loses its relevance in the crypto asset context.

Absent from the discussion above is the issue of frivolous litigation, which can impose social cost by causing defendants to divert resources away from value-enhancing activity to legal expenses and settlement payments. One question pertinent to the Article’s public policy inquiry is whether unmeritorious Rule 10b-5 class actions are more likely in the crypto asset context than in the stock context.

The prospect of frivolous lawsuits is heightened in the crypto asset context because of the significant price volatility discussed in Section I.C above. A crypto asset’s traders may lose significant amounts simply because of inherent price changes. In the face of a significant volatility-induced price drop, financially impaired crypto asset traders may seek to use Rule 10b-5 to recover their non-fraud losses, understanding that such cases often result in at least some recovery through settlement. Instead of crypto asset investors leading the charge to the courtroom in such circumstances, lawyers may be the first movers.173Some argue that this dynamic became commonplace in stock-based Rule 10b-5 class actions following the Supreme Court’s decision in Basic Inc. v. Levinson, 485 U.S. 224 (1988), in which the Court recognized the fraud-on-the-market presumption, making stock-based Rule 10b-5 class actions ubiquitous. As Pritchard has argued:

The incentives unleashed by Basic spawned a flood of securities fraud suits, often targeting start-up firms with high volatility, regardless of connection to actual fraud. When the stock prices of these firms fell, plaintiffs’ lawyers filed suits, and then combed disclosures for potential misstatements. Settlements followed quickly, however, obviating any need to prove fraud. The upshot was a tax on risk, which raised the cost of capital for start-up firms.

A.C. Pritchard, Halliburton II: A Loser’s History, 10 Duke J. Const. L. & Pub. Pol’y 27, 39 (2015).
In either case, frivolous suits may drain the budgets of crypto asset sponsors and others involved in the development of crypto assets and their applications, diminishing incentives to innovate. That prospect of dampened innovation is amplified given the apparent current rarity of D&O insurance.174See supra Section III.A.2.

This is an important consideration, but the same price volatility that may incentivize non-meritorious suits may also work to disincentivize them. At various points in their Rule 10b-5 class action, crypto asset traders will need to establish aspects of their case through statistical methods. For instance, the plaintiff traders will need to establish loss causation, which will necessitate the use of an event study to show that the crypto asset’s price responded in a statistically significant manner to one or more corrective disclosures.175See, e.g., Jill E. Fisch & Jonah B. Gelbach, Power and Statistical Significance in Securities Fraud Litigation, 11 Harv. Bus. L. Rev. 55, 60 (2021). As has been documented elsewhere, event studies in Rule 10b-5 class actions may not be able to identify statistically significant price effects because of low power.176See, e.g., Jill E. Fisch, Jonah B. Gelbach & Jonathan Klick, The Logic and Limits of Event Studies in Securities Fraud Litigation, 96 Tex. L. Rev. 553 (2018). The issue of low power is heightened when there is high price volatility, as in the crypto asset context.177See, e.g., Fisch & Gelbach, supra note 175, at 76–78. For this reason, whether or not a crypto asset Rule 10b-5 case is meritorious, the issue of low power will make it difficult for crypto asset traders to establish elements of their claim. That inability, combined with an awareness that other aspects of their claim may have poor factual support, may dissuade crypto asset traders from bringing frivolous Rule 10b-5 cases.178As discussed in Section I.C above, studies indicate that crypto asset volatility may decrease with time, so the low power issue might lessen as a crypto asset continues to trade in secondary markets. As this discussion shows, the same relatively high price volatility that could cause more frivolous crypto asset Rule 10b-5 class actions to be litigated than stock-based Rule 10b-5 class actions simultaneously provides a reason why there may be fewer frivolous suits of the former type than of the latter.
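To illustrate the low-power point, the sketch below uses a standard one-day event-study power calculation. The 2% and 8% daily volatilities and the 5% corrective-disclosure effect are hypothetical figures chosen for illustration, not numbers taken from the studies cited above.

```python
from scipy.stats import norm

def event_study_power(true_effect, daily_vol, alpha=0.05):
    """Probability that a one-day event study detects a true abnormal return of
    `true_effect` as statistically significant (two-sided test at level alpha),
    given residual daily return volatility `daily_vol`."""
    z_crit = norm.ppf(1 - alpha / 2)
    z_effect = abs(true_effect) / daily_vol
    return (1 - norm.cdf(z_crit - z_effect)) + norm.cdf(-z_crit - z_effect)

# Hypothetical figures: a 5% corrective-disclosure price drop tested against
# stock-like (2%) versus crypto-like (8%) daily volatility.
for vol in (0.02, 0.08):
    print(f"daily volatility {vol:.0%}: power = {event_study_power(0.05, vol):.2f}")
# The same corrective disclosure is far less likely to register as statistically
# significant when prices are highly volatile -- the low-power problem noted above.
```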

CONCLUSION

Traders who participate in secondary crypto asset trading markets understand that any trading gains are accompanied by the risk of trading losses. Most traders presumably also understand that their losses at times can be significant because of the high volatility of crypto asset prices. But accompanying these market-determined losses are potentially significant trading losses caused by fraud occurring in connection with traders’ secondary transactions. In response to incidents of secondary trading crypto asset fraud, crypto asset traders may seek recovery for their trading losses through Rule 10b-5 class actions. The propriety of crypto asset traders relying on that form of relief implicates a host of doctrinal and public policy questions. This Article sought to analyze two such questions, one doctrinal and one public policy related.

In its doctrinal analysis, the Article evaluated issues pertinent to the threshold definitional question of when an exchange-traded crypto asset will constitute an investment contract and therefore fall within the definitional perimeter of a security. That analysis identified a slight generalization of the horizontal commonality test so that the test is suitable for use in both primary transaction and secondary transaction cases. The analysis also explained why Howey’s efforts-of-others prong should not be understood to require the presence of a centralized third party and why the prong does not concern itself with investors’ expectations concerning the use of their sales proceeds. These findings, though, are legal propositions. Whether a particular exchange-traded crypto asset is an investment contract will depend on the pertinent facts and the totality of the circumstances.

In its public policy analysis, the Article evaluated whether the public policy justification for crypto asset-based Rule 10b-5 class actions is significantly weaker than that for stock-based Rule 10b-5 class actions. It structured its analysis around the primary theories advanced in the literature to assess whether stock-based Rule 10b-5 class actions advance their public policy objectives. The Article’s public policy determinations break in different directions and in some respects should be considered preliminary, but the analysis does not justify limiting the availability of crypto asset-based Rule 10b-5 class actions any more than that of stock-based Rule 10b-5 class actions.

96 S. Cal. L. Rev. 1571

* Professor of Law, UC Davis School of Law. This Article benefited from helpful comments by Jordan Barry and Jill Fisch, as well as participants at the University of Southern California’s Digital Transformation in Business and Law Symposium. Parts of this Article build on and draw from points in a prior work. See Menesh S. Patel, Fraud on the Crypto Market, 36 Harv. J.L. & Tech. 171 (2022). I thank Merritt Fox for his comments on that earlier work, which motivated me to address points in Part III of this Article. I also thank Madeline Goossen, Jessica Langdon, Remy Merritt, and the other journal editors for their helpful suggestions and editing assistance. Maximilian Engel, Katherine Gan, and Ada (Xia) Wu provided excellent research assistance.

Data Valuation and Law

Data has become an increasingly valuable asset. Numerous areas of law—including contracts, corporate law, intellectual property (“IP”), antitrust, tax, privacy, and bankruptcy—require parties and courts to determine the value of assets, including data. Unfortunately, data valuation has been hindered by a lack of clarity over what data is and why it is valuable. This lack of clarity also increases the chances of legal decisionmakers valuing data in inconsistent ways, which would create further confusion, inefficiencies, and opportunities for regulatory arbitrage.

This Article proposes a unified framework for valuing data that will promote consistent valuations across fields of law. It begins by conceptualizing data as building blocks: on its own, data is of little value. But when placed in skillful and creative hands, it can unlock choices for its holders—choices they would not otherwise have—that can generate tremendous profits. Thus, data constitutes what is known as a “real option.” This Article shows how using real options to value data can significantly improve upon existing data valuation practices.

INTRODUCTION

The rise of data analytics has been staggering. In 2021, 1.134 trillion megabytes were created every day, totaling 74 zettabytes for the year.1See Louie Andre, 53 Important Statistics About How Much Data Is Created Every Day, Fins. Online (July 16, 2023), https://financesonline.com/how-much-data-is-created-every-day [https://perma.cc/RKL6-9L8S].
As large as this is, projections for 2022 are over 25% higher.2Approximately 94 zettabytes of new data were projected to be created during 2022. Id. Big data and new information technology are changing the tools, business models, operations, and mindset that firms, nonprofits, and governments use every day, quietly transforming business and society.3See generally Geoffrey G. Parker, Marshall Van Alstyne & Paul Sangeet Choudary, Platform Revolution: How Networked Markets Are Transforming the Economy and How to Make Them Work for You (2016); Marco Iansiti & Karim R. Lakhani, Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World (2020); Ajay Agrawal, Joshua Gans & Avi Goldfarb, Power and Prediction: The Disruptive Economics of Artificial Intelligence (2022).

These changes come with challenges. A variety of legal regimes govern economic activity; in many instances, those legal regimes must determine the value of owning or using particular assets, including data.

For example, one area in which data valuation plays an important role is in contracting. Firms contract with each other daily with regard to the sale of data. This includes first-party data sales, such as when Target sells data that it has collected to Procter & Gamble, as well as third-party data sales, in which data aggregators or brokers sell data that others have collected. If one party breaches the contract, what remedies are available to their counterparty?4Cemre Bedir, Contract Law in the Age of Big Data, 16 Eur. Rev. Cont. L. 347, 362–64 (2020). In corporate law, target boards have fiduciary duties to make sure their shareholders are being appropriately compensated during mergers and acquisitions. This requires having a handle on the value of the target firm’s assets, including its data.5Doron Nissim, Big Data, Accounting Information, and Valuation, 8 J. Fin. & Data Sci. 69, 70 (2022). In tax, the taxation of intangible assets and specifically of data is a growing issue of concern.6Young Ran (Christine) Kim & Darien Shanske, State Digital Services Taxes: A Good and Permissible Idea (Despite What You Might Have Heard), 98 Notre Dame L. Rev. 741, 797–98 (2022).

These questions can potentially be even thornier when specific aspects of data must be valued, rather than full ownership. To take another example, suppose that one firm’s negligence results in another firm’s proprietary data leaking to the public. To award damages, a court must determine how much the damaged firm lost from having the data become public—but how much is that?7D. Daniel Sokol & Tawei Wang, A Review of Empirical Literature in Information Security, 95 S. Cal. L. Rev. 95, 109 (2021). Similarly, in antitrust, when control of data plays an important role in anticompetitive behavior, is it ownership of the data itself that creates the problem, or the use of the data?8See Tilman Kuhn, Kristen O’Shaughnessy, Tobias Pesch, Jaclyn Phillips & D. Daniel Sokol, Big Data and Data-Related Abuses of Market Power, in Research Handbook on Abuse of Dominance and Monopolization 438, 438–55 (Pinar Akman, Or Brook & Kristianos Stylianou eds., 2023) (providing an overview of cases in the United States and European Union). Does sharing the data with competitors make matters better or worse?9Id. The rise of generative artificial intelligence (“AI”), which requires data for its machine learning models, may create additional concerns as to the value of various data usage rights.

Unfortunately, the difficulties of conceptualizing data have hampered law’s attempts to incorporate the data revolution into multiple legal doctrines. This has opened the door to confusion, inconsistency, and inefficiency. Decisionmakers have confused data with algorithms and struggled with how to apply certain doctrines to the legal rights that data owners and data users possess. This increases the risks that regulators in different substantive areas of law, as well as in different jurisdictions, will take inconsistent approaches. This creates inefficiencies as parties subject to multiple regimes work to navigate them. Different legal regimes also create opportunities for regulatory arbitrage, in which regulated parties take advantage of divergent regulatory rules to achieve the regulatory treatment they want while making only minor changes to their economic activities.

To address these concerns, this Article offers a general framework for valuing data based on real options valuation. The financial economics literature pioneered the use of real options to better assess business decision-making under uncertainty.10See generally Avinash K. Dixit & Robert S. Pindyck, Investment Under Uncertainty (1994). This approach has since been extended beyond finance to address other areas of uncertainty.11See, e.g., Joseph A. Grundfest & Peter H. Huang, The Unexpected Value of Litigation: A Real Options Perspective, 58 Stan. L. Rev. 1267, 1282–91 (2006); Andrew Chin, Teaching Patents as Real Options, 95 N.C. L. Rev. 1433, 1434–35 (2017). Real option analysis provides a better path forward than the current patchwork of doctrinal and analytical approaches. A real options approach is conceptually correct and thus has the potential to ameliorate the confusion, inconsistency, and inefficiency of existing approaches. To our knowledge, this is the first article to utilize real options as a method to value data, in law or otherwise.

Along with its potential benefits as a method of data valuation, real options analysis does have its drawbacks. Real options theory is complicated, which creates implementation challenges that must be overcome, or at least managed, to achieve the benefits described above. That said, real options analysis is an improvement over existing approaches. Applying a more unified theory also allows for a more standardized approach that can then be tailored to specific doctrines and areas of law.

This Article proceeds as follows. Part I provides context regarding the big data revolution and the growing importance of data. In doing so, it reviews the extant theoretical and empirical literatures on data valuation. Part II identifies the implications of data valuation for law by providing some case studies across fields. It includes vignettes demonstrating the types of issues that emerge and some current legal approaches. Next, in Part III, the Article explores how real options analysis offers a viable potential solution to the current patchwork of legal approaches. The Article concludes by discussing how agencies and courts would benefit from such an approach, noting limitations on the use of real options, and offering avenues for future research.

I.  THE DATA REVOLUTION AND THE VALUE OF DATA

To understand the importance of data valuation methods to the law, one must understand two other, related points. First, one must have a grounding in why and how data is used in the modern economy. Second, one must consider how those use cases translate into value estimates.

A.  Digital Transformation

To understand the role of data in the modern economy, one must consider three related points: (1) The increase in AI techniques that can generate value from data; (2) The increase in data to which such AI techniques can be applied; and (3) The amount of value that these techniques are creating. Understanding these dynamics allows us to explore specific case studies that apply these insights across a number of areas of law.

1.  Generating Value from Data with AI

As a starting point, companies across the economy have moved to increasingly digitized, AI-enabled business strategies, producing profound effects on value creation and innovation.12Iansiti & Lakhani, supra note 3, at 28–40; Ajay Agrawal, Joshua Gans & Avi Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence 11–13 (2018); Hau L. Lee, Big Data and the Innovation Cycle, 27 Prod. & Operations Mgmt. 1642, 1645–46 (2018); Hal R. Varian, Big Data: New Tricks for Econometrics, 28 J. Econ. Persps. 3, 7–25 (2014) (analyzing the uses of big data in economics). Many companies have become platforms, where the ability to create economies of scale and scope has allowed for a generation of “new opportunities to create, appropriate, and deliver value for firms and [users] . . . .” D. Daniel Sokol, Technology Driven Government Law and Regulation, 26 Va. J.L. & Tech. 1, 2 (2023). We use the term AI broadly here, as a way to encompass algorithms that improve prediction and decision-making.13For applications in law, see for example, Amy L. Stein, Artificial Intelligence and Climate Change, 37 Yale J. on Reg. 890, 895–900 (2020); Ashley Deeks, The Judicial Demand for Explainable Artificial Intelligence, 119 Colum. L. Rev. 1829, 1829–32 (2019); W. Nicholson Price II, Regulating Black-Box Medicine, 116 Mich. L. Rev. 421, 432–37 (2017). There are different approaches to AI, such as neural networks and machine learning, among others.14Xiao Liu, Dokyun Lee & Kannan Srinivasan, Large-Scale Cross-Category Analysis of Consumer Review Content on Sales Conversion Leveraging Deep Learning, 56 J. Mktg. Rsch. 918, 924–25 (2019) (using neural networks in marketing research); Michael L. Rich, Machine Learning, Automated Suspicion Algorithms, and the Fourth Amendment, 164 U. Pa. L. Rev. 871, 871–80 (2016) (discussing machine learning in a legal context).

When thinking about data and AI, it can be helpful to consider a simple, three-tier vertical model of how companies and other actors use data and AI to further their goals.

 

Figure 1. [A three-tier model of data, algorithms, and predictions.]

At the first stage is data. If AI is the product or output, data serves as the input. Data feeds the needs of AI-enabled technologies. Data underlies machine learning and prediction models, and it is data that has fueled digital transformation.15Marshall Fisher & Ananth Raman, Using Data and Big Data in Retailing, 27 Prod. & Operations Mgmt. 1665, 1666–67 (2018); Anindya Ghose & Vilma Todri-Adamopoulos, Toward a Digital Attribution Model: Measuring the Impact of Display Advertising on Online Consumer Behavior, 40 Mgmt. Info. Sys. Q. 1, 2–3 (2016). Without sufficient quantity and quality of data, many current AI techniques simply cannot produce very good results.

Data often is the input to the next stage—powering an algorithm. The algorithm itself is not the end of the production process. Rather, the algorithm simply enables better prediction. It is at the prediction stage that AI produces its outputs—outputs that can generate tremendous value.

For example, when a user types terms into a search engine, that engine might consider data about what sites other users who typed in similar terms ultimately clicked on (among other data) when deciding what results should appear. Diagnostic software might compare a patient’s MRI to millions of MRI images that have already been analyzed by doctors to estimate the likelihood that the patient has breast cancer. Data drives the AI, the AI makes predictions, and those predictions enable better decision-making, which creates economic value.
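A toy sketch can make the three tiers concrete. The data set, the nearest-neighbor rule, and the query features below are all hypothetical; the point is only the flow from data (tier one) through an algorithm (tier two) to a prediction that informs a decision (tier three).

```python
import numpy as np

# Tier 1 -- data: hypothetical past queries (as feature vectors) and the results users clicked.
past_queries = np.array([[1.0, 0.0], [0.9, 0.1], [0.1, 0.9], [0.0, 1.0]])
clicked_result = np.array(["sports", "sports", "recipes", "recipes"])

# Tier 2 -- algorithm: a nearest-neighbor rule built from that data.
def predict(query):
    distances = np.linalg.norm(past_queries - query, axis=1)
    return clicked_result[np.argmin(distances)]

# Tier 3 -- prediction: the output that informs a decision (which result to rank first).
print(predict(np.array([0.95, 0.05])))  # "sports"
print(predict(np.array([0.05, 0.95])))  # "recipes"
```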

2.  Increase in Data

While many facets of AI are themselves not new, the speed of data collection and processing has significantly improved these tools’ impact.16Ajay Agrawal, Joshua Gans & Avi Goldfarb, Prediction, Judgment, and Complexity: A Theory of Decision-Making and Artificial Intelligence, in The Economics of Artificial Intelligence 89, 93 (Ajay Agrawal, Joshua Gans & Avi Goldfarb eds., 2019). Data is vast and the various ways to use it have grown significantly, such that there are distinct data-related strategies that firms may adopt.

The data ecosystem is worth exploring briefly. Data can be bought and sold like many other inputs.17Maryam Farboodi & Laura Veldkamp, Data and Markets 1 (Mass. Inst. of Tech. Sloan, Research Paper No. 6887–22, 2022), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4284192 [https://perma.cc/M4JS-4Y2A]. It can be acquired from public sources. It can be collected from what can be termed data suppliers. For example, first-party companies such as Netflix or Spotify can sell their data and databases to other companies—firms regularly sell large quantities of this type of data through basic business transactions.18Firms also sell “exhaust” data; this is data sold for purposes unrelated to the underlying business transactions but that has a secondary use for other kinds of business. Third-party data brokers, apps and internet service providers (“ISPs”) that can provide locational or other data, and data aggregators also play significant roles in the data ecosystem.19Llewellyn D.W. Thomas & Aija Leiponen, Big Data Commercialization, 44 Inst. Elec. & Electronics Eng’rs: Eng’g Mgmt. Rev. 74, 80 (2016). Data brokers buy and sell data, thereby allowing firms to acquire new data to make better predictions.20See Nico Neumann & Catherine Tucker, Data Deserts and Black Boxes: The Impact of Socio-Economic Status on Consumer Profiling (February 27, 2023) (unpublished presentation) (on file with the Southern California Law Review); Arion Cheong, D. Daniel Sokol & Tawei Wang, Cookie Intermediaries: Does Competition Leads to More Privacy? 2–5 (April 16, 2023) (unpublished manuscript) (on file with Southern California Law Review). This increase in data sources is an important change, as it makes data more widely available. This enables more actors both to put data to use and to experiment and innovate with it.21To the extent that data is accessible from many sources, that weakens arguments that data access is a key barrier to entry.

Indeed, data has become both a make and buy decision.22See Jordan M. Barry & Victor Fleischer, Tax and the Boundary of the Firm 2–7 (Aug. 28, 2023) (unpublished manuscript) (on file with Southern California Law Review). See generally R.H. Coase, The Nature of the Firm, 4 Economica 386 (1937). That is, firms have significant opportunities to generate their own data—such as Target keeping track of what consumers buy at Target—and to acquire third-party data from other actors. This is especially true with respect to end-consumer data.23See Alessandro Bonatti, Munther Dahleh, Thibaut Horel & Amir Nouripour, Selling Information in Competitive Environments 4–5 (Mass. Inst. of Tech. Sloan Sch. of Mgmt., Working Paper No. 6532-21, 2022), https://arxiv.org/pdf/2202.08780 [https://perma.cc/7MWJ-AZNQ]; Anja Lambrecht & Catherine E. Tucker, Can Big Data Protect a Firm From Competition?, Competition Pol’y Int’l Antitrust J. (Jan. 17, 2017), https://www.competitionpolicyinternational.com/can-big-data-protect-a-firm-from-competition [https://perma.cc/JK39-W2CR]; Thomas & Leiponen, supra note 19, at 80.

3.  Amount of Value

What is this power of data? Typically, data is defined across four “V’s”: velocity, veracity, volume and variety.24See A.B.A. Section of Antitrust, Artificial Intelligence & Machine Learning: Emerging Legal and Self-Regulatory Considerations (Part One) 2 (2019), https://www.americanbar.org/content/dam/aba/administrative/antitrust_law/comments/october-2019/clean-antitrust-ai-report-pt1-093019.pdf [https://perma.cc/F9S2-8P5Q].
Combined, these four Vs create data value. Velocity is the speed at which data is collected and used. Volume is the sheer amount of data that is generated, which (at least at present) overwhelms our ability to process it; there is more data than ever before and every day we create 328.77 million terabytes of new data.25See Petroc Taylor, Volume of Data/Information Created, Captured, Copied, and Consumed Worldwide from 2010 to 2020, with Forecasts from 2021 to 2025, Statista (Sept. 8, 2022), https://www.statista.com/statistics/871513/worldwide-data-created [https://perma.cc/LZ5B-CSFM]. Veracity goes to the increasingly important issues of data accuracy and trustworthiness. Finally, variety reflects the diversity of data types that can be collected and used, such as e-mails, PDFs, and videos.

Data may come from many sources. The general rule is that the more data there is, the greater the ability to feed AI and the better the ability to improve prediction,26Iansiti & Lakhani, supra note 3, at 16–27; Andrei Hagiu & Julian Wright, Data-Enabled Learning, Network Effects and Competitive Advantage 3 (May 2021) (unpublished manuscript), https://app.scholarsite.io/julian-wright/articles/data-enabled-learning-network-effects-and-competitive-advantage-3 [https://perma.cc/6J8A-L8MU]. although there are limits to what data alone can do.27See, e.g., Carmelo Cennamo, Building the Value of Next-Generation Platforms: The Paradox of Diminishing Returns, 44 J. Mgmt. 3038, 3039–41 (2018) (identifying diminishing returns to data); Hanna Halaburda, Mikolaj Jan Piskorski & Pinar Yildirim, Competing by Restricting Choice: The Case of Matching Platforms, 64 Mgmt. Sci. 3574, 3574–76 (2017) (identifying network saturation allowing for competition through differentiation in platforms); D. Daniel Sokol & Roisin Comerford, Antitrust and Regulating Big Data, 23 Geo. Mason L. Rev. 1129, 1135–40 (2016) (illustrating that it is not the data but what you do with them that matters as well as other limits to data). Data must be processed, via AI or otherwise, to reap benefits.28Ron Berman & Ayelet Israeli, The Value of Descriptive Analytics: Evidence from Online Retailers, 41 Mktg. Sci. 1074, 1076 (2022) (finding that e-commerce data analytics dashboards increase weekly revenues by between 4% and 10%). When properly processed, big data allows firms to improve their products and services and to develop new ones.29Sokol & Comerford, supra note 27, at 1134.
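The pattern of diminishing returns can be illustrated with a simple simulation of how prediction error shrinks as the amount of data grows. The numbers below are hypothetical; the roughly square-root relationship is the familiar statistical result, not a finding from the sources cited above.

```python
import numpy as np

rng = np.random.default_rng(1)

def average_error(n_observations, n_trials=2_000):
    """Average error in estimating an underlying quantity (a mean of 0.5, with
    unit noise) from n_observations data points."""
    samples = rng.normal(0.5, 1.0, size=(n_trials, n_observations))
    return np.mean(np.abs(samples.mean(axis=1) - 0.5))

for n in (10, 100, 1_000, 10_000):
    print(f"n = {n:>6,}: average estimation error ~ {average_error(n):.4f}")
# Each tenfold increase in data cuts the error only by a factor of roughly
# sqrt(10): more data improves prediction, but with diminishing returns.
```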

The academic and practitioner literature on data valuation is complex. First, there is the literature on data brokers. In some senses, the costs of data are lower now than ever before.30Avi Goldfarb & Catherine Tucker, Digital Economics, 57 J. Econ. Literature 3, 3 (2019). The reduced cost of data allows for the creation of a wide variety of sophisticated algorithms that can produce insights that would elude unassisted humans.31Iansiti & Lakhani, supra note 3, at 62–70. The ability to utilize data to feed AI allows for opportunities to better create, appropriate, and deliver economic value not merely for AI-driven firms but for the different users of digital platforms such as advertisers, complementors, and customers.32Ron Adner, Phanish Puranam & Feng Zhu, What Is Different About Digital Strategy? From Quantitative to Qualitative Change, 4 Strategy Sci. 253, 258 (2019); Michael G. Jacobides, Carmelo Cennamo & Annabelle Gawer, Towards a Theory of Ecosystems, 39 Strategic Mgmt. J. 2255, 2257 (2018); Geoffrey Parker, Marshall Van Alstyne & Xiaoyue Jiang, Platform Ecosystems: How Developers Invert the Firm, 41 Mgmt. Info. Sys. Q. 255, 259 (2017).

This transformation creates significant economic value, but the drivers of that value are not well understood by courts and regulatory bodies. In some cases, regulation might stymie the use of data and chill innovation and investment.33See Jian Jia, Ginger Zhe Jin & Liad Wagman, The Short-Run Effects of the General Data Protection Regulation on Technology Venture Investment, 40 Mktg. Sci. 661, 677 (2021) (finding a decrease in venture capital investment as a result of GDPR); Rebecca Janssen, Reinhold Kesler, Michael E. Kummer & Joel Waldfogel, GDPR and the Lost Generation of Innovative Apps 1 (Nat’l Bureau of Econ. Rsch., Working Paper No. 30028, 2022) (finding a reduction of apps by one third as a result of GDPR). In other cases, the potential portability of certain types of data has motivated increased legislative and regulatory action.34Org. for Econ. Coop. & Dev., Data Portability, Interoperability and Digital Platform Competition 42 (2021). In other situations, courts have held that owners of certain types of data have certain rights, such as the right to exclude others from such data. The exact value—either of the underlying data itself or of the rights to exclude others—may not always be clear.35Francesco Decarolis & Gabriele Rovigatti, From Mad Men to Maths Men: Concentration and Buyer Power in Online Advertising, 111 Am. Econ. Rev. 3299, 3299–303 (2021) (discussing ad auctions). There are yet other areas in which data-related transactions occur on a regular basis, but which have not produced judicial decisions to date.36Id.

It is these sorts of complexities as to law and data to which we next turn.

B.  Disagreements on How to Think About Data Creating Value

Valuing data presents conceptual challenges because data is unlike other assets, including other intangible assets. The first problem is that even though data is a building block for constructing a final product, it is not like traditional tangible assets such as the bricks and steel used to build a factory. Data can be collected and mixed in a number of different, complex ways. Further, unlike bricks, data is non-rivalrous; more than one firm can use the same data.37Charles I. Jones & Christopher Tonetti, Nonrivalry and the Economics of Data, 110 Am. Econ. Rev. 2819, 2834 (2020). For instance, someone’s driving history can be used at the same time by multiple firms, in the same or different industries (for example, advertisers, insurance companies, credit card companies). As Jones and Tonetti explain:

An analogy may be helpful. Because capital is rival, each firm must have its own building, each worker needs her own desk and computer, and each warehouse needs its own collection of forklifts. But if capital were nonrival, it would be as if every auto worker in the economy could use the entire industry’s stock of capital at the same time. Clearly this would produce tremendous economic gains. This is what is possible with data.38Id. at 2820.

Thus, non-rivalry means that valuation may be harder under a number of the traditional measurement approaches.

Further complicating matters, data is (mostly) non-excludable.39But see Autorité de la concurrence, Décision n° 14-MC-02 du 9 septembre 2014 relative à une demande de mesures conservatoires présentée par la société Direct Energie dans les secteurs du gaz et de l’électricité (2014) (identifying unique data because of regulation as to customer data and contracts). For example, if someone collects public records about home purchases into a comprehensive database, that does not prevent others from collecting that same information in the same way. This stands in stark contrast to some other intangible assets, including traditional forms of IP such as patents and copyrights, which create value by conferring exclusive rights on their holders.40John P. Conley & Christopher S. Yoo, Nonrivalry and Price Discrimination in Copyright Economics, 157 U. Pa. L. Rev. 1801, 1818–19 (2009).

Both of these indicia suggest that the underlying value of the data, rather than that of the algorithm, may be small. When the input (data) is easily available to all, it is the actor’s ability to make use of the input—that is, the algorithm—that creates the value, not the input itself. For example, a classic crème brûlée recipe has only four ingredients—cream, sugar, egg yolks, and vanilla. All of these items are widely available. The ability to charge a premium for the final product is a function of the baking skill of the pastry chef.

Beyond non-rivalry and non-excludability, some regulation, such as the European Digital Markets Act41Proposal for a Regulation of the European Parliament and of the Council on Contestable and Fair Markets in the Digital Sector (Digital Markets Act), COM (2020) 842 (Dec. 15, 2020) [hereinafter Proposal for a Regulation]. requires fair, reasonable, and non-discriminatory (“FRAND”) licensing. Even in IP and antitrust, FRAND terms are not always clearly understood.42Herbert Hovenkamp, FRAND and Antitrust, 105 Cornell L. Rev. 1683, 1684 (2020). It stands to reason that in data, with fewer cases to provide guidance across different areas of law, the nature of FRAND obligations is even less clear. Further, certain types of data have sharing requirements in practice that may change the valuation of data, such as requirements for data portability.

Data is also unlike some other intangible assets because of the speed at which data can become obsolete.43Ehsan Valavi, Joel Hestness, Marco Iansiti, Newsha Ardalani, Feng Zhu & Karim R. Lakhani, Time Dependency, Data Flow, and Competitive Advantage 10 (Harv. Bus. Sch., Working Paper No. 21-099, 2021) (“High perishability undermines the importance of data volume or historical data in creating a competitive advantage.”). Much data gets stale over time.44Ehsan Valavi, Joel Hestness, Newsha Ardalani & Marco Iansiti, Time and the Value of Data 1 (Harv. Bus. Sch., Working Paper No. 21-016, 2020). This suggests that much data is a diminishing asset, a problem that IP such as patents or copyrights does not face nearly as quickly because those rights last for longer periods.

II.  THE IMPLICATIONS OF DATA VALUATION FOR LAW

There are many areas of law for which valuation of various assets is important. Data is an increasingly valuable asset. Unfortunately, there is currently relatively little law on how to value data. Courts and regulators have generally avoided the question whenever possible, perhaps out of concern for the difficulty of the problem or uncertainty about how to proceed, and such cases often get decided on other grounds. This raises the chances that different legal areas will use different valuation methods. Such inconsistency creates dilemmas as to how to allocate legal rights and responsibilities. Perhaps the clearest way of understanding this tension across areas of law is to consider the purpose of damages. Damages exist to compensate a victim for violations of law and/or to deter the violator from doing so again.45Gary S. Becker, Crime and Punishment: An Economic Approach, 76 J. Pol. Econ. 169, 172–73 (1968). There are other potential justifications for damages, such as retributivism, but these are the two justifications raised most frequently in the civil context. Methods across areas of law might include: (1) a cost-based approach based on the replacement cost; (2) a market-based approach based on similar acquisitions of data (or companies with data); and (3) an income-based approach, to the extent that the data is producing income via sales or even licensing. To this, we add the importance of a fourth possibility, an options-based approach. Often, outcomes seem to be highly contextual rather than based on valuation methodology.46Feng Chen, Kenton K. Yee & Yong Keun Yoo, Robustness of Judicial Decisions to Valuation-Method Innovation: An Exploratory Empirical Study, 37 J. Bus. Fin. & Acct. 1094, 1097 (2010). A lack of consistency is significant because of data’s growing importance as a part of economic activity.
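As one illustration, the income-based approach listed above can be sketched as a standard discounted cash flow calculation. The cash flows, the discount rate, and the declining income pattern below are hypothetical and are meant only to show the mechanics, not to suggest how any particular data set should be valued.

```python
def income_based_value(expected_cash_flows, discount_rate):
    """Income-based (discounted cash flow) value of a data asset: the present
    value of the income the data is expected to generate."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(expected_cash_flows, start=1))

# Hypothetical licensing income from a data set over five years, discounted at 10%.
# The declining cash flows reflect the tendency of data to grow stale over time.
cash_flows = [120_000, 110_000, 95_000, 80_000, 60_000]
print(f"income-based value: ${income_based_value(cash_flows, 0.10):,.0f}")
```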

Which approach ultimately to take varies across areas of law such as IP, antitrust, mergers and acquisitions (M&A), bankruptcy, and torts. One important driver is what information courts and parties can easily measure. When contracts (and comparable transactions) are not easy to find, private negotiations between contracting parties in the shadow of the law are another important driver. These questions become more salient as we try to understand how issues involving big data reverberate across a number of areas of law and how they bear on the value of data overall. The biggest question is how much value we think is in big data.47We assume that data creates value. See Maryam Farboodi, Roxana Mihet, Thomas Philippon & Laura Veldkamp, Big Data and Firm Dynamics, 109 Am. Econ. Assoc. Papers & Proc. 38, 42 (2019). We might also imagine that information is simply a byproduct of economic activity. See Pablo D. Fajgelbaum, Edouard Schaal & Mathieu Taschereau-Dumouchel, Uncertainty Traps, 132 Q. J. Econ. 1641, 1642 (2017).

A.  Valuation Is Important to Many Areas of Law

Below we offer some examples of how data valuation plays a role across various areas of law. We highlight these examples as a way to understand some of the complexity that requires a more general rethinking of how data should be valued. Understanding these complexities helps clarify the value of data as well as some of the struggles that different areas of law are currently experiencing as they seek to value data.

1.  Antitrust

Antitrust has tried to address the questions of competition and the exercise of market power in two contexts—mergers and conduct cases. These produce two types of antitrust cases—those where data is an input and those in which data is a product. However, there is little caselaw in either area. Consequently, the problem in both sets of circumstances is that we tend not to see litigated cases that reach the issue of the data’s valuation.

Antitrust primarily addresses behavior in one of two ways. The first is ex ante enforcement through merger control. Essentially, regulators can block mergers that are expected to produce antitrust problems. On the mergers side, most cases do not go to court, which means that litigated cases may not be representative. Even in those cases for which there is a judicial opinion, not all issues may get addressed. Scholars have expressed general frustration with what gets decided under the shadow of merger law.48D. Daniel Sokol & James A. Fishkin, Antitrust Merger Efficiencies in the Shadow of the Law, 64 Vand. L. Rev. En Banc 45, 45–46 (2011). Thus, the basis for decisions on many issues, including data valuation, is limited or incomplete. As Professors Katz and Shelanski lament, “The overall picture of current merger enforcement practice is, therefore, murky.”49Michael L. Katz & Howard A. Shelanski, Merger Analysis and the Treatment of Uncertainty: Should We Expect Better?, 74 Antitrust L.J. 537, 547 (2007).

Cases provide some guidance on how antitrust courts and agencies think about data, which gives some insight into how to think about data’s value. Yet much uncertainty remains. As of this writing, no mergers have been blocked on data theory grounds in the United States. Nor have there been any decided cases explaining a valuation method for such transactions that weighs the data itself rather than its use to a specific platform.

In the case of data, let us begin with mergers and the possibility that data is itself the market. One such deal that included data as the market is the 2014 CoreLogic-DataQuick merger.50See Decision & Order at 5–8, In re CoreLogic, Inc., Docket No. C-4458 (F.T.C. May 21, 2014). The Federal Trade Commission cleared that transaction subject to a database divestiture but did not explain the valuation technique employed. Alas, this has been typical with regard to antitrust analysis of mergers that include data as a market. Similarly, mergers that include valuable data as an input (for example, Microsoft/LinkedIn and Apple/Shazam) generally have not been discussed in valuation terms. At best, there are transactions, such as Nielsen/Arbitron, that have received some sort of conditional approval, but without an explicit discussion of data valuation.51See Decision & Order at 5–7, In re Nielsen Holdings N.V., Docket No. C-4439 (F.T.C. Feb. 28, 2014).

Antitrust, through public and private enforcement, polices against anticompetitive conduct by one or more firms that harms competition. Conduct cases in antitrust involving data issues have not resolved the data valuation question, either. Complicating antitrust further is that duties to deal with competitors are limited, which means that data sharing cases do not reach the data valuation stage. Rather, these cases are decided based on the premise that data is not required to be shared in the first place. Yet such cases are still worth examining, because their discussions help to inform the value of data use and ownership.

For example, Section 2 of the Sherman Act generally imposes no requirements to deal with one’s competitors.52Sherman Act, 15 U.S.C. § 2 (1982). In Aspen Skiing Co. v. Aspen Highlands Skiing Corp., the Supreme Court held that there are some limited circumstances under which Section 2 requires monopolistic firms to deal with their rivals.53Aspen Skiing Co. v. Aspen Highlands Skiing Corp., 472 U.S. 585, 585 (1985). Courts have since further narrowed Aspen Skiing’s holding. Most recently, the D.C. Circuit affirmed the dismissal of a monopolization case that forty-six states brought against Meta, based on a narrow reading of Aspen Skiing.54New York v. Meta Platforms, Inc., 66 F.4th 288, 305 (D.C. Cir. 2023). Guam and the District of Columbia were also plaintiffs in the litigation.

Cases brought under other provisions of the Sherman Act have also implicated the value of data. However, much like the Section 2 monopolization cases, courts examining Section 1 of the Sherman Act have offered little guidance on how to value data. For example, in Authenticom, Inc. v. CDK Global, LLC, Authenticom brought a claim against CDK for closing its system for data and thereby barring data scrapers from access. The Seventh Circuit ruled in favor of CDK on the basis that forced data sharing was inconsistent with precedent.55Authenticom, Inc. v. CDK Global, LLC, 874 F.3d 1019, 1025–27 (7th Cir. 2017). Because of this ruling, which dismissed the case on essential facilities grounds, the data valuation issue was never addressed. Of course, that does not mean that the data does not have value, merely that the court was able to dispose of the case without determining what the data’s value was.

Similar to antitrust enforcement, competition regulation increasingly plays an important role in big data valuation. This comes up specifically in the case of the Digital Markets Act (“DMA”), the European approach to ex-ante regulation of data.56Proposal for a Regulation, supra note 41, at 7. See Nicolas Petit, The Proposed Digital Markets Act (DMA): A Legal and Policy Review, 12 J. Eur. Competition L. & Prac. 529, 529–32 (2021) (providing an overview of the Digital Markets Act). Regarding “gatekeeper” firms, the DMA states:

The gatekeeper shall provide to any third-party undertaking providing online search engines, at its request, with access on fair, reasonable and non-discriminatory terms to ranking, query, click and view data in relation to free and paid search generated by end users on its online search engines. Any such query, click and view data that constitutes personal data shall be anonymised.57Digital Markets Act, 2022 O.J. (L 265) art. 6 ¶ 11.

Of course, data from a gatekeeper will not generate profits on its own; gatekeeper data must still be combined with some effort by recipients. But this reality makes it harder to assess the incremental profits the recipient earns as a result of having access to the data.58Incremental revenue, which one might hope to observe, will overstate the benefits; one must also consider incremental costs. 

2.  Business Law

Business law increasingly confronts data valuation. Unfortunately, it does so in ways that do not always show the precision that we believe is necessary to arrive at an accurate value for data assets. For example, data valuation questions arise within the context of both mergers and acquisitions (“M&A”) and bankruptcy. A number of factors arise in each context that make data valuation more difficult. Within the merger context, the purpose of valuation is to best help the acquiring and target boards to fulfill their fiduciary duties to ensure that the price paid for the acquisition is an appropriate one.

Overall, corporate law has grappled with how to account for intangibles. Many assets, including branding and intangibles such as IP, are lumped together under the heading of “goodwill.” However, the goodwill from reputation and branding is different from goodwill attributable to a regenerative asset such as data. Further, how data is stored and how easily it can be processed and integrated make such a valuation more challenging.59Chengxin Cao, Gautum Ray, Mani Subramani & Alok Gupta, Enterprise Systems and M&A Outcomes for Acquirers and Targets, 46 Mgmt. Info Sys. Q. 1295, 1299–300 (2022) (identifying similar issues in the context of integration of business enterprise software in M&A).

Different data sets may have different levels of privacy requirements, such as data that is protected under the Health Insurance Portability and Accountability Act (“HIPAA”) versus commercial health data, which has less stringent requirements. Identifying what sort of data companies may keep, for how long, how stale such data gets, and the potential liabilities of such data is complex.60Sometimes firms might unknowingly buy a data lemon, with liabilities that attach because of a data breach, such as Marriott’s acquisition of the Starwood hotel chain. However, this is a somewhat different question than valuing the data set itself. Chirantan Chatterjee & D. Daniel Sokol, Don’t Acquire a Company Until You Evaluate Its Data Security, Harv. Bus. Rev. (April 16, 2019), https://hbr.org/2019/04/dont-acquire-a-company-until-you-evaluate-its-data-security [https://perma.cc/XH4E-BK6M].
Yet, there are very few cases that offer direct guidance on how to value data in the corporate and M&A setting. Thus, data valuation ends up as a financial black box with potentially large implications if and when such cases go to litigation. This sort of uncertainty creates potential risk for deals, particularly those deals for which the underlying data may be a significant asset.61Michel Benaroch, Yossi Lichtenstein & Karl Robinson, Real Options in Information Technology Risk Management: An Empirical Validation of Risk-Option Relationships, 30 Mgmt. Info. Sys. Q. 827, 828 (2006) (suggesting a risk management-based approach to address the uncertainty associated with data breaches).

Finally, unresolved issues include requirements of how to store data62Woodrow Hartzog & Neil Richards, Privacy’s Constitutional Moment and the Limits of Data Protection, 61 B.C. L. Rev. 1687, 1706 (2020). as well as how to destroy data.63Some forms of data disposal have specific regulation. See, e.g., Disposing of Consumer Report Information? Rule Tells How, U.S. Fed. Trade Comm’n (June 2005), https://www.ftc.gov/business-guidance/resources/disposing-consumer-report-information-rule-tells-how [https://perma.cc/RWW9-2EXJ]. The lack of uniform federal privacy legislation makes such analysis more difficult. Federal agencies, especially the FTC, enforce privacy protections,64Ginger Zhe Jin & Andrew Stivers, Protecting Consumers in Privacy and Data Security: A Perspective of Information Economics 1 n.2 (May 22, 2017) (unpublished manuscript), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3006172 [https://perma.cc/N3E3-4NGV]. but private actions also play a role.65See generally Daniel J. Solove & Woodrow Hartzog, Breached! Why Data Security Law Fails and How to Improve It (2022) (discussing the shortcomings of data privacy and privacy laws). Moreover, states can impose additional rules on top of the federal ones. For example, California took inspiration from the General Data Protection Regulation (“GDPR”) and adopted the California Consumer Privacy Act (“CCPA”) and the California Privacy Rights Act (“CPRA”).66California Consumer Privacy Act of 2018, Cal. Civ. Code §§ 1798.100–199.100 (2018); California Privacy Rights Act, Cal. Civ. Code §§ 1798.100–199.100 (2018).

This issue of data valuation similarly plays itself out in the bankruptcy setting. In some settings, the data itself, such as customers’ spending behavior,67Perhaps this is a more sophisticated version of a customer list, which gets trade secret protection under the Defend Trade Secrets Act. may be the asset. Take the example of the bankruptcy proceeding for Caesars Entertainment Operating Corp.’s casinos.68James E. Short & Steve Todd, What’s Your Data Worth?, 58 Mass. Inst. Tech. Sloan Mgmt. Rev. 17, 17 (2017). Creditors viewed the company’s data (customer-specific data on spending habits) as one of the company’s most important assets. Yet, as is often the case in bankruptcy proceedings, this issue was resolved through negotiations in the shadow of the law, leaving behind no case law to help shape future data valuation inquiries. The bankruptcy court examiner noted that Caesars properties that were sold off were worse off because they could not leverage the data of the rewards program, while at the same time recognizing that it would be difficult to sell the rewards program to other buyers.69Id. Thus, the court never ultimately decided how to value the data in light of these complexities. This is common in bankruptcy, where few decisions come in the form of a bankruptcy court ruling.70Douglas G. Baird & Robert K. Rasmussen, The End of Bankruptcy, 55 Stan. L. Rev. 751, 786–88 (2002).

3.  Synthesis

These case studies lead to a number of conclusions. First, courts do not always reach valuation questions. This may be because cases are decided on other grounds for legitimate reasons or because judges feel uncomfortable undertaking the actual valuation and rule on different grounds to avoid the exercise. Second, there is uncertainty about valuation methodologies across areas of law, as well as the potential for such issues to emerge simultaneously in multiple contexts (for example, M&A and antitrust, M&A and bankruptcy, antitrust and data privacy) that may employ different methodologies. Accordingly, we believe that a more consistent approach may better facilitate business certainty with regard to valuation models.

III.  REAL OPTIONS AS A SOLUTION

Real options analysis provides a framework that can be used to value data across different contexts, including different areas of law.  We provide a basic introduction to real options before discussing the advantages and disadvantages of using them to value data. We then discuss how this approach might be employed in the real world.

A.  Real Options

An option is the right, but not the obligation, to do something. For example, if Maria has the right to paint her house green, to travel to Paris, or to order pizza for lunch, those are all options.

In finance, the most well-known options give their holders the right to buy or sell a specific quantity of a particular asset at a specified time for a specified price. These options are known as financial options.71See Investment Products: Options, Fin. Indus. Regul. Auth., https://www.finra.org/investors/investing/investment-products/options [https://perma.cc/J6VN-7GPR] (last visited Aug. 28, 2023).
For instance, Jacinta might have the right to buy 1,000 shares of Apple stock in three months’ time at a price of $150 per share. That right would be quite valuable if, three months from now, Apple stock is trading at $200 per share: Jacinta could buy 1,000 Apple shares for $150,000,721,000 shares * $150 purchase price per share = $150,000. then immediately sell them to other investors for $200,000,731,000 shares * $200 sale price per share = $200,000. netting her $50,000 of profit.74$200,000 revenue from sale of Apple shares – $150,000 paid for Apple shares = $50,000 profit.
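To make the arithmetic concrete, the following is a minimal sketch in Python; the function name and the second, out-of-the-money scenario are ours, added only for illustration. It computes the payoff on a call option like Jacinta’s, on the simplifying assumption that she exercises only when doing so is profitable.

```python
# Minimal sketch: payoff of a call option at expiration, using the figures
# from the hypothetical above. The out-of-the-money case is added for contrast.

def call_payoff(shares: int, strike: float, market_price: float) -> float:
    """Profit from exercising a call option and immediately reselling the shares.

    A rational holder exercises only when the market price exceeds the strike,
    so the payoff is never negative.
    """
    return shares * max(market_price - strike, 0.0)

if __name__ == "__main__":
    # Jacinta's option: 1,000 Apple shares at a $150 strike.
    print(call_payoff(1_000, 150, 200))  # 50000.0 -- the $50,000 profit in the text
    print(call_payoff(1_000, 150, 120))  # 0.0 -- she simply declines to exercise
```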

Real options, like financial options, reflect the value of being able to react to changing conditions. However, rather than representing merely the right to buy or sell, they can encompass one’s ability to change one’s behavior in all manner of ways.75Real options are also called strategic options. Ivo Welch, Corporate Finance 363 (3d ed. 2014). This ability to change course can be extremely valuable. A pair of simple, stylized examples helps illustrate this point.

Example 1. Suppose that you are an executive at a company, and you are considering whether the company should launch a new product. It is unclear how consumers will react to the product; they may love it (iPods) or they may not (Zunes). Suppose that there is a 50% chance that the product will be a success, in which case it will generate $10 million of profits per year for the next ten years.76For conceptual clarity, and to avoid complicating the example with issues related to time value of money and discount rates, we assume that all of the payment values discussed in this example are present values—that is, the profit you will earn in year one (or two, or three, or seven, etc.) is worth $10 million to you today. On the other hand, there is a 50% chance that the product will be a commercial failure, in which case it will cost the company $20 million per year for the next ten years.

Under the facts of Example 1, the company should not launch the product.77For simplicity, this analysis assumes that you are risk-neutral. If you were risk-averse, the case against the project would be even stronger. Half of the time, the product will produce $100 million of profit;78$10 million in annual profits * 10 years = $100 million in total profits. the other half of the time it will produce losses of $200 million.79$20 million in annual losses * 10 years = $200 million in total losses. On average, then, launching the new product will cost the company $50 million.8050% * $100 million + 50% * -$200 million = $50 million + -$100 million = -$50 million. Equivalently, the net present value (NPV) of this project is -$50 million.

Example 2. The facts are the same as in Example 1, except that now the company has the ability to stop making the new product after its first year on the market.

Under the facts of Example 2, the company should absolutely launch the product. When the product is a success, it will keep the product on the market. Everything will remain the same in that circumstance, and the company will earn $100 million of profit. But when the product is a commercial failure, the company can now cut its losses after one year. By doing so, the company will reduce its total losses when the product fails from $200 million to only $20 million.81The difference is between 1 year of $20 million annual losses and 10 such years. On average, the new product will now generate $40 million of profit.8250% * $100 million + 50% * -$20 million = $50 million + -$10 million = $40 million. Equivalently, the NPV of this project is $40 million.

Taken together, Examples 1 and 2 show how valuable the ability to change course can be. Simply having the ability to give up on the product when it is not profitable transforms a project that loses $50 million into one that earns $40 million—a $90 million swing.83$40 million – -$50 million = $90 million. Since the only difference between these two Examples was the real option to give up on the product after a year, that option is worth $90 million.
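The expected-value arithmetic behind Examples 1 and 2 can be reproduced in a few lines. The sketch below (in Python, using the dollar figures and simplifying assumptions stated in the Examples; the variable names are ours) computes the launch’s expected value with and without the ability to abandon after one year; the difference is the value of the real option.

```python
# Expected value of the product launch in Examples 1 and 2 (dollar amounts in
# millions, treated as present values, per the assumptions in the text).

P_SUCCESS = 0.5
PROFIT_PER_YEAR = 10      # if the product succeeds
LOSS_PER_YEAR = -20       # if the product fails
YEARS = 10

# Example 1: no ability to abandon -- the company is locked in for ten years.
ev_no_option = (P_SUCCESS * (PROFIT_PER_YEAR * YEARS)
                + (1 - P_SUCCESS) * (LOSS_PER_YEAR * YEARS))

# Example 2: the company can abandon after one year, capping losses at one bad year.
ev_with_option = (P_SUCCESS * (PROFIT_PER_YEAR * YEARS)
                  + (1 - P_SUCCESS) * (LOSS_PER_YEAR * 1))

print(ev_no_option)                    # -50.0 -> do not launch
print(ev_with_option)                  # 40.0  -> launch
print(ev_with_option - ev_no_option)   # 90.0  -> value of the abandonment option
```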

Real options come in a variety of common forms. Companies can expand or contract their businesses, such as by opening new locations or closing existing facilities. They can accelerate or delay projects, such as by hiring more workers to build a factory or pausing construction. They can switch production processes, trade off between workers and automated processes, or shift production between in-house divisions and outside contractors. Taken together, real options encompass a wide range of actions spread across an expansive set of possible circumstances.

B.  Real Options as a Model for Data Valuation

As a framework for valuing data, real option analysis has many virtues. First, the value of data is that it enables a person to take new actions that were not available previously.84This feature is not unique to data. For example, the value of lumber comes from what you can build with it, or what someone will give you in exchange for it—which depends on what they can build with it or what they can sell it for, and so on. Real option analysis is how finance values the ability to take new courses of action. Thus, as a conceptual matter, real option analysis is a natural fit for valuing data. Further, real option analysis is a flexible and expansive tool that can be used to model an extraordinarily wide range of scenarios and circumstances. This makes it capable of handling the range of new possible outcomes that data, paired with modern statistical analysis, can produce.

Moreover, as noted previously, current approaches to data valuation offer little guidance. This increases the potential for confusion, inconsistency, and regulatory arbitrage. In some instances, they assign data no value at all.85Interestingly, this parallels the most common mistake that managers make with respect to real options. Welch, supra note 75, at 368. In some instances, holding data can have negative expected value, even accounting for the real options it creates. This could happen if the uses for the data generate little profit (for example, if legislation narrowly circumscribes their permitted uses), but the firm would suffer large costs if the data leaks, and the chance of a leak remains significant even after the firm takes precautions. Applying real options analysis to data valuation would help ameliorate all of these problems. Real options analysis gives a clear theoretical framework, providing guidance and structure for those trying to determine data’s value. This would help align and unify the disparate valuation approaches that have been employed to date. Improved alignment would also reduce the opportunities for regulatory arbitrage that can result when different regulatory regimes adopt inconsistent valuation methodologies.86See Victor Fleischer, Regulatory Arbitrage, 89 Tex. L. Rev. 227, 230 (2010) (describing regulatory regime arbitrage); cf. Jordan Barry, Response, On Regulatory Arbitrage, 89 Tex. L. Rev. See Also 69, 73–78 (2010) (arguing that regulatory regime arbitrage is a subset of economic substance arbitrage, and that true regulatory arbitrage is only possible in that context when at least one of the regulatory regimes in question is using a regulatory rule that does not track the relevant underlying economic substance).

While real option valuation offers a number of benefits, it also entails a significant drawback: correctly valuing real options is quite difficult. To do so precisely, one must anticipate, and then think through, all of the possible future states of the world, their respective likelihoods of occurring, how one would respond to them all, and how much one would ultimately reap as a result. From there, one can work backwards from these endpoints to determine the right course of action at each decision point and the scenario’s expected value overall. This is a tall order—especially when valuing data, an asset whose value depends in part on future developments in statistical analysis.

To put a somewhat finer point on it, consider financial options once more. Valuing financial options is a difficult mathematical problem. Fischer Black, Myron Scholes, and Robert Merton’s options pricing model was a watershed advance for the field, ultimately garnering a Nobel Prize in 1997.87Fischer Black & Myron Scholes, The Pricing of Options and Corporate Liabilities, 81 J. Pol. Econ. 637, 640–45 (1973); Robert C. Merton, Theory of Rational Option Pricing, 4 Bell J. Econ. & Mgmt. Sci. 141, 162–71 (1973); Press Release, The Nobel Prize, Royal Swedish Academy of Sciences, The Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel 1997 (Oct. 14, 1997), https://www.nobelprize.org/prizes/economic-sciences/1997/press-release [https://perma.cc/AP7W-9Z4H]. Even with the solution in hand, the mathematics remain challenging. As important as options are to modern finance, many undergraduate finance courses do not cover the application of their formula, let alone its derivation.88See, e.g., A. Craig MacKinlay, The Wharton School, U. Pa., Finance 1000: Corporate Finance (2022), https://apps.wharton.upenn.edu/syllabi/202230/FNCE1000001 [https://perma.cc/YV4T-3U7H]; Albers Sch. Bus. & Econ., Seattle University, FINC 3400 Business Finance & FINC 3420 Intermediate Corporate Finance, https://www.seattleu.edu/business/undergraduate/courses–syllabi/finance [https://perma.cc/W5DD-9C6N] (last visited Aug. 28, 2023).
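To convey a sense of the machinery involved, the sketch below implements the standard Black-Scholes formula for a European call option (in Python; the function names and input figures are ours and purely illustrative). It is offered only to show the kind of calculation at issue, not as a method for valuing data.

```python
# A minimal implementation of the Black-Scholes formula for a European call
# option, included only to illustrate the mathematics discussed above.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot: float, strike: float, t: float,
                       rate: float, vol: float) -> float:
    """Price of a European call given the spot price, strike, time to
    expiration in years, risk-free rate, and annualized volatility."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)

# An option resembling Jacinta's: stock at $150, strike of $150, three months
# to expiration, a 5% risk-free rate, and 30% volatility (all assumed figures).
# Under these assumptions the formula prices the option at roughly $10 per share.
print(round(black_scholes_call(150, 150, 0.25, 0.05, 0.30), 2))
```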

Valuing real options is even harder than valuing financial ones. There are more possibilities to consider, more actions available, and more variables of interest.89See, e.g., Tom Copeland & Peter Tufano, A Real-World Way to Manage Real Options, Harv. Bus. Rev. (Mar. 2004), https://hbr.org/2004/03/a-real-world-way-to-manage-real-options [https://perma.cc/BJL8-TE64] (“As many executives point out, options embedded in management decisions are far more complex and ambiguous than financial options. Their concern is that it would be dangerous to try to reduce those complexities into standard option models, such as the Black-Scholes-Merton model, which have only five or six variables.”). It would be extremely difficult to write and apply a regulation with a precise formula that generalizes across different types of data from diffuse contexts and industries. The complexity of real options also poses challenges for parties, for judges, and for juries.

This is a serious problem. A valuation method that has attractive theoretical properties, but that is impossible to apply in practice, would seem to be of extremely limited value.

C.  A Way Forward

Despite its complexities, we nonetheless believe that real options analysis holds great promise as a framework for valuing data. If one wants to value data accurately, one must have the right model. In our view, real options analysis captures what makes data useful, and thus offers the best framework to think about data’s value. If data’s value is complicated and depends on many factors, then this is not a fault of the model; the model can only help a user identify and focus on the things that matter, even if that’s a long list.90The complexity of real options may not be an entirely bad thing. For example, complexity in the valuation process may impede parties’ ability to strategically manipulate valuations. Put another way, to get the right answer, one must ask the right question. The right question may be a hard one—but answering a different, easier question means avoiding the problem, not solving it.

Moreover, it is worth stating what may be obvious: the real options approach need not be perfect to be an improvement over existing practices.91Harold Demsetz, Information and Efficiency: Another Viewpoint, 12 J.L. & Econ. 1, 1 (1969) (identifying the nirvana fallacy of a first-best comparative institutional analysis). Getting all interested parties asking the right question—or even the same question—would be valuable. It would reduce conceptual confusion, inconsistencies, and opportunities for regulatory arbitrage. Moreover, real options never have negative value.92This is also true of financial options. Whenever taking an available course of action is profitable, one can do so; if that course of action is not profitable, one can simply decline to take that action.93This assumes that actors are rational. If that is not the case, then it may be beneficial to remove some of one’s choices, such as Odysseus tying himself to the mast to avoid being lured by the Sirens’ song. Homer, The Odyssey (Emily R. Wilson trans., W.W. Norton & Co. 1st ed. 2018). It can also be valuable to remove options from your choice set if that will change others’ behavior in a way that is favorable to you. See, e.g., Deepak Malhotra, Six Steps for Making Your Threat Credible, Harv. Bus. Sch.: Working Knowledge (May 30, 2005), https://hbswk.hbs.edu/item/six-steps-for-making-your-threat-credible [https://perma.cc/J58N-D7AS] (describing how, when playing chicken, the best strategy is to remove your steering wheel and throw it out the window; that way, your adversary knows that you cannot swerve even if you wish to, and must then act accordingly). See also supra note 85 and accompanying text. Real options analysis would underscore the point that data has value and thus should not be ignored.94Cf. Welch, supra note 75, at 368. These combined benefits may be considerable.

Furthermore, if decisionmakers use real options analysis to value data, they may find ways to ameliorate the complexity problems over time. Trial and error can produce insights. As agencies and courts experiment with the framework, approximations may arise that are easier to calculate. Even if these approximations are not precisely accurate, they may be close enough to be useful. In particular, they may be significant improvements over existing data valuation methods.

That dynamic—of finding heuristics that are simpler but informative—has been borne out in other settings. For example, basic corporate finance theory teaches that profit-maximizing firms should use net present value analysis to allocate their resources.95See, e.g., id. at 61–66. Yet many firms, including large, sophisticated ones, analyze other metrics as well.96See John R. Graham, Presidential Address: Corporate Finance and Reality, 77 J. Fin. 1975, 2038 (2022) (surveying corporate managers on how they make capital allocation decisions and finding that, among large firms, 64% use the payback method and 39% use the profitability index); John R. Graham & Campbell R. Harvey, The Theory and Practice of Corporate Finance: Evidence from the Field, 60 J. Fin. Econ. 187, 199 (2001) (finding that 57% used the payback method, 30% used the discounted payback method, and 12% used the profitability index). These metrics include the profitability index, which measures how much profit a project generates per dollar invested, and the payback rule, which considers how long it takes for a project to repay its startup costs.97Welch, supra note 75, at 75–78. Both of these simple rules have well-known flaws that can cause them to produce absurd results.98Profitability index can produce the wrong decision rules because firms seek to maximize their total profits, not their profits per dollar invested. For example, consider two mutually exclusive projects: Project A costs $100 and produces $1000 in revenue. Project B costs $1 and produces $100 in revenue. Both projects are good, but if one must choose between them, Project A is clearly better; its $900 in profit dwarfs Project B’s $99 profit. Yet Project B has a much higher profitability index ($100 / $1 = 100) than Project A does ($1000 / $100 = 10). Id. at 75–76.

The payback rule evaluates projects based on how long they take to return their initial costs. Discounted payback does the same, but discounts the project’s future cash flows to account for the fact that they do not come immediately. Both have the same problem: they ignore any cash flows that the project generates after it has paid back its initial costs. Consider Project C, which costs $100 today and returns $110 in a year, and Project D, which costs $100 today and returns $1000 in a year and a day. Project D is clearly a superior project, but the payback method will select Project C instead. Id. at 77.
Why, then, do they remain common?

One possible answer is that these simple rules produce information about projects’ real option value. For example, recouping one’s initial investment means that those recovered dollars can be redeployed toward other purposes, increasing the range of decisions available to the firm.99This assumes that capital markets are imperfect, which is true of real-world markets. See id. at 511–39. Researchers have found that, under a variety of circumstances, such simple rules can allow firms to make nearly optimal decisions.100See Robert L. McDonald, Real Options and Rules of Thumb in Capital Budgeting, in Project Flexibility, Agency, and Competition 13 (M.J. Brennan & L. Trigeorgis eds., 2000); Achim Wambach, Payback Criterion, Hurdle Rates and the Gain of Waiting, 9 Int’l Rev. Fin. Analysis 247, 257 (2000); Glenn W. Boyle & Graeme A. Gutherie, Payback and the Value of Waiting to Invest 13–14 (Apr. 29, 1997) (unpublished manuscript), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=74 [https://perma.cc/8K39-B95L]. The relative accuracy of these rules, combined with their simplicity, may explain why firms use them more frequently than real options analysis.101See Graham, supra note 96, at 1985 (finding that only 38% of large firms frequently use real options in decision-making, which was less frequent than profitability index (39%) or payback rule (64%)); see also Graham & Harvey, supra note 96, at 188 (finding that payback rule was more commonly used than real options); H. Kent Baker, Shantanu Dutta & Samir Saadi, Management Views on Real Options in Capital Budgeting, 21 J. Applied Fin. 1, 8 (2011) (surveying Canadian firms and finding that only 10% often or always used real options analysis when deciding among projects, while 67% used the payback rule, 25% used the discounted payback rule, and 11% used the profitability index). These types of heuristics, and others, may prove useful to valuing data.
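To make the mechanics of these rules concrete, the sketch below (in Python; the data structure is ours, the figures are the hypothetical Projects A through D from the footnotes above, and cash flows are left undiscounted) computes total profit, the profitability index, and the payback period, illustrating both how simple the rules are to apply and how they can diverge from profit maximization.

```python
# Toy comparison of decision rules, using the hypothetical Projects A-D from
# the footnotes above. Cash flows are undiscounted for simplicity.

projects = {
    # name: (cost today, yearly cash inflows)
    "A": (100, [1000]),      # $100 cost, $1000 revenue
    "B": (1, [100]),         # $1 cost, $100 revenue
    "C": (100, [110]),       # pays back in year one, then stops
    "D": (100, [0, 1000]),   # nothing in year one, $1000 a year and a day out
}

for name, (cost, inflows) in projects.items():
    profit = sum(inflows) - cost
    profitability_index = sum(inflows) / cost
    # Payback: the first year by which cumulative inflows cover the initial cost.
    cumulative, payback = 0, None
    for year, cash in enumerate(inflows, start=1):
        cumulative += cash
        if cumulative >= cost:
            payback = year
            break
    print(name, profit, round(profitability_index, 1), payback)

# Project A earns far more total profit than Project B, yet B's profitability
# index is higher; Project C "pays back" sooner than Project D, even though D
# is the far better project.
```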

Alternatively, real option analysis can inform other modes of valuation. One response to complicated valuation problems is the method of comparables: To determine an item’s value, identify similar items whose values are known (that is, comparables), then make appropriate adjustments. This method is frequently employed to value items such as real estate, art, and active businesses.102Welch, supra note 75, at 431–36. Under the right circumstances, this method can produce accurate valuations.

The method of comparables can be tricky to apply to data for several reasons. First, it may be difficult to identify similar data sets with known values. Sale prices are often used as the measure of value for comparable items, and sale prices for data may not be public. But even when sale prices are available, data sets can differ from each other along a variety of dimensions. Which of those differences are important, and how much should value estimates be adjusted to account for these differences? For example, which is more valuable—a data set that is twice as large, or one that includes data drawn from twice as much time? Is data more valuable when the future is more uncertain or less? These are but a few of the dimensions one might wish to consider.

Real option theory sheds light on some of these questions. It identifies a number of factors that directly affect real option value, and thus the value of data. These factors can then be considered and adjusted for when using comparables to value data.

One factor that informs a data set’s value is its informational uniqueness. To what extent does that data tell its user something that they otherwise would not know? Having insights that no one else has can be extremely valuable. On the other hand, when competitors have access to comparably informative data, profitably exploiting the data becomes harder, as competition among firms puts the firm’s counterparties in a comparatively stronger position.

Two other factors stem from the payoffs available from exploiting data. Unsurprisingly, the higher the potential future profits that the data can unlock, the more valuable the data is. What is less obvious is that the value of data increases as the future becomes less certain. This is somewhat abnormal; in finance, safer cash flows are usually considered more valuable than riskier ones.103Id. at 124, 197. Options are an important exception to this general rule, however. Because options allow one to change behavior in response to different circumstances, they actually become more valuable when a project has a wider range of possible future payouts.104Id. at 364.
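A stylized variation on Example 2 illustrates the point. In the sketch below (in Python; the function name and all figures are ours and hypothetical), widening the spread between the good and bad annual outcomes, while holding their average constant, increases the expected value of the launch when the firm can abandon after one bad year.

```python
# Illustration: with an abandonment option, a wider spread of outcomes (same
# average) is worth more, because the firm keeps the upside and caps the downside.
# Dollar amounts in millions; ten-year horizon; 50/50 odds; no discounting.

def launch_value_with_abandonment(good_per_year: float, bad_per_year: float,
                                  years: int = 10, p_good: float = 0.5) -> float:
    """Expected value when the good outcome runs the full horizon and the bad
    outcome is cut off after a single year (the option from Example 2)."""
    return p_good * (good_per_year * years) + (1 - p_good) * (bad_per_year * 1)

# Narrow spread: +$10M or -$20M per year (average annual payoff of -$5M).
print(launch_value_with_abandonment(10, -20))    # 40.0
# Wide spread: +$30M or -$40M per year (same -$5M average annual payoff).
print(launch_value_with_abandonment(30, -40))    # 130.0
```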

Another important factor in real option valuation is the length of time over which one can continue to change one’s behavior.105This is also an important factor in financial option valuation. See generally Merton, supra note 87. The longer that one can change direction, the more actions that one has available, and the more valuable the option. In the data context, this corresponds to the useful life of the data. As noted earlier, some data remains useful and informative for years or even decades; other data grows stale quickly.106Of course, distinguishing one from the other may be challenging in particular cases. The task gets easier when one at least knows to ask the question, however. All else equal, the former is more useful than the latter.107This factor relates to the first. If the data is informationally unique, or more unique, for a longer period of time, the firm possessing that data will have more attractive choices available to it for a longer period of time (that is, a longer-lived option).

Relatedly, interest rates affect the value of real options, and thus of data.108This is also true of financial options. See generally Merton, supra note 87. Profits earned in the future are more valuable when interest rates are low than when rates are high.109More precisely, firms should care about the discount rate they apply to future cash flows rather than about interest rates, but the two concepts are similar. In practice, the latter is easier to observe and may closely correlate with the former. Interest rates have more of an effect on data with a longer useful life, and less of an effect on shorter-lived data.

How quickly and cheaply one can change one’s behavior also affects a real option’s value. The quicker and more nimbly one can act, the more ways in which one can profitably change one’s behavior. Similarly, options that can be exercised at little cost are more valuable than those that are expensive to exercise.110This is analogous to the strike price for a financial call option; all else equal, options with lower strike prices are more valuable.
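A stylized numerical illustration of these last few factors follows (in Python; the model, function name, and figures are ours and purely illustrative). In it, a firm holding a data set can act on the data once a year over the data’s useful life, acting only when doing so is profitable; lengthening the useful life, lowering the discount rate, or lowering the cost of acting each increases the data’s option value.

```python
# Stylized illustration of three drivers of a data set's real-option value:
# useful life, discount (or interest) rate, and the cost of acting on the data.
# Each year the firm may act on the data (a 50/50 draw of a high or low payoff)
# or decline. All figures are hypothetical.

def data_option_value(useful_life: int, discount_rate: float, action_cost: float,
                      high: float = 10.0, low: float = 2.0,
                      p_high: float = 0.5) -> float:
    """Sum of discounted expected payoffs, acting only when it is profitable."""
    expected_yearly = (p_high * max(high - action_cost, 0.0)
                       + (1 - p_high) * max(low - action_cost, 0.0))
    return sum(expected_yearly / (1 + discount_rate) ** t
               for t in range(1, useful_life + 1))

print(round(data_option_value(useful_life=5, discount_rate=0.05, action_cost=4.0), 2))
# Longer useful life -> more valuable.
print(round(data_option_value(10, 0.05, 4.0), 2))
# Higher discount (or interest) rate -> less valuable.
print(round(data_option_value(5, 0.15, 4.0), 2))
# Cheaper to act on the data -> more valuable.
print(round(data_option_value(5, 0.05, 1.0), 2))
```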

These factors are more amenable to forming legal standards than a strict formula for valuing real options would be. Accordingly, they may provide a path forward for data valuation.

Finally, real options theory could inform attempts to value data in a different way. Experience may convince policymakers that valuing data is simply too hard, and that they should act accordingly. Such actions could take multiple forms.

One response to a difficult valuation problem is to simply exit the field as much as possible. Section 83 of the Internal Revenue Code provides a good example of this approach.11126 U.S.C. § 83 (2023). It addresses the questions of how much income a taxpayer has when they receive property in exchange for performing services, and when the taxpayer is taxed on that income. Section 83’s general rule is that employees are taxed on property based on its fair market value, and they are taxed at the time it becomes clear that they will get to keep the property.

For example, startup companies frequently include some form of equity interest in the company as part of their employees’ compensation packages.112See, e.g., Abraham J.B. Cable, Fool’s Gold? Equity Compensation & the Mature Startup, 11 Va. L. & Bus. Rev. 613, 613 (2017). These interests can come in various forms, including stock, restricted stock units, or stock options.113Id. If employees leave their employer before a certain date—if they quit to take a new job or are fired—then they forfeit some or all of their equity interests. The date after which an employee gets to keep an equity interest, even if the employee leaves the firm, is known as that interest’s vesting date. If an employee leaves the employer before the vesting date, they lose their unvested equity.

Under the general rule of Section 83, an employee is typically taxed on the value of their equity interest at the time those interests vest.11426 U.S.C. § 83 (2023). However, as noted previously, valuing stock options is difficult. Accordingly, Section 83 exempts stock options from its general rule—unless they have a visible market price (in which case they are easy to value).11526 U.S.C. § 83(e) (2023); Treas. Reg. § 1.83–7(b) (as amended in 2004). Stock options can also have a readily ascertainable fair market value if they are not actively traded, but this is unusual; the relevant regulations recognize that the possibility of future price changes increases the value of an option and requires (among other conditions) that this component of value be measurable with reasonable accuracy. Treas. Reg. § 1.83–7(b)(2), (3) (as amended in 2004). Instead, employees who receive stock options generally are not taxed until they exercise those options, at which point they receive stock in their employer, which is easier to value.116This assumes that the stock is vested. The general rule of Section 83 applies to the stock; if the employee may have to surrender the stock to the employer in the future if they do not continue their employment past a specified date, then the employee is not taxed on the value of the stock until the stock vests. This limits taxpayers’ ability to take aggressive valuations of hard-to-value stock options.117For example, absent these rules, an employee could assign a low value to a stock option, thereby recognizing little ordinary income at the time of the grant. They would then recognize greater gains on the eventual sale of their stock, but those gains would generally be long-term capital gains and would be subject to a significantly lower tax rate. Because options are hard to value, it could be difficult for the IRS to prove that the employee’s valuation was too low. Regulators can adopt similar tactics in the context of data valuation.

A potentially complementary approach would be to foster a market for data, with standardized features, in order to make private transaction prices more visible and data sets more easily comparable. In a number of instances, legislative and regulatory interventions have helped shift markets characterized by bespoke arrangements toward more commoditized features and greater transparency.118Financial derivatives provide a useful recent example. See Dodd-Frank Wall Street Reform and Consumer Protection Act, Pub. L. No. 111–203, §§ 701–774, 124 Stat. 1376 (2010). Such standardized markets can make the job of valuation much easier, and can also protect unsophisticated parties operating in those markets.119See Burton G. Malkiel, A Random Walk Down Wall Street: The Time-Tested Strategy for Successful Investing 26 (2015) (“Taken to its logical extreme, it means that a blindfolded monkey throwing darts at the stock listings could select a portfolio that would do just as well as one selected by experts.”).

CONCLUSION

While data has become increasingly valuable and important, the law’s attempts to value data have lagged, remaining confused and underdeveloped. Situating data valuation law within an economic framework built on real options analysis would resolve conceptual confusion among courts, agencies, and legislatures. It would also create greater predictability for private actors, which in turn would reduce regulatory uncertainty and facilitate investment. A clearer legal approach that cuts across jurisdictions and the different fields of law addressing data valuation would limit opportunities for regulatory arbitrage. Furthermore, a consistent approach reduces the politicization of results, preventing favored groups from shifting unclear legal rules in their favor when there is no economic basis for such a shift. A consistent approach also makes decision-making less opaque, thereby increasing the legitimacy of outcomes.

While the real options approach is not without potential problems, we believe that it is the least bad alternative available. Moreover, increased use of real options analysis over time may generate heuristics that simplify data valuation by courts and agencies. These heuristics may prove so effective that private parties incorporate them into arm’s length transactions. Further research is needed to identify what heuristics work best in the data valuation context, as well as how to encourage more transparent and comparable pricing in burgeoning data markets worldwide.

96 S. Cal. L. Rev. 1545


* John B. Milliken Professor of Law and Taxation, USC Gould School of Law.

† Carolyn Craig Franklin Chair in Law, Professor of Law and Business, USC Gould School of Law and USC Marshall School of Business, and Senior Advisor, White & Case LLP.