Listeners’ Choices Online

The most useful way to think about online speech intermediaries is structurally: a platform’s First Amendment treatment should depend on the patterns of speaker-listener connections that it enables. For any given type of platform, the ideal regulatory regime is the one that gives listeners the most effective control over the speech that they receive.

In particular, we should distinguish four functions that intermediaries can play: (1) broadcast, such as radio and television, transmits speech from one speaker to a large and undifferentiated group of listeners, who receive the speech automatically; (2) delivery, such as telephone, email, and broadband Internet, transmits speech from a single speaker to a single listener of the speaker’s choosing; (3) hosting, such as YouTube and Medium, allows an individual speaker to make their speech available to any listeners who seek it out; and (4) selection, such as search engines and feed recommendation algorithms, gives listeners suggestions about speech they might want to receive. Broadcast is relevant mostly as a (poor) historical analogue, but delivery, hosting, and selection are all fundamental on the Internet.

On the one hand, delivery and hosting intermediaries can sometimes be subject to access rules designed to give speakers the ability to use their platforms to reach listeners because doing so gives listeners more choices among speech. On the other hand, access rules are somewhere between counterproductive and nonsensical when applied to selection intermediaries because listeners rely on them precisely to make distinctions among competing speakers. Because speakers can use delivery media to target unwilling listeners, delivery media can be subject to filtering rules designed to allow listeners to avoid unwanted speech. Hosting media, however, mostly do not face the same problem, because listeners are already able to decide which content to request. Selection media, for their part, are what enable listeners to make these filtering decisions about speech for themselves.

Introduction

This is an essay about listeners, the Internet, and the First Amendment. In it, I will argue that the most useful way to think about online speech intermediaries is structurally: a platform’s First Amendment treatment should depend on the patterns of speaker-listener connections that it enables. For any given type of platform, the ideal First Amendment regime is the one that gives listeners the most effective control over the speech that they receive.

This essay does not stand alone. In a previous article, Listeners’ Choices, I outlined a two-part theory of the First Amendment based on recognizing listeners’ choices about what speech to hear.1James Grimmelmann, Listeners’ Choices, 90 U. Colo. L. Rev. 365, 366–67 (2019). First, any free-speech principle that does not take listeners’ choices seriously is self-defeating. In a world where speakers pervasively compete for listeners’ attention—which is to say, in our world—listeners’ choices provide the only normatively appealing way to resolve the inevitable conflicts among speakers. Second, existing First Amendment doctrine regularly defers to listeners’ choices. Many cases that are seemingly about speakers’ rights snap into focus as soon as we pay attention to which listeners are willing and which listeners are not. Listeners’ choices among speakers are typically content- and viewpoint-based, but a legal rule that defers to those choices can be content-neutral.

The theory I presented in Listeners’ Choices was skeletal. Here, my purpose is to flesh out the listeners’-choice principle so that it does useful doctrinal and policy work in our modern media environment. I will analyze the role of listeners’ choices in four structurally different functions that media intermediaries can carry out:

  • Intermediaries carrying out a broadcast function, such as radio and television, connect one speaker to a large and undifferentiated group of listeners who receive the speech automatically;
  • Intermediaries carrying out a delivery function, such as telephone, email, and broadband Internet, transmit speech from a single speaker to a single listener of the speaker’s choosing;
  • Intermediaries carrying out a hosting function, such as YouTube and Medium, allow an individual speaker to make their speech available to any listeners who seek it out; and
  • Intermediaries carrying out a selection function, including search engines and feed recommendation algorithms, give listeners suggestions about speech they might want to receive.

Notice that I refer to distinct “functions,” because media and intermediaries are not monolithic. There is no set of First Amendment rules for “the Internet,” nor can there be. The Internet is too vast and variegated for that to work. Distinguishing among broadcast, delivery, hosting, and selection helps us see that these functions can be disaggregated. On the Internet, we are accustomed to thinking of hosting and selection as intertwined; the term “content moderation” encompasses them both. But they do not necessarily need to be: YouTube the hosting platform and YouTube the search engine are different and could be subjected to different legal rules.

The original sin of broadcast was that it inextricably combined selection and delivery into a single take-it-or-leave-it package, in a way that was uniquely disempowering to listeners. Bandwidth limitations mean that broadcast media present listeners with a limited array of speakers to choose among. And the fact that listeners receive broadcast speech as a group, rather than individually, means that it is hard to protect unwilling listeners from that speech without blocking willing listeners’ ability to receive it. The result is a body of doctrine and theory that purports to act in listeners’ interest but is primarily concerned with allocating scarce bandwidth among competing speakers.

In contrast, listeners can be far more empowered on the Internet than they were offline. Delivery, hosting, and selection are all more listener-friendly than broadcast. The individually targeted nature of delivery media means that media intermediaries can block unwanted communications to unwilling listeners without offending core free-speech values. The pinched kinds of choices that broadcast media needed to make among competing speakers were a poor proxy for the much broader kinds of choices that listeners can make for themselves on hosting media. And the recommendations that selection media provide to help listeners choose among competing speakers are fundamentally oriented towards facilitating listeners’ autonomy, not speakers’.

Turning to the specifics of how these different kinds of media should be regulated, there are two structurally different kinds of legal rules that can apply to them:

  • Access rules ensure that speakers are able to use a medium, even when an intermediary would prefer to exclude them.2 Access rules for listeners raise harder issues because speakers can have associational, privacy, and economic interests in restricting the audience for a communication to exclude willing listeners. An activist organizer’s mailing list might exclude political opponents; a copyright owner’s catalog might have a paywall with different prices for hobbyist and professional subscribers. A communications platform’s access policies for listeners are often inextricably bound up with speakers’ preferences about their audiences. These are subtle questions, and I do not discuss them in this essay.
  • Filtering rules ensure that listeners are able to avoid unwanted speech, even when speakers would prefer to subject them to it. Sometimes they empower an intermediary to reject that speech on behalf of listeners (i.e., they are the opposite of access rules), but sometimes they require speakers and intermediaries to structure their communications in a way that enables listeners themselves to reject the speech.

From a speaker’s point of view, access rules look like they promote free speech and filtering rules look like they inhibit it. But from a listener’s point of view, both types of rules can promote the values of the First Amendment.

For access rules, the key distinction is between rival and non-rival media. Delivery and hosting can be non-rival on the Internet, where bandwidth is immense and can be expanded as needed. Speakers who use delivery and hosting media mostly do not interfere with each other, and so an intermediary can treat most speakers identically. But selection is fundamentally rival: listeners rely on these intermediaries to help them distinguish among speakers, and so selection intermediaries must favor some speakers and disfavor others. As a result, delivery and hosting intermediaries can often be subjected to access rules requiring even-handed treatment of all interested speakers, but the First Amendment mostly forbids imposing access rules on selection intermediaries.

For filtering rules, the key distinction is that delivery situates the relevant choices among speaker-listener pairings upstream (closer to speakers) while hosting situates those choices downstream (closer to listeners). When listeners can make their own choices among speech (as on hosting intermediaries), filtering rules—whether imposed by intermediaries or by the legal system—have the effect of thwarting those choices. However, when speakers make those choices in the first instance (as on delivery intermediaries), sometimes filtering rules are necessary to empower listeners to make choices for themselves. Selection media, for their part, provide listeners the information they need to choose which content on hosting media to request, and which content on delivery media to receive.

In part, this essay is a love letter to selection media, written on behalf of listeners. Selection media play an utterly necessary role in an environment of extreme informational abundance, and they can be more responsive to listeners’ informational choices and needs than any other form of media.3This is a generalization of a point I have been making for decades about search engines. See generally James Grimmelmann, Don’t Censor Search, 117 Yale L.J. Pocket Pt. 48 (2007); James Grimmelmann, The Structure of Search Engine Law, 93 Iowa L. Rev. 1 (2007); James Grimmelmann, Information Policy for the Library of Babel, 3 J. Bus. & Tech. L. 29 (2008); James Grimmelmann, The Google Dilemma, 53 N.Y. L. Sch. L. Rev. 939 (2009); James Grimmelmann, Speech Engines, 98 Minn. L. Rev. 868 (2014) [hereinafter Grimmelmann, Speech Engines]. Access rules are often nonsensical when applied to them, and filtering rules must be applied with care, lest they trample on the filtering work that selection media are already doing.4See James Grimmelmann, Some Skepticism About Search Neutrality, in The Next Digital Decade: Essays on the Future of the Internet 435, 439–42 (Berin Szoka & Adam Marcus eds., 2010).

But the fact that selection media are often listener-friendly does not mean that they always are. I have argued previously that search engines can be regulated when they behave disloyally or dishonestly towards their users,5Grimmelmann, Speech Engines, supra note 3. and the same goes for other selection media. More generally, I will argue here that structural regulation of selection media is often appropriate. For example, an intermediary could be forced to disaggregate its hosting and selection functions; the former can—and sometimes should—be regulated in ways that the latter cannot. Indeed, an intermediary might need to open its hosting or delivery platform up to competing selection intermediaries (so-called “middleware”) to give listeners broader and freer choice over the speech they receive.

Finally, a note on scope. This is an essay about intermediaries, not an essay about all forms of media. I am focusing on intermediaries’ roles in carrying third-party speech from speakers to listeners, not on their own first-party speech that they want to share with listeners. Different structural and First Amendment considerations apply to first-party speech. I will argue in places that solicitude for intermediaries’ speech interests should not prevent us from regulating them in ways that promote listeners’ speech interests. But this is not primarily an essay about intermediaries’ speech itself.6See generally Stuart Minor Benjamin, Transmitting, Editing, and Communicating: Determining What ‘The Freedom of Speech’ Encompasses, 60 Duke L.J. 1673 (2011) (discussing whether and when the First Amendment encompasses transmission of speech by intermediaries).

This essay has four substantive Parts. Part I provides a short review of the argument from Listeners’ Choices and can be skipped if you are familiar with it. Part II describes the structural differences among broadcast, delivery, hosting, and selection media, and explains how they relate to each other. Part III considers how access rules play out in these four types of media, and Part IV does the same for filtering rules. As we will see, the appropriate legal treatment of these different kinds of intermediaries and rules falls out naturally. First Amendment doctrine becomes radically simpler when we carve up media at their joints.

I. Listeners’ Choices: A Review

The starting point of Listeners’ Choices is that we can think about speech as a matching problem: in an environment where billions of people speak and billions of people listen, who speaks to whom? This way of thinking about speech is mostly content-neutral: it focuses on the network structure of connections between speakers and listeners, rather than on the content of the speech they exchange over those connections. I called an actual arrangement of speakers and listeners a “matching” to emphasize its mutuality and the fact that it is a collective property of speakers and listeners overall.

The possible structures of speaker-listener matching are shaped by two things: choices and scarcities. Start with choices: speakers make choices about what to say and how, and listeners make choices about what to listen to and how. Not all their choices can be simultaneously honored, but the heart of this way of thinking about free speech is that speakers and listeners make choices among each other, and that these choices are in large part constitutive of the values that free expression serves. They are subjective, individual, and profoundly content- and viewpoint-based. Some conflicts among speakers’ and listeners’ choices arise simply from their diverging values and goals; I called these conflicts “internal” limits on possible speaker-listener matchings.

As for scarcities, another class of limits on speaker-listener matchings is what I called “structural” limits: some combinations of who speaks to whom are physically or practically impossible. In particular, three types of scarcity shape the patterns of speech everywhere and always: bandwidth, attention, and ignorance. Bandwidth limits, such as the limited range of the human voice or the limited number of very high frequency (“VHF”) television channels, restrict the ability of speakers’ messages even to reach listeners. Attention limits are hard-wired into human anatomy and psychology. Although speech consists of information, which is potentially infinitely replicable, each person can only pay attention to one or a few speakers at a time. Finally, ignorance about the content of speech can lead people to make choices about what to listen to—choices that they would not have made if they were fully aware of what the speech would be.

The upshot of having these scarcities is that listeners’ choices among competing speakers provide a compelling way to decide among competing speech claims. Listeners’ choices are valuable in themselves because listening is an indispensable part of any communication, and listeners’ choices should be elevated over speakers’ choices because of the scarcity of attention; the capacity to listen is limited in a way that the capacity to speak is not. In order to tune into a preferred speaker, a listener must be able to tune out other speakers, and a speech environment in which listeners cannot do so is one in which effective speech is impossible. From this general point, a few specific observations follow.

First, in one-to-many cases of conflicts between willing and unwilling listeners, willing listeners generally prevail. The “Fuck the Draft” jacket in Cohen v. California7Cohen v. California, 403 U.S. 15, 16 (1971). and the drive-in movie screen in Erznoznik v. Jacksonville8Erznoznik v. Jacksonville, 422 U.S. 205, 206 (1975). were seen by both willing and unwilling viewers. To censor these forms of expression at the insistence of the unwilling ones would deprive the willing ones of speech they were willing (and in Erznoznik, affirmatively choosing) to see. The unwilling ones are expected to avert their eyes or change the channel. This looks like a preference for speakers’ right of expression as against unwilling listeners, but really it is a preference for willing listeners over unwilling ones.

Second, in true one-to-one cases where a speaker addresses a single unwilling listener, the analysis is far less speaker-friendly. The Supreme Court has affirmed homeowners’ rights to literally and figuratively shut their doors to unwanted solicitors9Martin v. City of Struthers, 319 U.S. 141, 150 (1943). and mail.10Rowan v. U.S. Post Off. Dep’t, 397 U.S. 728, 736–37 (1970). A general ordinance prohibiting Jehovah’s Witnesses from going door-to-door11See Martin, 319 U.S. at 142. or prohibiting the mailing of communist literature would be unconstitutional,12Lamont v. Postmaster Gen., 381 U.S. 301, 307 (1965). because of the presence of potentially willing listeners among the audience. That concern drops away when the speaker can stop attempting to communicate with individual listeners who specifically object while still reaching those who do not. Listeners can choose not to pay attention, and speakers who attempt to overcome listeners’ defenses (for example, with amplified sound trucks) can be barred from doing so.13Kovacs v. Cooper, 336 U.S. 77, 89 (1949). The caselaw here is rich and context-sensitive; a rule that listeners always win would be as wrong as a rule that speakers always win. Instead, the cases grapple with the interests of speakers, willing listeners, unwilling listeners, and—importantly—undecided listeners, who cannot decide whether they want to hear what the speaker has to say unless the speaker at least has an initial chance to ask.14See, e.g., McCullen v. Coakley, 573 U.S. 464, 489 (2014) (holding that a state law establishing six-foot buffer zone around people entering abortion facilities interfered with the right of anti-abortion advocates to engage in “consensual conversations” with people seeking abortions (emphasis added)).

Third, the general problem of sorting listeners into the willing and the unwilling involves what I call “separation costs”: the effort that willing listeners must take to hear, or that unwilling listeners must take to avoid hearing, or that speakers must take to distinguish between the two, or some combination of the above. The scale and distribution of separation costs can vary greatly based on the technological environment. I argue that the legal system, in a very rough way, seeks out the least-cost-avoider of speech conflicts: when a party can take a simple and inexpensive action to resolve the conflict, the law often expects them to do so.

II. Four Media Functions

This Part reviews the structural differences among the four media functions: broadcast, delivery, hosting, and selection. Along with some examples of each type, I discuss the ways in which each of them is one-to-one or one-to-many.15Eugene Volokh, One-to-One Speech vs. One-to-Many Speech, Criminal Harassment Laws, and “Cyberstalking”, 107 Nw. U. L. Rev. 731 (2013). I defer discussion of scarcity and bandwidth constraints to the next Part, as these issues bear heavily on access rules.

A. Broadcast

Start with the wired and wireless mass media that dominated most of the twentieth century: radio, broadcast television, satellite television, and cable. These mass media are characterized by their extensive reach: they enabled a single speaker to reach a large potential audience of listeners. They are, in Eugene Volokh’s taxonomy, one-to-many media.

To be clear, broadcast media collectively enable numerous speakers to reach large audiences; there are many TV stations, and each station broadcasts many different programs. Instead, when I say that broadcast is one-to-many, I mean that each individual speaker reaches a large and undifferentiated audience. Broadcast aggregates numerous such one-to-many communications, dividing them up by time (for example, WNBC-TV broadcasts the news at 7:00 and Access Hollywood at 7:30) and by intermediary (WNBC-TV and WABC-TV both broadcast their respective news programs at 7:00). The structural point is that WNBC-TV can only broadcast a single program at a time—such as Access Hollywood at 7:30—and when it does, it enables a one-to-many communication from Access Hollywood to its viewers.

B. Delivery

Next, consider delivery media like mail, telegraph, telephone, email, direct messaging, and Internet service. They all transmit speech from an individual speaker to an individual listener selected by the speaker, making them one-to-one media.16Id. at 742. More precisely, they are one-to-one with respect to individual communications from speaker to listener. In aggregate, they are many-to-many. The postal service delivers millions of letters, but each letter goes from a single sender to a single recipient. Delivery is therefore a kind of disaggregated broadcast: instead of sending a joint communication to all listeners at once, individual communications are sent to individual listeners at the speaker’s request.

Most delivery media use some form of medium-specific addresses for a sender to specify their chosen recipient. A letter goes to a specific postal address; a telephone call to a specific telephone number; an email to a specific email address; an Internet Protocol (“IP”) datagram to a specific IP address; and so on. A speaker can choose to send the same message to many listeners by sending many individual communications to different addresses. Conversely, by having an address, a listener makes themselves reachable by speakers and then can receive a mostly undifferentiated stream of communications from any speaker who wants to reach them.
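
To make the addressing pattern concrete, here is a minimal, illustrative Python sketch (not drawn from any particular provider’s system): each communication is a single datagram sent to a single recipient address, and reaching several listeners requires several individually addressed sends. The IP addresses and port below are placeholders from the reserved documentation range, not real endpoints.

```python
# Illustrative sketch only: one-to-one addressed delivery over UDP.
# The addresses below are reserved documentation addresses, not real endpoints.
import socket

def send_datagram(message: bytes, ip_address: str, port: int) -> None:
    """Send one communication to one recipient identified by its address."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, (ip_address, port))

# A speaker who wants to reach several chosen listeners simply repeats the
# one-to-one operation; the aggregate is many-to-many, but each send is not.
recipients = [("192.0.2.1", 9000), ("192.0.2.2", 9000), ("192.0.2.3", 9000)]
for ip, port in recipients:
    send_datagram(b"hello from the speaker", ip, port)
```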

Some delivery media—such as telephone and direct messaging—are interactive, but it still makes sense to talk of “the speaker” and “the listener.” First, at the beginning of a conversation, one user is trying to establish a connection with another: the phone rings, or an email appears in the inbox. The user trying to establish the connection is the one who chose to initiate the communication, chose when to do it, and most importantly, chose with whom to establish it. They are a speaker; if the other user agrees, that user receives the message and becomes a listener. Second, what we think of as “interactive” media are really bidirectional media. A telephone connection is “full duplex”: it requires two speech channels, one in each direction. The same is true for a Zoom call, an email conversation, or anything else that travels on the Internet. These interactive exchanges are made up of individual IP datagrams, each traveling from a sender to a recipient identified by IP address. Third, all delivery media are interactive on a long-enough time scale. Pen pals exchange letters, trading off the roles of speaker and listener. Each letter is still a discrete one-to-one communication carried by the postal service; mail is still a delivery medium.

C. Hosting

A third category of Internet media consists of hosting platforms. Third-party speakers send content to these intermediaries, which make the content available to listeners on request. For example, an artist uploads illustrations from her portfolio of work to a Squarespace site and individual fans visit the site to view the illustrations.

Other examples of hosting intermediaries include (1) bulk storage like Google Drive and Amazon S3; (2) content-delivery networks (“CDNs”) like Akamai and Cloudflare; (3) hosting functions of social-media platforms like YouTube and X; and (4) web-based self-publishing features of platforms like Medium and Substack. Structurally, online marketplaces are also hosting services as long as they (a) sell digital content instead of physical goods or services, and (b) feature speaker-submitted third-party content. Examples include the app stores run by Apple and Google, the e-book stores run by Barnes & Noble and Amazon, video-game stores like Steam and the Epic Games Store, and even Spotify as a distributor of podcasts and music.

Hosting is the mirror image of delivery. Both are one-to-one media; each individual communication goes from a single speaker to a single listener. The difference is that in delivery media, the speaker selects which listeners to speak to; in hosting media, the listener selects which speakers to listen to. Although hosting is usually thought of as a service offered by platforms to speakers, the listener’s request plays a crucial role in the process. Hosting is also a kind of disaggregated broadcast: instead of sending a joint communication to all listeners at once, individual communications are sent to individual listeners, this time at the listener’s request.

Hosting and delivery functions are often used in conjunction. A website host, for example, responds to a user’s request for a particular URL by sending a response with the contents of the page at that address. The request and the response are both made using delivery media—the Internet service providers (“ISPs”) along the delivery path between the host and the user. (So, for that matter, is the speaker’s transmission to the website host of the content the speaker wants to make available, and so is the website host’s acknowledgement that it has received the content.) But the host’s own activities—its responses to listeners’ requests for content—have the listener-selected nature of hosting, not the speaker-selected nature of delivery.
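
As a concrete (and deliberately simplified) illustration of this request-and-response structure, the Python sketch below fetches a page over HTTP using only the standard library. The URL is the reserved example.com domain; the point is simply that the exchange is initiated by the listener’s request, even though both request and response travel over delivery media.

```python
# Illustrative sketch only: the hosting exchange is listener-initiated.
import urllib.request

def fetch(url: str) -> bytes:
    """The listener requests a specific URL; the host responds with that content."""
    with urllib.request.urlopen(url) as response:
        # The response travels back over ordinary delivery media (the ISPs on
        # the path), but which content is sent is determined by the request.
        return response.read()

page = fetch("https://example.com/")
```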

Some intermediaries offer both hosting and delivery. Substack is a good example: each post is both made available on Substack’s website and also mailed out to newsletter subscribers. Substack is a hosting service for listeners who read the post on the website, but it is a delivery service for listeners who read the post in their email inbox. Sometimes the distinction is irrelevant, but sometimes it matters. Substack allows newsletter authors to import a mailing list of subscribers, so it is not safe to assume that everyone who receives a Substack delivery has consented to it. For a user who objects to newsletter spam, Substack is a delivery intermediary, not a hosting intermediary.

Like delivery, hosting can be aggregated into a one-to-many medium. Indeed, this is typically the default on the Internet. Unless a host affirmatively restricts which listeners have access to a speaker’s content—for example, with a list of subscribers to a paywalled publication—anyone with an Internet connection can access it, and it is far easier to leave access unrestricted than to impose selective restrictions. Thus, from a speaker’s perspective, hosting can function like broadcast in that it allows a speaker to reach an indeterminately large audience with a single act of publication.

D. Selection

Finally, consider the selection function of some media, which consists of recommending some content for users. Selection media include general search engines that index third-party sites, such as Google, Bing, Kagi, and DuckDuckGo, as well as site-specific search engines that index the content on a specific platform such as the search bars built into YouTube, TikTok, and X. They also include recommendation engines that may provide personalized results not explicitly tied to a user query, such as the feed algorithms on Facebook and TikTok or the watch-next suggestions on YouTube. The key feature of a selection platform is that it tells users about content, which they can then consume in full if they want.

Selection media are not strictly one-to-one or one-to-many in the same way that broadcast, delivery, and hosting are; they do not by themselves carry content from speakers to listeners. Instead, it is helpful to think of selection media as being many-to-one because they help individual listeners choose speech from a large variety of speakers. They turn an overwhelming volume of available content into a much smaller number of selections or recommendations that a listener can meaningfully experience, and they do so in ways that can be individuated for each specific listener.
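
The many-to-one character of selection can be illustrated with a deliberately toy Python sketch. The scoring rule and the catalog below are invented for illustration and do not describe any real platform’s ranking algorithm; the point is only that a large pool of candidate speech is reduced to a short list individuated to one listener.

```python
# Toy illustration of selection: many candidate items in, a few per-listener
# recommendations out. The scoring rule here is invented, not any platform's.
from typing import Dict, List, Tuple

def recommend(catalog: List[Tuple[str, Dict[str, float]]],
              interests: Dict[str, float],
              k: int = 3) -> List[str]:
    """Rank every item against one listener's interests and keep the top k."""
    def score(features: Dict[str, float]) -> float:
        return sum(interests.get(topic, 0.0) * weight
                   for topic, weight in features.items())
    ranked = sorted(catalog, key=lambda item: score(item[1]), reverse=True)
    return [title for title, _ in ranked[:k]]

catalog = [
    ("gardening tutorial", {"gardening": 1.0}),
    ("prog-rock concert film", {"music": 0.8, "prog_rock": 1.0}),
    ("local news segment", {"news": 1.0}),
]
# The same catalog yields different selections for different listeners.
print(recommend(catalog, {"prog_rock": 0.9, "news": 0.2}, k=2))
```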

Selection media are hardly new, but two features of the Internet make them particularly important online. First, the sheer scale of the Internet makes selection an absolute necessity. There is far more content on the Internet, and indeed even on individual social-media platforms and modestly sized websites, than any one user can plausibly engage with. The shift from bandwidth to attention as the most salient bottleneck makes selection a crucial site of contestation.

Second, the Internet has often enabled selection to be disaggregated from delivery and hosting. The selection function of a television channel is obvious: because it can transmit so little compared with what it might, the choice of what to transmit does most of the work of selection. However, YouTube is both a content host and a content recommender: it can host a video without ever recommending that video to anyone. It is the difference between an album (selection bundled with hosting) and a playlist (selection by itself). This point cuts both ways—distinguishing the two functions takes some First Amendment pressure off of hosting, but piles more onto selection.

III. Access

A. Scarcity

One of the fundamental structural constraints on choices about speech is scarcity: limits on the number of communications that a given medium, or an intermediary using that medium, can carry. Scarcity forces choices among speakers to be made upstream by the intermediary or by regulators allocating the medium among speakers and intermediaries. In contrast, non-scarce media allow choices among speakers to be made downstream by listeners themselves. Unsurprisingly, there is a long history of scarcity arguments in telecommunications policy.

The standard story, as reflected in caselaw, points to the scarcity of broadcast spectrum as a justification for regulation. First, the available spectrum needs to be allocated to different users to prevent chaos and interference. Then, once it has been handed out, these users can be required to carry a reasonable diversity of speakers so that the intermediaries do not have undue power over speech. The usual citation for this form of argument is Red Lion Broadcasting Co. v. FCC, which used scarcity arguments to uphold the FCC’s fairness doctrine.17Red Lion Broad. Co. v. FCC, 395 U.S. 367, 400–01 (1969).

In contrast, other media are not thought of as scarce in the same way. There is room for many simultaneous speakers, which means there is no need for regulatory intervention. Intermediaries themselves can choose which speakers to carry, and there is less risk of having a handful of powerful intermediaries entirely control the speech environment. The usual citation for this form of argument is Miami Herald Publishing Co. v. Tornillo, which declined to extend Red Lion to newspapers.18Mia. Herald Publ’g Co. v. Tornillo, 418 U.S. 241, 257–58 (1974). Instead, the Supreme Court upheld newspapers’ First Amendment right to pick and choose what content they print.

Thus, goes the story, there is a spectrum from scarce media, like broadcast, to non-scarce media, like newspapers. The scarcer the medium, the more regulable it is. Other media fall somewhere in between. Cable television, for example, can carry a limited number of channels, but typically more than broadcast can. Thus, the scarcity rationale for regulating cable exists, but is weaker than for regulating broadcast. This tracks with the regulatory regime: cable operators are required to set aside some of their channels for local broadcasters and public-access channels, but cable channels are not regulated for content. It also tracks with judicial treatment: the Supreme Court held 5-4 that this regulatory regime was constitutional in Turner Broadcasting System, Inc. v. FCC, almost exactly halfway in between the 9-0 decisions in Red Lion and Miami Herald.19Turner Broad. Sys., Inc. v. FCC, 520 U.S. 180 (1997).

There are two problems with this story. The first is that it does not obviously explain why there are some media—such as telephone—that are even more regulated than broadcast. The telephone network has much higher capacity than broadcast does (it can carry millions of simultaneous conversations), but it is subject to a strict common-carriage regime. A naive scarcity argument would suggest the exact opposite: that because telephone capacity is effectively unlimited, there is no need for regulation.

The second problem is that even in cases that rely on scarcity arguments, those arguments do not always cut in the direction one would expect. In Miami Herald, it was the newspaper arguing that its editorial space was scarce—in the Supreme Court’s words, that it could not engage in “infinite expansion of its column space.”20Mia. Herald, 418 U.S. at 257. The Supreme Court accepted this argument as a rationale to uphold the newspaper’s First Amendment right to reject unwanted content—the exact opposite of what a naive scarcity argument would suggest.

The way out of these paradoxes is to recognize that there are two dimensions to scarcity. On one hand, there is what I call bandwidth scarcity: the limits on any one intermediary’s ability to carry the speech of multiple speakers. On the other hand, there is what I call entry scarcity: the limits on the number of intermediaries who can operate simultaneously. Entry scarcity cuts in favor of regulation: an intermediary is in a position to control who gets to speak, unconstrained by market forces and the threat of competition. But bandwidth scarcity cuts against regulation: it means that the intermediary necessarily exercises editorial judgment over which speakers have access, and it rules out simple common-carriage regimes that treat all speakers equally. It is the interplay between these two distinct forms of scarcity that determines whether a medium is regulable.

In particular, mapping the two dimensions of scarcity in a two-by-two diagram reveals the underlying pattern of scarcity arguments:

  • In the top-right quadrant are print media, which are moderately bandwidth-scarce (it is possible to add pages to a newspaper or book, but at some expense and only by modifying its physical layout) and mostly not entry-scarce (physical printing is a commodity business). Thus, both scarcity considerations cut against regulation: there is no physical or economic need to allocate a limited ability to print among competing speakers, and imposing access rules comes at a real cost to a publisher’s ability to print the content it wants. Indeed, as Miami Herald illustrates, the Supreme Court’s solicitude for intermediaries’ speech is at its zenith here.
  • In the bottom-left quadrant are the classic common carriers. They are entry-scarce (the costs of running a second telephone network to every home were prohibitive), but they are not particularly bandwidth-scarce (carrying one more conversation or letter is a trivial burden for the phone network or the mails). Indeed, these are typically the most regulated communications intermediaries.
  • In the top-left quadrant are broadcast media. They are both entry-scarce (only thirteen VHF channels were allocated, and the practical number that could operate in any given area was invariably smaller) and bandwidth-scarce (each VHF television channel had 6 megahertz to carry a 525-line video signal at 30 frames per second). They are off-axis: their entry scarcity cuts in favor of regulation, but their bandwidth scarcity cuts against it. This is why they have historically been required to carry some diversity of content, but never with full common-carriage rules. They are more regulable than print, but less regulable than common-carriage networks.
  • In the bottom-right quadrant are media that are neither entry-scarce nor bandwidth-scarce. This is also an off-axis combination, but it is the opposite of the situation with broadcast, where access rules were both necessary (to give disfavored speakers access) and costly (because doing so comes at the cost of other speech the broadcasters could have carried). Here, access rules do not have a speech cost: giving additional speakers the ability to use an intermediary does not require the intermediary to drop other speakers to make room. However, it is also not clear whether these rules are necessary in the first place, because ordinary market forces would likely suffice to provide all speakers with the ability to speak.

As we will see, this two-dimensional framing of scarcity is quite helpful in situating the speech claims for and against access to the four types of intermediaries discussed in this essay: broadcast, delivery, hosting, and selection. Entry scarcity provides the justification for access rules, which ensure listeners the widest possible range of choices among speakers without artificial limits imposed by incumbent intermediaries. However, bandwidth scarcity, when it exists, counsels caution: access rules come at their own sharp cost, limiting intermediaries’ ability to select the speech they think their listeners will most want to choose among. Thus, hosting and delivery media (which are not bandwidth-scarce) may appropriately be the subject of common-carriage regulation where there are real issues of entry scarcity, while selection media (which are intrinsically bandwidth-scarce) mostly should not be the subject of access regulation regardless of entry scarcity.

I should note that there are competing definitions of “scarcity,” and my intention is to be agnostic among them. At different times and places, scarcity has been used to describe physical constraints (such as the laws of physics that govern electromagnetic interference), economic constraints (such as the cost of building out the infrastructure to run a telephone network), and regulatory constraints (such as limits on the number of cable franchises that will be awarded in a geographic area). Some commentators use scarcity narrowly to include only physical constraints; others use it broadly to include economic and regulatory constraints. These varying uses often reflect different beliefs about what kinds of regulations are appropriate for scarce media.21See generally Richard R. John, Sound Policy: How the Federal Communications Commission Worked in the Age of Radio (2025) (unpublished manuscript) (on file with author) (discussing these debates in the early years of the FCC). My argument here is modular with respect to the definition of scarcity in use. If you, according to your preferred definition, believe that a medium is entry-scarce but not bandwidth-scarce, I hope you will agree with my arguments for why common carriage might be an appropriate regulatory regime.

With these observations about scarcity in mind, we can turn to how access rules play out for different types of media. The focus throughout will be on how different rules increase or limit the choices available to listeners.

B. Broadcast

Twentieth-century broadcast media had highly limited capacity and were both bandwidth- and entry-scarce. These limits were primarily physical and technological and secondarily economic and regulatory. The available techniques for modulating an audio or audiovisual signal into one that could be transmitted through the atmosphere (radio, television, and satellite) or through wires (cable) allowed only a small number of such signals to be transmitted simultaneously in any geographic region. This number expanded over time with developments in telecommunications engineering: from AM to FM radio broadcasting; from VHF (very high frequency) to UHF (ultra high frequency) television broadcasting; from coaxial to fiber-optic cables; and so on. The basic structure remained the same: a fixed, finite menu of channels transmitted simultaneously to all potential listeners.

In such a setting, speaker-listener matching arises from a two-stage process. First, a few speakers are chosen to have access to the available channels, and then each listener chooses from the speech that speakers make available on those channels. In the United States, the first-stage choice among speakers was (and is) made by the operator of the physical infrastructure—the transmitting equipment or physical cable network—subject to some regulatory limits. The second-stage choice was (and is) made by individuals: members of the public with appropriate receiving apparatus (restricted, in some cases such as cable and satellite, to those who have subscribed to the operator’s service). The phrase most commonly used to describe this second-stage choice—changing the “channel”—reflects the way in which the technological constraints of twentieth-century broadcast funneled speech into a small and finite number of options.

Consider a speaker who is denied access to a channel, or who receives less access than they want, or who is limited in how they are allowed to use it, or who is charged more than they want for their access. In each case, they are obviously aggrieved. It is harder, however, from a purely speaker-centric position to explain why they have been wronged. The challenge—and this is a recurring challenge for speaker-centric analyses—is the problem of symmetry among speakers. It is one thing to say that the lucky speaker who receives access is better off than the unlucky speaker who does not, but it is quite another to make them change places. Doing so simply swaps the problem of the network operator picking winners and losers with the problem of the government picking winners and losers. To give A access and deny it to B amounts to preferring A’s speech to B’s, and on most theories of free speech, this preference is an awkward one for the government to engage in.

Instead, rationales for broadcast content regulation tend to rely on the needs of listeners, rather than speakers. As many scholars have noted,22E.g., David A. Strauss, Rights and the System of Freedom of Expression, 1993 U. Chi. Legal F. 197, 202 (1993). this is the upshot of Alexander Meiklejohn’s famous phrase, “What is essential is not that everyone shall speak, but that everything worth saying shall be said.”23Alexander Meiklejohn, Free Speech and Its Relation to Self-Government 25 (1948). The basic idea of this regulatory paradigm is to give listeners either high-quality content, a wide range of options of content, or both—on the assumption that speakers and broadcasters, left to their own devices, will provide neither. As the Supreme Court put it in Red Lion’s famous phrasing, “It is the right of the viewers and listeners, not the right of the broadcasters, which is paramount.”24Red Lion Broad. Co. v. FCC, 395 U.S. 367, 390 (1969).

Ringing rhetoric aside, it is hard to find actual listeners in the resulting regulatory regime. In an environment of severe bandwidth constraints, it is impossible to solicit and honor all individual listeners’ choices; there are never enough channels to give each member of the audience what they personally want. Instead, they make their desires known only collectively and statistically by tuning in to channels and by paying for those channels or for the things advertised on them. Thus, as the long-running theme in media criticism goes, broadcast was a “vast wasteland” of boring, mediocre, and fundamentally majoritarian content.25Newton N. Minow, Television and the Public Interest, 55 Fed. Commc’n L.J. 395, 398 (2003) (reprinting Minow’s speech on May 9, 1961, before the National Association of Broadcasters). The larger the mass audience, the lower the common denominator.26See C. Edwin Baker, Media, Markets, and Democracy (2002) (arguing that mass media tend towards popular content to the exclusion of content of interest to smaller communities).

Consider some of the most notable examples of broadcast access regulations: the Mayflower doctrine27Mayflower Broad. Corp., 8 F.C.C. 333, 339–40 (1941). and its successor the fairness doctrine,28Rep. on Editorializing by Broadcast Licensees, 13 F.C.C. 1246, 1253 (1949). the right of reply,29Pers. Attacks; Pol. Eds., 32 Fed. Reg. 10303 (July 13, 1967); Red Lion Broad., 395 U.S. at 367 (upholding the constitutionality of the FCC’s right of reply rules). and the equal-time rule.3047 U.S.C. § 315. None of these were concerned with any specific listeners’ choices among speakers. Instead, they were all attempts to provide for listeners’ interests generically—by anticipating what groups of hypothetical listeners might want or need.

The few occasions on which broadcast media regulations have attempted to take account of actual listeners’ choices when setting access rules only show how hard it is to do so. The most striking example is format regulation. For years, the FCC interpreted the Communications Act of 1934’s requirement that broadcast licensees serve the “public convenience, interest, or necessity” to mean that it should consider stations’ formats in its licensing procedures.31Id. § 303. It would deny approval for new pop-music radio licenses, for example, if it felt that an existing market was adequately served by the radio stations already licensed to operate in the area.32Citizens Comm. to Pres. the Present Programming of the Voice of the Arts in Atlanta on WGKA-FM v. FCC, 436 F.2d 263, 270 (D.C. Cir. 1970). Indeed, a licensee seeking permission to change formats was required to petition the FCC for approval.33See Hartford Commc’ns Comm. v. FCC, 467 F.2d 408, 411–12 (D.C. Cir. 1972). These rules have long since gone by the wayside. The FCC now takes the position that broadcasters have a First Amendment right to broadcast any content format they want. In FCC v. WNCN Listeners Guild, the Supreme Court upheld the FCC’s policy decision not to consider formats in licensing renewal and transfer proceedings. 450 U.S. 582, 595–96 (1981).

Format regulation was in theory a listener-based system, but the FCC seemed genuinely flummoxed when actual listeners showed up in licensing procedures demanding a voice in the first-stage choices of who got access to the airwaves and on what terms. In Office of Communication of United Church of Christ v. FCC, a group of civil-rights activists attempted to intervene in a license-renewal proceeding before the FCC, alleging that WLBT in Jackson, Mississippi had aired only pro-segregation viewpoints.34Off. of Commc’n of United Church of Christ v. FCC, 359 F.2d 994, 997–98 (1966). The FCC denied their request, arguing that these “representatives of the listening public”35Id. at 997. could “assert no greater interest or claim of injury than members of the general public.”36Id. at 999. The D.C. Circuit reversed and remanded for an evidentiary hearing, as listeners were “most directly concerned with and intimately affected by the performance of a licensee.”37Id. at 1002.

There followed a string of cases in which the FCC and the D.C. Circuit struggled with how to actually take listeners’ views into account.38E.g., Citizens Comm. to Pres. the Present Programming of the Voice of the Arts in Atlanta on WGKA-FM v. FCC, 436 F.2d 263, 270 (D.C. Cir. 1970); Hartford Commc’ns Comm. v. FCC, 467 F.2d 408, 414 (D.C. Cir. 1972); Lakewood Broad. Serv., Inc. v. FCC, 478 F.2d 919, 924 (D.C. Cir. 1973); Citizens Comm. to Keep Progressive Rock v. FCC, 478 F.2d 926, 929 (D.C. Cir. 1973). In Citizens Committee to Keep Progressive Rock v. FCC, for example, WGLN in Sylvania, Ohio, switched to an all-prog-rock format in late 1971, and then received FCC approval in 1972 to switch to “generally middle of the road music which may include some contemporary, folk and jazz.”39Citizens Comm. to Keep Progressive Rock, 478 F.2d at 928. The Citizens Committee to Keep Progressive Rock petitioned the FCC to object. The D.C. Circuit ordered a hearing on whether the Toledo metropolitan area was adequately served by prog-rock stations as compared with top-forty stations,40Id. at 932. and discussed such details as whether a “golden oldies” format was sufficiently distinct from “middle of the road.”41Id. at 928 n.5. “In essence, one man’s Bread is the next man’s Bach, Bacharach, or Buck Owens and the Buckeroos, and where ‘technically and economically feasible,’ it is in the public’s best interest to have all segments represented,” the opinion sagely intoned.42Id. at 929.

My point here is not that the FCC’s enterprise of supervising formats or of requiring balanced public-interest programming in the name of listener interests was ill-considered. Instead, I want to emphasize that these interventions were more about listeners’ interests than about listeners’ choices. Some of them were about giving listeners information considered important for them to have, and some of them were about moderately diversifying the menu of speech from which listeners could choose. But in an environment of severely limited bandwidth serving mass audiences, there was almost nothing more that could be done.

I make this point here because there are two misconceptions about listeners that are extraordinarily prevalent in the literature on access to the media. Both of them are direct consequences of inappropriately extending reasonable assumptions about the broadcast environment to other domains where they are much worse fits.

The first mistaken assumption is that speakers seeking access to media are necessarily good proxies for listeners. In 1967, Jerome Barron wrote, “It is to be hoped that an awareness of the listener’s interest in broadcasting will lead to an equivalent concern for the reader’s stake in the press, and that first amendment recognition will be given to a right of access for the protection of the reader, the listener, and the viewer.”43Jerome A. Barron, Access to the Press—A New First Amendment Right, 80 Harv. L. Rev. 1641, 1666 (1967) (emphasis added). In broadcast media, a strong right of access for diverse speakers may be a way to promote listeners’ practical ability to choose speech.

In other media, which are not characterized by the same combination of broad distribution and narrow bandwidth, there is much less reason to think of speakers as proxies for listeners. To give a simple example, many of the speakers most loudly demanding—and sometimes suing for—a right of access to Internet platforms are unrepentant spammers.44E.g., Cyber Promotions, Inc. v. Am. Online, Inc., 948 F. Supp. 436, 443–44 (E.D. Pa. 1996). Less charitably, the Republican National Committee. See Republican Nat’l Comm. v. Google, Inc., No. 2:22-cv-01904-DJC-JBP, 2023 U.S. Dist. LEXIS 149076, at *2–3 (E.D. Cal. Aug. 24, 2023). The access they seek is the access of pre-FCC unlicensed broadcast: the right to overwhelm media and listeners with high-volume speech that drowns out alternatives and reduces listeners’ practical ability to choose among speakers.

The second misconception about listeners’ choices that arises from seeing all media as broadcast media is the belief that nothing else can be done. Both the justifications for and many of the criticisms of regulations like the fairness doctrine and format review arise from thinking about speech environments in which listeners are fundamentally passive. The only controls they have—or can have—are the channel dial and the on-off switch. It seems to follow that the only useful regulatory interventions must happen upstream and that individual listeners themselves can have little involvement in the matching process. The entire model of media criticism that conceptualizes individuals as television viewers—numb, motionless, and mindless zombies or couch potatoes tuned in to the idiot box—is blind to the ways in which they engage with media that give listeners more agency and more choices.45Even in the case of television, it misses the way that fans engage. See generally Henry Jenkins, Textual Poachers: Television Fans and Participatory Culture (1992); Betsy Rosenblatt & Rebecca Tushnet, Transformative Works: Young Women’s Voices on Fandom and Fair Use, in eGirls, eCitizens 385 (Jane Bailey & Valerie Steeves eds., 2015). This is a different type of agency than the agency I am discussing as listeners. We will see many examples soon. For now, remember that the assumption of listener passivity is just that—an assumption.

C. Delivery

Delivery media are mostly not bandwidth-scarce, especially on the Internet. Any given delivery intermediary’s platform tends to face fewer capacity constraints than broadcast media did. Partly this is structural: delivery media solve a smaller problem because they only try to route a communication to one recipient, rather than many. Partly it is due to physical differences: the phone network could handle more simultaneous connections by running more wires in trunk lines, whereas cable could not increase the number of channels without reengineering every subscriber’s wiring and equipment. Partly it is due to the telecommunications engineering triumphs of the telephone system and the Internet, which have scaled up over many orders of magnitude in their lifetimes. And partly it is due to recognizing the limits of the possible: telegraph companies did not attempt to offer video service.

Whatever the reason, any given communication takes up a much smaller fraction of a delivery provider’s capacity than a corresponding communication would take up of a broadcaster’s capacity. Comcast as a cable operator can offer its subscribers a few hundred channels, while Comcast as an ISP can offer its subscribers delivery to and from millions of sites. The result is that Comcast’s Internet-service subscribers interfere with each other far less than the cable channels vying for transmission do. One more subscriber is trivial from Comcast’s perspective, and it has every economic incentive to sign up as many as it can. However, each cable carriage agreement is individually negotiated, and Comcast is ready to say “no” if the terms are not good enough because Comcast has to devote some of a sharply limited resource to each channel it offers.

Entry scarcity varies among delivery media. Some, such as email, are almost completely open to entrants: anyone can set up their own SMTP server and start exchanging emails. Others, such as telephone and Internet service, have limited competition among intermediaries who can serve any particular customer or region because the need to place physical infrastructure, such as fiber-optic cables or cell-phone towers, in particular locations creates economic and regulatory barriers to entry. The postal service is an extreme example: it has a statutory monopoly on the carriage of letters.4618 U.S.C. § 1694 (fining anyone who, in regular point-to-point service, “carries, otherwise than in the mail, any letters or packets”).

There is a long and robust tradition of speakers’ rights to access delivery media. Older delivery media, in particular, have frequently been subjected to common-carriage rules that require them to accept communications from all senders and for all receivers, and forbid them from discriminating on the basis of the contents of those messages.47See Genevieve Lakier, The Non–First Amendment Law of Freedom of Speech, 134 Harv. L. Rev. 2299, 2316–30 (2021); Blake E. Reid, Uncommon Carriage, 76 Stan. L. Rev. 89, 110–13 (2024). The postal service “shall not . . . make any undue or unreasonable discrimination among users of the mails . . . .”4839 U.S.C. § 403. This statutory obligation is almost certainly a First Amendment rule.49See Blount v. Rizzi, 400 U.S. 410, 416 (1971) (“The United States may give up the Post Office when it sees fit, but while it carries it on the use of the mails is almost as much a part of free speech as the right to use our tongues . . . [P]rocedures designed to deny use of the mail . . . violate the First Amendment unless they include built-in safeguards against curtailment of constitutionally protected expression . . . .”). Similarly, the Communications Act prohibits “any unjust or unreasonable discrimination in charges, practices, classifications, regulations, facilities, or services” by telecommunications common carriers including telephone companies.5047 U.S.C. § 202(a). This is the modern continuation of a long tradition: laws in the nineteenth century required telegraph companies to “operate their respective telegraph lines as to afford equal facilities to all, without discrimination in favor of or against any person, company, or corporation whatever.”51Telegraph Lines Act, ch. 772, 25 Stat. 382–83 (1888) (codified as amended at 47 U.S.C. § 10); See Lakier, supra note 47, at 2320–24 (surveying history of telegraph common-carrier laws). Indeed, the postal service,52See 39 U.S.C. § 101(a) (“The United States Postal Service shall be operated as a basic and fundamental service provided to the people by the Government of the United States . . . .”). telephone network,53See 47 U.S.C. § 254 (establishing universal service policy). and broadband Internet service54See generally FCC, Connecting America: The National Broadband Plan (2010). are all the subjects of universal-service policies that affirmatively attempt to provide access to all American residents.

On the other hand, it is an open doctrinal question whether government can require modern delivery providers—specifically email and broadband Internet—to provide uncensored access to speakers and listeners. The best and most prominent example is the FCC’s network neutrality rules that attempted to require broadband ISPs to carry traffic to and from all edge providers (that is, speakers) on a nondiscriminatory basis.55The most recent version was the Safeguarding and Securing the Open Internet Order of 2024, 89 Fed. Reg. 45404 (June 7, 2024). See 47 C.F.R. § 8.3(a) (2024) (ISPs “shall not block lawful content, applications, services, or non-harmful devices”); id. § 8.3(b) (ISPs shall not “impair or degrade lawful internet traffic on the basis of internet content, application, or service”); id. § 8.3(c)(1) (ISPs shall not “directly or indirectly favor some traffic over other traffic” for compensation); id. § 8.3(d)(1) (ISPs shall not “unreasonably interfere with or unreasonably disadvantage” users’ ability to access and edge providers’ ability to make available lawful content). That order was set aside by the Sixth Circuit. See Ohio Telecom Ass’n v. FCC, 124 F.4th 993 (6th Cir. 2025). It is unlikely that federal network-neutrality rules will be revived in the short run, although state-level counterparts remain in force. See, e.g., Cal. Civ. Code § 3100 (West 2024). The D.C. Circuit upheld one version of the FCC’s network neutrality rules against a First Amendment challenge in 2016.56See U.S. Telecom Ass’n v. FCC, 825 F.3d 674, 675 (D.C. Cir. 2016). Dissenting from denial of rehearing en banc, Judge Kavanaugh argued that ISPs exercise editorial discretion protected by the First Amendment.57See U.S. Telecom Ass’n v. FCC, 855 F.3d 381, 382 (D.C. Cir. 2017). There are also dicta in the Moody v. NetChoice majority opinion describing First Amendment protections for social-media companies’ “choices about the views they will, and will not, convey” that would seem to apply equally well to ISPs.58Moody v. NetChoice, LLC, 603 U.S. 707, 737 (2024).

Indeed, § 230 affirmatively shields Internet delivery media from liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”5947 U.S.C. § 230(c)(2)(A). The precise contours of what constitutes “good faith” are unsettled,60See, e.g., Darnaa, LLC v. Google, Inc., No. 15-cv-03221-RMW, 2016 U.S. Dist. LEXIS 152126, at *9 (N.D. Cal. Nov. 2, 2016). as is the scope of the “otherwise objectionable” catchall,61See, e.g., Enigma Software Grp. USA, LLC v. Malwarebytes, Inc., 946 F.3d 1040, 1047 (9th Cir. 2019). but the general result is to preempt any state attempts (by statute or common law) to impose access mandates.62See, e.g., Republican Nat’l Comm. v. Google, Inc., No. 2:22-cv-01904-DJC-JBP, 2023 U.S. Dist. LEXIS 149076, at *10–11 (E.D. Cal. Aug. 24, 2023).

It is also notable that many delivery media are governed by strict privacy rules that limit carriers’ ability even to determine the contents of a message. The USPS is legally prohibited from opening first-class mail without a search warrant.63See 39 U.S.C. § 404(c). Telephone carriers are restricted from listening to conversations by the Wiretap Act,64See 18 U.S.C. § 2511(1)(a) (prohibition on interception); id. § 2511(2)(a)(i) (describing limited exception to that prohibition for interceptions “necessary incident to the rendition of his service or to the protection of the rights or property of the provider of that service”). as are ISPs and email providers.65See, e.g., United States v. Councilman, 418 F.3d 67, 69 (1st Cir. 2005) (finding Wiretap Act interception by email provider). Even beyond legal limits, many delivery providers now use encryption systems that technologically prevent the provider from determining message contents; for example, Apple Messages and Signal are end-to-end encrypted so that only the designated recipient (and not any intermediary, including Apple or Signal) can decrypt a message. A fortiori, carriers who cannot even tell what a message says cannot discriminate on the basis of its contents.
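
To see why such a carrier could not discriminate on content even if it wanted to, consider a minimal sketch of end-to-end encryption, assuming the PyNaCl library; the keys and message are invented, and a relay in the middle handles only ciphertext:

```python
# Minimal sketch of end-to-end encryption with PyNaCl (names and message invented).
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; the carrier never holds a private key.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob's public key.
sender_box = Box(alice_key, bob_key.public_key)
ciphertext = sender_box.encrypt(b"meet at noon")

# The carrier relays `ciphertext` but, lacking any private key, cannot read it.
# Only Bob can recover the plaintext.
receiver_box = Box(bob_key, alice_key.public_key)
plaintext = receiver_box.decrypt(ciphertext)
assert plaintext == b"meet at noon"
```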

It is easy to justify common-carriage access rules for delivery media—old and new—in light of their structural characteristics. From the intermediary’s point of view, the weak bandwidth constraints mean that carrying any particular communication is not a substantial technical burden. In the aggregate, of course, communications add up, but that is primarily an economic problem—one to be addressed with appropriate pricing and funding.66See generally Brett Frischmann, Infrastructure: The Social Value of Shared Resources (2012). Where pricing is not available or insufficient, capacity limits on the volume of communications to or from a user are largely content-neutral ways of allocating bandwidth.67Similarly, communications that impair the network itself can be addressed through anti-abuse rules that target the harmful effects and only incidentally burden speech. See, e.g., 47 C.F.R. § 68.108 (2023) (allowing telephone providers to discontinue service to customers who attach equipment that harms the network); id. §§ 8.3(a), (b), (d)(2) (making exceptions to network neutrality rules for “reasonable network management”).

Carrying a communication is not a speech problem, except to the extent that the intermediary wants to make an expressive statement by carrying or refusing to carry particular messages. Historically, though, that argument has carried very little weight for traditional delivery media. This attitude is easy to justify by seeing delivery media from the perspective of speakers and listeners. Willing speakers and willing listeners have essentially the same interest in access to delivery media: communicating with each other, which is the core free-speech interest.68Grimmelmann, supra note 1, at 382; Jovy Chan, Understanding Free Speech as a Two-Way Right, 1 Pol. Phil. 156, 164 (2024). If you want to send me an email and I want to receive it, we are both thwarted if your email provider deletes your email.

An intermediary’s speech claims are weaker when they go up against those of matched speaker-listener pairs. The intermediary may not want to help the speaker and listener connect, but this is fundamentally an objection to their speech, not a claim about its own speech. It might prefer to deliver messages from other speakers it likes better; but when it does so, it forces listeners to receive messages from speakers they prefer less. As I argued in Listeners’ Choices, it is a core free-speech violation to make a listener listen to a speaker whose speech they do not want rather than listen to a speaker whose speech they want.69Grimmelmann, supra note 1, at 388. So while a delivery intermediary’s denial of access to a speaker or listener is not by itself a First-Amendment violation, the First Amendment leaves ample room for government to require delivery intermediaries to provide access.

In general, both speakers and listeners have standing to challenge denials of access to a delivery platform. In Murthy v. Missouri, the Supreme Court held that listeners do not have standing to challenge restrictions on speakers unless “the listener has a concrete, specific connection to the speaker.”70Murthy v. Missouri, 603 U.S. 43, 75 (2024). In the case of a speaker attempting to send a message to a specific listener (as opposed to the hosting platforms at issue in Murthy itself), this connection seems clearly satisfied. And where it is the listener who has been excluded from a platform (for example, disconnected by their ISP over alleged copyright violations), the impact on their speech interests as a listener is equally obvious.

If there is a distinction between analog and digital delivery media, it cuts in favor of applying access rules to modern digital intermediaries, not against. As bandwidth constraints drop further and further away, intermediaries’ arguments that they have a technical or economic need to discriminate among users on the basis of their speech get weaker and weaker. Most arguments to the contrary rest on a confusion between delivery and selection media. Commentators project the strong expressive interests in an intermediary’s selection function (both the intermediary’s own and those of the listeners they serve) onto the intermediary’s delivery function, without stopping to consider whether these functions can be separated and distinguished.

D. Hosting

Common-carriage access rules for hosting media generally facilitate listener choice. There is an obvious argument in favor of access rules: the more speakers that are available through a hosting intermediary, the wider the range of choices it offers to listeners. The entire web was better than AOL’s walled garden; a streaming service with ten million tracks beats one with one million. The hosting intermediary might have self-interested reasons to limit access (for example, to favor its affiliated speakers or to extract more money from speakers through price discrimination), but the listeners who use the platform generally prefer that it offer the widest possible range of speakers and speech. To a first approximation, listeners either side with the speaker in a speaker-hosting platform dispute (if they want the speech) or are at most indifferent (if they do not want the speech).

Common arguments against access rules that apply to other forms of media mostly do not apply to hosting media. First, there is no scarcity of bandwidth compelling hosting intermediaries to pick and choose among speakers to carry. Bandwidth on the Internet is effectively infinite. Cloudflare could serve every user in the United States if it needed to. This is not to say that Cloudflare could, would, or should do so for free—this level of access would be quite expensive and a speaker wanting to support hundreds of millions of massive downloads would quite reasonably be expected to pay commensurately. It is just that Cloudflare could serve everything to everyone.

Second, there are generally no operational constraints that cause one speaker’s content to interfere with another’s. Common Internet hosting intermediaries are technically capable of carrying almost any item of content within a category: videos at a given resolution, files consisting of arbitrary bitstrings, and so on. These items of content may have different sizes—and might be subject to caps for short-run capacity or economic reasons—but from a technical perspective, the intermediary is entirely indifferent as to their content. A broadcast radio station must deal differently with a talk-show host in studio one, a live musical performance in studio two, and a recorded program coming via audio link from a remote location. However, in an important sense, all apps in an app store are the same. Offering speaker A’s app does not divert resources needed to offer speaker B’s.

Third, there is no scarcity of listeners’ attention compelling hosting providers to prioritize some content over others. A delivery platform can fill up a listener’s queue with unwanted speech, making it harder to receive the speech they want. If your telephone is ringing off the hook with telemarketers, your friends will get a busy signal every time they call. However, a hosting platform does not make any claims on a listener’s attention; it simply sits there passively until the user seeks out and requests the speech. No one is interested in all 100,000,000 tracks on Spotify; but for the most part, having access to an extra 99,900,000 does not take anything away from the 100,000 one might actually be interested in listening to.

To be sure, a hosting platform with 100,000,000 pieces of content is harder to browse than a platform with 100. But this should be understood as more of a selection problem than a hosting problem. Combining hosting and selection into a single platform function takes some of the control over speaker-listener matching away from listeners and vests it in the platform. A movie theater that shows 5 movies at a time offers far less listener choice than a streaming platform that gives listeners access to a catalog of 50,000. Give that same listener a list of 5 recommended hot new releases and they have all of the choice-related benefits of the movie theater and none of the drawbacks. The creation of Internet-scale hosting intermediaries creates its own need for equally useful selection intermediaries, but the first step towards facilitating their healthy development is recognizing that selection is distinct from hosting.

None of this is to say that access rules always actually enhance the choices available to listeners. The economics of multi-sided markets are complicated, and a badly designed access rule could undermine a pricing strategy that successfully attracts more speakers and more listeners to an intermediary. My goal here is narrower. I want to argue that rules that have the effect of increasing the range of speakers available on a hosting platform are pro-listener-choice, whether or not they are structured as open access rules. The actual creation of a regulatory regime involves difficult policy considerations and mechanism-design choices. My point is only that this policy space ought to be available to regulators and not be foreclosed by the First Amendment.

Indeed, access rules are even easier to justify for commodity hosting platforms than they are for delivery platforms. As we have seen, filtering rules for delivery media frequently translate into corresponding exceptions to access rules. Spam-blocking, for example, might be a case of reasonable network management under network neutrality rules. This, in turn, means that regulators need to be cautious with imposing access rules, lest they inadvertently cut off filtering that listeners depend on. A must-carry rule for email, for example, would be a spammer’s dream.

To the extent that listeners do their own filtering in accessing a hosting platform, hosting platforms do not require the same degree of caution with access rules. If regulators require that Candy Crush be available in app stores, it does no harm to a user who does not enjoy match-three games. If you don’t want to play Candy Crush, don’t download it.

E. Selection

For decades, speakers have been demanding access to selection intermediaries. In the 2000s, the issue of the day was “search neutrality”: equal access to search engines’ rankings.71See generally Grimmelmann, supra note 4. More recently, speakers have complained about being “downranked” on social media—that is, not placed in other users’ algorithmic feeds. In both cases, the complaint is the same: their speech is theoretically available to users but not recommended in practice.

The fundamental challenge with giving a coherent account of access to selection is the baseline problem.72See generally Grimmelmann, supra note 4. It is nearly impossible to describe what “correct” or “neutral” rankings would look like. Different users have different preferences, and even the same user has different preferences in different contexts and at different times. My Facebook News Feed should not be identical to yours; we have different friends and you like fashion while I like sports. My search results for “crab cakes” should be different than my search results for “crab canon,” and even my search for “Vikings” could be referring to Scandinavian seafarers, a football team, Mars probes, a TV series, or kitchen appliances.73See Grimmelmann, Speech Engines, supra note 3, at 913 (discussing challenge of defining relevance). As a result, different selection media can quite reasonably make different choices about speakers. Indeed, for a regulator to prescribe what a selection platform should do is to become a selection platform itself.

Thus, selection stands in sharp contrast to delivery and hosting, both of which have a plausible neutral baseline: deliver or host everything. Selection is more like broadcast in this respect: choices must be made. However, the reason for the choices is very different. The need for choices in broadcast stems from bandwidth being scarce; not all speech can be made available at all. The need for choices in selection stems from attention being scarce; listeners must choose among the speech available to them. In broadcast, transmission and selection are inextricably linked. However, on the Internet, transmission (that is, hosting plus delivery) and selection can be distinct functions, one of which substantially overcomes the scarcity problem and the other of which confronts it full-force.

Access claims in the selection context are therefore effectively a zero-sum fight among speakers. To move speaker A up one place in a feed means pushing some other speaker B down one place. Platforms might make this choice for a variety of content-based reasons—profit, ideology, whimsy—but it is much harder to identify a legitimate reason for a regulator to prefer A to B or vice-versa. A neutrality rule in a delivery or hosting context works because the government can tell an ISP to deliver all IP datagrams with equal priority (network neutrality) or a cloud-hosting provider to host all lawful content (a must-carry regime); the baseline is content-neutral. But there is no simple corresponding neutrality rule for selection. To select is to choose on the basis of content.

I argued in Speech Engines for a more limited principle of relevance to search users: a search result is a search engine’s guess at what a user will find relevant to their query.74Grimmelmann, Speech Engines, supra note 3, at 913. The user’s goals are subjective, and the engine’s estimate of what will serve those goals is necessarily a judgment call; but whether the results the engine actually shows correspond to its own best estimate is an objectively observable fact. A regulator therefore has a principled basis to intervene when a search engine is disloyal to its users—and it is disloyal when it shows them results that (objectively) differ from the engine’s own (subjective) judgment about what the users are likely to find relevant. This does not mean the regulator can substitute its own relevance judgments for those of the user or the search engine, but it does mean that the regulator can prevent the search engine from lying to users, and it may be able to police conflicts of interest that would tempt the engine to shade its results.
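
To make the loyalty point concrete, here is a minimal sketch, with invented result names and scores, of the objectively checkable gap between an engine’s own relevance estimates and the ranking it actually displays:

```python
# Hypothetical relevance scores the engine itself computed for a query.
own_estimates = {"result_a": 0.91, "result_b": 0.84, "result_c": 0.40}

# Results the engine actually displayed, after (say) boosting an affiliate.
displayed_order = ["result_c", "result_a", "result_b"]

def loyal_order(estimates):
    """The engine's own best guess: results sorted by its own relevance scores."""
    return sorted(estimates, key=estimates.get, reverse=True)

def deviations(estimates, displayed):
    """Pairs shown out of order relative to the engine's own estimates --
    the objectively checkable gap between judgment and display."""
    out_of_order = []
    for i, higher in enumerate(displayed):
        for lower in displayed[i + 1:]:
            if estimates[lower] > estimates[higher]:
                out_of_order.append((higher, lower))
    return out_of_order

print(loyal_order(own_estimates))        # ['result_a', 'result_b', 'result_c']
print(deviations(own_estimates, displayed_order))
# [('result_c', 'result_a'), ('result_c', 'result_b')] -- result_c was displayed
# above results the engine itself scored as more relevant.
```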

This argument generalizes into a broader claim about selection intermediaries and listeners. A selection intermediary offers listeners a way to choose among speakers. To prohibit the intermediary from doing so, or to dictate how it makes the selection, is to interfere with listeners’ ability to choose. We should understand this as an interference with listeners’ First Amendment rights to listen (and not just the intermediary’s right to speak). At the same time, we should recognize that a selection intermediary that is dishonest or disloyal also interferes with listeners’ First Amendment interests. The dishonesty and disloyalty can provide a content-neutral basis for identifying problematic recommendations by selection intermediaries, even though those recommendations are themselves content-based.

  1. Moody v. NetChoice

The Supreme Court’s recent decision in Moody v. NetChoice was a missed opportunity to clarify these principles.75Moody v. NetChoice, LLC, 603 U.S. 707, 724–28 (2024). Texas and Florida passed content-moderation laws that, in various ways, prohibited major social-media platforms from restricting content on the basis of political viewpoint (Texas) or from restricting content from political candidates or journalistic enterprises (Florida). The actual holding in Moody was a nothingburger about the appropriate standards for facial challenges; but in dicta, a five-justice majority explained that the platforms’ “selection, ordering, and labeling of third-party posts” were protected expression.76Id. at 727.

This was a thoroughly speaker-oriented perspective. It treated the problem with the states’ laws as that “an entity engaging in expressive activity, including compiling and curating others’ speech, is directed to accommodate messages it would prefer to exclude.”77Id. at 731. This perspective makes perfect sense when the entity is a newspaper or a parade, both of which contribute to the marketplace of ideas by adding perspectives they think that readers or viewers will appreciate. And it is true, in a sense, for social media, where many platforms curate speech in ways that reflect specific viewpoints.

However, in another, more accurate sense, the value of selection algorithms on social media runs to users as listeners: the algorithms help them find speech that is interesting, valuable, and relevant to their diverse interests. A state mandate to insert some speech into a user’s feed or search results interferes with the user’s ability to listen to the speech that the user actually wants to hear. It is not just compelled speech as against the platform—it is also compelled listening as against the user. Put this way, the First Amendment problem is blindingly obvious.78See generally Brief of First Amendment and Internet Law Scholars as Amici Curiae Supporting Respondents, Moody v. NetChoice, LLC, 603 U.S. 707 (2024) (Nos. 22-277 and 22-555) (making this argument).

This shift in perspective—from speaker to listener, from platform to user—is important for two reasons. First, it gives a more convincing response to the states’ argument that the platforms are not really speaking in most of their selection decisions. Facebook does not really have an opinion on whether my cousin’s apple pie photos or my friend’s story about a long line at the grocery store is worthier speech, but I certainly do. There is a sense in which the speech value of Facebook’s ranking decisions is derivative of my speech interests.

This is a compelling response to Texas’s attempt to inject political speech into social-media feeds on a viewpoint-neutral basis. It is a bit uncomfortable for Facebook to argue that it has an expressive preference to discriminate on the basis of viewpoint, but it is perfectly natural for individual users to have expressive viewpoints and to prefer content on that basis. For listeners to choose speakers on the basis of viewpoint is not to interfere with the freedom of speech; it is an exercise of that freedom and the point of the whole enterprise. Subscribing to The Nation instead of The National Review (or vice-versa) is viewpoint discrimination on the user’s part, and that is a good thing! Social-media users want feeds that reflect their divergent interests and viewpoints, and social-media platforms advance, rather than inhibit, First Amendment values when they cater to these listener preferences.

Second, the focus on listeners’ expressive interests in choosing what speech they receive on social-media platforms, and in having platforms that can algorithmically make selections in accordance with those interests, makes clearer that this is an argument only about selection and not necessarily about hosting. To the extent that states attempt to regulate platforms’ hosting functions with neutrality or must-carry mandates, those laws may rest on a firmer basis than their attempts to regulate platforms’ selection functions.79Eugene Volokh, Treating Social Media Platforms Like Common Carriers?, 1 J. Free Speech L. 377, 448 (2021). As I argued above, there is a plausible neutral baseline for hosting, and regulating hosting by itself does not interfere with listeners’ choices in the same way as regulating selection does.

In the actual Moody and Paxton cases, the platforms’ hosting and selection functions were closely related, and the most common content-moderation remedy they applied was to delete the content entirely.80See generally Eric Goldman, Content Moderation Remedies, 28 Mich. Tech. L. Rev. 1 (2021) (discussing much wider range of remedies available to platforms). Similarly, the states’ laws ran rules that sounded in hosting (“permanently delete or ban”) together with rules that sounded in selection (“post-prioritization” or “shadow ban”), as if all of these practices were entirely equivalent. However, it is possible to imagine future laws that more clearly require hosting of content on a viewpoint-neutral basis while leaving platforms greater discretion over selection. I think these laws pose genuinely harder questions. Moody’s majority opinion collapses these distinctions in an unhelpful way.

  2. Antitrust and Self-Preferencing

A listeners’-choice perspective also shows why antitrust regulation of selection intermediaries is broadly permissible, even when some of the anticompetitive conduct complained of involves the selection of speech.81See generally Hillary Greene, Muzzling Antitrust: Information Products, Innovation and Free Speech, 95 B.U. L. Rev. 35 (2015). The actual antitrust analysis is highly fact-specific and requires careful technological and economic reasoning about particular products and markets. See generally Erik Hovenkamp, Platform Exclusion of Competing Sellers, 49 J. Corp. L. 299 (2024); Erik Hovenkamp, The Antitrust Duty to Deal in the Age of Big Tech, 131 Yale L.J. 1483 (2022). My point here is only that in many circumstances, the First Amendment does not block a court from reaching the merits of an antitrust case involving a selection intermediary. Again, the key point is that although users have content- and viewpoint-based preferences among speech, the government can act neutrally in terms of content by taking those preferences into account, whatever they are. An app store that rejects fart apps because “the App Store has enough fart, burp, flashlight, fortune telling, dating, drinking games, and Kama Sutra apps, etc. already”82App Review Guidelines § 4.3 Spam, Apple Dev., https://developer.apple.com/app-store/review/guidelines [https://perma.cc/9FA3-N67R]. is certainly expressing a viewpoint. However, to the extent that users want fart apps and the app store is suppressing competing fart apps in favor of its own, promoting welfare-enhancing consumer choices is a perfectly legitimate government interest and the harm is cognizable under traditional antitrust principles.

Thus, rules against self-preferencing by selection intermediaries will generally be permissible under the First Amendment. This position may sound absurd if one sees only the First Amendment interests of the intermediary, and it is still difficult if one takes into account the interests of its competitors. However, it becomes entirely reasonable if one considers the interests of affected users. Indeed, there is a natural congruence between the interests of users as listeners (my argument in this essay) and the interests of users as consumers (the traditional stance of antitrust law).

More specifically, it would be permissible to have a rule that a pure selection intermediary must treat first-party content that it itself produced evenhandedly with third-party content from competitors. The intermediary will have valid, expressive reasons to prefer some content over others, and these decisions will mostly be off-limits to regulatory scrutiny, as discussed above. However, a regulator can make clear that the platform cannot prefer first-party content simply because it is first-party content. The platform can use any ranking rules it wants, but those rules must be applied evenhandedly to all—or at least, the platform must give users the option of disabling any self-preferencing.
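
A minimal sketch of what such a rule might look like in operation, with hypothetical listings and an invented boost value: the platform may use whatever scoring signals it likes, but first-party status matters only when the user has opted into self-preferencing.

```python
from dataclasses import dataclass

@dataclass
class Listing:
    name: str
    quality_score: float   # whatever ranking signal the platform uses
    first_party: bool      # whether the platform itself produced the content

def rank(listings, allow_self_preferencing=False):
    """Hypothetical ranking rule: any scoring signals are fine, but a
    first-party boost applies only if the user has opted into it."""
    def score(item):
        bonus = 0.2 if (allow_self_preferencing and item.first_party) else 0.0
        return item.quality_score + bonus
    return sorted(listings, key=score, reverse=True)

catalog = [
    Listing("rival_fart_app", 0.90, first_party=False),
    Listing("store_fart_app", 0.80, first_party=True),
]

# Default: first-party status is ignored; the rival's higher score wins.
print([l.name for l in rank(catalog)])
# ['rival_fart_app', 'store_fart_app']

# Only a user who affirmatively enables self-preferencing sees the boost.
print([l.name for l in rank(catalog, allow_self_preferencing=True)])
# ['store_fart_app', 'rival_fart_app']
```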

For similar reasons, requiring speech-selection intermediaries to disclose their commercial ties is also generally permissible under traditional consumer-protection principles. Listeners can legitimately expect to know when a speaker has a financial incentive to tell them one thing rather than another, an expectation that applies to speech selection as well as to speech itself. At the moment, paid advertising in search results and in social-media feeds must be disclosed as such; however, a stronger rule that required selection platforms to disclose when recommended content is first-party, or when there are substantial financial ties between the platform and a speaker, would also be allowable for the same reasons.

Finally, full structural separation between hosting, delivery, and selection is a plausible antitrust remedy or regulatory mandate. In Part IV, I will discuss in more detail why this kind of separation might be appealing from a free-speech perspective. For now, I just want to note that the economic and technical separation of these functions is itself plausible from a First Amendment perspective, Moody notwithstanding. I have been arguing that hosting and delivery platforms could be subject to must-carry rules, but selection platforms generally cannot. Much of the gap between the two sides’ positions in Moody arose from the fact that the laws’ proponents generally cited caselaw about common carriage in hosting and delivery settings, while the laws’ opponents generally cited caselaw about expressive choices in selection settings.

The thing that made the Moody cases difficult to resolve was that the platforms combined both hosting and selection functions, and most of the briefing (and the opinions) ran these functions together. This would seem to open up an argument on the platforms’ part: Moody confirms they have full First Amendment protection when they engage in selection, so even a pure hosting platform is always allowed to engage in selection—i.e., there is a First Amendment right to combine these two functions. However, I think this does not follow from Moody; or to the extent that it does, Moody is wrong.

The thrust of the common-carriage cases is that the public provision of standardized service can be subject to nondiscrimination obligations.83There is a parallel tradition that these standardized services can be structurally separated from other services that involve more individualized offerings. This, for example, is what the Telecommunications Act of 1996 attempted to do with its distinction between “telecommunications service” (standardized and common-carriage) and “information service” (bespoke and unregulated). To the extent that this distinction is coherent (and I think that it is, much of the time), nondiscrimination obligations should apply to the standardized services and not to the individualized ones. Moody may have missed this distinction, but the Court’s opinion in 303 Creative LLC v. Elenis seems to hinge on it: requiring a designer to make a custom wedding website (“pure speech”) is unconstitutional compelled speech, but requiring a merchant to sell a commodity product to all comers is perfectly permissible.84303 Creative LLC v. Elenis, 600 U.S. 570, 593–94 (2023); see also Dale Carpenter, How to Read 303 Creative v. Elenis, Volokh Conspiracy (July 3, 2023, 2:11 PM), https://reason.com/volokh/2023/07/03/how-to-read-303-creative-v-elenis [https://perma.cc/KVQ9-KD2N] (arguing that 303 Creative applies to products that are customized and expressive). In listener terms, listeners pay attention to the intermediary’s own speech in individualized cases like selection, and to third-party speech in standardized cases like hosting.

  3. Unranked Feeds

An interesting partial and special case of separating hosting from selection is to require a provider to include an unranked or chronological feed for those users who want it. Facebook offers both “Top Posts” (algorithmically ranked) and “Most Recent” (chronological) feeds; Reddit offers “Best” and “Hot” (algorithmically ranked) but also “New” (chronological) sorting options.

What makes these options feasible is that there is a plausible objective baseline. A chronological feed on Facebook is “all posts from friends and pages I follow, sorted by recency.” This is workable in a way that “all posts I would be interested in” is not. The restriction to content from accounts that one follows is what makes the option to display everything tractable. A purely chronological feed of everything posted to X (the “firehose”) is not of interest to most users—it would be overwhelmingly vast—but a purely chronological feed of everything posted by those they follow is. For similar reasons, a non-algorithmic search engine is an oxymoron except in domains that are so small or simple as to barely require a search engine at all. Anything larger than “find on this webpage” requires contestable choices about ordering.
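
The tractability of that baseline is easy to see in a minimal sketch (the posts and follow list are invented): the feed is fully determined by the listener’s own follow choices and the clock, with no relevance judgments required.

```python
from datetime import datetime, timezone

# Hypothetical post records: (author, timestamp, text).
posts = [
    ("cousin",   datetime(2025, 3, 1, 9, 30, tzinfo=timezone.utc), "apple pie photos"),
    ("stranger", datetime(2025, 3, 1, 9, 45, tzinfo=timezone.utc), "viral clip"),
    ("friend",   datetime(2025, 3, 1, 8, 15, tzinfo=timezone.utc), "long line at the store"),
]

following = {"cousin", "friend"}   # accounts this listener has chosen to follow

def chronological_feed(posts, following):
    """The objective baseline: everything from followed accounts, newest first.
    No relevance judgment is needed -- only the listener's own follow choices."""
    return sorted(
        (p for p in posts if p[0] in following),
        key=lambda p: p[1],
        reverse=True,
    )

for author, when, text in chronological_feed(posts, following):
    print(when.isoformat(), author, text)
# 2025-03-01T09:30:00+00:00 cousin apple pie photos
# 2025-03-01T08:15:00+00:00 friend long line at the store
```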

A chronological-feed option is listener-choice enhancing. A chronological-feed mandate would not be. Facebook and other social-media platforms have extensive evidence showing that users stay on their sites longer and engage with more posts when they see non-chronological feeds. This is a legitimate user preference; given the limits of attention, the user benefits greatly from delegating the choice to Facebook.85I think it is more accurate to call this a “delegation” of choice rather than “choosing not to choose.” Cf. Cass R. Sunstein, Choosing Not to Choose, 64 Duke L.J. 1, 9 (2014). However, not every user wants algorithmic feeds. I, for example, only used chronological ordering on Twitter, and have stuck to that preference on federated platforms. This, too, is a legitimate user preference; a platform that forces algorithmic ordering on everyone when chronological ordering is feasible thwarts some listeners’ choices about speech selection.

This is another way in which Moody paints with too broad a brush. Seeing selection as purely a matter of platform speech makes the majority insensitive to listeners’ speech interests. Requiring social-media platforms to offer a chronological option in addition to their preferred algorithmic ordering looks like a restriction on the platform’s speech rights; indeed, to the majority it might even be compelled speech. However, a chronological feed option is also a way of respecting users-as-listeners’ choices about speech without forcing a platform to make ranking choices that it and its users would otherwise disagree with. Requiring a chronological option strictly increases the choices available to listeners, while not interfering with a platform’s ability to provide its preferred ordering to any listeners who are interested in hearing it.

IV. Filtering

Now consider media from the perspective of unwilling listeners. As we will see, there are really three different types of unwilling listeners in media regulation. In each case, it is helpful to distinguish between (1) downstream filtering infrastructure that empowers listeners themselves to avoid unwanted content, and (2) upstream filtering rules that prevent that content from reaching them in the first place.

First, there are listeners who are uninterested in or who actively dislike particular content: opera fans who loathe rap music or reality television fans who find scripted shows unbearably dull. Here, downstream filtering infrastructure is typically sufficient. As long as there is something they would rather watch (an access problem), as long as they are able to find out about it (a selection problem), and as long as they are actually able to switch to it (which is true for most media),86Exceptions typically involve being in public places, such as in an auto mechanic’s waiting room or on a subway car with someone having a loud video call. they can watch operas and reality shows, and ignore the rap and scripted dramas. It does not bother them, because they do not need to see it. Upstream filtering rules are unnecessary.

Second, there are listeners who are individually targeted with specific unwanted content that is hard for them to avoid. This is fundamentally a delivery problem; it does not arise with other types of media. Sometimes speakers target individual listeners, like a harassing telephone caller. Sometimes they target many listeners indiscriminately, like an email spammer. Either way, listeners can try to use self-help downstream filtering to avoid it, but if that fails, they may need upstream filtering to help prevent it from reaching them in the first place.

And third, there are minors. Sometimes, children want to avoid violent, sexual, disturbing, or other adult-themed content because it upsets them, but they come across it by accident and cannot look or flip away in time. Sometimes—perhaps more often—the problem is that children are willing to see this material, but their parents or guardians want to shield them from it. In both cases, the theory is that children are less capable of making choices for themselves as listeners than adults are, and therefore that some kind of upstream filtering rules are necessary because downstream ones will fail. Either the kids themselves will be less good at filtering than their parents would be, or the kids will affirmatively evade the filtering their parents try to impose.

Downstream filtering infrastructure also plays a crucial role in supporting (or undermining) the rationales for other kinds of media regulations. On the one hand, good downstream filtering plays a crucial role in making it possible for listeners to pick and choose among the superabundance of content that access rules try to make available. On the other, good downstream filtering can reduce the need for upstream filtering rules—in First Amendment terms, it is frequently a “less restrictive alternative.”

A. Broadcast

In broadcast media, unwilling listeners were typically expected simply to change the channel. They may not always have had many other broadcast options, but no one was forcing them to watch any particular broadcast. Even this limited measure of choice was sufficient to protect unwilling listeners from programs they despised. As the range of channels expanded (and with it, the range of choices), any one unwanted channel became less of an imposition on listeners—indeed, they became less likely to notice or care about it at all. Similarly, by their nature, very few broadcast programs were personally targeted at, or specifically harmful to, individual listeners. The local CBS affiliate simply did not care enough about Angela Johnson at 434 Oakview Terrace to preempt Murder, She Wrote with an hour-long special insulting Johnson and her life choices.

Instead, the filtering problems on broadcast media primarily concern minors. The theory of “just change the channel” does not work for them for two reasons. First, something offensive or shocking could come up unexpectedly when one is just flipping through channels. This was the case in FCC v. Pacifica Foundation, in which the Supreme Court upheld the FCC’s finding that a radio broadcast of George Carlin’s “seven dirty words” routine was indecent in violation of its regulations.87FCC v. Pacifica Found., 438 U.S. 726, 740–41 (1978). And it is the case with the FCC’s modern attempts to extend its obscenity-and-indecency rules to cover fleeting expletives and other sudden intrusions into otherwise family-friendly broadcasts, like Bono calling U2’s Best Original Song win at the Golden Globes “really, really, fucking brilliant” live on air, or the 2004 Super Bowl wardrobe malfunction.88See generally FCC v. Fox Television Stations, Inc., 567 U.S. 239, 248, 258 (2012) (finding the FCC’s rule unconstitutionally vague as applied to fleeting expletives). These are cases where a listener (here, a parent making choices on behalf of their child) cannot effectively make a choice not to receive the unwanted material because of the linear, real-time nature of broadcast audio and video. The character of the channel changes more quickly than the listener can flip away.

Second, sometimes children want to watch shows their parents do not want them to. Nominally, the theory here is that parents cannot constantly supervise their children’s TV viewing; stations have to do the filtering work that parents cannot.89See J.M. Balkin, Media Filters, the V-Chip, and the Foundations of Broadcast Regulation, 45 Duke L.J. 1131, 1136–38 (1996) (arguing persuasively that the difficulty of parental supervision is the real import of courts’ language that broadcast media are uniquely “pervasive”). This is why the FCC’s indecency regulations are confined to only the hours from 6:00 AM to 10:00 PM each day: at night, when indecency regulations do not apply, kids are assumed to be in bed and not watching TV.9047 C.F.R. § 73.3999(b) (2023). In comparison with indecency rules, obscenity regulations apply at all hours of the day. Id. § 73.3999(a). The indecency rules are an incursion on adults’ abilities as listeners to choose what speech they want to receive. They are an exception to the normal rule that willing listeners beat unwilling listeners. The justification is simply the usual one offered so often in American law: protecting the supposed innocence of the young from the purportedly corrupting influence of being aware that sex is a thing that exists. The eight hours at night when indecency rules do not apply serve as a concession to adults’ interests as listeners.

I say that this is “nominally” the theory of broadcast indecency regulation because it only really makes sense in a world where the main audio and video media are broadcast—a world we have not lived in for decades. Cable, satellite, and other subscription services have never been subject to the indecency rules. Here, the theory is that parents can choose whether or not to subscribe, a choice that presumably operates differently from the choice whether to have a TV at all. Thus, they have an upfront choice that they can use to prevent their children from receiving unwanted indecent material. If you do not want your kids to watch Skinemax late at night, do not get cable, or do not pay extra for premium channels. Similar laws and similar logic apply to “over-the-top” broadcast services on the Internet, like live sports on ESPN+. If you do not like it, do not subscribe.

At times, the government has tried to impose more stringent filtering rules on broadcasters. Listeners’ choices provide a simple and compelling explanation of where the doctrine has come to rest. Consider United States v. Playboy Entertainment Group, Inc., where § 505 of the 1996 Telecommunications Act required cable operators to “fully scramble or otherwise fully block”91Codified at 47 U.S.C. § 561(a). sexually explicit programs except between the hours of 10:00 PM and 6:00 AM.92United States v. Playboy Ent. Grp., Inc., 529 U.S. 803, 806 (2000). Of course, most cable operators already scrambled sexually explicit channels for non-subscribers, and sexually explicit channels like Playboy Television were typically “premium” offerings sold à la carte, so only paying subscribers to these specific channels would have a converter box to descramble them.93See id. at 807. So far, this was simply a case of parental choice over what broadcast services to subscribe to.

The technological complication was “signal bleed”; the analog scrambling technologies available in the 1990s could not prevent portions of the audio and video from leaking through, albeit in somewhat garbled form.94Id. at 807–08. To Congress, signal bleed meant that existing scrambling by itself was insufficient, and so cable companies would need to “fully block” such content if they could not “fully scramble” it. However, the Supreme Court observed that there was a less-restrictive alternative to fully banning a channel—“block[ing] unwanted channels on a household-by-household basis.”95Id. at 815. Indeed, this capacity was already required of cable systems by § 504 of the Act,96Codified at 47 U.S.C. § 560. so the law contained its own less-restrictive alternative. In other words, a legal regime requiring upstream filtering for all listeners by broadcast intermediaries was unconstitutional because there was a downstream alternative that gave individual listeners a more granular choice.

A more technically complex broadcast filtering system is the “V-chip,” which the 1996 Telecommunications Act required in all televisions shipped through interstate commerce.9747 U.S.C. § 330(c)(1); see generally Balkin, supra note 89. The Act describes the V-chip bloodlessly as “a feature designed to enable viewers to block display of all programs with a common rating,”9847 U.S.C. § 303(x). but the intent and implementation were that the rating systems would flag programs with sexual, violent, or other types of adult content. While the V-chip is mandated by law, the ratings that it interprets are not. Under the TV Parental Guidelines, which include classic bangers like TV-14-LS (many parents would find the contents unsuitable for children under 14 because of crude language and sexual situations), programs are “voluntarily rated by broadcast and cable television networks, or program producers.”99Frequently Asked Questions, TV Parental Guidelines, http://tvguidelines.org/faqs.html [https://perma.cc/CMF3-PQWK]. Indeed, there is a strong argument that a mandatory rating system would constitute unconstitutional compelled speech. See Book People, Inc. v. Wong, 91 F.4th 318, 336–40 (5th Cir. 2024) (holding unconstitutional a mandatory self-applied age-rating system for websites). Overall use of the V-chip seems to have peaked at about 15 percent of parents.100Henry J. Kaiser Family Foundation, Parents, Children, & Media: A Kaiser Family Foundation Survey, KFF, https://www.kff.org/wp-content/uploads/2013/01/entmedia061907pres.pdf [https://web.archive.org/web/20250221161327/https://kff.org/wp-content/uploads/2013/01/entmedia061907pres.pdf].
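
Functionally, the V-chip is a simple downstream comparison between a program’s rating and a household threshold. A minimal sketch, using the published rating tiers but invented programs and ignoring content descriptors like the “LS” in TV-14-LS:

```python
# Hypothetical ordering of TV Parental Guidelines ratings, least to most restricted.
RATING_ORDER = ["TV-Y", "TV-Y7", "TV-G", "TV-PG", "TV-14", "TV-MA"]

def blocked(program_rating: str, household_limit: str) -> bool:
    """V-chip-style check: block any program rated above the household's limit.
    The ratings are applied upstream (by networks and producers); the blocking
    decision is made downstream, in the listener's own television."""
    return RATING_ORDER.index(program_rating) > RATING_ORDER.index(household_limit)

schedule = [("evening drama", "TV-14"), ("cartoon", "TV-Y7"), ("late movie", "TV-MA")]
household_limit = "TV-PG"

for title, rating in schedule:
    print(title, "BLOCKED" if blocked(rating, household_limit) else "shown")
# evening drama BLOCKED
# cartoon shown
# late movie BLOCKED
```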

It is enlightening to consider the V-chip, like § 504, as a mechanism for creating listener choice under the choice-unfriendly conditions of broadcast. In both cases, signals are still transmitted indiscriminately to all listeners, but in both cases, listeners can individually choose whether to opt in or opt out of making those signals intelligible. Section 504 does so in a less granular way (entire channels), while the V-chip does so in a more granular way (individual programs), but the general idea is the same. It is not a coincidence that in both cases, the regulatory regime converged on a technical system that put more choices in the hands of individual households. This overall downstream movement of choices about speech—from speakers and intermediaries to listeners; from “push” media to “pull” media—is one of the most significant trends in recent media history.

B. Delivery

Now consider filtering rules that help unwilling listeners avoid unwanted deliveries. The First Amendment does not operate directly here; outside of some narrow contexts involving a “captive audience,” there is no First Amendment right not to be spoken to.101See Frisby v. Schultz, 487 U.S. 474, 487–88 (1988) (upholding an ordinance against residential picketing on the grounds that people are captive audiences in their own homes); Snyder v. Phelps, 562 U.S. 443, 459–60 (2011) (rejecting liability for funeral protests on the ground that the mourners were not a captive audience when the protesters “stayed well away from the memorial service”). Instead, laws designed to protect listeners from unwanted communications in delivery media are generally constitutional, provided that they are suitably tailored to the actual harms suffered by listeners who are genuinely unwilling.

The most obvious example is that anti-harassment laws have repeatedly been upheld when they involve one-to-one communications.102E.g., Lebo v. State, 474 S.W.3d 402, 407 (Tex. Ct. App. 2015) (upholding conviction for repeatedly sending threatening emails and telephone calls to victim). Repeated telephone calls or harassing emails can be the subject of valid restraining orders, civil judgments, or criminal convictions.103See, e.g., 47 U.S.C. § 223(a) (prohibiting telephone harassment). See also United States v. Lampley, 573 F.2d 783, 788 (3d Cir. 1978) (upholding constitutionality of § 223(a)); United States v. Darsey, 342 F. Supp. 311, 312–14 (E.D. Pa. 1972) (describing problems § 223(a) was meant to solve). See generally Genevieve Lakier & Evelyn Douek, The First Amendment Problem of Stalking: Counterman, Stevens, and the Limits of History and Tradition, 113 Calif. L. Rev. 143, 170–77 (2025) (discussing history of anti-stalking law). The key here, as I argued in Listeners’ Choices, is that these restrictions do not prevent speakers from addressing willing listeners.104Grimmelmann, supra note 1, at 392. They remain free to telephone anyone else they want; only one particular number is forbidden. The legal system can therefore protect the unwilling victims of harassment without interfering in the core First Amendment relationship between willing speaker and willing listener.105See generally Leslie Gielow Jacobs, Is There an Obligation to Listen?, 32 U. Mich. J.L. Reform 489 (1999). An order requiring a speaker to take down a blog post about the victim interferes with that relationship; an order requiring them to stop sending direct messages to the victim does not.106See Volokh, supra note 15, at 742–43 (making one-to-many vs. one-to-one distinction).

Listeners can opt out of unwanted one-to-one commercial speech. The Controlling the Assault of Non-Solicited Pornography and Marketing Act (“CAN-SPAM”) for email, the Telephone Consumer Protection Act (“TCPA”) for telephone and Short Message Service (“SMS”), Do-Not-Call for telephone, and the TCPA for faxes all broadly prohibit sending certain types of commercial solicitations to unwilling listeners. CAN-SPAM uses an opt-out system; a sender gets one bite at the apple but must refrain from further emails once a recipient objects.10715 U.S.C. § 7704(a)(3)(A)(i). With some exceptions, TCPA prohibits the use of automated dialers and prerecorded messages (that is, bulk communications particularly unlikely to be of interest to individuals) unless the recipient affirmatively opts in.10847 U.S.C. § 227(b)(1)(B). Do-Not-Call bars all unsolicited commercial calls to numbers on the list,10915 U.S.C. § 6151; 16 C.F.R. § 310.4(b)(1)(iii)(B) (2024). and TCPA bars all unsolicited commercial faxes.11047 U.S.C. § 227(b)(1)(C). All of these laws have been upheld against First Amendment challenges.111See generally Mainstream Mktg. Servs., Inc. v. FTC, 358 F.3d 1228 (10th Cir. 2004) (discussing Do-Not-Call); United States v. Smallwood, No. 3:09-CR-249-D(07), 2011 U.S. Dist. LEXIS 76880 (N.D. Tex. July 15, 2011) (discussing CAN-SPAM); Moser v. FCC, 46 F.3d 970 (9th Cir. 1995) (discussing telephone provisions of TCPA); Missouri ex rel. Nixon v. Am. Blast Fax, Inc., 323 F.3d 649 (8th Cir. 2003) (discussing fax provisions of TCPA).

The First Amendment rule for unwanted postal mail is even stronger. In Rowan v. United States Post Office Department, the Supreme Court upheld a law under which “a person may require that a mailer remove his name from its mailing lists and stop all future mailings to the householder.”112Rowan v. U.S. Post Off. Dep’t, 397 U.S. 728, 729 (1970). Although the law was framed in terms of allowing recipients to opt out of receiving “erotically arousing or sexually provocative” advertisements,113Id. at 730. it allowed recipients “complete and unfettered discretion in electing whether or not [they] desired to receive further material from a particular sender,”114Id. at 734. and the legislative history indicated that neither the postal service nor a reviewing court could “second-guess[]” the recipient’s decision.115Id. at 739 n.6. “Nothing in the Constitution compels us to listen to or view any unwanted communication,” wrote Chief Justice Burger for a unanimous court.116Id. at 737. Compare Rowan with Bolger v. Youngs Drug Products Corp., in which the Court held a law prohibiting the mailing of contraceptive advertising unconstitutional:117Bolger v. Youngs Drug Prods. Corp., 463 U.S. 60, 72 (1983). that is, a prohibition on the use of mailings was constitutional when the prohibition was requested by the recipient (Rowan) but unconstitutional when the prohibition was imposed by the government (Bolger).

Although Rowan is sometimes discussed as a captive-audience case,118E.g., Snyder v. Phelps, 562 U.S. 443, 459–60 (2011). it is better understood as a case about delivery media. Consider Frisby v. Schultz, a true captive-audience case: there is nowhere to go to hide from protesters outside your door, so a law prohibiting residential picketing is constitutional.119Frisby v. Schultz, 487 U.S. 474, 487–88 (1988). By contrast, the Supreme Court has treated self-help as effective against unwanted mail. Bolger stated that the “short, though regular, journey from mail box to trash can is an acceptable burden, at least so far as the Constitution is concerned.”120Bolger, 463 U.S. at 72 (internal quotation omitted). The only way this Bolger dictum can be squared with Rowan is if the basis of Rowan’s holding is listeners’ rights against unwanted communications, rather than recipients’ status as a captive audience to unwanted postal mail in their own homes.

It is also widely accepted that there is no First Amendment problem if a delivery carrier implements some form of filtering or blocking at the request of a user. Wireless and landline telephone companies offer call blocking to their customers, which allows a user to block all further calls from a number. Indeed, FCC regulations explicitly permit providers to block calls that are likely to be unwanted based on “reasonable analytics”12147 C.F.R. § 64.1200(k)(3)(i) (2023). so long as the recipient has an opportunity to opt out of the blocking.122Id. § 64.1200(k)(3)(iii). Email filtering is also ubiquitous. Some users do the filtering themselves, manually or with an app, but many rely on the filtering (both explicit blacklists and machine-learning classifiers) offered by their email providers. Here again, § 230 plays a role: the most common reason that delivery media block “otherwise objectionable” communications is that their users object to them, and spam is a common example.123See, e.g., Republican Nat’l Comm. v. Google, Inc., No. 2:22-cv-01904-DJC-JBP, 2023 U.S. Dist. LEXIS 149076, at *11 (E.D. Cal. Aug. 24, 2023).
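
The structure of these regimes is simple to sketch: a blocklist the user controls directly, plus provider-side analytics blocking that the user can switch off. The numbers and the spam heuristic below are invented.

```python
# Hypothetical per-user filtering settings for a delivery provider.
user_settings = {
    "analytics_blocking": True,          # provider's "likely spam" blocking, opt-out-able
    "blocklist": {"+15550001111"},       # numbers this user has personally blocked
}

def provider_thinks_spam(caller: str) -> bool:
    # Stand-in for the provider's "reasonable analytics" (reputation, volume, etc.).
    return caller.startswith("+1555000")

def deliver(caller: str, settings: dict) -> bool:
    """Deliver a call unless the user blocked the caller, or the provider's
    analytics flag it and the user has not opted out of that blocking."""
    if caller in settings["blocklist"]:
        return False
    if settings["analytics_blocking"] and provider_thinks_spam(caller):
        return False
    return True

print(deliver("+15550002222", user_settings))   # False: caught by analytics
user_settings["analytics_blocking"] = False     # the user opts out, as the rules require
print(deliver("+15550002222", user_settings))   # True: the user chose to receive it
print(deliver("+15550001111", user_settings))   # False: still on the personal blocklist
```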

Finally, many laws require speakers to accurately identify themselves upstream when using delivery media so that listeners downstream can decide whether or not to receive their speech. CAN-SPAM prohibits false or misleading header information,12415 U.S.C. § 7704(a)(1). prohibits deceptive subject lines,125Id. § 7704(a)(2). and requires that advertisements be disclosed as such.126Id. § 7704(a)(5)(i). The Truth in Caller ID Act prohibits spoofing caller ID information “with the intent to defraud, cause harm, or wrongfully obtain anything of value.”12747 U.S.C. § 227(e)(1). The Junk Fax Prevention Act of 2005 (“JFPA”) requires clear “identification of the business, other entity, or individual sending the [fax] message.”128Id. § 227(d)(1)(B). Although there is a right to speak anonymously under many circumstances, there are limits on how far a speaker can go in lying about their identity to trick a listener into hearing them out. Importantly, some of these laws require delivery intermediaries to implement the infrastructure for accurate identification. The FCC, for example, requires telephone providers to implement a comprehensive framework against caller-ID spoofing built on the “secure telephone identity revisited” and “signature-based handling of asserted information using tokens” standards, abbreviated “STIR/SHAKEN.”12947 C.F.R. § 64.6300 (2023).
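
Conceptually, the framework has the originating carrier sign the caller-ID claims so that the terminating carrier can verify them before passing the caller ID along to the listener. A deliberately simplified sketch, which substitutes a shared-secret HMAC for the certificate-based signatures and PASSporT token format that the actual standards specify:

```python
import hashlib
import hmac
import json

# Simplified stand-in for STIR/SHAKEN: the originating carrier attests to the
# caller's number by signing it; the terminating carrier verifies before
# passing the caller ID to the listener. (The real framework uses certificate-
# based ES256 signatures over a PASSporT token, not a shared secret.)
CARRIER_KEY = b"hypothetical-signing-key"

def attest(calling_number: str, called_number: str) -> dict:
    claims = {"orig": calling_number, "dest": called_number, "attest": "A"}
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(CARRIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": signature}

def verify(token: dict) -> bool:
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(CARRIER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

token = attest("+12025550123", "+12125550456")
print(verify(token))                      # True: the caller ID is as attested
token["claims"]["orig"] = "+19995550000"  # a spoofer alters the number in transit
print(verify(token))                      # False: the signature no longer matches
```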

C. Hosting

Listener choices play a central role in the justifications for hosting providers’ First Amendment rights—and also in the justification for speakers’ access rights to hosting platforms. These justifications presume that listeners can voluntarily choose to engage with hosted content they want and to avoid hosted content they do not want. In the terminology of Listeners’ Choices, listeners can be asked to bear the necessary “separation costs” because they can easily and inexpensively choose where to click.130Grimmelmann, supra note 1, at 395–96. It follows, then, that unwilling listeners’ objections to content are not a sufficient reason to prevent it from being hosted for willing listeners.

The Supreme Court’s decision in Snyder v. Phelps is a nice example.131See generally Snyder v. Phelps, 562 U.S. 443 (2011). In addition to its funeral protests, the Westboro Baptist Church has a website that is, if anything, more offensive and upsetting. However, a website is even easier for an unwilling listener to avoid. The Church physically picketed at Albert Snyder’s son’s funeral, but he only found the website “during an Internet search for his son’s name.”132Id. at 449 n.1. Unsurprisingly, he pressed only the funeral-protest theory before the Supreme Court and abandoned his tort claims based on the website.133Id. The Court held that the First Amendment protected the Church’s picketing, and the argument is even stronger for the website.

Now consider whether hosting providers can have responsibilities to avoid carrying harmful-to-minors material. To simplify only slightly, the history of anti-indecency regulation is that some adults have tried to restrict minors’ access to sexually themed content by passing upstream filtering laws requiring speakers and hosting platforms to prevent the posting of such content. The courts have responded by invalidating these laws whenever listener-controlled downstream filtering is a plausible alternative. Indeed, it is striking in how many contexts the same basic rationale has worked.

Start with Sable Communications of California, Inc. v. FCC, in which federal law regulated “dial-a-porn” services by prohibiting the transmission of indecent interstate commercial telephone messages.134Sable Commc’ns of Cal., Inc. v. FCC, 492 U.S. 115, 118 (1989). While the prohibition might have been constitutional as to minors, adults have a constitutional right to view indecent but not obscene material. Because the statute prohibited transmission to adults as well, it restricted protected speech, and therefore was unconstitutional.

Put this way, Sable is a classic hosting case of both willing and unwilling listeners. The fact that the speech might reach some unwilling (minor) listeners does not mean that it can be prohibited entirely in such a way as to deprive willing (adult) listeners of it. Indeed, this first-cut explanation will apply perfectly well to almost all of the cases in this section. It is not wrong.

However, Sable is also a filtering case. The FCC had previously considered multiple technologies to block minors without blocking adults, including credit-card verification, access codes that would be provided only following an age verification process, message scrambling requiring a descrambler that only adults would be able to purchase, and customer-premises blocking, in which subscribers could block their phones from being able to call entire exchanges (including the paid numbers over which Sable and other dial-a-porn operators provided their services). The Court specifically identified these technical schemes as plausible “less restrictive means, short of a total ban, to achieve the Government’s interest in protecting minors.”135Id. at 129.

These are all technologies to distinguish adults from minors, but they are also all filtering technologies. All four of them require a user to take an affirmative step to listen to particular speech. Indeed, the act of dialing a phone number itself is an affirmative step that these other mechanisms could piggyback on. This is why I describe Sable as a close cousin to a hosting case. To be sure, Sable Communications was delivering its own speech and not that of third parties, but it was fundamentally sending content to listeners on demand, and in such a way that they could predict the general outlines of the speech they were about to receive. (This fact alone is sufficient to distinguish FCC v. Pacifica Foundation and the other broadcast-indecency cases.136FCC v. Pacifica Found., 438 U.S. 726, 748–49 (1978).)

The same arc is visible in the Supreme Court’s caselaw on indecency on the Internet. The first stop was Reno v. American Civil Liberties Union.137See generally Reno v. Am. C.L. Union, 521 U.S. 844 (1997). The Communications Decency Act prohibited the transmission of indecent or sexual material to minors138Id. at 859–60.—including a good deal of material that was fully constitutional for adults to receive.139Id. at 870–76. The government tried to defend the statute by arguing that it only required intermediaries to refrain from sending such material to minors, while leaving them free to send it to adults.140Id. at 876–79. However, the Court held that “this premise is untenable”—that “existing technology did not include any effective method for a sender to prevent minors from obtaining access to its communications on the Internet without also denying access to adults.”141Id. at 876. In other words, the absence of effective age verification turned a de jure rule against sending indecent material to minors into a de facto rule against hosting it in general.142The Supreme Court is currently reconsidering the constitutional status of age-verification technology, in the context of numerous state laws requiring pornographic sites to implement age verification. See Free Speech Coal., Inc. v. Paxton, 95 F. 4th 263, 284 (5th Cir. 2024), cert. granted, 144 S. Ct. 2714 (2024).

Seven years later, in Ashcroft v. American Civil Liberties Union, the Supreme Court confronted a more narrowly drafted law, the Child Online Protection Act (“COPA”).143See generally Ashcroft v. Am. C.L. Union, 542 U.S. 656 (2004). Again, the statute prohibited sending to minors certain material that was constitutional for adults to receive.144Id. at 661–62. This time, however, the affirmative defenses were broader; providers were protected as long as they required a credit card, digital age verification, or any other “reasonable measures that are feasible under available technology.”145Id. at 662. The Court held that COPA was unconstitutional because “blocking and filtering software”—software operated and controlled by parents to limit the sites their children can access—was a less restrictive and more effective alternative.146Id. at 666–70.

As in Playboy Entertainment Group, the availability of more effective downstream filtering technologies meant that a law requiring upstream filtering was unconstitutional. However, unlike in Playboy Entertainment Group, the downstream filters were made available by third parties. The fact that parents could install their own filtering software meant that website hosts were under no duty to do their own filtering. This is a listener-choice-facilitating rule: Yes, it transfers some of the burdens of filtering from intermediaries to listeners, but it also means that each family can choose for itself how to tune its filters, if any.

In United States v. American Library Ass’n, the Supreme Court upheld the provisions of the Children’s Internet Protection Act (“CIPA”), which conditioned federal funding to schools and libraries on their installation of filtering software.147United States v. Am. Libr. Ass’n, Inc., 539 U.S. 194, 214 (2003). A four-Justice plurality held that the condition was a valid exercise of Congress’s Spending Clause power and that library Internet access was not a public forum.148Id. at 205–06. Meanwhile, Justice Kennedy’s and Justice Breyer’s concurrences in the judgment made nuanced arguments about listeners’ choices. Justice Kennedy’s argument rested on the government’s claim that “on the request of an adult user, a librarian will unblock filtered material or disable the Internet software filter without significant delay”—that is, CIPA allowed willing adult listeners to decide for themselves what sites to view.149Id. at 214. Justice Breyer made a similar point, arguing that an unblocking request was a “comparatively small burden.”150Id. at 220. Whether or not these claims are empirically accurate, the general principle is consistent with a deference to listener-controlled choices about filtering, subject only to the carve-out that minors are not regarded as having the autonomy to choose to view certain material that their elders consider harmful to them.

D. Selection

I have argued that selection generally facilitates listener choices among speech, and that government attempts to alter platforms’ selection decisions interfere with listeners’ practical ability to find the content that they want. This is not to say that platforms’ selection decisions are ideal or give listeners the full degree of choices they might enjoy. Platforms will almost always get some users’ choices wrong some of the time. Every update you scroll past or search result you ignore is a mistake from your perspective. Platform-provided selection is better than the chaos of content without selection, but there is almost always room to improve.151See generally James Grimmelmann, The Virtues of Moderation, 17 Yale J.L. & Tech. 42 (2015) (discussing moderation in online communities).

It is helpful, then, to recognize that the bundling of hosting and selection on today’s social-media platforms may be a bug rather than a feature. The previous subsection argued that separation of hosting and selection could be permissible as a way for government to ensure that speakers are able to be heard by listeners who genuinely want to hear them (hosting) while not forcing their speech on listeners who do not (selection). However, there is another advantage to clearly separating the two functions, whether required by regulation or voluntarily adopted by a platform.

What would a world where social-media platforms separated hosting from selection look like? The short answer is that it would look much more like web search already does. Hosting providers make content available at speakers’ request, with stable URLs at reachable IP addresses, and transmit that content to listeners at listeners’ request. Meanwhile, search engines index the content and provide recommendations of relevant content to listeners, also at listeners’ request. Listeners have a choice of competing search engines to help them make their choice among competing speakers. The system is not perfect—Google has a dominant market share for general web search in the United States—but there is competition for those users who are willing to use other search engines. For example, Bing, DuckDuckGo, and Kagi are three highly creditable alternatives.

Several commentators have described a similar possible separation for social media. One proposal from a group of Stanford researchers is for “middleware,” defined as “software, provided by a third party and integrated into the dominant platforms, that would curate and order the content that users see.”152Francis Fukuyama, Barak Richman, Ashish Goel, Roberta R. Katz, A. Douglas Melamed & Marietje Schaake, Middleware for Dominant Social Platforms: A Technological Solution to A Threat to Democracy, Stan. Cyber Pol’y Ctr. (2021), https://fsi-live.s3.us-west-1.amazonaws.com/s3fs-public/cpc-middleware_ff_v2.pdf [https://perma.cc/SZ9Z-AW3P]; see also Francis Fukuyama, Richard Reisman, Daphne Keller, Aviv Ovadya, Luke Thorburn, Jonathan Stray & Shubhi Mathur, Shaping the Future of Social Media with Middleware, Found. for Am. Innovation (Dec. 2024), https://cdn.sanity.io/files/d8lrla4f/staging/1007ade8eb2f028f64631d23430ee834dac17f8e.pdf/Middleware [https://perma.cc/7TBA-UUR3]. Users on the platform would rely on the platform for hosting speakers’ content, but third-party middleware would do the selection. The first and most obvious virtue of middleware is that it introduces competition into the selection process, even when a platform is “dominant”; a monopoly on hosting does not automatically translate into a monopoly on selection.
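
A minimal sketch may help show what this division of labor involves: the platform hosts posts and exposes them on request, while a middleware provider chosen by the listener decides how those posts are ranked. Everything in this sketch (the Python classes, the sample ranking policy, the data fields) is a hypothetical of my own, not any platform’s actual interface.

```python
# A minimal sketch of the middleware architecture: the platform hosts posts and
# exposes them through an interface; a third-party middleware provider, chosen by
# the listener, decides how those posts are ranked and filtered.
from typing import Callable, List

Post = dict  # e.g., {"author": ..., "text": ..., "topic": ...}

class HostingPlatform:
    """Hosts content at speakers' request and hands it over on listeners' request."""
    def __init__(self) -> None:
        self._posts: List[Post] = []

    def publish(self, post: Post) -> None:
        self._posts.append(post)

    def fetch_candidates(self) -> List[Post]:
        return list(self._posts)

def sports_first_middleware(posts: List[Post]) -> List[Post]:
    """One of many competing selection policies a listener might subscribe to."""
    return sorted(posts, key=lambda p: p.get("topic") != "sports")

def build_feed(platform: HostingPlatform,
               middleware: Callable[[List[Post]], List[Post]]) -> List[Post]:
    """The feed a listener sees is hosting (platform) plus selection (their middleware)."""
    return middleware(platform.fetch_candidates())

platform = HostingPlatform()
platform.publish({"author": "alice", "text": "Election analysis", "topic": "politics"})
platform.publish({"author": "bob", "text": "Match recap", "topic": "sports"})
print([p["author"] for p in build_feed(platform, sports_first_middleware)])  # ['bob', 'alice']
```

The point of the sketch is only that the selection function is swappable: a listener dissatisfied with one middleware policy can substitute another without moving their content or their connections.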

The authors of the Stanford proposal argue that middleware would “dilute[] the enormous control that dominant platforms have in organizing the news and opinion that consumers see.”153Fukuyama, Richman, Goel, Katz, Melamed & Schaake, supra note 152, at 6. This is entirely correct, but I would put the point differently. Middleware pushes control from a platform towards its users, specifically towards users as listeners. An integrated platform benefits from its position at the center of the two-sided market for hosting, even if its selection is disappointing to users. However, when selection is broken out, selection intermediaries will attract users precisely to the extent that they succeed in satisfying those users’ desire for useful advice about what speech to listen to. That is, middleware selection providers compete along the right axis.

A close relative of middleware—or perhaps a subset of it—is “user agents”: software controlled by the end user that takes the content from a platform and curates it. The difference between middleware and a user agent is that middleware is integrated with the platform and takes over the selection function, while a user agent starts from the content selected by the platform and performs a second round of selection on it. For example, an ad blocker integrated into a user’s browser takes the content selected by a website and curates it by removing the ads. I have argued that these user agents are important for user autonomy in deciding what software to run on their computers, and a similar argument applies to users’ autonomy over what speech they receive.154James Grimmelmann, Spyware vs. Spyware: Software Conflicts and User Autonomy, 16 Ohio St. Tech. L.J. 25 (2020).
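
Here is an equally stripped-down sketch of a user agent performing that second round of selection: the platform’s own selection produces a feed, and software on the listener’s device re-filters it, in this case by removing advertisements. The function names and data fields are assumptions made for illustration.

```python
# A sketch of a "user agent" performing a second round of selection on content the
# platform has already selected: a simple ad-stripping filter running on the
# listener's own device.
from typing import List

def platform_selected_feed() -> List[dict]:
    """Stand-in for whatever the platform's own selection algorithm produced."""
    return [
        {"text": "A friend's vacation photos", "is_ad": False},
        {"text": "Sponsored: buy this gadget", "is_ad": True},
        {"text": "Local news story", "is_ad": False},
    ]

def user_agent_filter(feed: List[dict], block_ads: bool = True) -> List[dict]:
    """The listener's own software re-filters the platform's output."""
    if not block_ads:
        return feed
    return [item for item in feed if not item["is_ad"]]

for item in user_agent_filter(platform_selected_feed()):
    print(item["text"])  # prints the two non-ad items
```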

Ben Thompson, a technology and business analyst and journalist, offered a fascinating road-not-taken proposal for Twitter (prior to its transformation into X by Elon Musk).155Ben Thompson, Back to the Future of Twitter, Stratechery (Apr. 18, 2022), https://stratechery.com/2022/back-to-the-future-of-twitter [https://perma.cc/3P3G-94KG]. Thompson argued that Twitter should be split in two: TwitterServiceCo would be “the core Twitter service, including the social graph”; TwitterAppCo would be “all of the Twitter apps and the advertising business.”156Id. TwitterAppCo would pay TwitterServiceCo for application programming interface (“API”) access to post to timelines and read tweets, but so could other companies. As Thompson observes, this solution would “cut a whole host of Gordian Knots”: it would make it easier for new social-media entrants to compete on offering better clients or better content moderation; it would pull many controversial content-moderation decisions closer to the users they directly affect; and it would enable a far greater diversity of content moderation policies (both geographically and based on user preferences).157Id.
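
To see how this split would work mechanically, consider the following sketch, in which a service company owns the social graph and the store of posts and exposes them through an interface, while competing client apps apply their own moderation and ranking. The class and method names are hypothetical; they are not Twitter’s actual API.

```python
# A sketch of the structural split Thompson proposed: a service company that owns
# hosting and the social graph exposes an interface; any number of client apps
# consume it and compete on client experience and moderation.
from typing import Dict, List, Set

class ServiceCo:
    """Owns the core service: the social graph and the store of posts."""
    def __init__(self) -> None:
        self.posts: List[Dict] = []
        self.follows: Dict[str, Set[str]] = {}

    def post(self, author: str, text: str) -> None:
        self.posts.append({"author": author, "text": text})

    def follow(self, follower: str, followee: str) -> None:
        self.follows.setdefault(follower, set()).add(followee)

    def timeline_candidates(self, user: str) -> List[Dict]:
        following = self.follows.get(user, set())
        return [p for p in self.posts if p["author"] in following]

class AppCo:
    """One of many competing clients: each applies its own moderation and ranking."""
    def __init__(self, service: ServiceCo, banned_words: Set[str]) -> None:
        self.service = service
        self.banned_words = banned_words

    def timeline(self, user: str) -> List[Dict]:
        candidates = self.service.timeline_candidates(user)
        return [p for p in candidates
                if not any(w in p["text"].lower() for w in self.banned_words)]

service = ServiceCo()
service.post("alice", "Hello world")
service.post("bob", "SPAM SPAM SPAM")
service.follow("carol", "alice")
service.follow("carol", "bob")
strict_client = AppCo(service, banned_words={"spam"})
print([p["text"] for p in strict_client.timeline("carol")])  # ['Hello world']
```

The competitive pressure in such a design runs in the right direction: client apps attract users only by moderating and ranking in ways those users, as listeners, prefer.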

Needless to say, this was not the route that Musk followed after his acquisition of Twitter—but it is much closer to the route that many post-Twitter social-media services are following. In their own ways, Mastodon, Bluesky, and Threads have embraced a version of the middleware ideal, but with an interesting twist. All three of these systems have a “federated” approach to hosting. Users have a direct affiliation with a server or system; they upload their posts to it, and they read other users’ posts through it.

So far, so familiar. The difference is that these services all federate with other services providing similar functionality to their own users. They copy posts from other servers; they make their own users’ posts available for other servers to copy. The result is that content posted by a user anywhere is available to all users everywhere. As a consequence, any given server has less power over its users; they can migrate to a different server without cutting themselves off from their connections on the social graph. Mastodon, for example, has built-in migration functionality that allows users to change servers and have their contacts automatically update subscriptions to the new one.

Federation also has substantial content-moderation benefits because, like middleware, it pushes content moderation closer to the listeners who are directly affected by it. Each federated server can have its own content-moderation policy—that is, each server can implement its own selection algorithm. This is not quite middleware as such, in that a server combines hosting and selection. However, it is much closer than a fully integrated platform would be. Indeed, once it hits a basic baseline of technical competence and reliability, a federated server’s principal differentiator is its moderation policy. So here, too, users who prefer a particular set of policies as listeners have the ability to choose on that basis. This, too, is speech-promoting.
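
A short sketch can make the federated structure concrete: each server hosts its own users’ posts, copies in posts from peer servers, and applies its own moderation policy when assembling feeds. The server names and policies below are invented for illustration and do not correspond to how Mastodon, Bluesky, or Threads are actually implemented.

```python
# A sketch of federation: each server hosts its own users' posts, copies in posts
# from peer servers, and applies its own moderation policy when building feeds.
from typing import Callable, Dict, List

class FederatedServer:
    def __init__(self, name: str, policy: Callable[[Dict], bool]) -> None:
        self.name = name
        self.policy = policy            # this server's own moderation rule
        self.local_posts: List[Dict] = []
        self.remote_posts: List[Dict] = []

    def publish(self, author: str, text: str) -> Dict:
        post = {"author": f"{author}@{self.name}", "text": text}
        self.local_posts.append(post)
        return post

    def federate_with(self, other: "FederatedServer") -> None:
        """Copy the peer's posts; a user here can read content posted anywhere."""
        self.remote_posts.extend(other.local_posts)

    def feed(self) -> List[Dict]:
        """Apply this server's own selection policy to everything it has."""
        return [p for p in self.local_posts + self.remote_posts if self.policy(p)]

family_friendly = FederatedServer("family.example", policy=lambda p: "rant" not in p["text"])
anything_goes = FederatedServer("open.example", policy=lambda p: True)
anything_goes.publish("bob", "A long rant about everything")
anything_goes.publish("dana", "Photos from my hike")
family_friendly.federate_with(anything_goes)
print([p["text"] for p in family_friendly.feed()])  # ['Photos from my hike']
print(len(anything_goes.feed()))                    # 2: its policy lets everything through
```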

The most careful theorization of this model is Mike Masnick’s Protocols, Not Platforms.158Mike Masnick, Protocols, Not Platforms: A Technological Approach to Free Speech, Knight First Amend. Inst. at Colum. Univ. (Aug. 21, 2019), https://knightcolumbia.org/content/protocols-not-platforms-a-technological-approach-to-free-speech [https://perma.cc/ET69-VQ4E]. Masnick argues that the key move is to separate a platform into a standardized open protocol and a particular proprietary implementation of that protocol. The interoperable nature of the protocol is what ensures that implementations are genuinely competing on the basis of users’ preferences over content, and not just based on the lock-in network effects of a single platform that has the largest userbase. That is, interoperability enables migration, migration enables competition, and competition promotes speech values. Masnick gives a detailed argument for why this model promotes diversity in users’ speech preferences. I would add only that this diversity is primarily diversity of users as listeners.

To finish, I would like to note a type of selection that can come closer to the middleware goal of facilitating listener choice, even within proprietary platforms. Shareable blocklists (a) allow users to make and share a list of users they do not want to see or receive any content from, and (b) allow other users to import and use another’s shared blocklist.159See generally R. Stuart Geiger, Bot-Based Collective Blocklists in Twitter: The Counterpublic Moderation of Harassment in a Networked Public Space, 19 Info. Commc’n & Soc’y 787 (2016). Blocking is a relatively crude form of selection; it does not necessarily work against abusers or spammers who change their identity or use sock puppet accounts, nor does it let through individual worthwhile posts from users who are otherwise blocked. Still, blocklists satisfy the key desideratum: they are listener-controlled filters. Shareable blocklists have been used for email, on Twitter (before X discontinued this feature), and for ad-blocking on the web, among other settings.
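
The mechanism is simple enough to sketch directly: one listener curates and exports a blocklist, and another imports it and layers it on top of their own blocks. The export format and class names below are illustrative assumptions rather than any platform’s actual feature.

```python
# A sketch of shareable blocklists: one listener curates a list of accounts to
# block, exports it, and another listener imports it alongside their own blocks.
from typing import List, Set

class BlocklistUser:
    def __init__(self) -> None:
        self.blocked: Set[str] = set()

    def block(self, account: str) -> None:
        self.blocked.add(account)

    def export_blocklist(self) -> List[str]:
        """Share the listener's blocklist so others can reuse it."""
        return sorted(self.blocked)

    def import_blocklist(self, shared: List[str]) -> None:
        """Adopt someone else's shared blocklist in addition to one's own."""
        self.blocked.update(shared)

    def visible_feed(self, posts: List[dict]) -> List[dict]:
        return [p for p in posts if p["author"] not in self.blocked]

curator, newcomer = BlocklistUser(), BlocklistUser()
curator.block("harasser1")
curator.block("spam_bot_42")
newcomer.import_blocklist(curator.export_blocklist())
posts = [{"author": "harasser1", "text": "abuse"}, {"author": "friend", "text": "hello"}]
print([p["author"] for p in newcomer.visible_feed(posts)])  # ['friend']
```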

Conclusion

Internet media come in different bundles of functions than pre-Internet media did. Offline, broadcast combined transmission and selection in a way that made it appear that there was a natural connection between speakers’ access to a platform and listeners’ interests, and that both were naturally opposed to media intermediaries’ own speech claims. All of this was true enough in that context, given the structural constraints of the broadcast medium.

However, the assumption that listeners and speakers are united against intermediaries is simply not true when applied beyond the broadcast context. Instead, we frequently find that intermediaries are listeners’ allies, providing them with useful assistance in finding and obtaining the speech of interest to them—and that they form a united front against speakers trying to push their speech on unwilling listeners. Applying the broadcast analogy in this context can result in making unwilling listeners into captive audiences, all while claiming that it is necessary in the Orwellian name of listeners’ rights.

Instead, I have argued that to think clearly about speech on the Internet, we must distinguish between the functions of delivering, hosting, and selecting content, and that we must see each of them from listeners’ point of view. In such a setting, carefully drafted neutrality rules on delivering and hosting can be genuinely speech-facilitating because they promote listeners’ choices. In contrast, most attempts to regulate selection interfere with listeners’ choices. There are a few exceptions—structural separation, interoperability and middleware, restrictions on self-preferencing, and chronological feed options—but all of them are about giving listeners genuine choice among selection intermediaries, or about ensuring loyalty within the intermediary-listener relationship. Beyond that, selection intermediaries should largely be free to select as they see fit, and listeners should largely be free to use them or not, as they see fit.

Seeing the Internet from listeners’ perspective is a radical leap. It requires making claims about the nature of speech and about where power lies online, which can seem counterintuitive if you are coming from the standard speaker-oriented First Amendment tradition. But once you have made that leap, and everything has snapped into focus again, it is impossible to unsee.160See Eugene Volokh, Cheap Speech and What It Will Do, 104 Yale L.J. 1805, 1834–36 (1995) (presciently arguing that the Internet will lead to an abundance of speech and shift control over that speech from speakers to listeners).

This is not to say that listeners should always get what they want, any more than speakers should. A democratic self-governance theory of the First Amendment might be acutely concerned that groups of like-minded listeners will wall themselves off inside echo chambers and filter bubbles. This is a powerful argument, and to refute it by appealing to a pure listeners’ choice principle is to beg the question. However, even if a shift to listeners’ perspective cannot resolve the debate between self-governance theories and individual-liberty theories—between collective needs and individual choices—such a shift can still clarify these debates. The fear of echo chambers and filter bubbles is fundamentally a concern about listeners’ choices, not one about speakers’ rights. Focusing on what listeners want, and on the consequences of giving it to them, makes clear what is really at stake. It also sheds light on the tradeoffs involved in adopting one media-policy regime as opposed to another.

Listeners online live in a world where countless chattering speakers vie for their attention using every dishonest and manipulative tactic they can—partisans, fraudsters, advertisers, and spammers of every stripe. Selection intermediaries are listeners’ best, and in some cases their only, line of defense against the cacophony; relying on them can be the only way to tune out the racket and hear what they actually want to hear. That reliance gives intermediaries immense power over listeners, but what listeners need is to moderate that power and tip the balance more in their favor, not to eliminate the intermediaries entirely. Being more protective of platforms’ selection decisions gives us more room to be skeptical of their hosting and delivery decisions; it lets us better distinguish when speakers have legitimate claims against platforms and when they do not.

Listeners are at the center of the First Amendment, and never more so than online. It is time for First Amendment theory and doctrine to get serious about listeners’ choices among speech on online platforms.

98 S. Cal. L. Rev. 1231

* Tessler Family Professor of Digital and Information Law, Cornell Law School and Cornell Tech. I presented an earlier version of this article at The First Amendment and Listener Interests symposium at the University of Southern California on November 8–9, 2024. My thanks to the participants and organizers, and to Aislinn Black, Jane Bambauer, Kat Geddes, Erin Miller, Blake Reid, Benjamin L.W. Sobel, and David Gray Widder. The final published version of this article will be available under a Creative Commons license.