In the United States, there are now two systems to adjudicate disputes about harmful speech. The first is older and more established: the legal system in which judges apply constitutional law to limit tort claims alleging injuries caused by speech. The second is newer and less familiar: the content-moderation system in which platforms like Facebook implement the rules that govern online speech. These platforms are not bound by the First Amendment. But, as it turns out, they rely on many of the tools used by courts to resolve tensions between regulating harmful speech and preserving free expression—particularly the entangled concepts of “public figures” and “newsworthiness.”
This Article offers the first empirical analysis of how judges and content moderators have used these two concepts to shape the boundaries of free speech. It first introduces the legal doctrines developed by the “Old Governors,” exploring how courts have shaped the constitutional concepts of public figures and newsworthiness in the face of tort claims for defamation, invasion of privacy, and intentional infliction of emotional distress. The Article then turns to the “New Governors” and examines how Facebook’s content-moderation system channeled elements of the courts’ reasoning for imposing First Amendment limits on tort liability.
By exposing the similarities and differences between how the two systems have understood these concepts, this Article offers lessons for both courts and platforms as they confront new challenges posed by online speech. It exposes the pitfalls of using algorithms to identify public figures; explores the diminished utility of setting rules based on voluntary involvement in public debate; and analyzes the dangers of ad hoc and unaccountable newsworthiness determinations. Both courts and platforms must adapt to the new speech ecosystem that companies like Facebook have helped create, particularly the way that viral content has shifted normative intuitions about who deserves harsher rules in disputes about harmful speech, be it in law or content moderation.
Finally, the Article concludes by exploring what this comparison reveals about the structural role platforms play in today’s speech ecosystem and how it illuminates new solutions. These platforms act as legislature, executive, judiciary, and press—but without any separation of powers to establish checks and balances. A change to this model is already occurring at one platform: Facebook is creating a new Oversight Board that will hopefully provide due process to users on the platform’s speech decisions and transparency about how content-moderation policy is made, including how concepts related to newsworthiness and public figures are applied.
One of the cornerstones of First Amendment doctrine is the general rule that content-based restrictions on all speech—apart from a few narrow categories of low-value speech—are evaluated under strict scrutiny. As many have observed, this rule has produced considerable strain within the doctrine because it applies the same onerous standard throughout the vast and varied expanse of all non-low-value speech, which includes not only the core, highest-value speech for which such stringent protection is clearly warranted, but also less valuable speech to which the application of strict scrutiny is often dissonant. Nevertheless, traditional accounts maintain that this blunt, highly prophylactic approach is necessary given the significant costs and risks associated with granting courts greater discretion to make value-based speech distinctions.
This Article challenges these accounts. I argue that courts should more explicitly recognize a broad conceptual category of what I call “middle-value speech”—that is, speech that falls within the hazy center of the speech-value spectrum between clearly high-value speech, like political speech or truthful news reporting, and clearly low-value speech, like true threats or incitement. The scope of such speech is vast, potentially encompassing speech as diverse as public disclosures of sensitive private data, sexually explicit speech, professional advice, search engine results, and false statements of fact. Yet current First Amendment doctrine broadly fails to recognize middle-value speech as a discrete conceptual category, and this failure has produced substantial costs in the form of doctrinal distortion and a lack of analytical transparency. These costs have grown precipitously—and will continue to grow—in conjunction with the First Amendment’s broad expansion beyond the familiar precincts of core ideological expression into increasingly eclectic varieties of speech.
The harm principle allows government to limit liberties as necessary to prevent harm. Does the freedom of speech present an exception to the harm principle? Most American scholars say yes. It is common practice to proclaim proudly that the U.S. Constitution protects speech even when it causes harm. But two tenets of the author of the harm principle himself suggest that, today, this answer may be too glib. For John Stuart Mill, the enhanced protection of speech is only a means to protect thought, and moreover, opinions lose their immunity if they cross over from thought into action. Together, these two points invite us to consider the possibility that the special protection we have come to afford, even to a newly broadened range of speech that goes well beyond thought, may be misplaced. There are cases, I will argue, in which we should be slow to assume that society is necessarily without power to protect itself from harm that expression may cause.
Thirty-two-year-old Eric Rinehart was a former police officer and member of the Indiana National Guard. He was going through his second divorce, he had custody of his seven-year-old son, and he had no criminal record. During this time, perhaps against his better judgment, he began two sexual relationships with young women, aged sixteen and seventeen. Although both young women were significantly younger than Rinehart, both relationships were consensual and entirely legal. Under Indiana state law, the legal age of consent for sexual intercourse is sixteen.
During the course of his relationship with one of the young women, Rinehart lent her his digital camera after she suggested, based on her past experiences with other partners, that she use it to take provocative photographs of herself. When she returned the camera, Rinehart found pictures of the young woman engaged in “sexually explicit conduct.” Following this event, Rinehart photographed the same young woman engaged in similar sexual activities. In addition, Rinehart created “short videos of himself and [the second young woman] engaged in sexual intercourse.” All the photos and videos were taken with the knowledge and consent of his sexual partners. All of the images were uploaded onto Rinehart’s home computer, but none were distributed to a third party, nor was there evidence that Rinehart intended to do so.
In an opinion that many would argue gave birth to modern free speech law, Justice Oliver Wendell Holmes, Jr. described the purpose of the First Amendment as protecting the “free trade in ideas [because] the best test of truth is the power of the thought to get itself accepted in the competition of the market.” Thus was born the “marketplace of ideas” metaphor that has heavily influenced the subsequent development of free speech jurisprudence. In another seminal opinion, Justice Louis Brandeis emphasized that “a state is, ordinarily, denied the power to prohibit dissemination of social, economic and political doctrine which a vast majority of its citizens believes to be false and fraught with evil consequence” because such prohibitions interfere with the “public discussion,” which is at the heart of deliberative democracy. More recently, the Supreme Court has articulated the view that “[u]nder the First Amendment, there is no such thing as a false idea. However pernicious an opinion may seem, we depend for its correction not on the conscience of judges and juries but on the competition of other ideas.” These three statements constitute some of the most famous declarations of First Amendment liberty in the history of the Supreme Court. What is noteworthy about these opinions, however, is that they all focus on the freedom to articulate ideas, including opinions and doctrine. What they do not address is the treatment of facts in free speech law. It is true that the most recent case quoted above, Gertz v. Robert Welch, Inc., goes on after denying the existence of false ideas to state that “there is no constitutional value in false statements of fact,” but this is a reference only to false facts. What about true facts? What is their role in the pantheon of free speech? Are they equivalent to opinions, or do facts warrant distinct First Amendment analysis? How does factual speech relate to the underlying purposes of the First Amendment? And have the answers to these questions shifted in light of the rise of the Internet as the dominant modern avenue for the dissemination of speech? These are the questions that this Article explores.
The Federal Trade Commission (“FTC”) adopted new disclosure rules in 2009 for “consumer-generated media.” The “Guides Concerning the Use of Endorsements and Testimonials in Advertising” warn bloggers, people who post on social networking sites, and other generators of new media content that they must disclose when they receive payments or free products related to what they write about. Failure to disclose material connections can result in fines of up to $10,000 for each violation.
The FTC endorsement rules do not apply to journalists who work for newspapers, magazines, or television and radio stations. When the guides were released, new media journalists protested that the government was creating a two-tiered regulatory regime that singled them out for unfavorable treatment. Jack Shafer, the media critic for Slate, called the rules “preposterous” and denounced “[t]he FTC’s [m]ad [p]ower [g]rab.”
In 2008, a group of taggers known as the Metro Transit Assassins (“MTA”) painted a giant “MTA” tag in the Los Angeles riverbed that was visible from downtown office buildings and freeways. The three-story-high tag extended for half a mile along the riverbed and used an estimated four hundred gallons of paint. The government projected that it would cost $3.7 million to clean up the tag, including taking the necessary precautions to contain the toxic paint and runoff during cleanup. A graffiti historian explained that the tag was “definitely a statement, . . . [t]o do something that big and bold it takes organization.” Seven alleged MTA members were arrested in 2009 for the tag. During the arrests and ensuing searches, law enforcement found specialized tools that enable such large-scale, logistically difficult tagging: high-pressure fire extinguishers filled with paint.
In response to the riverbed tag and a multitude of other MTA graffiti vandalism throughout Los Angeles, the Los Angeles City Attorney filed a complaint in July 2010 seeking a civil injunction against MTA and its members. If granted, the injunction would, among other things, prohibit possession of graffiti tools, prohibit public association with other members of MTA, prohibit profiting from graffiti, and impose a curfew on MTA members. First Amendment challenges to the injunction have already begun: in May 2011, the American Civil Liberties Union (“ACLU”) filed defense motions containing First Amendment challenges to the injunction, but they were denied by the Los Angeles Superior Court. In August 2011, the California Court of Appeal denied defense motions challenging the Superior Court ruling.
This Note will propose a new categorical exclusion from the First Amendment for speech that specifically details how to commit a crime and, as a whole, lacks serious literary, artistic, political, or scientific value. This exclusion, the crime plans exclusion, may be tailored in various ways to reflect an accommodation of free speech principles and government interests. Ultimately, this Note will advocate a two-plank definition of crime plans speech requiring (1) that the speech be sufficiently specific so that a reasonable person who has never committed the described crime could follow the instructions and expect to carry out the crime or conceal evidence, and (2) that the speech, “as a whole, lacks serious literary, artistic, political, or scientific value,” which will be referred to collectively as “redemption value.”
While this Note will advocate a new categorical exclusion, it will also suggest that crime plans speech can be denied First Amendment protection under traditional strict scrutiny analysis. Moreover, when crime-facilitating speech does not fall into the crime plans exclusion, it still may be denied First Amendment protection under strict scrutiny analysis if the state’s compelling interest in prohibiting that speech outweighs the individual’s free speech interest. Though strict scrutiny analysis can often yield the same result as a categorical exclusion, the two differ in structure: categorically excluded speech has no presumptive constitutional protection and is subject only to the minimal rational basis test. Thus, the argument structure of the categorical exclusion conveys a message that specific crime-facilitating speech with virtually no noncriminal redemptive value is undeserving of First Amendment protection.
Pornography dominates the discussion about free speech on the Internet. Congress has twice enacted legislation aimed at preventing minors from getting access to online pornography. Federal and local law enforcement agencies have dramatically increased efforts to combat the spread of child pornography. The Department of Justice has renewed attempts to crack down on obscene material after years of lax enforcement.
Yet the debate about online pornography has overshadowed another disturbing Internet phenomenon. The Internet has facilitated growth in the availability of extremely violent images and videos. A little online searching reveals depictions of torture, of both humans and animals; videos depicting murders and executions, including beheadings by Islamic militants; videos of brutal amateur street fights, some consensual, but many not; videos of minors engaged in schoolyard fights and beatings, some posted to humiliate the victims; and videos of cockfighting. Online retailers have sold videos of dog fights and extremely violent video games, including one in which the player is tasked with making graphic snuff videos and another which allows the player to play fetch with dogs using human heads.
Over a decade after being arrested in a western Montana cabin, Theodore Kaczynski is once again grabbing headlines. Although he is currently in a federal maximum-security prison serving the life sentence that he received for committing the Unabomber crimes, Kaczynski is now engaged “in a legal battle with the federal government and a group of his victims over the future of [his] handwritten papers.” The government has proposed selling “sanitized versions of the materials” via an Internet auction in order to raise money for a group of his victims, and Kaczynski is fighting that plan.
At issue, largely, is the extent of the government’s power under the Victim and Witness Protection Act of 1982 (“VWPA”) and further, what the government may do with the property it seized from Kaczynski. This property includes his “handwritten . . . journals, diaries and drafts of his anti-technology manifesto . . . [which] contain blunt assessments of 16 mail bombings from 1978 to 1995 that killed 3 people and injured 28, as well as his musings on the suffering of victims and their families.” Moreover, due to its unique set of facts, Kaczynski’s case also provides an intriguing opportunity to evaluate whether the VWPA violates the First Amendment to the U.S. Constitution or conflicts with the Copyright Act of 1976 (“Copyright Act”), and to explore the fascinating interplay between these two areas of law, both of which provide protection for individuals’ free expression.