In the United States, there are now two systems to adjudicate disputes about harmful speech. The first is older and more established: the legal system in which judges apply constitutional law to limit tort claims alleging injuries caused by speech. The second is newer and less familiar: the content-moderation system in which platforms like Facebook implement the rules that govern online speech. These platforms are not bound by the First Amendment. But, as it turns out, they rely on many of the tools used by courts to resolve tensions between regulating harmful speech and preserving free expression—particularly the entangled concepts of “public figures” and “newsworthiness.”
This Article offers the first empirical analysis of how judges and content moderators have used these two concepts to shape the boundaries of free speech. It first introduces the legal doctrines developed by the “Old Governors,” exploring how courts have shaped the constitutional concepts of public figures and newsworthiness in the face of tort claims for defamation, invasion of privacy, and intentional infliction of emotional distress. The Article then turns to the “New Governors” and examines how Facebook’s content-moderation system channeled elements of the courts’ reasoning for imposing First Amendment limits on tort liability.
By exposing the similarities and differences between how the two systems have understood these concepts, this Article offers lessons for both courts and platforms as they confront new challenges posed by online speech. It exposes the pitfalls of using algorithms to identify public figures; explores the diminished utility of setting rules based on voluntary involvement in public debate; and analyzes the dangers of ad hoc and unaccountable newsworthiness determinations. Both courts and platforms must adapt to the new speech ecosystem that companies like Facebook have helped create, particularly the way that viral content has shifted normative intuitions about who deserves harsher rules in disputes about harmful speech, be it in law or content moderation.
Finally, the Article concludes by exploring what this comparison reveals about the structural role platforms play in today’s speech ecosystem and how it illuminates new solutions. These platforms act as legislature, executive, judiciary, and press—but without any separation of powers to establish checks and balances. A change to this model is already occurring at one platform: Facebook is creating a new Oversight Board that will, it is hoped, provide users due process in the platform’s speech decisions and transparency about how content-moderation policy is made, including how concepts related to newsworthiness and public figures are applied.
During the 2016 Presidential campaign, the average adult saw at least one “fake news” item on social media. The people distributing the articles had a variety of aims and operated from a variety of locations. Among the locations we know about, some were in Los Angeles, others in Macedonia, and, yes, others were in Russia. The Angelenos aimed to make money and sow chaos. The Macedonians wanted to get rich. And the Russians aimed to weaken Hillary Clinton’s candidacy for president, foster division around fraught social issues, and make a spectacle out of the U.S. election. To these ends, the Russians mobilized trolls, bots, and so-called “useful idiots,” along with sophisticated ad-tracking and micro-targeting techniques to strategically distribute and amplify propaganda. The attacks are ongoing.
Cheap distribution and easy user targeting on social media enable the rapid spread of disinformation. Disinformative content, like other online political advertising, is “micro-targeted” at narrow segments of the electorate based on their political views or biases. The targeting aims to polarize and fragment the electorate. Tracing the money behind this kind of messaging is next to impossible under current regulations and advertising platforms’ current policies. Voters’ inability to “follow the money” has implications for our democracy, even in the absence of disinformation. And of course, an untraceable flood of disinformation prior to an election stands to undermine voters’ ability to choose the candidate that best aligns with their preferences.
On March 18, 2016, and March 22, 2016, a jury awarded Terry Bollea (a.k.a. Hulk Hogan) a total of $140 million in compensatory and punitive damages against Gawker Media for posting less than two minutes of a video of Hulk Hogan having sex with his best friend’s wife. The award was based upon a finding that Gawker had intentionally invaded Hulk Hogan’s privacy by posting the video online.
The case has been receiving extensive media coverage because it is a tawdry tale involving a celebrity, betrayal, adultery, sex, and the First Amendment. The story would be better if all of its characters were not, at best, anti-heroes. Hulk Hogan had sex with his best friend’s wife. Hulk Hogan’s sex partner committed adultery. Hulk Hogan’s best friend, the cuckold, allegedly was the person who videotaped the encounter and then leaked it to Gawker. And, after sleeping with his best friend’s wife, Hulk Hogan had the audacity to sue the cuckold for allegedly leaking the sex tape to Gawker, with the cuckold settling that claim by paying Hulk Hogan $5000. The cuckold then asserted his Fifth Amendment right against self-incrimination to avoid testifying in the case against Gawker. On the other side of the story, Gawker, the entity that posted the sex tape online, operates a “media gossip” website and does not look very good attempting to wear the cloak of the First Amendment by claiming that the contents of the Hulk Hogan sex video, as opposed to the simple fact that the tape existed, were newsworthy. Nor did it help Gawker’s image when Gawker’s editor testified that he would only draw the line against posting sex videos if the video included a child under four years old. It is hard to root for any of the parties in the case.
Thirty-two-year-old Eric Rinehart was a former police officer and member of the Indiana National Guard. He was going through his second divorce, he had custody of his seven-year-old son, and he had no criminal record. During this time, perhaps against his better judgment, he began two sexual relationships with young women, aged sixteen and seventeen. Although the young women were much younger than Rinehart, both sexual relationships were consensual and entirely legal: under Indiana state law, the legal age of consent for sexual intercourse is sixteen.
During the course of his relationship with one of the young women, Rinehart lent her his digital camera after she suggested, based on her past experiences with other partners, that she use it to take provocative photographs of herself. When she returned the camera, Rinehart found pictures of the young woman engaged in “sexually explicit conduct.” Following this event, Rinehart photographed the same young woman engaged in similar sexual activities. In addition, Rinehart created “short videos of himself and [the second young woman] engaged in sexual intercourse.” All the photos and videos were taken with the knowledge and consent of his sexual partners. All of the images were uploaded onto Rinehart’s home computer, but none were distributed to a third party, nor was there evidence that Rinehart intended to do so.
This Note will analyze how the cyberstalking statute applies to a particular form of new media, Twitter, within the framework of a First Amendment analysis. While the analysis within this Note is limited to the interplay between Twitter and the cyberstalking statute, the principles discussed, policies weighed, and doctrines explored also apply to the regulation of distressing speech on the Internet generally. Part II examines Twitter, focusing on how Twitter users interact and the effect this has on First Amendment principles. Part III looks closely at the crime of cyberstalking and the cyberstalking statute. It explores the definition of cyberstalking, the difficult nature of cyberstalking regulation, and the harms cyberstalking can cause. It then discusses the cyberstalking statute (including the 2006 amendment at issue in Cassidy), how courts have construed the statute, and what speech the statute criminalizes. Part IV applies First Amendment doctrine to the cyberstalking statute’s regulation of Twitter. This part analyzes the following: how the First Amendment applies to Internet fora; vagueness and overbreadth challenges; the protection of speech covered by the statute; what level of scrutiny should apply to the statute; whether the statute serves to protect a “captive audience”; and how the statute holds up under each level of scrutiny. Further, after laying out these First Amendment principles, Part IV critiques the district court opinion issued in the Cassidy case. Part V proposes potential changes to the statute to ensure it does not run afoul of the First Amendment. Part VI concludes by refocusing on general First Amendment principles and the interests at issue in this case, and it emphasizes that protecting the captive audience may be the most appropriate role for cyberstalking laws to serve.
In an opinion that many would argue gave birth to modern free speech law, Justice Oliver Wendell Holmes, Jr. described the purpose of the First Amendment as protecting the “free trade in ideas [because] the best test of truth is the power of the thought to get itself accepted in the competition of the market.” Thus was born the “marketplace of ideas” metaphor that has heavily influenced the subsequent development of free speech jurisprudence. In another seminal opinion, Justice Louis Brandeis emphasized that “a state is, ordinarily, denied the power to prohibit dissemination of social, economic and political doctrine which a vast majority of its citizens believes to be false and fraught with evil consequence” because such prohibitions interfere with the “public discussion,” which is at the heart of deliberative democracy. More recently, the Supreme Court has articulated the view that “[u]nder the First Amendment, there is no such thing as a false idea. However pernicious an opinion may seem, we depend for its correction not on the conscience of judges and juries but on the competition of other ideas.” These three statements constitute some of the most famous declarations of First Amendment liberty in the history of the Supreme Court. What is noteworthy about these opinions, however, is that they all focus on the freedom to articulate ideas, including opinions and doctrine. What they do not address is the treatment of facts in free speech law. It is true that the most recent case quoted above, Gertz v. Robert Welch, Inc., goes on after denying the existence of false ideas to state that “there is no constitutional value in false statements of fact,” but this is a reference only to false facts. What about true facts? What is their role in the pantheon of free speech? Are they equivalent to opinions, or do facts warrant distinct First Amendment analysis? How does factual speech relate to the underlying purposes of the First Amendment? 
And have the answers to these questions shifted in light of the rise of the Internet as the dominant modern avenue for the dissemination of speech? These are the questions that this Article explores.
This Note will propose a new categorical exclusion from the First Amendment for speech that specifically details how to commit a crime and, as a whole, lacks serious literary, artistic, political, or scientific value. This exclusion, the crime plans exclusion, may be tailored in various ways to reflect an accommodation of free speech principles and government interests. Ultimately, this Note will advocate a two-plank definition of crime plans speech requiring (1) that the speech be sufficiently specific that a reasonable person who has never committed the described crime could follow the instructions and expect to carry out the crime or conceal evidence, and (2) that the speech, “as a whole, lacks serious literary, artistic, political, or scientific value,” which will be referred to collectively as “redemption value.”
While this Note will advocate a new categorical exclusion, it will also suggest that crime plans speech can be denied First Amendment protection under traditional strict scrutiny analysis. Moreover, when crime-facilitating speech does not fall into the crime plans exclusion, it may still be denied First Amendment protection under strict scrutiny analysis if the state’s compelling interest in prohibiting that speech outweighs the individual’s free speech interest. Though strict scrutiny analysis can often yield the same result as a categorical exclusion, categorically excluded speech does not have presumptive constitutional protection and is subject only to the minimal rational basis test. Thus, the argument structure of the categorical exclusion conveys a message that specific crime-facilitating speech with virtually no noncriminal redemption value is undeserving of First Amendment protection.
Proposition 8, the California ballot measure that amended the state constitution to deny marriage to same-sex couples, passed by a small margin in November 2008. The campaign was contentious, well funded by both sides, and the subject of much media attention. After Proposition 8 passed, however, the debate about same-sex marriage in California was far from over. Shortly after the election, Proposition 8 opponents organized protests against certain Proposition 8 supporters and their employers throughout California and in other states. For example, opponents protested at the Church of Jesus Christ of Latter-day Saints in Los Angeles because the church and its members raised a significant amount of money to support Proposition 8. Opponents also organized boycotts of businesses whose owners or employees donated to support Proposition 8. Several of these protests had negative repercussions for donors. For example, following threats of boycotts of his musical works and his employer, Scott Eckern, the longtime artistic director of the California Musical Theater, resigned from his position after it was revealed that he donated $1000 to Proposition 8. Marc Shaiman, the composer of the music for Hairspray, told Eckern that he would not let his work be performed in the theater due to Eckern’s support for Proposition 8. U.S. law requires a secret ballot for both candidate and issue elections, so how did opponents of Proposition 8 identify the donors to Proposition 8? The answer lies in disclosure laws. In California, as in most states, campaigns must publicly disclose certain information about individuals who donate to a ballot measure or candidate. California’s Political Reform Act of 1974, as amended, provides that all campaign donations of $100 or more must be published on the Secretary of State’s website, allowing the public to easily search for the names of campaign donors online.
Further, not only must the donor’s name and the amount of the contribution be disclosed, but the donor’s street address, occupation, and employer’s name—or, if self-employed, the name of the donor’s business—must also be disclosed. On the federal level, campaign contributions to federal candidates are also now easily accessible to the public online. Federal law requires disclosure of individuals who contribute $200 or more to a candidate. This information can be viewed online through the Federal Election Commission’s (“FEC’s”) website, as well as on other websites. Not only has technology increased the availability of donor information online, but political entrepreneurs have also taken the FEC’s campaign finance data and made it even more accessible online, allowing users to search the data by multiple categories. For example, the Huffington Post, a popular blog, runs a search engine called “Fundrace 2008,” which allows a user to search for donors to 2008 presidential candidates by a donor’s first or last name, address, city, or employer. The website boasts about the easy access to the political leanings of nearly anyone a user knows of: “Want to know if a celebrity is playing both sides of the fence? Whether that new guy you’re seeing is actually a Republican or just dresses like one?”
Few areas of constitutional law remain more captive to the subjective whims of judicial preference than the First Amendment’s religion clauses. This condition results in part from the Court’s notorious inability to agree on a uniform standard of review under either the Free Exercise or Establishment Clauses. This instability matters because, as Justice Scalia notes, “[w]hat distinguishes the rule of law from the dictatorship of a shifting Supreme Court majority is the absolutely indispensable requirement that judicial opinions be grounded in consistently applied principle.” With respect to the religion clauses, a stabilizing principle may be found in political process theory, a set of ideas that, while generally familiar in constitutional theory, has yet to be comprehensively applied to either free exercise or establishment controversies.
Process theory embraces “[t]he notion that courts should exercise judicial review almost exclusively to protect democracy and guarantee the fairness of legal processes.” Conversely, process theory rejects the notion that courts should enforce “substantive” policy preferences that cannot be justified on these “process-oriented” grounds, as they are more properly left to the vicissitudes of the political branches. Borrowing heavily from the literature of civic republicanism, this Note argues that process theory should be broadened to account for the unique contributions of religion to the political process. This Note further argues that, using process theory, courts should interpret the First Amendment’s religion clauses as process-oriented safeguards for the political contributions of religious faith and institutions. Finally, courts should reject a jurisprudence that employs the religion clauses as vehicles for the enforcement of substantive conceptions of free exercise and disestablishment.
As word of the decision in Hosty v. Carter spread in the summer of 2005, many college journalists were outraged. To them, it was the end of free speech as they knew it. In Hosty, the en banc Seventh Circuit became the first court to apply to a college the framework of the Supreme Court’s Hazelwood decision, which for nearly twenty years had given high school administrators wide latitude to restrict the content of student-run newspapers. As a result, many college journalists believed they were powerless against university presidents and deans, who they believed could charge into their newsrooms, lock up their computers, and even stop their presses – all with the blessing of the First Amendment.
In truth, the outrage did not begin with Hosty. It began seventeen years earlier with the Supreme Court’s decision in Hazelwood School District v. Kuhlmeier. In Hazelwood, the Supreme Court held that in high schools, where school-sponsored student speech does not occur in a public forum, the school may regulate the content of that speech for reasons that are “reasonably related” to any of a range of “legitimate pedagogical concerns.” Thus, many people believed Hazelwood gave high school administrators nearly free rein to stop students from participating in one of our nation’s most sacred traditions – a free and independent press. And in Hazelwood, the Supreme Court explicitly left open the possibility that the case’s analytical framework might be applied to student publications in colleges too. But until June 2005, no court had dared to do so. Hosty was the first.