We are now some twenty years into the story of the Internet’s bold challenge to law and the legal system. In the early 2000s, Jack Goldsmith and I wrote Who Controls the Internet, a book that might be understood as a chronicle of some of the early and more outlandish stages of the story. Professors Pollman and Barry’s excellent article, Regulatory Entrepreneurship, adds to and updates that story with subsequent chapters and a sophisticated analysis of the strategies more recently employed to avoid law using the Internet in some way. While Pollman and Barry’s article stands on its own, I write this Article to connect these two periods. I also wish to offer a slightly different normative assessment of the legal avoidance efforts described here, along with my opinion as to how law enforcement should conduct itself in these situations.
Behind regulatory entrepreneurship lies a history, albeit a short one, and one that has much to teach us about the very nature of law and the legal system as it interacts with new technologies. Viewed in context, Pollman and Barry’s “regulatory entrepreneurs” can be understood as, in fact, a second generation of entrepreneurs who learned lessons from an earlier generation that was active in the late 1990s and early 2000s. What both generations have in common is the idea that the Internet might provide profitable opportunities at the edges of the legal system. What has changed is the abandonment of so-called “evasion” strategies—ones that relied on concealment or geography (described below)—and a migration to strategies depending on “avoidance,” that is, avoiding the law’s direct application. In particular, the most successful entrepreneurs have relied on what might be called a mimicry strategy: they shape potentially illegal or regulated conduct to make it look like legal or unregulated conduct, in the hope of avoiding the weight of laws and regulatory regimes.
This Note will analyze how the cyberstalking statute applies to a particular form of new media, Twitter, within the framework of a First Amendment analysis. While the analysis within this Note is limited to the interplay between Twitter and the cyberstalking statute, the principles discussed, policies weighed, and doctrines explored also apply to the regulation of distressing speech on the Internet generally. Part II examines Twitter, focusing on how Twitter users interact and the effect this has on First Amendment principles. Part III looks closely at the crime of cyberstalking and the cyberstalking statute. It explores the definition of cyberstalking, the difficult nature of cyberstalking regulation, and the harms cyberstalking can cause. It then discusses the cyberstalking statute (including the 2006 amendment at issue in Cassidy), how courts have construed the statute, and what speech the statute criminalizes. Part IV applies First Amendment doctrine to the cyberstalking statute’s regulation of Twitter. This part analyzes the following: how the First Amendment applies to Internet fora; vagueness and overbreadth challenges; the protection of speech covered by the statute; what level of scrutiny should apply to the statute; whether the statute serves to protect a “captive audience”; and how the statute holds up under each level of scrutiny. Further, after laying out these First Amendment principles, Part IV critiques the district court opinion issued in the Cassidy case. Part V proposes potential changes to the statute to ensure it does not run afoul of the First Amendment. Part VI concludes by refocusing on general First Amendment principles and the interests at issue in this case, and it emphasizes that protecting the captive audience may be the most appropriate role for cyberstalking laws to serve.
In September 2010, Iranian engineers detected that a sophisticated computer worm, known as Stuxnet, had infected and damaged industrial sites across Iran, including its uranium enrichment site, Natanz. In just a few days, a sophisticated computer code was able to accomplish what six years of United Nations Security Council resolutions could not. Not a single missile was launched, nor any tanks deployed, yet the computer worm effectively set back the Islamic Republic’s nuclear program by two years and destroyed roughly one-fifth of its nuclear centrifuges. The worm itself included two major components. One was designed to send Iran’s nuclear centrifuges spinning out of control, damaging them. The other component seemed right out of the movies: “the computer program . . . secretly recorded what normal operations at the nuclear plant looked like, then played those readings back to plant operators, like a pre-recorded security tape in a bank heist, so that it would appear that everything was operating normally while the centrifuges were actually tearing themselves apart.”
Stuxnet, to date, is the most sophisticated cyber weapon ever deployed. It acted as a “collective digital Sputnik moment,” bringing to light the important cybersecurity challenges the world faces. What makes cyber attacks so destructive is their ability to travel through the Internet and attack the structures it rests upon. Governments, industrial and financial companies, research institutions, and billions of citizens worldwide heavily populate these global networks. In fact, much of public and private life depends on functioning telecommunications and information-technology infrastructures. Thus, what we deemed to be one of the greatest successes of the twenty-first century, a global communication infrastructure, has now become our biggest vulnerability.
Pornography dominates the discussion about free speech on the Internet. Congress has twice enacted legislation aimed at preventing minors from getting access to online pornography. Federal and local law enforcement agencies have dramatically increased efforts to combat the spread of child pornography. The Department of Justice has renewed attempts to crack down on obscene material after years of lax enforcement.
Yet the debate about online pornography has overshadowed another disturbing Internet phenomenon. The Internet has facilitated growth in the availability of extremely violent images and videos. A little online searching reveals depictions of torture, of both humans and animals; videos depicting murders and executions, including beheadings by Islamic militants; videos of brutal amateur street fights, some consensual, but many not; videos of minors engaged in schoolyard fights and beatings, some posted to humiliate the victims; and videos of cockfighting. Online retailers have sold videos of dog fights and extremely violent video games, including one in which the player is tasked with making graphic snuff videos and another which allows the player to play fetch with dogs using human heads.
Though largely unnoticed by the public, March 1, 2007, marked the transition from traditional analog television to digital broadcast television (“DTV”), a move some have characterized as the most significant change to the television broadcast industry since color replaced black and white. On that date, Federal Communications Commission (“FCC”) regulations went into effect mandating that all televisions sold in the United States contain a digital tuner capable of receiving DTV broadcast signals. If consumers are unaware of the change now, it will not escape their attention on February 17, 2009, when their old analog sets go dark as broadcasters comply with further FCC regulations mandating the cessation of all analog television broadcasts. Ultimately, the government intends to profit by auctioning off the additional frequency spectrum freed up by the more efficient digital use of the broadcast spectrum.
In 1945, two engineers at the University of Pennsylvania invented the first general-purpose electronic computing device—the Electronic Numerical Integrator and Computer (“ENIAC”). The ENIAC was capable of 5000 simple calculations a second, yet it took up the space of an entire room, “weighed 30 tons, and contained over 18,000 vacuum tubes, 70,000 resistors, and almost 5 million hand-soldered joints.” This machine cost over $1 million, equivalent to roughly $9 million today. Over the next thirty years integrated circuits shrank, yielding microprocessors able to perform millions and billions of calculations per second with new storage media able to hold megabits and gigabits of data. As a result, computers became smaller, more advanced, and dramatically less expensive. Still, prior to the late-1980s, these and other computers were “solely the tool[s] of a few highly trained technocrats.” In the mid-1980s, only 8.2 percent of American households contained computers. American public businesses, universities, and research organizations used only 56,000 large “general purpose” computers and 213,000 smaller “business computers”; private businesses used another 570,000 “mini-computers” and 2.4 million desktop computers; and the federal government employed between 250,000 and 500,000 computers.
A defining problem of the Information Age is securing computer databases of ultrasensitive personal information. These reservoirs of data fuel our Internet economy but endanger individuals when their information escapes into the hands of cyber-criminals. This juxtaposition of opportunities for rapid economic growth and novel dangers recalls similar challenges society and law faced at the outset of the Industrial Age. Then, reservoirs collected water to power textile mills: the water was harmless in repose but wrought havoc when it escaped. After initially resisting Rylands v. Fletcher’s strict-liability standard as undermining economic development, American courts and scholars embraced it once the economy matured and catastrophes such as the Johnstown Flood made those hazards impossible to ignore.
Public choice analysis suggests that a meaningful public law response to insecure databases is as unlikely now as it was in the early Industrial Age. The Industrial Age’s experience can, however, help guide us to an appropriate private law remedy for the new risks and new types of harm of the early Information Age. Just as the Industrial Revolution’s maturation tipped the balance in favor of early tort theorists arguing that America needed, and could afford, a Rylands solution, so too the Information Revolution’s deep roots in American society and many strains of contemporary tort theory support strict liability for bursting cyber-reservoirs of personal data instead of a negligence regime overmatched by fast-changing technology. More broadly, the early Industrial Age offers valuable lessons for addressing other important Information Age problems.
At 9:00 PM on April 7, 2003, Fox Broadcasting (“Fox”) aired the penultimate episode of Married by America, a reality television show that allowed the public to select potential spouses for its contestants. Six minutes of the episode detailed the remaining two couples’ bachelor and bachelorette parties, during which strippers attempted to “lure participants into sexual activities.” Although five million people watched the broadcast, only ninety complaints were filed with the Federal Communications Commission (“FCC” or “Commission”), the government agency that regulates television communications. In October 2004, the FCC determined that the six-minute segment contained explicit and patently offensive depictions of sexual activities. It thus determined that the content was indecent and in violation of federal law. For this violation, the FCC penalized both Fox and 169 Fox affiliates by issuing a Notice of Apparent Liability for $1,183,000 in fines. At the time, this was the largest proposed fine, or “forfeiture,” in FCC history.
Telecommunications services have always been a mix of wireline services, such as wireline telephone, cable television, and Internet access, and wireless services, such as AM/FM radio, broadcast television, and microwave-satellite transmission of electronic signals. Each mode of service has certain properties, both beneficial and detrimental. Wireline has the potential for almost unlimited capacity, such as the use of multigigabit fiber optics, but requires that the service be delivered to a particular location. Wireless frees the customer from being tied to a specific location, allowing service to be rendered wherever the customer is, but suffers from fading or nonexistent connections and possible privacy concerns. The mix of wireline and wireless modes is in constant flux; recently, however, the focus of the market has been shifting toward wireless. Cellular telephony has exploded worldwide and, after a slow start, its market penetration has increased dramatically. Meanwhile, the number of wired access lines in the United States has been declining for the first time since the Great Depression.
Municipal wireless is an important trend, but not for the reasons implied by much of the popular reporting that surrounds this topic. Cities are unlikely to dominate the roster of wireless broadband operators that directly serve the residential and business public. Municipalities, however, have been significant early adopters of innovative unlicensed wireless broadband technologies, providing both a market toehold to innovative products and services using those technologies, and an experimental testing ground for novel organizational models. Most cases of municipal wireless involve the use of unlicensed wireless broadband to meet the local government’s own needs for ubiquitous broadband services, or to construct public-private partnerships aimed at facilitating broadband wireless services to the business and residential public. These uses express local government interests long recognized as legitimate: provision of efficient city services, local economic development, and equity within the community. Thus, the concern for policymakers should not be whether cities should be involved in wireless broadband; there are legitimate reasons why they should, and why increasing numbers of them will be. Rather, the important public policy concern is how to ensure that, in the process of facilitating the first uses of wireless, city authority does not get subverted to create artificial limits on future broadband wireless competition. Doing so will require thoughtful melding of separate legal frameworks governing access to city property and public rights of way into a coherent policy that guides when exclusivity legitimately can or cannot feature in public-private partnership arrangements for communications services.