This Note will argue that although the CCPA was imperfectly drafted, much of the world seems to be moving toward a standard that embraces data privacy protection, and the CCPA is a positive step in that direction. However, the CCPA does contain several ambiguous and potentially problematic provisions, including provisions vulnerable to First Amendment and Dormant Commerce Clause challenges, that should be addressed by the California Legislature. While a federal standard for data privacy would make compliance considerably easier, if such a law is enacted in the near future, it is unlikely to offer data privacy protections as significant as those of the CCPA and would instead be a watered-down version that preempts attempts by California and other states to establish strong, comprehensive data privacy regimes. Ultimately, the United States should adopt a federal standard that offers consumers protections as strong as those of the GDPR or the CCPA. Part I of this Note will describe the elements of the GDPR and the CCPA and will offer a comparative analysis of the two regulations. Part II will address potential shortcomings of the CCPA, including a constitutional analysis of the law and its problematic provisions. Part III will discuss the debate between consumer privacy advocates and technology companies regarding federal preemption of strict laws like the CCPA. It will also make predictions about, and offer solutions for, the future of the CCPA and United States data privacy legislation based on a discussion of global data privacy trends and possible federal government actions.
United States government surveillance has reached a point where the government “c[an] construct a complete electronic narrative of an individual’s life: their friends, lovers, joys, sorrows.” In June 2013, Edward Snowden released thousands of confidential documents from the National Security Agency (“NSA”) regarding classified government surveillance programs. The documents brought to light the fact that the NSA was spying on individuals, including foreign citizens, and deliberately misleading Congress about these activities. According to Snowden, the surveillance was extensive: its measures, including a program known as “PRISM,” involved the improper mass collection of data from citizens worldwide through NSA interactions with technology giants like Google, Microsoft, and Facebook, and by tapping into global fiber optic cables.
These revelations sent shockwaves around the globe, and the backlash was swift and unforgiving. One thing became clear to Americans and the rest of the world: the NSA and the U.S. government had prioritized the massive collection of private information over and above the personal privacy rights of the global population. The prospect of civil liberties being cast aside through grossly intrusive surveillance pushed Snowden to step forward and reveal what he had seen all too closely. He no longer wanted to “live in a world ‘where everything that I say, everything that I do, everyone I talk to, every expression of love or friendship is recorded.’”
Across the Atlantic, the priorities of European Union member nations stand in stark contrast to those of the United States. The EU takes a much stronger stance on privacy and data protection and restricts how companies transfer data to non-EU nations. In the EU’s Data Protection Directive (the “Directive”), the right to privacy is described as a “fundamental right[ ] and freedom[ ].” This sentiment is echoed in other landmark EU documents such as the Convention for the Protection of Human Rights and Fundamental Freedoms.
The Second Amendment, like other federal constitutional rights, is a restriction on government power. But what role does the Second Amendment have to play—if any—when a private party seeks to limit the exercise of Second Amendment rights by invoking private law causes of action? Private law—specifically, the law of torts, contracts, and property—has often been impacted by constitutional considerations, though in seemingly inconsistent ways. The First Amendment places limitations on defamation actions and other related torts, and also prevents courts from entering injunctions that could be classified as prior restraints. On the other hand, the First Amendment plays almost no role in contractual litigation, even when courts are called on to enforce contractual provisions that directly restrict speech. The Equal Protection Clause was famously interpreted to bar the enforcement of a racially restrictive covenant in Shelley v. Kraemer, but in the years since, courts have largely limited that case to its facts.
This Note argues that fragmented free expression laws across European member states and data controllers’ ability to select their reviewing supervisory authority give U.S. data controllers latitude to exploit the privacy-expression balance in favor of the U.S. prioritization of expression. Whereas the current literature on the right to be forgotten and the GDPR focuses on reconciling and converging transatlantic values of privacy and free expression, this Note examines the mechanisms of the European Union’s assertion and imposition of privacy values across the Atlantic through the right to be forgotten and the right to erasure, and describes weaknesses in the GDPR that may undermine those mechanisms.
A girlfriend hacks her boyfriend’s computer and discovers evidence of tax evasion. She contacts a local law enforcement officer who arrives at her house and looks at the files she found. Without a warrant, the officer opens other files in the same folder the girlfriend had searched. The officer notices another folder labeled “xxx.” He opens the folder and discovers child pornography. The officer seizes the computer based on what he found. The boyfriend is indicted for possession of child pornography and tax evasion. Before trial, the boyfriend moves to suppress all evidence obtained pursuant to the officer’s warrantless search of the computer. What evidence should the judge suppress?
The answer turns on the Fourth Amendment’s private-search exception. Under this exception, a government agent may recreate a search conducted by a private individual so long as the agent does not “exceed the scope” of the prior private search. The question under the existing framework is: at what point did the officer exceed the scope of the prior search—if at all? Was it when he viewed files the girlfriend had not viewed, when he opened files in a different folder, or did he stay within the scope of the girlfriend’s search by only searching the computer’s hard drive? This is what I will refer to as the denominator problem, which asks what courts should use as the unit of analysis to measure the scope of a digital search.
There are at least four competing approaches to the denominator problem, discussed in Part II, and the Supreme Court has provided little guidance on how the private-search doctrine applies to digital searches, resulting in a circuit split. Until this issue is resolved, law enforcement will not know when a warrant is required following a private search and can unknowingly subject individuals to unreasonable invasions of privacy, which may result in suppression of relevant evidence. One recent example is United States v. Lichtenberger.
In the summer of 2013, the National Security Agency (“NSA”) rocketed into headlines when Glenn Greenwald, a reporter at the Guardian, broke a stunning, Orwellian story: pursuant to top-secret court orders, Verizon and other major telephone service providers had granted the NSA blanket access to their American customers’ call records. These companies, Greenwald claimed, were providing the NSA with telephony metadata—general information about each of their customers’ calls, such as phone numbers, call lengths, and call times. In the face of the ensuing public outcry, the American government acknowledged the existence of the telephony metadata program. In doing so, however, it was careful to assert that the program, while secret, was nonetheless constitutional, and that the court orders had been issued pursuant to the Foreign Intelligence Surveillance Act (“FISA”).
Almost every information privacy law provides special protection for certain categories of “sensitive information,” such as health, sex, or financial information. Even though this approach is widespread, the concept of sensitive information is woefully undertheorized. What is it about these categories that deserves special protection? This Article offers an extended examination of this question. It surveys dozens of laws and regulations to develop a multi-factor test for sensitivity.
From this survey, the Article concludes that sensitive information is connected to privacy harms affecting individuals. Consistent with this, at least for the case of privacy in large databases, it recommends a new “threat modeling” approach to assessing the risk of harm in privacy law, borrowing from the computer security literature. Applying this approach, it concludes that we should create new laws recognizing the sensitivity of currently unprotected forms of information—most importantly, geolocation and some forms of metadata—because they present significant risk of privacy harm.
This Article seeks to clarify the relationship between contract law and promises of privacy and information security. It challenges three commonly held misconceptions in privacy literature regarding the relationship between contract and data protection—the propertization fatalism, the economic value fatalism, and the displacement fatalism—and argues in favor of embracing contract law as a way to enhance consumer privacy. Using analysis from Sorrell v. IMS Health Inc., marketing theory, and the work of Pierre Bourdieu, it argues that the value in information contracts is inherently relational: consumers provide “things of value”—rights of access to valuable informational constructs of identity and context—in exchange for access to certain services provided by the data aggregator. This Article presents a contract-based consumer protection approach to privacy and information security. Modeled on trade secret law and landlord-tenant law, it advocates for courts and legislatures to adopt a “reasonable data stewardship” approach that relies on a set of implied promises—nonwaivable contract warranties and remedies—to maintain contextual integrity of information and improve consumer privacy.
With our culture’s celebrity obsession intensifying each year, it is not surprising that recent media attention has concentrated on the children of these famous faces. Unfortunately, there are currently no adequate federal or state laws in place to protect these children from being hounded by paparazzi and exploited by entertainment magazines and Web sites worldwide. This Note examines the evolution of antipaparazzi legislation and analyzes the inadequacies of current and proposed legal protections. Further, it recommends strengthening existing safeguards by creating paparazzi-free buffer zones around family-oriented areas and following international approaches to maintaining an adequate level of privacy, and consequently safety, for celebrity children.
In the best of circumstances, governing domestic intelligence is challenging. Intelligence sits in an uncomfortable relationship with law’s commitment to transparency and accountability. History amply demonstrates that intelligence—including domestic intelligence—frequently begins where the rule of law gives out.
The inherent difficulty of governing intelligence has been unnecessarily exacerbated by a deep-seated and longstanding confusion about what domestic intelligence is. For over a century, policymakers and academic commentators have assumed that it is essentially a form of criminal investigation and that criminal law supplies the logical starting place for its effective governance. Over the years, this faulty premise has fostered a boom-and-bust cycle in intelligence governance; domestic intelligence has been, at different times, effectively out of business or unchecked by law.
This Article introduces a new way to think about domestic intelligence and its governance. Domestic intelligence is a kind of risk assessment, a regulatory activity familiar across the administrative state. Similar to risk assessments in environmental or health and safety law, domestic intelligence seeks to quantify a risk before it materializes, based on the careful analysis of aggregative data.