This Note will argue that although the CCPA was imperfectly drafted, much of the world seems to be moving toward a standard that embraces data privacy protection, and the CCPA is a positive step in that direction. However, the CCPA does contain several ambiguous and potentially problematic provisions, including some that invite First Amendment and Dormant Commerce Clause challenges, that should be addressed by the California Legislature. While a federal standard for data privacy would make compliance considerably easier, any such law enacted in the near future is unlikely to offer data privacy protections as significant as the CCPA’s; it would instead be a watered-down version of the CCPA that preempts attempts by California and other states to establish strong, comprehensive data privacy regimes. Ultimately, the United States should adopt a federal standard that offers consumers protections as strong as those of the GDPR or the CCPA. Part I of this Note will describe the elements of the GDPR and the CCPA and will offer a comparative analysis of the two regulations. Part II of this Note will address potential shortcomings of the CCPA, including a constitutional analysis of the law and its problematic provisions. Part III of this Note will discuss the debate between consumer privacy advocates and technology companies regarding federal preemption of strict laws like the CCPA. It will also make predictions about, and offer solutions for, the future of the CCPA and United States data privacy legislation based on a discussion of global data privacy trends and possible federal government actions.

Artificial intelligence (“AI”), and machine learning in particular, promises lawmakers greater specificity and fewer errors. Algorithmic lawmaking and judging will leverage models built from large stores of data that permit the creation and application of finely tuned rules. AI is therefore expected to bring about a movement from standards toward rules. Drawing on contemporary data science, this Article shows that machine learning is less impressive when the past is unlike the future, as it is whenever new variables appear over time. In the absence of regularities, machine learning loses its advantage and, as a result, looser standards can become superior to rules. We apply this insight to bail and sentencing decisions, as well as to familiar corporate and contract law rules. More generally, we show that a Human-AI combination can be superior to AI acting alone. Just as today’s judges overrule errors and outmoded precedent, tomorrow’s lawmakers will sensibly overrule AI in legal domains where the challenges of measurement are present. When measurement is straightforward and prediction is accurate, rules will prevail. When empirical limitations such as overfit, Simpson’s Paradox, and omitted variables make measurement difficult, AI should be trusted less and law should give way to standards. We introduce readers to the phenomenon of reversal paradoxes, and we suggest that in law, where huge data sets are rare, AI should not be expected to outperform humans. More generally, where empirical limitations such as overfit and omitted variables are likely, rules should be trusted less and law should give way to standards.
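The reversal paradoxes the Article invokes are easy to see with a small numerical example. The sketch below is illustrative only: the counts are invented for exposition and do not come from the Article or from any real bail or sentencing data. It shows how a decision rule can produce better outcomes within every subgroup of cases yet worse outcomes once the subgroups are pooled, which is the structure of Simpson’s Paradox.

```c
#include <stdio.h>

/* Hypothetical counts, invented only to exhibit the reversal:
   two subgroups of cases, two decision rules A and B. */
int main(void) {
    double favorable[2][2] = { {  81.0, 234.0 },   /* subgroup 1: rule A, rule B */
                               { 192.0,  55.0 } }; /* subgroup 2: rule A, rule B */
    double total[2][2]     = { {  87.0, 270.0 },
                               { 263.0,  80.0 } };

    double pooled_fav[2] = { 0.0, 0.0 };
    double pooled_tot[2] = { 0.0, 0.0 };

    for (int i = 0; i < 2; i++) {
        for (int j = 0; j < 2; j++) {
            printf("Subgroup %d, rule %c: success rate %.2f\n",
                   i + 1, j == 0 ? 'A' : 'B', favorable[i][j] / total[i][j]);
            pooled_fav[j] += favorable[i][j];
            pooled_tot[j] += total[i][j];
        }
    }

    /* Rule A outperforms rule B within each subgroup, yet underperforms it
       once the subgroups are pooled: the aggregate comparison reverses. */
    for (int j = 0; j < 2; j++) {
        printf("Pooled, rule %c: success rate %.2f\n",
               j == 0 ? 'A' : 'B', pooled_fav[j] / pooled_tot[j]);
    }
    return 0;
}
```

Run, the program reports rule A ahead in both subgroups (0.93 versus 0.87, and 0.73 versus 0.69) but behind in the aggregate (0.78 versus 0.83), which is why a model fit to pooled historical data can point in the wrong direction when an omitted grouping variable matters.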

Where you go to college and what you choose to study have always been important, but, with the help of data science, they may now determine whether you get a student loan. Silicon Valley is increasingly setting its sights on student lending. Financial technology (“fintech”) firms such as SoFi, CommonBond, and Upstart are ever-expanding their online lending activities to help students finance or refinance educational expenses. These online companies are using a wide array of alternative, education-based data points—including applicants’ chosen majors, assessment scores, the college or university they attend, their job history, and cohort default rates—to determine creditworthiness. Fintech firms argue that through their low overhead and innovative approaches to lending they are able to widen access to credit for underserved Americans. Indeed, there is much to recommend the use of different kinds of information about young consumers in order to assess their financial ability. Student borrowers are notoriously disadvantaged by the extant scoring system, which heavily favors having a past credit history. Yet there are also downsides to the use of education-based, alternative data by private lenders. This Article critiques the use of this education-based information, arguing that while it can have a positive effect in promoting social mobility, it could also have significant downsides. Chief among these are reifying existing credit barriers along lines of wealth and class and further contributing to discriminatory lending practices that harm women, black and Latino Americans, and other minority groups. The discrimination issue is particularly salient because of the novel and opaque underwriting algorithms that facilitate these online loans. This Article concludes by proposing three-pillared regulatory guidance for private student lenders to use in designing, implementing, and monitoring their education-based data lending programs.

Algorithms are now used to make significant decisions about individuals, from credit determinations to hiring and firing. But they are largely unregulated under U.S. law. A quickly growing literature has split on how to address algorithmic decision-making, with individual rights and accountability to nonexpert stakeholders and to the public at the crux of the debate. In this Article, I make the case for why both individual rights and public- and stakeholder-facing accountability are not just goods in and of themselves but crucial components of effective governance. Only individual rights can fully address dignitary and justificatory concerns behind calls for regulating algorithmic decision-making. And without some form of public and stakeholder accountability, collaborative public-private approaches to systemic governance of algorithms will fail.

In this Article, I identify three categories of concern behind calls for regulating algorithmic decision-making: dignitary, justificatory, and instrumental. Dignitary concerns lead to proposals that we regulate algorithms to protect human dignity and autonomy; justificatory concerns caution that we must assess the legitimacy of algorithmic reasoning; and instrumental concerns lead to calls for regulation to prevent consequent problems such as error and bias. No one regulatory approach can effectively address all three. I therefore propose a two-pronged approach to algorithmic governance: a system of individual due process rights combined with systemic regulation achieved through collaborative governance (the use of private-public partnerships). Only through this binary approach can we effectively address all three concerns raised by algorithmic decision-making, or decision-making by Artificial Intelligence (“AI”).

The interplay between the two approaches will be complex. Sometimes the two systems will be complementary, and at other times, they will be in tension. The European Union’s (“EU’s”) General Data Protection Regulation (“GDPR”) is one such binary system. I explore the extensive collaborative governance aspects of the GDPR and how they interact with its individual rights regime. Understanding the GDPR in this way both illuminates its strengths and weaknesses and provides a model for how to construct a better governance regime for accountable algorithmic, or AI, decision-making. It shows, too, that in the absence of public and stakeholder accountability, individual rights can have a significant role to play in establishing the legitimacy of a collaborative regime.

Until January 2018, under the border search exception, U.S. Customs and Border Protection (“CBP”) officers were afforded the power to search any electronic device without meeting any standard of suspicion or acquiring a warrant. The border search exception is a “longstanding, historically recognized exception to the Fourth Amendment’s general principle that a warrant be obtained . . . .” It provides that suspicionless and warrantless searches at the border do not violate the Fourth Amendment because such searches are “reasonable simply by virtue of the fact that they occur at the border . . . .” The CBP, claiming that the border search exception applies to electronic devices, searched more devices in 2017 than ever before, an increase of approximately 60 percent over 2016 according to data released by the CBP. These “digital strip searches” violate travelers’ First, Fourth, and Fifth Amendment rights. With the advent of smartphones and the expanded use of electronic devices for storing people’s extremely personal data, these searches violate an individual’s right to privacy. Simply by traveling into the United States with a device linked to such information, a person suddenly—and, currently, unexpectedly—opens a window for the government to search through seemingly every aspect of his or her life. The policy behind these searches at the border does not align with the core principles behind our longstanding First and Fifth Amendment protections, nor does it align with the policies behind the exceptions made to constitutional rights at the border in the past.
In order to protect the privacy and rights of both citizens and noncitizens entering the United States, the procedures governing electronic device searches need to be reformed. For instance, the border search exception should not be applied to electronic devices the same way it applies to other property or storage containers, like a backpack. One is less likely to expect privacy in the contents of a backpack than in the contents of a password- or authorization-protected device: unlike a locked device, a backpack can be taken, can be opened easily, can fall open, and has traditionally been subject to searches at the border. Moreover, there are many other reasons why electronic devices warrant privacy protection.

Businesses and organizations expect their managers to use data science to improve and even optimize decisionmaking. Yet when it comes to some criminal justice institutions, such as prosecutors’ offices, there is an aversion to applying cognitive computing to high-stakes decisions. This aversion reflects extra-institutional forces, as activists and scholars campaign against the use of predictive analytics in criminal justice. The aversion also reflects prosecutors’ unease with the practice, as many prefer that decisional weight be placed on attorneys’ experience and intuition, even though experience and intuition have contributed to more than a century of criminal justice disparities.

Instead of viewing historical data and data-hungry academic researchers as liabilities, prosecutors and scholars should treat them as assets in the struggle to achieve outcome fairness. Cutting-edge research on fairness in machine learning is being conducted by computer scientists, applied mathematicians, and social scientists, and this research forms a foundation for the most promising path towards racial equality in criminal justice: suggestive modeling that creates baselines to guide prosecutorial decisionmaking.

As with every other legal issue that comes before the Court, reconciling the state’s discretion with the Supreme Court’s role in judicial review requires a judicially manageable standard that allows the Court to determine when a legislature has overstepped its bounds. Without a judicially discoverable and manageable standard, the Court cannot develop clear and coherent principles to inform its judgments, and challenges to partisan gerrymandering would thus be non-justiciable.

In the partisan gerrymandering context, such a standard must distinguish between garden-variety and excessive uses of partisanship. The Court has stated that partisanship may be used in redistricting, but it may not be used “excessively.” In Vieth v. Jubelirer, Justice Scalia clarified, “Justice Stevens says we ‘er[r] in assuming that politics is ‘an ordinary and lawful motive’ in districting,’ but all he brings forward to contest that is the argument that an excessive injection of politics is unlawful. So it is, and so does our opinion assume.” Justice Souter, in a dissent joined by Justice Ginsburg, expressed a similar idea: courts must intervene, he wrote, when “partisan competition has reached an extremity of unfairness.”

At oral argument in Rucho, attorney Emmet Bondurant argued that “[t]his case involves the most extreme partisan gerrymander to rig congressional elections that has been presented to this Court since the one-person/one-vote case.” Justice Kavanaugh replied, “when you use the word ‘extreme,’ that implies a baseline. Extreme compared to what?”

Herein lies the issue that the Court has been grappling with in partisan gerrymandering claims. What is the proper baseline against which to judge whether partisanship has been used excessively? And how can this baseline be incorporated into a judicially manageable standard?

In the courtroom, oral presentations are increasingly being supplemented, and in some cases replaced, by digital technologies that provide legal practitioners with effective demonstrative capabilities. Improvements in the field of virtual reality (“VR”) are facilitating the creation of immersive environments in which a user’s senses and perceptions of the physical world can be completely replaced with virtual renderings. As courts, lawyers, and experts continue to grapple with evidentiary questions of admissibility posed by evolving technologies in the field of computer-generated evidence (“CGE”), the issues posed by the introduction of immersive virtual environments (“IVEs”) into the courtroom have, until recently, remained largely theoretical.

Though the widespread use of IVEs at trial has not yet occurred, research into the practical applications of these VR technologies in the courtroom is ongoing, with several studies having successfully integrated IVEs into mock scenarios. For example, in 2002, the Courtroom 21 Project (run by William & Mary Law School and the National Center for State Courts) hosted a lab trial in which a witness used an IVE. The issue in the case was whether a patient’s death was the result of the design of a cholesterol-removing stent or a surgeon’s error in implanting it upside down.

We are now some twenty years into the story of the Internet’s bold challenge to law and the legal system. In the early 2000s, Jack Goldsmith and I wrote Who Controls the Internet?, a book that might be understood as a chronicle of some of the early and more outlandish stages of that story. Professors Pollman and Barry’s excellent article, Regulatory Entrepreneurship, adds to and updates that story with subsequent chapters and a sophisticated analysis of the strategies more recently employed to use the Internet, in some way, to avoid the law. While Pollman and Barry’s article stands on its own, I write this Article to connect these two periods. I also wish to offer a slightly different normative assessment of the legal avoidance efforts described here, along with my opinion as to how law enforcement should conduct itself in these situations.

Behind regulatory entrepreneurship lies a history, albeit a short one, and one that has much to teach us about the very nature of law and the legal system as it interacts with new technologies. Viewed in context, Pollman and Barry’s “regulatory entrepreneurs” can be understood as, in fact, a second generation of entrepreneurs who learned lessons from an earlier generation that was active in the late 1990s and early 2000s. What both generations have in common is the idea that the Internet might provide profitable opportunities at the edges of the legal system. What has changed is the abandonment of so-called “evasion” strategies—ones that relied on concealment or geography (described below)—and a migration to strategies depending on “avoidance,” that is, avoiding the law’s direct application. In particular, the most successful entrepreneurs have relied on what might be called a mimicry strategy: they shape potentially illegal or regulated conduct to make it look like legal or unregulated conduct, thereby hopefully avoiding the weight of laws and regulatory regimes.

In January 2003, the Slammer worm hit the Internet. Five of the Internet’s thirteen root-name servers shut down. Three hundred thousand cable modems in Portugal went offline, all of South Korea’s cell phone and Internet services went down, and Continental Airlines cancelled flights from its Newark hub due to its inability to process tickets. It took only six months after the disclosure of a security flaw for its author to write the 376-byte worm. Once unleashed, it took ten minutes to infect ninety percent of vulnerable systems.

The flaw was a buffer overflow in the Microsoft SQL Server 2000 software. Because the vulnerable code was embedded in other Microsoft products, not all users were even aware that their systems were running a version of SQL Server. Unfortunately, this was a well-known, preventable security flaw. Moreover, Microsoft had released a patch for the flaw exploited by Slammer six months before the attack. Despite the widespread effects, no flood of lawsuits ensued.
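For readers unfamiliar with the vulnerability class, the sketch below illustrates, in generic terms, what a stack buffer overflow is and how a simple bounds check prevents it. It is a hypothetical example written for this discussion; it is not Slammer’s exploit, SQL Server code, or the content of Microsoft’s patch.

```c
#include <stdio.h>
#include <string.h>

/* A hypothetical request handler containing a classic stack buffer overflow.
   Illustrative only: not Slammer's code or any SQL Server code. */
static void handle_request_unsafe(const char *input) {
    char buffer[16];
    strcpy(buffer, input); /* BUG: no length check; input longer than 16 bytes
                              writes past the end of buffer into adjacent stack
                              memory, which an attacker can use to take control
                              of the program. */
    printf("handled: %s\n", buffer);
}

/* The same handler with a bounds check, the sort of change that eliminates
   this class of flaw. */
static void handle_request_safe(const char *input) {
    char buffer[16];
    strncpy(buffer, input, sizeof(buffer) - 1); /* copy at most 15 bytes */
    buffer[sizeof(buffer) - 1] = '\0';          /* always terminate the string */
    printf("handled: %s\n", buffer);
}

int main(void) {
    handle_request_safe("a short request");  /* safe for input of any length */
    handle_request_unsafe("short");          /* safe only because the input fits */
    return 0;
}
```

The point for the liability discussion is that the unsafe and safe versions differ by only a few lines, which is part of why the flaw is fairly described as well known and preventable.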