This Paper argues that in the wake of the Supreme Court’s 2018 decision, Murphy v. NCAA—a case completely unrelated to immigration—there is now a single best answer to the constitutional question presented in the ongoing sanctuary jurisdiction cases. The answer is that the Trump Administration’s withholding of federal grants is indeed unconstitutional, but this is because Section 1373, the statute on which the Executive’s actions are predicated, is itself unconstitutional. Specifically, this Paper argues that the expansion of the anti-commandeering doctrine under Murphy provides a tool by which the federal appellate courts can invalidate Section 1373 as an impermissible federal regulation of state and local governments. By adopting this approach, courts can move past the comparatively surface-level questions about the Executive’s power to enforce a particular federal statute and instead address the more central issue: the existence of Section 1373.

This argument proceeds in the following stages. Part I provides a background for each of the central concepts in this analysis. These include (1) an explanation of the anti-commandeering doctrine in its pre- and post-Murphy forms, (2) a description of Section 1373, (3) a working definition of “sanctuary jurisdictions,” and (4) a brief overview of the sanctuary jurisdiction cases decided to date. Part II argues that, in light of the Supreme Court’s decision in Murphy, there is no question that Section 1373 is subject to anti-commandeering claims. Part III then argues that, as a matter of doctrine, Section 1373 should fail to withstand such claims because it does not qualify for any exception to the anti-commandeering rule. Finally, Part IV argues that, apart from Supreme Court precedent, there are several independent, normative reasons to strike down Section 1373. This Paper concludes that Section 1373 should be held unconstitutional in the challenges before the higher federal courts, including the Supreme Court of the United States if necessary, and that such a ruling is the most desirable method of resolving the sanctuary jurisdiction cases.

This Note will argue that although the CCPA was imperfectly drafted, much of the world seems to be moving toward a standard that embraces data privacy protection, and the CCPA is a positive step in that direction. However, the CCPA contains several ambiguous and potentially problematic provisions, including some that may invite First Amendment and Dormant Commerce Clause challenges, which should be addressed by the California Legislature. While a federal standard for data privacy would make compliance considerably easier, any such law enacted in the near future is unlikely to offer data privacy protections as significant as the CCPA’s; instead, it would likely be a watered-down version of the CCPA that preempts attempts by California and other states to establish strong, comprehensive data privacy regimes. Ultimately, the United States should adopt a federal standard that offers consumers protections as strong as those of the GDPR or the CCPA. Part I of this Note will describe the elements of the GDPR and the CCPA and will offer a comparative analysis of the two regulations. Part II will address potential shortcomings of the CCPA, including a constitutional analysis of the law and its problematic provisions. Part III will discuss the debate between consumer privacy advocates and technology companies regarding federal preemption of strict laws like the CCPA. It will also make predictions about, and offer solutions for, the future of the CCPA and United States data privacy legislation based on a discussion of global data privacy trends and possible federal government actions.

In the United States, there are now two systems to adjudicate disputes about harmful speech. The first is older and more established: the legal system in which judges apply constitutional law to limit tort claims alleging injuries caused by speech. The second is newer and less familiar: the content-moderation system in which platforms like Facebook implement the rules that govern online speech. These platforms are not bound by the First Amendment. But, as it turns out, they rely on many of the tools used by courts to resolve tensions between regulating harmful speech and preserving free expression—particularly the entangled concepts of “public figures” and “newsworthiness.”
This Article offers the first empirical analysis of how judges and content moderators have used these two concepts to shape the boundaries of free speech. It first introduces the legal doctrines developed by the “Old Governors,” exploring how courts have shaped the constitutional concepts of public figures and newsworthiness in the face of tort claims for defamation, invasion of privacy, and intentional infliction of emotional distress. The Article then turns to the “New Governors” and examines how Facebook’s content-moderation system channeled elements of the courts’ reasoning for imposing First Amendment limits on tort liability.
By exposing the similarities and differences between how the two systems have understood these concepts, this Article offers lessons for both courts and platforms as they confront new challenges posed by online speech. It exposes the pitfalls of using algorithms to identify public figures; explores the diminished utility of setting rules based on voluntary involvement in public debate; and analyzes the dangers of ad hoc and unaccountable newsworthiness determinations. Both courts and platforms must adapt to the new speech ecosystem that companies like Facebook have helped create, particularly the way that viral content has shifted normative intuitions about who deserves harsher rules in disputes about harmful speech, be it in law or content moderation.
Finally, the Article concludes by exploring what this comparison reveals about the structural role platforms play in today’s speech ecosystem and how it illuminates new solutions. These platforms act as legislature, executive, judiciary, and press—but without any separation of powers to establish checks and balances. A change to this model is already occurring at one platform: Facebook is creating a new Oversight Board that, it is hoped, will provide users due process in the platform’s speech decisions and transparency about how content-moderation policy is made, including how concepts related to newsworthiness and public figures are applied.

Artificial intelligence (“AI”), and machine learning in particular, promises lawmakers greater specificity and fewer errors. Algorithmic lawmaking and judging will leverage models built from large stores of data that permit the creation and application of finely tuned rules. AI is therefore regarded as something that will bring about a movement from standards towards rules. Drawing on contemporary data science, this Article shows that machine learning is less impressive when the past is unlike the future, as it is whenever new variables appear over time. In the absence of regularities, machine learning loses its advantage and, as a result, looser standards can become superior to rules. We apply this insight to bail and sentencing decisions, as well as familiar corporate and contract law rules. More generally, we show that a Human-AI combination can be superior to AI acting alone. Just as today’s judges overrule errors and outmoded precedent, tomorrow’s lawmakers will sensibly overrule AI in legal domains where the challenges of measurement are present. When measurement is straightforward and prediction is accurate, rules will prevail. When empirical limitations such as overfit, Simpson’s Paradox, and omitted variables make measurement difficult, AI should be trusted less and law should give way to standards. We introduce readers to the phenomenon of reversal paradoxes, and we suggest that in law, where huge data sets are rare, AI should not be expected to outperform humans.
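To make the reversal-paradox point concrete, the following minimal Python sketch uses the toy numbers from the classic kidney-stone illustration of Simpson’s Paradox. The data, group names, and treatment labels are illustrative only and are not drawn from this Article: within each subgroup, option A has the higher success rate, yet the pooled comparison reverses once the subgroup variable is omitted.

```python
# Minimal illustration of a reversal (Simpson's) paradox, using the toy
# figures from the classic kidney-stone example. Within each subgroup,
# treatment A has the higher success rate, but pooling the data reverses
# the comparison because the subgroup variable is omitted.

data = {
    "small_stones": {"A": (81, 87),   "B": (234, 270)},
    "large_stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, total):
    return successes / total

for group, arms in data.items():
    a, b = rate(*arms["A"]), rate(*arms["B"])
    print(f"{group}: A {a:.0%} vs. B {b:.0%}")  # A wins in both groups

pooled_a = rate(sum(v["A"][0] for v in data.values()),
                sum(v["A"][1] for v in data.values()))
pooled_b = rate(sum(v["B"][0] for v in data.values()),
                sum(v["B"][1] for v in data.values()))
print(f"pooled: A {pooled_a:.0%} vs. B {pooled_b:.0%}")  # B wins overall
```

A model trained only on the pooled data would rank B above A even though A is preferable within every subgroup, which is the sense in which omitted variables can make an algorithmic rule systematically worse than a standard applied with contextual judgment.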

Where you go to college and what you choose to study have always been important, but, with the help of data science, these choices may now determine whether you get a student loan. Silicon Valley is increasingly setting its sights on student lending. Financial technology (“fintech”) firms such as SoFi, CommonBond, and Upstart are ever-expanding their online lending activities to help students finance or refinance educational expenses. These online companies are using a wide array of alternative, education-based data points—ranging from applicants’ chosen majors and assessment scores to the colleges or universities they attend, their job histories, and cohort default rates—to determine creditworthiness. Fintech firms argue that through their low overhead and innovative approaches to lending they are able to widen access to credit for underserved Americans. Indeed, there is much to recommend the use of different kinds of information about young consumers in order to assess their financial ability. Student borrowers are notoriously disadvantaged by the extant scoring system, which heavily favors having a past credit history. Yet there are also downsides to the use of education-based, alternative data by private lenders. This Article critiques the use of this education-based information, arguing that while it can have a positive effect in promoting social mobility, it could also have significant downsides. Chief among these are reifying existing credit barriers along lines of wealth and class and further contributing to discriminatory lending practices that harm women, black and Latino Americans, and other minority groups. The discrimination issue is particularly salient because of the novel and opaque underwriting algorithms that facilitate these online loans. This Article concludes by proposing three-pillared regulatory guidance for private student lenders to use in designing, implementing, and monitoring their education-based data lending programs.
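For readers unfamiliar with how alternative data can enter an underwriting model, the sketch below shows one plausible, purely hypothetical form such scoring could take. The feature names, data, labels, and model choice are all invented for illustration; actual fintech underwriting systems are proprietary and considerably more complex.

```python
# Purely hypothetical sketch of education-based underwriting. Feature
# names and values are invented; this is not any firm's actual model.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Each row: [assessment_score, school_cohort_default_rate, years_employed]
X = np.array([
    [720, 0.03, 4],
    [640, 0.11, 1],
    [690, 0.05, 2],
    [600, 0.15, 0],
    [710, 0.04, 3],
    [630, 0.12, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = repaid, 0 = defaulted (invented labels)

# Standardize features, then fit a simple logistic model
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# Score a new applicant. Note that "cohort default rate" can act as a
# proxy for the institution attended, one channel through which such
# models can reproduce wealth- and race-correlated disparities.
applicant = np.array([[680, 0.07, 2]])
print("estimated repayment probability:", model.predict_proba(applicant)[0, 1])
```

Even in this toy form, the output is a single probability with no reasons attached, which suggests why the opacity concern the Article raises is difficult to address after the fact.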

Algorithms are now used to make significant decisions about individuals, from credit determinations to hiring and firing. But they are largely unregulated under U.S. law. A quickly growing literature has split on how to address algorithmic decision-making, with individual rights and accountability to nonexpert stakeholders and to the public at the crux of the debate. In this Article, I make the case for why both individual rights and public- and stakeholder-facing accountability are not just goods in and of themselves but crucial components of effective governance. Only individual rights can fully address dignitary and justificatory concerns behind calls for regulating algorithmic decision-making. And without some form of public and stakeholder accountability, collaborative public-private approaches to systemic governance of algorithms will fail.

In this Article, I identify three categories of concern behind calls for regulating algorithmic decision-making: dignitary, justificatory, and instrumental. Dignitary concerns lead to proposals that we regulate algorithms to protect human dignity and autonomy; justificatory concerns caution that we must assess the legitimacy of algorithmic reasoning; and instrumental concerns lead to calls for regulation to prevent consequent problems such as error and bias. No one regulatory approach can effectively address all three. I therefore propose a two-pronged approach to algorithmic governance: a system of individual due process rights combined with systemic regulation achieved through collaborative governance (the use of private-public partnerships). Only through this binary approach can we effectively address all three concerns raised by algorithmic decision-making, or decision-making by Artificial Intelligence (“AI”).

The interplay between the two approaches will be complex. Sometimes the two systems will be complementary, and at other times, they will be in tension. The European Union’s (“EU’s”) General Data Protection Regulation (“GDPR”) is one such binary system. I explore the extensive collaborative governance aspects of the GDPR and how they interact with its individual rights regime. Understanding the GDPR in this way both illuminates its strengths and weaknesses and provides a model for how to construct a better governance regime for accountable algorithmic, or AI, decision-making. It shows, too, that in the absence of public and stakeholder accountability, individual rights can have a significant role to play in establishing the legitimacy of a collaborative regime.

The recent financial crisis demonstrated that, contrary to longstanding regulatory assumptions, nonbank financial firms—such as investment banks and insurance companies—can propagate systemic risk throughout the financial system. After the crisis, policymakers in the United States and abroad developed two different strategies for dealing with nonbank systemic risk. The first strategy seeks to regulate individual nonbank entities that officials designate as being potentially systemically important. The second approach targets financial activities that could create systemic risk, irrespective of the types of firms that engage in those transactions. In the last several years, domestic and international policymakers have come to view these two strategies as substitutes, largely abandoning entity-based designations in favor of activities-based approaches. This Article argues that this trend is deeply misguided because entity- and activities-based approaches are complementary tools that are each essential for effectively regulating nonbank systemic risk. Eliminating an entity-based approach to nonbank systemic risk—either formally or through onerous procedural requirements—would expose the financial system to the same risks that it experienced in 2008 as a result of distress at nonbanks like AIG, Bear Stearns, and Lehman Brothers. This conclusion is especially salient in the United States, where jurisdictional fragmentation undermines the capacity of financial regulators to implement an effective activities-based approach. Significant reforms to the U.S. regulatory framework are necessary, therefore, before an activities-based approach can meaningfully complement domestic entity-based systemic risk regulation.

Big investment managers, such as Vanguard and Fidelity, have accumulated an astonishing amount of common stock in America’s public companies—so much that they now have enough corporate votes to control entire industries. What, then, will these big managers do with their potential power?
This Article argues that they will do less than we might think. And the reason is paradoxical: the biggest managers are too big to be activists. Their great size creates intense internal conflicts of interest that make aggressive activism extremely difficult or even impossible.
The largest managers operate hundreds of different investment funds, including mutual funds, hedge funds, and other vehicles that all invest in the same companies at the same times. This structure inhibits activism, because it turns activism into a source of internal conflict. Activism by one of a manager’s funds can damage the interests of the manager’s other funds. If a BlackRock hedge fund invests in a company’s equity, for instance, at the same time a BlackRock mutual fund invests in the company’s debt, then any attempt by either fund to turn the company in its favor will harm the interests of the other fund. The hedge fund and mutual fund might similarly come into conflict over the political and branding risks of activism and the allocation of costs and profits. Federal securities regulation and poison pills can create even more conflicts, often turning activism by a hedge fund into serious legal problems for its manager’s entirely passive mutual funds. A big manager, in other words, is like a lawyer with many clients: its advocacy for one client can harm the interests of another.
The debate about horizontal shareholding and index fund activism has ignored this truth. Research on horizontal ownership tends to treat a manager and its funds as though they were a single unit with no differences among them. Traditional analyses of institutional shareholder activism tend to go in the opposite direction, treating mutual funds as though they were totally independent, with no connection to other funds under the same management.
By introducing a subtler understanding of big managers’ structures, I make better sense of shareholder activism. Among other things, I show why aggressive activism tends to come entirely from small managers—that is, from the managers whose potential for activism is actually the weakest.

In the wake of widespread revelations about sexual abuse by Harvey Weinstein, Larry Nassar, and others, the United States is reckoning with the past and present and searching for the means to prevent and punish such offenses in the future. The scourge of sexual crimes goes far beyond instances perpetrated by powerful men; this misconduct is rampant throughout the country. In some of these cases, third parties knew about the abuse and did not try to intervene. Scrutiny of—and the response to—such bystanderism is increasing, including in the legal world.
In order to align law and society more closely with morality, this Article proposes a more holistic, aggressive approach to prompt involvement by third parties who are aware of specific instances of sexual crimes in the United States. This Article begins by documenting the contemporary scope of sexual crimes in the United States and the crucial role bystanders play in facilitating them.
The Article next provides an overview and assessment of “Bad Samaritan laws”: statutes that impose a legal duty to assist others in peril through intervening directly (also known as the “duty to rescue”) or notifying authorities (also known as the “duty to report”). Such laws exist in dozens of foreign countries and, to varying degrees, in twenty-nine U.S. states, Puerto Rico, U.S. federal law, and international law. The author has assembled the most comprehensive global database of Bad Samaritan laws, which provides an important corrective to other scholars’ mistaken claims about the rarity of such statutes, particularly in the United States. Despite how widespread these laws are in the United States, violations are seldom, if ever, charged or successfully prosecuted.
Drawing on historical research, trial transcripts, and interviews with prosecutors, judges, investigators, and “upstanders” (people who intervene to help others in need), the Article then describes four prominent cases in the United States involving witnesses to sexual crimes. Each case provides insight into the range of conduct of both bystanders and upstanders.
Because not all such actors are equal, grouping them together under the general categories of “bystanders” and “upstanders” obscures distinct roles, duties, and culpability for violating those duties. Drawing on the case studies, this Article thus presents original typologies of bystanders (including eleven categories or sub-categories), upstanders (including seven categories), and both kinds of actors (including four categories), which introduce greater nuance into these classifications and this Article’s proposed range of legal (and moral) responsibilities. These typologies are designed to maximize generalizability to crimes and crises beyond sexual abuse.
Finally, the Article prescribes a new approach to the duty to report sexual abuse and possibly other crimes and crises through a combination of negative incentives (“sticks”) and positive incentives (“carrots”) for third parties. These recommendations benefit from interviews with sexual violence prevention professionals, police, legislators, and social media policy counsel. Legal prescriptions draw on this Article’s typologies. They include strengthening, spreading, and standardizing duty-to-report laws at the state and territory levels; introducing the first general federal legal duty to report sexual crimes and possibly other offenses (such as human trafficking); exempting from liability one of the two main bystander categories the Article proposes (“excused bystanders”) and each of its six sub-categories (survivors, “confidants,” “unaware bystanders,” children, “endangered bystanders,” and “self-incriminators”); actually charging the other main bystander category the Article proposes (“unexcused bystanders”) and each of its three sub-categories (“abstainers,” “engagers,” and “enablers”) with violations of duty-to-report laws, or leveraging these statutes to obtain testimony from such actors; and more consistently charging “enablers” with alternative or additional crimes, such as accomplice liability. Social prescriptions draw on models and lessons from domestic and foreign contexts, as well as this Article’s typologies, to recommend, among other initiatives, raising public awareness of duty-to-report laws and creating what the Article calls “upstander commissions” to identify, and “upstander prizes” to honor, a category of upstanders the Article proposes (“corroborated upstanders”), including for their efforts to mitigate sexual crimes. A combination of these carrots and sticks could prompt would-be bystanders to act instead as upstanders and help stem the sexual crime epidemic.

Until January 2018, under the border search exception, officers of U.S. Customs and Border Protection (“CBP”) were afforded the power to search any electronic device without meeting any standard of suspicion or acquiring a warrant. The border search exception is a “longstanding, historically recognized exception to the Fourth Amendment’s general principle that a warrant be obtained . . . .” It provides that suspicionless and warrantless searches at the border do not violate the Fourth Amendment because such searches are “reasonable simply by virtue of the fact that they occur at the border . . . .” The CBP, claiming that the border search exception applies to electronic devices, searched more devices in 2017 than ever before, an approximately 60 percent increase over 2016 according to data released by the CBP. These “digital strip searches” violate travelers’ First, Fourth, and Fifth Amendment rights. With the advent of smartphones and the expanded use of electronic devices for storing people’s extremely personal data, these searches violate an individual’s right to privacy. Simply by traveling into the United States with a device linked to such information, a person suddenly—and, currently, unexpectedly—opens a window for the government to search through seemingly every aspect of his or her life. The policy behind these searches at the border does not align with the core principles behind our longstanding First and Fifth Amendment protections, nor does it align with the policies behind the exceptions made to constitutional rights at the border in the past.
In order to protect the privacy and rights of both citizens and noncitizens entering the United States, the procedures concerning electronic device searches need to be rectified. For instance, the border search exception should not be applied to electronic devices the same way it applies to other property or storage containers, like a backpack. One is less likely to expect privacy in the contents of a backpack than in the contents of a password- or authorization-protected device—unlike a locked device, a backpack can be taken, can be opened easily, can fall open, and has traditionally been subjected to searches at the border. Moreover, there are many reasons why electronic devices warrant privacy.