Below, this Article introduces the relevant case law by examining the recent case of United States v. Hill, a federal Hate Crimes Prevention Act prosecution of a battery committed against a gay fellow employee at an Amazon Fulfillment Center. There follows a brief tour of the most crucially relevant Supreme Court Commerce Clause jurisprudence, with an emphasis on current doctrine.
In light of these materials, this Article then highlights a number of largely unsolvable problems in trying to delimit the scope of the Commerce Clause power. There is, to begin with, the problem of vagueness: legal language in general, and the key terms of the Commerce Clause in particular, resist precise definition, and this vagueness impairs attempts to clarify the meaning and bounds of the Clause's language.
This Article articulates the downsides to treating climate change as a national security issue and demonstrates how the U.N.-mandated concept of “human security” provides a more effective framework. Human security realizes the benefits of securitization while lessening its costs. It does so by focusing on people, rather than the state, and emphasizing the sustainable development policies necessary to mitigate, rather than merely adapt to, climate change. While explored here in detail, these arguments are part of a larger, ongoing project examining how the human security paradigm can generate more effective legal solutions than a national security framework for global challenges, like climate change.
Part I of this Article briefly examines calls to treat climate change as a national security issue, specifically from within the grassroots climate change movement, and canvasses the benefits of doing so. Part II explores the downsides to securitizing climate change and demonstrates how a human security approach resolves these concerns. Overall, this Article accepts the view that a security-oriented attitude towards climate change is vital to meaningful action on the issue. It takes the position, however, that this approach must both align with liberal democratic values and facilitate solutions for mitigating the climate crisis. These changes to the prevailing security paradigm are unlikely to come from the state itself, which is invested in maintaining a state-centered view of security. Change must, instead, be led by civil society—particularly the climate change movement, which has the most incentive to take action on these issues.
This Paper argues that in the wake of the Supreme Court’s 2018 decision, Murphy v. NCAA—a case completely unrelated to immigration—there is now a single best answer to the constitutional question presented in the ongoing sanctuary jurisdiction cases. The answer is that the Trump Administration’s withholding of federal grants is indeed unconstitutional, but this is because Section 1373, the statute on which the Executive’s actions are predicated, is itself unconstitutional. Specifically, this Paper argues that the expansion of the anti-commandeering doctrine under Murphy provides a tool by which the federal appellate courts can invalidate Section 1373 as an impermissible federal regulation of state and local governments. By adopting this approach, courts can move past the comparatively surface-level questions about the Executive’s power to enforce a particular federal statute, and instead address the more central issue: the existence of Section 1373.
This argument proceeds in the following stages. Part I provides a background for each of the central concepts in this analysis. These include (1) an explanation of the anti-commandeering doctrine in its pre- and post-Murphy forms, (2) a description of Section 1373, (3) a working definition of “sanctuary jurisdictions,” and (4) a brief overview of the sanctuary jurisdiction cases decided to date. Part II argues that, in light of the Supreme Court’s decision in Murphy, there is no question that Section 1373 is subject to anti-commandeering claims. Part III then argues that, as a matter of doctrine, Section 1373 cannot withstand such claims because it does not qualify for any exception to the anti-commandeering rule. Finally, Part IV argues that, apart from Supreme Court precedent, there are a series of independent, normative reasons to strike down Section 1373. This Paper concludes that Section 1373 should be held unconstitutional when its challenge reaches the higher federal courts, including the Supreme Court of the United States if necessary, and that such a ruling is the most desirable method of resolving the sanctuary jurisdiction cases.
This Note will argue that although the CCPA was imperfectly drafted, much of the world seems to be moving toward a standard that embraces data privacy protection, and the CCPA is a positive step in that direction. However, the CCPA does contain several ambiguous and potentially problematic provisions, including provisions that invite possible First Amendment and Dormant Commerce Clause challenges, that should be addressed by the California Legislature. While a federal data privacy standard would make compliance considerably easier, any such law enacted in the near future is unlikely to offer data privacy protections as significant as the CCPA’s; it would instead be a watered-down version of the CCPA that preempts attempts by California and other states to establish strong, comprehensive data privacy regimes. Ultimately, the United States should adopt a federal standard that offers consumers protections as strong as those of the GDPR or the CCPA. Part I of this Note will describe the elements of the GDPR and the CCPA and will offer a comparative analysis of the regulations. Part II of this Note will address potential shortcomings of the CCPA, including a constitutional analysis of the law and its problematic provisions. Part III of this Note will discuss the debate between consumer privacy advocates and technology companies regarding federal preemption of strict laws like the CCPA. It will also make predictions about, and offer solutions for, the future of the CCPA and United States data privacy legislation based on a discussion of global data privacy trends and possible federal government actions.
In the United States, there are now two systems to adjudicate disputes about harmful speech. The first is older and more established: the legal system in which judges apply constitutional law to limit tort claims alleging injuries caused by speech. The second is newer and less familiar: the content-moderation system in which platforms like Facebook implement the rules that govern online speech. These platforms are not bound by the First Amendment. But, as it turns out, they rely on many of the tools used by courts to resolve tensions between regulating harmful speech and preserving free expression—particularly the entangled concepts of “public figures” and “newsworthiness.”
This Article offers the first empirical analysis of how judges and content moderators have used these two concepts to shape the boundaries of free speech. It first introduces the legal doctrines developed by the “Old Governors,” exploring how courts have shaped the constitutional concepts of public figures and newsworthiness in the face of tort claims for defamation, invasion of privacy, and intentional infliction of emotional distress. The Article then turns to the “New Governors” and examines how Facebook’s content-moderation system channeled elements of the courts’ reasoning for imposing First Amendment limits on tort liability.
By exposing the similarities and differences between how the two systems have understood these concepts, this Article offers lessons for both courts and platforms as they confront new challenges posed by online speech. It exposes the pitfalls of using algorithms to identify public figures; explores the diminished utility of setting rules based on voluntary involvement in public debate; and analyzes the dangers of ad hoc and unaccountable newsworthiness determinations. Both courts and platforms must adapt to the new speech ecosystem that companies like Facebook have helped create, particularly the way that viral content has shifted normative intuitions about who deserves harsher rules in disputes about harmful speech, be it in law or content moderation.
Finally, the Article concludes by exploring what this comparison reveals about the structural role platforms play in today’s speech ecosystem and how it illuminates new solutions. These platforms act as legislature, executive, judiciary, and press—but without any separation of powers to establish checks and balances. A change to this model is already occurring at one platform: Facebook is creating a new Oversight Board that will hopefully provide due process to users on the platform’s speech decisions and transparency about how content-moderation policy is made, including how concepts related to newsworthiness and public figures are applied.
Artificial intelligence (“AI”), and machine learning in particular, promises lawmakers greater specificity and fewer errors. Algorithmic lawmaking and judging will leverage models built from large stores of data that permit the creation and application of finely tuned rules. AI is therefore regarded as something that will bring about a movement from standards towards rules. Drawing on contemporary data science, this Article shows that machine learning is less impressive when the past is unlike the future, as it is whenever new variables appear over time. In the absence of such regularities, machine learning loses its advantage and, as a result, looser standards can become superior to rules. We apply this insight to bail and sentencing decisions, as well as familiar corporate and contract law rules. More generally, we show that a Human-AI combination can be superior to AI acting alone. Just as today’s judges overrule errors and outmoded precedent, tomorrow’s lawmakers will sensibly overrule AI in legal domains where the challenges of measurement are present. When measurement is straightforward and prediction is accurate, rules will prevail. When empirical limitations such as overfit, Simpson’s Paradox, and omitted variables make measurement difficult, AI should be trusted less and law should give way to standards. We introduce readers to the phenomenon of reversal paradoxes, and we suggest that in law, where huge data sets are rare, AI should not be expected to outperform humans.
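The reversal paradoxes the abstract above invokes can be made concrete with a small sketch. The figures below are the classic kidney-stone illustration of Simpson's Paradox, not data from the Article: treatment A outperforms treatment B within every subgroup, yet appears worse once the subgroups are pooled, which is exactly the kind of reversal that can mislead a rule derived from aggregate statistics.

```python
# Illustrative sketch of Simpson's Paradox (classic kidney-stone figures,
# not data from the Article). Treatment A beats B within each subgroup,
# but the ordering reverses in the pooled data.

# (successes, trials) for each treatment, split by stone size
data = {
    "small": {"A": (81, 87),   "B": (234, 270)},
    "large": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

# Within each subgroup, A has the higher success rate.
for size, arms in data.items():
    assert rate(*arms["A"]) > rate(*arms["B"])

# Pooled across subgroups, the ordering reverses: B looks better.
total = {arm: tuple(sum(v) for v in zip(*(data[s][arm] for s in data)))
         for arm in ("A", "B")}
assert rate(*total["A"]) < rate(*total["B"])

print(f"A overall: {rate(*total['A']):.3f}, B overall: {rate(*total['B']):.3f}")
# → A overall: 0.780, B overall: 0.826
```

The reversal arises because treatment A was assigned disproportionately to the harder (large-stone) cases; a lawmaking algorithm trained only on the pooled table would encode the wrong rule.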
Where you go to college and what you choose to study have always been important, but, with the help of data science, they may now determine whether you get a student loan. Silicon Valley is increasingly setting its sights on student lending. Financial technology (“fintech”) firms such as SoFi, CommonBond, and Upstart are ever-expanding their online lending activities to help students finance or refinance educational expenses. These online companies are using a wide array of alternative, education-based data points—ranging from applicants’ chosen majors, assessment scores, the college or university they attend, job history, and cohort default rates—to determine creditworthiness. Fintech firms argue that through their low overhead and innovative approaches to lending they are able to widen access to credit for underserved Americans. Indeed, there is much to recommend the use of different kinds of information about young consumers in order to assess their financial ability. Student borrowers are notoriously disadvantaged by the extant scoring system, which heavily favors having a past credit history. Yet there are also downsides to the use of education-based, alternative data by private lenders. This Article critiques the use of this education-based information, arguing that while it can promote social mobility, it also carries significant downsides. Chief among these are reifying existing credit barriers along lines of wealth and class and further contributing to discriminatory lending practices that harm women, black and Latino Americans, and other minority groups. The discrimination issue is particularly salient because of the novel and opaque underwriting algorithms that facilitate these online loans. This Article concludes by proposing three-pillared regulatory guidance for private student lenders to use in designing, implementing, and monitoring their education-based data lending programs.
Algorithms are now used to make significant decisions about individuals, from credit determinations to hiring and firing. But they are largely unregulated under U.S. law. A quickly growing literature has split on how to address algorithmic decision-making, with individual rights and accountability to nonexpert stakeholders and to the public at the crux of the debate. In this Article, I make the case for why both individual rights and public- and stakeholder-facing accountability are not just goods in and of themselves but crucial components of effective governance. Only individual rights can fully address dignitary and justificatory concerns behind calls for regulating algorithmic decision-making. And without some form of public and stakeholder accountability, collaborative public-private approaches to systemic governance of algorithms will fail.
In this Article, I identify three categories of concern behind calls for regulating algorithmic decision-making: dignitary, justificatory, and instrumental. Dignitary concerns lead to proposals that we regulate algorithms to protect human dignity and autonomy; justificatory concerns caution that we must assess the legitimacy of algorithmic reasoning; and instrumental concerns lead to calls for regulation to prevent consequent problems such as error and bias. No one regulatory approach can effectively address all three. I therefore propose a two-pronged approach to algorithmic governance: a system of individual due process rights combined with systemic regulation achieved through collaborative governance (the use of private-public partnerships). Only through this binary approach can we effectively address all three concerns raised by algorithmic decision-making, or decision-making by Artificial Intelligence (“AI”).
The interplay between the two approaches will be complex. Sometimes the two systems will be complementary, and at other times, they will be in tension. The European Union’s (“EU’s”) General Data Protection Regulation (“GDPR”) is one such binary system. I explore the extensive collaborative governance aspects of the GDPR and how they interact with its individual rights regime. Understanding the GDPR in this way both illuminates its strengths and weaknesses and provides a model for how to construct a better governance regime for accountable algorithmic, or AI, decision-making. It shows, too, that in the absence of public and stakeholder accountability, individual rights can have a significant role to play in establishing the legitimacy of a collaborative regime.
The recent financial crisis demonstrated that, contrary to longstanding regulatory assumptions, nonbank financial firms—such as investment banks and insurance companies—can propagate systemic risk throughout the financial system. After the crisis, policymakers in the United States and abroad developed two different strategies for dealing with nonbank systemic risk. The first strategy seeks to regulate individual nonbank entities that officials designate as being potentially systemically important. The second approach targets financial activities that could create systemic risk, irrespective of the types of firms that engage in those transactions. In the last several years, domestic and international policymakers have come to view these two strategies as substitutes, largely abandoning entity-based designations in favor of activities-based approaches. This Article argues that this trend is deeply misguided because entity- and activities-based approaches are complementary tools that are each essential for effectively regulating nonbank systemic risk. Eliminating an entity-based approach to nonbank systemic risk—either formally or through onerous procedural requirements—would expose the financial system to the same risks that it experienced in 2008 as a result of distress at nonbanks like AIG, Bear Stearns, and Lehman Brothers. This conclusion is especially salient in the United States, where jurisdictional fragmentation undermines the capacity of financial regulators to implement an effective activities-based approach. Significant reforms to the U.S. regulatory framework are necessary, therefore, before an activities-based approach can meaningfully complement domestic entity-based systemic risk regulation.
Big investment managers, such as Vanguard and Fidelity, have accumulated an astonishing amount of common stock in America’s public companies—so much that they now have enough corporate votes to control entire industries. What, then, will these big managers do with their potential power?
This Article argues that they will do less than we might think. And the reason is paradoxical: the biggest managers are too big to be activists. Their great size creates intense internal conflicts of interest that make aggressive activism extremely difficult or even impossible.
The largest managers operate hundreds of different investment funds, including mutual funds, hedge funds, and other vehicles that all invest in the same companies at the same times. This structure inhibits activism, because it turns activism into a source of internal conflict. Activism by one of a manager’s funds can damage the interests of the manager’s other funds. If a BlackRock hedge fund invests in a company’s equity, for instance, at the same time a BlackRock mutual fund invests in the company’s debt, then any attempt by either fund to turn the company in its favor will harm the interests of the other fund. The hedge fund and mutual fund might similarly come into conflict over the political and branding risks of activism and the allocation of costs and profits. Federal securities regulation and poison pills can create even more conflicts, often turning activism by a hedge fund into serious legal problems for its manager’s entirely passive mutual funds. A big manager, in other words, is like a lawyer with many clients: its advocacy for one client can harm the interests of another.
The debate about horizontal shareholding and index fund activism has ignored this truth. Research on horizontal ownership tends to treat a manager and its funds as though they were a single unit with no differences among them. Traditional analyses of institutional shareholder activism tend to go in the opposite direction, treating mutual funds as though they were totally independent with no connection to other funds under the same management.
By introducing a subtler understanding of big managers’ structures, I make better sense of shareholder activism. Among other things, I show why aggressive activism tends to come entirely from small managers—that is, from the managers whose potential for activism is actually the weakest.