From warnings of the “entitlement epidemic” brewing in our homes to accusations that Barack Obama “replac[ed] our merit-based society with an Entitlement Society,” entitlements carry new meaning these days, with particularly negative psychological and behavioral connotations. As Mitt Romney once put it, entitlements “can only foster passivity and sloth.” For conservatives, racial entitlements have emerged in this milieu as one insidious form of entitlement. In 2013, for example, Justice Scalia famously declared the Voting Rights Act a racial entitlement, as he had labeled affirmative action several decades before.

In this Article, I draw upon and upend the concept of racial entitlement as it is used in modern political and judicial discourse, taking the concept from mere epithet to theory and setting the stage for future empirical work. Building on social science research on psychological entitlement, as well as theories and research from sociology on group-based perceptions and actions, I define a racial entitlement as a state-provided or state-backed benefit from which emerges a belief of self-deservedness based on membership in a racial category alone. Contrary to what conservatives who use the term would have us believe, I argue that racial entitlements can be identified only by examining government policies as they interact with social expectations. I explain why the Voting Rights Act and affirmative action are not likely to amount to racial entitlements for blacks and racial minorities, and I present one way in which antidiscrimination law today may amount to a racial entitlement—for whites.
Theorizing racial entitlements gives us a language to describe more accurately some of the circumstances under which racial subordination and conflict emerge. More importantly, it gives us a concrete sense of one way in which laws can interact with people to entrench inequality and foster conflict. It uncovers the psychological and emotional elements of racial entitlements that can turn seemingly neutral laws, as well as those that explicitly rely on racial classifications, against broader nondiscrimination goals. This conceptual gain, in turn, can open up new avenues for research and thought. And it offers a practical payoff: the ability to isolate laws or government programs that are likely to amount to racial entitlements and to target them for change.

A chorus of critics, led by the late Justice Scalia, has condemned the practice of federal courts’ refraining from hearing cases over which they have subject-matter jurisdiction out of international comity—respect for the governmental interests of other nations. They assail the practice as an unprincipled abandonment of judicial duty, and as unnecessary given statutes and settled judicial doctrines that amply protect foreign governmental interests and guide the lower courts. But existing statutes and doctrines do not provide adequate answers in the myriad cases in which such interests are implicated, given the scope of present-day globalization and the features of the U.S. legal system that attract foreign litigants. The problem is ubiquitous. For instance, four cases decided in the Supreme Court’s October Term 2017 raised international comity concerns and illustrate the Court’s difficulty in grappling with these issues.

This Article cuts against the prevailing academic commentary (endorsed, to some extent, by the newly minted Restatement (Fourth) of the Foreign Relations Law of the United States) and presents the first sustained defense of the widespread practice of international comity abstention in the lower federal courts—a practice the Supreme Court has not yet passed on but will almost certainly confront soon. At the same time, we acknowledge that the critics are right to assert that the way lower courts currently implement international comity—through a multi-factored interest analysis—is too manipulable and invites judicial shirking. Consequently, we propose a new federal common law framework for international comity based in part on historical practice from the Founding to the early twentieth century, when federal courts frequently dealt with cases implicating foreign governmental interests with scant congressional or executive guidance, primarily in the maritime context. That old law is newly relevant. What is called for is forthright recognition of a federal common law doctrine of international comity that enables courts to exercise principled discretion in dealing with asserted foreign governmental interests and that clears up the conceptual confusion between the prescriptive and adjudicative manifestations of international comity.

In the United States, there are now two systems to adjudicate disputes about harmful speech. The first is older and more established: the legal system in which judges apply constitutional law to limit tort claims alleging injuries caused by speech. The second is newer and less familiar: the content-moderation system in which platforms like Facebook implement the rules that govern online speech. These platforms are not bound by the First Amendment. But, as it turns out, they rely on many of the tools used by courts to resolve tensions between regulating harmful speech and preserving free expression—particularly the entangled concepts of “public figures” and “newsworthiness.”
This Article offers the first empirical analysis of how judges and content moderators have used these two concepts to shape the boundaries of free speech. It first introduces the legal doctrines developed by the “Old Governors,” exploring how courts have shaped the constitutional concepts of public figures and newsworthiness in the face of tort claims for defamation, invasion of privacy, and intentional infliction of emotional distress. The Article then turns to the “New Governors” and examines how Facebook’s content-moderation system channeled elements of the courts’ reasoning for imposing First Amendment limits on tort liability.
By exposing the similarities and differences between how the two systems have understood these concepts, this Article offers lessons for both courts and platforms as they confront new challenges posed by online speech. It exposes the pitfalls of using algorithms to identify public figures; explores the diminished utility of setting rules based on voluntary involvement in public debate; and analyzes the dangers of ad hoc and unaccountable newsworthiness determinations. Both courts and platforms must adapt to the new speech ecosystem that companies like Facebook have helped create, particularly the way that viral content has shifted normative intuitions about who deserves harsher rules in disputes about harmful speech, be it in law or content moderation.
The Article concludes by exploring what this comparison reveals about the structural role platforms play in today’s speech ecosystem and how it illuminates new solutions. These platforms act as legislature, executive, judiciary, and press—but without any separation of powers to establish checks and balances. A change to this model is already occurring at one platform: Facebook is creating a new Oversight Board that, it is hoped, will provide users with due process on the platform’s speech decisions and with transparency about how content-moderation policy is made, including how concepts related to newsworthiness and public figures are applied.

Artificial intelligence (“AI”), and machine learning in particular, promises lawmakers greater specificity and fewer errors. Algorithmic lawmaking and judging will leverage models built from large stores of data that permit the creation and application of finely tuned rules. AI is therefore expected to bring about a movement from standards toward rules. Drawing on contemporary data science, this Article shows that machine learning is less impressive when the past is unlike the future, as it is whenever new variables appear over time. In the absence of such regularities, machine learning loses its advantage and, as a result, looser standards can become superior to rules. We apply this insight to bail and sentencing decisions, as well as to familiar corporate and contract law rules. More generally, we show that a Human-AI combination can be superior to AI acting alone. Just as today’s judges overrule errors and outmoded precedent, tomorrow’s lawmakers will sensibly overrule AI in legal domains where the challenges of measurement are present. When measurement is straightforward and prediction is accurate, rules will prevail. When empirical limitations such as overfit, Simpson’s Paradox, and omitted variables make measurement difficult, AI should be trusted less and law should give way to standards. We introduce readers to the phenomenon of reversal paradoxes, and we suggest that in law, where huge data sets are rare, AI should not be expected to outperform humans.
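To make the reversal-paradox point concrete, consider a stylized numerical sketch (the cohorts, error counts, and comparison below are hypothetical illustrations, not data from the Article): an algorithm can be more accurate than human judges within every subgroup of cases and yet less accurate overall, simply because the two face different mixes of cases.

```python
# Stylized illustration of a reversal (Simpson's) paradox of the kind the
# Article invokes. All numbers are hypothetical: (errors, cases) for an
# algorithm and for human judges, split across two cohorts of cases.
algorithm = {"cohort_1": (1, 10), "cohort_2": (30, 100)}
judges = {"cohort_1": (20, 100), "cohort_2": (4, 10)}

def rate(errors, cases):
    return errors / cases

# Within each cohort, the algorithm has the lower error rate...
for cohort in algorithm:
    assert rate(*algorithm[cohort]) < rate(*judges[cohort])

def pooled_rate(record):
    # Aggregate error rate across all cohorts combined.
    errors = sum(e for e, _ in record.values())
    cases = sum(n for _, n in record.values())
    return errors / cases

# ...yet pooled across cohorts the comparison reverses (28.2% vs. 21.8%),
# because each decision-maker handles a different mix of easy and hard cases.
print(f"algorithm: {pooled_rate(algorithm):.1%}, judges: {pooled_rate(judges):.1%}")
assert pooled_rate(algorithm) > pooled_rate(judges)
```

The reversal is driven entirely by the differing case mixes; neither decision-maker’s within-cohort accuracy changes between the subgroup and pooled views, which is why aggregate performance comparisons of this kind can mislead.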

Where you go to college and what you choose to study have always been important, but, with the help of data science, they may now determine whether you get a student loan. Silicon Valley is increasingly setting its sights on student lending. Financial technology (“fintech”) firms such as SoFi, CommonBond, and Upstart are ever-expanding their online lending activities to help students finance or refinance educational expenses. These online companies use a wide array of alternative, education-based data points—ranging from applicants’ chosen majors and assessment scores to the colleges or universities they attend, their job histories, and cohort default rates—to determine creditworthiness. Fintech firms argue that, through their low overhead and innovative approaches to lending, they are able to widen access to credit for underserved Americans. Indeed, there is much to recommend the use of different kinds of information about young consumers in order to assess their financial ability. Student borrowers are notoriously disadvantaged by the extant scoring system, which heavily favors having a past credit history. Yet there are also downsides to the use of education-based, alternative data by private lenders. This Article critiques such uses, arguing that while education-based information can have a positive effect in promoting social mobility, its downsides are significant. Chief among these are reifying existing credit barriers along lines of wealth and class and further contributing to discriminatory lending practices that harm women, black and Latino Americans, and other minority groups. The discrimination issue is particularly salient because of the novel and opaque underwriting algorithms that facilitate these online loans. This Article concludes by proposing three-pillared regulatory guidance for private student lenders to use in designing, implementing, and monitoring their education-based data lending programs.

Algorithms are now used to make significant decisions about individuals, from credit determinations to hiring and firing, but they remain largely unregulated under U.S. law. A rapidly growing literature has split on how to address algorithmic decision-making, with individual rights and accountability to nonexpert stakeholders and to the public at the crux of the debate. In this Article, I make the case that both individual rights and public- and stakeholder-facing accountability are not just goods in and of themselves but also crucial components of effective governance. Only individual rights can fully address the dignitary and justificatory concerns behind calls for regulating algorithmic decision-making. And without some form of public and stakeholder accountability, collaborative public-private approaches to the systemic governance of algorithms will fail.

In this Article, I identify three categories of concern behind calls for regulating algorithmic decision-making: dignitary, justificatory, and instrumental. Dignitary concerns lead to proposals that we regulate algorithms to protect human dignity and autonomy; justificatory concerns caution that we must assess the legitimacy of algorithmic reasoning; and instrumental concerns lead to calls for regulation to prevent consequent problems such as error and bias. No single regulatory approach can effectively address all three. I therefore propose a two-pronged approach to algorithmic governance: a system of individual due process rights combined with systemic regulation achieved through collaborative governance (the use of public-private partnerships). Only through this binary approach can we effectively address all three concerns raised by algorithmic decision-making, or decision-making by Artificial Intelligence (“AI”).

The interplay between the two approaches will be complex. Sometimes the two systems will be complementary, and at other times, they will be in tension. The European Union’s (“EU’s”) General Data Protection Regulation (“GDPR”) is one such binary system. I explore the extensive collaborative governance aspects of the GDPR and how they interact with its individual rights regime. Understanding the GDPR in this way both illuminates its strengths and weaknesses and provides a model for how to construct a better governance regime for accountable algorithmic, or AI, decision-making. It shows, too, that in the absence of public and stakeholder accountability, individual rights can have a significant role to play in establishing the legitimacy of a collaborative regime.

The recent financial crisis demonstrated that, contrary to longstanding regulatory assumptions, nonbank financial firms—such as investment banks and insurance companies—can propagate systemic risk throughout the financial system. After the crisis, policymakers in the United States and abroad developed two different strategies for dealing with nonbank systemic risk. The first strategy seeks to regulate individual nonbank entities that officials designate as being potentially systemically important. The second approach targets financial activities that could create systemic risk, irrespective of the types of firms that engage in those transactions. In the last several years, domestic and international policymakers have come to view these two strategies as substitutes, largely abandoning entity-based designations in favor of activities-based approaches. This Article argues that this trend is deeply misguided because entity- and activities-based approaches are complementary tools that are each essential for effectively regulating nonbank systemic risk. Eliminating an entity-based approach to nonbank systemic risk—either formally or through onerous procedural requirements—would expose the financial system to the same risks that it experienced in 2008 as a result of distress at nonbanks like AIG, Bear Stearns, and Lehman Brothers. This conclusion is especially salient in the United States, where jurisdictional fragmentation undermines the capacity of financial regulators to implement an effective activities-based approach. Significant reforms to the U.S. regulatory framework are necessary, therefore, before an activities-based approach can meaningfully complement domestic entity-based systemic risk regulation.

Big investment managers, such as Vanguard and Fidelity, have accumulated an astonishing amount of common stock in America’s public companies—so much that they now have enough corporate votes to control entire industries. What, then, will these big managers do with their potential power?
This Article argues that they will do less than we might think. And the reason is paradoxical: the biggest managers are too big to be activists. Their great size creates intense internal conflicts of interest that make aggressive activism extremely difficult or even impossible.
The largest managers operate hundreds of different investment funds, including mutual funds, hedge funds, and other vehicles that all invest in the same companies at the same time. This structure inhibits activism because it turns activism into a source of internal conflict: activism by one of a manager’s funds can damage the interests of the manager’s other funds. If a BlackRock hedge fund invests in a company’s equity, for instance, at the same time that a BlackRock mutual fund invests in the company’s debt, then any attempt by either fund to steer the company in its favor will harm the interests of the other fund. The hedge fund and mutual fund might similarly come into conflict over the political and branding risks of activism and over the allocation of costs and profits. Federal securities regulation and poison pills can create even more conflicts, often turning activism by a hedge fund into serious legal problems for its manager’s entirely passive mutual funds. A big manager, in other words, is like a lawyer with many clients: its advocacy for one client can harm the interests of another.
The debate about horizontal shareholding and index fund activism has ignored this truth. Research on horizontal ownership tends to treat a manager and its funds as though they were a single unit with no differences among them. Traditional analyses of institutional shareholder activism tend to go in the opposite direction, treating mutual funds as though they were totally independent, with no connection to other funds under the same management.
By introducing a subtler understanding of big managers’ structures, I make better sense of shareholder activism. Among other things, I show why aggressive activism tends to come entirely from small managers—that is, from the managers whose potential for activism is actually the weakest.