As AI increasingly features in everyday life, it is not surprising to hear calls to step up regulation of the technology. In particular, a turn to administrative law to grapple with the consequences of AI is understandable because the technology’s regulatory challenges appear facially similar to those in other technocratic domains, such as the pharmaceutical industry or environmental law. But AI is unique, even if it is not different in kind. AI’s distinctiveness comes from technical attributes—namely, speed, complexity, and unpredictability—that strain administrative law tactics, in conjunction with the institutional settings and incentives, or strategic context, that affect its development path. And this distinctiveness means both that traditional, sectoral approaches hit their limits, and that calls for a new agency like an “FDA for algorithms” or a “federal robotics commission” are of limited utility in constructing enduring governance solutions.

This Article assesses algorithmic governance strategies in light of the attributes and institutional factors that make AI unique. In addition to technical attributes and the contemporary imbalance of public and private resources and expertise, AI governance must contend with a fundamental conceptual challenge: algorithmic applications permit seemingly technical decisions to de facto regulate human behavior, with a greater potential for physical and social impact than ever before. This Article warns that the current trajectory of AI development, which is dominated by large private firms, augurs an era of private governance. To maintain the public voice, it suggests an approach rooted in governance of data—a fundamental AI input—rather than one that contends only with the consequences of algorithmic outputs. Without rethinking regulatory strategies to ensure that public values inform AI research, development, and deployment, we risk losing the democratic accountability that is at the heart of public law.

When did ideology become the major fault line of the California Supreme Court? To answer this question, we use a two-parameter item response theory (IRT) model to identify voting patterns in non-unanimous decisions by California Supreme Court justices from 1910 to 2011. The model shows that voting on the court became polarized along recognizably partisan lines beginning in the mid-1900s. During the first half of the century, justices usually did not vote in patterns that matched their political reputations and party affiliations. This began to change in the 1950s. After 1959, the dominant voting pattern is partisan and closely aligns with each justice’s political reputation. Our findings after 1959 largely confirm the conventional wisdom that voting on the modern court falls along political lines. But our findings call into question the usual characterization of the Lucas court (1987–1996) as a moderately conservative court. Our model shows that the conservatives dominated the Lucas court to the same degree that the liberals dominated the Traynor court (1964–1970).
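For readers unfamiliar with the method, the conventional two-parameter IRT (ideal-point) specification can be sketched as follows; the parameterization below is the standard one from the ideal-point literature and is offered as an assumption about the general form of the model, not a quotation of the authors’ exact specification.
```latex
% Conventional two-parameter IRT / ideal-point form
% (standard notation; assumed, not taken from the Article)
\Pr(y_{ij} = 1) \;=\; \Phi\!\left(\gamma_j \theta_i - \alpha_j\right)
```
Here y_{ij} = 1 when justice i joins a given bloc in non-unanimous case j, \theta_i is the justice’s estimated ideal point, \gamma_j is the case’s discrimination parameter (how sharply it separates the blocs), and \alpha_j is its cut point. The recovered ideal points \theta_i are the quantities one would then compare against each justice’s political reputation and party affiliation.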

More broadly, this Article confirms that an important development occurred in American law at the turn of the half-century. A previous study used the same model to identify voting patterns on the New York Court of Appeals from 1900 to 1941 and to investigate whether those voting patterns were best explained by the justices’ political reputations. That study found consistently patterned voting for most of the 40 years. But the dominant dimension of disagreement on the court for much of the period was not political in the usual sense of that term. Our finding that the dominant voting pattern on the California Supreme Court was non-political in the first half of the 1900s parallels the New York study’s findings for the period before 1941. Carrying the voting pattern analysis forward in time, this Article finds that in the mid-1900s the dominant voting pattern became aligned with the justices’ political reputations due to a change in the voting pattern in criminal law and tort cases that dominated the court’s docket. Together, these two studies provide empirical evidence that judicial decision-making changed in the United States in the mid-1900s as judges divided into ideological camps on a broad swath of issues.

From warnings of the “entitlement epidemic” brewing in our homes to accusations that Barack Obama “replac[ed] our merit-based society with an Entitlement Society,” entitlements carry new meaning these days, with particularly negative psychological and behavioral connotations. As Mitt Romney once put it, entitlements “can only foster passivity and sloth.” For conservatives, racial entitlements emerge in this milieu as one insidious form of entitlements. In 2013, Justice Scalia, for example, famously declared the Voting Rights Act a racial entitlement, as he had labeled affirmative action several decades before.

In this Article, I draw upon and upend the concept of racial entitlement as it is used in modern political and judicial discourse, taking the concept from mere epithet to theory and setting the stage for future empirical work. Building on research in the social sciences on psychological entitlement and also on theories and research from sociology on group-based perceptions and actions, I define a racial entitlement as a state-provided or backed benefit from which emerges a belief of self-deservedness based on membership in a racial category alone. Contrary to what conservatives who use the term would have us believe, I argue that racial entitlements can be identified only by examining government policies as they interact with social expectations. I explain why the Voting Rights Act and affirmative action are not likely to amount to racial entitlements for blacks and racial minorities, and I present one way in which antidiscrimination law today may amount to a racial entitlement—for whites.
Theorizing racial entitlements provides a language with which to describe more accurately some of the circumstances under which racial subordination and conflict emerge. More importantly, it gives us a concrete sense of one way in which laws can interact with people to entrench inequality and foster conflict. It uncovers the psychological and emotional elements of racial entitlements that can turn seemingly neutral laws, as well as those that explicitly rely on racial classifications, against broader nondiscrimination goals. This conceptual gain, in turn, can open up new avenues for research and thought. And it can provide a practical payoff: the ability to isolate, for targeted change, laws or government programs that are likely to amount to racial entitlements.

A chorus of critics, led by the late Justice Scalia, has condemned the practice of federal courts’ refraining from hearing cases over which they have subject-matter jurisdiction because of international comity—respect for the governmental interests of other nations. They assail the practice as an unprincipled abandonment of judicial duty and as unnecessary, given statutes and settled judicial doctrines that amply protect foreign governmental interests and guide the lower courts. But existing statutes and doctrines do not give adequate answers to the myriad cases in which such interests are implicated, given the scope of present-day globalization and the features of the U.S. legal system that attract foreign litigants. The problem is ubiquitous. For instance, four cases decided in the Supreme Court’s 2017 October Term raised international comity concerns and illustrate the Court’s difficulty grappling with these issues.

This Article cuts against prevailing academic commentary (endorsed, to some extent, by the newly minted Restatement (Fourth) of the Foreign Relations Law of the United States) and presents the first sustained defense of the widespread practice of international comity abstention in the lower federal courts—a practice the Supreme Court has not yet passed on but will almost certainly decide soon. At the same time, we acknowledge that the critics are right to assert that the way lower courts currently implement international comity—through a multi-factored interest analysis—is too manipulable and invites judicial shirking. Consequently, we propose a new federal common law framework for international comity based in part on historical practice from the Founding to the early twentieth century, when federal courts frequently dealt with cases implicating foreign governmental interests with scant congressional or executive guidance, primarily in the maritime context. That old law is newly relevant. What is called for is forthright recognition of a federal common law doctrine of international comity that enables courts to exercise principled discretion in dealing with asserted foreign governmental interests and clears up conceptual confusion between prescriptive and adjudicative manifestations of international comity.

In the United States, there are now two systems to adjudicate disputes about harmful speech. The first is older and more established: the legal system in which judges apply constitutional law to limit tort claims alleging injuries caused by speech. The second is newer and less familiar: the content-moderation system in which platforms like Facebook implement the rules that govern online speech. These platforms are not bound by the First Amendment. But, as it turns out, they rely on many of the tools used by courts to resolve tensions between regulating harmful speech and preserving free expression—particularly the entangled concepts of “public figures” and “newsworthiness.”
This Article offers the first empirical analysis of how judges and content moderators have used these two concepts to shape the boundaries of free speech. It first introduces the legal doctrines developed by the “Old Governors,” exploring how courts have shaped the constitutional concepts of public figures and newsworthiness in the face of tort claims for defamation, invasion of privacy, and intentional infliction of emotional distress. The Article then turns to the “New Governors” and examines how Facebook’s content-moderation system channeled elements of the courts’ reasoning for imposing First Amendment limits on tort liability.
By exposing the similarities and differences between how the two systems have understood these concepts, this Article offers lessons for both courts and platforms as they confront new challenges posed by online speech. It exposes the pitfalls of using algorithms to identify public figures; explores the diminished utility of setting rules based on voluntary involvement in public debate; and analyzes the dangers of ad hoc and unaccountable newsworthiness determinations. Both courts and platforms must adapt to the new speech ecosystem that companies like Facebook have helped create, particularly the way that viral content has shifted normative intuitions about who deserves harsher rules in disputes about harmful speech, be it in law or content moderation.
The Article concludes by exploring what this comparison reveals about the structural role platforms play in today’s speech ecosystem and how it illuminates new solutions. These platforms act as legislature, executive, judiciary, and press—but without any separation of powers to establish checks and balances. A change to this model is already occurring at one platform: Facebook is creating a new Oversight Board that will hopefully provide users with due process in the platform’s speech decisions and transparency about how content-moderation policy is made, including how concepts related to newsworthiness and public figures are applied.

Artificial intelligence (“AI”), and machine learning in particular, promises lawmakers greater specificity and fewer errors. Algorithmic lawmaking and judging will leverage models built from large stores of data that permit the creation and application of finely tuned rules. AI is therefore regarded as something that will bring about a movement from standards toward rules. Drawing on contemporary data science, this Article shows that machine learning is less impressive when the past is unlike the future, as it is whenever new variables appear over time. In the absence of such regularities, machine learning loses its advantage and, as a result, looser standards can become superior to rules. We apply this insight to bail and sentencing decisions, as well as to familiar corporate and contract law rules. More generally, we show that a Human-AI combination can be superior to AI acting alone. Just as today’s judges overrule errors and outmoded precedent, tomorrow’s lawmakers will sensibly overrule AI in legal domains where measurement is challenging. When measurement is straightforward and prediction is accurate, rules will prevail. When empirical limitations such as overfit, Simpson’s Paradox, and omitted variables make measurement difficult, AI should be trusted less and law should give way to standards. We introduce readers to the phenomenon of reversal paradoxes and suggest that in law, where huge data sets are rare, AI should not be expected to outperform humans. More generally, where such empirical limitations are likely, rules should be trusted less and law should give way to standards.
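To make the reference to reversal paradoxes concrete, the following short Python sketch, using invented and purely illustrative counts rather than any data from the Article, shows how Simpson’s Paradox can flip a comparison once an omitted variable is taken into account.
```python
# Simpson's-paradox (reversal-paradox) illustration with invented numbers.
# Each entry is (successes, total) for a hypothetical outcome (e.g., pretrial
# release without incident), split by an omitted variable such as offense severity.
data = {
    "Group A": {"minor offense": (90, 100), "serious offense": (10, 50)},
    "Group B": {"minor offense": (28, 30),  "serious offense": (30, 120)},
}

for group, strata in data.items():
    successes = sum(s for s, _ in strata.values())
    totals = sum(n for _, n in strata.values())
    print(f"{group}: aggregate rate = {successes / totals:.2f}")
    for stratum, (s, n) in strata.items():
        print(f"  {stratum}: {s / n:.2f}")

# Aggregate rates favor Group A (0.67 vs. 0.39), yet within every stratum
# Group B does better (0.93 vs. 0.90 and 0.25 vs. 0.20). A rule tuned to the
# aggregate data would encode the wrong comparison.
```
This is the kind of measurement difficulty the abstract associates with trusting AI less and letting standards, with room for human override, do more of the work.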