The Court’s Morality Play: The Punishment Lens, Sex, and Abortion

This Article uncovers the hidden framework for the Supreme Court’s approach to public values, a framework that has shaped—and will continue to shape—the abortion debate. The Court has historically used a “punishment lens” to allow the evolution of moral expression in the public square, without enmeshing the Court itself in the underlying values debate. The punishment lens allows a court to redirect attention by focusing on the penalty rather than the potentially inflammatory subject for which the penalty is being imposed, regardless of whether the subject is contraception, abortion, Medicaid expansion, or pretrial detention.

This Article is unique in discussing the circumstances in which the Court has simultaneously concluded that the state could regulate but could not punish, even if that means redefining a sanction as not punitive. By making this framework visible, we offer the Court and the states a potential off-ramp from the continuation of an ugly and litigious future on abortion access. If the Supreme Court seeks to deflect the outrage over Dobbs, the simplest way to do so would be to take seriously its own assertion that it was merely returning the issue to the states. In that case, the Court’s focus should be, as Justice Kavanaugh suggested in his concurrence, on the impermissibility of punishment that infringes on established rights, independent of a right to abortion, such as the right to travel, the First Amendment right to communicate accurate information about abortion availability, or doctors’ efforts to perform therapeutic abortions necessary to preserve a pregnant person’s health. The Court would not pass judgment on the permissibility of abortion, and it could affirm the propriety of state bans, but still strike down heavy-handed prosecutions and ill-defined prohibitions that impose undue penalties.

After Dobbs v. Jackson Women’s Health Organization, this Article is particularly important for three reasons. First, this Article examines the ways in which the Court has used considerations of punishment to deflect irreconcilable values clashes. Second, a focus on punishment often illuminates the “dark side” of government action, justifying limits on such actions. Third, a focus on “punishment” often illustrates the consequences of government actions, consequences that may be an indirect result of statutes or regulations but that have disproportionate effects on marginalized communities. Understanding how the Court has used this elusive concept in the past may thus help shape the response to Dobbs.

INTRODUCTION

 The concept of punishment is central to the Supreme Court’s jurisprudence on abortion—and, beyond abortion, to the expression of moral values in the public square. In Dobbs v. Jackson Women’s Health Organization, Justice Alito found “an unbroken tradition of prohibiting abortion on pain of criminal punishment” throughout the common law until the Court’s decision in Roe v. Wade in 1973. He noted that “the great common-law authorities—Bracton, Coke, Hale, and Blackstone—all wrote that a post-quickening abortion was a crime” and he traced these developments from the thirteenth century forward.

The Dobbs opinion, like most criminal law discussions, assumes that the power to prohibit includes the power to punish violations of those prohibitions. And, indeed, criminal law scholars have produced an extensive literature on the justifications for the imposition of criminal sanctions and the constitutional limits on that imposition.

What neither that vast literature nor the Dobbs opinion addresses, however, is the role of punishment in the evolution of the jurisprudence addressing the expression of public values, separate and apart from the existence of the laws prohibiting conduct. As this Article shows, when the Supreme Court has focused on the state’s justification for punishment independently from the underlying policy, it has often used the nature of punishment as a justification for striking down legislation—even when the Court concedes that the state purpose is otherwise legitimate. And it sometimes uses the declaration that onerous provisions are not “penalt[ies]” to uphold coercive legislation that, as a practical matter, limits access to what the Court otherwise recognizes as important rights. As these cases show, outside of the narrow context of whether a criminal prohibition justifies the imposition of a particular sentence, punishment has an ill-defined life of its own in Supreme Court jurisprudence.

This Article is the first to detail how the Supreme Court has viewed the concept of “punishment” as a justification for upholding or invalidating government acts in the context of issues involving contested values. While an intense debate raged at mid-century over whether the state should regulate morality, that debate generally assumed that if the state could regulate, it could also punish. This Article is unique in discussing the circumstances in which the Court has simultaneously concluded that the state could regulate but could not punish. For example, the Court held that a state could discourage teen sex, but not by making pregnancy the consequence, and that a state could adopt restrictive measures, such as blanket refusals to fund medically necessary abortions, so long as the statute did not prohibit abortion or penalize those seeking one.

This Article is particularly important following Dobbs for three reasons. First, it illustrates the ways in which the Court has used considerations of punishment to deflect irreconcilable values clashes. For those who would like to extricate the Court from the conflicts Dobbs has inflamed, limiting punishment, for example, of those exercising a constitutionally protected right to travel, offers a potential off-ramp.

Second, a focus on punishment often illuminates the “dark side” of government action. The opinion in Griswold v. Connecticut placed great weight on the intrusiveness of policing the use of contraceptives in the marital bedroom. The ugliness of imposing punishment may similarly become a focal point for organization in response to the patchwork of state laws after Dobbs.

Third, a focus on “punishment” is often used to illustrate the consequences of government actions, consequences that may be an indirect result of statutes or regulations but that have disproportionate effects on marginalized communities. Abortion bans may aggravate race- and class-based differences, prompting greater recognition of the rights of the pregnant to obtain the medical care needed to safeguard their health. Understanding how the Court has used this elusive concept in the past can thus help shape the response to Dobbs.

The Supreme Court’s conception of “punishment” underlying these considerations is slippery, perhaps intentionally so. The Court uses the concept at both an expressive level, reinforcing public norms, and a practical level, specifying the consequences for the violation of government mandates, both civil and criminal. Most critically for this Article, it provides the Court with a way to shape emerging norms in the context of public unease.

After describing the multidisciplinary literature on punishment’s multiple roles, we examine the way that the Court has deployed punishment as a rationale for invalidating government action, particularly in the context of cases involving sexual morality. Eisenstadt v. Baird, which struck down bans on the sale of contraceptives to single women, provides a classic case: the Court simultaneously “conceded” that “the State could . . . regard the problems of extramarital and premarital sexual relations as ‘(e)vils’ ” but still held that this could not be the purpose of the Massachusetts legislation because it “would be plainly unreasonable to assume that Massachusetts has prescribed pregnancy and the birth of an unwanted child as punishment for fornication . . . .” The irrationality of the punishment, not the permissibility of unmarried sex, provided the basis for the decision—and implicitly for the limitation of state power to regulate sexual morality.

We then explore how the Court has used the determination of what is a punishment to affirm state decision-making power in a federal system. The question of when the state is inflicting a punishment as opposed to imposing a reasonable condition or proceeding on appropriate administrative grounds arises in contexts ranging from welfare “home visits” to detention to Medicaid expansion, with the Court using the punishment lens to sidestep the substantive bases for these decisions.

Finally, this Article considers cases that directly engage the relationship between punishment and the underlying values debate. Lawrence v. Texas, which invalidated Texas’s same-sex sodomy statute, provides the most striking example. Justice Kennedy’s majority opinion did not just strike down the criminalization of the sexual conduct. It affirmed the dignity and worth of the expression of intimacy in the case. Justice Scalia’s dissent, by contrast, saw punishment as the point, with both the majority and dissent agreeing that the values debate was central to the discussion.

This Article observes that the “punishment lens” provides a powerful tool for shaping the evolution of public values without enmeshing the Court in the underlying values debate. We consider whether the punishment lens can be successful in two ways: guiding the evolution of public values without triggering a backlash that further entrenches polarized opposition or, failing that, reaching decisions in controversial cases that do not undermine the Court’s own legitimacy and authority. By this standard, Eisenstadt v. Baird, which used the punishment lens to avoid the underlying values questions while striking down barriers to contraceptive access, and Lawrence v. Texas, which instead of relying on the punishment lens directly engaged the values questions, both succeeded in resolving issues in ways that helped move public opinion and lock in legal conclusions that remain embedded in American law. Whether applying the punishment lens to abortion can enjoy similar success remains to be seen, but this Article concludes by outlining the possibilities a focus on punishment can offer.

I.  PUNISHMENT AND THE RULE OF LAW

The role of state-administered punishment is much studied—and much contested. Existing literature addresses the questions of what might justify the ability of the state to inflict intentionally burdensome treatment on its citizens, what purposes such punishment should serve, and what constitutes appropriate punishment. As this scholarship establishes, law enforcement—and punishment of egregious crimes—is essential to a state’s legitimacy. Without punishment of criminal acts, a state cannot govern—and command either the support of its constituents or deference from the international order. This literature, however, in its most idealized form, tends to assume a straightforward relationship between crime and punishment: the state prohibits certain acts, imposes prescribed penalties for the violation of the law, and administers the penalties in accordance with principles of procedural and substantive justice, emphasizing due process rights for the accused and fairness defined in terms of proportionality between the crime and the punishment.

This Section goes beyond the conventional analysis of criminal punishment to explore the expressive role that punishment serves. It shows that the judicial oversight of punishment serves four roles that pose difficult challenges in the face of contested or changing values: establishing shared societal values, maintaining or dismantling social hierarchies, mediating disputes over the authority of governmental actors to impose punishment, and channeling the individual desire for vengeance into state-approved channels.

First, the administration of punishment defines and reinforces societal values, often in symbolic ways. For example, with the recognition that smoking caused cancer and other health risks, the perceived acceptability of smoking changed. In the United States, the state did not respond by prohibiting smoking. Instead, government entities gradually limited the places where smoking was permitted, first creating “no smoking” areas and ultimately banning smoking in restaurants, offices, and other places. Over time, enforcement of these rules—and the imposition of sanctions on violators—did not just shift norms of politeness; they expressed moral disapproval of smoking as undesirable and deviant. In 1993, the Supreme Court of the United States embraced the shift in attitudes in a decision that held that placing a nonsmoking prison inmate in a cell with a five-pack-a-day smoker could constitute constitutionally impermissible “cruel and unusual punishment.” In so ruling, the Court did not limit the word “punishment” to the prescribed penalties for a criminal act. Instead, the Court’s finding of “cruel and unusual punishment” reflected and reinforced the changed social meaning of smoking from an acceptable activity to one that violated evolving “standards of decency”; the Court concluded that violating this new moral sensibility could constitute “punishment” within the meaning of the Constitution. The act of placing a nonsmoker with a smoker thus became punishment because of the changed moral status of smoking.

Second, legal scholars have argued that beyond merely maintaining order, much of the power of state-administered punishment comes from this expression of “moral condemnation” and its role in establishing social hierarchies within a society. In accordance with this analysis, moral condemnation does not just declare particular conduct to be illegal; it establishes and reinforces social order and social standing in a society. Criminal acts threaten to upend the social order, as the person committing the crime asserts the right to defy established law and norms. Imposing punishment that carries moral condemnation with it restores the moral order, affirming the victim’s superior status to that of the violator. Punishment can thus signal that “the community values the victim” while the failure to punish can indicate indifference, or even disdain, toward the victim. Accordingly, both imposing punishment and failing to punish send important messages about what a society values. State-administered punishment can thus establish and reinforce norms in ways that contribute to social cohesion, cohesion operating at the group as well as the individual level. Disturbingly, the imposition of punishment can breed cohesion even if there is no crime; nonetheless, the state reaffirms its legitimacy and authority when it punishes in the name of a value or ideal, rather than simply because it can.

The role of punishment in establishing social hierarchies, particularly when it operates at a group-based level, contributes to the dark side of punishment. Brain imaging studies show that the act of punishing engages the part of the brain that produces feelings of reward—the same area of the brain involved in drug addiction. Individuals may thus derive pleasure from imposing punishment on others even when imposing punishment makes the punisher worse off.

This psychological dimension corresponds to some descriptions of the retributivist purpose of punishment. Nietzsche argued that cruelty—and the satisfaction some derive from it—is the point of punishment. Even Oliver Wendell Holmes agreed that, at least in some cases, punishment “is inflicted for the very purpose of causing pain” and “one of its objects is to gratify the desire for vengeance.” And the anger and moral outrage that fuel the demand for punishment can be manipulated. Research has tantalizingly suggested that the act of punishment itself reinforces perceptions of harm. Cultural cognition studies further show that people associate behavior contrary to their moral norms with socially detrimental consequences.

Precisely because the administration of punishment reinforces social standing at both the individual and the group level, it has implications that go beyond the punishment administered to any particular individuals. The decisions about which punishments to implement (such as firing an employee who refuses to be vaccinated or imposing work requirements as a condition of eligibility for state subsidized health insurance benefits) can create group-based winners and losers, elevating the status of one group at the expense of another. Yet, denying the legitimacy of such demands for punishment—or imposing them too harshly—can also undermine respect for law.

This leads to the third role of judicial oversight of punishment: mediating conflicts that involve the authority of different governmental actors to impose punishment. In the United States, for example, the Supreme Court has overseen evolving conflicts between the states and the federal government in the administration of family law. The U.S. Constitution has historically been viewed as entrusting family law to the states, but the Supreme Court has selectively intervened, at times to enhance or restrain state authority to impose punishment. In Stanley v. Illinois, for example, the Supreme Court held that Illinois could not treat Peter Stanley as an unfit parent—and thus deprive him of standing to seek the custody of his children after their mother’s death—solely because he had not married the mother. The Court intervened to limit the power of the state to punish unmarried fathers, at a time when attitudes were changing toward unmarried relationships.

Finally, the courts have historically overseen punishment in order to channel vengeance into socially constructive venues. The failure to punish perceived wrongs may persuade wronged individuals or groups to “take the law into their own hands” or to impose punishments out of proportion to the wrongful act. The courts, in contrast, are supposed to act “judiciously” in administering punishment in a neutral manner, not just on behalf of the wronged individual, but because the assertion of society’s moral values can contribute to a sense of social order and cohesion.

The challenge of serving these four roles increases as social norms change. The tension between maintaining order and imposing destabilizing punishments is particularly difficult if some social groups reject the norms, while others respond to the increasing defiance of the first group by calling for greater punishment as violations increase. The imposition of punishment thus involves an “ever-shifting relationship between a regime and a given population that makes up the most essential element in any political order.”

These four roles make the administration of punishment central to the rule of law. They are also evident as a longstanding aspect of Supreme Court jurisprudence. Yet, managing the tensions between these objectives can undermine as well as maintain social cohesion. Congress and various state legislatures, for example, have attempted to shift norms surrounding intimate relationships by changing the laws governing sexual assault to make date rape easier to prosecute. Imposing more serious penalties, however, may make judges and juries more reluctant to convict—and failures to impose punishment can undermine, in turn, the efforts to shift norms and also lead victims to feel even more isolated and aggrieved. Expressing moral condemnation while keeping punishments commensurate with the perceived seriousness of the offenses thus requires walking a tightrope, one that sways with changing public sensibilities. Abortion, perhaps as much as if not more than any other issue, involves “irreconcilable disagreement” that challenges the legitimacy of the judicial system itself. The issues of punishment in the abortion context will test whether the judiciary generally, and the Supreme Court in particular, retain any capacity for guiding the recreation of shared social values.

II.  SEX AND PUNISHMENT: RECOGNIZING REPRODUCTIVE RIGHTS

At the time Roe v. Wade was decided in 1973, the Supreme Court was carefully navigating a revolution in sexual mores. Sexual morality presents a classic case for the expressive role of punishment, with punishment serving to reinforce what are seen as consensus-based moral values broadly shared by the public. Enforcing such norms also involves, however, punishment of private consensual conduct.

This Part shows how the Supreme Court focused on the acceptability of punishment as a rationale for state action rather than on the changing norms themselves. It did so in a series of cases that addressed contraception, nonmarital children’s legitimacy status, welfare benefits, parentage—and ultimately abortion—through the lens of punishment for sexual conduct. Within this new jurisprudence, the Court carved out a right to privacy that did not address the propriety of intimate conduct, but rather evaluated the permissibility of state action designed to shape private conduct.

A.  Contraception and the Propriety of Pregnancy as Punishment for Sex

Starting with Griswold v. Connecticut in 1965, the Supreme Court began to strike down legislation that regulated sexuality in ways that the Court deemed needlessly punitive. In doing so, the Court never waged a frontal assault on the moral order that channeled sexuality into marriage. Instead, the Court examined the rationales underlying the laws and the consequences of imposing punishment.

Griswold addressed the constitutionality of a law that forbade the use of contraception. Anthony Comstock had spearheaded prohibition of contraceptives in the nineteenth century, convinced that they “facilitate[d] immoral conduct” because they “reduce[d] the risk that individuals who engage[d] in premarital sex, extramarital sex, or prostitution [would] suffer the consequences of venereal disease or unwanted pregnancy.” Comstock persuaded Congress to outlaw “print and pictorial erotica, contraceptives, abortifacients, information about contraception or abortion, sexual implements and toys, and advertisements” in 1873 and the states adopted their own “Little Comstock laws” thereafter. Connecticut’s statute, adopted in 1879, was one of the most restrictive, banning not just the advertising and sale of contraceptives, but also the use of contraception.

The Court framed the case as one against defendants who “gave information, instruction, and medical advice to married persons as to the means of preventing conception.” In resolving the matter, the Court conceptualized a right to privacy, a right that justified looking the other way at sexual conduct. The Court wrote that “[w]e deal with a right of privacy older than the Bill of Rights—older than our political parties, older than our school system.” The Court did not directly mention married couples’ efforts to limit the number of children they had, although it did refer to the marital relationship as “intimate to the degree of being sacred” and suggested that enforcing a ban on contraceptive use would have “a maximum destructive impact upon [the marital] relationship.” By contrast, the Court acknowledged the validity of the state’s purported rationale for the regulation: “the discouraging of extra-marital relations.” While the Court stated that this rationale “is admittedly a legitimate subject of state concern,” banning contraceptive use by married couples was simply too far removed from the purported subject of the statute to pass constitutional muster. The Court suggested that the state could regulate the manufacture or sale of contraceptives but not their use within marital unions. In short, the Court focused on the ugliness of enforcement rather than on the permissibility of the underlying conduct—the use of contraception.

Connecticut did not often enforce its ban on married couples’ contraceptive use, but the fact that the law was on the books effectively limited the ability to use contraception to those with access to doctors and pharmacists. While the Griswold decision did not mention the issue, a major reason for challenging the ban on contraception was the unequal nature of contraceptive access. By striking down criminal penalties for contraceptive sales, the Court effectively allowed doctors and clinics to make contraception more broadly available. The implicit principle at the core of this decision was that, while the state could steer sexuality into marriage, it could no longer seek to ensure that pregnancy be the unavoidable consequence of sexual relationships.

Eisenstadt v. Baird, decided in 1972, expanded the principle—that pregnancy was an unreasonable punishment—beyond marriage. Eisenstadt struck down a Massachusetts statute that prohibited supplying contraception to single, as opposed to married, individuals. As in its decision in Griswold, the Court “conceded” that “the State could, consistently with the Equal Protection Clause, regard the problems of extramarital and premarital sexual relations as ‘(e)vils.’ ” Nonetheless, it concluded that this could not be the purpose of the Massachusetts legislation because it “would be plainly unreasonable to assume that Massachusetts has prescribed pregnancy and the birth of an unwanted child as punishment for fornication.” The Court acknowledged, as it did in Griswold, that notwithstanding the law, contraceptives were widely available, and thus “the rationality of this justification is dubious.”

By 1977, the Supreme Court was willing to say that the state could not prescribe pregnancy as the punishment for sex even where the state had a clear interest in discouraging sex between minors. In striking down a state law that prohibited selling contraceptives to minors under the age of sixteen, the Court noted the state interest in regulating the “morality of minors” in its efforts to promote “the State’s policy against promiscuous sexual intercourse among the young.” Again, however, the Court accepted the legitimacy of the state interest, but rejected the connection between such a state interest and the prohibition on sales of contraceptives to minors. The Court observed that, “with or without access to contraceptives, the incidence of sexual activity among minors is high, and the consequences of such activity are frequently devastating,” but noted that there was little evidence that banning contraception had much impact. The Court thus concluded that the state could not promote an otherwise legitimate objective—discouraging “promiscuous sexual intercourse among the young”—by making pregnancy the punishment for sex and criminalizing efforts to avoid the consequences. And it emphasized that the justification for banning contraceptive sales became that much weaker as the evidence mounted that the laws on the books did not have the desired effect. It thus concluded that the “punishment” (pregnancy) did not serve the interests of either deterrence (teens would still have sex) or appropriate desert for a wrongful act (a child).

B.  Public Recognition and the Removal of the Scarlet Letter from Children

The Supreme Court relied on similar reasoning in dismantling the distinctions between “legitimate” and “illegitimate” children, with the Court ultimately concluding that the states could not seek to channel childbearing into marriage by punishing children for their parents’ conduct. In the “seminal” case of Levy v. Louisiana, the Court considered a Louisiana law that restricted the ability to bring a tort action for the wrongful death of a parent to “legitimate children.” As a result, an unmarried mother’s five children, who lived with her, and whom she raised on her own earnings, had no right to sue for their mother’s allegedly wrongful death. The Court, striking down the statute in a brief opinion, asked: “[W]hy, in terms of ‘equal protection,’ should the tortfeasors go free merely because the child is illegitimate?” The Court reasoned that the circumstances of the birth had “no relation to the nature of the wrong allegedly inflicted on the mother;” the children, “though illegitimate, were dependent on her.” The Court even recounted how the mother in the Levy case supported her children by working as a domestic servant, “taking them to church every Sunday and enrolling them, at her own expense, in a parochial school.” In this opinion, the Court identified no countervailing state interest; the children were deprived of the right to sue for the loss of their mother simply because of the circumstances of their birth.

The Supreme Court in Levy did not mention the issue of race, but amicus briefs filed in the case emphasized that, particularly in Louisiana, the distinctions between marital and nonmarital children had a significant racial impact. Indeed, an amicus brief filed by Illinois law professor Harry Krause (and others) argued explicitly that the statute “discriminates on the basis of race.” The brief maintained that the discrimination stemmed partly from the fact that “disproportionately more Negro children than white children are born out of wedlock,” and, partly from the fact that “a high percentage (70%) of white illegitimate children are adopted . . . whereas very few (3-5%) Negro illegitimates find adoptive parents.” As a result, “95.8 percent of all persons affected by discrimination against illegitimates under the statute are Negroes.” The brief concluded, “the classification of illegitimacy . . . is a euphemism for discrimination against Negroes.”

Louisiana denied that it sought to punish the children for immorality in sexual behavior, but it nonetheless maintained that it sought to encourage marriage. And the state asserted: “If the community grants almost as much respect for non-marriage as for marriage, illegitimacy increases” and that “illegitimate daughters tend to err in the manner of their illegitimate mothers, producing more illegitimate children.” In short, Louisiana did argue that it was necessary to punish the children to deter their parents, if not quite in so many words. And the children who would be punished as a result were overwhelmingly Black. Louisiana’s efforts to punish nonmarital births thus reinforced a racial as well as moral line, though the majority opinion for the Court did not directly address the racial issue.

In subsequent cases, the Court made the role of punishment even more explicit. In 1972, the Court reaffirmed Levy in striking down a Louisiana statute that defined “child” so that only marital children were eligible for insurance benefits resulting from their father’s death. Justice Powell’s majority opinion acknowledged that the “status of illegitimacy has expressed through the ages society’s condemnation of irresponsible liaisons beyond the bonds of marriage,” but still concluded that imposing “this condemnation on the head of an infant is illogical and unjust.” Powell found that the distinction between marital and nonmarital children was not justified by any state interest.

In 1977, the Court revisited the issue of inheritance, invalidating an Illinois statute that permitted nonmarital children to inherit only from their mothers, not their fathers. In a 5-4 decision, Justice Powell reiterated that “visiting this condemnation on the head of an infant is illogical and unjust.” He emphasized that, while the parents’ behavior might have been immoral, that was not the fault—nor the responsibility—of the children. The opposition to the punishment of children commanded a majority of an even more conservative Court than the Warren Court that had initially struck down such classifications.

C.  The Right to Abortion: Part I

The Supreme Court’s 1973 decision in Roe v. Wade situated the case within the punishment lens the Court had constructed to deal with reproductive rights more generally. The case never squarely fit there, however, because abortion did not just involve the regulation of sexual behavior between consenting partners; it also raised issues about the involvement of the medical profession and the status of the fetus. Nonetheless, the Court framed the decision within a jurisprudence centered on the irrationality of the state’s prescription of childbirth as a way to prevent illicit sex and conscious of the consequences, intended and unintended, of regulating sexual morality. It thus treated laws banning abortion as imposing punishment—on the pregnant for incurring an unwanted pregnancy, on doctors for exercising medical judgment in treating patients, and on those who felt compelled to seek illegal abortions in unsafe circumstances.

Among the telling aspects of this analysis is the way the Court articulated the state interests at stake. The Court identified the first such interest as one based on “a Victorian social concern to discourage illicit sexual conduct.” Curiously, though, the Court acknowledged that Texas did not articulate that justification in Roe, and it appeared that courts and commentators had not actually taken the argument seriously. On the other hand, the Comstock laws, which banned abortifacients along with pornography and contraception, treated the regulation of sexual morality as of a piece with abortion. The Court thought the connection between an abortion ban and the regulation of morality sufficiently important to mention—and dismiss.

Second, the Court acknowledged that forcing a woman to carry an unwanted pregnancy to term is cruel. It referred to the burdens of pregnancy and childbirth, including the possibility that childbirth “may force upon the woman a distressful life and future,” that her “[m]ental and physical health may be taxed by child care,” and that the unwanted child may cause “distress, for all concerned.” The opinion acknowledged the hardship involved in bringing a child into a family that could not care for the child, and the potential for stigmatizing a nonmarital mother. The Court accordingly echoed earlier cases treating avoidable pregnancy and childbirth as an inappropriate way to advance state purposes because of the burden imposed.

Third, the Court was aware that the states often brought criminal actions against doctors. One of the parties in Roe, Dr. James Hubert Hallford, had faced prosecution for alleged violations of the Texas abortion statutes. Hallford maintained that the applicable statutes were unconstitutionally vague because he could not determine whether his patients’ situations would qualify as exceptions to the abortion ban, so he faced punishment for exercising a good faith medical judgment about his patients’ therapeutic needs. Justice Blackmun’s initial draft proposed striking down Texas’s anti-abortion law as unconstitutional only on the grounds that it was void for vagueness. The punishment that doctors faced in making delicate judgments was clearly a factor in the subsequent Roe decision and in its declaration that abortion decisions should be left to “the woman and her responsible physician.”

Fourth, the Court dismissed state assertions that banning abortion was necessary to protect women’s health, observing that mortality rates for abortions performed during the first trimester of pregnancy “appear to be as low as or lower than the rates for normal childbirth” in contrast with the “prevalence of high mortality rates at illegal ‘abortion mills.’ ” While less explicit than the Court’s acknowledgment of the burdens of pregnancy, the Court recognized that resort to unsafe abortions was a punitive consequence of the prohibition of legal abortions.

In the background of the case, state abortion laws had begun to change, with some states repealing their anti-abortion statutes entirely and others reforming their laws to expand the availability of therapeutic abortions. A practical consequence was that, as with contraception, the availability of abortion, particularly safe abortion, differed significantly by race, location, and class. Partly as a result, women of color were substantially more likely—by some estimates twelve times more likely—to die from illegal abortion than white women.

In limiting the states’ ability to restrict abortion, the Court treated these restrictions as imposing impermissible penalties on those seeking abortion. The penalties were not so much the criminal sanctions themselves; these were rarely imposed on the individuals who secured abortions. Instead, states banning abortion were making childbirth the consequence of unprotected sex—and the risk of death the price of seeking an illegal abortion. The Court found that unacceptable. And while the Court recognized the state interest in protecting fetal life, it balanced that interest against the woman’s interest in deciding whether to give birth. Fetal life, as an interest separated from the sexuality (and women’s bodies) that produced it, would become more prominent as an issue only after Roe was decided.

In these cases, the Supreme Court helped oversee a shift in sexual mores during a period when nonmarital sexuality was becoming more common and accepted. In focusing on the acceptability of the punishment, the Court did not endorse the changes directly; instead, it addressed the rationality of widely violated restrictions that imposed serious, arbitrary, and discriminatory harms. The Court’s use of the term “punishment” was not, however, consistent or the subject of a coherent jurisprudence. Sometimes it referred to the state rationales (deterring sex by limiting access to contraception, making pregnancy the “punishment” for fornication); sometimes it referred to the intrusive nature of criminal enforcement (searching the marital bedroom) rather than the imposition of criminal sanctions; and sometimes it considered the collateral consequences of government action (the stigma and limitations associated with nonmarital births). In the process, however, the Court used the punishment lens to oversee a wholesale effort to strike down what it saw as the outdated remnants of “Victorian” sexual mores without disavowing the legitimacy of state efforts to channel sexuality into marriage.

III.  PUNISHING PARENTS

The era that produced Roe involved overlapping developments that reduced support for a punitive approach to sexual morality: a change in sexual norms, a remaking of women’s roles, and more urgent calls for racial equality. In addition, the political parties were less ideologically polarized, with greater elite consensus.

Nonetheless, by the mid-seventies, another jurisprudential revolution was taking place: one embedding a neoliberal view of the state into Supreme Court jurisprudence. The Warren Court had been sympathetic to calls not just for racial equality, but also for greater economic rights. These claims often took the form of calls to treat government benefits as entitlements, with more equal access to the benefits and more obstacles to denying eligibility. The neoliberal era taking hold by the late seventies rejected these claims. The Court embedded this perspective in the same way it had overseen the change in sexual mores: by using the punishment lens to resolve issues that involved farther-reaching clashes in values. The Court did so by denying the very fact of punishment. It concluded that if a given regulation did not penalize the individuals subject to it for protected activity, no constitutional issue arose at all. In the process, the Court upheld regulations that supervised poor women’s sexuality and denied access to abortion funding.

This Section focuses on how the punishment lens applies in more varied civil settings, tracing the evolution of the Court’s treatment of government benefits. The first part of this Section describes how the Court deemed public benefit requirements non-punitive in order to uphold limitations on government benefits under the Aid to Families with Dependent Children (AFDC) and Medicaid programs; the second part of the Section shows how the punishment lens applies outside of the sexual-morality context, analyzing how it has been used to limit access to benefits under the Affordable Care Act.

A.  Welfare Benefits and the Rejection of Positive Rights

In the 1960s, the Supreme Court addressed the relationship between sexuality and eligibility for government benefits during a period in which the Court was enhancing access to government benefits more generally. The original Aid to Dependent Children (ADC) program was adopted in the 1930s as part of the New Deal’s far-reaching social legislation. The United States, unlike many European nations, had never adopted a universal system of family allowances to support childrearing but instead had a variety of state programs designed to provide widows’ pensions to support children who would otherwise land in orphanages because their mothers could not support them. In the 1930s, Congress nationalized these efforts, providing federal funding for a state-run system to compensate for the loss of a male breadwinner. Congress limited aid to children who had “been deprived of parental support or care by reason of the death, continued absence from the home, or physical or mental incapacity of a parent” and allowed the states to impose additional eligibility standards, such as “moral character” requirements that excluded the children of unmarried parents from the program.

As early as the 1940s, critics argued that the moral requirements “were habitually used to disguise systematic racial discrimination; and that they senselessly punished impoverished children on the basis of their mothers’ behavior.” The federal government sought to discourage the moral requirements. By the late 1960s, the states had shifted from outright prohibition of benefits to “man-in-the-house” rules that deemed the income of a man who cohabited with a welfare recipient to be available to the family, thereby affecting the family’s qualification for public welfare. These regulations were understood to serve the dual purpose of punishing African Americans and privatizing dependency by withholding public benefits from nonmarital families.

In King v. Smith, the Supreme Court examined the punitive nature of these requirements. The Court sidestepped the constitutional issues in the case, striking down the Alabama regulation at issue on statutory grounds, noting that federal law precluded states from denying public welfare to children because “of their mothers’ alleged immorality or to discourage illegitimate births.” The Court concluded that “Congress has determined that immorality and illegitimacy should be dealt with through rehabilitative measures rather than measures that punish dependent children, and that protection of such children is the paramount goal of AFDC.” Justice Douglas’s concurrence, however, would have reached the constitutional issue. He saw Alabama officials as discriminating against children on the basis of illegitimacy and therefore acting at odds with the ruling in Levy v. Louisiana, decided during the same term. He wrote that “the Alabama regulation is aimed at punishing mothers who have nonmarital sexual relations.” In administering the provisions, the “economic need of the children, their age, their other means of support, are all irrelevant. The standard is the so-called immorality of the mother.” He viewed that standard—and the attendant punishment inflicted on the mother—as constitutionally impermissible.

By the time the Supreme Court decided the case in 1968, the nature of the AFDC program had changed. While widows accounted for 43% of the ADC caseload in 1937, they made up only 7% by 1961. And as documented in an amicus brief in Levy v. Louisiana, decided the same term, the statute both constituted “overt discrimination on the basis of the criterion of illegitimacy” and “covertly discriminate[d] on the basis of race.” The Court almost certainly saw the two cases as linked, although only Justice Douglas’s concurrence in King made the connection directly.

In deciding King v. Smith, however, the majority dealt with these issues only obliquely. Instead, it focused on the irrationality of the punishment imposed—the denial of benefits that would disproportionately disadvantage the very children the program was intended to help. The Court did not endorse a right to nonmarital sexuality. It did not discuss the discriminatory motive and effect underlying the regulations. It recognized neither an affirmative “right” to federal benefits nor a right to privacy for benefit recipients. Instead, it focused solely on the legitimacy of the punishment, concluding that children could not be deprived of benefits in an effort to change their mothers’ conduct. It treated the man-in-the-house rules not as a rational effort to determine the resources available to the family, but as a subterfuge to continue morals regulation in the face of federal disapproval. The case was thus of a piece with the contraception and legitimacy cases in challenging irrational punishments: punishments that were irrational because once they failed to deter nonmarital sexuality in an era of changing mores, their application became arbitrary and discriminatory.

In subsequent cases, however, the Supreme Court upheld provisions that burdened the poor and their children by deeming such provisions non-punitive. Thus, in Wyman v. James, the Court found constitutional a New York statute mandating home visits that were in line with federal law’s requirements that aid be provided only after consideration of the family’s resources and only to children who were not being neglected. The “visits” could prove embarrassing in front of children and guests, and could serve to police sexual relationships. The Court refused to find that the mandated visits were a penalty at all, terming them instead a condition of benefit eligibility rather than a substantive, much less punitive, standard tying loss of benefits to impermissible or arbitrary considerations.

The dissent objected on the grounds that welfare benefits should be seen as entitlements. While both the majority and the dissent focused on the status of welfare benefits, Justice Blackmun’s majority opinion used the conclusion that the “conditions” on receipt of benefits were not penalties to lock in a neoliberal view of government action: because there was no right to benefits, the state could impose whatever standards it chose as preconditions for eligibility, and those conditions would never become punishment subject to constitutional scrutiny.

B.  Punishing Sex

In subsequent cases, the Court’s characterization of a particular government action as non-punitive allowed it to uphold conditions that were challenged as discriminatory, cruel, or unjust. The results were particularly striking when the issue turned to abortion. Legislators who opposed abortion and who could not overturn Roe v. Wade directly sought to express their disapproval of abortion by prohibiting the use of public funds to pay for abortions, while permitting those funds to be used for pregnancy and childbirth. Were these bans penalization of a constitutionally protected right—the right to elect abortion to terminate a pregnancy—or were they simply the exercise of legislative policy preferences to allocate public funds to support some activities and not others? The Supreme Court used the punishment lens to resolve the issue. Since individuals enjoyed no positive right to health care—or to abortion funding—the denial of funding could not constitute a penalty and thus had no constitutional implications.

An initial case upheld Connecticut regulations limiting public funding of abortions to medically necessary abortions during the first three months of pregnancy. Justice Powell wrote for the 6-3 majority that the Constitution did not impose any obligation on the states to pay pregnancy-related medical expenses of low-income women or any other medical expense. He noted that the Court had not found in previous cases that wealth was a suspect class and that Connecticut was accordingly free to subsidize childbirth and not abortion as an expression of state policy designed to encourage the former.

By 1980, Congress had gone further, adopting the Hyde Amendment, a prohibition on the use of federal funds to reimburse the cost of abortions under the Medicaid program, including abortions that resulted from rape or incest or that were medically indicated. In a 5-4 opinion later that year, the Supreme Court, in Harris v. McRae, upheld the constitutionality of the Amendment. The majority opinion treated the issue as a classic one of negative liberty, explaining that the freedom to choose to have an abortion, even a medically necessary one, does not carry with it a government obligation to fund the abortion. It then explained that a woman’s poverty was “the product not of governmental restrictions on access to abortions, but rather of her indigency.” Accordingly, the Court concluded that the failure to pay for abortions was not punishment and thus not subject to constitutional review.

The four dissenters viewed the Hyde Amendment as punitive and cruel. Justice Blackmun made the point that the legislators championing the Hyde Amendment cynically sought to express their own views on the morality of abortion by imposing those views “only upon that segment of our society which, because of its position of political powerlessness, is least able to defend its privacy rights from the encroachments of state-mandated morality.” He would have accordingly subjected the legislation to more exacting judicial review. Justice Stevens emphasized that “the Court expressly approves the exclusion of benefits in ‘instances where severe and long-lasting physical health damage to the mother’ is the predictable consequence of carrying the pregnancy to term” and, indeed, “even if abortion were the only lifesaving medical procedure available.” He concluded that the result “is tantamount to severe punishment” for wanting an abortion. Justice Marshall emphasized the racial impact of denying abortion funding and also noted that the Hyde Amendment resulted in “excess deaths.”

In Harris, the Court upheld the validity of an extraordinarily cynical statute. Congress, in effect, limited poor women’s abortion access because it could—it could allow expression of the anti-abortion sentiments of members of Congress at the expense of a relatively powerless group. By declaring that forced birth due to the failure to secure funding for an abortion was not a punishment, the Court avoided addressing the question of whether it burdened a constitutional right.

In Wyman and Harris, neither the majority nor the dissenting opinions treated these cases as imposing punishment for sex, and the majority opinions rejected even the premise that the aid recipients had been punished for the exercise of constitutional rights (privacy in Wyman, abortion in Harris). The reasoning in the cases backtracked on the entitlement language that had been building in the welfare rights era, leading to the conclusion that if the benefits at issue were not entitlements, the failure to provide them could not be seen as punishment—effectively ending the discussion of whether the provisions at issue were unduly cruel or whether they reinforced class- or race-based social hierarchies.

C.  The Punishment Lens Beyond Sex

The litigation over the Affordable Care Act (ACA) involves the clash of values we have described in this Article and the use of the punishment lens to resolve some of the challenges. The ACA was the largest expansion of public largesse in a half century and therefore a direct challenge to neoliberal values. The legislation’s principle of universal health insurance coverage clashed with the views of those who wished to limit government benefits altogether or to withhold them from those deemed unworthy, such as those who were not working, thereby reinforcing class and racial hierarchies. In addition, by treating contraception as an integral part of women’s health care, the ACA conflicted with the views of some Christian employers who opposed contraception. The legislation thus involved, on a much larger scale, the clash of values underlying the characterization of government benefits in Wyman and Harris.

In the cases discussed in this Section, the Supreme Court returned to the issue of punishment, though without any more precise a definition of the concept. Instead, the Court repeatedly faced the question of whether the ACA provisions operated as a tax or a penalty, a condition or a penalty, and a provision of alternative means of compliance or a penalty, and used the characterization of the actions as penalty or not to resolve the cases. The net effect for the ACA was a compromise: the ACA endured but on somewhat more neoliberal terms than the Obama Administration and the Congress that enacted the legislation might have intended.

The ACA, in attempting to provide universal health care access, included a series of alternatives that were designed to balance the principles of expanded access, adequate funding, and reasonable private choice. In National Federation of Independent Business v. Sebelius, the most prominent of the ACA cases, the Court addressed two issues that turned on the concept of a penalty. The first involved the “individual mandate,” which required an individual who did not otherwise receive health insurance through an employer or a government program to purchase health insurance on state exchanges or pay what the legislation described as a “penalty,” collected by the Internal Revenue Service with the filing of individual tax returns. The Court rejected the government’s claim that the Commerce Clause authorized the mandate, but upheld it instead as a “tax.”

The Court reasoned that under the ACA, “if an individual does not maintain health insurance, the only consequence is that he must make an additional payment to the IRS when he pays his taxes.” The Government accordingly argued that the mandate could “be regarded as establishing a condition—not owning health insurance—that triggers a tax—the required payment to the IRS.” Under this theory, the legislation does not establish “a legal command to buy insurance,” just a trigger for owing taxes, like “buying gasoline or earning income.” The Court therefore concluded that the mandate fell within the congressional taxing power.

Critical to the Court’s reasoning was its decoupling of a requirement to buy insurance, which the Court concluded Congress could not impose, from a requirement to pay an amount, deemed by the Court a “tax,” intended to finance the program. In reaching this conclusion, the Court explained that “[i]n distinguishing penalties from taxes, this Court has explained that ‘if the concept of penalty means anything, it means punishment for an unlawful act or omission.’ ” The ACA mandate was not a penalty (or punishment) because while the mandate sought to incentivize health insurance purchases, it did not make the failure to do so “unlawful.” The fact that Congress sought to influence individual behavior did not matter, just as Congress’s efforts to encourage childbirth rather than abortion did not matter in Harris v. McRae; so long as the federal government did not outlaw the failure to buy insurance, the individual mandate was a tax, not a penalty (and not punishment for the failure to buy insurance). It was therefore constitutional.

The second issue the Court addressed was Medicaid expansion, which the Court again decided in terms of the acceptability of the Act’s “penalties.” Congress revised the existing Medicaid program, which is a federal-state partnership, to cover individuals with incomes up to 138% of the federal poverty line and to bring Medicaid coverage in line with the coverage offered by health insurance policies on the exchanges. Congress then gave the states a choice: accept federal funding in accordance with the new expanded Medicaid program or forego federal Medicaid funding. The majority in Sebelius objected that the “choice” was too coercive, effectively mandating state participation in the program. It reasoned that while Congress could condition state eligibility for federal funding under a new program, it could not “penalize States that choose not to participate in that new program by taking away their existing Medicaid funding,” describing the “inducement” in the Act as “a gun to the head.” Justice Ginsburg’s dissent objected that Congress was, as it had done in the past, just requiring states to comply with “conditions” imposed by Congress to receive Medicaid funding.

The parallels between Sebelius and Wyman v. James are striking. The requirement that the states adopt Medicaid expansion in order to participate in the Medicaid program could have been described, as Justice Ginsburg wrote, as a condition for participation in a federally funded program. The Sebelius Court disagreed, finding that it penalized the states for the failure to agree to the program’s terms. The Court effectively treated the states’ existing funds as an entitlement the federal government could not threaten to take away in order to obtain the performance it sought. In Wyman, because welfare was not an entitlement, a welfare recipient’s failure to consent to intrusive home visits was not considered a penalty at all; it was labeled “a condition of eligibility” for the continued receipt of benefits. The label—condition or penalty—resolved each case without engaging the substantive issue of whether the conditions themselves were reasonable or justified.

In Sebelius, the result cloaks the real issues underlying Medicaid expansion—skepticism about whether the poor merit medical benefits and opposition to the state role in meeting such needs. Indeed, the federal government picked up 100% of the initial costs of implementing the expansion and 90% thereafter, so the financial burden on the states was relatively minimal—less than the state share of the pre-ACA Medicaid program and arguably much less of a burden than asking a welfare recipient to consent to frequent, unannounced, and intrusive home visits (or the uninsured to go without health care). What Sebelius did not address is why states opposed Medicaid expansion, given the ACA’s substantial financial incentives to adopt it. Most commentators attribute the resistance to the states’ ideological opposition to government provision of health insurance, if not outright hostility to the poor people in those states. Some states continue to resist Medicaid expansion despite widespread public support for it. In effect, the Court, in the name of federalism, authorized the states to act with impunity in frustrating Congressional efforts to ensure accessible health insurance, at the expense of the people in those states who qualified for the benefits.

In a later ACA case, the Supreme Court also used the concept of punishment to address the employer mandate, which gave businesses the choice of providing health insurance that met federal standards for their employees or contributing to the exchanges so that employees could purchase their own insurance. Hobby Lobby, a closely held, for-profit corporation, provided health insurance for its employees, but refused to comply with federal requirements to cover certain forms of contraception, including the morning-after pill, because, according to the company, they acted as abortifacients. In a 5-4 decision, the Court held that requiring a company to cover certain mandated health care benefits, such as the pills in question, violated the Religious Freedom Restoration Act. The Court gave little regard to women’s loss of access to the contraceptives, holding that the federal government, if it chose, could provide them through “less restrictive means.” In short, the Court held that it would be an unjustifiable penalty to compel corporate owners to comply with the terms of a neutral government program that benefitted their employees, if those terms conflicted with the owners’ religious beliefs.

The employer mandate was essentially the same as the individual mandate—it gave those affected, whether individuals or employers, a choice: meet the ACA requirements (individuals by purchasing insurance that met federal standards or employers by providing such insurance) or pay the mandated sums to the federal treasury, in each case less than the cost of the insurance. With respect to the individual mandate, the Court concluded that the payment was a tax on those without insurance and not a penalty because the federal government had not (and could not) compel the purchase of insurance. In the case of the employer mandate, the Court concluded that the required payments were, in effect, a penalty for Hobby Lobby’s desire to act on its religious beliefs, rather than a condition for participation in a program providing federal subsidies.

To be sure, the two cases do not arise under identical bodies of law. Sebelius addressed two distinct legal issues: Congressional power to enact the individual mandate under the Commerce Clause and the taxing power, and the limits of Congressional power under a federal system to incentivize state participation in a federal program. Hobby Lobby was decided under a third body of law, one determining the religious rights of for-profit corporations. Yet, in each case, the Court’s framing of the law as punishment or not—that is, whether the intricate provisions of the ACA acted as sanctions designed to compel specific behavior—determined the outcome. And, in each of these cases, the Court upheld moral hierarchies: protection of religious employers at the expense of employees denied access to federal contraception benefits, protection of states disapproving of health care subsidies at the expense of their citizens who would benefit from such subsidies, and limits on the power of the federal government vis-à-vis other actors, including the states and privately held businesses.

IV.  RETURNING MORALITY TO THE PUBLIC SQUARE

In focusing on punishment, the Supreme Court oversaw a revolution in sexual mores without directly engaging the issue of what values should govern in the public square. The Court has also strengthened a neoliberal regime by simultaneously holding that imposing conditions on program beneficiaries does not constitute punishment while treating conditions that require coverage as a constitutionally unacceptable “penalty.” In relatively few of these cases did the Court, particularly in its majority opinions, directly engage the underlying values clash. The exception has come in the discussion of LGBT rights—and increasingly in the Court’s opinions on abortion. These exchanges pull back the curtain on the role of punishment in Supreme Court jurisprudence. In these cases, the arguments for the losing parties, embraced by the dissents, maintain that punishment is the point—the necessary component of affirming the “right values” in the public square. In response, the Court, in a way it did not in the earlier cases, directly addresses the relationship between the status of those affected by punishment and the values they express by engaging in the prohibited activity.

A.  LGBTQ+ Rights and the “Homosexual Agenda”

One of the clearest clashes of values prior to Dobbs occurred in the Supreme Court’s decision in Lawrence v. Texas. In Bowers v. Hardwick, the Court had considered whether there was a fundamental right to engage in same-sex sodomy, a formulation that the Court repeated in Dobbs. In both cases, the Court referred to the long history of criminalizing the conduct at issue, with those arguing for the constitutionality of such criminal penalties maintaining that the history of punishment reflected disapproval of the underlying conduct and provided evidence of the continuing legitimacy of such sanctions.

Lawrence, which involved a criminal prosecution for same-sex sodomy, directly involved the issue of punishment. The two men in the case were arrested in a private residence when the police arrived to investigate a purported weapons disturbance. Justice Kennedy’s opinion for the majority had two levels of analysis. Like the Griswold line of cases, it affirmed a right to privacy, observing that “[t]he statutes do seek to control a personal relationship that, whether or not entitled to formal recognition in the law, is within the liberty of persons to choose without being punished as criminals.” The majority opinion then emphasized that the Texas statute being enforced in the case was not just about prohibiting a “particular sexual act”; it involved intimate conduct as part of “a personal bond that is more enduring.” The opinion thus concluded that such punishment was not just constitutionally impermissible but that the behavior at issue had societal value.

Justice O’Connor, in her concurrence in Lawrence, did not go as far as the majority. Instead, in a manner reminiscent of the earlier cases on contraception, she limited her analysis to a punishment lens, finding that Texas could not claim a legitimate interest. She thus rejected out of hand the asserted state interest in the case, which she described as nothing more than the “moral disapproval of an excluded group.” For O’Connor, the impermissibility of the punishment—and its discriminatory character—were enough to strike down the statute without necessarily requiring an affirmation of the value of same-sex intimacy.

Writing in dissent, Justice Scalia made clear that he thought that moral disapproval of same-sex sexuality was exactly what the case should have been about. He denounced what he called the “homosexual agenda,” which he defined as “the agenda promoted by some homosexual activists directed at eliminating the moral opprobrium that has traditionally attached to homosexual conduct.” He cast his dissent explicitly in terms of maintaining a moral hierarchy based on that opprobrium.

The opinions in Lawrence thus frame, perhaps better than any of the other cases, the permissibility of punishment and the Court’s use of a punishment lens. They involve a clash between the ability to affirm moral values in the public square and the preservation of private homes from the intrusion of the state. They also involve the use of the declaration of values to define those to be “protected,” in Scalia’s words, from those to be “excluded,” in O’Connor’s terms, thus reaffirming societal hierarchies between the groups. And they involve the permissibility of the imposition of criminal sanctions to reinforce moral opprobrium, even when the behavior at issue is consensual conduct between two adults. The Lawrence Court’s 6-3 majority unequivocally rejected the propriety of punishment used to harden the lines between the protected and the excluded—and in the majority opinion, if not O’Connor’s concurrence, embraced an alternative view of the purpose of sexual conduct as an expression of commitment to a partner, not just as a means to procreation.

In Obergefell v. Hodges, the case upholding the right to marriage equality, the majority went even further in embracing same-sex relationships as an expression of family values while the dissents reaffirmed the need to channel sexuality into marriage—and to punish those who fell outside of such precepts. Kennedy wrote that there “is dignity in the bond between two men or two women who seek to marry and in their autonomy to make such profound choices.”

The majority opinion added that the right to marry is not just about the couples’ relationship to each other, but also about their children. “Without the recognition, stability, and predictability marriage offers,” Kennedy wrote, “their children suffer the stigma of knowing their families are somehow lesser . . . . The marriage laws at issue here thus harm and humiliate the children of same-sex couples.” The opinion thus saw denial of the ability to marry as a punishment imposed not only on the couple but on their children. It accordingly equated the limitation of marriage to different-sex couples with imposition of a stigma on those raising families outside the institution.

The Obergefell majority did take sides in the culture wars—in recognizing the dignity and moral worth of same-sex relationships. In basing the decision on the changed nature of marriage, the Supreme Court acknowledged that marriage reflected a new moral sensibility: one that made autonomous choice, not religious or societal duty, the foundation of the marital relationship. The Court accordingly went beyond the rejection of the punishment (while noting “the harm and humiliation” involved in the refusal to recognize same-sex families) to confer public recognition and moral worth on LGBT families.

The four justices who dissented rejected both the premise that marriage had changed and that the Supreme Court should acknowledge that change. Chief Justice Roberts’s dissent explained that “for the good of children and society, sexual relations that can lead to procreation should occur only between a man and a woman committed to a lasting bond.”

This reasoning is the same as that which justified the vilification of nonmarital sexuality a half century ago. On this view, heterosexual sex, not just procreation, must be channeled into marriage, and marriage must carry a moral command against nonmarital sexuality. Punishment, whether material or symbolic, is the necessary complement to this reasoning.

Alito’s dissent made explicit his objection to overturning traditional moral hierarchies. He wrote: “I assume that those who cling to old beliefs will be able to whisper their thoughts in the recesses of their homes, but if they repeat those views in public, they will risk being labeled as bigots and treated as such by governments, employers, and schools.” In short, Alito’s concern lay directly with the ability to uphold the preferred values in the public square and fear that those who did so would now be the ones receiving punishment. And while he acknowledged that family understandings and behavior could change over time, he simply treated data such as the 40% nonmarital birth rate as further reason states could choose to double down on traditional moral understandings—drawing clear distinctions between preferred groups and those subject to moral condemnation even when a substantial portion or even a majority of the public did not share such views.

Alito’s opinion accepted the right of moral traditionalists to insist on the primacy of heterosexual marriage and to punish those who create families or engage in sexual intimacy outside of marriage. He saw the majority, in contrast, as embracing same-sex families as entitled to equal moral worth and as holding views that necessarily punish those who disagree by labeling them bigots. Moreover, he treated evidence of changing norms, such as the increase in nonmarital births, as a threat to the traditional moral order and therefore as additional reason for punishment. Framed in such terms, the legal question becomes one of power and authority to uphold the preferred views and, in Alito’s terms, punishment cannot be separated from the underlying values.

B.  Abortion Revisited

With respect to abortion, however, neither the Court’s efforts to sidestep the morality of the underlying conduct nor its efforts to address the issues directly have yet succeeded. In the years after Roe, abortion became a political marker in part because the issue offers little opportunity for compromise. While the Court largely succeeded in making contraception more available without directly embracing the sexual revolution, the Court’s efforts to sidestep the moral questions underlying abortion satisfied no one. Roe satisfied neither those who saw reproductive rights as essential for gender equality nor those who believed the status of the fetus was not an issue that could be “bracketed.” These divisions, unlike those underlying recognition of LGBT relationships, have increased over time.

In Planned Parenthood v. Casey, the Court nonetheless tried to tamp down the divisions by directly engaging the values conflicts. Decided in the early 1990s, Casey had been widely expected to reverse Roe outright. Instead Casey preserved the core of the right to abortion, while permitting the states to impose new restrictions, such as waiting periods and parental consent provisions. Justice O’Connor’s plurality opinion was the Court’s only significant abortion opinion written by a woman. She observed that the earlier decisions in Griswold, Eisenstadt, and Carey “support the reasoning in Roe relating to the woman’s liberty because they involve personal decisions concerning not only the meaning of procreation but also human responsibility and respect for it.” Casey, alone in the Supreme Court’s reproductive rights decisions, made women’s relationship to the growing fetus central to the decision. It succeeded, however, only in delaying the day of reckoning over Roe itself.

Dobbs v. Jackson Women’s Health Organization is radically at odds with previous decisions that have used the concept of punishment to distract attention from inflammatory subjects. It is also at odds with the conception of judicial statesmanship, through which courts legitimate the judicial system while also recognizing social change and creating community in the midst of clashing values. Although Justice Alito claimed otherwise, the decision is designed to inflame and, in doing so, it is likely to empower state officials who wish to exercise their authority to punish—in order to affirm the moral superiority of their position, to reaffirm their values in the public square, to impose dominance over outgroups, and to restore a sense of hierarchical order that validates their position in society. The opinion itself invites such a response.

First, it goes out of its way to say that not just opposing views but Roe itself was never legitimate. Alito’s majority opinion declares that “Roe was egregiously wrong from the start. Its reasoning was exceptionally weak, and the decision has had damaging consequences.”

Second, it dismisses women’s interest in their bodily integrity as of no consequence, suggesting that those interests are amply protected through existing laws.

Third, while the opinion claims not to base the decision on recognition of a fetus as a human being from the moment of conception forward, it clearly views state actions based on such views as a legitimate basis for legislative action and declares that the fact that abortion serves to “destroy a ‘potential life’ ” justifies the Court’s treatment of Roe as precedent entitled to less deference than other Supreme Court precedents.

Fourth, unlike other Supreme Court decisions announcing a major change in governing law (with all deliberate speed), the Court provides no guidance for the states and no timelines for implementation. It simply overturns Roe and leaves the states—and the pregnant—on their own in the face of a rapidly shifting and still uncertain legal landscape.

The majority opinion thus has the hallmarks of an act of vengeance righting a wrong, rather than serving to provide judicial guidance in the face of contentious issues. It seeks to restore the moral hierarchy associated with the forces that see abortion as necessarily impermissible. It affirms states’ right to ban abortion without addressing the impact on the rights of states that wish to ensure its continuing availability. And in not only issuing the Dobbs decision, but in failing to restrain the states’ earlier vigilante laws, the Court’s current stance suggests that the states will be free to treat abortion as murder and punish those who provide abortions, those who seek abortions, and those who aid and abet those involved with abortions in any way.

V.  THE FUTURE OF ABORTION PUNISHMENT

Abortion has become a flash point for political division because it falls on the fault lines of cultural polarization and political realignment. After Dobbs, the factors that drive political divisions are likely to overlap with the factors driving calls to punish those seeking and providing abortions.

In analyzing and moving forward on these issues, it is first critical to understand the sources of the call for punitive measures and then to consider whether a focus on punishment can also provide a strategy for defusing the conflict. Without such a strategy, this Article concludes, the likely result is a replication of the conditions that preceded Roe: pregnancy as the punishment for sex, aggravating the existing class and regional bifurcation in unintended births; a high-profile fight between elite actors on the boundaries of post-Dobbs public morality; and selective enforcement that disproportionately penalizes poor and minority women. As an alternative, this Article proposes that the punishment lens analysis can serve as a means to de-escalate the coming legal wars over abortion.

A.  Values Polarization and Abortion Punishment

The analysis of the factors underlying the calls for punishment starts with the sources of political polarization. Political theorists link partisan polarization to a sorting between the parties based on cultural values. They describe those with conservative values orientations as favoring in-group unity and strong leadership and as having “a desire for clear, unbending moral and behavioral codes” that includes an emphasis on the importance of punishing anyone who strays from the code, “a fondness for systematization,” and “a willingness to tolerate inequality (opposition to redistributive policies).”

Those with a liberal values orientation, in contrast, tend to be more tolerant of outsiders and to consider context rather than strict adherence to rules when determining appropriate behavior. They also demonstrate more empathy, less interest in strict punishment for violations of moral and behavioral rules, and greater intolerance of inequality.

Attitudes toward abortion both reflect and contribute to the partisan polarization. Abortion attitudes have become more partisan over time, and psychologist Drew Westen describes this outcome as a matter of intentional political strategy. That strategy was designed to attract people who see abortion in rigid moral terms to the Republican Party in the 1990s, and as it succeeded, self-identified Republicans became more opposed to abortion. Stances on abortion accordingly became a political marker.

Public opinion polls today confirm the high degree of partisan polarization on abortion. While 61% of all Americans believe that abortion should be legal in all or most cases, 60% of Republicans—and 72% of those who identify as “conservative Republicans”—believe that abortion should be illegal in all or most cases. In contrast, 80% of Democrats and 90% of “liberal Democrats” believe that abortions should be legal in all or most cases. Public opinion polls indicate that support for the imposition of criminal sanctions closely tracks abortion views generally.

These attitudes correspond to the purposes and pitfalls of punishment. All groups seek affirmation of their values, but the values to be expressed are not parallel in their relationship to the imposition of punishment. Abortion rights advocates seek to preserve a right to privacy free from government intrusion through the democratic process, including referenda as well as litigation. To the extent they wish to exact punishment for taking away abortion rights, they have suggested defeating anti-choice politicians at the ballot box, impeaching Supreme Court justices for perjury about their willingness to follow precedent, and requesting ethics investigations. We could also imagine more aggressive efforts to counter anti-abortion activists who attempt to interfere with abortion in states where it remains legal. Some of the most important actions pro-choice states have taken, however, are providing greater support for those coming from out of state, protecting their own health care workers, and ensuring access to medication abortion. The symbolism involves a greater and more visible state embrace of a right of abortion access.

The punishment desired by those opposed to abortion, by contrast, has two components. The first involves the expressive function of law and the declaration that abortion is wrong. The declaration reaffirms the moral hierarchy that elevates those who oppose abortion entirely; empirical studies indicate that when abortion is perceived as a “moral wrong,” it produces outrage in those who oppose it, and they dehumanize the women (and their partners) who seek abortions. Expressing this moral opposition even has a “shaming effect” on those who require abortions because of significant health issues. It also justifies subjecting those who seek therapeutic abortions to intrusive review of their doctor’s medical determinations or requiring those experiencing rape or incest to face onerous proof requirements, retraumatizing victims of sexual assault. Yet the symbolic effect can be achieved with limited punishment, through the prosecution of only occasional cases involving public defiance of the new abortion bans.

This dehumanization and shame, in turn, empower those who would pursue the second component: waging a war to root out the practice. The National Right to Life Committee has proposed sweeping measures, for example, that would not only criminalize abortion itself, but treat it as a “criminal enterprise” that needs to be eliminated using “RICO-style laws” that would reach anyone providing any type of support to someone seeking an abortion. These provisions target not only medical personnel but those providing abortion information. Others propose empowering not only state prosecutors but individual citizens to conduct surveillance on those visiting out-of-state abortion clinics or accessing internet websites providing abortion information, or even to monitor the pregnant (and their friends and family) more generally. These activities, particularly when carried out by private “vigilantes,” combine opposition to abortion with a moral crusade. While some laws immunize the pregnant from prosecution, existing laws in many states have already been used to prosecute women experiencing miscarriages for “feticide,” and more draconian laws have been proposed that would provide for prosecution for crimes based on an abortion. Even without new laws, the Attorney General of Alabama, for example, threatened to prosecute as child chemical endangerment those crossing state lines to terminate their pregnancies or using abortion pills, even if the patients legally obtained the pills within Alabama.

Finally, prosecutions, particularly if they are brought against those who seek abortions, are likely to enforce gender, race, and class hierarchies. As anti-abortion fervor has mounted, some states over the last decade have increased criminal investigations of various types of pregnancy loss, including not just self-induced abortions but also miscarriages, stillbirths, and any form of infanticide. These cases overwhelmingly target “pregnant people who are poor, young, have substance abuse issues or live in areas with limited health services.” Advocates fear the reversal of Roe will fuel more such cases and particularly harm women of color, already disproportionately overpoliced and prosecuted on pregnancy-related issues. Farah Diaz-Tello, an attorney who works on reproductive health rights, commented, “It’s this vicious cycle where lack of access, . . . increased scrutiny and stigma around abortion, as it becomes further restricted or criminalized, leads to more criminalization.” And the fact that the individuals are poor, minority group members, substance abusers, or otherwise lack full control of their lives contributes to the willingness of others to impose moral condemnation on their behavior.

Dobbs will only make this worse.

B.  Punishment in the Courts

Striking down Roe invited the states to adopt abortion bans that, in criminalizing abortion, also prescribe punishment. The courts have historically policed the limits of criminal punishment, requiring, for example, that criminal laws provide clear notice as to what acts are proscribed, that those accused enjoy appropriate procedural protections, and that punishments are proportionate to the offense. This Article has gone beyond these traditional concerns to address how the Supreme Court uses a punishment lens to accomplish broader objectives, particularly in the face of irreconcilable and intrinsically divisive issues, and issues that may threaten judicial legitimacy. Abortion certainly qualifies as divisive, and Dobbs has already raised serious concerns about judicial legitimacy.

Indeed, in the years since Roe, anti-abortion activists have made the fetus the issue—with the impact on the person forced to give birth disappearing from view. When the fetus becomes the subject of concern, consensual sex—with no victims other than public morality—is beside the point. When prosecutors pursue abortion cases, they are passing moral judgment on the permissibility of the abortion itself and often imposing significant penalties.

Two arenas in particular, however, offer the Court an opportunity to tamp down the Dobbs-inspired conflicts.

First, if the Supreme Court seeks to deflect the outrage over Dobbs, the simplest way would be to take seriously its own statement that all it has to do is to return the issue to the states. Taking that seriously requires protecting the rights of states that wish to secure access to abortion—and protecting, as Justice Kavanaugh suggested in his concurrence, the constitutional right to travel. The most basic question involving the right to travel is whether citizens of one state who travel to another state and then return home can be punished for their out-of-state conduct. Existing precedent from the Roe era suggests that such conduct is constitutionally protected and other limits on state jurisdiction ordinarily preclude punishment for out-of-state acts. Affirming the constitutional right to travel should also mean that states cannot burden exercise of the right to travel by punishing, for example, those within the state who assist the traveler in leaving the state or acts that a pregnant person takes within the home state, such as researching out-of-state options, packing one’s bags, or driving to the state line for the purpose of accessing abortion in another state, just as the Court concluded in Hobby Lobby that forcing an employer to choose between an ACA-compliant health plan and a monetary contribution to ACA funding constituted a burden on religious freedom. The Court should also strike down punishment that creates obstacles to First Amendment rights of expression, such as penalizing websites or advice to individuals that contain accurate information about abortion and out-of-state availability. The Court could also recognize that state efforts to encourage private citizens to track visits to out-of-state abortion clinics, access to websites providing abortion information, menstrual periods, or other personal information either serve no legitimate state purpose to the extent they are intended to penalize the right to travel or, like searching the marital bedroom for contraceptives, are so intrusive as to be constitutionally suspect. Striking down punishment that burdens the right to travel could simultaneously affirm state abortion bans and still protect abortion’s availability in the states that permit it.

The second arena where a punishment lens could be effective in defusing abortion controversies involves women’s right to medical treatment to protect their health. Statutes banning abortion pose a dilemma for doctors; they report that they fear retaliation for performing abortion-like procedures—even when the fetus is dead or the health threat to the patient is significant. In these cases, the risks are asymmetrical: the doctor faces punishment for “doing the right thing” and little in the way of negative consequences for not acting, even if the patient dies as a result. Uncertainty itself thus imposes punishment—and serves the purposes of those who would root out abortion (with inevitable spillover effects to abortion-like procedures). Yet, criminal prosecutions of doctors in these cases, while risky and expensive for them personally, could bring the criminal justice system into disrepute. For those seeking to ensure abortion access, the question therefore should be how to bring the issue of punishing doctors—and the corresponding ability of the pregnant to receive abortions necessary to protect their health—into public focus. Test cases on the enforceability of abortion bans in circumstances threatening the life of the mother might bring greater clarity. Such suits could also focus attention on the health threat that punishment poses to pregnant patients. Heavy-handed interventions into newborn care, in which governors sought to prolong the lives of children born with substantial birth defects, ultimately discredited those interventions. The same approach might work in the context of pregnancy care. Justice Blackmun’s initial draft opinion in Roe sought to focus on the issue of professional judgment. Partisan differences on abortion are smaller (and overall support for punishment is substantially less) when the mother’s health is at risk. Striking down abortion laws that do not clearly immunize doctors’ decisions about medically therapeutic abortions is a first step; recognizing that the pregnant have a right to abortions necessary to protect their health is an important second step.

In cases of rape and incest, the effort ought to go further to highlight the callous treatment of such victims. Governor Greg Abbott declared, in response to questions about precluding abortion for the victims of involuntary sexual activity, that “Texas will work tirelessly to eliminate all rapists from the streets of Texas . . . .” In short, the Governor tried to deflect claims of punishment of one type (forcing the victims of rape to carry the rapist’s child to term) by talking about another type of punishment—that imposed on rapists. The veracity of the claim is not the issue, particularly because Texas has one of the highest rape rates in the country and Abbott had done little to combat it. As with abortions necessary to protect the lives of the pregnant, partisan differences narrow considerably on cases of rape and incest and the failure to provide such exceptions underscores the punitive nature of the restrictions.

Finally, cases in which patients are prosecuted ought to be used to highlight the cruelty associated with abortion restrictions in the United States. Restricting access to abortion is in fact just one more form of punishment of the marginalized, with the same groups that support abortion restrictions also opposing more generous provision for the poor. White evangelical Protestants, for example, the religious group most opposed to abortion, are also among the groups most likely to respond that aid to the poor does more harm than good. And the same groups have become more likely to oppose immigration and efforts to promote racial equality and to favor imposition of preferred values through authoritarian means. The cruelty of abortion bans is a large part of what motivated the decision in Roe. With abortion opponents calling for draconian enforcement measures, it should be a factor in mobilizing the opposition to post-Dobbs enforcement of abortion restrictions.

CONCLUSION

Focusing on punishment will not resolve intractable values disputes; it simply changes the subject. Changing the subject, however, does offer a tactic for defusing intractable disputes—or a long-term strategy for reframing what is at stake. In either case, it makes visible the consequences of public actions, such as abortion bans, on those affected by them in ways that can serve to underscore their cruelty. The public wants its core values expressed and respected in the public square; in cohesive societies the values are consensus based, and punishment reinforces them. The urge to punish, when embedded in group conflict, inflames divisions (threatening violence or civil war); channeling it effectively is central to the rule of law. Understanding this dynamic gives the Court tools (and a motive) to construct an off-ramp: one that allows states to decide their own approaches to abortion while protecting the pathways out of the states that ban it and ensuring that doctors can save the lives of their patients.

96 S. Cal. L. Rev. 1101


* Robina Chair in Law, Science and Technology, University of Minnesota Law School.

† Justice Anthony M. Kennedy Distinguished Professor of Law, Nancy L. Buc ’69 Research Professor in Democracy and Equity, University of Virginia School of Law. Thanks to workshop participants at the University of Minnesota Law School Squaretable for comments and to Sam Turco for research assistance, and to Katherine Bake, Mary Anne Case and John Q. Barrett for comments on an early draft.

Delegating War Powers

Academic scholarship and political commentary endlessly debate the President’s independent constitutional power to start wars. And yet, every major U.S. war in the last sixty years was fought pursuant to war-initiation power that Congress gave to the President in the form of authorizations for the use of military force. As a practical matter, the central constitutional question of modern war initiation is not the President’s independent war power; it is Congress’s ability to delegate its war power to the President.

It was not until quite late in American history that the practice of war power delegation became well accepted as a domestic law basis for starting wars. This Article examines the development of war power delegations from the founding era to the present to identify when and how war power delegations became a broadly accepted practice. As this Article shows, the history of war power delegation does not provide strong support for either of two common but opposite positions: that war power, as a branch of foreign affairs powers, is special in ways that make it exceptionally delegable; or that it is special in ways that make it uniquely nondelegable. More broadly, that record counsels against treating “foreign affairs delegations” as a single category, and it reveals that constitutional questions of how Congress exercises war power are as significant as whether it does.

INTRODUCTION: WAR POWER AND THE NEW NONDELEGATION DEBATES

Academic scholarship and political commentary endlessly debate the President’s independent constitutional power to start wars or launch military interventions. And yet, every major U.S. war in the last sixty years—Vietnam, the Persian Gulf War, Afghanistan, and the 2003 Iraq War—was fought pursuant to war-initiation power that Congress gave to the President in the form of authorizations for the use of military force.

Congress’s war power—and by that term, or alternatively “war-initiation power,” we mean throughout this Article specifically the power to commence war, as distinct from power to wage it—is generally understood to arise from Article I, Section 8’s power “To declare War.” But none of the congressional war authorizations of the past sixty years was in any sense a declaration of war. None had the effect of initiating, or directing the initiation of, military conflict. Instead, they were broad delegations to the President of the power to decide when and whether to initiate hostilities. In each case the President did use force (and it was apparent beforehand that he likely would, at least to some extent), but Congress left that decision to the President. Thus, as a practical matter, the central constitutional question of modern war initiation is not the extent of the President’s independent war power; it is the extent of Congress’s ability to delegate its war power to the President.

Until very recently, that latter question seemed easy—so easy that it was rarely asked. Under the Supreme Court’s modern nondelegation doctrine, Congress can, for the most part, delegate power to the President if it includes an “intelligible principle” by which the delegated power would be exercised—and this principle presents an exceptionally low bar, reviewed by courts with a high degree of deference. So while Congress likely could not delegate to the President discretion to start wars anywhere for any reason, delegations limited to particular places or particular threats (even stated broadly) would easily pass the test.

The conventional permissive nondelegation doctrine has, however, been called sharply into question by academic commentators and, more importantly, by the Supreme Court. In particular, Justice Gorsuch’s 2019 dissent in Gundy v. United States, joined by Chief Justice Roberts and Justice Thomas, argued for a new, more restrictive approach to the doctrine. In a separate opinion, Justice Alito signaled willingness to revisit the doctrine in an appropriate case, and two Justices added since Gundy—Justices Kavanaugh and Barrett—may have sympathy for the project as well. In 2022, the Court rejected the Environmental Protection Agency’s purported authority to regulate carbon emissions, reasoning that extra scrutiny and strict statutory interpretive rules apply to claims that Congress delegated to executive agencies power over “major” public policy questions. Justice Gorsuch, joined by Justice Alito, wrote separately to emphasize the foundational constitutional importance of keeping major legislative decision-making in Congress. One senses that a substantial revision of the nondelegation doctrine may be impending, thus provoking new scholarly attention to—among other things—the historical practice of delegation.

War powers have not yet been a focus of this renewed nondelegation debate—but they should be. That is especially so because when the issue comes up, those who consider it are often pulled in one of two opposing directions.

One view sees war-initiation power as special in ways that make it unusually—maybe even uniquely—non-delegable. In this view, there is something about going to war, including the stakes or the institutional advantages and proclivities of the different branches, that constitutionally requires Congress to retain ultimate control. For Congress to yield substantial discretion over such a monumental decision to the President violates a key design feature of the Constitution.

A contrary and more common view (at least in the modern era) sees war-initiation power as special in ways that make it unusually delegable. Some justices and commentators have suggested that a more stringent nondelegation doctrine, even if revived in domestic matters, would not apply to foreign affairs. And, indeed, at the height of its nondelegation jurisprudence in the 1930s, the Court in United States v. Curtiss-Wright Export Corp. indicated that the doctrine generally applies less strictly in foreign affairs than in domestic matters. Given that war powers are (again, at least in the modern era) a quintessential foreign affairs matter, and given that the President has some independent military powers, this view treats war powers as especially delegable.

Neither of these opposing views has been accompanied by sustained examination of historical practice. Such examination is important not just for history’s sake but because historical interpretive gloss often plays an important role in constitutional separation of powers law and because, in addition to the rising originalist orientation of the Supreme Court, the political branches often invoke originalism to support their respective positions on war powers.

This Article examines the development of war power delegation from the founding era to the present to identify when and how war power delegations became a broadly accepted practice. Ultimately, we argue that the historical record does not provide strong support for either of the two polar views described above: that war-initiation power is exceptionally delegable, or that it is uniquely nondelegable. Throughout much of American history, both political branches sometimes treated war initiation as constitutionally distinct, but not so consistently as to alone justify either of those positions. We then explore what that history suggests about both constitutional war power and foreign affairs delegations more generally.

We show first that, contrary to common assumptions, early American history offers little support for broad war-initiation delegation. If anything, the historical record reveals that such delegations were rare and narrow, and sometimes accompanied by strong expressions of concern. In that way, this Article contributes directly to the current debate about nondelegation originalism, pointing to the ways in which war power in particular was understood to operate. We then go on to show that even as war power delegations became more widely used in the nineteenth and especially the twentieth centuries, eventually becoming an accepted practice during the Cold War, constitutional objections to war power delegations have had remarkable staying power. Even if now a minority view, they resurface again and again, especially at moments of major controversy about the role of military force in American foreign policy.

We do not contend that the historical record alone yields a clear doctrinal answer to whether and to what extent the war power is delegable—and, to reiterate, by that we mean the power to commence war as distinct from powers over how to wage it. A comprehensive doctrinal analysis would look at other factors, including functional arguments.

Nevertheless, our analysis of the historical record yields at least four implications for thinking about law in this area. First, this Article casts doubt on efforts to separate a category of “foreign affairs delegation” from resurgent controversies about the nondelegation doctrine in general, because it shows that foreign affairs delegation is not a single, coherent category. Those who want to breathe new life into the nondelegation doctrine, often on originalist grounds, sometimes carve out foreign affairs for special treatment as an area in which broad delegation of executive policy discretion seems especially appropriate. This Article, however, draws attention to the ways in which war-initiation power has historically been viewed as distinct from some other foreign affairs delegations. Contrary to the tendency of some constitutional critics of delegation in general to see Congress’s war power as an area in which delegation is especially appropriate, this Article spotlights arguments as to why war power delegation has sometimes been viewed as uniquely problematic. Among other things, this account complicates efforts by some jurists and commentators to pursue on originalist grounds a restrictive domestic nondelegation doctrine while preserving broad delegations as to war and foreign affairs.

Second, this Article shows that the contemporary emphasis in constitutional debates on whether Congress authorizes war or force misses the historical emphasis on how Congress does so. The stakes involved in the latter are immense, too. Any reform project aimed at restoring Congress’s “original” war powers also needs to grapple with constitutional limits to their delegation.

Third, the periodic reemergence of war power nondelegation objections illustrates how constitutional arguments have always been a major part of policy debates over U.S. military power. A defining feature of American constitutional war powers is the extent to which, even centuries after the founding, many basic legal questions remain contested, and the extent to which partisans in strategic debates over the use of military force wield constitutional arguments for political effect. This point is worth highlighting at this moment because U.S. overseas military commitments face intense resistance from both the right and the left. The history in this Article suggests that we will likely see an uptick in war power nondelegation arguments again as a tool of resistance to military adventurism—and at a time when nondelegation doctrine generally seems to be in some flux.

And, fourth, this Article shows the many ways in which war power delegations have been used or proposed to deal with a wide array of novel strategic challenges. One obvious function of war power delegation is to manage complexity, by giving the President leeway to respond quickly and flexibly to crises. This fits with standard arguments for delegation in general. But the story of war power delegation is more intricate: delegation also served as a device for handling various specific challenges—including dilemmas that were virtually unimaginable to the founders—that arose over time in the context of overseas policing, collective security, and nuclear deterrence.

The Article proceeds as follows. Part I considers what, if anything, the Constitution’s drafting and ratifying history can contribute to debates about war power delegation. Part II examines historical war power practice up to 1860 under four categories of conflicts and their legal bases: (1) formally named “wars”; (2) the “Quasi-War” with France in 1798–1800; (3) lesser-known nineteenth-century episodes in which war power delegation was considered or debated but no actual military conflict ensued; and (4) other use-of-force delegations relating to frontier conflicts with Native American tribes, piracy, and insurrections. Part III looks at delegations from the Civil War to the Second World War, a period in which the nation’s emergence as a global power was, perhaps surprisingly, not accompanied by any material delegation of war-initiation power. Part IV examines practices beginning with the Cold War, in which we find the most decisive shift to a regime of broad delegation of war power. Part V discusses the implications of this history for war powers doctrine, foreign affairs nondelegation doctrine, and war powers reform.

I. WAR POWER DELEGATION AT THE FOUNDING

A vast scholarly literature has explored the extent to which the Constitution’s original design vested the war power exclusively in Congress. Article I gave Congress the power to declare war, and Article II vested executive power in the President and made the President commander in chief. Debate rages today about whether, beyond giving the President wide powers to control the conduct of war, those Article II powers also include authority to initiate military hostilities. We do not relitigate that issue here. For present purposes, we assume that the original design gave Congress some exclusive war power—a proposition not widely contested—and ask instead what founding-era debates suggest about Congress’s ability to delegate that exclusive power (whatever its extent may have been) to the President.

We find that the founding-era debates say surprisingly little on the matter. Neither the framers nor the ratifiers appear to have engaged war power delegation directly. The war power did not play a large role in founding-era debates, and contemporaneous commentary on that power lacked detail about how it would be exercised. Further, discussions of delegation more broadly (which themselves were rare) do not have obvious implications for war power delegations. The founding-era debates and background understandings do not clearly establish congressional authority to delegate war powers. If anything, they indicate strong beliefs among at least some key framers that important war power decisions should not lie with the President, raising doubt whether those framers would have thought it permissible for Congress to broadly hand them off to the President by statute.

A. War Initiation in the Convention and Ratification Debates

The records of the 1787 Philadelphia Convention indicate that delegates discussed war powers on two material occasions. Although both exchanges convey a strong sense that Congress, not the President, should hold war-initiation power, neither considers the question of war power delegation directly or definitively.

On May 29, Edmund Randolph opened the Convention’s substantive debate by introducing the Virginia Plan, which soon prompted a discussion of the war power. The Plan said nothing directly about war power, but it proposed a national government headed by a “National Executive” which, in addition to “general authority to execute the National laws,” would have “the Executive rights vested in Congress by the [Articles of] Confederation.” Various speakers objected that this language could be read to give war powers to the President. The delegates did not vote specifically on the war power point, but on a subsequent motion by James Madison (seconded by James Wilson) they dropped the reference to the executive powers of the Confederation Congress and substituted a direction that the executive would have power “to carry into execution the national laws” and “to appoint to offices in cases not otherwise provided for.” The task of defining legislative and executive powers ended up with the inaptly named Committee of Detail, which delivered to the Convention on August 6 a draft giving Congress the power “To make war.”

When the full Convention reached the “make war” language on August 17, Charles Pinckney suggested that the war power should go to the Senate rather than Congress as a whole, and Pierce Butler spoke in favor of “vesting the [war] power in the President.” Butler’s suggestion received no recorded support; Elbridge Gerry replied that he “never expected to hear in a republic a motion to empower the Executive alone to declare war.” Madison and Gerry famously moved to replace “make” with “declare,” which passed eight states to one. That vote established what became the Constitution’s final language, and the delegates seem not to have returned to it.

The August 17 debate tends to support the idea of congressional war-initiation power, but it is unhelpful on the question of delegation. Questions of how Congress would exercise war power were not addressed directly at all. One might argue that the delegates’ focus on the dangers of executive war initiation suggests that they would not have wanted Congress to delegate it broadly to the President. Ellsworth and Mason, for example, seemed to favor congressional war power as a way of reducing the likelihood of war—because they thought presidents would be too inclined toward it. Sherman and Gerry argued (along with Pinckney, Rutledge, Wilson, and Madison in the earlier debate) that the President should not have war-initiation power. Perhaps this meant they thought the President should not have war-initiation power even with Congress’s approval, but that is not certain. Alternatively, they (or some of them) might have thought only that Congress should make the initial decision, but that decision might include empowering the President ultimately to exercise discretion. In the end, only a few delegates spoke to the war power issue (though the speakers included some of the most influential delegates). It seems that the delegates were thinking generally about the question of which branch should have war power, and what the scope of that power would be, but were not focused on how that power would be exercised in practice, including the permissibility or impermissibility of delegating it.

This pattern continued in the ratification debates. As at the Convention, war initiation was not a major focus. When it came up, speakers seemed to assume it was a congressional power without dwelling on how they expected Congress to exercise it. For example, in an often-quoted passage, James Wilson in Pennsylvania said:

This system will not hurry us into war; it is calculated to guard against it. It will not be in the power of a single man, or a single body of men, to involve us in such distress, for the important power of declaring war is vested in the legislature at large; this declaration must be made with the concurrence of the House of Representatives. From this circumstance we may draw a certain conclusion, that nothing but our national interest can draw us into a war.

The Federalist also had little to say about war initiation. The most significant discussion is in Federalist 69, in which Alexander Hamilton—a bit disingenuously—compared the President’s power under the Constitution to the power of the British monarch and the governor of New York. Regarding war power, Hamilton noted that while the monarch alone could declare war, under the Constitution that power “would appertain to the legislature.”

As with the comments at the Philadelphia Convention, these statements can be read to imply a nondelegable power in Congress. Although Wilson’s comment does not address delegation directly, concerns about lodging war initiation in a single person—instead demanding that such decisions ultimately rest with both houses of Congress—might also cut against allowing Congress to delegate its war power to the President. But again, that is far from certain. Such statements might only mean that Congress must make the initial decision regarding war, but that choice might include a decision to pass discretionary authority to the President. In Hamilton’s contrast between the British monarch and the Constitution’s President, even if Congress’s war-initiation power were delegable, placing it in Congress in the first instance would still represent a substantial limit on the President’s power compared to the British monarch’s.

Like the drafting debates, the statements regarding war power in the ratification period have only limited value for our inquiry. They are isolated statements by only a few participants (albeit important participants), not addressed to the particular issue of delegation, and not part of an extended discussion of the operation of war powers. Their central focus was to point out an important constitutional limit on presidential power. Their phrasing—and the fact that they were not contested by anti-federalist speakers or writers—indicates a broad consensus on the basic proposition that allocating declare-war power to Congress implicitly denied the President a corresponding independent power. But, how Congress could exercise its declare-war power is a different matter.

B. General Understandings of Delegation in the Founding Era

The framers and ratifiers might not have addressed war-initiation delegations specifically because they had a broader understanding of delegation that would encompass war power along with many other congressional powers. The founding-era view on that broader issue is sharply contested: some scholars contend that the founding generation generally saw Congress's powers as delegable, subject perhaps to only modest limits, while others argue that the founding generation adhered to more exacting restrictions on congressional delegation. This debate has said little about war power directly, and we do not take a position on it here.

One specific strand of that debate over the founders’ view of delegation, however, is quite relevant to war power and merits further discussion. Several commentators have suggested that, notwithstanding substantial general limits on delegation, the framers may have understood foreign affairs powers to be broadly delegable. Because that categorical exception might include war-initiation power, we address it briefly here.

The core case against delegation starts with the text of Article I, Section 1: "All legislative Powers herein granted shall be vested in a Congress of the United States . . . ." By negative implication, it may be argued, legislative powers shall not be vested elsewhere—and statutes delegating power to the President, to the extent they transfer legislative power, appear to violate this directive. Further, influential English political theorists, including Locke and Blackstone, had suggested that delegation of lawmaking power by Parliament to the monarch threatened separation of powers. These sources may indicate a background principle of nondelegation informing the founding-era understanding of Article I. But even if the Constitution contained such a broad nondelegation principle regarding Congress's legislative powers, it is not clear how it would relate to war-initiation power (and other foreign affairs powers). Under the British system, war initiation—like much of foreign affairs—was a power of the monarch, not of Parliament. Thus, to the framers and the thinkers who influenced them, war power may not have been considered the type of lawmaking (that is, making rules governing ordinary private behavior) to which nondelegation principles applied.

The most developed defense of this position, principally based on Convention debates, comes from Professor Michael McConnell. He suggests that "the non-delegation doctrine, with its roots in the rejection of a Proclamation Power, may apply only to lawmaking, not to the former royal prerogative powers given to the legislative branch." He finds support in an exchange near the outset of the Convention, in which participants discussed and rejected a proposal by Madison to specify that the executive would have power "to execute such other powers not Legislative nor Judiciary in their nature as may from time to time be delegated by the national Legislature." McConnell suggests that the Convention accepted the view that Congress could authorize presidential exercise of congressional powers if those powers were not legislative in nature, and that the President's exercise of such delegated powers was within the law execution power. He goes on to cite "formulating foreign policy" as an example of a power that is neither judicial nor legislative in nature and that might be especially delegable to the President.

Perhaps, but this seems far from certain. There was little recorded debate on this issue, and it is unclear whether the delegates rejected Madison's proposal because they thought it redundant (McConnell's view) or because they opposed it on the merits. Nor is it clear whether the category of matters "not Legislative nor Judiciary in their nature" approximated the former royal powers or included foreign affairs. And even if McConnell is right about the broad outlines of his conclusion, it is unclear whether Convention participants would have regarded war power as within the category of non-legislative delegable powers. Several key delegates, including Wilson and Madison himself, said or implied that war power was legislative in nature (even if some other foreign affairs powers might not be).

In sum, it is difficult to discern how the founding generation would have thought general principles of delegation applied to war power, even if one could determine what, if any, general principles on delegation they held in common. Lacking specific discussion of war power delegations, the founding-era debates and assumptions seem not to provide clear direction on the matter.

II. DELEGATION AND WAR POWER, 1789–1860

Given the ambiguity of the founding era regarding war power delegations, early practices may be particularly salient in establishing precedent. This Part examines early congressional practice relating to delegation and military conflicts. It proceeds in four sections. First, it considers conflicts that Congress formally designated as "war." Second, it describes the most significant authorization of military force in the period apart from formal declarations, the naval "Quasi-War" of 1798–1800. Third, it considers a series of lesser-known incidents involving delegations that did not lead to material conflicts. Finally, it examines delegations relating to uses of force in frontier conflicts with Native American tribes and suppression of piracy and insurrections.

We conclude in this Part that the early record of war-initiation delegation is surprisingly thin. Delegations during this period were scattered, relatively narrow, and often accompanied by special circumstances that caution against their use as broad precedents. Moreover, proposals to delegate war-initiation authority (or related authority) were sometimes opposed on constitutional grounds, including on the grounds that war-initiation power was especially nondelegable. These objections stand in contrast to Congress’s extensive delegations during this period as to the manner in which the President might conduct wars and other uses of force that Congress authorized.

A. Formal Wars

In the first seventy years of practice under the Constitution, Congress recognized four wars against foreign powers by name and authorized the President to use the U.S. military to fight them. Two of these are the well-known conflicts with Britain, begun in 1812, and with Mexico, begun in 1846. The other two, less commonly included on the list of formal wars, are conflicts with Tripoli (authorized in 1802) and Algiers (authorized in 1815).

The War of 1812 was the only time in this period that Congress used the phrase “declare” war. Amid rising tensions with Britain on various matters, President Madison asked Congress for a declaration of war in mid-1812, and Congress responded with an Act stating that “[W]ar . . . is hereby declared to exist between [Britain] and the United States . . . and that the President of the United States is hereby authorized to use the whole land and naval force of the United States to carry the same into effect . . . .”

Notably for our purposes, the 1812 statute was not a delegation of war-initiation power. Unlike modern authorizations, it did not leave war initiation to presidential discretion. Congress itself invoked the state of war. The statute went on to authorize broad presidential discretion in conducting the war. But that is distinct from war initiation. At minimum, the Commander-in-Chief Clause indicates that the President shares in the power of war-making once war exists. Congress's recognition of broad presidential discretion thus signaled its decision not to direct or limit the President's exercise of the commander-in-chief power in conducting the hostilities.

Congress's first formal recognition of a state of war came a decade earlier in 1802. In 1801, the Pasha (ruler) of Tripoli, in modern Libya, formally declared war against the United States as a prelude to piratical attacks on U.S. merchant shipping in the Mediterranean. President Jefferson asked Congress for authority to respond; in early 1802, Congress recognized a state of war and authorized the President to conduct hostilities against Tripoli. Although Congress did not use the word "declare," the 1802 Act resembled the subsequent 1812 declaration in other significant respects—including that it did not delegate war-initiation authority. Congress itself acknowledged the war's existence. Again, Congress recognized broad presidential authority to conduct the war, but the President presumably would have had that authority in any event once the existence of war was established.

The 1815 events with Algiers resembled the earlier Tripoli conflict. During the War of 1812, Algiers’s navy began seizing U.S. shipping, but the United States had little ability to respond with force. After hostilities with Britain ceased, President Madison asked Congress for war-making authority, which Congress granted in similar terms to the 1802 Tripoli authorization. As with Tripoli, Congress did not delegate war-initiation power; it recognized a state of war and authorized the President to direct the military conflict as he saw fit.

Finally in this period, Congress recognized a state of war with Mexico in 1846. In popular history the Mexican War is often listed with the War of 1812 as a “declared” war. In fact, Congress’s authorization of the Mexican War tracked its authorization of the Algiers and Tripoli conflicts, not using the word “declare” but instead recognizing the existence of a state of war resulting from the other party’s acts. Prior to the war, President Polk (without Congress’s authorization) sent U.S. troops into territory claimed by both the United States and Mexico, whereupon Mexican forces attacked U.S. troops in the disputed territory. Polk then asked Congress to recognize a state of war created by Mexico, which Congress did. Leaving aside the much-debated constitutionality of Polk’s provocative deployment, for present purposes the key point is that Congress did not delegate war-initiation power to the President. As in the previous conflicts, Congress made the decision for war itself and authorized broad presidential discretion in the means of fighting it.

In sum, Congress’s treatment of formal war authorization in the early nineteenth century differed significantly from Congress’s modern authorizations. None of the four nineteenth-century acts delegated war-initiation authority. In each of them, Congress itself stated the existence of war without qualification. This contrasts with modern authorizations that, as discussed below, leave to the President the decisions when, whether, and (sometimes) against whom to begin hostilities. Early nineteenth-century practice regarding formal war authorizations thus affords little precedent for modern delegations of war-initiation power.

These four episodes do support broad congressional delegation of power over the conduct of war. But this should not be read as an endorsement of delegating congressional war-initiation power, because the President was likely understood to have independent war-waging authority once Congress recognized a state of war. To the extent Congress has concurrent authority to manage the conduct of war, the nineteenth-century authorizations signaled that Congress would not exercise that power and left the conduct of war to the President. As a result, early precedent for the delegation of war-initiation power must be sought elsewhere.

B. The Quasi-War

The naval war with France at the end of the eighteenth century, called the Quasi-War, is a frequently cited example of early post-ratification delegation. David Currie observed: "The bellicose legislation of the Fifth Congress was riddled with broad delegations of authority." As to war-initiation delegation, however, that is something of an overstatement.

The conflict opened in 1797 when France began seizing U.S. merchant ships as part of an effort to cut off trade with Britain. Congress’s response was initially limited. Consistent with President Adams’s policy of strengthening defenses while seeking peace, it appropriated money for coastal fortifications (with discretion to the President in choosing their location), authorized (but did not require) the President to equip and man three frigates (with very specific directions as to the treatment of the crews), and authorized (but did not require) the President to increase the strength of existing revenue cutters. In early 1798, Congress increased appropriations to these ends and authorized the President to raise an additional regiment of artillery and engineers. But mostly Congress rejected proposals for more aggressive measures from Federalist leaders and awaited results from a diplomatic mission sent by Adams.

The diplomatic mission failed, and once the outcome was known in mid-1798, Congress embraced more warlike measures in the form of delegations. Congress authorized the President to use the navy to seize French ships committing “depredations” on U.S. shipping or “hovering” on the U.S. coastline for that purpose. On the same day, it also approved a Federalist proposal to authorize the President to raise additional troops at his discretion (the so-called Provisional Army); however, at the insistence of Republican and moderate Federalist congressmen, the President’s authority was limited to situations in which a foreign power declared war or there was an actual or imminent invasion. In June, Congress prohibited U.S. ships from sailing to French ports and prohibited French ships from sailing to U.S. ports, with discretion to the President to waive the prohibition in some circumstances. Congress later that month authorized U.S. merchant ships to arm themselves and resist French attacks, with the President authorized to provide what we would now call rules of engagement and to suspend the law if France disavowed further hostilities.

In July 1798, Congress took its strongest step, authorizing the President to use the navy to attack French navy ships and privateers on the high seas and to commission U.S. privateers. Some congressional leaders discussed declaring war, but that was never formally proposed, nor was there specific direction to the President to expand the war (merely an authorization). This was the high point of Quasi-War delegation. Although the war continued into 1800 before a new diplomatic mission restored peace, Congress’s war-related legislation in subsequent years was largely confined to reenacting prior measures and making additional appropriations.

As delegations of war-making power, these measures are important but modest. Congress gave the President some discretionary authority in war-related matters. But the only direct delegations of the decision to use force were the two 1798 statutes authorizing attacks on French ships. Of these, the first (in May 1798) was purely defensive: the President could respond to French attacks or imminent attacks along the U.S. coast. One might have thought that the President had that power in any event, as part of the power (recognized by Madison at the Convention) to repel sudden attacks. Moreover, Congress likely would not have seen this as delegating much policy discretion as a practical matter, as there was no doubt at that time the President would use the force described. Nonetheless, at least formally, the statute conveyed discretion to respond to warlike measures in limited circumstances.

The July 1798 authorization was broader and somewhat more akin to modern war power delegations. It permitted—but did not require—the President to expand the conflict to the high seas and against French shipping and naval forces generally. And the case for the President having this power independently is weaker than for purely defensive measures. On its face, this was a material delegation. But Congress did not authorize the President to begin new hostilities—only to extend existing hostilities. Indeed, the July statute could be seen as lifting some restrictions of the previous statute, which implicitly constrained the President to defensive responses. And the July authorization was itself limited, allowing attacks on the high seas but not against French ports or other land facilities, for example in the French Caribbean colonies. Overall, it seems that Congress was trying to maintain tight control over the extent to which the conflict escalated into full-scale war, rather than transferring to the President substantial discretion over whether to escalate.

The related matter of the Provisional Army is noteworthy because Congress's control over raising a national army (including whether there would be one at all) was such a sensitive issue at the founding. In this case Congress delegated only limited power, a delegation that might have been viewed as constitutionally comparable to delegating war power. Some members of Congress expressed grave concerns over broad delegation, successfully narrowing the measure's proposed scope. The initial Federalist proposal, passed by the Senate and sent to the House in April 1798, authorized the President to raise the army at his discretion, if he found it required by the public safety. House Republicans objected, specifically in constitutional terms, that this unduly delegated congressional power to the President. Though the delegation involved raising armies rather than initiating war, the two were thought analogous; Representative Brent, for example, argued that "if a proposition was made to transfer to the President the right of declaring war in certain contingencies, the measure would at once appear so outrageous, that it would meet with immediate opposition." These objections resonated with enough Federalists that the proposal was modified to limit the President's discretion to specified circumstances of a declaration of war or actual or imminent invasion, and only during the next recess of Congress. Because that debate was fresh in members' minds from earlier in the 1798 session, the lack of delegation-based objections to the July force authorization suggests that members of Congress probably did not regard the July measure as a substantial war-initiation delegation.

To be sure, there were other delegations in the Quasi-War period that could be precedent for other types of modern delegations. But as to delegating war-initiation power, the Quasi-War affords only limited precedent. That is particularly significant because the Quasi-War was the only foreign conflict fought pursuant to delegated discretionary authority in the early post-ratification era (and indeed, as later sections show, the only one prior to the twentieth century).

C. Delegations Not Leading to Military Conflict

Perhaps the most interesting and least studied episodes of war power delegation in the post-ratification era are those in which proposed delegations were refused, or in which delegations were made but no conflict ensued. These are significant because they highlight optional war power delegations, in which the President is authorized to engage in hostilities but remains free not to act at all. We identified four such episodes, recounted below. They indicate that no clear consensus or consistent pattern regarding war power delegation existed in this period. Further, they provide little support for the proposition, discussed above, that formerly prerogative powers were understood to be broadly delegable.

1. The No-Transfer Act

In 1811, war with Britain was on the horizon. So was the United States’ acquisition of Florida. A year earlier, President Madison directed U.S. troops to take possession of West Florida (the coastal strip between the Mississippi River on the west and the Perdido River on the east), on the view that it was part of the Louisiana territory purchased from France in 1803. Spain, which claimed and nominally controlled West Florida, objected but lacked power to mount opposition. That left Spain in control of East Florida (east of the Perdido River) for the moment, but U.S. acquisition of East Florida seemed inevitable. Seeking to make the best of a bad situation, Spain undertook negotiations for a U.S. purchase of East Florida.

With the looming threat of war with Britain and Spain’s increasing weakness, U.S. leaders worried that Britain might seize East Florida first. On January 3, 1811, Madison asked Congress for authority to use force to secure U.S. interests in East Florida. Congress responded with a resolution declaring that “the United States cannot see, with indifference, any part of the Spanish Provinces adjoining the said States eastward of the River Perdido, pass from the hands of Spain into the hands of any other foreign Power.” Simultaneously, Congress approved the so-called No-Transfer Act, authorizing the President to use force in East Florida, either under an agreement with the “local authority” or in the event of “an attempt to occupy the said territory, or any part thereof, by any foreign government.” All of this was done in extraordinary secret sessions (presumably to keep Britain in the dark). Britain never made any moves to occupy East Florida, and following the War of 1812, the Monroe Administration concluded the Adams-Onis Treaty of 1819, under which, among other things, the United States purchased East Florida from Spain.

The significance of the No-Transfer Act's delegation of war-initiation powers is unclear. On one hand, the Act entailed a consequential transfer of power to use force from Congress to the President, made without recorded objection on that ground. Armed conflict with Britain was no small thing (as the country found a year later), and the decision to counter a British move in Florida with force carried potentially grave consequences. Unlike the Quasi-War authorizations—the most substantial prior delegations of war power—the No-Transfer Act was not a response to attacks or likely attacks on the United States or U.S. ships; it authorized the opening of new hostilities against a formidable power. On the other hand, the authorization, coupled with the resolution that the United States "cannot see, with indifference" any foreign seizure of East Florida, may have been meant to leave the President little discretion to fail to respond to a British move. The secret Act thus might be seen more as a limited declaration of war conditioned on the occurrence of a specific event than as a delegation. In that sense, it is not directly analogous to modern war-initiation delegations that leave it to the President to decide on war or not war.

2. Rebuffs of Jackson

As President, former General Andrew Jackson twice sought authority to use the U.S. military to press claims against Mexico and France. Both times Congress declined to enact Jackson’s requested authorizations.

By an 1831 treaty, France agreed to pay claims by U.S. shipowners arising from French seizures during the Napoleonic Wars. France failed to pay as required, and in 1834 Jackson asked Congress for authority to make armed reprisals against French property. Congress refused, with some speakers referring to the issue of delegation (although much of the discussion focused on the practical question of whether force was necessary). Representative Claiborne argued that the proposal would "be virtually conferring upon the President unconstitutional power—a power to declare war." The Senate Foreign Relations Committee Report on the matter, presented by Henry Clay, specifically objected to Jackson's request partly on delegation grounds. The President's supporters, while not defending delegations of war power, responded that reprisals, which were all that Jackson proposed, were different from war.

Similar events transpired with respect to Mexico in 1837. United States citizens pressed various claims for injuries and lost property, which Mexico declined to satisfy. Jackson proposed that he make further demands and that Congress enact legislation authorizing reprisals and other uses of force if the demands were refused. The Senate authorized the demands but not the reprisals or use of force, providing instead that the President should return to Congress for further authorization if Mexico did not respond satisfactorily. The House Committee on Foreign Affairs recommended a similar approach, but the full House failed to act before the end of the session. Describing the episode later that year, new President Martin Van Buren observed the “indisposition to vest a discretionary authority in the Executive to take redress . . . .” Congress refused to act on Van Buren’s renewed requests for authority against Mexico, and the matter was later settled by a treaty sending the claims to arbitration.

It is hard to know what to make of the failure of Jackson's initiatives. Congress declined the requests to authorize prospective uses of force. Some reference, usually by the President's political rivals, was made to constitutional limits on vesting the President with war-initiation power. Perhaps as importantly, those who responded did not claim a broad constitutional license to delegate war-initiation power (nor did they invoke the No-Transfer Act as precedent). But congressional objections likely arose as much from opposition to Jackson's warlike measures on the merits as from constitutional scruples.

3. The Maine Boundary

President Van Buren subsequently had more success obtaining a war power delegation outside the Mexico context (one may speculate that the quieter Van Buren seemed less worrisome to Congress than the bellicose Jackson). In the 1830s, the uncertain border between northern Maine and Canada became a substantial issue. An attempted settlement through arbitration failed during Jackson's administration, and Van Buren inherited the dispute. Professing commitment to a peaceful solution, Van Buren nonetheless asked Congress for authority to use military force in the disputed territory. Perhaps surprisingly, given Congress's rejection of Jackson's requests for military authorizations, Congress in 1839 authorized the President "to resist any attempt on the part of Great Britain, to enforce, by arms, her claim to exclusive jurisdiction over that part of the State of Maine which is in dispute . . ." by "employ[ing] the naval and military forces of the United States and such portions of the militia as he may deem it advisable to call into service."

The debates over this measure do not provide a clear picture of how Congress understood it. Some members of Congress specifically objected to delegating war-initiation power. Others thought the matter largely one of defense against invasion, perhaps in which the President already had constitutional and statutory power to respond. Ultimately the bill passed by wide margins.

This might at first seem a clear-cut case of substantial war power delegation. However, its constitutional significance may be discounted because it involved direct defense of territory disputed between Britain and the United States—and hence perhaps the President’s implied independent power to repel invasions—and it depended on the specific contingency of Britain using force in connection with that dispute. It nevertheless represents a counterpoint to earlier rebuffs of Jackson and a continuation—arguably an expansion—of the willingness to delegate in the No-Transfer Act. In particular, the Maine delegation is unique for the time in putting entirely in the President’s hands, as a practical matter, the decision whether or not to use force. As discussed, the No-Transfer Act (beginning with its title) was close to a direction to the President not to allow British seizure of East Florida. And in the Quasi-War delegation, Congress presumably understood and intended that President Adams would use naval force against France once authorized. The Maine delegation differed from those previous examples in that Congress probably preferred that military conflict not result. Congress would not have assumed that voting for delegation was a vote for war. Rather, circumstances indicated that Congress was passing to the President the decision whether to use force based on future circumstances. In this sense the episode—despite other aspects limiting its significance—can be seen as the first “modern” delegation of the decision whether to initiate war.

4. Buchanan’s Mixed Record

After the Maine dispute, the next major discussion of delegating war power occurred in the Buchanan Administration. Buchanan was somewhat more inclined to use force abroad than his immediate predecessors, but he also generally believed that the President lacked authority to initiate hostilities without congressional approval. Thus he made several requests for authority to use force in Mexico, Central America, and Paraguay, with mixed results.

Buchanan’s putative success arose after Paraguayan artillery fired on a U.S. ship, the Water Witch, on the Paraná River. At Buchanan’s request, Congress authorized the President, if Paraguay refused reparations, to “adopt such measures and use such force” as needed to induce Paraguay to give “just satisfaction” for the attack. Buchanan sent a naval force to the region, leading to a diplomatic settlement.

At first glance, the Water Witch incident may seem to be a major step in the development of war power delegation. Like the Maine delegation some twenty years earlier, it gave the President wide discretion, both on paper and in practice, to decide whether to launch military attacks. But unlike the Maine delegation, it did not address threats to U.S. territory or immediate U.S. strategic interests. It more closely resembled the authorizations proposed by Jackson and rejected by Congress in part on the argument that they were unconstitutional delegations. Like the Maine delegation but even more so, the Paraguay delegation might be thought akin to modern war-initiation delegations.

But other events complicate the episode as a precedent for emerging consensus on war power delegation. First, Buchanan’s proposed action also resembled earlier unilateral presidential uses of force responding to affronts to U.S. interests abroad. In a notable example, in the immediately preceding Pierce Administration, U.S. forces shelled the city of Greytown, Nicaragua, after perceived mistreatment of a U.S. diplomat. In light of this and other unilateral actions, some members of Congress may have thought congressional approval was not constitutionally required in the Water Witch incident and thus might not have regarded it as a consequential delegation. Moreover, the Paraguay delegation itself drew some sharp opposition, including on the ground that it was unconstitutional. And while opposition was overcome with respect to Paraguay, it prevailed against Buchanan’s more far-reaching proposals.

Buchanan had in mind multiple aggressive uses of military force in Latin America. He asked Congress for authorization “to employ the land and naval forces of the United States” to protect the isthmus of Panama. Similarly, he asked Congress for authority to use force to prevent closure of alternate routes across Nicaragua and the isthmus of Tehuantepec in Mexico. Buchanan argued:

The remedy for this state of things [disorder and threats to Americans crossing between the oceans] can only be supplied by Congress, since the Constitution has confided to that body alone the power to make war. Without the authority of Congress the Executive cannot lawfully direct any force, however near it may be to the scene of difficulty, to enter the territory of Mexico, Nicaragua, or New Granada . . . even though they may be violently assailed whilst passing in peaceful transit over the Tehuantepec, Nicaragua, or Panama routes . . . . In the present disturbed condition of Mexico and one or more of the other Republics south of us, no person can foresee what occurrences may take place . . . .

Buchanan also asked for authority to establish a military protectorate over parts of northern Mexico to defend the U.S. border, as well as authority to respond with force against Britain for interference with U.S. shipping.

Congress declined to act on all of these requests. How much this had to do with constitutional scruples is unclear; it may simply have been that a majority distrusted Buchanan’s motives. One scholar comments: “Congress was too jealous of the war-making power to heed the President’s requests, and Republican members in particular were too fearful of giving such authority to a president so sympathetic to the South’s desire for more slave territory.” But constitutional arguments were strongly, if perhaps conveniently, invoked. Senator Trumbull objected that Congress did not have “any authority to surrender the war-making power to the President . . .  He is not vested with it by the Constitution; and we have no right to divest ourselves of that power which the Constitution vests in us.” Buchanan responded that the requested authority “could in no sense be regarded as a transfer of the war-making power to the Executive, but only as an appropriate exercise of that power by the body to whom it exclusively belongs.” Invoking precedent, he added: “In [the Water Witch incident] and in other similar cases Congress have conferred upon the President power in advance to employ the Army and Navy upon the happening of contingent future events; and this most certainly is embraced within the power to declare war.”

Thus, Buchanan’s experiences point in different ways. Congress approved a modern-looking war power delegation in the Water Witch incident, over constitutional objections. But in multiple other cases Congress ignored Buchanan’s appeals for advance authority to initiate hostilities at his discretion. Constitutional objections to delegation featured prominently in these debates as well, though Congress often had other, more practical reasons to withhold authority.

In sum, the record of war-initiation delegation as to foreign enemies in the pre-Civil War period is thin, though not entirely barren. We count three material delegations in addition to the Quasi-War: the No-Transfer Act, the Maine boundary delegation, and the Water Witch delegation. But each delegation was expressly conditioned on a specific fact—a fact that might have triggered the President’s limited independent constitutional authority to act anyway—and was somewhat offset by other near-contemporaneous episodes in which Congress refused delegations, with some objections expressed on constitutional grounds.

D. Using Force against Native American Tribes, Piracy, and Insurrection

Three other areas, distinct from war-initiation delegations, are sufficiently related to merit discussion. First, Presidents directed hostilities throughout this period against Native American tribes on the western frontier, generally with Congress’s implicit approval (although not with specific authorization). Second, Congress authorized the navy to suppress piracy and the slave trade. Third, Congress authorized the President to use the army and militia to enforce federal laws and suppress insurrections, an authority most notably invoked by President Lincoln in the Civil War.

1. Frontier Conflicts

The United States conducted military operations against Native American tribes on the frontier throughout the post-ratification period. Tribes were generally treated as tantamount to foreign nations for treaty-making purposes—that is, tribal treaties were adopted with the Senate’s advice and consent—so by parallel reasoning, the Constitution’s war power provisions arguably should have applied to them as well. It is not entirely clear how early Congresses saw the relationship between the tribes and constitutional war power, but in any event, the frontier conflicts do not provide clear examples of war-initiation delegations. They followed a similar pattern. They were not directly declared or authorized by Congress (nor formally called war). Presidents often sought expansions of the military and additional funding on the basis of frontier conflicts, so Congress was well aware of them. But Congress appeared to assume the President had some independent power to conduct frontier conflicts—perhaps because they were internal and were (or were claimed to be) defensive in nature.

The conflict in the Ohio Valley immediately after the Constitution’s ratification is illustrative. President Washington inherited a violent northwest frontier, with large numbers of U.S. settlers moving west, provoking conflicts with Native inhabitants. In 1789, he asked Congress to reauthorize and expand the small army carried over from the Articles of Confederation, citing among other things the troubled northwest. Congress did so, and followed up with a further modest expansion in 1790. Washington dispatched an expedition under Josiah Harmar against the northwest tribes. When Harmar was defeated, Washington sent a larger expedition under Arthur St. Clair—which likewise met defeat. Congress authorized more troops, at Washington’s request, while conducting a contentious investigation into St. Clair’s defeat. The new troops, commanded by Anthony Wayne, gained a decisive victory in 1794.

The source of Washington’s authority to fight the northwest conflict is unclear. It is possible to see the early military statutes as broad delegations to the President to use the authorized troops as the President thought appropriate (including for offensive operations) on the frontier. The statutes did not say this, though. They simply authorized troops, with no direction on their use. It seems more likely that Congress understood the troops to be available to respond to ongoing hostilities of the northwest tribes, which had begun before Washington took office. That is, Congress may have seen the United States as already at war in the northwest, with the troop authorizations allowing Washington to use his independent power to fight an existing war but not delegating power to start new ones.

There is reason to think Washington took the latter view. While directing campaigns against the northwest tribes without express congressional authorization apart from the authorization of the army, Washington refused requests from local authorities to use troops against tribes in the southwestern territories, where only sporadic violence had occurred. Washington explained that offensive operations in the south needed specific congressional approval. Of course, Washington may simply have wanted to avoid southwestern conflicts while embroiled in a northwestern one. But his constitutional reservations fit well with the view that, in authorizing troops, Congress was not authorizing new theaters of hostilities and that the President had independent power or congressional approval to fight preexisting frontier wars but not to start new ones.

In any event, the Ohio Valley conflict seems a doubtful precedent for congressional delegation of war-initiation power. It is not clear that Congress saw itself delegating such power, as opposed to supplying troops and funds to a pre-existing and ongoing effort. The relevant statutes do not speak in terms of authorization, and modern scholars have drawn various conclusions from them.

Nineteenth-century frontier conflicts took a similar course, typically proceeding on the proposition that they were defensive wars or aspects of law enforcement. The 1819 Seminole War is an important example. President Monroe, without congressional authorization, directed Andrew Jackson to attack the Seminoles in Spanish Florida in response to Seminole raids into U.S. territory. During the campaign, Jackson attacked Spanish posts—which Monroe had not authorized. Jackson’s actions prompted fierce constitutional debate in Congress. But most participants in the debate conceded that no congressional authorization was needed for hostilities against the Seminoles because those operations responded to attacks; the debate focused on the propriety of attacking the Spanish (who arguably encouraged the Seminoles but had not themselves attacked the United States). This debate reinforces the more general impression that both the executive branch and Congress regarded the Native American conflicts (rightly or wrongly) as defensive and thus undertaken on independent presidential authority.

Congress's most important (and regrettable) action regarding the frontier conflicts in this period, the so-called Indian Removal Act of 1830, is notable for what it did not say. The Act authorized the President to enter into treaties with tribes to exchange land east of the Mississippi River for land in the unorganized western territories. It made no mention of military force; on its face it contemplated peaceful transfers. Of course, President Jackson expected forcible removal, and most congressmen likely did as well, but this assumption was not reflected in the statute. A range of conflicts with Native American tribes arose during implementation of the removal policy, but Jackson and his successors did not seek further congressional force authorizations.

Thus, like the earlier conflicts, the early nineteenth-century frontier conflicts do not supply a ready precedent for broad war power delegation. It does not appear that Congress saw continuing authorizations of troops as delegating to the President authority to start wars. Congress probably thought defensive wars (including offensive counterattacks) against the frontier tribes were constitutional, but this view likely rested on independent presidential power to respond to attacks, or perhaps implicit congressional approval to continue fighting preexisting conflicts, rather than on delegation of war-initiation power. At minimum, the frontier wars of the period do not provide clear examples of war-initiation delegations.

2. Piracy

Some authorities suggest that early Congresses delegated to the President discretion to use force against pirates. On closer examination, this suggestion is overstated.

Congress first addressed piracy in the 1790 Crimes Act, which provided punishments for various federal offenses, including piratical activities as well as treason, murder on federal property, and counterfeiting. As with the other crimes it encompassed, the Act did not expressly authorize presidential enforcement against pirates, presumably because members of Congress thought the President had independent enforcement power under Article II. Subsequent Presidents, notably Jefferson, used U.S. naval forces against pirates to enforce the 1790 Act, without recorded constitutional concerns.

In 1819, Congress passed an act specifically targeting piracy. Unlike the 1790 Act, it expressly authorized the President to use the navy to protect U.S. shipping and seize piratical ships. The point of the 1819 Act, which passed without material recorded debate, is not entirely clear. Piratical activity in the Caribbean and the Gulf of Mexico had surged with the breakdown of Spain’s authority over its American colonies. Under pressure from constituents, Congress may have felt a need to take visible action, perhaps to encourage greater presidential attention to the matter. Part of the 1819 Act also may have been designed to overrule the Supreme Court’s 1818 decision in United States v. Palmer, which held that the general language of the 1790 Act did not criminalize piratical attacks by non-citizens against non-U.S. ships. It seems unlikely, though, that members of Congress thought the Act was constitutionally necessary to give the President enforcement authority against pirates. The 1790 Crimes Act had no express use-of-force authorization. And, as discussed below, once Congress engaged in substantial debate on the matter, members appeared to agree that the President had independent enforcement power so long as his actions did not risk war with foreign nations.

The United States stepped up anti-piracy operations after the 1819 Act, with limited success. Pirates evaded U.S. forces by developing hidden bases in remote parts of coastal Cuba and Puerto Rico, where Spanish colonial authorities either could not or would not act against them. A frustrated President Monroe asked Congress in December 1822 for authority to build additional, lighter draft ships suitable for coastal operations. Supporters in Congress proposed a bill authorizing such construction “for the purpose of repressing piracy, and of affording effectual protection to the citizens and commerce of the United States in the Gulf of Mexico, and the seas and territories adjacent.” This language provoked the first substantial congressional debate on the matter, with Representative Eustis objecting to the bill as delegating war power because the apparent grant of authority to use force in adjacent territory might lead to war with Spain. Representative Fuller, who introduced the bill, responded that it was not intended to authorize pursuit of pirates on land, but added that the President likely had some independent pursuit power under the law of nations.

An amendment proposed by Representative Smyth to authorize land operations met sharp resistance. Much of the discussion turned on the extent to which the law of nations allowed pursuit of pirates on land, on which there was no consensus among the members. Representative Archer also argued that Smyth “proposed in effect to divest Congress and give to the Executive the power to make war.” Eventually Smyth withdrew his proposal, and the bill passed the House and later (without substantive debate) the Senate, becoming law upon President Monroe’s signature later that month.

After another two years of mixed results, Congress returned to the matter in December 1824 with a proposal, backed by President Monroe, to authorize land pursuit and blockade of ports in Cuba and Puerto Rico that sheltered pirates. The blockade authorization soundly failed in the Senate. While a range of practical concerns were expressed, Maryland Senator Samuel Smith also raised a delegation objection: "Shall we then, by sanctioning a section of this kind, put in the hands of the Executive the power of declaring war? — a power which we alone possess in Congress . . . . I am unwilling to grant a provisional power, that may lead us into war." A Senate motion also attempted to strike the provision authorizing land pursuit, with a number of Senators arguing that the authorization was unnecessary because the President already had this power under the law of nations. The Senate voted to retain the pursuit authorization, but the House deleted it, apparently on the grounds that it was unneeded. Congressman Forsythe, introducing the Senate bill on behalf of the House Committee on Foreign Affairs, said "[t]here did not exist any necessity for granting this provision of the bill, since the President has it already by the law of nations." The Senate acquiesced in the deletion; the enacted bill authorized only expenditures for the construction of ships, without authorization or direction as to their use.

These events cast considerable doubt on the idea that Congress delegated expansive power to the President regarding piracy. The 1790 Act made piracy a crime, and Presidents used their constitutional enforcement power to counter it in U.S. waters and on the high seas. These activities appear not to have inspired constitutional concerns. Although Congress passed the 1819 Act authorizing anti-piracy operations, it became hesitant as intensifying and inconclusive conflict suggested the need for operations in Spanish territory. Members appeared to think that some pursuit of pirates on land was allowed by the law of nations and thus fell within presidential enforcement power. But Congress resisted authorizing broader hostile operations that might provoke war with Spain, with some concerns expressed about unconstitutional delegation of war power. Modern suggestions that the nineteenth-century Congress delegated broad powers to use force against pirates thus seem mistaken or overstated.

3. Insurrections and Law Enforcement

In contrast to early concern about delegating war-initiation power, early Congresses seemed relatively (though not entirely) unconcerned about delegating authority to suppress domestic disturbances. The 1792 Militia Act conveyed broad discretion, after some debate over delegation. It gave the President authority to call the militia into federal service “whenever the United States shall be invaded, or be in imminent danger of invasion from any foreign nation or Indian tribe,” as well as “in case of an insurrection in any state, against the government thereof” and “whenever the laws of the United States shall be opposed, or the execution thereof obstructed, in any state, by combinations too powerful to be suppressed by the ordinary course of judicial proceedings, or by the powers vested in the marshals by this act.”

These were quite broad delegations, made without reference to any particular situation. In the House they prompted objections. “It was surely the duty of Congress,” one member said, “to define, with as much accuracy as possible, those situations which are to justify the execut[ive] in its interposition of a military force.” The House added amendments limiting power to suppress insurrections to situations where a state requested assistance, and limiting power to enforce federal laws to situations where a federal judge found the laws could not be enforced by ordinary means. In addition, the President could use only the militia of the affected state unless it was insufficient and Congress was not in session. The 1792 Act was also effective for only two years (barely lasting to its 1794 invocation by President Washington during the Whiskey Rebellion). But even with these limitations, the Act contained much more open-ended delegations than anything on the international front for many years to come.

A subsequent Militia Act in 1795 made the authorization permanent and dropped several of the restrictions. Congress followed up with the Insurrection Act in 1807, authorizing the President to use the regular army (as well as the militia) to suppress insurrections in situations where the President was authorized to use the militia. The 1807 Act's most famous invocation came during the Civil War, when President Lincoln rested his initial military response to Southern secession in part on his authority to suppress insurrection. As the Supreme Court put it in the Prize Cases in 1863, rejecting a challenge to Lincoln's actions:

The Constitution confers on the President the whole Executive power. He is bound to take care that the laws be faithfully executed. He is Commander-in-chief of the Army and Navy of the United States, and of the militia of the several States when called into the actual service of the United States. He has no power to initiate or declare a war either against a foreign nation or a domestic State. But by the Acts of Congress of February 28th, 1795, and 3d of March, 1807, he is authorized to called out the militia and use the military and naval forces of the United States in case of invasion by foreign nations, and to suppress insurrection against the government of a State or of the United States.

Compared to delegations of war-initiation power, these authorizations were quite broad, especially after 1795. They operated generally, not in connection with any particular uprising, and (again, especially after 1795) left it largely to the President’s discretion when using the military or militia for domestic purposes was appropriate. And as the Civil War demonstrated, they could authorize large-scale presidential uses of force.

Yet as with piracy, delegation of authority to suppress insurrection stands in a very different light from delegation of authority to start foreign wars. The President has the constitutional authority and obligation to enforce the law, as well as an implied power to repel sudden invasions; the Militia and Insurrection Acts gave him tools (the militia and military) to do so. The President has no corresponding constitutional power over war initiation; a delegation there would hand the President a power that belongs exclusively to Congress. Delegating power to use state militia forces might also be distinguished from delegating war power on a separate textual ground: unlike the Declare War Clause, which simply grants that power to Congress, Article I states that Congress has the power "[t]o provide for calling forth the Militia" for certain purposes, perhaps indicating that militia powers are more appropriately delegated.

E. Conclusion: Implications of the First Seventy Years

The early history of war power delegations is complex and resists easy conclusions. But several important ones may be ventured. First, it supplies surprisingly little precedent for modern broad delegation of war-initiation power. Most foreign conflicts of the time were fought pursuant to formal congressional recognition of a state of war—even relatively small-scale ones such as those against Tripoli and Algiers. The only foreign conflict fought by delegated authority was the 1798–1800 campaign against French ships on the high seas, but that was limited in important respects and occurred in the midst of ongoing low-level conflict. That record does not show war-initiation delegation to be unconstitutional, but it does show it to be unusual.

Second, in some now-obscure situations, delegations of war-initiation power began tentatively to take hold—first in the No-Transfer Act, then in the Maine boundary delegation, and finally in the Water Witch incident. So one cannot say the early period rejected war-initiation delegation. But these episodes are balanced by contentious debates over the Provisional Army and unsuccessful requests for delegated power to use force by Presidents Jackson and Buchanan, in which there was a recurring idea that the Constitution imposed limits on Congress’s delegation of its war powers. From the Republic’s birth, there has been an influential strain of thought that regards war powers as especially nondelegable. At minimum, this evidence should caution against a quick assumption that early constitutional practice supports setting aside or loosening general nondelegation principles when it comes to war-initiation power.

At the same time, early practice provides support for broad authorizations in areas where the President had some degree of independent constitutional power. Substantial delegations of war-waging (as opposed to war-initiating) authority were routine, accompanying all of Congress's declarations of war, consistent with the President's power as commander-in-chief to carry out wars once begun. Further, Congress provided broad authorizations in related areas, including using force against pirates and to suppress insurrections—areas in which the President's power to enforce law indicated substantial independent presidential authority.

III. WAR POWER DELEGATIONS FROM THE CIVIL WAR TO WORLD WAR II

This Part considers historical practice relating to war power delegations from 1865 to 1945. Though likely beyond the time relevant to the Constitution’s original meaning, practice during this period—a time in which the United States emerged globally as a great power—might contribute to the “historical gloss” on the constitutional regime of delegation.

Again, however, we find little from this period to support a constitutional practice of war-initiation delegation. Congress declared three wars, and authorized the President to direct them, but otherwise most uses of force during this time relied on claimed independent presidential authority, an increasingly common feature of U.S. foreign policy.

It was also during this period, however, that the Supreme Court issued its most significant decision on the nondelegation doctrine and foreign affairs. The Court’s 1936 decision in Curtiss-Wright rejected a challenge to delegation regarding certain arms exports and stated that the nondelegation doctrine applies less strictly in foreign relations than domestic affairs. Though not involving war powers, the decision’s broad language could be read—and we show in later Parts that it would be read by some—to apply in that area.

A. Declared Wars

From 1898 to 1945, the United States fought three formally declared wars. As with earlier major wars, Congress delegated to the President vast discretion over how to wage them, but the declarations did not give the President decision-making discretion over whether to wage them.

1. War with Spain: Congressional Direction to Use Force

By 1898, U.S. relations with Spain had been fraying for years, primarily over Cuba, a Spanish colony seeking its independence. United States investors in Cuba's agricultural industry also pressed for protection of their interests, and interventionist sentiments intensified when the battleship U.S.S. Maine mysteriously exploded in Havana harbor, where President McKinley had sent it to protect U.S. citizens and property.

On April 20, 1898, Congress passed—at McKinley’s request—a joint resolution calling for Spain to withdraw from Cuba and authorizing the President to intervene militarily to support Cuban independence. One remarkable feature of that force resolution was its imperative voice. It not only licensed the President to use force but instructed him to do so: “the President of the United States . . . hereby is . . . directed and empowered to use the entire land and naval forces of the United States, and to call into the actual service of the United States the militia of the several States, to such extent as may be necessary” to compel Spain to withdraw from Cuba. True, the resolution’s phrase “as may be necessary” could be read as giving the President discretion over how much and what type of force to use—or even over whether to use it at all. But unlike modern force authorizations giving the President an option to use force, this act obliged him to do so. Moreover, at the time that Congress directed the President to use force against Spain, the President had made clear his intention to do so.

The April 20 resolution prompted Spain to break off diplomatic relations. McKinley then imposed a naval blockade of Cuba, and Spain responded by declaring war. The President returned to Congress on April 25 requesting a war declaration. By then a legal formality, the declaration passed unanimously by voice votes that same day; the resolution backdated the war declaration by four days, to the date of Spain’s declaration. As in previous declared wars, Congress recognized a state of war rather than leaving the President discretion whether to do so.

2. World Wars I and II

Following German targeting of U.S. merchant ships in the Atlantic during World War I, as well as other hostile actions, President Woodrow Wilson asked Congress on April 2, 1917, to declare war against Germany. Within days Congress obliged by large majorities. Its joint resolution stipulated “[t]hat the state of war between the United States and the Imperial German Government which has thus been thrust upon the United States is hereby formally declared” and “authorized and directed”—echoing the imperative voice of the 1898 resolution—the President “to employ the entire naval and military forces of the United States and the resources of the Government to carry on war against the Imperial German Government.” Later that year, Congress declared war against Germany’s ally Austria-Hungary, after that government “committed repeated acts of war against” the United States. That war resolution’s operative language mirrored the Germany resolution. Both declarations granted immense discretion to the President over how to carry on the war, but they gave no option as to whether to engage in war.

World War II, the United States’ last formally declared war, entailed six separate congressional war declarations. These declarations—against Japan, Germany, Italy, Bulgaria, Hungary, and Rumania—used a common template. They recognized a state of war to exist and (like the 1898 and 1917 resolutions) “authorized and directed” the President to use force to defeat each enemy. The President’s delegated discretion was entirely about how to wage war, not whether to enter the war.

B. Force Authorizations Other than Declared Wars, 1865–1945

Perhaps surprisingly, the post-Civil War period saw few congressional force authorizations apart from declarations of war. Because this period coincided with the nation’s increasingly active and powerful position on the world stage, one might have expected more force authorizations. But as discussed below, there were only a few, and even these came with significant qualifications. Presidents fought no major foreign conflicts pursuant to delegated authority during this period, although independent presidential uses of force became more frequent, more sustained, and more consequential. With the notable exception of the 1914 intervention in Mexico, discussed below, Congress played little role in, and at times opposed, increasingly interventionist U.S. foreign policy.

1. The Late Nineteenth Century

No conflicts of any sort were fought pursuant to expressly delegated authority between the end of the Civil War and Congress’s declaration of war against Spain in 1898. That was not because Presidents were uninterested in using force (although President Cleveland told Congress that he would not pursue war with Spain over Cuba even if Congress declared it). While executive military unilateralism is more associated with the twentieth century, it had some roots in this earlier period. In general, though, the period prior to 1898 was marked by an absence of major foreign conflicts.

A prominent use of U.S. military force in the period was the 1893 landing of marines on Oahu in connection with the overthrow of Hawaii’s native ruler, Queen Lili’uokalani, by private American interests led by Sanford Dole (who became Hawaii’s head of government). President Harrison apparently did not authorize the landing in advance (though he approved it afterward), and it is unclear whether it played an important role in Dole’s success (Harrison denied that it did). Congress did not authorize this use of force, though neither did it object as a body.

United States Presidents (or cabinet secretaries) had more direct involvement in several other low-level deployments or uses of force, including by the Grant Administration in the Dominican Republic, the Hayes Administration in Mexico, the Cleveland Administration in Haiti, and the Harrison Administration in Brazil. None of these incidents led to significant hostilities, but they marked a trend of presidential unilateralism that intensified in subsequent years. Congress did not directly approve any of these operations.

Three incidents bordering on delegation merit brief further discussion. First, during the Hayes Administration, Congress passed a bill authorizing the President to use measures “short of war” in a dispute with Britain over an imprisoned U.S. citizen. Apparently nothing came of the authorization, and presumably (in keeping with the “short of war” limitation) Congress did not intend to authorize significant hostilities against a major power over a minor matter.

Second, during the late 1880s, tensions arose with Germany over the Samoan islands, where both countries had interests. President Cleveland sent naval ships to Samoa to protect U.S. interests and then “submitted [the matter] to the wider discretion conferred by the Constitution upon the legislative branch of the Government.” Congress approved an appropriation to continue the naval deployment without directly addressing the use of force. It is unclear whether Congress regarded this as an authorization to use force if Germany attempted a takeover of the islands; ultimately, no open conflict with Germany occurred.

Finally, in 1891, after street violence killed two U.S. sailors and injured others in Valparaiso, Chile, diplomatic tension escalated. President Harrison issued an ultimatum to the Chilean government and began preparations for war. However, he also submitted the matter to Congress asking for “such action as may be deemed appropriate.” It is unclear whether Harrison was asking Congress for a declaration of war (at least one member of Congress read his message that way) or whether he was asking for delegated authority. It is also unclear whether Harrison would have taken unilateral action if Chile rejected the ultimatum and Congress failed to authorize force. Chile defused the matter by meeting Harrison’s demands, and Congress took no action.

These three incidents are the closest Congress came to delegating war power during the period, and they fall far short of material delegations. As to Britain, Congress expressly disclaimed intent to delegate war power; in Samoa, it is unclear what level of force (if any) Congress meant to delegate; and the Chile episode can as easily be read as a request for a declaration of war as for a delegation (and, in any event, no congressional action followed). This period, like the preceding one, provides little clear practice or indication of consensus on war power delegation.

2. The Twentieth Century before World War II

President McKinley opened the new century by sending U.S. forces to China in 1900 to aid other Western governments in suppressing the Boxer Rebellion. Thereafter, presidential uses of force mounted, including Theodore Roosevelt’s support of Panama’s independence from Colombia (setting up U.S. control of the route of the prospective canal) and substantial interventions, sometimes involving commitments of ground troops spanning multiple presidencies, in the Dominican Republic, Haiti, Cuba, and Nicaragua.

One should not overstate the rise of presidential uses of force. All major foreign conflicts in this period were declared by Congress. Though some presidential uses of force were quite consequential, none involved substantial commitments of troops, extended hostilities, or significant U.S. casualties. They were not clearly “wars” in the constitutional sense, and were not regarded as wars by the political branches or in popular description. Congress was generally aware of these activities, sometimes conducting after-the-fact inquiries into them, and continued to authorize the armed forces used for them, a pattern that the executive branch later invoked (and still invokes) to argue that Congress tacitly acknowledged the President’s independent constitutional power to conduct them. With Presidents less inclined to seek congressional authorization for low- and medium-level uses of force, Congress had few opportunities even to debate delegations.

Only one explicit congressional force authorization occurred in this period, though its significance is uncertain. It came with regard to the situation in Mexico in 1914.

Earlier, in 1910–1911, a popular uprising overthrew the longstanding dictatorial regime of Porfirio Díaz, bringing to power a democratically elected but weak government under Francisco Madero. During the unrest, President Taft considered the need to intervene to protect U.S. investments, but left the question to Congress, reporting that he had troops “in sufficient number where, if Congress shall direct that they shall enter Mexico to save American lives and property, an effective movement may be promptly made.” Congress did not act.

Taft’s successor, Wilson, took a more aggressive stance. In the closing months of the Taft Administration, General Victoriano Huerta seized power from Madero, plunging Mexico into a bloody multi-sided civil war. Wilson refused to accept Huerta’s legitimacy and in 1914 used a minor incident to justify a substantial intervention. Telling Congress that Huerta had insulted U.S. forces by refusing a 21-gun salute, Wilson asked for authority to use force:

No doubt I could do what is necessary in the circumstances to enforce respect for our Government without recourse to the Congress, and yet not exceed my constitutional powers as President; but I do not wish to act in a manner possibly of so grave consequence except in close conference and cooperation with both the Senate and House. I, therefore, come to ask your approval that I should use the armed forces of the United States . . . .

Congress obliged with a joint resolution declaring that “the President is justified in the employment of the armed forces of the United States to enforce his demand for unequivocal amends for certain affronts and indignities committed against the United States.” The resolution included language (added to the House bill by the Senate) that the United States “disclaims any hostility to the Mexican people or any purpose to make war upon Mexico.”

The language—that the President “is justified” rather than “is authorized”—suggests that Congress may have accepted Wilson’s view that the President had independent authority to act. Moreover, Wilson did not wait for Congress; while the Senate debated, Wilson ordered bombardment and seizure of the port of Veracruz, where U.S. forces remained for seven months until Huerta was overthrown.

Thus the only material force authorization (apart from war declarations) in this period was more likely a recognition of presidential power than a delegation, and in any event it disclaimed intent to authorize war; the ensuing hostilities, though perhaps consequential, were small in scale. Wilson’s presidency, like those before and after, was more significant for its growing presidential unilateralism than for delegation.

C. Curtiss-Wright and War Power Delegation

During this same era, the Supreme Court’s seminal 1936 opinion in Curtiss-Wright drew a distinction between foreign affairs delegation and domestic affairs delegation, stressing that the Constitution permits Congress greater latitude to delegate foreign affairs decision-making to the President. That case arose from a 1934 joint resolution authorizing the President to proclaim an arms embargo against Paraguay and Bolivia if he found that doing so would contribute to peace in their ongoing war. “[C]ongressional legislation which is to be made effective through negotiation and inquiry within the international field,” wrote Justice Sutherland, “must often accord to the President a degree of discretion and freedom from statutory restriction which would not be admissible were domestic affairs alone involved.”

A leading justification the Court gave was functional—the President’s institutional advantages in agility and information—but the opinion also emphasized historical practice:

Practically every volume of the United States Statutes contains one or more acts or joint resolutions of Congress authorizing action by the President in respect of subjects affecting foreign relations, which either leave the exercise of the power to his unrestricted judgment, or provide a standard far more general than that which has always been considered requisite with regard to domestic affairs.

Curtiss-Wright’s implications for war power delegations are uncertain. War-initiation power of course may be thought of as a prime example of foreign affairs powers, and the Court’s invocation of the President’s institutional advantages in foreign affairs may seem particularly applicable to it. But Curtiss-Wright was not itself about U.S. war powers, only the prohibition of arms sales. Further, as our review of the historical record thus far shows, the Court’s argument from historical practice lacked support as applied to war-initiation, which (unlike some other aspects of foreign affairs) had not previously been a common subject of delegation. Nonetheless, as the following Part shows, Curtiss-Wright—especially its functional and historical claims—played a role in justifying expanded war power delegations in subsequent years.

IV. THE COLD WAR AND BEYOND

This Part shows that it was in the early Cold War period—when the United States became a superpower, with large standing military forces deployed around the world—that the modern practice of war power delegations, through legislative force authorizations, took hold. A watershed moment was a 1955 force resolution that, notably, the President never invoked.

It was also in that period, however, that Presidents asserted much broader unilateral powers to use military force, and Congress largely (if tacitly and dividedly) acquiesced. To those who viewed the President’s unilateral powers as wide even without legislative authorization, force resolutions would not have posed nondelegation issues. And to those opposing that view, the nondelegation issue probably seemed secondary to reclaiming Congress’s exclusive powers.

A. Collective Security and Delegation: The UN Participation Act

From World War II’s ashes, the victorious powers created the United Nations (“UN”), with a Security Council charged with maintaining peace and security, and empowered to employ military force to do so. In subsequent years, as the East-West Cold War quickly developed, the United States embraced a network of security commitments—some formal defense treaties, some informal pledges—around the world, aimed especially at stemming Communist aggression. To the architects of these arrangements, it was important that the United States be able to react quickly to crises and to assure foreign partners and adversaries of that ability. But a constitutional system of exclusive congressional prerogative to decide on war was designed to move slowly. Thus, security imperatives encouraged both more aggressive claims of independent presidential power and wider delegation of war power by Congress.

To enable the United States to participate effectively in the UN, Congress enacted the UN Participation Act (“UNPA”) in December 1945. That statute provided that the chief U.S. diplomat at the UN would act at the President’s direction. It also contained a broad authorization to use force that remains on the books, but has never been used.

Specifically, section 6 authorized the President to negotiate agreements with the Security Council, pursuant to UN Charter Article 43, to make U.S. military forces available for maintaining peace and security. Section 6 made Article 43 agreements “subject to the approval of the Congress,” so that Congress retained responsibility over “the numbers and types of armed forces, their degree of readiness and general location, and the nature of facilities and assistance . . . to be made available to the [Council].” But the President did not need to return to Congress before providing these forces to the Council. Thus, if Congress approved Article 43 agreements in advance, the President could send forces into UN-approved armed conflicts as they developed. This statutory framework specified no geography. It specified no enemy. It specified no particular threat or type of threat.

The UNPA’s vast war power delegation was never activated because the idea that member states would place military forces at the Council’s disposal was stillborn. Cold War geopolitics made it impossible, given that the United States and the Soviet Union each had a veto on Council decisions. No Article 43 agreements were ever concluded. When the Charter and the UNPA were adopted, however, Article 43—and hence section 6 of the UNPA—was understood as a main way the Council would pursue its mandate to preserve international peace and security. The United States planned to carry it out and expected other members to do the same.

The UNPA generated some congressional pushback on nondelegation grounds, but not much. To some critics, the arrangement was a double-delegation: it delegated decisions on war to an international organization, the Security Council, and it delegated decisions about U.S. participation in that body to the President. Senator Burton Wheeler, a prominent isolationist, was foremost among the objectors and among seven senators who voted against the UNPA. Wheeler noted “that there is no mention in the Constitution of any power of Congress to delegate its [Declare War] authority to the President and for him in turn to authorize his appointee to an international organization to vote to put down aggression in foreign countries.”

In recommending passage, the Senate and House foreign relations committees stated that “[t]here exist several well-recognized and long-standing precedents for the delegation to the President of powers of this general nature.” Tellingly—and consistent with our reading that the historical record to this point is quite thin—they cited congressional delegations regarding international commerce in the early Republic and, as to armed force specifically, only statutes from the Quasi-War with France. They also cited Curtiss-Wright for support.

The muted congressional concerns about the UNPA’s delegation might be explained on several grounds. Congress strongly supported the Charter—the Senate voted 89-2 for ratification—and many members understood that its collective security system required the U.S. military to back up Security Council mandates. Additionally, political leaders and lawyers may have viewed UN-backed emergency interventions, sometimes called at the time “police actions,” as distinct from inter-state war, such that legislating discretionary authority to participate in them did not delegate war-initiation power. One lesson of World War II was that early international military action might prevent major war. If used to prevent wide-reaching war, then (so the logic went) an international police action did not implicate the Constitution’s Declare War Clause, at least not in the same way. A strong current of thought within Congress held that the President could engage in limited police actions unilaterally but required congressional assent for full war.

This latter view of presidential war powers was implemented five years later, when North Korea invaded the South and President Truman intervened militarily, without express congressional authorization, in what became the three-year Korean War. Truman called the move a police action, citing UN approval. Though the Korean War did not involve delegation, it marks an important moment in background constitutional practice. The issue of war-initiation delegation assumes that Congress’s war-initiation power is largely exclusive (perhaps subject to narrow exceptions). Although there were precursors, the Korean War was a high-water mark in presidential assertions of unilateral constitutional power to launch large-scale military interventions. Congressional reactions were mixed, but acceptance of such unilateralism as proper also peaked among a contingent of legislators. The Cold War’s stakes, the advent of nuclear weapons, a general sense of permanent military emergency, and extensive overseas American military commitments and troop deployments all contributed to this shift in thinking.

Alongside these geopolitical and security developments, the postwar period marked the virtual obsolescence of formal war declarations, as a matter of both international law and U.S. domestic law. The UN Charter’s outlawing of force except in self-defense or when authorized by the UN Security Council contributed to that discontinuance. Beyond legal technicalities, the widespread public view of war as a moral catastrophe also cast old-fashioned war declarations as outdated. Without such clear markers, the lines around states of war—and hence war-initiation—became even blurrier.

B. Cold War Delegations

Many of the contextual factors—including perceptions of vital stakes in Cold War security crises around the world—that contributed to broader assertions of presidential powers to use force also set the stage for the broadest and potentially most consequential delegations of war power to that point in American history. The first ones, in the Eisenhower years, were never invoked. The last one of this critical early-Cold War period, in the Johnson years, was a basis for one of the United States’ costliest wars. These force authorizations entrenched the modern practice of broad war-initiation delegations.

1. A Delegation Turning Point: Eisenhower’s Force Resolutions

The post-World War II shift in thinking about presidential war powers is important to understanding two extraordinary congressional war power delegations during the Eisenhower Administration. Eisenhower rejected broad presidential unilateralism, generally believing only Congress could authorize major U.S. conflicts, but in a reversal of typical positions, many in Congress regarded the President’s unilateral war powers as vast.

Eisenhower’s security strategy emphasized military commitments to overseas allies to offset threats posed by the Soviet Union and China. It also emphasized taming runaway defense spending. To reconcile these seemingly conflicting tenets, Eisenhower relied on the threat of massive retaliation—including with nuclear weapons—against aggression. This approach encountered a major test in 1954–1955, when Communist China shelled tiny coastal islands under the control of U.S.-aligned Nationalist China, based on the island of Formosa. In late January 1955, Eisenhower asked Congress for authorization to use force to assure Formosa’s security. Days later, Congress obliged by nearly unanimous votes in both houses, resolving that:

[The] President . . . is authorized to employ the Armed Forces of the United States as he deems necessary for the specific purpose of securing and protecting Formosa and the Pescadores against armed attack, this authority to include the securing and protection of such related positions and territories of that area now in friendly hands and the taking of such other measures as he judges to be required or appropriate in assuring the defense of Formosa and the Pescadores.

This resolution shall expire when the President shall determine that the peace and security of the area is reasonably assured by international conditions created by action of the United Nations or otherwise, and shall so report to the Congress.

As tensions simmered, Eisenhower signaled the possibility of major military action—even publicly referencing nuclear options. But all sides soon stepped back from the brink. Several years later, shelling and skirmishing between Communist and Nationalist China resumed, but the conflict did not escalate.

The 1955 force resolution gave enormous discretion to the President. It provided advance authorization to initiate military conflict—with the understanding that it might include nuclear escalation—to protect a distant ally. It specified no target or enemy, though Communist China was obviously the intended one. Multiple times it emphasized the President’s role as sole judge of necessity. And its duration was subject to presidential judgment that the region was secure. Congress repealed it some twenty years later, and it probably would have stayed on the books much longer had the United States not reached a diplomatic détente with Communist China.

Despite this open-endedness, the nondelegation question was peripheral in congressional debates. Senator Wayne Morse, a harsh critic of Eisenhower with deep reservations about U.S. commitments to defend Formosa, was one of the few legislators to raise this issue. He objected that the resolution amounted to an unconstitutional “predated declaration of war.” According to Morse:

I respectfully submit that we have no right under our oaths of office to delegate that great constitutional obligation of Congress. . . . In my judgement, we cannot do it constitutionally. . . . [W]e have no constitutional right to authorize any President to exercise his discretion in determining whether or not he should commit an act of war . . . .

But Morse was an outlier. Eisenhower received more pushback from Congress on the grounds that its authorization was unnecessary. When Eisenhower consulted congressional leaders before seeking the force resolution, House Speaker Sam Rayburn “said that the President had all the powers he needed to deal with the situation,” and Rayburn even believed “that a joint resolution at this particular moment would be unwise because the President would be saying in effect that he did not have the power to act instantly.”

Modern Presidents have usually requested force authorizations after they have already initiated force or formed concrete plans to do so. But an important aspect of the Formosa resolution is that it was never invoked. Eisenhower did not launch strikes, even when Communist China’s shelling of Chinese Nationalist forces later resumed. The authorization’s purpose was more about signaling than warfighting. Eisenhower’s strategy was deterrence—so China was a key audience—and he expected war power delegation to bolster the credibility of his threats.

For similar reasons, two years later, Congress passed—at Eisenhower’s urging—one of the broadest war delegations in American history. The 1957 act endorsed whatever force the President deemed necessary to prevent Communist aggression anywhere in the Middle East. It had no fixed expiration date; in fact, it remains on the books today. Like the Formosa resolution, it was primarily about signaling rather than warfighting and has never been invoked.

As background, Eisenhower viewed the situation in the Middle East in 1956 as an emergency. The Suez crisis had discredited European allies’ influence there, and the administration feared the Soviet Union would fill the vacuum absent a strong U.S. commitment. In January 1957, Eisenhower requested congressional support for military and economic aid for Middle East nations and sought authority to use military force to protect them. In a four-hour White House meeting with congressional leadership on January 1, 1957, the President emphasized that a force resolution would bolster deterrence and reassure allies:

[Eisenhower] added that should there be a Soviet attack in that area he could see no alternative but that the United States move in immediately to stop it. . . . He cited his belief that the United States must put the entire world on notice that we are ready to move instantly if necessary. He reaffirmed his regard for constitutional procedures but pointed out that modern war might be a matter of hours only.

Two months later, Congress passed legislation endorsing the military and economic aid and included the following provision:

[T]he United States regards as vital to the national interest and world peace the preservation of the independence and integrity of the nations of the Middle East. To this end, if the President determines the necessity thereof, the United States is prepared to use armed forces to assist any such nation or group of such nations requesting assistance against armed aggression from any country controlled by international communism.

The resolution provided that it would expire when the President determined that the “peace and security of the nations in the general area of the Middle East” was “reasonably assured” or if Congress revoked it with a concurrent resolution.

Unlike the Formosa resolution, which Congress passed quickly and overwhelmingly, the Middle East resolution prompted major debate. Some members supported the proposal, some thought it was dangerously—and possibly unconstitutionally—open-ended, and some thought it was dangerous and possibly unconstitutional in the other direction, by implying that the President lacked unilateral power to respond to emergencies.

A number of senators and representatives specifically objected that it unconstitutionally delegated Congress’s war powers. Senator J. William Fulbright, for instance, argued that the delegation overturned legislative checks—though without clearly saying whether his objection was constitutional or a matter of policy:

It asks for a blank grant of power over our funds and Armed Forces, to be used in a blank way, for a blank length of time, under blank conditions, with respect to blank nations, in a blank area. We are asked to sign this blank check in perpetuity or at the pleasure of the President––any President. Who will fill in all these blanks? The resolution says that the President, whoever he may be at the time, shall do it.

Other legislators believed that the President’s unilateral powers to use force were vast and feared that legislative authorization would undermine that position.

In part to paper over these disagreements, the resolution avoided the term “authorize,” instead adopting a statement approving a policy of force. The Senate Report emphasized that the language had “the virtue of remaining silent” on constitutional allocations of war powers. The House Report added that “the resolution does not delegate or diminish in any way the power and authority of the Congress of the United States to declare war, and the language used in the resolution does not do so.” Given that Eisenhower believed congressional approval was constitutionally required to start wars, however, he must have read the resolution as a delegation—even if not technically styled as such.

Taken together, the congressional force resolutions adopted at Eisenhower’s request represented major steps in the practice of war power delegation. They responded to a perceived strategic imperative to give the President discretion to respond immediately to threats against foreign partners. And nondelegation concerns were muffled or balanced by a rising sense among political leaders and many constitutional lawyers—though, ironically, not Eisenhower himself—that the President possessed such discretion even without congressional approval.

2. Two Cuba Crises: One Covert, One Nuclear

In the years after the Middle East resolution, Cuba was the epicenter of two major Cold War crises. Both situations involved congressional action that might be seen as war power delegations, though neither presented the issue squarely. One concerned the postwar institutionalization of covert paramilitary operations by the Central Intelligence Agency (“CIA”); the other concerned a congressional resolution on Cuba policy.

Congress established the CIA in 1947 and authorized it to conduct various intelligence activities. The statutes creating the CIA were ambiguous as to whether they authorized paramilitary operations, including training, advising, and supporting proxy forces against foreign governments. Under Eisenhower, the CIA engaged in clandestine operations against governments of, for example, Iran and Guatemala (both leading to overthrows), and Congress continued to fund the CIA. This raises the questions whether Congress had implicitly delegated broad discretion to the President to engage in such operations and whether that delegation included war-initiation power. The answers are unclear because the legislative basis was ambiguous and neither branch seemed to regard such operations as constitutionally equivalent to war or overt military intervention.

The CIA paramilitary operation that most resembled an armed invasion was the 1961 Bay of Pigs fiasco, which highlighted those ambiguities. Though the operation was originally conceived under Eisenhower, President Kennedy in 1961 implemented plans for about 1,400 U.S.-trained and -armed Cuban exiles to overthrow Fidel Castro’s regime. After landing at the island’s Bay of Pigs, the invaders were routed by government forces. Little is publicly known about internal legal discussions behind the operation, but afterwards the Justice Department produced a memorandum characterizing such activities as exercises of the President’s independent foreign relations powers. That document compared covert paramilitary operations to war powers, but seemed to treat them as distinct. It also argued that Congress’s continued funding of such activities represented tacit congressional approval.

Since then, Congress has legislated procedural and notification requirements for covert activities. It remains unclear, however, whether either branch regards the laws governing such activities as delegations, regulations of inherent presidential authority, or both—or whether either regards covert paramilitary activities as exercises of war powers or a separate category of foreign relations powers.

In 1962, Cuba was again the locus of a Cold War crisis, one that produced arguably one of the most dangerous moments in world history. When U.S. intelligence discovered Soviet nuclear missiles on the island, Kennedy ordered a blockade—calling it a “quarantine”—and considered other military actions, including air strikes. Although the episode is often considered an exercise of unilateral presidential power, a congressional joint resolution resembling a war power delegation operated in the background.

Congress passed that Joint Resolution with overwhelming support on October 3, 1962, a few weeks before the missile crisis. It stated that “the United States is determined,” among other things:

to prevent by whatever means may be necessary, including the use of arms, the Marxist-Leninist regime in Cuba from extending, by force or the threat of force, its aggressive or subversive activities to any part of this hemisphere;

to prevent in Cuba the creation or use of an externally supported military capability endangering the security of the United States . . . .

The resolution did not expressly authorize presidential action and is not generally regarded as a force authorization. It instead declared a policy, implying strongly that the United States was willing to use force in broad circumstances. And the Cuban Missile Crisis is usually thought of as a momentous instance of executive unilateralism.

Nonetheless, the resolution’s language resembles that of the 1957 Middle East resolution discussed above, which generally is regarded as a force authorization. And although the Kennedy Administration emphasized in internal deliberations the President’s Article II authority to act, it also cited this resolution for support, without clearly stating whether that support was legally (or merely politically) significant. The record is ambiguous as to whether members of Congress regarded this as a force authorization.

In sum, around the same time Congress was enacting broad use of force delegations regarding Formosa and the Middle East, it was taking other actions that, although not formal delegations of war power, shared common attributes. One reason their status as delegations remains ambiguous is that the executive branch simultaneously asserted (and Congress generally accepted) broad unilateral presidential war power. And, again, these episodes took place in the Cold War context of constant East-West hostilities and permanent U.S. military presence worldwide, which were further blurring the line between war and peace, or between war and military actions short of war.

3. Vietnam, War Powers Reform, and Delegation

In contrast to the Formosa and Middle East resolutions, Congress passed the 1964 Gulf of Tonkin Resolution with the clear expectation that President Lyndon Johnson would use force in Vietnam—even if it was not at all clear that the conflict would become so protracted and costly. Indeed, by the time Congress enacted this resolution, the United States was already deeply involved militarily.

Following an alleged North Vietnamese attack on American naval vessels, Johnson asked Congress for a broad force authorization. Days later and nearly unanimously, Congress provided:

That the Congress approves and supports the determination of the President, as Commander in Chief, to take all necessary measures to repel any armed attack against the forces of the United States and to prevent further aggression . . . . Consonant with the [Constitution and UN Charter] and in accordance with its obligations under the Southeast Asia Collective Defense Treaty, the United States is . . . prepared, as the President determines, to take all necessary steps, including the use of armed force, to assist any member or protocol state of the Southeast Asia Collective Defense Treaty requesting assistance in defense of its freedom . . . . This resolution shall expire when the President shall determine that the peace and security of the area is reasonably assured by international conditions created by action of the United Nations or otherwise, except that it may be terminated earlier by concurrent resolution of the Congress.

This language gave the President broad discretion in extent of force (“all necessary measures” and “all necessary steps”), in purpose (“to prevent further aggression”), in geography (“southeast Asia”), and in time (until “the President shall determine” that peace and security is restored). Over the next decade, Presidents used it—in addition to assertions of unilateral executive power—to justify combat involving hundreds of thousands of troops, not just in Vietnam but also in neighboring countries.

As in earlier post-war episodes, Senator Morse was a lonely voice objecting on nondelegation grounds. Morse labeled the resolution a “predated declaration of war, in clear violation of article I, section 8 of the Constitution, which vests the power to declare war in the Congress, and not in the President.” “In effect,” he asserted, “this joint resolution constitutes an amendment of article I, section 8, of the Constitution, in that it would give the President, in practice and effect, the power to make war in the absence of a declaration of war.” The resolution’s supporters generally disregarded the nondelegation issue—sometimes referring to the 1955 and 1957 resolutions as precedent for authorizing force in broad terms. A few congressional backers of the resolution explicitly endorsed delegating war power to the President.

Although the nondelegation issue received almost no attention when the resolution was adopted, it became more controversial as the conflict became a quagmire and the Johnson and Nixon administrations expanded it. In some court cases challenging the legality of the Vietnam War, litigants argued that Congress had invalidly delegated its war powers without itself declaring war, but no courts directly adjudicated these claims. In a 1971 speech on the legal basis for the war, then-Assistant Attorney General for the Office of Legal Counsel William Rehnquist felt obliged to address the issue. Rehnquist argued from historical examples (though he cited none between the Quasi-War and the 1950s Eisenhower resolutions) that “both Congress and the President have made it clear that it is the substance of congressional authorization, and not the form which that authorization takes, which determines the extent to which Congress has exercised its portion of the war power.” Brushing aside objections of “unlawful delegation of powers,” Rehnquist noted that Curtiss-Wright demonstrated that the “principle [of unlawful delegation of powers] does not obtain in the field of external affairs.” Thus, Rehnquist concluded, “[t]he notion that an advance authorization by Congress of military operations is some sort of an invalid delegation of congressional war power is untenable in the light of the decided cases.”

This notion—that Congress’s advance authorization of military operations was an invalid delegation—surfaced often in war powers reform debates at that time, including legislative discussions that culminated in the 1973 War Powers Resolution. That resolution (which is still on the books) among other things required the President to withdraw forces from hostilities within sixty days unless Congress authorized their use. In the legislative discussions leading to that act, some critics argued that the Gulf of Tonkin Resolution had been an unconstitutional delegation, while some critics of the War Powers Resolution itself argued that allowing the President sixty days of unilateral action was also an unconstitutional delegation. Senator Eagleton, for example, who initially supported the War Powers Resolution, voted against the final version because it delegated “a predated declaration of war to the President and any other President of the United States, courtesy of the U.S. Congress.” “That is not,” he argued, “what the Constitution of the United States envisaged when we were given the authority to declare war. We were to decide ab initio, at the outset, and not post facto.” Congressional defenders responded, echoing Rehnquist’s arguments based on Curtiss-Wright, that even if such a resolution was a delegation, it was a valid exercise of congressional power.

The nondelegation objection to open-ended force authorizations, including the Gulf of Tonkin Resolution, was pressed at that time by prominent constitutional scholars. In a 1972 article styled Requiem for Vietnam, Professor William Van Alstyne wrote that “it seems to me clearly the case that the exclusive responsibility of Congress to resolve the necessity and appropriateness of war as an instrument of national policy at any given time is uniquely not delegable at all.” In extensive legislative testimony, Professor Alexander Bickel argued that absent detailed standards, Congress could not delegate to the President its own war power, “despite United States v. Curtiss-Wright Export Corporation, which was really quite a limited case.” Curtiss-Wright’s statements about independent executive power were “largely dicta,” Bickel asserted, and the case was not about “powers to go to war, or to use the armed forces without restriction.” When asked whether he challenged the Gulf of Tonkin Resolution as an unconstitutional delegation, Bickel replied, “Oh, yes.” The Lawyers Committee on American Policy Towards Vietnam took a similar position.

Other prominent legal voices—including Eugene Rostow, John Norton Moore, and former Supreme Court Justice Arthur Goldberg—endorsed the constitutionality of Congress delegating authority to the President to use force. Rostow rejected the arguments of Bickel and others “that, save for minor exceptions, hostilities can be authorized only by Congressional action at the time they begin [rather than in advance], and then by delegations narrowly limited in scope,” finding this position too impractical to be the constitutional rule and arguing that the Gulf of Tonkin Resolution was sufficiently specific. In his subsequent book about the Vietnam War and the Constitution, John Hart Ely noted that opposition to the conflict generated efforts by scholars to press nondelegation objections against the Gulf of Tonkin Resolution and other broad force authorizations, but he sided with the Resolution’s defenders: “The bottom line must . . . be that the Tonkin Gulf Resolution could not have been held at the time, and cannot now responsibly be said, to violate the delegation doctrine unless one postulates a general doctrine significantly stronger than any the Supreme Court (or the academy) has been willing to recognize since the 1930s.” Ely went on to say that a stronger argument would be that force authorizations must be sufficiently specific about the enemy against whom they are directed, but he concluded that the Gulf of Tonkin Resolution met that requirement.

In sum, after being almost entirely eclipsed in the early Cold War, war-power nondelegation arguments made a comeback in the wake of failure in Vietnam. As the following section shows, these arguments have lingered throughout the post-Cold War period, though again confined to a small minority view in Congress.

C. Post-Cold War Delegations

Since the end of the Cold War, the United States has fought three major ground wars: two in Iraq, and the war against al Qaeda and the Taliban in Afghanistan and elsewhere. All three were waged pursuant to delegated war power. In each case, the President requested, and Congress enacted, the authorizing resolution against a background of broad executive branch assertions of presidential power to use force.

1. Two Iraq War Delegations

Congress enacted force authorizations against Iraq in 1991 and 2002, both delegating discretion to initiate war. They authorized the President to use force—or not—based on his judgments about its necessity and wisdom. In that respect they resembled the 1950s force resolutions, though unlike those earlier ones, presidential intentions to use force were apparent at the time. They also contrast with other force authorizations from the period, such as Congress’s 1983 (Lebanon) and 1993 (Somalia) resolutions authorizing force when substantial military deployment was already underway.

In the lead-up to the first Iraq War, following Iraq’s 1990 invasion of Kuwait, the George H.W. Bush Administration generally argued that it had authority to use military force against Iraq even absent congressional authorization. The central constitutional debate in public commentary, legislative hearings, and the eventual floor vote concerned that assertion. By this point, the UN Security Council had also authorized member states to use force if Iraq failed to withdraw from Kuwait by a certain date. Many members, both those favoring and those opposing force authorization, emphasized the importance of Congress’s role in commencing military conflict; and many characterized even a broad delegation not as passing the buck but as preserving Congress’s formal role in war initiation. The House passed a nonbinding resolution (shortly before authorizing the use of force) that declared: “the Constitution of the United States vests all power to declare war in the Congress of the United States. Any offensive action taken against Iraq must be explicitly approved by the Congress of the United States before such action may be initiated.”

Congress ultimately passed, in January 1991, a joint resolution authorizing the President “to use United States Armed Forces” pursuant to and to achieve the objectives of UN Security Council Resolutions, that is, to eject Iraqi forces from Kuwait. At that point it was virtually certain that President Bush would use force. Nonetheless, the resolution gave the President wide latitude to decide whether or not to initiate war. The only express limitation was that before commencing war, the President was required to report to congressional leadership that, in his determination, peaceful diplomatic means were insufficient to achieve the objectives.

Some members, especially in the House, raised concerns about nondelegation. As in other modern force authorization debates, though, this was not a central issue, and the constitutional objections remained a small minority view. A few representatives framed their criticism as constitutional protests that sounded like nondelegation arguments, but it was often not clear whether they were invoking strict legal barriers or simply appealing to general principles of legislative responsibility (or perhaps a different constitutional argument).

Nondelegation arguments emerged somewhat more vocally in Congress during debate over authorizing the next Iraq War. For a decade after the Gulf War, the Iraqi regime had obstructed Security Council-mandated weapons inspections. In 2002, at President George W. Bush’s request, Congress again authorized force against Iraq. The 2002 resolution empowered the President to use military force “as he determines to be necessary and appropriate” to “defend the national security of the United States against the continuing threat posed by Iraq; and . . . enforce all relevant United Nations Security Council resolutions regarding Iraq.” The force resolution again included only the condition that the President report to congressional leadership his determination that diplomatic means were insufficient.

Though still a minority, several members of the Senate and House raised constitutional nondelegation concerns. Others made arguments that might be read either as legal objections or prudential ones. Several proponents expressly defended the constitutionality of the resolution. Then-Senator Joseph Biden specifically addressed delegation, arguing that the resolution included sufficient parameters to satisfy the nondelegation doctrine:

I am confused by the argument that constitutionally we are unable to delegate that authority. Historically, the way in which the delegation of the authority under the constitutional separation of powers doctrine functions is there have to be some parameters to the delegation . . . . But as I read this grant of authority, it is not so broad as to make it unconstitutional for us, under the war clause of the Constitution, to delegate to the President the power to use force if certain conditions exist. . . . [C]onstitutionally, this resolution meets the test of our ability to delegate. It is not an overly broad delegation which would make it per se unconstitutional, in my view.

Beyond the legislative debate, the 2002 force resolution generated a rare judicial opinion on the war power nondelegation issue. After the resolution passed, a group of plaintiffs including members of the armed forces, their relatives, and members of Congress sued President Bush, seeking to enjoin him from initiating war. One of the plaintiffs’ claims was that the resolution unconstitutionally delegated Congress’s power to declare war. The district court dismissed the suit and the First Circuit affirmed, holding that the dispute was unripe and “[did] not warrant judicial intervention.” However, it also addressed the nondelegation argument:

In this zone of shared congressional and presidential responsibility, courts should intervene only when the dispute is clearly framed. An extreme case might arise, for example, if Congress gave absolute discretion to the President to start a war at his or her will. Plaintiffs’ objection to the October Resolution does not, of course, involve any such claim . . . . The mere fact that the October Resolution grants some discretion to the President fails to raise a sufficiently clear constitutional issue.

The court rejected the nondelegation argument for several reasons. First, it treated war power as “shared between the political branches,” in contrast to many other Article I legislative powers. Thus it apparently rejected the premise that war-initiation power is exclusively vested in Congress, or perhaps it recognized that war-initiation power is not always so easy to separate cleanly from war-waging or other foreign affairs powers. Second, citing Zemel v. Rusk (which had cited Curtiss-Wright for this proposition), it noted that “the Supreme Court has also suggested that the nondelegation doctrine has even less applicability to foreign affairs.” It adopted the common assumption that war power is a subset of foreign relations powers for delegation purposes, and that within that subset, broader delegation is constitutionally permitted. Third, it rebutted the argument that Congress had relinquished policymaking responsibility to the executive branch. “Nor is there clear evidence of congressional abandonment of the authority to declare war to the President,” the court said. “To the contrary, Congress has been deeply involved in significant debate, activity, and authorization connected to our relations with Iraq for over a decade, under three different presidents of both major political parties, and during periods when each party has controlled Congress.”

At the time of this writing, Congress is actively considering repeal of the 1991 and 2002 Iraq force authorizations. Because they remained on the books for years after the overthrow of Saddam Hussein’s regime and the withdrawal of U.S. combat forces from Iraq, they also continued to operate as possible delegations for resuming conflicts or initiating new ones in and around Iraq.

2. The 2001 AUMF

Congress’s broadest force authorization may be the one following the terrorist attacks of September 11, 2001, which remains in effect. It authorizes the President to use

all necessary and appropriate force against those nations, organizations, or persons he determines planned, authorized, committed, or aided the terrorist attacks that occurred on September 11, 2001, or harbored such organizations or persons, in order to prevent any future acts of international terrorism against the United States by such nations, organizations or persons.

It specifies a purpose—to prevent further terrorist attacks by those categories of target—but it names no specific enemy and sets no duration. It requires the target to have some nexus to the September 11 attacks but gives the President wide latitude to determine who—individuals, groups, or states—comes within that scope.

Unlike other modern war power delegations, the 2001 AUMF followed a direct attack on the United States. Even those who interpret the Constitution as lodging war-initiation decisions exclusively in Congress generally recognize an implicit exception for repelling invasions or attacks. So, although the 2001 AUMF is sweeping, at least part of its scope may be understood as recognizing preexisting presidential powers to respond to attacks. Presumably that reasoning applies to al Qaeda (the actual perpetrators), but defining that group’s organizational and geographic boundaries, and determining whether presidential power also extended to, for example, Afghanistan or other nations or entities that harbored al Qaeda, are complicated matters. Thus, the authority granted the President to use force against those not already covered by his constitutional power to respond to direct attacks was potentially quite broad, especially if the nexus requirement is interpreted loosely.

Nondelegation concerns were barely raised, if at all, in Congress or commentary when the AUMF was hurriedly enacted. A few members of Congress indicated at the time that they believed that this resolution was crafted more narrowly than the Gulf of Tonkin Resolution, to avoid serving as a “blank check,” but they did not explain how so.

Although nondelegation objections were inaudible in 2001, some critics of the 2001 AUMF and proponents of amending it have more recently raised such concerns. As with the Gulf of Tonkin Resolution, expansive interpretations by successive administrations—including applying it in countries far beyond Afghanistan and against new terrorist groups like the Islamic State—probably contributed to a view that at minimum Congress should name specific enemies. In response to academic proposals to update the 2001 AUMF to allow the President to add new terrorist groups to its coverage, some commentators objected that doing so would skirt constitutional requirements. As two scholars put it:

The proposal to bypass Congress and instead delegate such future—and momentous—decisions to the President lacks any historical precedent, and for good reason. It is Congress, not the Executive, that is given the authority under our Constitution to declare war. As our Founding Fathers understood well, an authorization to use military force is a measure that should be undertaken solemnly, after public debate and with buy-in from representatives of a cross-section of the nation, based upon a careful and deliberate evaluation of the nature of the specific threat. It should not be an ex ante delegation to the President to make unreviewable decisions to go to war at some future date against some as-yet-unidentified entity.

Note the echo of arguments from earlier eras that delegating war-initiation power is uniquely problematic as a constitutional matter because of its special character.

As during the Cold War, broad legislative delegations were widely accepted in the post-Cold War period as an appropriate mode of exercising war power. Still, the nondelegation objection never fully went away.

V. SUMMARY AND IMPLICATIONS

The historical record laid out in previous Parts yields several significant and surprising points about history, doctrine, and legal reform in the field of war power. As to history, we conclude that—contrary to common assumptions—the originalist or historical case for broad war-initiation delegation is weak. At the same time, however, that history does not support the opposite position, that Congress’s war power is essentially nondelegable. Throughout much of American history, both political branches often treated war initiation as constitutionally distinct, but not so consistently as to justify, on its own, either of those positions. Modern war power delegation practices arose in the 1950s in response to geostrategic imperatives of the Cold War, but also, importantly, against a background expansion in the exercise of unilateral presidential power to use force.

Moreover, the mixed historical record shows that treating “foreign affairs delegation” as a special constitutional category is problematic. Rather, it points in favor of disaggregating that category, and even disaggregating the sub-category “war powers delegation.” The sparse record of war-initiation delegations prior to modern times also highlights the immense practical stakes of this issue as well as the varied and evolving strategic rationales behind broad delegations. In that way our focus on how Congress exercises its war power adds new dimensions to familiar accounts of whether Congress has done so. And as to legal reform, that historical record raises important questions about calls for restoring Congress’s traditional role in initiating war.

A. The Historical Development of War Power Delegation

This Article’s account of war power delegations suggests at least three conclusions about relevant constitutional history. First, the founding era has relatively little definitive evidence to offer on the topic, particularly for those searching for affirmative support for either broad war power delegation or near-absolute war power nondelegation. The drafters and ratifiers seem not to have discussed the matter directly. Although some scholars suggest that war power (and other foreign affairs powers) was seen at the time as more delegable than domestic lawmaking power, the leading specific defense of this suggestion relies principally upon extrapolation from a single obscure exchange in the Convention debates, with little if any confirmation in subsequent practice or commentary. And to the contrary, at least some key figures of the time emphasized the need to place war-initiation decisions in Congress specifically to check the President. The influential idea at the founding that decisions to start wars should rest with Congress, because Presidents might be too tempted toward war, is in considerable tension with unconstrained delegations of that power. Overall, though, originalist-oriented analysis of the founding era seems unlikely to generate specific conclusions on the delegability of war power, making this particular issue difficult to separate from the larger debate over Congress’s power to delegate its constitutional powers more generally.

Second, broad delegations of war-initiation power were surprisingly rare in historical practice prior to the Cold War. The 1798 Quasi-War statutes, often identified as key precedents for war power delegations, were actually quite narrow and incremental, sharply limiting the President’s ability to expand the naval conflict into a larger war. Moreover, they were infrequently repeated. After the Quasi-War, no significant foreign conflict was initiated pursuant to delegated power until Vietnam.

While early Congresses authorized hostilities on a few now-obscure occasions in which Presidents ultimately chose not to use force, each of these has limitations as clear precedent for broad delegation. The 1811 No-Transfer Act was conditioned on the occurrence of specific events. The 1839 authorization concerning the Maine border involved defense of specific disputed territory under potential military threat from a hostile power. The 1858 Water Witch authorization also depended on specific events and likely contemplated a low-level use of force. And those examples have generally received little scholarly or lawyerly attention, probably because they were never activated: Presidents did not invoke the delegated authority to use force because the facts on which they were conditioned did not occur. Indeed, none of the nineteenth-century acts just mentioned even appears in a recent Congressional Research Service compilation of historical authorizations to use military force.

Moreover, during the nineteenth century, Congress rebuffed Presidents Jackson and Buchanan when they requested delegated authority to use force, amid arguments (among others) that such delegations were constitutionally impermissible. For example, the Water Witch delegation was offset by Congress’s subsequent refusal to grant Buchanan wider authority to use force in Mexico and Central America. The 1839 Maine authorization came only a few years after Congress refused Jackson’s request for force authorizations against France and Mexico. And part of the Quasi-War debate involved authorization for the President to establish a Provisional Army, in which the analogous delegated power was sharply circumscribed in response to nondelegation concerns. So, war initiation was sometimes—but not consistently—treated as a special case for which broad delegation was impermissible. In sum, there is little historical practice to support broad delegations of war-initiation power prior to the Cold War, although a somewhat better case might be made for a limited practice of narrow delegations, particularly ones tied to specific circumstances or events.

In contrast, broad delegations of military powers were much more common in related areas. For example, all of Congress’s formal declarations and other official recognitions of a state of war contained essentially unlimited authorizations for the President to choose ways of fighting the war. Similarly, as to suppressing insurrections and law enforcement, Congress made open-ended authorizations with less concern or debate. Thus, if anything, the early historical record suggests that war-initiation delegation was an area of particular concern—even if the doctrinal limits were unclear and contested.

The historical record of war-initiation delegation spotlights another less-obvious reason that its early practice was more contested than delegation of war-waging powers. Whereas today war-initiation power is usually seen as a core foreign affairs issue, earlier it was viewed as straddling both foreign and domestic affairs. Madison, exemplifying concerns among some constitutional architects, observed that “[w]ar is the parent of armies; from these proceed debts and taxes; and armies, and debts, and taxes are the known instruments for bringing the many under the domination of the few.” When Justice Nelson, dissenting in the Prize Cases, argued that Congress’s war-initiation power cannot be delegated, he did not appeal to grave foreign policy consequences; he cited the effects on the “business and property of the citizen.” As one modern scholar puts it, even today “[t]he transition from peace to war and back again fundamentally alters many legal relationships, whether they are privately ordered through contract or publicly ordered through statutes, common law doctrines, treaties, or even the Constitution.” Historically, it was as much the domestic implications of war initiation as the foreign ones that gave opponents of its delegation pause.

A third conclusion about constitutional history in this area is that the pivotal period for war power delegation was the early Cold War, after which one might argue that the practice reflected a modern “historical gloss” on the Constitution. In a relatively short period of time, Congress passed a series of force authorizations granting or acknowledging broad presidential discretion as to whether (and sometimes even where and against whom) to begin hostilities: the Formosa resolution (1955), the Middle East resolution (1957), the Cuba resolution (1962), and the Gulf of Tonkin Resolution (1964). Nothing like these authorizations had occurred previously. Yet, at the time, they were largely uncontroversial, passing by wide margins with only isolated objections on nondelegation grounds. The Gulf of Tonkin Resolution became controversial later, with the growing unpopularity and inconclusiveness of the expanded Vietnam War, and with that controversy came a rise in political and scholarly appeals to constitutional nondelegation principles. But those objections faded as the United States withdrew from Vietnam and the Cold War was replaced by concerns over terrorism and rogue regimes.

The most evident explanation for this shift is geostrategic. To be sure, the Supreme Court gave comfort through its prior Curtiss-Wright decision, indicating reduced constitutional concern about delegation in foreign affairs generally. But the fundamental changes presaging the new regime of war-initiation delegation were the rise of enduring Cold War military and ideological competition, the U.S. emergence as a global superpower with a worldwide ring of military bases and defensive alliances, and the advent of nuclear weapons. These new and dire circumstances underlay a broad consensus that Presidents needed powers to respond to global emergencies quickly and with a broad range of options. The constant military mobilization and sense of emergency muddied the distinction between war-initiation and presidential commander-in-chief activities, and the obsolescence of formal war declarations in international law further blurred it. Those conditions drove not only new thinking about delegation, but also new acceptance of presidential war powers unilateralism, as reflected in Korea and Cuba.

Thus, while the 1955 Formosa authorization was a significant step up from previous cases in the breadth of delegation, it occurred at a time when many officials in both political branches believed that security imperatives in the Cold War required interpreting Article II of the Constitution to allow the President to defend distant American interests from the Communist bloc. Only a few years earlier, President Truman had taken the United States into the Korean War without express congressional approval. Although Eisenhower, who had a narrower view of presidential powers, requested the Formosa authorization, he received at least as much congressional pushback on the grounds that he did not need it to use force as on the grounds that it granted too much discretion. These developments bring us to the modern view, in which war power delegations are relatively well accepted but their origins are little understood.

In sum, although on their face congressional force authorizations over time included broader delegations, these resolutions were passed in the context of broader understandings and a prevailing practice of executive unilateralism. War power delegation may generally look broader over time in absolute terms, but so do background presidential powers. Perhaps one might attach to the Cold War resolutions a historical gloss in favor of delegation, but the background assumptions about independent presidential powers, and doubts at the time about whether congressional authorization was needed at all, make it unclear whether the political branches understood that they were systematically engaging in novel legislative delegations. Indeed, as pointed out in Part IV, that growth in unilateral presidential powers has largely obscured the nondelegation questions lurking below.

B. Doctrinal Implications

This section considers the modern doctrinal implications of the foregoing history. We suggest at least four.

First, for those who would revive a strong version of the nondelegation doctrine, war power delegations are not so easily distinguished from domestic legislative delegations. As discussed, some judges and scholars who seek such a revival on originalist and structural grounds suggest that the revival would not extend to war powers or other foreign affairs powers. Our account calls that suggestion substantially into question; at a minimum, it should caution against assuming that such a carve-out is easy to justify. As described, originalist and early post-ratification evidence for broad war-initiation delegations is quite thin. There is little basis for assuming that the founders were less concerned about war power delegations than they were about other delegations (and some evidence that they would have been more concerned). And prior to the 1950s there was essentially no practice of broad delegation of the decision to go to war. The originalist-driven project to revive the domestic nondelegation doctrine may necessarily entail grappling with war power delegations, however much some of its advocates might wish to avoid that.

Second, the historical record cautions against treating war-related or military-related delegations as a single category. Longstanding practice indicates much greater acceptance of some kinds of broad delegations: delegations as to the method of fighting wars, and as to matters of law enforcement and suppression of domestic insurrection. For example, starting with early force authorizations after the Quasi-War, including the 1802 Tripoli resolution and every formal war declaration thereafter, Congress delegated to the President broad discretion regarding how to use military force. Importantly, these are areas in which the President is widely believed to have substantial independent constitutional power as a result of the President’s constitutional status as commander-in-chief and head of the executive branch. “Some delegations have, at least arguably, implicated the president’s inherent Article II authority,” noted Justice Gorsuch in Gundy. He continued: “The Court has held, for example, that Congress may authorize the President to prescribe aggravating factors that permit a military court-martial to impose the death penalty on a member of the Armed Forces convicted of murder—a decision that may implicate in part the President’s independent commander-in-chief authority.”

In contrast, war-initiation power—much of which was widely thought, at least in the early Republic, to be vested exclusively in Congress—lacks a similar, long-running historical pattern of broad delegation. Relatedly, to the extent there is historical precedent for delegation of war-initiation power, it involves (prior to the Cold War) specific and limited delegations rather than broad open-ended ones. There is not simply one blanket category of military- or war-related powers for which delegability was historically treated and practiced in the same way.

Third, the above considerations suggest a possible path for limited revival of nondelegation principles in war power debates and adjudication, namely, through interpretation of force authorizations’ scope. To be clear, we are not arguing that such delegation in the modern era is unconstitutional, nor do we think courts are likely anytime soon to address this issue, let alone to hold so. Delegation might be defended on grounds other than originalism and history, and at this point, recent practice has ingrained broad delegations not just as an available option for Congress but even as the preferred option for those who believe that Congress must authorize war or force. However, well short of finding them unconstitutional, legislators, judges, and other legal actors who place great weight on early historical delegation practice might be inclined to read modern force authorizations narrowly.

For example, issues have arisen with respect to the scope of the 2001 and 2002 AUMFs: Presidents have sought to use the 2001 AUMF against entities such as the Islamic State, with only tenuous relationships to the 9/11 attacks, and to use the 2002 AUMF regarding Iraq to authorize force against Syrian and Iranian targets. The constitutional history of delegation suggests that if courts were ever to reach the issue, they might instead read these authorizations more narrowly, similar to the way courts have begun to read ambiguous domestic delegations narrowly, as not encompassing important matters not clearly within the contemplation of the delegating Congress. Much like the Supreme Court held that it would not read a statute to delegate to the Environmental Protection Agency power to decide “major questions” of greenhouse gas regulation absent a clear statement by Congress of that intent, so too courts could reason from the historical record that force authorizations should be read narrowly absent a clear legislative statement.

Of course, courts are likely for many reasons—including remedial problems and concerns about comparative expertise—to avoid this issue and treat it as nonjusticiable. The wisdom and practicality of such an interpretive rule are beyond this Article’s scope, and they would depend on many other factors besides history. Ultimately this will likely remain a constitutional issue for the political branches to wrestle with outside of courts. But regardless of where the issue is debated and decided, the historical record—especially the founding-era concerns about this particular power and the early practice of specific and limited delegations, to the extent war powers were delegated at all—could be used to support such an interpretive approach.

One might respond to these first three doctrinal points by arguing that the President has at least some independent power to use military force, so—for purposes of constitutional delegation analysis, and perhaps also for purposes of interpreting force authorizations—war initiation is to some extent a shared power of the political branches. But even so, assuming there is at least some zone of exclusive congressional power, the question remains how delegation operates within that zone. As noted, this Article assumes the existence of such a zone. We nevertheless acknowledge that the line demarcating that zone is not a bright one, which is also among the reasons that courts are likely to regard this issue as nonjusticiable.

Finally and more generally, the above account indicates the importance of disaggregating the category of foreign affairs delegations. Since Curtiss-Wright, courts and commentators have discussed a generalized category of foreign affairs powers that (it is said) may be more easily delegated. The history of war power delegations shows that this cannot be so easily assumed. As discussed, even within the foreign-affairs sub-category of military or war-related powers, some powers were historically regarded as more readily delegable than others. By extension, it seems inappropriate to generalize about delegability of foreign affairs powers. Some foreign affairs powers may indeed be readily delegable—particularly if they are associated with independent presidential powers, or with longstanding practice of congressional delegations. Others may not be, perhaps because—like war-initiation power—structurally Congress was designed to play a checking role and longstanding practice is not supportive of delegation. Specific types of foreign affairs delegations should be assessed individually rather than in general categories.

The foreign-domestic distinction in nondelegation law has held little significance in practice since Curtiss-Wright because, even in domestic cases, courts have generally upheld delegations to the President under very deferential review. However, the idea that the Constitution permits broader delegation in foreign than in domestic affairs could become crucial if courts and the political branches were to apply the nondelegation doctrine more strictly, as some Justices say they would. In Gundy, for example, Justice Gorsuch (joined by two other Justices) signaled that expansive foreign affairs delegations might survive his stricter nondelegation analysis. Justice Thomas elsewhere similarly suggested that broad foreign affairs delegations might be more permissible. Although, again, courts will likely continue to treat war-initiation disputes as nonjusticiable, a number of scholars have predicted that judges applying a stricter nondelegation doctrine would likely continue to carve out foreign affairs or national security generally for different treatment. Ultimately, delegation of war initiation may still be constitutionally justified and defended on functional or other grounds, but the history of war power delegation cautions against broad-gauge categorical approaches to foreign affairs as a whole.

C. Strategic Significance of War Power Delegation

The historical record also gives reason to think that the question whether Congress may delegate power to initiate major war has arguably been more consequential than whether Congress must authorize major war (defined loosely as ground wars with immense costs to the United States). The former issue gets almost no attention today, yet it becomes critically important if one believes the answer to the latter question is yes. Apart from the Korean War, the President has always requested and received congressional approval to launch major wars. Presidents have not always regarded this step as necessary, but they have taken it nonetheless. Counterfactual history is of course difficult, but it is hard to identify past major wars in which a constitutional requirement of congressional approval would have made a difference.

It may be easier to identify situations where a requirement that Congress actually decide to initiate war might have influenced the outcome or timing. For example, Eisenhower believed that effectively deterring Chinese attacks on Taiwan in 1955 required diplomatic brinksmanship that in turn required congressional pre-approval to use unlimited force. At least in Eisenhower’s view, delegated war power reduced the likelihood of war compared to seeking a decision by Congress after a Chinese provocation. Requiring Congress to expressly initiate war rather than delegate the decision might reduce or delay war in other ways. In the Persian Gulf War, the Senate passed the 1991 resolution granting the President an option to initiate war by only a narrow 52-47 margin. Would Congress have passed a resolution firmly deciding to initiate war, if it could not constitutionally delegate that politically difficult decision to the President? Perhaps not, or perhaps only after diplomacy was given more time. Similarly, had Congress been required to decide on war with Iraq in 2002–2003, we wonder whether Congress might have scrutinized more carefully the intelligence about Iraq’s alleged weapons of mass destruction. It is impossible to prove the impact of such a requirement (compared to an option to delegate), but it is fair to speculate that war decisions might have played out differently or been slowed. And if merely slowing a decision for war seems insubstantial, remember that it is among the reasons most often cited for lodging war power in Congress to begin with.

The historical record also reveals that how Congress exercises its war power, specifically its choice to delegate decision-making on war, has been of great strategic importance—but for different reasons over time. That episodic history can be understood as efforts by the political branches to wrestle with new foreign policy dilemmas that did not fit neatly with a requirement or practice that Congress itself make the final decision on military intervention.

One obvious rationale for war power delegation is the generic rationale behind many legislative delegations: to manage complexity. To deal flexibly with complicated and uncertain situations, Congress often delegates substantial authority to the executive branch to implement policy within legislative parameters. War power delegations since World War II can be understood in similar terms, as recognition that fast-changing geopolitical conditions and the President’s simultaneous exercise of other military, diplomatic, and economic powers favor giving the President flexibility on whether and when to use force or initiate war. Indeed, although historically critics of war power delegation were generally concerned about presidential power, the practical impact of strict nondelegation—that is, giving Congress only a stark choice between deciding to use force or not, rather than allowing it to authorize the President to exercise some discretion—might actually have been more presidential unilateralism. As the U.S. government has dealt with a wide range of security crises, war power delegations may also thus reflect adaptive, pragmatic advantages of flexibility in how Congress legislatively exercises its war power.

Historically, however, war power delegation has served as a device for handling various specific strategic challenges in addition to managing complexity. That history is especially useful to those who would justify broad war power delegation on functional grounds. The narrowly crafted 1811 No-Transfer Act involved special need for secrecy, for example. The UNPA involved delegation to solve particular credibility challenges for formal collective security arrangements that would have been unimaginable to the founders. Another new challenge after World War II was extended deterrence, or the credible threat of force to deter attacks on allies, particularly in the Eisenhower Administration. In the UNPA and Eisenhower-era force resolution episodes, war power delegations were intended to signal policy certainty, not highlight policy discretion. That dilemma of squaring credible commitments to use force with congressional control of war initiation was also partially obviated by a shift in practice from congressional delegation to executive unilateralism. As explained next, efforts to roll back presidential war powers will bring some of these dilemmas back to the fore.

D. Implications for War Powers Reform

Finally, the historical record of war power delegation—especially questions about its acceptance at the founding and the thin body of practice since then—has implications for war powers reform. Reformists often pitch their calls as “restoring” Congress’s proper constitutional role in war initiation, but the historical record raises questions about what interbranch arrangements reformists are usually calling for a return to. For those who advocate reversion to exclusive congressional control over war initiation, it also raises tough questions about Congress’s ability to delegate discretion through future force authorizations.

Those advocating tighter congressional control of war initiation, whatever their political stripes, often appeal to originalism. In advocating reforms to the 1973 War Powers Resolution, for example, legislative sponsors often talk of restoring the original constitutional framework, in which Congress wielded exclusive control over decisions to initiate war. The core of many war power reform proposals is to add teeth to the requirement that Congress authorize major uses of military force. Reformists usually assume not just that a congressional resolution delegating power to use force is constitutionally sufficient, but that it represents the gold standard of congressional war power primacy. Note, also, that a similar view is currently shared by some members of Congress who propose (much like Eisenhower in 1955) to authorize the President in advance to use force against China to protect Taiwan—a scenario that could entail large-scale war.

Such proposals may be normatively attractive, but if we take reformists’ appeal to originalism seriously, that commitment may demand more than reformists realize. It is not clear that a forward-looking delegation of authority to use force would have satisfied constitutional requirements for how Congress exercised its exclusive war powers at the founding. Whereas requiring an express congressional force authorization for any major hostile use of armed force is today generally seen as fully restoring Congress’s powers as they were originally understood, our findings show that early understandings were uncertain—not uncertain in the way commonly discussed, as to whether Congress’s powers were exclusive, but uncertain as to how Congress was required to exercise those exclusive powers.

Our analysis suggests that those advocating a return to greater exclusive congressional war power should also grapple with whether there are any constitutional limits to its delegation. And in doing so, they would simultaneously have to consider how the strategic imperatives discussed in the previous section will often continue to push in favor of broad delegation.

CONCLUSION

This Article’s chief aim has been to describe the historical evolution of war power delegation from the founding era to the present. This account is interesting in itself, as it undercuts a common assumption that broad war-initiation delegations of the type used in modern practice are a longstanding feature of the constitutional landscape. To the contrary, the Article shows that from the Constitution’s earliest years until the mid-twentieth century, war-initiation delegations were rare and typically specific and conditioned on particular events. Broad delegations became more common only after World War II, first in the Cold War and then continuing to modern times in the conflicts with Iraq and the war on terrorism. The story of war-initiation delegations is a story of constitutional change.

The Article takes no firm position on the ultimate implications for modern war powers doctrine. That depends on one’s view of constitutional interpretation more generally—originalists, traditionalists, and functionalists may, for example, draw different conclusions. At a minimum, though, it is more difficult than often supposed to defend the modern approach to war initiation on grounds of longstanding historical practice. The historical record also spotlights an otherwise-obscured question about common calls to respect Congress’s original, exclusive war power: namely, whether originally there were constitutional limits to its delegation.

Our analysis also yields insights for broader debates about nondelegation. The Supreme Court has indicated that delegation may be categorically more appropriate in foreign affairs matters, and modern proponents of reviving the nondelegation doctrine have suggested that the revival might exempt delegation of foreign affairs powers. Especially for nondelegation revivalists who take originalism seriously, however, this Article cautions against categorical treatment of foreign affairs delegations, and even against categorical treatment of war-related delegations.

96 S. Cal. L. Rev. 741


* Warren Distinguished Professor of Law, University of San Diego School of Law.

 Liviu Librescu Professor of Law, Columbia Law School. The authors thank Scott Anderson, Curtis Bradley, Harlan Cohen, Kristen Eichensehr, Jane Manners, Michael McConnell, and Kelsey Wiseman, as well as participants in the Duke-UVA Foreign Relations Law Workshop convened by Professors Bradley and Eichensehr, for their comments on earlier drafts. The authors thank Tanner Larkin, Christopher Malis, Austin Owen, Ruth Schapiro, Alec Towse, and Josh Tupler for outstanding research assistance, and they thank the Martin and Selma Rosen Research Fund for support.

The Illusory Moral Appeal of Living Constitutionalism

Two prominent theories of constitutional interpretation are originalism and living constitutionalism. One common argument for living constitutionalism over originalism is that living constitutionalism better avoids morally unjustifiable results. This Note will demonstrate that this argument is flawed because living constitutionalism lacks a sufficiently definitive prescriptive claim about how to interpret the United States Constitution.

Proponents of originalism assert that courts should interpret constitutional provisions in accordance with the public meaning of those provisions at the time of their enactment. One criticism of originalism is that if the Supreme Court were to faithfully apply the theory, such application would lead to morally unjustifiable outcomes. This criticism has two components: (1) had the Supreme Court subscribed to originalism as its interpretive method in the past, then certain outcomes, such as the banning of racial segregation in public schools in Brown v. Board of Education, would not have occurred; and (2) if the Supreme Court employs originalism in the future, the Court might issue rulings contrary to contemporary moral sensibilities. Moreover, some critics of originalism maintain that when confronted with this problem, proponents of originalism either deny that its application would lead to those outcomes and stretch the theory’s meaning beyond its capacity for any meaningful constraint on interpretation, or admit that they would find the morally objectionable practice unconstitutional, even if such a holding would be inconsistent with the originalist method. Thus, the claim is that originalists are “faint-hearted”; that is, they either tailor the definition of originalism to conform to morally required decisions or abandon originalism when it is too much to bear. This, critics of originalism assert, indicates that originalism is not viable as a constitutional method and should be abandoned, some argue, in favor of living constitutionalism.

This Note will demonstrate the flaws in the above argument. The argument is flawed not because it can necessarily be proven that originalism leads to more morally justifiable results than living constitutionalism, but because living constitutionalism lacks a prescriptive claim definitive enough to make such a comparison between the two theories possible. That is, it is impossible to identify past or hypothetical future outcomes of cases as being consistent or inconsistent with living constitutionalism. Moreover, because it is possible to do so with originalism, and thus to posit how implementing originalism could lead to morally undesirable results, living constitutionalism has an illusory moral superiority over originalism.


Respect for Marriage in U.S. Territories

The 2010s were a watershed decade for marriage equality in the United States. In 2013, the Supreme Court in United States v. Windsor struck down section 3 of the so-called Defense of Marriage Act (“DOMA”), which denied federal recognition to valid state marriages between same-sex couples. The opinion left intact section 2 of DOMA, which “allow[ed] States to refuse to recognize same-sex marriages performed under the laws of other States.” Two years after Windsor, the Supreme Court in Obergefell v. Hodges invalidated all state laws against same-sex marriage. The opinion effectively invalidated section 2 of DOMA and went one step further: states not only had to recognize out-of-state same-sex marriages but also had to perform same-sex marriages in-state. Obergefell brought marriage equality to every state.

Although Obergefell seemed to guarantee same-sex couples the constitutional right to marry, marriage equality became vulnerable in the summer of 2022. In addition to providing the critical fifth vote to reverse Roe v. Wade in Dobbs v. Jackson Women’s Health Organization, Justice Thomas wrote a concurrence calling for the complete repudiation of substantive due process. Ominously, he wrote “in future cases, we should reconsider all of this Court’s substantive due process precedents, including . . . Obergefell.”

Justice Thomas’s concurrence in Dobbs reinvigorated congressional efforts to pass the Respect for Marriage Act (“RFMA”), a statute that would require states to grant full faith and credit to out-of-state marriages regardless of race, gender, ethnicity, or national origin. The marriage equality movement succeeded when President Biden signed the RFMA into law in December 2022. Despite the recent controversy over Justice Thomas’s Dobbs concurrence, the RFMA was not new legislation; versions of the RFMA had been proposed in Congress for over a decade, before either the Windsor or Obergefell opinions were issued. The RFMA did not simply codify Obergefell, as the Act does not invalidate any state’s prohibition on licensing same-sex marriage within its own borders. Instead, the RFMA effectively repealed section 2 of DOMA and affirmatively requires states to recognize same-sex marriages legally performed in other states.

Opponents of the RFMA argued that the legislation was unnecessary because Obergefell already protects marriage equality. They seem unimpressed with Justice Thomas’s shot across the bow in Dobbs. For example, one month after Justice Thomas announced his intention to reconsider and perhaps reverse Obergefell, Senator Marco Rubio belittled the RFMA as a “stupid waste of time.” Iowa Senator Chuck Grassley voted against the RFMA, asserting that the “legislation is simply unnecessary. No one seriously thinks Obergefell is going to be overturned so we don’t need legislation.” He implied that RFMA supporters were seeking “to fabricate unnecessary discontent in our nation.”

The argument that the RFMA was unnecessary because marriage equality was already the law of the land failed to appreciate how constitutional law reaches the shores of U.S. territories. Even if Justice Thomas fails in his mission to overturn Obergefell, the RFMA is still essential now to bring the protections of Obergefell to all corners of the American empire. Before the RFMA, the U.S. territory of American Samoa refused to follow Obergefell and continued to restrict marriage licenses to opposite-sex couples.

While Obergefell instantly brought marriage equality to every state, the path toward marriage rights has been more complicated in U.S. territories: American Samoa, Guam, the Commonwealth of the Northern Mariana Islands (“CNMI”), the U.S. Virgin Islands (“USVI”), and Puerto Rico.

Acquired primarily from colonial powers by purchase or as the spoils of war, U.S. territories hold a precarious position in our constitutional structure. Beginning in 1901, the Supreme Court issued a series of opinions known as the Insular Cases. This line of authority prevented constitutional rights from automatically protecting territorial residents. Instead, the Court held that “the Constitution is applicable to territories acquired by purchase or conquest, only when and so far as Congress shall so direct.” In the absence of a congressional directive, the Insular rubric provides that federal courts can hold that a constitutional right applies to one or more territories when the court determines that the right is “fundamental” and that recognizing the right would not be “impracticable and anomalous” for that territory. Under this test, for example, the district court in King v. Andrus struck down rules denying jury trials in criminal cases in American Samoa, finding that it would not be impracticable and anomalous to require American Samoa to provide jury trials to criminal defendants, given the structure of the American Samoan judicial system.

Conversely, in rejecting calls to provide birthright citizenship to individuals born in American Samoa, the Court of Appeals for the D.C. Circuit in 2015 in Tuaua v. United States held that it would be “anomalous to impose citizenship over the objections of the American Samoan people themselves” and that federal judges should not “forcibly impose a compact of citizenship—with its concomitant rights, obligations, and implications for cultural identity.” In 2021, the Tenth Circuit in Fitisemanu v. United States followed suit and used the Insular framework to block birthright citizenship for American Samoans.

The Fitisemanu plaintiffs petitioned the Supreme Court for certiorari. Some commentators saw the case as the perfect vehicle for challenging the Insular Cases. The hope was not far-fetched. Respected scholars advocate the reversal of the Insular Cases. Significantly, in his concurrence in United States v. Vaello Madero in April 2022, Justice Gorsuch observed the following:

A century ago in the Insular Cases, this Court held that the federal government could rule Puerto Rico and other Territories largely without regard to the Constitution. It is past time to acknowledge the gravity of this error and admit what we know to be true: The Insular Cases have no foundation in the Constitution and rest instead on racial stereotypes. They deserve no place in our law.

On October 17, 2022, however, the Supreme Court denied certiorari in Fitisemanu, thus leaving the Insular Cases intact. While not obvious at first glance, that decision has implications for marriage equality in U.S. territories.

This Article proceeds in three parts. Part I examines how the governments of the five U.S. territories responded to the Obergefell decision. Because of the Insular Cases, Obergefell did not necessarily apply automatically to the territories. Of most concern, the territorial government of American Samoa has refused to recognize either Obergefell or marriage equality. Part II explains how the RFMA provides a partial solution to the problem created by the Insular Cases. It discusses the unappreciated significance of the RFMA for residents of U.S. territories. The RFMA brings a form of marriage equality to American Samoa for the first time. Less historic, but also important, the RFMA would ensure the continuation of marriage equality in those U.S. territories where the right to same-sex marriage is currently recognized but uniquely vulnerable because of the Insular Cases. Part III exposes some of the limitations of the RFMA. For example, the RFMA requires only that states and territories provide full faith and credit to marriages legally performed in other states and territories; same-sex couples still cannot get legally married in American Samoa. They must leave home to get married, a burden not imposed on opposite-sex couples.


Prosecutorial Authority and Abortion

In the wake of Dobbs, abortion is now unlawful in many states. States that prohibit abortion use their regulatory authority, civil justice systems, and criminal law to do so. Presumably, many of the activists and politicians who have been fighting to ban abortion will want to see that outlawing abortion is effective at reducing the incidence of abortion in fact. Once abortion is unlawful in a state, some pro-life partisans will also want those who perform or assist abortions to be criminally punished.

This Essay identifies a serious procedural obstacle to the use of the criminal law against abortion in a post-Dobbs world: exclusive local authority to bring criminal prosecutions. The obstacle is constitutional in a small number of states, but one of those states, Texas, is the most populous state where abortion is now illegal. In these states, only local, autonomous prosecutors (district attorneys and county attorneys) can pursue indictments or file informations to commence criminal cases. Prosecutorial localism is enshrined in the Texas Constitution.

Inside the borders of states that do not allow their attorneys general to initiate prosecutions, criminal law against abortion will be a dead letter in certain urban and suburban counties as pro-choice electorates pick prosecutors who will not bring abortion prosecutions. For politicians in states like Texas with well-entrenched Republican leadership at the statewide level, the pressure to act forcefully against abortion will be immense, but without changes to jurisdictional laws, Republican attorneys general will be unable to enforce abortion bans through criminal law. At the same time, the pressure on Democratic county and district attorneys not to enforce the abortion laws will be equally immense. The outcome may be highly contentious constitutional litigation to revisit old understandings about the allocation of authority between state and local elected officials, as well as efforts in state legislatures to amend statutes and constitutional provisions that mandate localism in criminal procedure.

This brief Essay adds to the growing literature on criminal procedure in a post-Dobbs world. Those prosecuted for performing or having abortions who have lost the Fourteenth Amendment’s shield for the procedure itself will still be protected by the Fourth, Fifth, and Sixth Amendments, as well as broader common law traditions and workaday rules of criminal trials in their states. For instance, Peter Salib and Guha Krishnamurthi have already pointed out the deterrent effect of jury nullification on abortion prosecutions.

This Essay closes by recognizing that criminal prosecutions are not the only tool that pro-life leaders at the state level have to promulgate antiabortion policy. The fact that those involved with abortion in some “blue” counties in some “red” states will be safe from criminal prosecution will not restore the pre-Dobbs status quo. Rather, the likely result in these counties is a kind of gray market condition where unlicensed providers of medication abortions will be able to operate while licensed professionals and established clinics will be kept closed by the threat of regulatory fine, license revocation, and civil liability. And of course, this assumes that pro-life politicians and voters do not quickly amend state laws—even state constitutions—to permit attorneys general to prosecute abortion.


Colorblind Constitutional Torts

Much of the recent conversation regarding law and police accountability has focused on eliminating or limiting qualified immunity as a defense for officers facing § 1983 lawsuits for using excessive force. Developed during Reconstruction as a way to protect formerly enslaved persons from new forms of racial terror, 42 U.S.C. § 1983 allows private individuals to bring suit against police officers when their use of force goes beyond what the Constitution permits. Qualified immunity provides a way for law enforcement to evade civil suits if officers can show that they did not infringe any constitutional right or they did not violate a clearly established law—concepts that are highly deferential to police. Implicit in the contemporary emphasis on reforming qualified immunity is the idea that but for this concept, § 1983 litigation could effectively fulfill its longstanding goal of holding police officers accountable through civil liability when they beat, maim, or kill without legal justification.

Qualified immunity certainly raises important issues, and reform in this area of law is needed. But deeper problems plague § 1983 claims. In this Article, we examine a key structural deficiency tied to legal doctrine that has largely escaped critique: how the Supreme Court’s 1989 decision in Graham v. Connor radically transformed § 1983 causes of action. Prior to the Graham decision, federal courts used diverse mechanisms, notably Fourteenth Amendment substantive due process, to determine “what counts” as an appropriate use of force. The Graham decision changed this area of law by holding that all claims of police excessive force must be judged against a Fourth Amendment reasonableness standard. This transformation has led to much discussion about what Graham means for understanding which police practices concerning the use of force are constitutionally permissible. However, there has been little conversation about what Graham has specifically meant for federal courts’ conception of civil enforcement mechanisms such as § 1983 that are designed to provide monetary relief when these constitutional rights are violated. 

In this Article, we engage in the first empirical assessment of Graham’s impact on federal courts’ understanding and application of this statute. We find that the Graham decision not only was constitutionally transformative in terms of how federal courts understand the legal standard for “what counts” as excessive force, but also correlates with changes in how federal courts think about the overall scope, purpose, and nature of § 1983. Our data analysis of two hundred federal court decisions shows that the Graham decision effectively divorced § 1983 from its anti-subordinative, race-conscious history and intent, recasting it in individualist terms. This has led to a regime of what we call colorblind constitutional torts, in that the Graham decision doctrinally filtered § 1983 use-of-force claims down a structural path of minimal police accountability by diminishing the central roles of race and racism when federal courts review § 1983 cases. These findings and theoretical framing suggest that the contemporary emphasis on qualified immunity in police reform conversations misunderstands and significantly underestimates the doctrinal and structural depth of the police accountability problem. This Article provides a novel and useful explanation for how and why excessive police force persists and offers a roadmap for change and greater police accountability.

Introduction

It is not uncommon for diabetics suffering from hypoglycemia (low blood sugar) to have their symptoms of disorientation and loss of consciousness mistaken for signs of drug or alcohol intoxication, which can lead to mistreatment by the police.[1] This is what happened to Dethorne Graham one fall afternoon in 1984. Graham and his friend were pulled over by a police officer who thought Graham was “behaving suspiciously” when he quickly entered and exited a local convenience store in search of orange juice to offset his medical condition. The officer called for backup and, within a few short minutes, Graham was handcuffed face down on the sidewalk. When his friend tried to explain to the officers that Graham was a diabetic, one officer replied, “I’ve seen a lot of people with sugar diabetes that never acted like this. Ain’t nothing wrong with the [motherfucker] but drunk. Lock the [son of a bitch] up.”[2] Another neighborhood friend familiar with Graham’s condition saw the incident and brought orange juice to the scene. Graham begged Officer Matos, saying, “Please give me the orange juice.” She responded: “I’m not giving you shit.”[3] Graham was roughed up by the officers and thrown in the back of a squad car. Eventually, the officers drove him home, threw him on the ground in front of his house, and sped away.

During the altercation, Graham “sustained a broken foot, cuts on his wrists, a bruised forehead, and an injured shoulder . . . [along with developing] a loud ringing in his right ear.”[4] Graham brought a federal civil rights suit under 42 U.S.C. § 1983 against the Charlotte, North Carolina, Police Department, alleging that the police violated constitutional rights granted to him under the Fourteenth Amendment. Before this case, plaintiffs sought remedies for excessive use of force by the police through different legal mechanisms, including substantive due process, equal protection, the Fourth Amendment, and even § 1983 as a stand-alone source for making claims.[5] While the district and circuit courts ruled in favor of the officers, the United States Supreme Court made a surprising decision. The Court held that all claims regarding the constitutionality of police use of force should be analyzed under the Fourth Amendment through a standard of “objective reasonableness.”[6] Graham v. Connor (“Graham”) marks an important, though often underappreciated, moment of doctrinal transformation. It synthesized previously divergent strands of use-of-force case law and established a new constitutional standard for all cases that involve claims of police using excessive force in the context of an arrest or investigatory stop.[7] Rather than framing police use of force as a matter concerning equal protection or substantive due process, the Graham decision effectively forced all conversations concerning excessive force to federal courts’ Fourth Amendment jurisprudence.

Over the past three decades, legal scholars and practitioners have debated the impact that Graham has had on limiting issues concerning the constitutionality of police use of force to a vague and nebulous standard of “objective reasonableness” in light of the broad deference that society and the courts give to law enforcement.[8] This deference and tendency to see almost all police actions as “reasonable” explains, at least in part, how even the most egregious police behavior often goes without penalty—a concern that is at the heart of the contemporary social movement against police violence. But, despite this almost exclusive preoccupation with what Graham has meant for constitutional law, there are other meaningful doctrinal concerns that deserve exploration. Put differently, what other aspects of use-of-force inquiries have been impacted by the shift in constitutional standards brought by Graham?

There are at least two main components to § 1983 litigation concerning police use of force: the enforcement action, which is a statutory mechanism, and the constitutional standard that is being enforced (Fourth Amendment reasonableness, per Graham). The existing scholarship only examines the influence of Graham in regard to how it changed federal courts’ understanding of the constitutional standard for “what counts” as excessive force. But what has Graham meant for how federal courts understand the scope, context, and meaning of civil rights—particularly statutory enforcement mechanisms such as § 1983?

In this Article, we engage in the first empirical assessment that examines Graham’s impact on how federal courts understand the nature and purpose of § 1983. This issue concerning Graham’s impact on § 1983 litigation beyond shaping the constitutional standard for excessive force is important for several reasons. The statute emerged during Reconstruction pursuant to Congress’s Fourteenth Amendment section 5 powers to provide civil remedies such as money damages to claimants when state officials violate constitutional rights while working in their official capacities.[9] Thus, understanding Graham’s impact should not be limited to discursive and doctrinal meditations on reasonableness, which is where the bulk of the discussion on this decision lies. It is also important to explore Graham’s impact on a civil rights statute designed to enforce constitutional rights in terms of how, if at all, the decision affected the way that federal courts read and interpret the history, meaning, and application of § 1983—legislation meant to give claims concerning police excessive force purpose and effect. Clearly, § 1983 as an enforcement mechanism has a close relationship with Fourth Amendment standards on reasonableness in the police use of force context. This Article is an attempt to go beyond existing scholarship on how the Graham decision reshaped the constitutional standard to also understand how it may have impacted the way that federal courts conceptualize the reach and intent of the civil statute meant to enforce these rights.

This research is critically important in light of contemporary social movements and proposed legal reforms responding to growing public awareness of police brutality in marginalized communities. Following the killing of George Floyd in Minneapolis and subsequent global protests against anti-Black violence, the conversation on how law can compel greater accountability with regard to police use of force has focused heavily on qualified immunity. Qualified immunity is a judicially created concept that emerged in the 1960s to allow government officials facing constitutional tort actions to avoid civil suits and the possibility of paying money damages when they can show that they did not violate any constitutional right or that the law they were accused of breaking was not clearly established. Qualified immunity morphed over subsequent decades to largely become a mechanism to shield police officers from enduring § 1983 lawsuits in virtually all but the most egregious instances of force.[10] Federal courts’ deferential posture towards police facing constitutional tort actions has turned qualified immunity into an exculpatory tool for law enforcement who use excessive force. As such, the post-Floyd emphasis on eliminating qualified immunity or restricting its use has become a popular public rallying point. For example, at the federal level, Representatives Justin Amash and Ayanna Pressley introduced the Ending Qualified Immunity Act in the House of Representatives in June 2020,[11] which was followed shortly by a similar bill in the Senate proposed by Senators Edward Markey, Elizabeth Warren, and Bernie Sanders.[12] Other efforts have been pursued to address the use of qualified immunity in state-level legislation. Since George Floyd’s murder in May 2020, “at least 25 states have taken up the issue and considered some form of qualified immunity reform, including Colorado, New Mexico, Connecticut and Massachusetts, which have passed legislation to end or restrict the defense.”[13] The idea behind these and other efforts at ending qualified immunity is that making police officers open to civil lawsuits for using excessive force will increase accountability and prevent officers from engaging in violence that violates constitutional rights.

Without question, qualified immunity presents unjust and unjustifiable barriers to holding police accountable. But there are deeper structural limitations on this type of litigation, namely Graham’s reframing and reorientation of the entire constitutional tort endeavor. Graham deserves as much or even greater attention because its reframing of police use of force through Fourth Amendment logics has dislodged constitutional tort litigation from its foundational purpose: protecting the Black community from state violence. Yet the Graham decision, its transformative impact on policing, and its role in undermining police accountability are largely absent from legal and public conversations about police reform. This Article uses empirical evidence to draw attention to this problem and argues for a different focus in efforts to reduce police violence.

To understand the structural limitations on police accountability beyond qualified immunity that the Graham decision ushered in, Part I of this Article provides a brief history of § 1983 and explores the constitutional and statutory evolutions that constitute contemporary use-of-force jurisprudence. Part I also shows that legal scholars have mostly discussed the problem of police accountability for excessive force in terms of qualified immunity. Part II examines the research literature on Graham and shows that existing scholarship is largely silent on how this doctrinal evolution came to limit constitutional tort actions. Graham has been discussed in legal scholarship with very little, if any, attention to what assessing the constitutionality of police use of force exclusively through Fourth Amendment frameworks has meant for federal courts’ posture toward the civil remedies offered by § 1983 and sought by plaintiffs. Part III describes our empirical study examining shifts in how federal courts decided § 1983 cases after Graham. We examine two periods: (a) from Monroe v. Pape in 1961 (which marks the beginning of the modern era of § 1983 litigation) through the Graham decision in 1989, and (b) after Graham, from 1990 to 2016. Part IV discusses the results of our study. We find important changes in how federal courts understand and approach § 1983 that correlate with the Graham decision. In particular, (1) references to § 1983’s descriptive titles—Ku Klux Klan Act, Enforcement Act, and so on—that reflect the racial history tied to this civil rights statute declined substantially after Graham; (2) consistent with Graham’s holding, judicial recognition of § 1983’s tight doctrinal relationship to the Fourteenth Amendment as a more race-conscious constitutional standard for excessive force claims largely ended, diminishing the potential of § 1983 civil remedies by linking them to Fourth Amendment standards of “reasonableness” that largely defer to the police; and (3) mentions of the race of plaintiffs and officers meaningfully decreased after the Graham decision. In Part V, we draw on these empirical findings to develop a theory of colorblind constitutional torts that can at least partially explain these results, as well as the persistence of police violence despite the availability of legal mechanisms designed to prevent and remedy such abuses. We conclude with a brief discussion of how these empirical findings and this new theoretical framework can help federal courts reimagine constitutional torts in a manner that produces greater police accountability.
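For readers who want a concrete picture of what this kind of measurement involves, the short sketch below is purely illustrative and is not the study’s actual code or corpus: it assumes hypothetical directories of plain-text opinions split at 1989 and simply tallies mentions of a few of § 1983’s descriptive titles in each period.

```python
# Illustrative sketch only: hypothetical corpus layout and search terms,
# not the study's actual data or coding pipeline.
import re
from collections import Counter
from pathlib import Path

# Hypothetical directories of plain-text federal court opinions, split by period.
PERIODS = {
    "pre-Graham (1961-1989)": Path("opinions/1961_1989"),
    "post-Graham (1990-2016)": Path("opinions/1990_2016"),
}

# A few of the descriptive titles for § 1983 discussed in the Article.
TERMS = ["Ku Klux Klan Act", "Enforcement Act", "Civil Rights Act of 1871"]

def count_references(folder: Path, terms: list[str]) -> Counter:
    """Count case-insensitive mentions of each term across all opinions in a folder."""
    counts = Counter()
    for opinion in folder.glob("*.txt"):
        text = opinion.read_text(errors="ignore")
        for term in terms:
            counts[term] += len(re.findall(re.escape(term), text, flags=re.IGNORECASE))
    return counts

for period, folder in PERIODS.items():
    print(period, dict(count_references(folder, TERMS)))
```

The actual study’s coding is of course richer than simple string matching; the sketch only conveys the before-and-after comparison at the heart of the design.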

The findings from our research show that the accountability problem regarding police use of force is not simply a matter of individual “bad apples” in law enforcement shielded by misguided common law arguments about qualified immunity. Rather, important doctrinal barriers emerged after Graham imposed a Fourth Amendment framework, infusing constitutional tort actions with colorblind sensibilities that undercut the entire historical project of § 1983. The empirical evidence, doctrinal reframing, and theoretical argument provided by this Article open up important new opportunities for change.

The data from this study raise important questions about Graham’s significance beyond constitutional law. Graham has also had tremendous implications for how federal courts interpret and understand federal civil rights statutes, particularly § 1983. By instilling a discourse of colorblindness into excessive-force litigation, Graham disrupts, if not completely undermines, the connection between § 1983 and the distinct history of state-sponsored racial terror that gave rise to it. By bringing colorblindness through the backdoor into judicial interpretations of this federal statutory remedy, Graham not only fundamentally contradicts the social, political, and historical forces that give meaning to § 1983 but also frustrates the statute’s ability to address contemporary abuses under color of law, such as excessive force by law enforcement.


          [1].      The American Diabetes Association offers resources on how to engage with police officers. It notes that this is a particular concern for people with this medical condition, as “[l]aw enforcement officers [can fail] to identify hypoglycemia emergencies, mistaking them for intoxication or noncompliance. This can lead to the individual being seriously injured during the arrest, or even passing away because the need for medical care was not recognized in time.” Discrimination: Law Enforcement, Am. Diabetes Ass’n, https://www.diabetes.org/tools-support/know-your-rights/discrimination/rights-with-law-enforcement [https://perma.cc/RE3M-BXXR].

          [2].      Graham v. Connor, 490 U.S. 386, 389 (1989). The quoted language was originally censored by the Court in its opinion, but it appears uncensored here.

          [3].      Direct Examination of DeThorn Graham, Graham v. Connor, No. 87-6571 (W.D.N.C. Oct. 13, 1988).

          [4].      Graham, 490 U.S. at 390.

          [5].      See generally Osagie K. Obasogie & Zachary Newman, The Futile Fourth Amendment: Understanding Police Excessive Force Doctrine Through an Empirical Assessment of Graham v. Connor, 112 Nw. U. L. Rev. 1465 (2018) (finding that federal courts largely did not use the Fourth Amendment as a constitutional standard in § 1983 excessive-force cases prior to Graham).

          [6].      Graham, 490 U.S. at 388.

          [7].      Graham notes that this Fourth Amendment analysis applies when the police intentionally engage in an arrest, investigatory stop, or other seizure of a citizen. Instances after Graham in which the police cause physical harm without this intent (such as to innocent passersby) may still be analyzed under other constitutional provisions. See County of Sacramento v. Lewis, 523 U.S. 833, 854 (1998). This Article discusses only excessive force that occurs in the context of an arrest or investigatory stop.

          [8].      For a discussion of how deference to law enforcement shapes the federal courts’ understanding of the constitutional boundaries of excessive force, see Osagie K. Obasogie & Zachary Newman, The Endogenous Fourth Amendment: An Empirical Assessment of How Police Understandings of Excessive Force Become Constitutional Law, 104 Cornell L. Rev. 1281, 1322 (2019). For a broader assessment of the history of judicial deference to police, see Anna Lvovsky, The Judicial Presumption of Police Expertise, 130 Harv. L. Rev. 1995, 2052 (2017).

          [9].      U.S. Const. amend. XIV, § 5 (“The Congress shall have power to enforce, by appropriate legislation, the provisions of this article.”). As background,

On April 20, 1871, the Forty-Second Congress enacted the third Civil Rights Act known as the Ku Klux Klan Act. The primary purpose of the Act was to enforce the provisions of the Fourteenth Amendment. Section 1 of the Act added civil remedies to the criminal sanctions contained in the Civil Rights Act of 1866 for the deprivation of rights by an officer “under color of law.” Thus, Section 1 of the Ku Klux Klan Act was the precursor of the present day 42 U.S.C. § 1983. . . . On June 22, 1874, the statute became § 1979 of Title 24 of the Revised Statutes of the United States, and upon adoption of the United States Code on June 30, 1926, the statute became § 43 of Title 8 of the United States Code. In 1952 the statute was transferred to § 1983 of Title 42 of the United States Code, where it remains today.

Richard H.W. Maloy, “Under Color of”—What Does It Mean?, 56 Mercer L. Rev. 565, 574 (2005) (citations omitted). Charles Abernathy notes that

we have long recognized that the resurrection of § 1983 converted the fourteenth amendment from a shield into a sword by providing a civil action for vindication of constitutional rights and, to the extent that damages have gradually become the authorized remedy for § 1983 violations, we have easily come to think of such actions as constitutional torts—civil damage remedies for violations of constitutionally defined rights.

Charles Abernathy, Section 1983 and Constitutional Torts, 77 Geo. L.J. 1441, 1441 (1989) (citations omitted).

        [10].      See generally Osagie K. Obasogie & Anna Zaret, Plainly Incompetent: How Qualified Immunity Became an Exculpatory Doctrine of Police Excessive Force, 170 U. Pa. L. Rev. 407 (2022).

        [11].      H.R. 7085, 116th Cong. (2020).

        [12].      S. 492, 117th Cong. (2021).

        [13].      Emma Tucker, States Tackling ‘Qualified Immunity’ for Police as Congress Squabbles over the Issue, CNN (Apr. 23, 2021), https://www.cnn.com/2021/04/23/politics/qualified-immunity-police-reform/index.html [https://perma.cc/G8HF-WD6H].

* Haas Distinguished Chair and Professor of Law, University of California, Berkeley School of Law (joint appointment with the Joint Medical Program and School of Public Health). B.A. Yale University; J.D. Columbia Law School; Ph.D. University of California, Berkeley. Many thanks to Richard Banks, Laura Gómez, Sonia Katyal, and Gerald López for reviewing early drafts. Comments from participants at the Stanford Law School Race and Law Workshop and UCLA Critical Race Theory Seminar and Workshop were extremely helpful. Sara Jaramillo provided excellent research assistance. 

† Senior Attorney, Legal Aid Association of California. B.A. University of California, Santa Cruz; J.D. University of California, Hastings College of the Law.

Transgender Rights & the Eighth Amendment

The past several decades have witnessed a dramatic shift in the visibility, acceptance, and integration of transgender people across all aspects of culture and the law. The treatment of incarcerated transgender people is no exception. Historically, transgender people have been routinely denied access to medically necessary hormone therapy, surgery, and other gender-affirming procedures; subjected to cross-gender strip searches; and housed according to their birth sex. But these policies and practices have begun to change. State departments of corrections are now providing some, though by no means all, appropriate care to transgender people, culminating in the Ninth Circuit’s historic decision in Edmo v. Corizon, Inc. in 2019—the first circuit-level case to require a state to provide transition surgery to an incarcerated transgender person. Other state departments of corrections will surely follow, as they must under the Eighth Amendment. These momentous changes, which coincide with a broader cultural turn away from transphobia and toward a collective understanding of transgender people, have been neither swift nor easy. But they trend in one direction: toward a recognition of the rights and dignity of transgender people.

* Jennifer L. Levi, Professor of Law, Western New England University Law School.

† Kevin M. Barry, Professor of Law, Quinnipiac University School of Law. Thanks to Shannon Minter for thoughtful advice; to the Southern California Law Review staff for editorial assistance; and to Lexie Farkash for research assistance.

Designing Supreme Court Term Limits

Since the Founding, Supreme Court Justices have enjoyed life tenure. This helps insulate the Justices from political pressures, but it also means that unpredictable deaths and strategic retirements determine the timing of Court vacancies. To regularize the appointments process, a number of academics and policymakers have put forward detailed term-limits proposals. However, many of these proposals have been silent on several key design decisions, and there has been almost no empirical work assessing the impact that term limits would have on the composition of the Supreme Court.

This Article provides a framework for designing a complete term-limits proposal and develops an empirical strategy for assessing the effects of instituting term limits. The framework we introduce outlines the key design decisions that any term-limits proposal must make, including frequently overlooked questions such as what the default outcome would be in the event of Senate inaction on a president’s nominee. The empirical strategy we develop uses simulations to assess how term-limits proposals would have shaped the Court had they been in place over the last eighty years of American history. These simulations enable comparative assessments of term-limits proposals relative to one another and to the historical status quo of life tenure. Using these simulations, we are able to isolate the design features of existing proposals that produce significant differences in the composition of the Supreme Court. For instance, proposals that commence appointing term-limited Justices immediately could complete the transition in just sixteen years, but proposals that wait until the sitting Justices leave the Court to appoint term-limited Justices would take an average of fifty-two years to complete the transition. Our results also reveal that term limits are likely to produce dramatic changes in the ideological composition of the Court. Most significantly, the Supreme Court has had extreme ideological imbalance for sixty percent of the time since President Franklin Roosevelt’s effort to pack the Court, but any of the major term-limits proposals would have reduced the amount of time with extreme imbalance by almost half.
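As a purely illustrative aside, and not the authors’ model, the toy simulation below conveys the general flavor of such an exercise: Justices serving staggered eighteen-year terms with a vacancy every two years, appointing parties drawn at random, and a 6-3 or wider split counted as “extreme imbalance.” Every parameter here is an assumption chosen for illustration only.

```python
# Toy sketch only: staggered eighteen-year terms under invented parameters,
# not the Article's actual simulation strategy or data.
import random

YEARS = 80          # hypothetical horizon, loosely echoing an eighty-year window
TERM_LENGTH = 18    # one appointment every two years sustains a nine-member Court
random.seed(0)

# Hypothetical appointing-party sequence: +1 or -1, redrawn each four-year presidency.
appointer = [random.choice([1, -1]) for _ in range(YEARS // 4 + 1)]

court = []          # list of (ideology, year_appointed)
extreme_years = 0

for year in range(0, YEARS, 2):
    # Retire Justices whose eighteen-year terms have expired, then fill the vacancy.
    court = [(ideol, yr) for ideol, yr in court if year - yr < TERM_LENGTH]
    court.append((appointer[year // 4], year))
    if len(court) == 9:
        margin = abs(sum(ideol for ideol, _ in court))
        if margin >= 3:          # a 6-3 or wider split, treated here as "extreme imbalance"
            extreme_years += 2   # credit the two-year interval
print(f"Years with extreme imbalance (toy model): {extreme_years} of {YEARS}")
```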

          *     Professor of Law, University of Chicago Law School. J.D. 2013, Ph.D. 2013, A.M. 2012, Harvard University. M.A., B.A. Yale University, 2007.

          †     Treiman Professor of Law, Washington University in St. Louis. J.D. Harvard University 2008, A.B. Duke University 2004.

          ‡     Associate Professor of Law, Washington University in St. Louis. Ph.D., 2015, Cornell University. J.D. 2011, Washington University. B.S.E. 2008, Grand Valley State University.                  

          ††     Professor of Public Policy, Harvard Kennedy School. Ph.D. 2012, A.M. 2011, A.B. 2000, Harvard University. J.D. 2004, Stanford University. For helpful conversations and comments, we are grateful to Gabe Roth and participants at workshops at the University of Chicago Law School, Washington University School of Law, NYU Law School, and the American Law & Economics Association Annual Meeting.