
Article | Constitutional Law
You’re Fired: The Original Meaning of Presidential Impeachment
by James C. Phillips* & John C. Yoo†

From Vol. 94, No. 5 (2021)
94 S. Cal. L. Rev. 1191 (2021)

Keywords: Impeachment, Mueller Report, Federalist


In 2020, for just the third time in its history, the Senate conducted an impeachment trial of the President. While the 2020 case of President Donald Trump presented different facts than those of President Andrew Johnson in 1868 or President Bill Clinton in 1998, the Senate rendered the same verdict of acquittal. Initial investigations had probed whether President Trump or his campaign had coordinated with Russia to influence the 2016 elections, and then pursued the possibility of obstruction of the investigations themselves. But when the Justice Department decided that it could not indict a sitting President, Congress focused its inquiry on whether President Trump had withheld foreign aid from Ukraine until its leaders launched an investigation into his opponent in the 2020 election, then-former Vice President and current President Joseph Biden.

Whether Congress could constitutionally remove President Trump through impeachment raises questions as old as the Republic and facts as new as social media. The Constitution defines the grounds for impeachment in language, such as “high Crimes and Misdemeanors,” that remains a mystery today. Does impeachment require a federal crime, or can it include abuses of power and obstruction of Congress? How could Congress define these “high Crimes and Misdemeanors” in a neutral way that would not deter future Presidents from invoking their legitimate authority or unduly place the executive under legislative control? Can Congress remove the President because of a good-faith disagreement over the scope of executive power or the meaning of the Constitution itself? Even if impeachment includes noncriminal acts, does the Constitution require that the offenses rise to a level of seriousness that justifies removal? President Trump’s case raised the further question whether Congress could remove the President for actions that served a plausible public interest, or whether it sufficed that the President had also pursued personal interests. The 2020 trial finally asked whether impeachment provides the only remedy for presidential misconduct, or whether the Constitution provides other remedies.

This Article seeks to answer these questions by examining the original understanding of presidential impeachment. We undertake this analysis both because the Framers’ work formed the central basis for the prosecution and defense cases during President Trump’s first impeachment and because other guides to constitutional meaning are lacking. As the Supreme Court has decided that impeachment qualifies as a “political question” outside Article III’s case or controversy requirement,[1] these questions have no legal answers from traditional sources, such as judicial opinions. Practice also provides little help. The House of Representatives has impeached only two other Presidents in American history. In the wake of President Abraham Lincoln’s assassination, Republicans in Congress found their plans for a radical reconstruction of the South frustrated by the new President Andrew Johnson, a Southern Democrat who favored a more lenient peace.[2] In 1868, the House impeached President Andrew Johnson for conducting himself in office in a disgraceful, yet not illegal, manner. President Johnson broke prevalent norms by speaking directly to the people to lobby for legislation and attacking Congress as “traitors.” Congress responded by including an article of impeachment for his unacceptable rhetoric.[3] To strengthen their case, congressional Republicans made it a crime for the President to fire his cabinet officers without their consent—a law that the Supreme Court would later find an unconstitutional infringement of the President’s removal power.[4]

Exactly 130 years later, the House flexed its impeachment powers for only the second time in its history, but over the sordid and banal rather than the high and mighty. Rather than the reconstruction of the nation after a terrible Civil War, the impeachment of President Bill Clinton asked whether the President had committed perjury about his affair with a White House intern, Monica Lewinsky. The President had committed a crime, but the independent counsel, Kenneth Starr, concluded that the Justice Department could not indict a sitting President, much as it would almost two decades later. Instead, Starr referred the case to Congress to decide whether to take action. While the House impeached along a party-line vote, the Senate refused to convict, also on a close party-line vote. President Clinton’s argument that he had only lied about sex and had done no harm to the nation on a par with treason or bribery seemed to carry the day. But the partisan nature of the vote also suggested that impeachment and removal would become a test of party discipline, in that Presidents would likely survive so long as they could maintain the support of thirty-four Senators of their party.

A third President, Richard Nixon, likely would have faced impeachment and removal had he not resigned on August 9, 1974. Both a special counsel and the House had launched probes into a burglary of Democratic Party offices at the Watergate Hotel during the President’s reelection campaign. After the Supreme Court ordered President Nixon to obey a subpoena for White House tapes of meetings where the President had allegedly ordered the cover-up of the break-ins, the Judiciary Committee reported three articles of impeachment to the full House. President Nixon resigned before the House could vote but only after he had met with delegations of Republican congressmen who told him that he would likely lose the votes in Congress. While the committee had considered a wide variety of charges, such as bombing Cambodia without congressional authorization and tax cheating, in the end it recommended impeachment only for obstruction of the special counsel investigation, impeding the House’s probe, and for violating the individual rights of his political enemies through misuse of the CIA, FBI, and IRS. Unlike the Johnson and Clinton examples, however, President Nixon’s case never came to a vote in the House, not to mention a full trial in the Senate. It is difficult to conclude, therefore, that President Nixon’s resignation creates some kind of precedent in the way that the 1868 and 1998 examples might.

It is not even clear that the Nixon case or even the Johnson and Clinton impeachments should create any precedent, in a judicial sense, for Congress. In both the Johnson and Clinton cases, the Senate refused to convict. It could have found that the House had not “proved” its facts, though in both cases the facts seemed fairly clear. President Johnson had indeed fired his Secretary of War without the consent of Congress; President Clinton had lied to prosecutors in a deposition recorded on video. If the facts were proven, then the Senate must have acquitted because they did not amount to high crimes and misdemeanors as defined by the Constitution. But the Senate leaves behind no written opinion to explain its decision because, like a jury in a criminal trial, it determines only conviction or acquittal. Therefore, we can draw no firm legal precedents from these earlier impeachments.

A previous Senate, moreover, could not bind a future Senate to its interpretation of the constitutional standards on impeachment. One Congress generally cannot bind a future Congress; as with all three branches of government, Congress can simply undo any action by a past Congress by passing a repealing law or rule. The Senate that tried President Andrew Johnson may well have concluded that it should not remove a President for exercising the executive power to fire cabinet officers. It could have believed that the exercise of constitutional power could not qualify as a high crime or misdemeanor, or it could have thought the President had to actually violate federal criminal law. But the Reconstruction Senate never took a vote, issued an opinion, or enacted an internal rule that interpreted the standard for impeachment. Even if it had, a contemporary Senate could change any rule or opinion by majority vote, just as the Senate changed the filibuster rule to exclude judicial and cabinet appointments. Senators who wanted to follow the Johnson or Clinton impeachments as some sort of precedent would have to appeal to tradition, rather than any legal rule, to govern a Trump impeachment.

Without any legal precedents, or even any system of binding practice, the original understanding of the Constitution becomes magnified in importance. The Constitution does not provide for the trial or punishment of a sitting President by prosecutors or a regular court. Instead, the Impeachment Clause creates a means to remove “the President, Vice President, and all civil Officers of the United States.”[5] It vests the power to impeach in the House and specifies no vote requirement, so we have always assumed it occurs by majority vote. Impeachment amounts to an indictment in a criminal case, in which prosecutors decide they have enough evidence to bring a prosecution before a jury. Vesting the power in the House, rather than prosecutors or judges, could suggest that impeachment will not fall solely within the preserve of law, but will involve politics as well. Without any reading of the Impeachment Clauses based on legal authorities, Congress might allow politics to overwhelm law in its indictment and trial of Presidents. Then-House Minority Leader Gerald Ford, for example, defended the impeachment of Justice Douglas because “an impeachable offense is whatever a majority of the House of Representatives considers it to be at a given moment in history.”[6]

Our analysis reveals new sources of materials that make the first Trump impeachment more complex than presented in the trial, debates, and media commentary. Contrary to the claims of President Trump’s defense, we find that the Framers understood “high Crimes and Misdemeanors” to include conduct that went beyond the violation of federal criminal law. Such offenses could include abuse of power, but we also conclude that these acts had to inflict serious harm upon the nation. A President could commit a crime that nonetheless imposed too little injury upon the public to justify removal (as with the Clinton example). A President could also commit no crime, but his misconduct or negligence could so harm the nation as to justify removal from office. We also find that the Framers were so worried that Congress would turn impeachment toward partisan political purposes that they erected the two-thirds requirement for conviction to preserve executive independence. Instead of impeachment, the Framers expected that elections would provide the primary check on presidential misconduct.

This Article proceeds in three parts. Part I reviews the investigations into President Trump, his first impeachment and trial, and his acquittal. Part II uses both new and old techniques to recover the history of the drafting and ratification of the Constitution. We use computerized textual analysis—corpus linguistics—of British materials pre-dating the Constitution’s framing to analyze what those of the founding generation would have believed the phrase “high Crimes and Misdemeanors” meant. We then examine the drafting and ratification of the Constitution to understand how the Founders expected the Impeachment Clauses to work. Part III draws forth lessons from this history and applies them to the issues raised by the Trump impeachment.


        *        Assistant Professor of Law, Dale E. Fowler School of Law, Chapman University. We received helpful comments from Jesse Choper, who has now witnessed seventy-five percent of all presidential impeachments. The authors wish to thank Francis Adams, Min Soo Kim, Darwin Peng, David Song, and the research librarians at Chapman University’s Fowler School of Law for research assistance.

       †     Emanuel S. Heller Professor of Law, University of California at Berkeley Law School; Visiting Scholar, American Enterprise Institute; Visiting Fellow, Hoover Institution, Stanford University. Professor Yoo thanks the Thomas W. Smith Foundation for support.
         [1].     Nixon v. United States, 506 U.S. 224, 253 (1993).

         [2].     See Michael Les Benedict, The Impeachment and Trial of Andrew Johnson 87 (1973).

         [3].     Jeffrey K. Tulis, Impeachment in the Constitutional Order, in The Constitutional Presidency 229, 232 (Joseph M. Bessette & Jeffrey K. Tulis eds., 2009).

         [4].     Myers v. United States, 272 U.S. 52, 176 (1926).

         [5].     U.S. Const. art. II, § 4.

         [6].     Kenneth C. Davis, The History of American Impeachment, Smithsonian Mag. (June 12, 2017).




Article | Criminal Law
Prosecutors and Mass Incarceration
by Shima Baradaran Baughman* & Megan S. Wright†

From Vol. 94, No. 5 (2020)
94 S. Cal. L. Rev. 1123 (2020)

Keywords: Prosecutor Discretion, Charging


It has long been postulated that America’s mass incarceration phenomenon is driven by increased drug arrests, draconian sentencing, and the growth of the prison industry. Yet among the major players—legislators, judges, police, and prosecutors—one of these is shrouded in mystery. While laws on the books, judicial sentencing, and police arrests are all public and transparent, prosecutorial charging decisions are made behind closed doors with little oversight or public accountability. Indeed, largely unnoticed by commentators, over the last decade or more crime has fallen and police have cut arrests accordingly, yet prosecutors have actually increased the ratio of criminal court filings per arrest. Why? This Article presents quantitative and qualitative data from the first randomized controlled experiment studying how prosecutors nationally decide whether to charge a defendant. We find rampant variation and multiple charges for a single crime, along with the lowest rates of declination yet found in a national study. Crosscutting this empirical analysis is an exploration of Supreme Court and prosecutor standards that help guide prosecutorial decisions. This novel approach makes important discoveries about prosecutorial charging that are critical to understanding mass incarceration.



          *     Associate Dean of Faculty Research and Development, Presidential Scholar and Professor of Law, University of Utah College of Law. We thank the Yale University Institution for Social and Policy Studies for their support of this project (Yale ISPS ID P20-001). Christopher Robertson was critical to the underlying empirical work discussed in this Article. We appreciate the feedback received at the Annual Center for Empirical Legal Studies Conference hosted at the University of Michigan. Special thanks to John Rappaport, Sonja Starr, Rachel Barkow, Carissa Hessick, Darryl Brown, Sim Gill, Andrew Ferguson, Jeffrey Bellin, L. Song Richardson, Cathy Hwang, Andy Hessick, Christopher Griffin, Ron Wright, and John Pfaff. We appreciate the comments of the Rocky Mountain Junior Conference, and the University of Utah faculty research grant for making this research possible. I am grateful for research assistance from Jacqueline Rosen, Alyssa Campbell, Amylia Brown, Carley Herrick, Tyler Hubbard, Emily Mabey, Olivia Ortiz, Haden Gobel, Hope Collins, Rebekah Watts, Melissa Bernstein, Alicia Brillon, Kerry Lohmeier and Ross McPhail. I am grateful for the careful editing from the Southern California Law Review staff and editors, especially Caleb Downs, Tia Kerkhof, Mindy Vo, and Samuel Clark-Clough. I am especially thankful for empirical support from Jessica Morrill. We are thankful to all of the prosecutors who nationally participated in this experiment. IRB 69654 (University of Utah).

   †        Assistant Professor of Law, Medicine, and Sociology, Penn State Law and Penn State College of Medicine; Adjunct Assistant Professor of Medical Ethics in Medicine at Weill Cornell Medical College. Thanks to Laureen O’Brien, Ellen Hill, Leann Jones, Danielle Curtin, and Joseph Radochonski for research assistance during data collection. Thanks to Veronica Rosenberger for assistance with qualitative data analysis.





Article | Financial Regulation
Dynamic Regulation
by Natasha Sarin*

From Vol. 94, No. 5 (2021)
94 S. Cal. L. Rev. 1005 (2021)

Keywords: Financial Regulation, Great Recession, Bank Capital


The average American family lost one-third of its net worth during the Great Recession. One in ten families lost their homes. One in ten workers lost their jobs.[1] The consequences of the crisis still reverberate today, reflected in distrust of large financial institutions,[2] dissatisfaction with politics as usual,[3] and concern that capitalism is no longer working for the American people.[4]

It is possible to draw a line from the crisis to the election of Donald J. Trump as the 45th President of the United States. Further, in the most recent presidential election cycle, much of Senator Elizabeth Warren’s case for her electability was tied to her work in the Recession—arguing that she, unlike Republicans (and many Obama Administration officials), was focused on putting consumers first after the worst downturn since the Great Depression.[5]

While much has been written on the ways in which financial regulation has been overhauled since the crisis,[6] little research has been done on whether this overhaul was sufficient, or whether the system is still at risk. This Article steps in to fill this void. I argue that, despite regulators’ statements to the contrary, the vulnerabilities that led to the Recession remain in our financial sector. Without a course correction, the next time will be the same, and the consequences for ordinary Americans likely even more dire.

This Article differs substantially in tone from the calm espoused by those in the financial regulatory community of late. For example, in a speech in July 2019, Federal Reserve Vice Chair Randal Quarles announced that “banks have now built enough capital to withstand a severe recession.”[7] He further stated that it was now appropriate to deregulate large financial institutions because, since the crisis, large banks have addressed the “substantial deficiencies in their ability to measure, monitor, and manage their risks”—deficiencies that led to the Great Recession.[8]

Vice Chair Quarles is not alone in his optimism. Federal Reserve officials have claimed repeatedly in recent years that financial crises are behind us due to substantial reforms enacted in the aftermath of the Recession.[9] These reforms include decreasing banks’ ability to make risky bets and designing a plan for how to unwind large financial institutions with minimal harm to consumers. Perhaps most importantly, banks are now subject to annual stress tests that are intended to measure their ability to cope with a crisis-like event. For the last several years, all large financial institutions have cleared the stress tests with flying colors, suggesting that the system today is well equipped to weather the next storm.

At the same time, the market has not been so sanguine. In fact, in August 2019, only weeks after passing the stress tests, all large banks lost ten percent of their market value. Their probability of default, assessed from the cost of buying insurance that pays out if the firm defaults, skyrocketed. Analysts attributed this steep decline in value to an increase in bank risk: The business of banking involves borrowing short term and lending long term. In 2019, revenues from lending (long-term interest rates) fell below the costs of borrowing (short-term interest rates). This threatened the business model of large financial institutions and also prompted fears that a recession was imminent.[10] Concerned credit analysts downgraded financial firms,[11] and large financial institutions themselves advised their clients to begin to prepare for a recession.[12]

While industry participants, observers, and market signals were sounding alarms, regulatory measures of bank health were static. It is plausible that the market overreacted in August (in fact, it experienced a partial recovery in subsequent weeks), but it is unlikely that the risks in the financial sector were unchanged during this period as regulatory measures of bank capital suggested. Market measures provide a more dynamic assessment of the evolution of financial stability during this period. Regulators can, and should, monitor this information. Yet they do not.

Since the 1980s, capital regulation has been the primary form of bank regulation. Banks fail when the total amount of money they owe (their liabilities) exceeds the total value of the assets they have. The difference between a bank’s assets and liabilities is known as equity capital. Capital helps banks absorb losses that decrease the value of their assets and is measured by regulators as the difference between book values of bank assets and liabilities, known as book or “regulatory” capital.

However, this information is reported only quarterly and is prone to manipulation by sophisticated firms.[13] Because regulators rely solely on backward-looking, static, and manipulatable measures of capital, their assessments paint an inaccurate picture of bank health. I show this empirically in two ways.
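The book-versus-market distinction above reduces to simple arithmetic. The following sketch contrasts a book ("regulatory") capital ratio with a market-based one; all figures are invented for illustration and do not come from the Article.

```python
# Book capital is the accounting difference between a bank's assets and
# liabilities; a market-based measure substitutes the market's valuation
# of the bank's equity, which is dynamic and forward-looking.

def book_capital_ratio(book_assets: float, book_liabilities: float) -> float:
    """Book equity capital as a share of book assets."""
    return (book_assets - book_liabilities) / book_assets

def market_capital_ratio(market_cap: float, book_liabilities: float) -> float:
    """Market equity as a share of (market equity + book liabilities)."""
    return market_cap / (market_cap + book_liabilities)

# A hypothetical bank: quarterly book values look healthy...
assets, liabilities = 1_000.0, 920.0   # $bn, book values
print(f"book ratio:   {book_capital_ratio(assets, liabilities):.1%}")    # 8.0%

# ...but the market values its equity at half of book equity, a signal
# the book measure cannot register until losses are formally recognized.
market_cap = 40.0                      # $bn, market capitalization
print(f"market ratio: {market_capital_ratio(market_cap, liabilities):.1%}")  # 4.2%
```

The point of the sketch is only that the two ratios can diverge sharply in real time while the book measure stays frozen between quarterly reports.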

First, I subject large banks in the United States to a hypothetical “market-based” stress test based on the value financial markets assign a bank’s business. These measures are dynamic and forward-looking, unlike the book-capital measures regulators have relied on historically. The results of a market-based stress test demonstrate that large financial institutions would experience cataclysmic losses in the event of a crisis like the Recession. Despite policymakers’ statements to the contrary, it is unlikely that these banks would be able to continue to intermediate as usual in the absence of substantial government assistance (that is, bailouts) during the next crisis.

Second, I document the failure of regulatory capital measures during the Great Recession and show that these measures were lagging indicators of bank health. Only days before some of the largest banks in the country failed, capital ratios indicated that all was well. In the case of Bear Stearns, Securities and Exchange Commission (“SEC”) Chairman Christopher Cox even testified before Congress after the firm had failed that it was healthy and well capitalized based on regulatory measures.[14] In contrast, market measures of bank health signaled cause for concern an entire year before the bankruptcy of investment banking giant Lehman Brothers sent the economy into free fall. In addition to being slow moving, regulatory capital measures also proved inaccurate: it was impossible to distinguish between healthy and doomed banks based on their reported capital levels. In contrast, there was significant divergence in the market’s perception of risk at these institutions, and its prediction of bank failures proved prophetic.

Our very recent experience illustrates the consequences of misplaced reliance on book capital as a measure of bank health, yet the post-crisis overhaul of financial regulation did not include a rethinking of the role this information plays in our assessments of large financial institutions. There has been no move toward incorporating more accurate market information into the regulatory regime.

This is an unforced error with significant repercussions. Large banks remain vulnerable to a crisis, and these risks are unacknowledged by the regulatory community. The singular focus on regulatory capital has also fueled a misunderstanding of the causes of the Great Recession and the tools policymakers had at their disposal to address them at their onset.

Policymakers typically offer two responses when asked about their failure to act more aggressively in the early stages of the crisis—that is, before Lehman’s bankruptcy—to forestall the catastrophe that ensued. The first is that the crisis could not be foreseen; illustrated, for example, by former Treasury Secretary Henry Paulson’s 2018 statement that his “strong belief is that these crises are unpredictable in terms of cause or timing or the severity when they hit.”[15] The second is that regulators lacked the legal authority to bolster struggling institutions. For example, as former Treasury Secretary Timothy Geithner stated in 2014: “The Fed didn’t have the legal authority to force Bear Stearns, Lehman Brothers, or other investment banks to raise more capital. We couldn’t even generate stress scenarios bleak enough to force the banks we regulated to raise more capital.”[16] Neither of these explanations, however, is accurate.

As to the unpredictability of financial crises, as this Article will show conclusively, substantial time existed between the first tremors in financial markets in the summer of 2007 and their eventual collapse in the fall of 2008. I assemble data on a variety of market-risk measures (including stock-price volatility, credit default swap (“CDS”) spreads, and market-based capital measures) and compare these with regulatory capital indicators. Market-based risk measures for large financial firms raised red flags for an entire year before the system collapsed. However, regulatory measures are slow to update—thus, failing banks were well above regulatory requirements for minimum capital ratios, not because they were healthy, but because these measures are flawed.

As to the lack of legal authority to intervene with financial institutions, this Article reviews the substantial legal authority at the disposal of financial regulators and demonstrates that lack of authority was not the binding constraint to action. Some pieces of evidence from this novel analysis are especially dispositive: First, regulators in fact did rely on their substantial legal authority to strengthen small financial institutions once risks emerged in the financial sector in the fall of 2007 and early 2008. This same authority could have simultaneously been wielded to bolster large, systemically important financial firms. Second, once the crisis was underway, regulators found ways to intervene and prevent even worse damage. By 2009, they forced banks to stop paying dividends and to raise new capital, which prevented additional failures. No new legal authority emerged between 2007 and 2009, which proves that lack of authority cannot explain the failure to respond more aggressively to the crisis at its onset.

In fairness to regulators, hindsight is twenty-twenty. It was impossible to predict with certainty in the summer of 2007 that a Lehman-size catastrophe was a year away. However, in the months leading up to Lehman, and especially after the collapse of Bear Stearns in the spring of 2008, the probability of a systemic collapse increased dramatically. In February 2008, academics presenting to Federal Reserve officials estimated that the losses that followed the collapse of the housing market would total about $500 billion, with half being borne by large and heavily leveraged financial institutions. This, they estimated, would imply a $2.3 trillion contraction in bank balance sheets—a substantial decrease in lending to households and businesses that would have immediate real consequences.[17] The way to prevent this contraction was to increase banks’ capital levels so that they would neither fail nor stop lending to households and businesses when imminent losses began to accumulate.

Yet instead of hoarding and raising capital to buffer against imminent asset losses, more than $100 billion of bank capital left the financial system in the form of dividend payouts to bank shareholders in the year before Lehman’s catastrophic bankruptcy. In fact, Lehman increased its dividend by thirteen percent in January 2008—six months before it collapsed and months after industry observers were aware of significant problems at the firm.[18] This is akin to deflating an airbag exactly when the risks of a crash are rising. The same occurred in the lead-up to the COVID-19 crisis—regulators allowed capital to be paid out to shareholders in the form of dividends and share buybacks at the same time monetary and fiscal authorities were contemplating economic interventions of unparalleled scope.

If not wanting for time or authority, what caused regulators to underreact to the initial stages of the crisis? This Article attributes this failure to reliance on regulatory capital, which painted (and continues to paint) an overly optimistic picture of financial stability.

Specifically, regulators failed to act in the early stages of the crisis because the default rule was inaction until book-capital levels signaled distress. Many looked at banks’ high regulatory capital ratios and concluded that there were few risks in the system: in the month before Lehman’s collapse, one Federal Reserve official guessed “that the level of systemic risk has dropped dramatically and possibly to zero.”[19] Others believed that, although it would be helpful for banks to have more capital, they were unlikely to do so while well above regulatory capital minimums, pointing to banks’ assertions that “now is not a good time” for equity-raising. Still others believed that acting aggressively—for example, by restricting banks’ dividend payments—would fuel a panic rather than prevent one.[20]

It is inaccurate and unfair to equate today’s regulatory regime to that in place in the summer of 2007. Capital requirements are higher, so banks have more of a cushion in place to bolster themselves when their assets begin to lose value. But the exercise of stress testing highlights the vulnerabilities that remain—that is, should a situation arise in which losses are so large that banks need to recapitalize, regulators will be slow to force them to do so because our tools of measuring banks’ risk, despite their known unreliability, have yet to be overhauled.

This Article provides a way forward, arguing that supplementing our understanding of financial stability with market information will paint a fuller picture. It also makes a case for automating regulatory action when banks appear undercapitalized based on either regulatory or market measures. If in place during the crisis, such a regime would have forced banks to hoard and raise new capital in the year leading up to Lehman Brothers’ collapse, decreasing the need for costly government bailouts. The regulatory innovations advanced in this Article will prevent the next recession from becoming a “Great” Recession.

I propose different approaches to incorporating market information into the financial regulatory regime. The most extreme form would automate an aggressive response to market indicia that distress is imminent. This approach, which I label “dynamic capital regulation,” would quickly recapitalize banks the market deems to be on the brink. This recapitalization could be accomplished through: (1) a market-based stress test whereby failure requires new capital-raising; (2) the requirement that banks purchase capital insurance; (3) the conversion of some proportion of bank debt to equity, which eliminates the risk that creditors will be able to withdraw funds and push the bank to failure; or (4) a market trigger that forestalls capital leaving the financial system when bank equities experience drastic moves.
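Option (4), the market trigger, can be sketched as a mechanical rule: when a bank's stock suffers a large trailing drawdown, capital distributions (dividends, buybacks) are suspended so that capital stays in the system. The 30% threshold and 250-day window below are invented for illustration and are not drawn from the Article.

```python
# A minimal market-trigger sketch: distributions are disallowed whenever
# the stock's drawdown from its trailing peak breaches a preset limit.

def distributions_allowed(prices: list[float],
                          drawdown_limit: float = 0.30,
                          window: int = 250) -> bool:
    """Return False when the trailing drawdown breaches the limit."""
    recent = prices[-window:]              # trailing window of daily prices
    peak = max(recent)                     # trailing peak
    drawdown = 1.0 - recent[-1] / peak     # fall from peak to latest price
    return drawdown <= drawdown_limit

# A hypothetical price path: stable, then a sharp decline to half its peak.
stable = [100.0] * 200
crisis = [100.0 - 2.0 * i for i in range(1, 26)]   # slides to 50
print(distributions_allowed(stable))               # True: no drawdown
print(distributions_allowed(stable + crisis))      # False: 50% drawdown
```

Because the trigger fires automatically, it removes the discretionary "wait for book capital to signal distress" default that the Article identifies as the source of regulatory underreaction in 2007–2008.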

These market-based approaches will increase the dynamism and the transparency of financial regulation. However, dynamic capital regulation will also raise concerns about death spirals—that is, that market speculators will short financial firms when dilution appears imminent. Properly designed regulation can address these concerns, as I describe.

Still, dynamic capital regulation is not a panacea. The result will be fewer Great Recessions but also more false positives, which create unnecessary pain for the financial sector and its shareholders. For example, banks may be barred from paying dividends in periods when distress is not actually imminent, despite market signals to the contrary. However, concerns about false positives may be overblown: the analysis in this Article demonstrates that the simplest market-based indicator (bank stock performance) correctly identifies the two financial crises that have occurred since 1990 and results in no false positives. Choosing which type of error we prefer—false positives that are unfairly harsh to banks and their shareholders versus false negatives that result in costly losses to the government and taxpayers—is a tradeoff that requires thoughtful deliberation.

This Article favors dynamic capital regulation based on a premise that our regulatory regime should favor the protection of ordinary citizens over the protection of bank shareholders. Incidentally, given the more extreme alternatives, this approach is also likely to be favored by large financial institutions; it will allow them to intermediate efficiently with low levels of capital in normal times and only require them to bolster themselves in extraordinary moments when distress appears likely. In contrast, approaches like the thirty percent capital requirement proposed by Professors Anat Admati and Martin Hellwig,[21] or the even more extreme proposal to discontinue financial intermediation altogether, advanced by Professor Adam Levitin, are less efficient and more punitive.[22]

The right approach to bank capital is ultimately a question of policy, which regulators must decide. The main objective of this Article is to force a debate that is currently missing in the financial regulatory community due to misplaced confidence in regulatory measures of bank health. Given the known failure of these measures to provide useful and timely indicia of distress during the Great Recession, our continued sole reliance on them is puzzling. Market data are plentiful and informative; ignoring them would be extremely ill-advised for our regulatory regime.

This Article proceeds as follows. Part I begins by demonstrating the importance of bank capital to the financial system and describes how financial crises begin. Part II tells the story of the Great Recession, arguing that the severity of the crisis could have been mitigated by more aggressive regulatory action in 2007 and 2008. Although authority for intervention existed, inaction was the consequence of a regulatory regime that fails to respond until regulatory measures of bank health—which are static and often inaccurate—signal cause for concern. Part III calls for overhauling the regulatory default to make action, rather than complacency, the automatic response to the early stages of a downturn. This approach would have forced banks to stop paying dividends and required raising new capital at the beginning of the financial crisis. This approach will prevent the next downturn from being “Great.” Part IV concludes.

         *       Assistant Professor of Law, the University of Pennsylvania Carey Law School, and Assistant Professor of Finance, the Wharton School of the University of Pennsylvania. I am indebted to Howell Jackson for first recommending that I write this Article and for providing feedback on various drafts. For helpful conversations, I thank Kathryn Judge, Dorothy Shapiro Lund, Timothy Geithner, David Hoffman, Andrei Shleifer, Jeremy Stein, Lawrence Summers, Daniel Tarullo, and Mark Van Der Weide.

          [1].  E.g., Fabian T. Pfeffer, Sheldon Danziger & Robert F. Schoeni, Wealth Levels, Wealth Inequality, and the Great Recession 1–2 (2014),
default/files/media/_media/working_papers/pfeffer-danziger-schoeni_wealth-levels.pdf []; Pew Rsch. Ctr., A Balance Sheet at 30 Months: How the Great Recession Has Changed Life in America 57 (2010).

         [2].     Jordan Smith, Millennials and Big Banks Have Trust Issues – Here Are Three Ways Financial Institutions Are Trying to Fix That, CNBC (Jan. 16, 2019, 5:52 PM),
banks-millennials-trust-jp-morgan-chase-goldman-bank-of-america.html [].

         [3].     Matt Taibbi, Turns Out That Trillion-Dollar Bailout Was, in Fact, Real, Rolling Stone (Mar. 18, 2019, 5:11 PM),
31 [].

         [4].     David Leonhardt, Opinion, American Capitalism Isn’t Working., N.Y. Times (Dec. 2, 2018), [].

         [5].     Gretchen Morgenson, Elizabeth Warren on Big Banks and Their (Cozy Bedmate) Regulators, N.Y. Times (Apr. 21, 2017), [].

         [6].     See, e.g., Viral V. Acharya, Thomas F. Cooley, Matthew Richardson & Ingo Walter, Regulating Wall Street: The Dodd-Frank Act and the New Architecture of Global Finance (2011); Samuel G. Hanson, Anil K Kashyap & Jeremy C. Stein, A Macroprudential Approach to Financial Regulation, 25 J. Econ. Persps. 3 (2011); Ben S. Bernanke, Chairman, Bd. of Governors of the Fed. Rsrv. Sys., Speech at the Federal Reserve Bank of Kansas City’s Annual Economic Symposium: Reflections on a Year of Crisis (Aug. 21, 2009),
events/speech/bernanke20090821a.htm [].

         [7].     Randal K. Quarles, Vice Chair for Supervision, Bd. of Governors of the Fed. Rsrv. Sys., Speech at a Research Conference Sponsored by the Federal Reserve Bank of Boston: Stress Testing: A Decade of Continuity and Change (July 9, 2019),
speech/quarles20190709a.htm [].

         [8].     Id.

         [9].     See Fed’s Yellen Expects No New Financial Crisis in ‘Our Lifetimes,’ Reuters (June 27, 2017, 10:49 AM), [] (noting Federal Reserve Chair Janet Yellen’s suggestion in 2017 that she “does not believe that there will be another financial crisis for at least as long as she lives”); Press Release, Bd. of Governors of the Fed. Rsrv. Sys., Federal Reserve Board Releases Results of Supervisory Bank Stress Tests (June 22, 2017, 4:30 PM), [] (reporting Governor Jerome H. Powell’s statement that the 2017 stress-test results “show that, even during a severe recession, our large banks would remain well capitalized, . . . allow[ing] them to lend throughout the economic cycle, and support households and businesses when times are tough”).

       [10].     Yield-curve inversion has preceded every recession since 1955. See Jonnelle Marte, Recession Watch: What Is an ‘Inverted Yield Curve’ and Why Does It Matter?, Wash. Post (Aug. 14, 2019, 12:51 PM), [].

       [11].     Thomas Franck, Bank of America Is Downgraded – Inverted Yield Curve, Fed Rate Cuts Will Hurt Income, Analyst Says, CNBC (Aug. 29, 2019, 10:27 AM), [].

       [12].     Scott Barlow, ‘We Advise Investors to Prepare for Recession’ – Citi, Globe & Mail (Mar. 29, 2019),
ors-to-prepare-for-recession-citi [].

       [13].     See Andreas Fuster & James Vickery, What Happens When Regulatory Capital Is Marked to Market?, Fed. Rsrv. Bank N.Y.: Liberty St. Econ. (Oct. 11, 2018), https://libertystreeteconomics. [https://per]; Jeremy Bulow, How Stress Tests Fail, VoxEU (May 9, 2019), [] (noting that the regulatory regime uses “regulatory rather than market measures for both the value and riskiness of bank assets—measures that failed badly during the financial crisis”).

       [14].     Christopher Cox, Chairman, U.S. Sec. & Exch. Comm’n, Testimony Before the U.S. Senate Committee on Banking, Housing and Urban Affairs: Testimony Concerning Recent Events in the Credit Markets (Apr. 3, 2008), [].

       [15].     Interview by Andrew Ross Sorkin with Ben Bernanke, Former Chair, Fed. Rsrv., Tim Geithner, Former Sec’y, U.S. Dep’t of the Treasury & Hank Paulson, Former Sec’y, U.S. Dep’t of the Treasury, in Washington, D.C. (Sept. 12, 2018) (transcript at 9, available at the Brookings Institution), [].

       [16].     Timothy F. Geithner, Stress Test: Reflections on Financial Crises 98 (2014).

       [17].     David Greenlaw, Jan Hatzius, Anil K Kashyap & Hyun Song Shin, U.S. Monetary Pol’y F., Leveraged Losses: Lessons from the Mortgage Market Meltdown 11 (2008).

       [18].     Viral Acharya, Hyun Song Shin & Irvind Gujral, Bank Dividends in the Crisis: A Failure of Governance, VoxEU (Mar. 31, 2009), [].

       [19].     Bd. of Governors of the Fed. Rsrv. Sys., Meeting of the Federal Open Market Committee on August 5, 2008, at 51 (2008) [hereinafter August 5, 2008, Meeting], [].

       [20].     Geithner, supra note 16, at 138 (“We considered forcing banks as a group to stop paying dividends in order to conserve capital, but we were concerned, perhaps mistakenly, that doing so might do more harm than good.”).

       [21].     See, e.g., Anat Admati & Martin Hellwig, The Bankers’ New Clothes: What’s Wrong with Banking and What to Do About It 179 (2013).

       [22].     See Adam J. Levitin, Safe Banking: Finance and Democracy, 83 U. Chi. L. Rev. 357, 454 (2016).




Article | Civil Procedure
The Political Reality of Diversity Jurisdiction
by Richard D. Freer*

From Vol. 94, No. 5 (2021)
94 S. Cal. L. Rev. 1083 (2021)

Keywords: Diversity Jurisdiction, Politics

Support for diversity of citizenship jurisdiction has ebbed and flowed.[1] From the 1960s through the 1980s, the prevailing wind blew strongly against it.[2] A determined group, led mostly by academics and federal appellate judges, spearheaded an effort to have Congress abolish the general form of federal subject matter jurisdiction.[3] These critics were confident that diversity jurisdiction had outlived its need, which, they said, was to provide a federal court for out-of-state litigants who feared bias in the local state courts. Advances in travel and communication, critics asserted, had homogenized American culture and rid us of any reasonable fear of bias at the hands of local courts.[4] Abolishing diversity jurisdiction would free busy federal judges from the nettlesome requirement of divining and applying state law and allow them more time for limning and developing federal law.[5] The effort was so successful that the House of Representatives overwhelmingly passed a bill abolishing diversity jurisdiction in 1978.[6]

But that effort and another determined frontal assault on diversity jurisdiction in 1990 failed. Now, a generation and more later, one sees little support for abolishing diversity. Even as its place on the federal docket grows—now accounting for more than one-third of the civil cases filed in district courts—one does not find academics or federal judges urging that these state-law-based cases be taken from the federal court docket.[7] On the other hand, diversity is now becoming a topic of increasing scholarly interest. The current commentary, however, is focused mostly on rationalizing diversity doctrine, making it consistent with its presumed purpose, rather than on curtailing it.[8] The accepted wisdom seems to be that diversity jurisdiction is here to stay, but that it might be recalibrated here and there.

What accounts for diversity’s survival and apparent acceptance? In retrospect, those who sought to abolish diversity jurisdiction failed to appreciate three fundamental characteristics about diversity jurisdiction. These characteristics should not be overlooked in our new era; they should guide efforts to rationalize diversity doctrine.

First, critics failed to understand that diversity jurisdiction is not something to be considered in vacuo, as a freestanding grant of judicial authority. It is instead an integral part of the economic engine of interstate commerce. Its function, ultimately, is to support the policies underlying the commerce, full faith and credit, and privileges and immunities clauses of the Constitution.[9] One should alter the availability of diversity jurisdiction only after considering the impact of such a change on this broader constitutional mission.

Second, those who attempted to abolish diversity understated the policy bases for diversity jurisdiction. Though the traditional “bias rationale” was indeed fear of bias against out-of-state litigants in state courts, today diversity jurisdiction is more broadly grounded in at least two ways. One is subtle and based in jurisdictional legislation of 1875: that the fear backing diversity jurisdiction is not state-based bias, but region-based bias.[10] The other, an “efficiency rationale,” developed over time with the Supreme Court’s jurisprudence regarding the Fourteenth Amendment’s restriction on state-court personal jurisdiction. Specifically, it is that diversity jurisdiction facilitates efficient joinder in complex cases in ways that state courts (hemmed in by the Supreme Court’s restrictive interpretation of the Fourteenth Amendment) simply cannot.[11] This rationale led to a resurgence of jurisdictional grants based upon diversity jurisdiction in the early part of this century so that there are now more diversity-based grants of subject matter jurisdiction than ever before.

Third, those who attempted to abolish diversity failed to appreciate that jurisdiction is ultimately a political issue. Whatever the policy bases for diversity jurisdiction, Congress retains it because the practicing bar wants it. The point was demonstrated in 1978. After the House passed its bill to abolish diversity jurisdiction, the organized bar leapt into action and defeated the effort in the Senate.[12] Thus, even if critics can show that diversity jurisdiction has outlived its need, they cannot show that it has overstayed its welcome, at least not in the eyes of the politically powerful group that wants it and uses it.

These characteristics should guide any efforts to make sense of, to render consistent, the various threads of the diversity canon. In addition, these efforts should take into account two other considerations. One, that canon is the result of complex interactions between Congress, which passes jurisdictional statutes, and federal courts, which interpret them. The bench is understandably concerned about docket control and holds considerable power in shaping jurisdiction with that as one consideration. Two, a national legal culture has evolved over our 230 years of experience with diversity jurisdiction. That culture includes the dynamic of intersystemic federalism, by which the federal and state courts engage in an ongoing dialogue about the development of the substantive law and of civil procedure.

It is unlikely that Congress will ever abolish diversity jurisdiction. At most, the legislature will tinker with some aspect of diversity in an effort to ensure that the federal court caseload does not get out of hand. As long as we maintain a rough equilibrium between the practicing bar’s desire to retain diversity jurisdiction and the federal bench’s desire to keep caseloads manageable, the status quo is fine—as a matter of political reality.


         *       Charles Howard Candler Professor of Law, Emory University. I have benefited from discussions with Tom Arthur, Pat Borchers, Collin Freer, Peter Hay, Dan Klerman, Dale Larrimore, Jonathan Nash, Rafael Pardo, Martin Redish, Robert Schapiro, Joanna Shepherd, and Howard Wasserman, for which I am grateful. I am indebted to the participants of the Federal Diversity Jurisdiction Conference held by the Emory Center on Federalism and Intersystemic Governance, in particular to Brooke Coleman for her insightful review of the Article. I am also grateful to Crystal Lee of the Emory Law Library, who provided invaluable assistance in locating historical materials.

         [1].     Congress conferred diversity jurisdiction on the federal trial courts in the original Judiciary Act of 1789. It did not confer general federal question jurisdiction until 1875. Thus, until 1875, diversity cases were the staple of the federal civil docket. In the late nineteenth century, increasing federal caseloads and invocation of diversity jurisdiction by corporations led to some calls for restriction. In the twentieth century, an increasing number of federal judges, including Justices Frankfurter and Jackson, and later Chief Justices Warren and Burger, attacked diversity jurisdiction as wasteful of federal judicial resources. The anti-diversity momentum gathered throughout the 1970s and peaked with the Report of the Federal Court Study Committee (“FCSC”) in 1990 [hereinafter Report, FCSC]. For an outstanding treatment of this history (from which the foregoing is gleaned), see James M. Underwood, The Late, Great Diversity Jurisdiction, 57 Case W. Rsrv. L. Rev. 179, 180–98 (2006). The Report, FCSC is discussed infra Section V.B.

         [2].     The American Law Institute’s Study of the Division of Jurisdiction Between State and Federal Courts (1969) was particularly influential. The American Law Institute (“ALI”) undertook the study in response to a 1959 request by Chief Justice Warren. The study concluded that diversity jurisdiction should be curtailed for two general reasons: that local bias was less pronounced than in earlier years and that the limited resources of the federal courts would better be expended on federal question cases. See John W. Reed, The War on Diversity, 18 Int’l Soc’y Barristers Q. 291, 291–92 (1983) (“Over the past decade or more there have been strong pressures to abolish the diversity jurisdiction of the federal courts. . . . The attack on diversity jurisdiction has its most distinguished formulation in a major study sponsored by the American Law Institute.”).

         [3].     By the “general form” of diversity jurisdiction, I mean cases invoking § 1332(a)(1). Technically, no one favors the total abolition of federal jurisdiction based on the diversity power. For instance, all support retaining federal interpleader jurisdiction, which is, of course, based upon the diversity power. And no one advocates curtailing alienage jurisdiction under § 1332(a)(2). This Article addresses efforts to abolish or to curtail significantly this general form of diversity jurisdiction. Throughout this Article, my references to diversity are to its general form.

         [4].     Such arguments date to the late nineteenth century, with the assertion that the advent of steam and electric power, and the Civil War, had so unified the country as to justify abolition of diversity. Alfred Russell, Avoidable Causes of Delay and Uncertainty in Our Courts, 25 Am. L. Rev. 776, 795–96 (1891). Justice Frankfurter favored abolition in the 1920s, saying “the mobility of modern life has greatly weakened state attachments. Local prejudice has ever so much less to thrive on than it did when diversity jurisdiction was written into the Constitution.” Felix Frankfurter, Distribution of Judicial Power Between United States and State Courts, 13 Cornell L.Q. 499, 521 (1928).

         [5].     See, e.g., David Crump, The Case for Restricting Diversity Jurisdiction: The Undeveloped Arguments, from the Race to the Bottom to the Substitution Effect, 62 Me. L. Rev. 1, 5 (2010) (“[A]bolition [of diversity jurisdiction] would preserve a federal forum for those with federal claims.”); Larry Kramer, “The One-Eyed Are Kings”: Improving Congress’s Ability to Regulate the Use of Judicial Resources, 54 L. & Contemp. Probs. 73, 77 (1991) (discussing reducing federal court workload by “reduc[ing] the scope of federal jurisdiction by eliminating unimportant categories of cases so that judges can devote more time to the cases that remain”). Dean Kramer served as a reporter of the FCSC, which concluded that no case had a “weaker claim” on the federal court docket than diversity jurisdiction. See infra note 130.

         [6].     H.R. 9622, 95th Cong. (1978) (proposing “to abolish diversity of citizenship as a basis of jurisdiction of Federal district courts”). The measure passed the House by a roll call vote of 266 to 133 on February 28, 1978.

         [7].     The most recent calls for abolishing diversity jurisdiction appear to be Debra Lyn Bassett, The Hidden Bias in Diversity Jurisdiction, 81 Wash. U. L.Q. 119, 138–45 (2003) and Crump, supra note 5, at 22 (concluding that “[t]oday, more than ever, there are persuasive arguments for the abolition or retrenchment of the general diversity statute”). For discussion of Professor Bassett’s proposal, see infra note 176 and accompanying text.

         [8].     Scott Dodson, Beyond Bias in Diversity Jurisdiction, 69 Duke L.J. 267, 309 (2019) (noting contemporary justification of diversity jurisdiction on efficiency grounds); Steven Gensler & Roger Michalski, The Million Dollar Diversity Docket, 47 BYU L. Rev. (forthcoming 2022) (studying a broad range of docket effects of increasing amount in controversy in diversity cases); Daniel E. Klerman & Jonathan R. Nash, Aligning Diversity Jurisdiction with Its Bias Rationale (2021) (unpublished manuscript on file with author) (calling for rationalization of diversity doctrine in line with its traditional bias rationale); Patrick Woolley, Diversity Jurisdiction and the Common-Law Scope of the Civil Action, 99 Wash. U. L. Rev. (forthcoming 2022) (asserting that diversity doctrine should be understood against the backdrop of common law joinder rules).

         [9].     See infra Part I.

       [10].     See infra Part II.

       [11].     See infra Part III.

       [12].     Indeed, the matter never came to a vote in the Senate. S. 2389, 95th Cong. (1978). See Underwood, supra note 1, at 199 n.91.




Note | Immigration Law
Time to Go Auer Separate Ways: Why the BIA Should Not Say What the Law Is
by Tatum P. Rosenfeld*

From Vol. 94, No. 5 (2021)
94 S. Cal. L. Rev. 1279 (2021)

Keywords: Board of Immigration Appeals (“BIA”), Auer

Neither fully legislative nor fully judicial, federal administrative agencies are tasked with “policing the minutiae.”[1] They codify and enforce the details of the regulatory scheme set out by Congress.[2] Simply put, administrative agencies administer the law. Agency regulations, however, like other legal sources, can be ambiguous.[3] Thus, interpretation is inevitably necessary either to confront a novel circumstance or to resolve an inherent semantic ambiguity. This then raises the question: Who should be called upon to resolve such ambiguities? The Supreme Court’s solution is to put agencies in charge. Auer deference says an agency’s interpretation of its own rule controls so long as it is not “plainly erroneous or inconsistent with the regulation.”[4] In effect, after an agency promulgates a regulation, it then maintains the latitude to fill in the gaps by interpreting its own regulation.

The Court has offered no good reason why Auer, while reasonable in some situations, should be applied indiscriminately to all agencies. A multitude of federal agencies exist to effectuate policies touching on everything under the sun—including housing, education, social benefits, food, agriculture, commerce, health, and the environment—but there is one agency in particular whose special attributes suggest that it should not be treated the same as all the others. That is the agency in charge of immigration appeals. One might reasonably think that deference, for example, to the Food and Drug Administration’s expert interpretation of what constitutes an “active moiety” promotes a robust and efficient government necessary for modern complexities. It follows that such agencies deserve deference from a court that is less well versed in the expertise involved in rendering such a judgment. However, immigration presents an entirely different set of policy concerns.

This is because deference to the Board of Immigration Appeals (“BIA”) under Auer risks political manipulation at the expense of immigrants’ liberty and freedom. Nested under the Department of Justice (“DOJ”), and more specifically the Executive Office of Immigration Review (“EOIR”), the BIA and lower immigration courts operate as quasi-judicial bodies, specifically “prone to political manipulation because of their unique combination of structure, history, and function.” A “clarifying” interpretation by the BIA can dictate the scheme by which people are welcomed into or rejected from the United States. The BIA is the unsuspecting gatekeeper, capable of molding the rules by interpretation to advance an anti-immigrant political agenda. Auer, therefore, acts as another tool in the political toolbox to restrict immigration in what is already a labyrinth of proceedings, paperwork, and fear.

This Note argues that Auer deference, even in light of the Supreme Court’s recent clarification of the doctrine, is an inappropriate approach for courts to take when they review the BIA’s rulings. Because the BIA lacks political accountability while simultaneously commingling government powers, deference to the BIA undermines key constitutional principles, such as separation of powers and democracy. Such principles must be enhanced, rather than undermined, especially when there is a heightened threat to liberty. Therefore, a close look is needed to determine whether Auer deference is warranted for an agency in which the very freedoms of immigrants are at stake.

The problem actually goes even further. Even if federal courts decided to eschew deference to BIA interpretations, the courts’ own interpretations would still not be an adequate mechanism to protect immigrants from unjust results. With ever-growing caseloads, Article III judges are not equipped with the requisite resources, time, and experience with immigration laws to adjudicate thousands more life-altering decisions in a timely, just manner. Immigration matters deserve to be adjudicated with proper accountability and more formalistic separations of power than those that currently stand. To achieve this, immigration courts and the BIA should, as many others have suggested before, be reformulated as Article I legislative courts to best serve democratic and separation-of-powers purposes. Liberty for immigrants can be salvaged through fairer adjudications and independent interpretations that are more insulated from political manipulation and the polarized ideologies that waft in and out of power.

This Note proceeds as follows: Part I briefly details the background of the BIA and the current understanding of Auer deference. This discussion includes Auer’s political implications and how the Supreme Court chose not to overrule the doctrine in Kisor v. Wilkie. It then explores the relationship between Auer and the BIA, including why the BIA’s political vulnerability makes the agency particularly unfit for Auer deference. Certain appointees to this agency have been rewarded with a position as a board member by openly declaring their hostility to the very people who are the object of the agency’s mission, and whose fragile life prospects are in their hands. Ironically, this flips the partisan commitments normally seen in the world of administrative law: those who would classically support increasing agency discretion by according Auer deference should be worried about giving heightened power to the self-declared, anti-immigrant agenda pervading the BIA, while those who would classically resist excessive delegation and deference to agencies, because of their limited accountability, seek to endow the BIA with vast independence, opening the door to partisan manipulation. Part II argues that even in the wake of Kisor v. Wilkie, deference to the BIA’s interpretations of immigration regulations presents a heightened threat to constitutional principles of separation of powers and democracy. Part III then provides a potential solution to the inadequacy of Auer deference and the judicial role in the realm of regulatory gap filling for immigration laws.

* Executive Development Editor, Southern California Law Review, Volume 94; J.D. Candidate 2021, University of Southern California Gould School of Law; B.A., 2017, University of Michigan, Communications and Minor in Law, Justice & Social Change. I am so deeply grateful for my family and their unending support, especially my dad for always being my sounding board and biggest cheerleader. I want to thank Professor Rebecca L. Brown for her invaluable guidance and inspiring perspective in drafting this Note. And, thank you to the talented Southern California Law Review staff and editors for their thoughtful work throughout this publication process.
