Protectors of Predators or Prey: Bystanders and Upstanders Amid Sexual Crimes – Article by Zachary D. Kaufman

Article | Criminal Law
Protectors of Predators or Prey: Bystanders and Upstanders amid Sexual Crimes
by Zachary D. Kaufman*

From Vol. 92, No. 6 (September 2019)
92 S. Cal. L. Rev. 1317 (2019)

Keywords: Bad Samaritan Laws, Bystanders, Sexual Violence Prevention

 

Abstract

In the wake of widespread revelations about sexual abuse by Harvey Weinstein, Larry Nassar, and others, the United States is reckoning with the past and present and searching for the means to prevent and punish such offenses in the future. The scourge of sexual crimes goes far beyond instances perpetrated by powerful men; this misconduct is rampant throughout the country. In some of these cases, third parties knew about the abuse and did not try to intervene. Scrutiny of—and the response to—such bystanderism is increasing, including in the legal world.

In order to align law and society more closely with morality, this Article proposes a more holistic, aggressive approach to prompt involvement by third parties who are aware of specific instances of sexual crimes in the United States. This Article begins by documenting the contemporary scope of sexual crimes in the United States and the crucial role bystanders play in facilitating them.

The Article next provides an overview and assessment of “Bad Samaritan laws”: statutes that impose a legal duty to assist others in peril through intervening directly (also known as the “duty to rescue”) or notifying authorities (also known as the “duty to report”). Such laws exist in dozens of foreign countries and, to varying degrees, in twenty-nine U.S. states, Puerto Rico, U.S. federal law, and international law. The author has assembled the most comprehensive global database of Bad Samaritan laws, which provides an important corrective to other scholars’ mistaken claims about the rarity of such statutes, particularly in the United States. Despite how widespread these laws are in the United States, violations are seldom, if ever, charged or successfully prosecuted.

Drawing on historical research, trial transcripts, and interviews with prosecutors, judges, investigators, and “upstanders” (people who intervene to help others in need), the Article then describes four prominent cases in the United States involving witnesses to sexual crimes. Each case provides insight into the range of conduct of both bystanders and upstanders.

Because not all such actors are equal, grouping them together under the general categories of “bystanders” and “upstanders” obscures distinct roles, duties, and culpability for violating those duties. Drawing on the case studies, this Article thus presents original typologies of bystanders (including eleven categories or sub-categories), upstanders (including seven categories), and both kinds of actors (including four categories), which introduce greater nuance into these classifications and this Article’s proposed range of legal (and moral) responsibilities. These typologies are designed to maximize generalizability to crimes and crises beyond sexual abuse.

Finally, the Article prescribes a new approach to the duty to report on sexual abuse and possibly other crimes and crises through implementing a combination of negative incentives (“sticks”) and positive incentives (“carrots”) for third parties. These recommendations benefit from interviews with sexual violence prevention professionals, police, legislators, and social media policy counsel. Legal prescriptions draw on this Article’s typologies and concern strengthening, spreading, and standardizing duty-to-report laws at the state and territory levels; introducing the first general legal duty to report sexual crimes and possibly other offenses (such as human trafficking) at the federal level; exempting from liability one of the two main bystander categories the Article proposes (“excused bystanders”) and each of its six sub-categories (survivors, “confidants,” “unaware bystanders,” children, “endangered bystanders,” and “self-incriminators”); actually charging the other main bystander category the Article proposes (“unexcused bystanders”) and each of its three sub-categories (“abstainers,” “engagers,” and “enablers”) with violations of duty-to-report laws or leveraging these statutes to obtain testimony from such actors; and more consistently charging “enablers” with alternative or additional crimes, such as accomplice liability. Social prescriptions draw on models and lessons from domestic and foreign contexts and also this Article’s typologies to recommend, among other initiatives, raising public awareness of duty-to-report laws and creating what the Article calls “upstander commissions” to identify and “upstander prizes” to honor a category of upstanders the Article proposes (“corroborated upstanders”), including for their efforts to mitigate sexual crimes. A combination of these carrots and sticks could prompt would-be bystanders to act instead as upstanders and help stem the sexual crime epidemic.

*. Associate Professor of Law and Political Science, University of Houston Law Center, with additional appointments in the University of Houston’s Department of Political Science and Hobby School of Public Affairs. J.D., Yale Law School; D.Phil. (Ph.D.) and M.Phil., both in International Relations, Oxford University (Marshall Scholar); B.A. in Political Science, Yale University. Research for this Article, including fieldwork, was generously facilitated by a grant from Harvard University as well as institutional support from Stanford Law School (where the author was a Lecturer from 2017 to 2019) and the Harvard University Kennedy School of Government (where the author was a Senior Fellow from 2016 to 2019).

The author primarily thanks the following individuals for helpful comments and conversations: Will Baude; Frank Rudy Cooper; John Donohue; Doron Dorfman; George Fisher; Richard Ford; Jeannie Suk Gersen; Hank Greely; Chris Griffin; Oona Hathaway; Elizabeth D. Katz; Amalia Kessler; Tracey Meares; Michelle Mello; Dinsha Mistree; Mahmood Monshipouri; Joan Petersilia; Camille Gear Rich; Mathias Risse; Peter Schuck; Kathryn Sikkink; David Sklansky; Kate Stith; Mark Storslee; Allen Weiner; Robert Weisberg; Lesley Wexler; Alex Whiting; Rebecca Wolitz; Gideon Yaffe; audiences at Yale Law School, Stanford Law School, Harvard University Kennedy School of Government, Stanford University Center for International Security & Cooperation, University of Virginia School of Law, University of Southern California Gould School of Law, Louisiana State University Paul M. Hebert Law Center, Penn State Law, University of Hawai’i Richardson School of Law, West Virginia University College of Law, University of Sydney Law School, University of Western Australia Law School, South Texas College of Law, University of Houston Department of Political Science and Hobby School of Public Affairs, and Colorado College; audiences at the 2018 conferences of the Harvard Law School Institute for Global Law & Policy, Law & Society Association, American Political Science Association, International Studies Association, and Policy History Association; and audiences at the 2019 conferences of the Law & Society Association, International Studies Association, CrimFest (at Brooklyn Law School), and the Southeastern Association of Law Schools. The author is especially indebted to his students in the reading group at Stanford Law School on “The Law of Bystanders and Upstanders” he led in spring 2019: Jamie Fine, Katherine Giordano, Bonnie Henry, Jeremy Hutton, Allison Ivey, Andrew Jones, Azucena Marquez, Camden McRae, Sergio Sanchez Lopez, and Spencer Michael Schmider. The author also thanks the following individuals for their valuable feedback: Fahim Ahmed, Matthew Axtell, Maria Banda, Adrienne Bernhard, Isra Bhatty, Charles Bosvieux-Onyekwelu, Sara Brown, Ben Daus-Haberle, Brendon Graeber, Melisa Handl, Janhavi Hardas, Elliot Higgins, Hilary Hurd, Howard Kaufman, Linda Kinstler, Chris Klimmek, Tisana Kunjara, Gabrielle Amber McKenzie, Noemí Pérez Vásquez, Tanusri Prasanna, and Noam Schimmel.

The author is grateful to the following individuals for granting interviews for this Article: an anonymous attorney involved in the Steubenville case; an anonymous employee of the Massachusetts Attorney General’s office; an anonymous employee of the Massachusetts Sentencing Commission; Jake Wark of the Suffolk County District Attorney’s office in Massachusetts; Manal Abazeed, Jehad Mahameed, Mounir Mustafa, and Raed Saleh of the White Helmets/Syria Civil Defense; Naphtal Ahishakiye of IBUKA in Rwanda; Holocaust survivors Isaac and Rosa Blum; Gili Diamant and Irena Steinfeldt of Yad Vashem in Israel; Alexandria Goddard; Martin Niwenshuti of Aegis Trust in Rwanda; Lindsay Nykoluk of the Calgary Police Service in Canada; Ruchika Budhraja, Gavin Corn, Neil Potts, and Marcy Scott Lynn of Facebook; Jessica Mason of Google; and Regina Yau of the Pixel Project.

For thorough, thoughtful research assistance, the author thanks Chelsea Carlton, Michelle Katherine Erickson, Jana Everett, Thomas Ewing, Matthew Hines, Ivana Mariles Toledo, and Allison Wettstein O’Neill. The author also thanks the following individuals for research assistance on particular topics: Kathleen Fallon, Alexandria Goddard, Josh Goldman, Farouq Habib, Naomi Kikoler, Mariam Klait, Shari Lowin, Riana Pfefferkorn, Kenan Rahmani, Yong Suh, Paul Williams, Regina Yau, and library staff at Harvard Law School (including Aslihan Bulut and Stephen Wiles), Stanford Law School (including Sonia Moss and Alex Zhang), and the University of Houston Law Center (including Katy Badeaux, Christopher Dykes, and Amanda Watson). Of these individuals, the author owes the most gratitude to Katy Badeaux.

Finally, the author thanks the editors of the Southern California Law Review (“SCLR”)—particularly Editor-in-Chief Kevin Ganley, Managing Editor Christine Cheung, Executive Senior Editor Rosie Frihart, Executive Editor Celia Lown, and Senior Editor Evan Forbes—for their excellent editorial assistance. The author was honored that SCLR selected this Article as the subject of its annual symposium held at the University of Southern California’s Gould School of Law on March 21, 2019.

The author’s public engagement on this topic has drawn on the research and recommendations contained in this Article. Those activities include advising policymakers on drafting or amending Bad Samaritan laws and other legislation (including the federal Harassment and Abuse Response and Prevention at State (HARPS) Act sponsored by Congressperson Jackie Speier) and publishing op-eds in the Boston Globe (When Speaking Up is a Civic Duty, on August 5, 2018) and the San Francisco Chronicle (No Cover for Abusers; California Must Close Gap in its Duty-to-Report Law, on June 23, 2019).

This Article is current as of September 27, 2019. Any errors are the author’s alone.

 


Unlock Your Phone and Let Me Read All Your Personal Content, Please: The First and Fifth Amendments and Border Searches of Electronic Devices – Note by Kathryn Neubauer

Note | Constitutional Law
Unlock Your Phone and Let Me Read All Your Personal Content, Please: The First and Fifth Amendments and Border Searches of Electronic Devices

by Kathryn Neubauer*

From Vol. 92, No. 5 (July 2019)
92 S. Cal. L. Rev. 1273 (2019)

Keywords: First Amendment, Fourth Amendment, Fifth Amendment, Border Search Exception, Technology

 

Until January 2018, under the border search exception, CBP officers were afforded the power to search any electronic device without meeting any standard of suspicion or acquiring a warrant. The border search exception is a “longstanding, historically recognized exception to the Fourth Amendment’s general principle that a warrant be obtained . . . .” It provides that suspicionless and warrantless searches at the border do not violate the Fourth Amendment because such searches are “reasonable simply by virtue of the fact that they occur at the border . . . .” The CBP, claiming that the border search exception applies to electronic devices, searched more devices in 2017 than ever before: an increase of approximately 60 percent over 2016, according to data released by the CBP. These “digital strip searches” violate travelers’ First, Fourth, and Fifth Amendment rights. With the advent of smartphones and the expanded use of electronic devices for storing people’s extremely personal data, these searches violate an individual’s right to privacy. Simply by traveling into the United States with a device linked to such information, a person suddenly—and, currently, unexpectedly—opens a window for the government to search through seemingly every aspect of his or her life. The policy behind these searches at the border does not align with the core principles behind our longstanding First and Fifth Amendment protections, nor does it align with the policies behind the exceptions made to constitutional rights at the border in the past.

In order to protect the privacy and rights of both citizens and noncitizens entering the United States, the procedures concerning electronic device searches need to be rectified. For instance, the border search exception should not be applied to electronic devices the same way it applies to other property or storage containers, like a backpack. One is less likely to expect privacy in the contents of a backpack than in the contents of a password- or authorization-protected device—unlike a locked device, a backpack can be taken, can be opened easily, can fall open, and has traditionally been subjected to searches at the border. Moreover, there are many reasons why electronic devices warrant privacy.

*. Executive Notes Editor, Southern California Law Review, Volume 92; J.D., 2019, University of Southern California Gould School of Law; B.B.A., 2014, University of Michigan. My sincere gratitude to Professor Sam Erman for his invaluable feedback on early drafts of this Note as well as to Rosie Frihart, Kevin Ganley, and all the editors of the Southern California Law Review. Thank you to Brian and my family—Mark, Diane, Elisabeth, Jennifer, Alison, Rebecca, Tony, Jason, Jalal, Owen, Evelyn, Peter and Manny—for all of their love and support. Finally, a special thank you to Rebecca for reading and editing countless drafts, and to Jason for bringing this important issue to my attention.

 


Confessions of a Teenage Defendant: Why a New Legal Rule Is Necessary to Guide the Evaluation of Juvenile Confessions – Note by Hannah Brudney

Note | Criminal Law
Confessions of a Teenage Defendant: Why a New Legal Rule Is Necessary to Guide the Evaluation of Juvenile Confessions

by Hannah Brudney*

From Vol. 92, No. 5 (July 2019)
92 S. Cal. L. Rev. 1235 (2019)

Keywords: Criminal Law, Juvenile Confessions, Civil Rights

The cases of the “Central Park Five” and Brendan Dassey are two of the highest profile criminal cases in the past three decades. Both cases unsurprisingly captured the nation’s attention and became the subjects of several documentaries. Each case forces the public to consider how police officers could mistakenly identify and interrogate an innocent suspect, how an innocent person could feel compelled to falsely confess, and how our legal system could allow the false and coerced confession of a child to be the basis of a criminal conviction. While these two cases made national headlines, they are not unique. False confessions by juveniles are a common and even inevitable occurrence given the impact of the interrogation process on children and the inadequacies of the legal standard that currently exists to protect against juvenile false confessions.

Part I of this Note will discuss the prevalence of false confessions among juvenile suspects, and explain how juveniles’ transient developmental weaknesses make them particularly vulnerable to specific coercive interrogation techniques. Part I will also emphasize the impact that a confession has on the outcome of a defendant’s trial, thereby highlighting the weight that a false confession carries.

Part II of this Note will present the existing law governing the evaluation of the voluntariness of a confession—the procedural safeguards offered by Miranda v. Arizona and the totality of the circumstances test rooted in the concern for due process. Part II will also argue that the totality of the circumstances test is insufficient to protect juveniles because it does not give binding weight to a suspect’s age, but rather considers age among several other characteristics.

Part III of this Note will propose a new legal rule to guide the evaluation of juvenile confessions. The proposed legal rule extends and expands upon the language and holding from J.D.B. v. North Carolina, and requires that age be the primary factor in courts’ evaluations of juvenile confessions. Confessions offered by children during interrogations in which coercive techniques are employed must be presumed involuntary, given the effect that manipulative interrogation techniques have on juveniles’ likelihood of falsely confessing. Moreover, given that courts often have no way of knowing the circumstances of an interrogation, confessions by all juveniles should be presumed involuntary until the prosecution can prove that no coercive interrogation techniques were used. Part III also proposes a series of policy reforms that aim to reduce the prevalence of false confessions.

*. Senior Submissions Editor, Southern California Law Review, Volume 92; J.D. 2019, University of Southern California Gould School of Law; B.A. English Literature and Psychology 2014, Columbia University. I would like to thank Professor Dan Simon for his advice and guidance, as well as the members of the Southern California Law Review for their excellent editing.


The Wild West: Application of the Second Amendment’s Individual Right to California Firearm Legislation – Note by Forrest Brown

Note | Constitutional Law
The Wild West: Application of the Second Amendment’s Individual Right to California Firearm Legislation

by Forrest Brown*

From Vol. 92, No. 5 (July 2019)
92 S. Cal. L. Rev. 1203 (2019)

Keywords: Second Amendment

 

In its landmark District of Columbia v. Heller decision, the Supreme Court announced that the Second Amendment guarantees an individual right of the people to bear arms. Although Heller answered a long-standing question about the Second Amendment’s meaning, there remain issues to be settled. One of the most pressing—and the main topic of this Note—is the proper method of review and application of this individual right. Without guidance on these issues, several circuit courts have followed different approaches. Although opportunities to provide some clarity have come before the Supreme Court, so far, it has denied certiorari.

This Note will not opine on the merits of the individualist or collectivist approaches to the interpretation of the Second Amendment, as this question has been answered conclusively in Heller. Instead, this Note will provide a suggested framework for the application of this individual right to keep and bear arms, and will progress as follows. Part I will offer a contextual history of the Second Amendment. Part II will make the case for why clarity on this issue is so desperately needed, a case punctuated by a discussion of the Second Circuit’s particularly troubling application of the right. Part III will offer a proposed framework that, if adopted by the Supreme Court, can resolve the questions posed in Part II. Part IV will apply the framework to California concealed carry regulations. Finally, Part V will apply the framework to a new California law that is likely to make its way to the Ninth Circuit soon, thus allowing the Supreme Court to clarify Second Amendment jurisprudence further.

*. Senior Submissions Editor, Southern California Law Review, Volume 92; J.D. 2019, University of Southern California Gould School of Law; B.A., Economics & Accounting 2015, University of California, Santa Barbara. My deepest appreciation goes to Professor Rebecca Brown for her guidance, the editors of the Southern California Law Review for all of their hard work, and my family and friends for their continued support.

 


The SEC and Regulation of Exchange-Traded Funds: A Commendable Start and a Welcome Invitation – Article by​ Henry T. C. Hu & John D. Morley

Article | Securities Law
The SEC and Regulation of Exchange-Traded Funds: A Commendable Start and a Welcome Invitation

by Henry T. C. Hu & John D. Morley*

From Vol. 92, No. 5 (July 2019)
92 S. Cal. L. Rev. 1155 (2019)

Keywords: Securities, SEC, Regulation, Exchange-Traded Funds (“ETFs”)

 

Abstract

Exchange-traded funds (“ETFs”) are among the most important financial innovations of the modern era. And yet they still have no coherent regulatory system. This Article addresses the problem by assessing the SEC’s recent effort in this area in light of the recommendations we provided in prior research. In March 2018, we offered the first academic work to show the need for, or to present, a comprehensive regulatory framework for all ETFs. On June 28, 2018, just prior to that article’s scheduled publication, the SEC issued a proposal to change the way it regulates certain types of ETFs. On May 20, 2019, the SEC issued its “Precidian” exemptive order, allowing for the first time “non-transparent” actively managed ETFs—an order that we believe has surprising, hitherto unexplored implications for ETF regulation.

This new Article thus considers the SEC proposal and the Precidian order in the context of our earlier article’s proposed regulatory framework, and also refines that framework. We provide additional rationales for the framework, relying in part on new empirical findings.

The SEC’s proposal does not seek to provide a comprehensive regulatory framework for all ETFs. However, the proposal is a commendable start to addressing some of the problems in the current ad hoc approach to ETF regulation, especially as to the substantive side of ETF regulation. In proposing a more rules-based approach, the SEC helps deal with the central problem of current substantive ETF regulation—the reliance on individualized exemptive letters. However, this partial shift only applies to certain ETFs that are organized under the Investment Company Act of 1940 and also leaves in place an anomalous set of individualized exemptions for several specific Investment Company ETFs, including those offering leveraged and inverse exposures. More broadly, the proposal does not address problems of SEC discretion pertaining to the underlying process of financial innovation in ETFs. The proposed rule also neglects to address the frequent need for individualized exemptions with respect to stock exchange listing requirements.

With respect to the disclosure side of regulation, the SEC proposal again only covers Investment Company ETFs, but is even more incremental in nature. The SEC contemplates modest enhancements of disclosures related to “trading price frictions” of such ETFs. And, going the other direction, the SEC contemplates eliminating the primary source of information for retail investors on intraday values of ETF shares. We welcome the SEC’s invitation for views on more fundamental disclosure reforms. We offer a refined version of the comprehensive disclosure approach advanced in our first article, and provide fresh rationales for such an approach, based in part on new empirical findings. This approach would apply to all ETFs, and would be cognizant of the distinctive characteristics of ETFs and the subtle complexities introduced by the underlying innovation process. Collectively, a disclosure regime consisting of a “dynamic” SEC-specified ETF nomenclature and required ETF self-identification (which nomenclature and self-identification we refer to as the “disclosure building block”), fuller quantitative disclosures of trading price frictions (such as those related to the arbitrage mechanism and bid-ask spreads), and periodic Management’s Discussion and Analysis-style qualitative information centered on the arbitrage mechanism (including, as appropriate, consideration of the impact of the liquidity of the assets in which the ETF is invested) would help individual and institutional investors alike.

*. Professor Hu holds the Allan Shivers Chair in the Law of Banking and Finance, University of Texas Law School. Professor Morley is Professor of Law, Yale Law School. We much appreciate the insights of Cary Coglianese, Jill Fisch, Itay Goldstein, Joseph McCahery, David Musto, Steve Oh, Landon Thomas, Jr., executives and counsel at a number of major ETF sponsors and other entities involved with ETFs, the library assistance of Scott Vdoviak and Lei Zhang and the research assistance of Jacob McDonald and Helen Xiang. We thank conference and workshop participants at the Wharton Finance Department/Institute of Law and Economics (University of Pennsylvania Law School) Finance Seminar (Sept. 20, 2018), the Nasdaq-Villanova Synapse 2018 (Nov. 9, 2018), the ETP Fall 2018 Forum (Nov. 29, 2018), and the Tilburg University Law and Economics Center Seminar (Feb. 6, 2019). Professor Hu served as the founding Director of the U.S. Securities and Exchange Commission’s Division of Economic and Risk Analysis (formerly called the Division of Risk, Strategy, and Financial Innovation) (2009-2011), and he and his staff were involved in certain matters discussed in this Article. This Article speaks as of July 1, 2019.

 


The NCAA and the IRS: Life at the Intersection of College Sports and the Federal Income Tax – Article by Richard Schmalbeck & Lawrence Zelenak

Article | Tax Law
The NCAA and the IRS: Life at the Intersection of College Sports and the Federal Income Tax

by Richard Schmalbeck* & Lawrence Zelenak†

From Vol. 92, No. 5 (July 2019)
92 S. Cal. L. Rev. 1087 (2019)

Keywords: Tax, NCAA, IRS, College Sports

Introduction

Few organizational acronyms are more familiar to Americans than those of the National Collegiate Athletic Association (“NCAA”) and the Internal Revenue Service (“IRS”). Although neither organization is particularly popular,1 both loom large in American life and popular culture. Because there is a tax aspect to just about everything, it should come as no surprise that the domains of the NCAA and the IRS overlap in a number of ways. For many decades, college athletics have enjoyed unreasonably generous tax treatment—sometimes because of the failure of the IRS to enforce the tax laws enacted by Congress, and sometimes because Congress itself has conferred dubious tax benefits on college sports. Very recently, however, there have been signs of what may be a major attitudinal shift on the part of Congress—although, so far, there have been no signs of a corresponding change at the IRS.

This Article offers an in-depth look at the history and current status of four areas of intersection between the federal tax laws and college sports. Part I considers the possible application of the tax on unrelated business income to big-time college sports. It concludes that, even in the absence of any change in the unrelated business income statute, there is a strong argument that revenues from the televising of college sports should be subject to the unrelated business income tax. Part II examines the tax status of athletic scholarships. It explains that athletic scholarships, as currently structured, are taxable under the terms of the Internal Revenue Code but that the IRS seems to have made a conscious decision not to enforce the law.

While the first two Parts of this Article address areas in which the traditional sweetheart arrangement between the IRS and the NCAA remains in effect, the final two Parts of this Article consider areas in which Congress has—very recently—intervened to increase the tax burden on college athletics. Part III describes how Congress, three decades ago, explicitly permitted taxpayers to claim charitable deductions for most of the cost of season tickets to college football and basketball games and how Congress in 2017—to the surprise of many observers, including the authors of this Article—repealed this special tax benefit. Finally, Part IV addresses issues of both statutory interpretation and policy raised by Congress’s creation, in 2017, of a twenty-one percent excise tax on at least some universities that were paying seven-figure salaries to their football and basketball coaches. This Article’s conclusion suggests the IRS should follow the lead of Congress and reconsider the administrative favoritism toward college sports described in Parts I and II.

 

*. Simpson, Thacher, and Bartlett Professor of Law, Duke University School of Law.

†. Pamela B. Gann Professor of Law, Duke University School of Law. The authors are grateful to Katherine Pratt, Paul Haagen, Steven Willborn, Leandra Lederman, David Gamage and the participants in conferences and workshops at the law schools of New York University (the National Center for Law and Philanthropy), Northwestern University, Duke University, Indiana University, and Loyola (L.A.) University, and for the research assistance of Kevin Platt.

 


An Uneasy Dance with Data: Racial Bias in Criminal Law

 

From Volume 93, Postscript (June 2019)


 

An Uneasy Dance with Data: Racial Bias in Criminal Law

Joseph J. Avery[*]

INTRODUCTION

Businesses and organizations expect their managers to use data science to improve and even optimize decisionmaking. The founder of the largest hedge fund in the world has argued that nearly everything important going on in an organization should be captured as data.[1] Similar beliefs have permeated medicine. A team of researchers has taken over 100 million data points from more than 1.3 million pediatric cases and trained a machine-learning model that performs nearly as well as experienced pediatricians at diagnosing common childhood diseases.[2]

Yet when it comes to some criminal justice institutions, such as prosecutors’ offices, there is an aversion to applying cognitive computing to high-stakes decisions. This aversion reflects extra-institutional forces, as activists and scholars are agitating against the use of predictive analytics in criminal justice.[3] The aversion also reflects prosecutors’ unease with the practice, as many prefer that decisional weight be placed on attorneys’ experience and intuition, even though experience and intuition have contributed to more than a century of criminal justice disparities.

Instead of viewing historical data and data-hungry academic researchers as liabilities, prosecutors and scholars should treat them as assets in the struggle to achieve outcome fairness. Cutting-edge research on fairness in machine learning is being conducted by computer scientists, applied mathematicians, and social scientists, and this research forms a foundation for the most promising path towards racial equality in criminal justice: suggestive modeling that creates baselines to guide prosecutorial decisionmaking.

I.  Prosecutors and Racial Bias

More than 2 million people are incarcerated in the United States, and a disproportionate number of these individuals are African American.[4] Most defendants—approximately 95%—have their cases resolved through plea bargaining.[5] Prosecutors exert tremendous power over the plea bargaining process, as they can drop a case, oppose bail or recommend a certain level of bail, add or remove charges and counts, offer and negotiate plea bargains, and recommend sentences.[6]

Much of the racial disparity in incarceration rates can be traced to prosecutorial discretion. Research has found that prosecutors are less likely to offer black defendants a plea bargain, less likely to reduce their charge offers, and more likely to offer them plea bargains that include prison time.[7] Defendants who are black, young, and male fare especially poorly.[8]

One possible reason for suboptimal prosecutorial decisionmaking is a lack of clear baselines. In estimating the final disposition of a case, prosecutors have very little on which to base their estimates. New cases are perpetually commenced, and prosecutors must process these cases quickly and efficiently, all while receiving subpar information; determining what happened and when is a matter of cobbling together reports from victims, witnesses, police officers, and investigators. In addition, prosecutors must rely on their own past experiences, a reliance that runs numerous risks, including that of small-sample-size bias. Given these cognitive constraints, prosecutors are liable to rely on stereotypes, such as those that attach to African Americans.[9]

II.  Predictive Analytics in Criminal Justice

The use of predictive analytics in the law falls into two subsets. One involves policing, where what is being predicted is who will commit future crimes.[10] Embedded in this prediction is the question of where those crimes will occur. In theory, these predictions can be used by police departments to allocate resources more efficiently and to make communities safer.

Dozens of police departments around the United States are employing predictive policing.[11] Since 2011, the Los Angeles Police Department (“LAPD”) has analyzed data from rap sheets in order to determine how best to utilize police resources.[12] Chicago officials have experimented with an algorithm that predicts which individuals in the city are likely to be involved in a shooting—either as the shooter or as the victim.[13]

The second subset primarily involves recidivism. It includes bail decisions, which turn on predictions about who will appear at future court dates.[14] Embedded in these predictions is the question of who, if released pretrial, will cause harm (or commit additional crimes).[15] This subset also includes sentencing, such that judges may receive predictions regarding a defendant’s likelihood of recidivating.[16]

The Laura and John Arnold Foundation (“Arnold Foundation”) designed its Public Safety Assessment tool (“PSA”) to assess the dangerousness of a given defendant.[17] The tool takes into account defendants’ age and history of criminal convictions, but it elides race and gender and supposed covariates of race and gender, such as employment background, where a defendant lives, and history of criminal arrests.[18] Risk assessments focusing on recidivism are consulted by sentencing courts.[19] These statistical prediction tools make use of a number of features (factors specific to a defendant) to produce a quantitative output: a score that reflects a defendant’s likelihood of engaging in some behavior, such as committing additional crimes or additional violent crimes.[20]
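To make the mechanics concrete, here is a minimal sketch of how a tool of this kind might turn a handful of features into a quantitative score. The features and weights below are invented for illustration; they are not the PSA’s actual published formula.

```python
import math

def risk_score(age, prior_convictions, prior_failures_to_appear):
    """Map defendant features to a 0-1 score via a simple logistic model.

    The weights here are hypothetical placeholders, not the PSA's formula.
    """
    z = (-1.0
         - 0.03 * (age - 18)                 # youth pushes the score up
         + 0.40 * prior_convictions          # convictions push it up
         + 0.60 * prior_failures_to_appear)  # missed court dates push it up
    return 1 / (1 + math.exp(-z))

# A 22-year-old with two prior convictions and one failure to appear:
print(round(risk_score(age=22, prior_convictions=2, prior_failures_to_appear=1), 2))
```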

III.  Against Predictive Analytics in Criminal Justice

Statistical algorithms that have been used for risk assessment have been charged with perpetuating racial bias[21] and have been the subject of litigation.[22] A 2016 report by ProPublica alleging that an algorithm used in Florida was biased against black defendants received nationwide attention.[23] The subsequent debate about whether the algorithm actually was biased against black defendants pivoted on different definitions of fairness, with a specific focus on rates of false positives, true negatives, and related concepts.[24] Overall, the fear is that, at best, algorithmic decisionmaking perpetuates historical bias; at worst, it exacerbates bias. As one opponent of the LAPD’s use of predictive analytics said, “[d]ata is a weapon and will always be used to criminalize black, brown and poor people.”[25]

Professor Jessica Eaglin has argued that risk itself is a “malleable and fluid concept”; thus, predictive analytics focused on risk assessment give a spurious stamp of objectivity to a process that is agenda-driven.[26] Furthermore, Professor Eaglin argues that the agenda of these tools is one of increased punishment.

Critics also address the creation of the models. Some argue that the training data is nonrepresentative.[27] Others argue that recidivism is difficult to define[28] and that some jurisdictions are improperly defining it to include arrests, which may be indicative of little beyond police bias.[29] Still others debate which features such models properly should include.[30]

IV.  The Importance of Data for Criminal Justice Fairness

While it is important to question how data is used in criminal justice, the importance of data’s role in diminishing racial disparity in incarceration should not be underestimated. First, without robust data collection, we have no way of knowing when similarly situated defendants are being treated dissimilarly. If we cannot clearly identify racial bias in the different stages of the criminal justice system, then we cannot fix it. And there is still a long way to go before prosecutorial data is properly organized and digitized.[31]

Second, data is essential for collaborative intelligence, which shows significant potential for improving prosecutorial decisionmaking. Prosecutors’ offices are in possession of information that can be used to form clear and unbiased baselines: hundreds of thousands of closed casefiles. Using advanced statistical and computer science methods, these casefiles can serve as a corpus from which to build a model that, based on an arresting officer’s narrative report and suggested charges, produces a prediction as to how a case would resolve if the defendant were treated race-neutrally. This is a classic machine-learning task: train an algorithm to produce a prediction function that relates case characteristics to case outcomes. This model can then be used to guide prosecutorial decisionmaking to make it more consistent (less variance across attorneys and across time) and less biased.
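A minimal sketch of that task, assuming the closed casefiles have been digitized into an officer narrative, the suggested charges, and the final disposition. The file name and column names are hypothetical, and a production system would need far more careful feature construction and validation.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical corpus of closed casefiles.
cases = pd.read_csv("closed_casefiles.csv")  # assumed columns: narrative, charges, disposition

X = cases["narrative"] + " " + cases["charges"]  # text available at intake
y = cases["disposition"]  # e.g., "dismissed", "misdemeanor_plea", "felony_plea"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Learn a prediction function relating case characteristics to case outcomes.
model = make_pipeline(
    TfidfVectorizer(max_features=20_000, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1_000),
)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# The fitted model can then suggest a baseline disposition for a new arrest report.
print(model.predict(["narrative report and suggested charges for a new case"]))
```

The point of such a model in this framework is not prediction for its own sake but a consistent baseline against which an attorney’s contemplated disposition can be compared.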

Algorithms will produce biased outcomes when the training data (the historical record) is biased and the algorithm is designed to maximize predictive accuracy. It should be obvious why this is so: if predictive accuracy is the goal and the data is biased, then bias is a feature of the system, not a bug. In other words, bias must be taken into account if the prediction is to be accurate.

This is why, in my research, I do not optimize for predictive accuracy. My colleagues and I have different goals. Our models are not predictive models but “suggestive” models. One of our primary goals is to remove suspect bias from the model, bringing its suggestions into closer accord with constitutional mandates for racially equal treatment of criminal defendants by state actors.

Can this be done? It is no easy feat, but researchers around the country are diligently working to build models that correct for suboptimal historical records.[32] Some of these approaches involve a weak version of disparate treatment in which the protected attribute (for example, race) is accessed during model training but omitted during classification.[33] Such approaches build from the recognition, long established in the scholarly community, that not only does blindness not entail fairness,[34] it often is a poor notion of fairness.[35]
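One crude instantiation of that “train with, classify without” idea, on synthetic data: the model sees the protected attribute during fitting, so the other coefficients are not forced to absorb its effect as proxies, and the attribute is then neutralized at classification by fixing it to the same value for every case. This is a simplification of the cited approaches, offered only to make the mechanism concrete.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins: X holds case features, a holds the protected
# attribute (encoded 0/1), and y is an outcome record tainted by bias.
rng = np.random.default_rng(0)
n = 1_000
X = rng.normal(size=(n, 5))
a = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.5 * a + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Training: the protected attribute is an explicit input.
clf = LogisticRegression().fit(np.column_stack([X, a]), y)

# Classification: the attribute is "omitted" by holding it constant,
# so it cannot move any individual's score.
X_new = rng.normal(size=(3, 5))
a_ref = np.full((3, 1), a.mean())
print(clf.predict_proba(np.column_stack([X_new, a_ref]))[:, 1])
```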

Lastly, such models can themselves be used to identify racism that is endemic to the historical record or that emerges in the construction of the model. One strength of machine learning is that it is able to make connections between inputs and outputs that elude human actors. Social science long ago established that the human mind itself is a black box, and human actors have poor insight into their reasons for acting.[36] The black box of human decisionmaking, however, can be unpacked through careful use of statistics. Local interpretable model-agnostic explanations,[37] for instance, can be used to identify the aspects of input data on which a trained model relies as it makes its predictions, which should, in turn, offer insight into historical human reliance.[38]
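As a rough sketch of how that might look in practice, using the open-source lime package on a toy tabular model; the feature names are hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer  # pip install lime

# Toy casefile features and historical outcomes (synthetic).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)
feature_names = ["prior_arrests", "age", "charge_severity", "neighborhood_index"]

clf = RandomForestClassifier(random_state=0).fit(X, y)

# LIME fits a simple local surrogate model around one prediction to show
# which inputs the trained model actually leaned on for that case.
explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["lenient", "harsh"])
explanation = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
print(explanation.as_list())  # (feature condition, weight) pairs
```

Heavy weight on a feature that functions as a race proxy (a neighborhood index, say) would flag bias inherited from the historical record.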

CONCLUSION

When it comes to racial disparities, the U.S. criminal justice system is failing, and it has been failing for many years. In addition, charges of racial bias have been leveled against various organizations that are employing predictive analytics in their legal decisions. Scholars are right to question how data is being used. Past discrimination must not become enshrined in our machines. But movement away from data is also movement away from identification of unequal treatment, and it represents abandonment of the most promising path towards criminal justice fairness. While it is tempting for prosecutors’ offices to maintain the status quo and not augment their processes with data science, this would be a mistake. Collaborative intelligence has the potential to render prosecutorial decisionmaking more consistent, fair, and efficient.

 


[*]. Joseph J. Avery is a National Defense Science & Engineering Graduate Fellow at Princeton University; Columbia Law School, J.D.; Princeton University, M.A.; New York University, B.A.

[1]. Ray Dalio, Principles: Life and Work 527 (2017).

 [2]. Huiying Liang et al., Evaluation and Accurate Diagnoses of Pediatric Diseases Using Artificial Intelligence, 25 Nature Med. 433, 433 (2019), https://www.nature.com/articles/s41591-018-0335-9.pdf.

 [3]. Karen Hao, AI is Sending People to Jail—and Getting It Wrong, MIT Tech. Rev. (Jan. 21, 2019), https://www.technologyreview.com/s/612775/algorithms-criminal-justice-ai.

 [4]. Danielle Kaeble & Mary Cowhig, Correctional Populations in the United States, 2016, Bureau Just. Stat. 1 (Apr. 2018), https://www.bjs.gov/content/pub/pdf/cpus16.pdf.

[5]. Lindsey Devers, Plea and Charge Bargaining: Research Summary, Bureau Just. Assistance 1 (Jan. 24, 2011), https://www.bja.gov/Publications/PleaBargainingResearchSummary.pdf. Plea bargaining is a process wherein a defendant receives less than the maximum charge possible in exchange for an admission of guilt or something functionally equivalent to guilt. See Andrew Manuel Crespo, The Hidden Law of Plea Bargaining, 118 Colum. L. Rev. 1303, 1310–12 (2018).

[6]. Scott A. Gilbert & Molly Treadway Johnson, The Federal Judicial Center’s 1996 Survey of Guideline Experience, 9 Fed. Sent’g Rep. 87, 88–89 (1996); Marc L. Miller, Domination & Dissatisfaction: Prosecutors as Sentencers, 56 Stan. L. Rev. 1211, 1215, 1219–20 (2004); Kate Stith, The Arc of the Pendulum: Judges, Prosecutors, and the Exercise of Discretion, 117 Yale L.J. 1420, 1422–26 (2008); Besiki Kutateladze et al., Do Race and Ethnicity Matter in Prosecution? A Review of Empirical Studies, Vera Inst. Just., 3–4 (June 2012), https://www.vera.org/publications/do-race-and-ethnicity-matter-in-prosecution-a-review-of-empirical-studies.

[7]. See Besiki Kutateladze et al., Cumulative Disadvantage: Examining Racial and Ethnic Disparity in Prosecution and Sentencing, 52 Criminology 514, 518, 527–37 (2014).

[8]. See Gail Kellough & Scot Wortley, Remand for Plea: Bail Decisions and Plea Bargaining as Commensurate Decisions, 42 Brit. J. Criminology 186, 194–201 (2002); Besiki Kutateladze et al., Opening Pandora’s Box: How Does Defendant Race Influence Plea Bargaining?, 33 Just. Q. 398, 410–19 (2016).

[9]. Decades of research at the nexus of law and psychology have identified stereotypical associations linking blackness with crime, violence, threats, and aggression. See Joshua Correll et al., The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals, 83 J. Personality & Soc. Psychol. 1314, 1324–28 (2002); Jennifer L. Eberhardt et al., Seeing Black: Race, Crime, and Visual Processing, 87 J. Personality & Soc. Psychol. 876, 889–91 (2004); Brian Keith Payne, Prejudice and Perception: The Role of Automatic and Controlled Processes in Misperceiving a Weapon, 81 J. Personality & Soc. Psychol. 181, 190–91 (2001).

[10]. See Albert Meijer & Martijn Wessels, Predictive Policing: Review of Benefits and Drawbacks, Int’l J. Pub. Admin. 1, 2–4 (2019).

[11]. Issie Lapowsky, How the LAPD Uses Data to Predict Crime, Wired (May 22, 2018, 5:02 PM), https://www.wired.com/story/los-angeles-police-department-predictive-policing.

 [12]. Id.

 [13]. Jeff Asher & Rob Arthur, Inside the Algorithm That Tries to Predict Gun Violence in Chicago, N.Y. Times: The Upshot (June 13, 2017), https://www.nytimes.com/2017/06/13/upshot/what-an-algorithm-reveals-about-life-on-chicagos-high-risk-list.html.

[14]. See, e.g., Public Safety Assessment: Risk Factors and Formula, Pub. Safety Assessment [hereinafter Risk Factors and Formula], https://www.psapretrial.org/about/factors (last visited June 6, 2019).

 [15]. See Bernard E. Harcourt, Against Prediction: Profiling, Policing, and Punishment in an Actuarial Age 1 (2007); Jessica M. Eaglin, Constructing Recidivism Risk, 67 Emory L.J. 59, 61 (2017); Sonja B. Starr, Evidence-Based Sentencing and the Scientific Rationalization of Discrimination, 66 Stan. L. Rev. 803, 808–18 (2014).

[16]. Melissa Hamilton, Adventures in Risk: Predicting Violent and Sexual Recidivism in Sentencing Law, 47 Ariz. St. L.J. 1, 3 (2015); Anna Maria Barry-Jester et al., The New Science of Sentencing, Marshall Project (Aug. 4, 2015, 7:15 AM), https://www.themarshallproject.org/2015/08/04/the-new-science-of-sentencing.

 [17]. About the PSA, Pub. Safety Assessment, https://www.psapretrial.org/about (last visited June 6, 2019).

 [18]. Risk Factors and Formula, supra note 14.

 [19]. Timothy Bakken, The Continued Failure of Modern Law to Create Fairness and Efficiency: The Presentence Investigation Report and Its Effect on Justice, 40 N.Y.L. Sch. L. Rev. 363, 363–64 (1996); Starr, supra note 15, at 803.

 [20]. John Monahan, A Jurisprudence of Risk Assessment: Forecasting Harm Among Prisoners, Predators, and Patients, 92 Va. L. Rev. 391, 405–06 (2006).

 [21]. Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. L. Rev. 671, 674, 678 (2016); Jessica M. Eaglin, Predictive Analytics’ Punishment Mismatch, 14 I/S: J.L. & Pol’y for Info. Soc’y 87, 102–03 (2017).

[22]. See State v. Loomis, 881 N.W.2d 749, 757–60 (Wis. 2016).

[23]. Julia Angwin et al., Machine Bias, ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

 [24]. See Anthony W. Flores et al., False Positives, False Negatives, and False Analyses: A Rejoinder to “Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks.”, 80 Fed. Prob., Sept. 2016, at 38; see also Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Wash. L. Rev. 1, 6 (2014) (calling for predictions that are consistent with normative concepts of fairness).

 [25]. Cindy Chang, LAPD Officials Defend Predictive Policing as Activists Call for Its End, L.A. Times (July 24, 2018, 8:20 PM), https://www.latimes.com/local/lanow/la-me-lapd-data-policing-20180724-story.html.

 [26]. Eaglin, supra note 21, at 105; see also Eaglin, supra note 15, at 64.

 [27]. See Eaglin, supra note 15, at 118.

 [28]. Joan Petersilia, Recidivism, in Encyclopedia of American Prisons 215, 215–16 (Marilyn D. McShane & Frank R. Williams III eds., 1996).

[29]. See Kevin R. Reitz, Sentencing Facts: Travesties of Real-Offense Sentencing, 45 Stan. L. Rev. 523, 528–35 (1993) (arguing against reliance on unadjudicated conduct at sentencing).

[30]. See Alexandra Chouldechova, Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments, 5 Big Data 153, 153–62 (2017); Don A. Andrews, Recidivism Is Predictable and Can Be Influenced: Using Risk Assessments to Reduce Recidivism, Correctional Serv. Can. (Mar. 5, 2015), https://www.csc-scc.gc.ca/research/forum/e012/12j_e.pdf; Jon Kleinberg et al., Inherent Trade-Offs in the Fair Determination of Risk Scores, Proc. of Innovations in Theoretical Computer Sci. (forthcoming 2017).

[31]. Besiki L. Kutateladze et al., Prosecutorial Attitudes, Perspectives, and Priorities: Insights from the Inside, MacArthur Foundation 2 (2018), https://caj.fiu.edu/news/2018/prosecutorial-attitudes-perspectives-and-priorities-insights-from-the-inside/report-1.pdf; see also Andrew Pantazi, What Makes a Good Prosecutor? A New Study of Melissa Nelson’s Office Hopes to Find Out, Fla. Times Union, https://www.jacksonville.com/news/20180309/what-makes-good-prosecutor-new-study-of-melissa-nelsons-office-hopes-to-find-out (last updated Mar. 12, 2018, 11:18 AM).

[32]. See Alexander Amini et al., Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure (2019) (unpublished manuscript), http://www.aies-conference.com/wp-content/papers/main/AIES-19_paper_220.pdf. For another approach at building a non-discriminatory classifier, see Irene Chen et al., Why Is My Classifier Discriminatory?, in 31 Advances in Neural Info. Processing Systems 1, 3–9 (2018), http://papers.nips.cc/paper/7613-why-is-my-classifier-discriminatory.pdf.

[33]. See Zachary C. Lipton et al., Does Mitigating ML’s Impact Disparity Require Treatment Disparity?, in 31 Advances in Neural Info. Processing Systems 1, 9 (2018), https://papers.nips.cc/paper/8035-does-mitigating-mls-impact-disparity-require-treatment-disparity.pdf.

 [34]. Cynthia Dwork et al., Fairness through Awareness, in Proceedings 3rd Innovations in Theoretical Computer Sci. Conf. 214, 218 (2012), https://dl.acm.org/citation.cfm?id=2090255.

[35]. Moritz Hardt et al., Equality of Opportunity in Supervised Learning 18–19 (Oct. 11, 2016) (unpublished manuscript), https://arxiv.org/pdf/1610.02413.pdf.

[36]. See Richard E. Nisbett & Timothy DeCamp Wilson, Telling More Than We Can Know: Verbal Reports on Mental Processes, 84 Psychol. Rev. 231, 251–57 (1977).

[37]. Introduced by Professors Marco Ribeiro, Sameer Singh, and Carlos Guestrin, “local interpretable model-agnostic explanations” refers to a computer science technique that attempts to explain the predictions of any classifier by learning an interpretable model around the primary prediction. See Marco T. Ribeiro et al., “Why Should I Trust You?”: Explaining the Predictions of Any Classifier, ACM 1 (Aug. 2016), https://www.kdd.org/kdd2016/papers/files/rfp0573-ribeiroA.pdf.

 [38]. See Michael Chui et al., What AI Can and Can’t Do Yet for Your Business, McKinsey Q., Jan. 2018, at 7, https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/what-ai-can-and-cant-do-yet-for-your-business.

 

Technology-Enabled Coin Flips for Judging Partisan Gerrymandering

From Volume 93, Postscript (May 2019)


 

 Technology-Enabled Coin Flips for Judging Partisan Gerrymandering

Wendy K. Tam Cho[*]

This Term, the Supreme Court heard oral arguments in a set of twin partisan gerrymandering cases, one brought by Democrats, Rucho v. Common Cause,[1] and the other by Republicans, Benisek v. Lamone.[2] This was not the first time the Court has considered this issue: partisan gerrymandering has now come before twenty-one Justices of the Supreme Court, without resolution. Over the history of these cases, it has remained uncontroversial that the Elections Clause in Article I, Section 4 of the U.S. Constitution gives states the right, and indeed wide latitude, to prescribe the “times, places and manner” of congressional elections. That includes the drawing of electoral boundaries. At the same time, the power of legislatures is not unfettered. And it is the role of the Supreme Court to guard against unconstitutional legislative acts.

As with every other legal issue that comes before the Court, reconciling the state’s discretion with the Supreme Court’s role in judicial review requires a judicially manageable standard that allows the Court to determine when a legislature has overstepped its bounds. Without a judicially discoverable and manageable standard, the Court is unable to develop clear and coherent principles to form its judgments, and challenges to partisan gerrymandering would thus be non-justiciable.

In the partisan gerrymandering context, such a standard needs to distinguish between garden-variety and excessive uses of partisanship. The Court has stated that partisanship may be used in redistricting, but it may not be used “excessively.” In Vieth v. Jubelirer, Justice Scalia clarified: “Justice Stevens says we ‘er[r] in assuming that politics is “an ordinary and lawful motive”’ in districting, but all he brings forward to contest that is the argument that an excessive injection of politics is unlawful. So it is, and so does our opinion assume.”[3] Justice Souter, in a dissent joined by Justice Ginsburg, expressed a similar idea: courts must intervene, he says, when “partisan competition has reached an extremity of unfairness.”[4]

At oral argument in Rucho, attorney Emmet Bondurant argued that “[t]his case involves the most extreme partisan gerrymander to rig congressional elections that has been presented to this Court since the one-person/one-vote case.”[5] Justice Kavanaugh replied, “when you use the word ‘extreme,’ that implies a baseline. Extreme compared to what?”[6]

Herein lies the issue that the Court has been grappling with in partisan gerrymandering claims. What is the proper baseline against which to judge whether partisanship has been used excessively? And how can this baseline be incorporated into a judicially manageable standard?

I. The Promise of Technology

Fifteen years ago in Vieth, Justice Kennedy wrote the following:

Technology is both a threat and a promise. On the one hand, if courts refuse to entertain any claims of partisan gerrymandering, the temptation to use partisan favoritism in districting in an unconstitutional manner will grow. On the other hand, these new technologies may produce new methods of analysis that make more evident the precise nature of the burdens gerrymanders impose on the representational rights of voters and parties.[7]

Indeed, more sophisticated technology has fueled the threat of gerrymandering. With the aid of computers and advanced software, map drawers now have the ability to adhere tightly and meticulously to legal districting practices while simultaneously and surreptitiously entrenching power. Moreover, computing power and software sophistication are only improving over time—a fact certainly not lost on Justice Kagan, who last year wrote in Gill v. Whitford, “[t]he 2010 redistricting cycle produced some of the worst partisan gerrymanders on record. The technology will only get better, so the 2020 cycle will only get worse.”[8]

In short, the threat of technology for gerrymandering is real and looms more ominously daily. However, it appears that the Justices are now seeing a possible glimmer of hope: the day of technology’s promise to help identify and curb gerrymandering may have arrived, or is, at least, arriving.

The Court now appears to accept the idea that in addition to aiding nefarious intent, computers may also help detect such intent in litigation by generating large numbers of maps that embody only the neutral districting criteria. When humans draw maps, it is difficult to enumerate all of the criteria that are considered for a particular map. However, with a computer, the criteria are well-specified and known. One must explicitly choose which criteria to include and which to exclude. At oral argument in Rucho, Justice Alito acknowledged as much:

If you make a list of the so-called neutral criteria—compactness, contiguity, protecting incumbents, if that’s really neutral, respecting certain natural features of the geography—and you have a computer program that includes all of those and weights them all . . . at the end, what you get is a large number of maps that satisfy all those criteria. And I think that’s realistic. That’s what you will get.[9]

The Court also seems to accept that one could use such a set of maps as some sort of “baseline.” Justice Kagan stated that “[t]he benchmark is the natural political geography of the state plus all the districting criteria, except for partisanship.”[10]
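In code, the baseline comparison the Justices describe might look like the following sketch. The ensemble here is a synthetic placeholder, since producing real baseline maps requires a districting algorithm and actual precinct-level data; the 24,000 figure simply echoes Justice Alito’s hypothetical.

```python
import numpy as np

# Stand-in for an ensemble of computer-drawn maps satisfying only the
# neutral criteria: for each simulated map, the seats (out of 13) the
# map-drawing party would win under the actual vote pattern.
rng = np.random.default_rng(0)
ensemble_seats = rng.binomial(13, 0.55, size=24_000)  # synthetic placeholder

enacted_seats = 10  # e.g., the 10-3 split targeted in North Carolina

# Where does the enacted plan fall in the baseline distribution?
frac_as_extreme = np.mean(ensemble_seats >= enacted_seats)
print(f"Fraction of neutral maps at least this favorable: {frac_as_extreme:.4f}")
```

A plan far out on the tail of this distribution is what challengers characterize as evidence of excessive partisanship, which is precisely the inference the Justices probe in the next Part.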

II. The Barriers to Connecting Technology with the Law

While the Court appears to be in agreement that a baseline of non-partisan maps can be created, it struggles with a way to incorporate this baseline into a judicially manageable standard that allows us to identify a partisan gerrymander. For the Justices, there is not yet a satisfactory connection between the baseline that they believe the technology can now create and the requirements of the Court for a judicially manageable standard.

There appear to be two main barriers. The first is what the Justices see as a connection to proportional representation (PR). Justice Gorsuch seems particularly suspicious that the baseline of non-partisan maps provides nothing more than a test for proportional representation in disguise. When he sees the range of partisan outcomes that emerge from the baseline of non-partisan maps, he does not see how one can use those maps to identify a partisan gerrymander. He envisions that there must be a “cutoff” where partisanship becomes excessive. But, to identify that point, Gorsuch asks, “aren’t we just back in the business of deciding what degree of tolerance we’re willing to put up with from proportional representation?”[11] Justice Alito is similarly perplexed about how one might utilize the baseline set of non-partisan maps:

[I]f you have 24,000 maps that satisfy all of the so-called neutral criteria that you put in your computer program, don’t you need a criterion or criteria for deciding which of the 24,000 maps you’re going to choose? . . . [I]mplicit . . . is the idea, is it not, that you have to choose one that honors proportional representation? You have no other criteria for distinguishing among the 24,000 maps.[12]

While large deviations from PR may raise suspicion and seem intuitively problematic to the public eye, the judiciary is unequivocal that PR is inconsistent with geographically defined single-member districts. Hence, this seeming connection to PR is problematic given the long history of the Supreme Court’s emphasis that our system of government is explicitly not one of proportional representation. Accordingly, a judicial standard cannot simply require PR or an outcome “close to PR.”

A second issue is that the Constitution grants wide discretion to the states in devising their electoral maps. Neither the appellants nor the appellees in North Carolina’s redistricting case disagree. The disagreement, rather, stems from how this wide discretion affects the use and interpretation of the baseline maps.

The challengers argue that “[t]he legislature has wide discretion, as long as it does not attempt to do two things, dictate electoral outcomes, [or] favor or disfavor a class of candidates.”[13] It is true that the legislature has wide discretion so long as it does not violate the Constitution. However, the challengers did not articulate a standard for how we would know that the legislature is dictating electoral outcomes other than to say that the legislature’s map has a partisan effect that is not one of the common effects in the baseline set of maps. The challengers’ argument, in essence, is that being on the tail of the distribution (i.e., producing an unusually uncommon partisan effect) is de facto evidence of the state overstepping its discretionary powers. We have already discussed Justice Gorsuch’s objection to this articulation: that this characterization of an unconstitutional gerrymander is conceptually indistinguishable from a PR standard.

Within the specific facts of the North Carolina case, the challengers also argue that statements made by the legislature show that partisanship was the predominant factor and a “material factor” in creating the map. In particular, David Lewis, a Harnett County Republican and the House redistricting leader at the time, stated that the map was drawn “to give a partisan advantage to ten Republicans and three Democrats because [I do] not believe [it’s] possible to draw a map with eleven Republicans and two Democrats.”[14] Chief Justice Roberts did not take issue with the particular facts present in the North Carolina case, but also did not see how they would then translate into a general principle to govern how the baseline set of maps would help identify the degree of partisanship utilized in future partisan gerrymandering cases.

The state of North Carolina, on the other hand, points out that all of the baseline maps are properly conceived of as non-partisan since they were all drawn without partisan information. Accordingly, they say, all of these maps would be within the legislature’s discretion to enact. The state looks at the large set of baseline North Carolina maps “with partisanship taken out entirely,” and observes that “you get 162 different maps that produce a 10/3 Republican split.”[15] From here, they argue that when the legislature is devising its particular map, it is “about as discretionary a government function as one could imagine.”[16] In other words, the legislature cannot be dictating outcomes when no partisan information is even being utilized. Therefore, the argument goes, all of these declaredly non-partisan maps, and thus their partisan effects, fall within the legislature’s discretion.

The dispute here is about what the tails of the distribution of partisan effects from the baseline set of maps indicate. Do they indicate “dictating outcomes” as the challengers argue or are all of the maps, tail or not, within the legislature’s “discretionary powers” as the state argues? More importantly for the Court, how does one distinguish “dictating outcomes” from “discretionary power?”

In short, the Court is not skeptical about whether a baseline of non-partisan maps can be created. It is skeptical about whether it can reconcile a baseline it believes exists with the wide latitude conferred on the states by the Elections Clause and with our system of representation, which is explicitly not proportional representation.

III. A Judicially Manageable Standard

I argue that when the application of the “new technology” is properly conceived and executed, neither the issue of proportional representation nor our commitment to states’ rights in prescribing the “times, places and manner” of congressional elections remains problematic. In fact, both are part and parcel of a judicially manageable standard.

First, let us establish the relationship of PR with the baseline set of maps. Because partisan information is necessary to determine PR and no partisan information is used in the construction of the baseline maps, we can say, unequivocally, that PR plays no role in the construction of the baseline set of maps. Instead, the computer-drawn maps are constrained only by the locations where the particular people in the state reside and the neutral map-drawing criteria.

If partisans are randomly dispersed throughout the state and there are roughly an equal number in each party, PR is, unsurprisingly, a natural outcome. When partisans cluster geographically, this type of political geography undermines PR in the sense that a “natural outcome” would more likely be further from the PR outcome. The size of the discrepancy between PR and the common outcomes in the baseline non-partisan maps depends on the state and the precise pattern of political geography and degree of clustering. Sometimes political geography works strongly against PR. In other cases, the political geography may have only a small impact. This concept appears to be well understood by the Court. In Vieth, Justice Scalia wrote the following:

Consider, for example, a legislature that draws district lines with no objectives in mind except compactness and respect for the lines of political subdivisions. Under that system, political groups that tend to cluster (as is the case with Democratic voters in cities) would be systematically affected by what might be called a “natural” packing effect.[17] 

In other words, if Democrats tend to cluster in cities, rather than being randomly dispersed across the state, then this “political geography” that is created by their tendency toward urban clustering results in Democrats being “packed” into the same districts because the map drawer may be trying to keep cities and counties together—an objective that the Court accepts as neutral and not partisan per se.

In addition, if the partisans are not roughly proportional, PR is less likely to be the outcome. We have long known that if a state’s partisans are split, say, 70 percent Republican to 30 percent Democrat, then almost surely, the Republicans will win all of the state’s seats unless the Democrats are unusually clustered so that it is possible to place them in a district where they command the majority vote. Here again is an interactive effect between political geography and the degree to which PR is even possible—though this time, clustering would work in favor of the minority party.
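
To make this interaction concrete, consider a deliberately simplified simulation, offered only as a toy sketch for intuition. The one-dimensional “state,” the equal-population precincts, and the 70/30 electorate are all illustrative assumptions, not data from any actual case:

import random

def seats_for_A(precincts, n_districts):
    # Split the line of precincts into equal-size contiguous districts
    # and count the districts where party A holds a majority.
    size = len(precincts) // n_districts
    return sum(
        1 for d in range(n_districts)
        if sum(precincts[d * size:(d + 1) * size]) > size / 2
    )

random.seed(1)

# A 70/30 electorate across 1,000 equal-population precincts:
# 1 = a party A precinct (the 30% minority), 0 = a party B precinct.
voters = [1] * 300 + [0] * 700

dispersed = voters[:]
random.shuffle(dispersed)                 # minority spread evenly
clustered = sorted(voters, reverse=True)  # minority packed at one end

print(seats_for_A(dispersed, 10))   # 0: every district is ~30% party A
print(seats_for_A(clustered, 10))   # 3: clustering lets the minority
                                    # command a few districts outright

Dispersed, the 30 percent minority commands a majority nowhere and wins no seats; clustered, it commands three of ten districts, which is precisely the interactive effect between political geography and PR described above.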

Indeed, the reason we simulate maps is to understand how political geography and neutral map-drawing criteria affect the natural partisan outcomes when partisanship information is not present. The effect of political geography is state-specific since it depends on the particular people in the state, where they reside, and other neutral criteria that may be based on, for example, city and county boundaries. One can think of the simulation process as procedurally fair in the sense that the process has no explicit partisan information guiding it.

The idea behind employing simulations to understand a process, map drawing or otherwise, is not new. The concept of frequentist probabilities and their interpretation has been well-established since at least the end of the nineteenth century.[18] We can gain some intuition about how simulations work in the familiar context of flipping coins. Suppose we want to know what typically happens when a fair coin is tossed one hundred times. Maybe in the first round of one hundred tosses, the coin lands on heads fifty-six times. In the second round, the coin lands on heads forty-eight times. We repeat this process a large number of times. These “simulations” help us understand the behavior of a fair coin: once we have repeated the process sufficiently many times, we have an accurate gauge of how a fair coin behaves.

Figure 1 shows the result when a computer simulates one hundred tosses of a fair coin and repeats the one hundred tosses three million separate times. This process shows that the outcome of more than sixty heads occurs less than 2 percent of the time. Indeed, for any number of heads, we can know how likely that outcome is to occur for a fair coin. To be sure, it is possible for a fair coin to land on heads one hundred times in one hundred tosses, but if it did, any sane person would question whether that coin was actually fair. The outcome is not impossible, but it is inordinately improbable. We can see from the figure that even seventy-five heads would be an “extreme” outcome for an allegedly fair coin. Indeed, in my actual simulation of three million repetitions, neither one hundred heads nor even seventy-five heads occurred a single time.
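
This coin-toss baseline is easy to reproduce. The short program below is a minimal sketch of the exercise behind Figure 1, with an arbitrary seed and far fewer repetitions than the three million used there:

import random

random.seed(7)
REPS, TOSSES = 200_000, 100

# Tally how often each head count occurs across many hundred-toss rounds.
counts = [0] * (TOSSES + 1)
for _ in range(REPS):
    heads = sum(random.random() < 0.5 for _ in range(TOSSES))
    counts[heads] += 1

# Estimated probability of more than sixty heads from a fair coin.
print(sum(counts[61:]) / REPS)   # about 0.018, i.e., less than 2 percent
print(sum(counts[75:]))          # almost surely 0: seventy-five or more
                                 # heads essentially never occurs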

A similar baseline and analysis can inform judgments about maps. Of course, the mechanics of drawing electoral maps are exceedingly more complex than tossing coins. Indeed, I have spent many years thinking about and researching how to do this properly,[19] but the logic is the same.

To simulate map-drawing, we repeatedly draw maps that adhere to neutral principles like equal population, preservation of cities and counties, and compactness, but do not consider partisan information. Just as with coin tosses, when properly executed, this process creates a baseline for understanding what types of outcomes emerge from a map-drawing process that does not involve explicit partisan information.
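
The sketch below conveys this logic in miniature. It is emphatically not the methodology cited in note 19: the “state” is a one-dimensional line of equal-population precincts, so contiguity is trivial, and compactness, city, and county constraints are omitted. Only the procedure matters here: fix the political geography, repeatedly draw maps using neutral criteria alone, and record the resulting seat splits as the baseline distribution.

import random

random.seed(42)
N, K, REPS = 120, 6, 20_000   # 120 equal-population precincts, 6 districts
TOL = 6                       # allowed deviation from equal district size

# A fixed "political geography": party A's vote share in each precinct,
# generated once as a spatially correlated random walk so that, as in a
# real state, like-minded voters cluster.
share, x = [], 0.5
for _ in range(N):
    x = min(0.8, max(0.2, x + random.uniform(-0.06, 0.06)))
    share.append(x)

def random_district_sizes():
    # Start from perfectly equal districts, then randomly shift precincts
    # between districts while staying within the population tolerance.
    sizes = [N // K] * K
    for _ in range(300):
        i, j = random.randrange(K), random.randrange(K)
        if sizes[i] > N // K - TOL and sizes[j] < N // K + TOL:
            sizes[i] -= 1
            sizes[j] += 1
    return sizes

def seats_for_A():
    # Draw one map: contiguous districts along the line of precincts.
    # No partisan information is consulted -- only population equality
    # and contiguity, the stand-ins here for the neutral criteria.
    sizes, seats, lo = random_district_sizes(), 0, 0
    for s in sizes:
        if sum(share[lo:lo + s]) / s > 0.5:
            seats += 1
        lo += s
    return seats

tally = [0] * (K + 1)
for _ in range(REPS):
    tally[seats_for_A()] += 1

for s, c in enumerate(tally):
    print(f"{s} seats for A: {100 * c / REPS:5.2f}% of simulated maps")

An enacted map whose seat split falls where this histogram is essentially empty is, like seventy-five heads in one hundred tosses, evidence that something beyond the neutral criteria was at work.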

Of course, as we have discussed, a state is not constrained to consider only neutral map-drawing principles—many decisions go into devising a map, and a state has wide latitude to act in the interest of its people. There are any number of criteria that can be regarded as outside the set of neutral or “traditional districting principles” but still non-partisan. One example might be a claim that Representative Lynn Wachtmann of Ohio made in the legislative record:

The community of Delphos is split with Representative Huffman and I, and let me share with you a little bit different story about what could happen with a great county like Lucas County if they care to work on both sides of the aisle. That is, they could gain more power in Washington.[20]

Wachtmann’s claim is that the splitting of this county was done not for partisan reasons but to garner more political power for the people of Ohio. Whether this is true or not, we leave aside for the moment. It could be true, and certainly, when a map is devised, the decisions that determine the boundaries should be made in the interest of the people. In this sense, that the legislature has wide latitude to work in the interest of its people is a feature, not a flaw. Indeed, there are many non-partisan decisions that may lie behind a particular map configuration. Possibly, a representative wants her church or her family’s cemetery in her district. Why a representative may want those things might be personal and completely devoid of partisan motivation. These types of decisions all fall within the wide latitude and undisputed discretionary power of the legislature to devise its electoral map.

Note that even completely non-partisan decisions have partisan effects. Every time a boundary is changed, partisans are shifted from one district to another district. This necessarily changes the partisan composition of the districts, and a partisan effect ensues. But, then, if all decisions, even non-partisan ones, have a partisan effect, how do we know if the admittedly many decisions behind a map make it “excessively partisan”? It would be impossible, almost surely, and impractical, at the very least, to try to discover all the reasons and then to determine whether each one was partisan or not.

This realization that many elements influence district boundaries is not lost on the Court. In Vieth, Justice Breyer wrote that the desirable or legal criteria

represent a series of compromises of principle—among the virtues of, for example, close representation of voter views, ease of identifying government and opposition parties, and stability in government. They also represent an uneasy truce, sanctioned by tradition, among different parties seeking political advantage.[21]

Partisan effect that arises from the compromise of principles is not problematic. The need for compromise among many factors is a given. It is well established that an important role of the legislature is to bargain and compromise in the pursuit of legislation. The issue is not the compromise of principles, but rather, determining when partisanship has been injected excessively.

To gain some insight into this conundrum, we can think about how this works with the coin toss simulation. A fair coin lands on heads roughly half the time because it is not biased toward heads or tails. Likewise, non-partisan decisions, by definition, are not biased toward one party or the other. Roughly half the time (with the exact probability again depending on the political geography of the state), a non-partisan decision will shift partisans in a way that makes a map more Republican. Roughly the other half of the time, it will shift partisans in a way that makes a map more Democratic. To be sure, every shift provides a more favorable effect for one party over the other. However, in the aggregate, for non-partisan decisions, there should be no systematic bias in favor of one party at the expense of the other.

Recall that our baseline effect emerges from only neutral criteria (the “traditional districting principles” and the law). It shows what type of partisan outcomes we expect when one employs only the neutral non-partisan map-drawing criteria. If the other motivations behind a map are non-partisan, the unintended partisan effects should wash out, just as over the course of one hundred coin flips, the tallies of heads and tails will be similar. If the partisan effects from these other decisions do not wash out (or if there are many more heads than tails), then we have evidence of partisan motivation (or unfair coins).

The stronger the cumulative partisan effect is in one direction, the greater the evidence of underlying partisan motivations. If a coin lands on heads once, no suspicion is raised. If the second flip also lands on heads, I can say that I am not bothered in the least. But if that coin lands on heads one hundred times in a row, my disbelief is boundless.

 If the legislature uses only neutral criteria, then the expected effect is reflected in the baseline set of maps. Of course, the legislature will contemplate, negotiate, and compromise. No one would argue that they should “choose” one of the baseline maps that are restricted to a small set of criteria. This would be inconsistent with the Elections Clause because it would heavily constrain the legislature rather than allowing it wide latitude. Instead, many other criteria will be considered. Importantly, the political effect from non-partisan decisions should wash out if they are truly non-partisan in nature. If one non-partisan decision results in a map that leans more favorably toward the Republicans, I am not suspicious in the least. After all, every decision moves the map in one party’s favor or the other party’s favor. If a second decision moves the map more Republican, I remain unsuspicious. As the decisions pile up and they continually move the map toward the tail of the baseline distribution, my disbelief grows.
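
This washout logic can itself be checked by simulation. In the stylized sketch below, each post-baseline decision nudges the map’s partisan effect by one unit in one party’s favor, and the only parameter is the probability that a nudge favors the Republicans (all values are illustrative):

import random

random.seed(3)

def final_effect(n_decisions, p_republican):
    # Each decision shifts the map's partisan effect one unit toward
    # the Republicans (with probability p) or the Democrats.
    return sum(1 if random.random() < p_republican else -1
               for _ in range(n_decisions))

def mean_abs_effect(p, trials=20_000, n=100):
    # Average distance from the starting point after n decisions.
    return sum(abs(final_effect(n, p)) for _ in range(trials)) / trials

print(mean_abs_effect(0.5))  # ~8: a hundred truly non-partisan decisions
                             # leave the map close to where it began
print(mean_abs_effect(0.7))  # ~40: even a modest partisan tilt reliably
                             # walks the map deep into one tail

Unbiased decisions wash out; a tilt compounds. That is why a map sitting far out on the baseline distribution is so probative.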

IV. The State of Ohio

To see how my proposed test would work in an actual redistricting case, we can examine the congressional electoral map for the state of Ohio. I served as an expert witness in the state of Ohio’s gerrymandering case, A. Philip Randolph Inst. v. Householder.[22] Since the 2010 redistricting, each of the congressional races (in 2012, 2014, 2016, and 2018) resulted in twelve Republican seats and four Democratic seats. Figure 2 shows the seat split from more than three million computer-generated maps that I created on the Blue Waters supercomputer for the state of Ohio using only the neutral districting criteria with Ohio’s population and its particular political geography. In the figure, we can see that nine Republican seats is the most commonly expected outcome. Eleven Republican seats is not common at all, and twelve Republican seats occurred so infrequently among the more than three million maps that, while the histogram bar at twelve seats is present, it is too minuscule to be visible.

Judging from the legislative record in Ohio, the legislature considered population equality, compactness, contiguity, minority representation, and the preservation of cities and counties in the construction of the current Ohio map.[23] My simulated maps do likewise. The legislature also took a number of other unspecified criteria into account. Once all of the legislature’s criteria were taken into account, the map they produced resulted in a 12/4 Republican/Democrat seat split for every set of congressional elections run under this map.

While we do not know what each of the individual decisions behind the map were, we do know that every one of their “unspecified criteria” moved the map either toward a more favorable Republican outcome or a more favorable Democratic outcome. How did they end up on the tail of the seat share distribution? It is possible that, using only the neutral districting criteria, they started at an extreme location. It is possible but, as we know, extremely unlikely—just like obtaining a highly disproportionate number of heads when tossing a fair coin one hundred times.

One could also argue that many other considerations went into the decision process. Indeed, many other decisions could have and should have entered the calculus. One could also make the claim that these decisions were not partisan. Some appear to be benign requests like splitting a military base across several districts. Other decisions may have involved an explicit attempt to protect constituents’ interests, aimed at better representation for the people of Ohio. Each of these decisions, partisan or not, changed the partisan effect of the map. But the non-partisan decisions should have no systematic bias toward the Republicans or the Democrats. Their collective partisan effect should wash out in the aggregate. On the other hand, partisan decisions surely are intended to have a specific partisan effect and move the map in the intended direction.

What we observe is a map at the far right end of the distribution of partisan effect. That means either the map began on the tail, which is extremely unlikely, or it started in a more likely spot and the subsequent decisions moved its partisan effect to that end of the distribution. If the subsequent decisions moved the map so far in one direction, it is like the coin that keeps landing on heads. If the first “decision” makes the map lean more Republican, that is not bothersome, since it has to have some partisan effect. If the second “decision” moves the map in the Republican direction again, that is also not so unusual. But if the entire set of decisions moves the partisan effect all the way to the end of the distribution, we have strong evidence that those decisions were not, in fact, all non-partisan.

Importantly, note that there are different types of partisan unfairness. An electoral map can be unfair if partisanship is used excessively so that one party’s seat share or electoral outcomes are affected. This might be observed, as we have just seen, in how many seats favor one party over the other. However, this is not the only way in which a legislature may use partisan information to usurp power from the voters. Another option is to create districts that are not competitive. When districts are not competitive, the outcome is essentially pre-determined: the voters are effectively disenfranchised because, while they are still able to cast a ballot, their ability to influence the election has been non-trivially compromised.

In my capacity as an expert witness for the Ohio gerrymandering case, I produced not just the baseline distribution shown in Figure 2, but also the one shown in Figure 3. Here, I examined how many of Ohio’s congressional districts were competitive. I defined “competitive” as resulting in an outcome that was “within a 10% margin of victory” (i.e., the winning party received no more than 55 percent of the two-party vote and the losing party received at least 45 percent of the two-party vote). Recall that I have already generated more than three million baseline maps. To be sure, when we have a set of baseline maps, there are many facets of these maps that can be examined. We are not restricted to seat shares or even the number of competitive seats. Indeed, this set of baseline maps has depth and richness on many dimensions, allowing us to explore numerous and varied facets of an electoral map. When I examined the competitiveness of Ohio’s congressional seats, I found that, commonly, half of the districts in the simulated maps were competitive. In contrast, in the current Ohio congressional map, none of the districts is competitive. So, in addition to producing a highly unusual seat split, the enacted map also exhibits a highly unusual lack of competitive seats. To be highly unusual on two partisan measures, as one can easily intuit, is even more suspect than if the current Ohio map were unusual in only one way. Maybe the first time you toss a coin one hundred times, the coin lands on heads an unusually large number of times. Unusual events like this do happen. But if you toss that coin one hundred times again and a second unusual outcome occurs, the evidence is undeniably stacking up against that coin being fair.
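
The competitiveness measure itself is simple to operationalize. A minimal sketch, using hypothetical winner vote shares rather than actual Ohio data:

def competitive_districts(winner_shares, margin=0.10):
    # "Within a 10% margin of victory": the winner received no more than
    # 55 percent of the two-party vote, so the loser got at least 45.
    return sum(1 for w in winner_shares if w <= 0.5 + margin / 2)

# A hypothetical six-district map with two genuinely contested seats.
print(competitive_districts([0.53, 0.54, 0.61, 0.64, 0.58, 0.70]))  # 2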

Surely a map can be unusual on only one dimension. For instance, if a North Carolina map resulted in a 7/6 seat split, the fact that this outcome is “close to PR” would not exonerate it from other possible gerrymandering claims. We see clearly here that the baseline set of maps is not about some assessment of PR. Rather, it is far richer, allowing us to scrutinize many facets of partisan unfairness. If that 7/6 map is sufficiently uncompetitive that the voters have very little ability to change the outcome, then that map “dictates outcomes” and can be regarded as unconstitutional in that way. What makes a map unfair is not a deviation from any sense of proportional representation. What makes it unfair is the evidence that excessive partisanship was utilized.

V. Rigorous Identification of Partisan Gerrymandering Is Possible

When subject to litigation, a state is free to protest that its legislature’s map has been improperly identified as “excessively partisan.” That state can also present exculpatory evidence. Clearly, a map drawn free of partisanship can have an extreme partisan effect that emanates from neutral considerations. A fair coin also can land on heads one hundred times, but this outcome invites incredulity. Simulations can never tell us with certainty that a coin is unfair or that the decisions behind a map were excessively partisan. In both cases, the simulations provide evidence and give us a sense of the strength of that evidence. The greater the number of heads over tails, the greater the evidence against a fair coin. The further the partisan effect moves from the baseline maps, the greater the evidence that partisanship was used excessively.

Sometimes, one has a smoking gun. Perhaps a suspect was caught, covered in blood, standing over the victim, holding the murder weapon at the crime scene. In the case of North Carolina, one may or may not regard Representative David Lewis’s comments about purposefully drawing a 10/3 map as this type of evidence. Barring such evidence, we still have a way to develop solid, probative, and dispositive evidence through the baseline set of maps.

The ability to create a baseline set of maps, combined with a proper and theoretically sound interpretation, allows us to honor the Elections Clause, which provides wide latitude to the states to prescribe the times, places, and manner of their elections; to support our system of geographically based single-member districts; to remain divorced from notions of proportional representation; and to maintain the Court’s oversight of the legislature by providing a judicially manageable standard that assesses whether legislative decisions are excessively partisan.

The cutoff for what qualifies as “excessive” is a legal judgment call—the bread and butter of the Supreme Court’s constitutional jurisprudence. The exact cutoff may not be clear, but the Court is the institution charged with making that judgment. What is clear is that there is a way to measure excessiveness that is consistent with the Constitution’s regard for states’ rights and the legislature’s mandate to legislate for the people. This measure is not related to proportional representation, and it serves as the basis for a judicially manageable standard.

Whether the Court analyzes partisan gerrymandering as a matter of First Amendment viewpoint discrimination, as a matter of vote dilution under the Equal Protection Clause, or as an abuse of the power delegated to states under the Elections Clause, recent technological developments now enable the Court to put judicially manageable limits on the excessive use of partisanship in designing election districts. Technology has surely fueled the threat and growth of gerrymandering by providing a tool for the partisan majority of a state legislature to draw self-serving electoral boundaries, but it also now fulfills its promise by providing the basis for a judicially manageable standard to help judge whether electoral maps are excessively partisan.

 


[*] Professor in the Departments of Political Science, Statistics, Mathematics, and Asian American Studies, the College of Law, and Senior Research Scientist at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign. She has served as an expert witness in redistricting litigation and has published research on technological innovations for redistricting analysis in computer science, operations research, statistics, physics, political science, and law.

 [1]. Transcript of Oral Argument, Rucho v. Common Cause, No. 18-442 (U.S. Mar. 26, 2019).

 [2]. Transcript of Oral Argument, Benisek v. Lamone, No. 17-333 (U.S. Mar. 28, 2019).

 [3]. Vieth v. Jubelirer, 541 U.S. 267, 293 (2004) (alteration and emphasis in original) (internal quotation marks omitted).

 [4]. Id. at 344 (Souter, J., dissenting).

 [5]. Transcript of Oral Argument, supra note 1, at 38.

 [6]. Id.

 [7]. Vieth, 541 U.S. at 312–13 (Kennedy, J., concurring).

 [8]. Gill v. Whitford, 138 S. Ct. 1916, 1941 (2018) (Kagan, J., concurring) (citation omitted).

 [9]. Transcript of Oral Argument, supra note 1, at 42.

 [10]. Id. at 27 (emphasis added).

 [11]. Id. at 43–44.

 [12]. Id. at 30–31.

 [13]. Id. at 43.

 [14]. Common Cause v. Rucho, 279 F. Supp. 3d 587, 604 (M.D.N.C. 2018).

 [15]. Transcript of Oral Argument, supra note 1, at 30.

 [16]. Id. at 29.

 [17]. Vieth v. Jubelirer, 541 U.S. 267, 289–90 (2004) (citation omitted).

 [18]. For the early development and discussion of these concepts, see generally A. A. Cournot, Exposition de la Théorie des Chances et des Probabilités (1843); John Venn, The Logic of Chance: An Essay on the Foundations and Province of the Theory of Probability (1888); Robert Leslie Ellis, On the Foundations of the Theory of Probabilities, in Mathematical Proceedings of the Cambridge Philosophical Society (B.J. Green et al., eds., 1844).

 [19]. Wendy K. Tam Cho & Simon Rubinstein-Salzedo, Understanding Significance Tests from a Non-Mixing Markov Chain for Partisan Gerrymandering Claims, 6 Stats. and Pub. Pol’y (forthcoming 2019), https://www.tandfonline.com/doi/full/10.1080/2330443X.2019.1574687; Wendy K. Tam Cho & Yan Y. Liu, A Massively Parallel Evolutionary Markov Chain Monte Carlo Algorithm for Sampling Complicated Multimodal State Spaces, in SC18: The International Conference for High Performance Computing, Networking, Storage and Analysis (2018), https://sc18.supercomputing.org/proceedings//tech_poster/poster_files/post173s2-file3.pdf; Bruce E. Cain, Wendy K. Tam Cho, Yan Y. Liu & Emily Zhang, A Reasonable Bias Approach to Gerrymandering: Using Automated Plan Generation to Evaluate Redistricting Proposals, 59 Wm. & Mary L. Rev. 1521 (2018); Wendy K. Tam Cho & Yan Y. Liu, Sampling from Complicated and Unknown Distributions: Monte Carlo and Markov Chain Monte Carlo Methods for Redistricting, 506 Physica A 170 (2018); Wendy K. Tam Cho & Yan Y. Liu, Massively Parallel Evolutionary Computation for Empowering Electoral Reform: Quantifying Gerrymandering via Multi-objective Optimization and Statistical Analysis, in SC17: The International Conference for High Performance Computing, Networking, Storage and Analysis (2017), https://sc17.supercomputing.org/SC17%20Archive/tech_poster/poster_files/post211s2-file3.pdf; Wendy K. Tam Cho, Measuring Partisan Fairness: How Well Does the Efficiency Gap Guard Against Sophisticated as well as Simple-Minded Modes of Partisan Discrimination? 166 U. Pa. L. Rev. Online 17 (2017); Yan Y. Liu, Wendy K. Tam Cho & Shaowen Wang, PEAR: A Massively Parallel Evolutionary Computation Approach for Political Redistricting Optimization and Analysis, 30 Swarm and Evolutionary Computation 78 (2016); Wendy K. Tam Cho & Yan Y. Liu, Toward a Talismanic Redistricting Tool: A Computational Method for Identifying Extreme Redistricting Plans, 15 Election L.J. 351 (2016); Yan Y. Liu, Wendy K. Tam Cho & Shaowen Wang, A Scalable Computational Approach to Political Redistricting Optimization, in Proceedings of the XSEDE 2015 Conference: Scientific Advancements Enabled by Enhanced Cyberinfrastructure (2015) https://dl.acm.org/citation.cfm?doid=2792745.2792751; Douglas M. King, Sheldon H. Jacobson, Edward C. Sewell & Wendy K. Tam Cho, Geo-Graphs: An Efficient Model for Enforcing Contiguity and Hole Constraints in Planar Graph Partitioning, 60 Operations Res. 1213 (2012).

 [20]. H. & S. Rep. No. 319, pts. 12, at 28 (Ohio 2011).

 [21]. Vieth, 541 U.S. at 360 (Breyer, J., dissenting).

 [22]. Ohio A. Philip Randolph Inst. v. Householder, No. 18-cv-357, 2019 U.S. Dist. LEXIS 24736, at *40–41 (S.D. Ohio Feb. 15, 2019).

 [23]. See Wendy K. Tam Cho, Expert Witness Testimony filed in Ohio A. Philip Randolph Inst. v. Householder, No. 18-cv-357, 2019 U.S. Dist. LEXIS 24736, at *40–41 (S.D. Ohio), Oct. 5, 2018.