Article | Regulations
Binary Governance: Lessons from the GDPR’s Approach to Algorithmic Accountability
by Margot E. Kaminski*
From Vol. 92, No. 6 (September 2019)
92 S. Cal. L. Rev. 1529 (2019)
Keywords: Algorithmic Decision-Making, General Data Protection Regulation (GDPR)
Algorithms are now used to make significant decisions about individuals, from credit determinations to hiring and firing. But they are largely unregulated under U.S. law. A quickly growing literature has split on how to address algorithmic decision-making, with individual rights and accountability to nonexpert stakeholders and to the public at the crux of the debate. In this Article, I make the case for why both individual rights and public- and stakeholder-facing accountability are not just goods in and of themselves but crucial components of effective governance. Only individual rights can fully address dignitary and justificatory concerns behind calls for regulating algorithmic decision-making. And without some form of public and stakeholder accountability, collaborative public-private approaches to systemic governance of algorithms will fail.
In this Article, I identify three categories of concern behind calls for regulating algorithmic decision-making: dignitary, justificatory, and instrumental. Dignitary concerns lead to proposals that we regulate algorithms to protect human dignity and autonomy; justificatory concerns caution that we must assess the legitimacy of algorithmic reasoning; and instrumental concerns lead to calls for regulation to prevent consequent problems such as error and bias. No one regulatory approach can effectively address all three. I therefore propose a two-pronged approach to algorithmic governance: a system of individual due process rights combined with systemic regulation achieved through collaborative governance (the use of public-private partnerships). Only through this binary approach can we effectively address all three concerns raised by algorithmic decision-making, or decision-making by Artificial Intelligence (“AI”).
The interplay between the two approaches will be complex. Sometimes the two systems will be complementary, and at other times, they will be in tension. The European Union’s (“EU’s”) General Data Protection Regulation (“GDPR”) is one such binary system. I explore the extensive collaborative governance aspects of the GDPR and how they interact with its individual rights regime. Understanding the GDPR in this way both illuminates its strengths and weaknesses and provides a model for how to construct a better governance regime for accountable algorithmic, or AI, decision-making. It shows, too, that in the absence of public and stakeholder accountability, individual rights can have a significant role to play in establishing the legitimacy of a collaborative regime.
*. Associate Professor of Law, Colorado Law School; Faculty Privacy Director at Silicon Flatirons; Affiliated Fellow, Information Society Project at Yale Law School; Faculty Fellow, Center for Democracy and Technology. Many thanks to Jef Ausloos, Jack Balkin, Michael Birnhack, Frederik Zuiderveen Borgesius, Bryan H. Choi, Kiel Brennan-Marquez, Giovanni Comandé, Eldar Haber, Irene Kamara, Derek H. Kiernan-Johnson, Kate Klonick, Mark Lemley, Gianclaudio Malgieri, Christina Mulligan, W. Nicholson Price, Andrew Selbst, Alicia Solow-Niederman, and Michael Veale for reading and for detailed comments. Thanks to the Fulbright-Schuman program, the Institute for Information Law (“IViR”) at the University of Amsterdam, and Scuola Sant’Anna in Pisa for the time and resources for this project. Thanks to the faculty of Tel Aviv University, the Second Annual Junior Faculty Forum on the Intersection of Law and Science, Technology, Engineering, and Math (STEM) at the Northwestern Pritzker School of Law, and my own Colorado Law School faculty for excellent workshop opportunities. Extra thanks to Matthew Cushing, whose incredible support made this project possible, and to Mira Cushing for joy beyond words.