Article | Technology
Administering Artificial Intelligence
by Alicia Solow-Niederman*
From Vol. 93, No. 4 (September 2020)
93 S. Cal. L. Rev. 633 (2020)
Keywords: Artificial Intelligence, Data Governance
As AI increasingly features in everyday life, it is not surprising to hear calls to step up regulation of the technology. In particular, a turn to administrative law to grapple with the consequences of AI is understandable because the technology’s regulatory challenges appear facially similar to those in other technocratic domains, such as the pharmaceutical industry or environmental law. But AI is unique, even if it is not different in kind. AI’s distinctiveness comes from technical attributes—namely, speed, complexity, and unpredictability—that strain administrative law tactics, in conjunction with the institutional settings and incentives, or strategic context, that affect its development path. And this distinctiveness means both that traditional, sectoral approaches hit their limits and that turns to a new agency like an “FDA for algorithms” or a “federal robotics commission” are of limited utility in constructing enduring governance solutions.
This Article assesses algorithmic governance strategies in light of the attributes and institutional factors that make AI unique. In addition to technical attributes and the contemporary imbalance of public and private resources and expertise, AI governance must contend with a fundamental conceptual challenge: algorithmic applications permit seemingly technical decisions to de facto regulate human behavior, with a greater potential for physical and social impact than ever before. This Article warns that the current trajectory of AI development, which is dominated by large private firms, augurs an era of private governance. To maintain the public voice, it suggests an approach rooted in governance of data—a fundamental AI input—rather than only contending with the consequences of algorithmic outputs. Without rethinking regulatory strategies to ensure that public values inform AI research, development, and deployment, we risk losing the democratic accountability that is at the heart of public law.
*. 2020–2022 Climenko Fellow and Lecturer on Law, Harvard Law School; 2017–2019 PULSE Fellow, UCLA School of Law; and 2019–2020 Law Clerk, U.S. District Court for the District of Columbia. Alicia Solow-Niederman drafted this work during her tenure as a PULSE Fellow, and the arguments advanced here are made in her personal capacity. This Article reflects the regulatory and statutory state of play as of early March 2020. Thank you to Jon Michaels, Ted Parson, and Richard Re for substantive engagement and tireless support; to Jennifer Chacon, Ignacio Cofone, Rebecca Crootof, Ingrid Eagly, Joanna Schwartz, Vivek Krishnamurthy, Guy Van den Broeck, Morgan Weiland, Josephine Wolff, Jonathan Zittrain, participants at We Robot 2019, and the UCLAI working group for invaluable comments and encouragement; to Urs Gasser for conversations that inspired this research project; and to the editors of the Southern California Law Review for their hard work in preparing this Article for publication. Thanks also to the Solow-Niederman family and especially to Nancy Solow for her patience and kindness, and to the Tower 26 team for helping me to maintain a sound mind in a sound body. Any errors are my own.