Democracy Dies in Silicon Valley: Platform Antitrust and the Journalism Industry

Newspapers are classic examples of platforms. They are intermediaries between, and typically require participation from, two distinct groups: on the one hand, there are patrons eager to read the latest scoop; on the other hand, there are advertisers offering their goods and services on the outer edges of the paper in hopes of soliciting sales. More than mere examples of platform economics, however, newspapers and the media industry play an irreplaceable role in the functioning of our democracy by keeping us informed. From behemoths such as the Jeff Bezos–owned Washington Post to outlets like the Hungry Horse News in the small town of Columbia Falls, Montana, the press lets us know what is happening on both the national and local levels. However, the age of the Internet and the corresponding emergence of new two-sided platforms is decimating the media industry.[1] In a world where more users get their news on social media platforms like Facebook than in print,[2] the survival of quality journalism depends in large part on whether the media industry can tap into the flow of digital advertising revenue, the majority of which goes to just two corporations founded around the start of the new millennium.

Facebook and Google, formed respectively in 2004 and 1998, are new types of platforms aiming to accomplish what newspapers have done for centuries: attract a large consumer base and solicit revenue from advertisers. However, unlike the fungible papers newsies once distributed hot off the presses, Facebook and Google connect advertisers and consumers in a more sophisticated, yet opaque manner. Facebook and Google are free to consumers insofar as users do not pay with money to surf the web or connect virtually with their friends. Instead, the companies collect information about users based on their online activity, and complex algorithms connect those users with targeted advertisements.[3] This new method of connecting Internet users and advertisers has been wildly successful, creating a tech duopoly profiting from nearly sixty percent of all digital advertising spending in the United States.[4]

          [1].      Throughout this Note, I refer to the journalism industry also as the “media” industry and the “news media” industry. Although there are undoubtedly nuanced differences between journalism and news media, for the purposes of this Note, I draw no distinction between them.

          [2].      Elisa Shearer, Social Media Outpaces Print Newspapers in the U.S. as a News Source, Pew Rsch. Ctr. (Dec. 10, 2018), [].

          [3].      Although I may not be interested in an upcoming Black Friday deal for chainsaws posted in a physical publication of the Hungry Horse News, Facebook and Google are—based on my history and activity on the platforms—aware of my affinity for things like antitrust law and coffee, and so their algorithms are likely to present advertisements to me for items such as books written by Herbert Hovenkamp and expensive burr coffee grinders.

          [4].      Felix Richter, Amazon Challenges Ad Duopoly, Statista (Feb. 21, 2019), https:// [].

* Executive Senior Editor, Southern California Law Review, Volume 95; J.D. Candidate 2022, University of Southern California Gould School of Law. I would like to thank Professor Erik Hovenkamp for serving as my advisor. All mistakes are my own.


Article | Computer & Internet Law
On Electronic Word of Mouth and Consumer Protection: A Legal and Economic Analysis
by Jens Dammann*

From Vol. 94, No. 3
94 S. Cal. L. Rev. 423 (2021)

Keywords: Internet and Technology Law, Product Reviews, Consumer Protection

The most fundamental challenge in consumer protection law lies in the information asymmetry that exists between merchants and consumers. Merchants typically know far more about their products and services than consumers do, and this imbalance threatens the fairness of consumer contracts.

However, some scholars now argue that online consumer reviews play a crucial role in bridging the information gap between merchants and consumers. According to this view, consumer reviews are an adequate substitute for some of the legal protections that consumers currently enjoy.

This Article demonstrates that such optimism is unfounded. Consumer reviews are—and will remain—a highly flawed device for protecting consumers, and their availability therefore cannot justify dismantling existing legal protections.

This conclusion rests on three main arguments. First, there are fundamental economic reasons why even well-designed consumer review systems cannot eliminate information asymmetries between merchants and consumers.

Second, unscrupulous merchants undermine the usefulness of reviews by manipulating the review process. While current efforts to stamp out fake reviews may help to eliminate some of the most blatant forms of review fraud, sophisticated merchants can easily resort to more refined forms of manipulation that are much more difficult to address.

Third, even if the firms operating consumer review systems were able to remedy all the various shortcomings that such systems have, it is highly unlikely that they would choose to do so: by and large, the firms using review systems lack the right incentives to optimize them.

*. Ben H. and Kitty King Powell Chair in Business and Commercial Law, The University of Texas School of Law. For research assistance or editing, or both, I am grateful to Jael Dammann, Elizabeth Hamilton, Stella Fillmore-Patrick, and Jean Raveney.



Article | Technology
Administering Artificial Intelligence
by Alicia Solow-Niederman*

From Vol. 93, No. 4 (September 2020)
93 S. Cal. L. Rev. 633 (2020)

Keywords: Artificial Intelligence, Data Governance

As AI increasingly features in everyday life, it is not surprising to hear calls to step up regulation of the technology. In particular, a turn to administrative law to grapple with the consequences of AI is understandable because the technology’s regulatory challenges appear facially similar to those in other technocratic domains, such as the pharmaceutical industry or environmental law. But AI is unique, even if it is not different in kind. AI’s distinctiveness comes from technical attributes—namely, speed, complexity, and unpredictability—that strain administrative law tactics, in conjunction with the institutional settings and incentives, or strategic context, that affect its development path. And this distinctiveness means both that traditional, sectoral approaches hit their limits, and that turns to a new agency like an “FDA for algorithms” or a “federal robotics commission” are of limited utility in constructing enduring governance solutions.

This Article assesses algorithmic governance strategies in light of the attributes and institutional factors that make AI unique. In addition to technical attributes and the contemporary imbalance of public and private resources and expertise, AI governance must contend with a fundamental conceptual challenge: algorithmic applications permit seemingly technical decisions to de facto regulate human behavior, with a greater potential for physical and social impact than ever before. This Article warns that the current trajectory of AI development, which is dominated by large private firms, augurs an era of private governance. To maintain the public voice, it suggests an approach rooted in governance of data—a fundamental AI input—rather than only contending with the consequences of algorithmic outputs. Without rethinking regulatory strategies to ensure that public values inform AI research, development, and deployment, we risk losing the democratic accountability that is at the heart of public law.

*. 2020–2022 Climenko Fellow and Lecturer on Law, Harvard Law School; 2017–2019 PULSE Fellow, UCLA School of Law and 2019-2020 Law Clerk, U.S. District Court for the District of Columbia. Alicia Solow-Niederman drafted this work during her tenure as a PULSE Fellow, and the arguments advanced here are made in her personal capacity. This Article reflects the regulatory and statutory state of play as of early March 2020. Thank you to Jon Michaels, Ted Parson, and Richard Re for substantive engagement and tireless support; to Jennifer Chacon, Ignacio Cofone, Rebecca Crootof, Ingrid Eagly, Joanna Schwartz, Vivek Krishnamurthy, Guy Van den Broeck, Morgan Weiland, Josephine Wolff, Jonathan Zittrain, participants at We Robot 2019, and the UCLAI working group for invaluable comments and encouragement; to Urs Gasser for conversations that inspired this research project; and to the editors of the Southern California Law Review for their hard work in preparing this Article for publication. Thanks also to the Solow-Niederman family and especially to Nancy Solow for her patience and kindness, and to the Tower 26 team for helping me to maintain a sound mind in a sound body. Any errors are my own.


Note | Privacy Law
Data Protection in the Wake of the GDPR: California’s Solution for Protecting “the World’s Most Valuable Resource”

by Joanna Kessler*

From Vol. 93, No. 1 (November 2019)
93 S. Cal. L. Rev. 99 (2019)

Keywords: California Consumer Privacy Act (CCPA), General Data Protection Regulation (GDPR)

This Note will argue that although the CCPA was imperfectly drafted, much of the world seems to be moving toward a standard that embraces data privacy protection, and the CCPA is a positive step in that direction. However, the CCPA does contain several ambiguous and potentially problematic provisions, including possible First Amendment and Dormant Commerce Clause challenges, that should be addressed by the California Legislature. While a federal standard for data privacy would make compliance considerably easier, any such law enacted in the near future is unlikely to offer data privacy protections as significant as the CCPA’s; it would instead be a watered-down version of the CCPA that preempts attempts by California and other states to establish strong, comprehensive data privacy regimes. Ultimately, the United States should adopt a federal standard that offers consumers protections as strong as those of the GDPR or the CCPA. Part I of this Note will describe the elements of the GDPR and the CCPA and will offer a comparative analysis of the regulations. Part II of this Note will address potential shortcomings of the CCPA, including a constitutional analysis of the law and its problematic provisions. Part III of this Note will discuss the debate between consumer privacy advocates and technology companies regarding federal preemption of strict laws like the CCPA. It will also make predictions about, and offer solutions for, the future of the CCPA and United States data privacy legislation based on a discussion of global data privacy trends and possible federal government actions.

*. Executive Senior Editor, Southern California Law Review, Volume 93; J.D. Candidate 2020, University of Southern California Gould School of Law; B.A., Sociology 2013, Kenyon College. 




Article | Legal Theory
The Impact of Artificial Intelligence on Rules, Standards, and Judicial Discretion
by Frank Fagan & Saul Levmore*

From Vol. 93, No. 1 (November 2019)
93 S. Cal. L. Rev. 1 (2019)

Keywords: Artificial Intelligence, Machine Learning, Algorithmic Judging



Artificial intelligence (“AI”), and machine learning in particular, promises lawmakers greater specificity and fewer errors. Algorithmic lawmaking and judging will leverage models built from large stores of data that permit the creation and application of finely tuned rules. AI is therefore regarded as something that will bring about a movement from standards towards rules. Drawing on contemporary data science, this Article shows that machine learning is less impressive when the past is unlike the future, as it is whenever new variables appear over time. In the absence of regularities, machine learning loses its advantage and, as a result, looser standards can become superior to rules. We apply this insight to bail and sentencing decisions, as well as familiar corporate and contract law rules. More generally, we show that a Human-AI combination can be superior to AI acting alone. Just as today’s judges overrule errors and outmoded precedent, tomorrow’s lawmakers will sensibly overrule AI in legal domains where the challenges of measurement are present. When measurement is straightforward and prediction is accurate, rules will prevail. When empirical limitations such as overfit, Simpson’s Paradox, and omitted variables make measurement difficult, AI should be trusted less and law should give way to standards. We introduce readers to the phenomenon of reversal paradoxes, and we suggest that in law, where huge data sets are rare, AI should not be expected to outperform humans. But more generally, where empirical limitations are likely, including overfit and omitted variables, rules should be trusted less, and law should give way to standards.

*. Fagan is an Associate Professor of Law at the EDHEC Business School, France; Levmore is the William B. Graham Distinguished Service Professor of Law at the University of Chicago Law School. We are grateful for the thoughtful comments we received from William Hubbard, Michael Livermore, and Christophe Croux, as well as participants of the University of Chicago School of Law faculty workshop. 




Article | Regulations
The New Data of Student Debt
by Christopher K. Odinet*

From Vol. 92, No. 6 (September 2019)
92 S. Cal. L. Rev. 1617 (2019)

Keywords: Student Loan, Education-Based Data Lending, Financial Technology (Fintech)



Where you go to college and what you choose to study has always been important, but, with the help of data science, it may now determine whether you get a student loan. Silicon Valley is increasingly setting its sights on student lending. Financial technology (“fintech”) firms such as SoFi, CommonBond, and Upstart are ever-expanding their online lending activities to help students finance or refinance educational expenses. These online companies are using a wide array of alternative, education-based data points—including applicants’ chosen majors, assessment scores, the college or university they attend, job history, and cohort default rates—to determine creditworthiness. Fintech firms argue that through their low overhead and innovative approaches to lending they are able to widen access to credit for underserved Americans. Indeed, there is much to recommend regarding the use of different kinds of information about young consumers in order to assess their financial ability. Student borrowers are notoriously disadvantaged by the extant scoring system that heavily favors having a past credit history. Yet there are also downsides to the use of education-based, alternative data by private lenders. This Article critiques the use of this education-based information, arguing that while it can have a positive effect in promoting social mobility, it could also have significant downsides. Chief among these are reifying existing credit barriers along lines of wealth and class and further contributing to discriminatory lending practices that harm women, black and Latino Americans, and other minority groups. The discrimination issue is particularly salient because of the novel and opaque underwriting algorithms that facilitate these online loans. This Article concludes by proposing three-pillared regulatory guidance for private student lenders to use in designing, implementing, and monitoring their education-based data lending programs.

*. Associate Professor of Law and Affiliate Associate Professor in Entrepreneurship, University of Oklahoma, Norman, OK. The Author thanks Aryn Bussey, Seth Frotman, Michael Pierce, Tianna Gibbs, Avlana Eisenberg, Richard C. Chen, Kaiponanea Matsumara, Sarah Dadush, Jeremy McClane, Emily Berman, Donald Kochan, Erin Sheley, Melissa Mortazavi, Roger Michalski, Kit Johnson, Eric Johnson, Sarah Burstein, Brian Larson, John P. Ropiequet, the participants and the editorial board of the Loyola Consumer Law Review Symposium on the “Future of the CFPB,” the participants of the Central States Law Schools Association Conference, the faculty at the University of Iowa College of Law, and Kate Sablosky Elengold for their helpful comments and critiques on earlier drafts, either in writing or in conversation. This Article is the second in a series of works under the auspices of the Fintech Finance Project, which looks to study the development of law and innovation in lending. As always, the Author thanks the University of Oklahoma College of Law’s library staff for their skillful research support. All errors and views are the Author’s alone.




Article | Regulations
Binary Governance: Lessons from the GDPR’s Approach to Algorithmic Accountability
by Margot E. Kaminski*

From Vol. 92, No. 6 (September 2019)
92 S. Cal. L. Rev. 1529 (2019)

Keywords: Algorithmic Decision-Making, General Data Protection Regulation (GDPR)



Algorithms are now used to make significant decisions about individuals, from credit determinations to hiring and firing. But they are largely unregulated under U.S. law. A quickly growing literature has split on how to address algorithmic decision-making, with individual rights and accountability to nonexpert stakeholders and to the public at the crux of the debate. In this Article, I make the case for why both individual rights and public- and stakeholder-facing accountability are not just goods in and of themselves but crucial components of effective governance. Only individual rights can fully address dignitary and justificatory concerns behind calls for regulating algorithmic decision-making. And without some form of public and stakeholder accountability, collaborative public-private approaches to systemic governance of algorithms will fail.

In this Article, I identify three categories of concern behind calls for regulating algorithmic decision-making: dignitary, justificatory, and instrumental. Dignitary concerns lead to proposals that we regulate algorithms to protect human dignity and autonomy; justificatory concerns caution that we must assess the legitimacy of algorithmic reasoning; and instrumental concerns lead to calls for regulation to prevent consequent problems such as error and bias. No one regulatory approach can effectively address all three. I therefore propose a two-pronged approach to algorithmic governance: a system of individual due process rights combined with systemic regulation achieved through collaborative governance (the use of private-public partnerships). Only through this binary approach can we effectively address all three concerns raised by algorithmic decision-making, or decision-making by Artificial Intelligence (“AI”).

The interplay between the two approaches will be complex. Sometimes the two systems will be complementary, and at other times, they will be in tension. The European Union’s (“EU’s”) General Data Protection Regulation (“GDPR”) is one such binary system. I explore the extensive collaborative governance aspects of the GDPR and how they interact with its individual rights regime. Understanding the GDPR in this way both illuminates its strengths and weaknesses and provides a model for how to construct a better governance regime for accountable algorithmic, or AI, decision-making. It shows, too, that in the absence of public and stakeholder accountability, individual rights can have a significant role to play in establishing the legitimacy of a collaborative regime.

*. Associate Professor of Law, Colorado Law School; Faculty Privacy Director at Silicon Flatirons; Affiliated Fellow, Information Society Project at Yale Law School; Faculty Fellow, Center for Democracy and Technology. Many thanks to Jef Ausloos, Jack Balkin, Michael Birnhack, Frederik Zuiderveen Borgesius, Bryan H. Choi, Kiel Brennan-Marquez, Giovanni Comandé, Eldar Haber, Irene Kamara, Derek H. Kiernan-Johnson, Kate Klonick, Mark Lemley, Gianclaudio Maglieri, Christina Mulligan, W. Nicholson Price, Andrew Selbst, Alicia Solow-Niederman, and Michael Veale for reading and for detailed comments. Thanks to the Fulbright-Schuman program, Institute for Information Law (“IViR”) at the University of Amsterdam, and Scuola Sant’Anna in Pisa for the time and resources for this project. Thanks to the faculty of Tel Aviv University, the Second Annual Junior Faculty Forum on the Intersection of Law and Science, Technology, Engineering, and Math (STEM) at the Northwestern Pritzker School of Law, and my own Colorado Law School faculty for excellent workshop opportunities. Extra thanks to Matthew Cushing, whose incredible support made this project possible, and to Mira Cushing for joy beyond words.




Note | Constitutional Law
Unlock Your Phone and Let Me Read All Your Personal Content, Please: The First and Fifth Amendments and Border Searches of Electronic Devices

by Kathryn Neubauer*

From Vol. 92, No. 5 (July 2019)
92 S. Cal. L. Rev. 1273 (2019)

Keywords: First Amendment, Fourth Amendment, Fifth Amendment, Border Search Exception, Technology


Until January 2018, under the border search exception, CBP officers were afforded the power to search any electronic device without meeting any standard of suspicion or acquiring a warrant. The border search exception is a “longstanding, historically recognized exception to the Fourth Amendment’s general principle that a warrant be obtained . . . .” It provides that suspicionless and warrantless searches at the border do not violate the Fourth Amendment because such searches are “reasonable simply by virtue of the fact that they occur at the border . . . .” The CBP, claiming that the border search exception applies to electronic devices, searched more devices in 2017 than ever before, an approximately 60 percent increase over 2016, according to data released by the CBP. These “digital strip searches” violate travelers’ First, Fourth, and Fifth Amendment rights. With the advent of smartphones and the expanded use of electronic devices for storing people’s extremely personal data, these searches violate an individual’s right to privacy. Simply by travelling into the United States with a device linked to such information, a person suddenly—and, currently, unexpectedly—opens a window for the government to search through seemingly every aspect of his or her life. The policy behind these searches at the border does not align with the core principles behind our longstanding First and Fifth Amendment protections, nor does it align with the policies behind the exceptions made to constitutional rights at the border in the past.

In order to protect the privacy and rights of both citizens and noncitizens entering the United States, the procedures concerning electronic device searches need to be rectified. For instance, the border search exception should not be applied to electronic devices the same way it applies to other property or storage containers, like a backpack. One is less likely to expect privacy in the contents of a backpack than in the contents of a password- or authorization-protected device—unlike a locked device, a backpack can be taken, can be opened easily, can fall open, and also has been traditionally subjected to searches at the border. Moreover, there are many reasons why electronic devices warrant privacy.

*. Executive Notes Editor, Southern California Law Review, Volume 92; J.D., 2019, University of Southern California Gould School of Law; B.B.A., 2014, University of Michigan. My sincere gratitude to Professor Sam Erman for his invaluable feedback on early drafts of this Note as well as to Rosie Frihart, Kevin Ganley, and all the editors of the Southern California Law Review. Thank you to Brian and my family—Mark, Diane, Elisabeth, Jennifer, Alison, Rebecca, Tony, Jason, Jalal, Owen, Evelyn, Peter and Manny—for all of their love and support. Finally, a special thank you to Rebecca for reading and editing countless drafts, and to Jason for bringing this important issue to my attention.

