Beyond Words: The Risks of Generative Interpretation

 

Judges are beginning to use large language models like ChatGPT to interpret legal texts. This Note examines whether they should do so. Prior studies testing LLMs as legal interpreters use survey responses as benchmarks for performance. I offer the first study comparing LLM interpretations to real-world judicial decisions. Across eight Ninth Circuit cases, I test whether GPT-4 Turbo (a model of ChatGPT) correctly identifies legal text as ambiguous or unambiguous. I find that ChatGPT’s assessments diverged from the court’s determinations 50% of the time. I then advance a novel argument: judicial reliance on LLMs may constitute improper ex parte communication under current judicial ethics rules.

View Full PDF

Artificial Incompetence? Unpacking AI’s Shortcomings in Contract Drafting and Negotiation

INTRODUCTION

This Note was inspired by my time as a data center procurement contracts intern during the summer after my first year of law school. In this role, I assisted contract analysts and attorneys with their procurement of space in data center facilities by contracting with data center suppliers. I regularly reviewed contract redlines from suppliers, identified non-market or disadvantageous terms in those contracts, and suggested changes for the next “turn of the redlines,” or when the company would return the contract to the supplier with new edits to the document. An impactful conversation with my manager about artificial intelligence’s potential as a useful tool in a transactional lawyer’s toolbelt inspired a deeper dive into the benefits and drawbacks of applying artificial intelligence (“AI”) to the contract drafting, redlining, and negotiation space—ultimately leading to the development of this Note.

After the internship concluded, I began my second year of law school. While the most noticeable change upon my return was that I was no longer a first-year student, I also immediately observed a greater emphasis on AI in legal education than before. My law school offered a course on AI’s legal applications, peers used AI to supplement their studies, and professors emphasized the importance of mastering AI during law school, as it would be an essential tool in future legal practice. Similarly, students at other law schools honed their negotiation skills against AI chatbots1Facing Off with a Chatbot, Univ. of Mo.: Show Me Mizzou (Sept. 26, 2024), https://showme.missouri.edu/2024/facing-off-with-a-chatbot [https://perma.cc/ZC85-FHXU]. and even developed their own AI-driven case briefing technology.2A law student at George Washington University developed “Lexplug,” a library of case briefs powered by OpenAI’s GPT-4 AI model. Lexplug includes two aptly named features: “Gunnerbot,” which enables students to have conversations with cases, and “Explain Like I’m 5,” which translates case briefs into simplified and easily digestible language. Bob Ambrogi, Law Student’s Gen AI Product, Lexplug, Makes Briefing Cases a Breeze, LawSites (Feb. 7, 2024), https://www.lawnext.com/2024/02/law-students-gen-ai-product-lexplug-makes-briefing-cases-a-breeze.html [https://perma.cc/8UKF-PBLZ].

As with the implementation of any new technology, however, there are some points of contention that arise when applying AI to the law—especially in the context of contract drafting, formation, and negotiation. This Note covers four main challenges to applying AI to contract drafting: (1) contract law principles, (2) equity concerns, (3) accuracy issues, and (4) legal profession challenges. Additionally, this Note presents the results of a novel empirical study designed to test AI technology’s tendency to discriminate when tasked with negotiating a contract on behalf of different types of clients. Interestingly, ChatGPT, a popular AI chatbot,3John Naughton, ChatGPT Exploded into Public Life a Year Ago. Now We Know What Went on Behind the Scenes, Guardian (Dec. 9, 2023, at 11:00 EST), https://www.theguardian.com/commentisfree/2023/dec/09/chatgpt-ai-pearl-harbor-moment-sam-altman [https://perma.cc/29CS-T7TS]. appears to favor corporations and nonprofit organizations over individuals when acting as a negotiation assistant.4See infra Section VII.D. This finding suggests that the excitement surrounding AI’s potential uses in the legal field5See infra notes 58–77 and accompanying text. is premature, and professionals should hesitate to implement this technology in contract drafting and negotiation until algorithmic discrimination is adequately addressed.

Part I of this Note introduces the historical development of AI technology and its rise to stardom that began with the public release of ChatGPT in 2022.6Kyle Wiggers, Cody Corrall & Alyssa Stringer, ChatGPT: Everything You Need to Know About the AI-Powered Chatbot, TechCrunch (Nov. 1, 2024, at 10:45 AM PDT), https://techcrunch.com/2024/11/01/chatgpt-everything-to-know-about-the-ai-chatbot [https://web.archive.org/web/20241108112033/https://techcrunch.com/2024/11/01/chatgpt-everything-to-know-about-the-ai-chatbot]. Part I then describes early applications of AI technology to the contracting space, such as Spellbook, Harvey, and LegalSifter.7See infra notes 58–72 and accompanying text. After that, Part I discusses fundamental contract law principles, such as mutual and constructive assent, that AI contract drafting may not readily align with.8See infra Section I.B. Finally, Part I concludes by orienting the reader with basic legal profession concepts, such as the lawyer’s duties of confidentiality, communication, competence, and diligence.9See infra Section I.C; Model Rules of Pro. Conduct rr. 1.1, 1.3, 1.4, 1.6 (A.B.A. 1983).

Part II introduces several illustrative examples of AI in contract drafting and negotiation that pose unique questions about the key differences between human and AI-driven contracting. These differences make it difficult to apply existing contract law to AI and raise important concerns about AI’s potential to discriminate when contracting and negotiating on behalf of different clients.10See infra Part II. Part III of this Note expands upon AI’s usurpation of traditional contract law principles. Fundamental contract law concepts, such as the “meeting of the minds” required to form a valid contract, do not readily apply to wholly AI-driven contracting.11See infra Part III. Principally, AI’s application in contract drafting and negotiation can present novel complications when determining whether or not the parties to a contract mutually agree on its terms. These issues persist regardless of whether a party performs some of its obligations under an AI-driven contract and despite the controversial doctrine of constructive assent.

Part IV covers the equity concerns that arise when applying AI technology to contracting. In general, applications of AI technology in the contracting space raise concerns about “algorithmic discrimination”—AI’s tendency to produce discriminatory outputs as a consequence of being trained on tainted data.12See Anupam Chander, The Racist Algorithm?, 115 Mich. L. Rev. 1023, 1034–36 (2017). AI in contracting also raises ethical issues regarding enforcement of fully automated contracts. A pervasive issue in the AI space is ensuring proper alignment between an AI model’s goals and those of its operator.13Jack Clark & Dario Amodei, Faulty Reward Functions in the Wild, OpenAI (Dec. 21, 2016), https://openai.com/research/faulty-reward-functions [https://perma.cc/AK6K-CXCA]. Given that AI technology regularly suffers from misalignment problems, would it be ethical and equitable to enforce contracts drafted by these models? Another ethical dilemma that arises in the AI contracting context concerns legal liability and accountability if a party is injured by an AI-formulated contract. If harm results from an AI-drafted contract, who should be held accountable for these harms? Between the AI model itself, its designer, its user, and other parties, there is no readily apparent answer. Finally, the implementation of AI in contracting—a setting that involves a plethora of sensitive information—presents serious data privacy and security concerns.14See infra Part IV.

In Part V, this Note reviews the accuracy issues apparent in current and potential applications of AI technology. Simply put, AI technology can behave unpredictably and output inaccurate results known as “hallucinations.”15John Roemer, Will Generative AI Ever Fix Its Hallucination Problem?, A.B.A. (Oct. 1, 2024), https://www.americanbar.org/groups/journal/articles/2024/will-generative-ai-ever-fix-its-hallucination-problem [https://perma.cc/RF9L-W3HY]. In the litigation context, several lawyers, including Michael Cohen’s attorney, have recently been sanctioned or publicly admonished for citing fabricated cases generated by ChatGPT in their filings.16Lauren Berg, Another AI Snafu? Cohen Judge Questions Nonexistent Cases, Law360 (Dec. 12, 2023, at 11:57 PM EST), https://www.law360.com/articles/1776644 [https://perma.cc/VNJ8-Z2V2]; Sara Merken, Texas Lawyer Fined for AI Use in Latest Sanction over Fake Citations, Reuters (Nov. 26, 2024, at 5:20 PM PST), https://www.reuters.com/legal/government/texas-lawyer-fined-ai-use-latest-sanction-over-fake-citations-2024-11-26 [https://perma.cc/7C3U-CRS2]; Robert Freedman, Judge Asks Michael Cohen Lawyer If Cited Cases Are Fake, LegalDive (Dec. 13, 2023), https://www.legaldive.com/news/judge-furman-michael-cohen-lawyer-cites-fake-cases-schwartz-chatgpt-ai-hallucinations-legaltech/702422 [https://perma.cc/8XYQ-SXTV]. In the contracting space, in which exact language and minor details can govern the legal meaning of an agreement, AI’s tendency to hallucinate can cause major problems.

Part VI presents the challenges to the legal profession that arise when using AI technology in contract drafting and negotiation. For example, overreliance on AI technology to draft and negotiate contracts may violate an attorney’s professional duties of competence and diligence—much like the actions of the lawyers who cited fabricated cases in their court filings. Overreliance may also violate an attorney’s professional duty of communication if they cannot explain their reasoning for a recommended course of action to a client due to reliance on ChatGPT in their decision-making. Additionally, since AI models operate as “black boxes,” their use may raise concerns about duty of confidentiality violations if client information is input into these systems without proper safeguards.17See Lou Blouin, AI’s Mysterious ‘Black Box’ Problem, Explained, Univ. of Mich.-Dearborn: News (Mar. 6, 2023), https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained [https://perma.cc/A86U-MQ3D].

Part VII discusses the empirical findings that resulted when the author “hired” ChatGPT to assist various types of fictitious clients with negotiating a standard commercial real estate lease. These research findings suggest that ChatGPT discriminates against individual clients by tending to recommend renegotiation less often and to a smaller degree when advising individual clients than when assisting corporate or nonprofit clients. These findings have immense equity implications for contract drafting and negotiation in an AI-driven world, as AI models that disfavor individual clients may exacerbate existing market power or resource inequalities between individuals and more sophisticated corporate or nonprofit clients.18See infra Section VII.D. Finally, Part VIII discusses some strengths and potentially useful applications of AI technology in legal work in light of this Note’s theoretical discussion and empirical findings. Part VIII posits that, although AI technology excels at summarization,19John Herrman, The Future Will Be Brief, N.Y. Mag.: Intelligencer (Aug. 12, 2024), https://nymag.com/intelligencer/article/chatgpt-gmail-apple-intelligence-ai-summaries.html [https://perma.cc/3p66-rn4b]. concerns about its ability to exercise discretion and judgment suggest that it may be best suited for administrative tasks.

I. A CRASH COURSE IN AI AND RELEVANT LEGAL THOUGHT

A. What Is Artificial Intelligence and How Can It Contract?

There is no widely accepted definition of what constitutes artificial intelligence, which is partially a byproduct of how technological capabilities have rapidly improved in recent years.20Ryan McCarl, The Limits of Law and AI, 90 U. Cin. L. Rev. 923, 925 (2022). To oversimplify, computer programs were historically classified as artificial intelligence if they successfully mimicked human rational thought.21See id.; Stuart J. Russell & Peter Norvig, Artificial Intelligence: A Modern Approach 19–20 (4th ed. 2021). An early example of this concept is the Turing test for artificial intelligence, which was developed by the “father of modern computer science,” mathematician Alan Turing.22Graham Oppy & David Dowe, The Turing Test, Stan. Encyc. of Phil. (Oct. 4, 2021), https://plato.stanford.edu/entries/turing-test [https://perma.cc/4V7H-QB8X]; Alan Turing, The Twickenham Museum, https://twickenham-museum.org.uk/learning/science-and-invention/alan-turing-2 [https://perma.cc/Y9UA-ZXUY]. The Turing test assesses how well a machine can imitate human thought and behavior via a competition that Turing called the “Imitation Game.”23Oppy & Dowe, supra note 22. In the game, a machine and human compete by answering questions asked by a human interrogator; at the end of the game, the interrogator must identify which competitor is a human and which is a machine.24Id. If the interrogator gets it wrong—i.e., says that the machine is the human—then the machine is thought to demonstrate human-level thought and intelligence.25Id.

This Note utilizes a relatively expansive definition of artificial intelligence that is reminiscent of the Turing test. For the purposes of this Note, artificial intelligence is any computer software program that demonstrates human-like behavior or intelligence. As discussed below, the focal point of artificial intelligence in this Note is large language models, which are some of the best modern examples of AI that would likely pass Turing’s test for artificial intelligence, given their language-based design and applications.26Helen Toner, What Are Generative AI, Large Language Models, and Foundation Models?, Ctr. for Sec. & Emerging Tech. (May 12, 2023), https://cset.georgetown.edu/article/what-are-generative-ai-large-language-models-and-foundation-models [https://perma.cc/6QGB-UVKA].

  1. Artificial Intelligence’s Rise to Prominence: The “AI Boom”27Beth Miller, The Artificial Intelligence Boom, Momentum, Fall 2023, at 12, https://engineering.washu.edu/news/magazine/documents/Momentum-Fall-2023.pdf [https://perma.cc/RU8W-GJAR].

Artificial intelligence has taken the public consciousness by storm since the release of ChatGPT, OpenAI’s text-generating chatbot, in November 2022.28Wiggers et al., supra note 6. ChatGPT is an AI model trained to engage in natural language conversations, which means that when users interact with ChatGPT, it converses with them by generating textual responses comparable to that of a human.29Konstantinos I. Roumeliotis & Nikolaos D. Tselikas, ChatGPT and Open-AI Models: A Preliminary Review, Future Internet, 2023, at 1, https://doi.org/10.3390/fi15060192 [https://perma.cc/4QCW-ZYQ4]. The model’s successful imitation of human-sounding speech captured the public’s imagination,30Karen Weise, Cade Metz, Nico Grant & Mike Isaac, Inside the A.I. Arms Race That Changed Silicon Valley Forever, N.Y. Times (Mar. 17, 2025), https://www.nytimes.com/2023/12/05/technology/ai-chatgpt-google-meta.html [https://perma.cc/GUG6-PYRT]. prompting increased interest in potential applications of AI technologies from the general public31Id. and software developers32Editorial, What’s the Next Word in Large Language Models?, 5 Nature Mach. Intel. 331, 331 (2023). alike.

ChatGPT can complete a variety of academic tasks in a matter of seconds, such as writing essays, generating ideas, and answering mathematical problems.33Megan Henry, Nearly a Third of College Students Used ChatGPT Last Year, According to Survey, Ohio Cap. J. (Sept. 25, 2023, at 4:50 AM), https://ohiocapitaljournal.com/2023/09/25/nearly-a-third-of-college-students-used-chatgpt-last-year-according-to-survey [https://perma.cc/3QVZ-AFGM]. It is no surprise, then, that students from primary school to collegiate grade levels were some of the model’s most prevalent initial users, asking ChatGPT to write papers and complete homework assignments on their behalf.34Id. Students’ widespread use of ChatGPT to complete assignments led many schools and universities to initially ban the AI model altogether,35Id. although it was difficult, if not impossible, to enforce AI bans—especially outside of the classroom.36Lexi Lonas Cochran, What Is ChatGPT? AI Technology Sends Schools Scrambling to Preserve Learning, The Hill (Jan. 18, 2023, at 6:00 AM ET), https://thehill.com/policy/technology/3816348-what-is-chatgpt-ai-technology-sends-schools-scrambling-to-preserve-learning [https://perma.cc/5CDD-82XQ]. A new industry of tools meant to detect the use of AI in students’ writing emerged to combat this issue, but their accuracy remains widely disputed.37Jackie Davalos & Leon Yin, AI Detection Tools Are Falsely Accusing Students of Cheating, Bloomberg Law (Oct. 18, 2024, at 8:00 AM PDT), https://news.bloomberglaw.com/private-equity/ai-detection-tools-are-falsely-accusing-students-of-cheating [https://perma.cc/D5V4-6NEQ].

Although initial widespread applications of ChatGPT were somewhat rudimentary in nature, such as students’ use of the tool to complete assignments,38See Henry, supra note 33. OpenAI’s introduction of the model to the public sphere was instrumental in prompting other AI developers to invest in the creation and public release of their own large language models (“LLMs”).39Weise et al., supra note 30; Editorial, supra note 32. After witnessing OpenAI’s successful launch of ChatGPT, prominent tech industry leaders such as Google and Meta immediately sought to turn AI technologies into tangible, profitable products that they could sell to individuals and companies.40Weise et al., supra note 30. Although these major technology companies had already been developing (and, in some cases, even released, to little success41Id.) their own AI technologies before November 2022, ChatGPT’s successful public launch prompted an expansion of the AI industry like never before.42Id. By the following spring, a flurry of new LLMs had emerged on the market: Meta’s LLaMA model, Google’s PaLM-E, and even OpenAI’s newest iteration of its LLM: GPT-4.43Editorial, supra note 32.

In essence, large language models are AI models designed to interact with and produce language.44Toner, supra note 26. “Large” refers to the increasing trend to train these models on large quantities of data stored in massive data sets that are usually housed in collocated data centers.45Id.; What is a Data Center?, Amazon Web Servs., https://aws.amazon.com/what-is/data-center [https://perma.cc/24EH-GTSH]. While ChatGPT, LLaMA, PaLM-E, and GPT-4 are all generally considered LLMs, much like AI more broadly, a concrete definition of what constitutes a large language model remains an open question.46Toner, supra note 26. There are no exact parameters for how large an AI model must be or how it must interact with language in order to be categorized as an LLM.47Id.

On the other hand, LLMs are generally considered to be a subset of generative AI.48Id. Generative AI is defined as artificial intelligence capable of producing new creations, such as graphic images, text, and audio, based on training data inputted into the model.49Id.; Thomas H. Davenport & Nitin Mittal, How Generative AI Is Changing Creative Work, Harv. Bus. Rev. (Nov. 14, 2022), https://hbr.org/2022/11/how-generative-ai-is-changing-creative-work [https://perma.cc/7LC7-MW24]. Therefore, generative AI enables a user to generate substantial quantities of work product with minimal effort by prompting a generative AI model and letting it create content for them based on the query. This is partly why ChatGPT became wildly popular in a short period of time50Naughton, supra note 3.—and why the generative model caused concerns about students using it to complete homework and other assignments on their behalf.

Beyond their avocational applications as homework helpers51Henry, supra note 33. and joke writers,52Emily Gersema, Think You’re Funny? ChatGPT Might Be Funnier, Univ. of S. Cal.: USC Today (July 3, 2024), https://today.usc.edu/ai-jokes-chatgpt-humor-study [https://perma.cc/9USY-RR64]. LLMs are being increasingly used by industry professionals to improve and expand the potential of their products and services.53Carina Perkins, Generative AI Chatbots in Retail: Is ChatGPT a Game Changer for the Customer Experience?, Emarketer (June 21, 2024), https://www.emarketer.com/content/generative-ai-chatbots-retail [https://perma.cc/KT68-RH9W]. For instance, Amazon Web Services implemented an externally facing AI chatbot on its Amazon.com retail site designed to handle returns, provide shipment tracking information, and generally improve the site’s customer service capabilities54Jared Kramer, Amazon.com Tests Customer Service Chatbots, Amazon Sci. (Feb. 25, 2020), https://www.amazon.science/blog/amazon-com-tests-customer-service-chatbots [https://perma.cc/XS3D-MJDZ]. (albeit the chatbot has garnered mixed reviews55Shira Ovide, We Tested Amazon’s New Shopping Chatbot. It’s Not Good., Wash      . Post (Mar. 5, 2024), https://www.washingtonpost.com/technology/2024/03/05/amazon-ai-chatbot-rufus-review [https://perma.cc/AW9L-FZ42].). Similarly, in 2024, Target Corporation launched an internally facing generative AI model, called Store Companion, to assist with employee training, store operations management, and general problem-solving tasks.56Press Release, Target Corp., Target to Roll Out Transformative GenAI Technology to Its Store Team Members Chainwide (June 20, 2024), https://corporate.target.com/press/release/2024/06/target-to-roll-out-transformative-genai-technology-to-its-store-team-members-chainwide [https://perma.cc/4KUY-CC7B]. Meanwhile, social media platforms such as Instagram use AI models to filter content and craft feeds that are better personalized to users’ individual preferences.57Cameron Schoppa, How the 5 Biggest Social Media Sites Use AI, AI Time J. (Aug. 6, 2025), https://www.aitimejournal.com/how-the-biggest-social-media-sites-use-ai [https://perma.cc/C9XD-TNAM].

  1. Early Applications of Artificial Intelligence to Legal Contracting

Naturally, the ever-increasing implementation of LLMs in a variety of businesses, industries, and settings includes applications in the legal field as well.58Nicole Black, Emerging Tech Trends: The Rise of GPT Tools in Contract Analysis, A.B.A.: ABA J. (May 22, 2023, at 9:49 AM CDT), https://www.abajournal.com/columns/article/emerging-tech-trends-the-rise-of-gpt-tools-in-contract-analysis [https://perma.cc/9ZJL-TQQN]. For example, AI has already been used to create legal workflow companions with suites of legal skills,59Matt Reynolds, vLex Releases New Generative AI Legal Assistant, A.B.A.: ABA J. (Oct. 17, 2023, at 9:39 AM CDT), https://www.abajournal.com/web/article/vlex-releases-new-generative-ai-legal-assistant [https://perma.cc/GH3K-WNL6]; Danielle Braff, AI-Enabled Workflow Platform Vincent AI Expands Capabilities, A.B.A.: ABA J. (Sept. 12, 2024, at 10:06 AM CDT), https://www.abajournal.com/web/article/the-latest-upgrade-vincent-ai [https://perma.cc/4NFZ-2QVM]. contract lifecycle management software programs,60Nicole Black, Increasing Contractual Insight: AI’s Role in Contract Lifecycle Management, A.B.A.: ABA J. (Sept. 25, 2023, at 12:29 PM CDT), https://www.abajournal.com/columns/article/increasing-contractual-insight-ais-role-in-contract-lifecycle-management [https://perma.cc/7TXW-8VX8]. and contract redlining and drafting assistants.61Spellbook, https://www.spellbook.legal [https://perma.cc/CK8K-PWJR]. A simple Google search for AI contracting services yields a plethora of (interestingly named) AI-powered software programs that purport to assist an attorney with redlining (e.g., Harvey,62Assistant, Harvey, https://www.harvey.ai/products/assistant [https://perma.cc/D883-DL2E]; Harvey, OpenAI, https://openai.com/index/harvey [https://perma.cc/PJC4-X23G]. Lawgeex,63Lawgeex, https://www.lawgeex.com [https://perma.cc/6ZU8-GYJA]. Superlegal,64Superlegal, https://www.superlegal.ai [https://perma.cc/P7WL-VDPX]. Ivo,65Ivo, https://www.ivo.ai [https://perma.cc/XV6T-LTVL]. Screens,66Screens, https://www.screens.ai [https://perma.cc/SKX8-8UPY]. and Spellbook67Spellbook, supra note 61.) or managing (e.g., Evisort,68Evisort, https://www.evisort.com [https://perma.cc/8R2W-LY6K]. Ironclad,69AI-Powered Contract Management Software, Ironclad, https://ironcladapp.com/product/ai-based-contract-management [https://perma.cc/DFJ7-BJ99]. Sirion,70Sirion, https://www.sirion.ai [https://perma.cc/MF9Y-J3K9]. and LegalSifter71LegalSifter, https://www.legalsifter.com [https://perma.cc/M9TC-V4UT].) their legal contracts. Even companies that operate widely used legal research databases, such as LexisNexis and Thomson Reuters, have created and marketed their own generative AI-powered legal assistants.72Thomson Reuters, the company that owns and operates Westlaw, developed CoCounsel, an AI tool intended to “accelerate[] labor-intensive tasks like legal research, document review, and contract analysis.” CoCounsel 2.0: The GenAI Assistant for Legal Professionals, Thomson Reuters, https://legal.thomsonreuters.com/en/c/cocounsel/generative-ai-assistant-for-legal-professionals [https://web.archive.org/web/20250113041800/https://legal.thomsonreuters.com/en/c/cocounsel/generative-ai-assistant-for-legal-professionals]. Similarly, LexisNexis released Protégé, its own legal assistant that can “support[] daily task organization, . . . draft[] full documents, and conduct[] intelligent legal research.” LexisNexis Announces New Protégé Legal AI Assistant as Legal Industry Leads Next Phase in Generative AI Innovation, LexisNexis (Aug. 12, 2024), https://www.lexisnexis.com/community/pressroom/b/news/posts/lexisnexis-announces-new-protege-legal-ai-assistant-as-legal-industry-leads-next-phase-in-generative-ai-innovation [https://perma.cc/N88F-D5JW].

Legal professionals are generally excited about new and potential future applications of AI to the legal world.73See Braff, supra note 59. Many believe the technology will increase efficiency in a time-intensive industry by synthesizing documents and reducing the time a human attorney needs in order to perform certain legal tasks.74Josh Blackman, Robot, Esq. 1 (Jan. 9, 2013) (unpublished manuscript), https://ssrn.com/abstract=2198672 [http://dx.doi.org/10.2139/ssrn.2198672]; Matt Pramschufer, How AI Can Make Legal Services More Affordable, The Nat’l Jurist (July 23, 2019), https://nationaljurist.com/smartlawyer/how-ai-can-make-legal-services-more-affordable [https://perma.cc/F2S6-R9WM]. Some hopefuls even view AI as infallible—capable of outperforming humans, whose work is prone to errors, because AI can craft perfectly completed and accurate work product.75Adam Bingham, Mitigating the Risks of Using AI in Contract Management, Risk Mgmt. (Sept. 3, 2024), https://www.rmmagazine.com/articles/article/2024/09/03/mitigating-the-risks-of-using-ai-in-contract-management [https://perma.cc/AT6Z-ZXNC]. Finally, AI is thought by some to make legal services more affordable and accessible to the general public76Pramschufer, supra note 74. by reducing the number of billable hours an attorney must dedicate to any given task, enabling individuals to access legal services without hiring a human attorney, or both. In fact, Utah and Arizona have already implemented pilot programs that allow non-lawyer entities, such as AI chatbots, to provide legal services, and Washington may be the next state to institute such a program.77Debra Cassens Weiss, Nonlawyer Entities Could Provide Legal Services in Washington in Proposed Pilot Program, A.B.A.: ABA J. (Sept. 11, 2024, at 2:36 PM CDT), https://www.abajournal.com/news/article/nonlawyer-entities-could-provide-legal-services-in-washington-state-in-proposed-pilot-program [https://perma.cc/UTP2-TMZP].

Despite this enthusiasm about AI, the immediate application of LLMs to the legal space has not been without its challenges. Some attorneys have wrongfully used LLMs to shirk their responsibilities by asking AI models to conduct legal research or write briefs on their behalf.78Benjamin Weiser, Here’s What Happens When Your Lawyer Uses ChatGPT, N.Y. Times (May 27, 2023), https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html [https://perma.cc/249Y-4LTS]. This practice has resulted in massive sanctions and fines for attorneys who cited “bogus” cases that were fabricated by ChatGPT in documents that they later submitted to a judge.79Sara Merken, New York Lawyers Sanctioned for Using Fake ChatGPT Cases in Legal Brief, Reuters (June 26, 2023, at 1:28 AM PDT), https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22 [https://perma.cc/7KR5-LL5A]; Weiser, supra note 78. Furthermore, as discussed later in this Note, issues regarding lawyers’ ethical and professional duties, algorithmic discrimination, AI’s inaccuracies, and the subversion of traditional contract law principles also arise when large language models are applied to the legal field.

B. A “Meeting of the Minds” Regarding Contract Law Theory

An orientation into the foundational principles underlying contract law theory is needed before one can take a proper deep dive into the applications of AI in contracting. A great place to start is the traditional contractual theory of mutual assent, colloquially known as the “meeting of the minds.”80Wayne Barnes, The Objective Theory of Contracts, 76 U. Cin. L. Rev. 1119, 1119–20, 1122–23 (2008) (“[D]etermining whether the parties both agreed on the same thing . . . is at the heart of contract law.”). Mutual assent is one of many requirements that must be demonstrated for a court to hold that a given contract is legally valid and enforceable.81Hanson v. Town of Fort Peck, 538 P.3d 404, 419 (Mont. 2023). “Meeting of the minds” refers to the idea that both parties must mutually agree to the terms of a contract in order for the agreement to be legally binding.82Barnes, supra note 80. That is, the parties’ minds must, in a sense, “meet in the middle” at the moment when the contract is formed. For that reason, mutual assent may not be found when one or both of the parties to a contract entered into the agreement based on a misunderstanding or a mistake of law or fact.83See generally Raffles v. Wichelhaus (1864) 159 Eng. Rep. 375; 2 Hurl. & C. 906 (establishing that there is no mutual assent to an agreement when it contains a latent ambiguity—such as, in Raffles, the two parties intending different ships named “Peerless”). Intuitively, this makes sense; it would not be good public policy to bind people to a contractual agreement if they did not fully understand the obligations and consequences they allegedly agreed to when the agreement was executed. Beyond equity justifications, it may also be inefficient to hold a party accountable for obligations that they did not intend to undertake and may not be equipped to fulfill. Relatedly, to create a binding agreement, the parties to the contract must specifically mutually assent to the material terms of the contract.84Jack Baker, Inc. v. Off. Space Dev. Corp., 664 A.2d 1236, 1238 (D.C. 1995) (“[F]or an enforceable contract to exist, there must be . . . agreement as to all material terms . . . .” (emphasis added) (quoting Georgetown Ent. Corp. v. District of Columbia, 496 A.2d 587, 590 (D.C. 1985))). Without a “meeting of the minds” between the parties to any given contract regarding the essential provisions of the agreement, the contract is invalid and not legally binding on the parties.

In some instances, courts have imputed assent to a party based on their conduct even if they did not explicitly agree to or approve of the terms of an agreement.85See Nguyen v. Barnes & Noble Inc., 763 F.3d 1171, 1178–79 (9th Cir. 2014) (“[W]here a website makes its terms of use available via a conspicuous hyperlink on every page of the website but otherwise provides no notice to users nor prompts them to take any affirmative action to demonstrate assent, even close proximity of the hyperlink to relevant buttons users must click on—without more—is insufficient to give rise to constructive notice.”). This doctrine is known as “constructive assent,”86Id. at 1176–77. and it is common among online transactions.87See Weeks v. Interactive Life Forms, LLC, 319 Cal. Rptr. 3d 666, 671 (Ct. App. 2024). For example, if a user of an online webpage affirmatively acknowledges the page’s terms of use by clicking an “I accept” or “I agree” button without actually reading the agreement, the user is usually found to have constructively assented to the terms of the agreement despite not actually being aware of its contents.88Id.; Caspi v. Microsoft Network, 732 A.2d 528, 532 (N.J. Super. Ct. App. Div. 1999) (“The plaintiffs in this case were free to scroll through the various computer screens that presented the terms of their contracts before clicking their agreement . . . [and] the [challenged] clause was presented in exactly the same format as most other provisions of the contract,” so the court found no reason to hold that the plaintiffs did not see and agree to the provision in question.).

Although many people make light of the fact that nobody ever reads various websites’ terms of use or, more notably, Apple’s Terms and Conditions,89See South Park: HumancentiPad (Comedy Central television broadcast Apr. 27, 2011); Check Out Apple’s iOS 7 Terms & Conditions (PICTURE), HuffPost (Sept. 18, 2014), https://www.huffingtonpost.co.uk/2013/09/20/apple-ios7-spoof-terms-and-conditions_n_3960016.html [https://perma.cc/6AZ4-YH59]. constructive assent is no laughing matter. In these types of situations, constructive assent can be used to essentially waive the traditional contract theory requirement of a “meeting of the minds,” instead holding individuals accountable for the contracts that they sign even if they do not fully understand or have knowledge of the terms that they allegedly agreed to.90For instance, internet users are often assumed to have constructively assented to a website’s terms of use when the site constitutes a “browsewrap” agreement. Browsewrap agreements typically include a site’s terms of use in a hyperlink at the bottom of the webpage. Courts have held internet users to have constructively assented to a website’s terms of use by merely browsing a webpage designed in this way. See In re Juul Labs, Inc., 555 F. Supp. 3d 932, 947 (N.D. Cal. 2021). Unsurprisingly, the doctrine of constructive assent is controversial—especially its application to consumer contracts91See generally Andrea J. Boyack, The Shape of Consumer Contracts, 101 Denv. L. Rev. 1 (2023) (suggesting constructive assent is detrimental in the consumer contract setting because a consumer’s decision to transact with a business is fundamentally distinct from their assent to the company’s terms). and form contracts more broadly.92See generally Donald B. King, Standard Form Contracts: A Call for Reality, 44 St. Louis U. L.J. 909 (2000) (arguing that assent in the context of a negotiated agreement is fundamentally different from assent in the standard form contract setting). Further, the ethics of constructive assent are hotly debated among scholars, with some arguing that applying constructive assent to a contested contract unfairly disadvantages the weaker party (e.g., the consumer) to the benefit of the dominant party (e.g., the retailer) whose greater market power enables them to force the weaker party to consent to the dominant party’s preferred terms.93See Boyack, supra note 91; King, supra note 92, at 911–14. For a lighthearted (and, thankfully, fictional) example of the dangers of constructive assent, the author recommends an episode of the popular television show Parks and Recreation in which a small town’s government grapples with unwanted data mining and privacy invasions resulting from a convoluted Internet service contract the town entered into with Gryzzl, a large technology company. Parks and Recreation: Gryzzlbox (NBC television broadcast Jan. 27, 2015).

C. Attorneys as Ethical and Professional Fiduciaries

Another important factor to consider when analyzing the potential applications of AI to the contracting space is the ethical and professional complications that arise due to attorneys’ special fiduciary duties to their clients. In general, attorneys are held to a higher standard than those who work in many other professions.94Rules of Professional Conduct for Lawyers, 8am MyCase (Aug. 26, 2025), https://www.mycase.com/blog/client-management/lawyer-professional-conduct [https://perma.cc/G75A-82XR]. Specifically, attorney conduct is governed by each state’s bar association, many of which have adopted the Model Rules of Professional Conduct—the generic rules promulgated by the American Bar Association.95See Model Rules of Professional Conduct, A.B.A., https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct [https://perma.cc/4ZV6-AATQ]. The Model Rules serve as a fundamental guideline for attorney conduct by prescribing various professional and fiduciary duties to attorneys, such as client confidentiality, competence, diligence, and communication.96See Model Rules of Pro. Conduct (A.B.A. 1983). The Model Rules also address various topics relating to an attorney’s practice—like conflicts of interest, the formation of an attorney-client relationship, the scope of one’s representation, and how to interact with unrepresented persons97See id.—and explain how model attorneys should approach these issues. Importantly, the Model Rules detail practices that constitute misconduct, like engaging in dishonesty or fraud, violating the Model Rules of Professional Conduct, or committing a criminal act.98Id. r. 8.4. For the purposes of this Note, it is important for one to keep the Model Rules of Professional Conduct in mind when considering how an attorney may use AI technology in drafting or negotiating contracts, as certain applications of AI may subvert the underlying goals that the Model Rules were designed to support in more traditional applications.

II. ILLUSTRATIVE EXAMPLES

Several ethical, practical, and theoretical questions arise when one considers various applications of AI to contract drafting, formation, and negotiation. To better illustrate the issues that arise from applying AI to the contracting space, consider the following numbered examples and the questions they raise regarding their implications for the contract law principles and legal profession concepts that we have discussed:

Example #1: Laypeople Using AI to Draft a Contract99Real-world instances analogous to this example are becoming increasingly common. Many people use generative AI for contracting-adjacent tasks and skills such as idea generation, text editing, document drafting, and, most notably, “generating a legal document.” Marc Zao-Sanders, How People Are Really Using GenAI, Harv. Bus. Rev. (Mar. 19, 2024), https://hbr.org/2024/03/how-people-are-really-using-genai [https://perma.cc/5SLX-SL9F].

Two laypeople (i.e., not attorneys) are doing business together. Interested in summarizing their deal in a written form, they “draft” a contract by asking ChatGPT to do so for them. Once ChatGPT has drafted the contract, the two parties both read and sign the contract, despite not understanding the agreement’s legalese or terms. Later, something goes wrong, and the contract’s validity and enforceability are disputed.

Was there a “meeting of the minds,” or mutual assent, here?

Is this a case of AI-assisted human contracting, or was this effectively an entirely AI-created contract?

Is the contract enforceable?

Should society want the contract to be enforceable?

Example #2: AI as a Contract Drafting Tool for Attorneys100As noted in the Introduction, the use of AI as a drafting tool for attorneys is becoming increasingly common. Just as lawyers have used ChatGPT for writing court filings, they are likely to use it for drafting other legal documents, such as contracts. See Berg, supra note 16.

As is industry practice, a lawyer in a corporate law firm normally uses a standard form contract from prior deals as a starting point when drafting new contracts. However, for a particular deal, she decides to use ChatGPT to draft the initial form contract instead.

Is this an example of AI as a tool that assists humans in contract drafting, or is this a wholly AI-drafted agreement?

Does this distinction have important implications for the contract’s validity and enforceability?

Is there any significant difference between this attorney using AI to create a form contract or pulling a precedent contract out of her firm’s database?

Would this amount to a breach of the attorney’s professional duties of competence, diligence, or anything else?

Example #3: Human Error Versus AI-Drafted Terms

Overwhelmed with his busy workload, a lawyer mistakenly inserts a clause in a contract he is drafting for his client. Both his client and the other party to the contract sign the agreement; neither party nor the attorney knows at the time the agreement is executed that the accidental provision is included in the contract.

Is the extra provision in the agreement enforceable (i.e., did the parties mutually assent to the term)?

Is this scenario any different from if AI completely drafts and executes a contract without humans involved in the contracting process?

How are these two examples reconciled in terms of mutual assent? Are they the same, or fundamentally different in any way?

Example #4: AI Automatically “Agreeing” to Online Terms

Annoyed with websites’ many Terms of Service and Cookies pop-ups, an inventor creates an AI-driven “ad blocker” software that automatically clicks through and “agrees” to these pop-ups on the software user’s behalf so that they never have to see them again.

Would this constitute the user’s assent to various websites’ Terms of Service?

Does the answer to this question depend on how long the user has had the software, or whether they knew or reasonably should have known that specific websites had Terms of Service or Cookies pop-ups?

 

* * *

There are two possibilities when applying AI technology to contract drafting and negotiation: (1) AI effectively functions as an assistant, aiding humans with their contracting, and (2) fully automated decision-making, in which AI completely takes over contracting, from start to finish, with no humans involved in the process. Under either scenario, four categories of problems arise when implementing AI in contract drafting and negotiation: the subversion of contract law principles, equity concerns, accuracy issues, and legal profession challenges.

III.  AI’S SUBVERSION OF CONTRACT LAW PRINCIPLES

If AI functions as a mere contract drafting and negotiation assistant, mutual assent concepts would apply in the same manner that they do for purely human-conducted contracting. An underlying principle of the mutual assent requirement for a valid contract is the notion that the parties to a given contract must understand the terms of the agreement and have a “meeting of the minds,” or mutual agreement, that they find the terms acceptable.101Barnes, supra note 80. If AI technology merely assists an attorney with drafting or negotiating a contract, this does not affect the portion of the dealmaking process that mutual assent concerns. The only point in time that is relevant for mutual assent is when the parties come to a consensus that the contract’s terms are agreeable and subsequently execute the agreement.102See Ray v. Eurice, 93 A.2d 272, 276–78 (Md. 1952). By that point in time, the drafting and negotiating phases of the process are complete (and, truthfully, long gone)—the agreement is in its final drafted form and will not undergo further redlines or revisions. Thus, the implementation of AI as a mere assistant in the contracting and negotiation process is not within the timeline or contextual scope that mutual assent concerns. AI’s use as a contracting assistant is therefore akin to any personal opinions the drafting attorney may have (outside of their thoughts and duties as a fiduciary of their client) regarding the deal at hand—i.e., irrelevant to questions about mutual assent.

While some may argue that the cyclical drafting, redlining, and negotiation process drives the parties to a contract toward the ultimate goal of mutual assent at the end of the contracting cycle, it is not a necessary component of mutual assent that agreements are modified and negotiated by the parties. If one party presents a complete agreement to another party, who signs it without criticizing its contents or insisting on revisions, it is still a valid contract. Furthermore, in many instances, an attorney drafts and negotiates on behalf of their client, who signs the final contract without a comprehensive legal understanding of the negotiations and redlines that were made during the dealmaking process. This is arguably like Example #1 in Part II, in which the two laypeople used AI to draft a contract that they then signed. Although the individuals did not negotiate between themselves, mutual assent was arguably satisfied because the humans—not ChatGPT—assented to the agreement at the end of the contracting process.

On the other hand, if contracting is entirely managed by AI—without humans involved in the process—then the contract law requirement of mutual assent is not satisfied. Arguably, if the laypeople in Example #1 did not understand the contract because ChatGPT performed a substantial portion of the legal lift for them (which is possible, considering that they did not understand the AI-drafted agreement’s legalese or terms), then the mutual assent requirement may not be satisfied because the contracting process was effectively completed without human involvement. Example #4 details a more abstract example of this concept. In Example #4, the inventor’s software “agrees” to websites’ terms of use on its users’ behalf. In this situation, the human user never sees, let alone reads, the terms of service that they allegedly agreed to through the AI-driven software. Although some might argue that there is mutual assent because a person who installs the software knows that it will “agree to” the terms on any site that the person visits, this argument does not hold up to pragmatic scrutiny. Given how often and extensively people surf the Internet, it is highly likely that, over time, the person would not know which websites had pop-up advertisements or terms of use that the AI bot “agreed” to on their behalf, let alone the content of those agreements.

Therefore, the contract law requirement of mutual assent goes unsatisfied when AI fully takes over the contracting process. This flaw in solely AI-executed contracting becomes even more apparent when considering contracts that involve multimillion- or multibillion-dollar transactions, fundamental changes in a company’s structure or dealings, or changing the client’s financial or business practices in any substantial way. Without providing notice of these changes to the client and securing their informed assent to new and material contractual terms, solely AI-driven contracting is unlikely to satisfy traditional contract law principles.

Some might argue that a party’s performance of its obligations under a fully AI-driven contract would justify its validity and waive the mutual assent requirement, much like the traditional contract law enforcement principles surrounding the Statute of Frauds.103Certain requirements that an agreement be documented in writing can be waived if a party fully and completely performs its obligations under the agreement. Koman v. Morrissey, 517 S.W.2d 929, 936 (Mo. 1974) (“[T]he statute of frauds has no application where there has been a full and complete performance of the contract by one of the contracting parties . . . .”). However, a fully automated contracting process differs from classic applications of the Statute of Frauds—such as when a party denies a prior verbal agreement, claiming that they never agreed to the deal because no written proof of it exists.104See Ian Ayres & Gregory Klass, Studies in Contract Law 434–35 (9th ed. 2017). Rather, if AI completely drives the contracting process, then the parties to a contract would likely never be aware of, let alone read, the AI-drafted and executed agreement. Due to this disconnection, it is highly unlikely that the parties would completely perform their obligations under the agreement—simply because they would not know what their obligations are. Even if the parties were generally aware of their performance obligations (e.g., because the AI model contracted an extension of an existing purchase agreement between a purchaser and supplier), they would still not know the specifications of the agreement to a high enough degree for public policy to justify holding them to the transaction.

Furthermore, although some may argue that the doctrine of constructive assent can waive the mutual assent requirement in the purely AI-driven contracting setting, this argument is specious. Constructive assent is a highly controversial doctrine in its current limited uses, such as form contracts.105See generally King, supra note 92. Scholars have raised particular concerns about constructive assent eliminating the need for mutual assent in online transactions, such as clickwrap agreements,106See Matt Meinel, Requiring Mutual Assent in the 21st Century: How to Modify Wrap Contracts to Reflect Consumer’s Reality, 18 N.C. J.L. & Tech. 180, 180 (2016) (“Intention to manifest mutual assent is increasingly becoming a legal fiction in cyberspace.”). because the doctrine can infer an Internet user’s assent from their decision to click “I agree”—regardless of how “ill-informed and not well considered” that decision might have been.107Daniel D. Haun & Eric P. Robinson, Do You Agree?: The Psychology and Legalities of Assent to Clickwrap Agreements, 28 Rich. J.L. & Tech. 623, 649–56 (2022). Therefore, because constructive assent is thought by many to subvert traditional contract law theory, especially in online transactions, it provides a weak justification for waiving the mutual assent requirement in a purely AI-driven contracting setting.

Therefore, the distinction between AI as a contracting assistant and wholly AI-driven contracting carries significant contract law implications. In Example #2 in Part II, the legal difference between an attorney using a precedent contract from prior deals and relying on an AI-generated form contract is crucial, even though practicing attorneys may see little to no practical difference between the two. As AI technology continues to advance, the line between human-driven and AI-driven contracting will increasingly blur, raising questions about contract validity, enforceability, and an attorney’s professional obligations. Whether AI serves merely as a drafting tool or takes on a more autonomous role could have far-reaching legal consequences.

IV. EQUITY CONCERNS

A. Algorithmic Discrimination

Algorithmic discrimination occurs when ostensibly impartial AI technology produces discriminatory results because it was trained on tainted inputs.108See Chander, supra note 12. Put more simply, algorithmic discrimination is a perfect example of “Garbage In, Garbage Out.”109Robert Buckland, AI, Judges, and Judgment: Setting the Scene (Harvard Kennedy Sch. M-RCBG Assoc. Working Paper Series, No. 220, 2023), https://dash.harvard.edu/server/api/core/bitstreams/98187fff-8a7a-4ca6-8123-3049e417f088/content [https://perma.cc/27RB-YUKA]. Proponents of AI argue that even if algorithmic discrimination occurs, automated decision-making is preferable to human decision-making because humans are biased.110See Daniel J. Solove & Hideyuki Matsumi, AI, Algorithms, and Awful Humans, 92 Fordham L. Rev. 1923, 1924–27 (2024). However, algorithmic discrimination can perpetuate and amplify existing biases or stereotypes in an AI model’s training data, with the dangerous added implication that the tainted model appears facially objective and neutral.111Chander, supra note 12. Furthermore, because of their reliance on human inputs, algorithms will arguably never be fully bias-free and nondiscriminatory, but perpetually flawed as “partially human.”112Catarina Santos Botelho, The End of the Deception? Counteracting Algorithmic Discrimination in the Digital Age, in The Oxford Handbook on Digital Constitutionalism (Sept. 19, 2024) (manuscript at 1), https://doi.org/10.1093/oxfordhb/9780198877820.013.28 [https://perma.cc/P5X4-UPKF]. Additionally, due to its highly advanced pattern-detection abilities, AI technology has the potential to develop new forms of discrimination by extracting patterns from its inputted data that humans alone would not have been able to detect.113Solon Barocas, Moritz Hardt & Arvind Narayanan, Fairness and Machine Learning: Limitations and Opportunities 1–20 (2023).

Algorithmic discrimination is also concerning because current legal theories do not supply satisfactory remedies for discrimination by AI systems.114See generally Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. L. Rev. 671 (2016) (discussing algorithmic discrimination and the inapplicability of existing legal remedies to its harms). For example, imagine that an online job search site, such as LinkedIn, uses an AI-driven algorithm to “match” employers with potential interview candidates on the site by recommending certain user profiles to employers.115In reality, LinkedIn does have an algorithmic system that suggests potential employees to employers, called “Talent Match.” Id. at 683. If a user believed that the algorithm discriminated against them in choosing not to suggest their profile to employers, they would have limited options to seek legal redress. In the employment space, discrimination claims are separated into two categories: (1) disparate treatment and (2) disparate impact.116Id. at 694. Disparate treatment is focused on combating explicit discrimination, which requires a finding of intent.117Barnes v. Yellow Freight Sys., Inc., 778 F.2d 1096, 1101 (5th Cir. 1985) (“Since this is a disparate treatment case, . . . the plaintiff is still required to prove discriminatory intent.”). In a traditional, non-AI setting, explicit discrimination may be demonstrated by a qualified job candidate who was denied employment by a firm that refused to hire her by proving that the refusal was based on one of the candidate’s protected characteristics, such as race or gender.118See McDonnell Douglas Corp. v. Green, 411 U.S. 792, 802 (1973) (“The complainant in a Title VII trial must carry the initial burden under the statute of establishing a prima facie case of racial discrimination. This may be done by showing (i) that he belongs to a racial minority; (ii) that he applied and was qualified for a job for which the employer was seeking applicants; (iii) that, despite his qualifications, he was rejected; and (iv) that, after his rejection, the position remained open and the employer continued to seek applicants from persons of complainant’s qualifications.”). Conversely, to claim disparate treatment in the case of an AI algorithm, the disgruntled LinkedIn user would have to demonstrate that the algorithm had the intent to discriminate, which may be incredibly difficult, if not impossible, to prove in the case of a nonhuman entity. Thus, algorithmic discrimination is likely thought to be a product of unintentional or incidental discrimination.

Alternatively, disparate impact claims do not require the plaintiff to prove discriminatory intent;119Barnes, 778 F.2d at 1101 (“The intent requirement is an element differentiating the analysis for disparate treatment cases from that of disparate impact cases. Although sometimes either theory may be applied to a given set of facts, disparate impact analysis does not demand that a plaintiff prove discriminatory motive.”). rather, the doctrine considers whether there is a disparate impact on members of a protected class, any business necessity for the impact, and a less discriminatory alternative means of achieving the same result.12042 U.S.C. § 2000e-2(k). Therefore, given the aforementioned difficulty of ascribing any particular cognitive motivations to an AI model, disparate impact discrimination is the only potential mode of existing discrimination law that

might provide legal redress for members of protected classes who experience algorithmic discrimination in the employment context.

In the contracting space, algorithmic discrimination has the potential to create disastrous consequences. If an AI model is trained on discriminatory data or its algorithm is improperly weighted by its human developers, it may tend to favor one type of party over another, such as men over women.121See generally Alejandro Salinas, Amit Haim, & Julian Nyarko, What’s in a Name? Auditing Large Language Models for Race and Gender Bias (Sept. 25, 2024) (unpublished manuscript) (on file with the Southern California Law Review) (describing an empirical study that found GPT-4 to systematically disadvantage names commonly associated with women and racial minorities). This bias may then prompt the AI model to negotiate more favorable deals for certain parties than it would for others. This potential for AI to act as a discriminatory advocate may exacerbate existing inequalities, especially if the model’s reliance on tainted training data causes it to reinforce biases that disproportionately harm certain groups. Particularly sensitive communities include women, racial or ethnic minorities, and people who are socioeconomically disadvantaged. In the contracting setting, where every word in a contract has an important implication for the meaning of the agreement, a tainted AI model could selectively include unfavorable terms—or simply choose terms that are not the most favorable—in an agreement when “hired” by a party that the model’s data disfavors. The individual who experiences discrimination by receiving the “short end of the stick,” or undesirable contract terms, would likely never know that they were discriminated against by the model they used to contract. Even if the disadvantaged individual later became aware of the discriminatory term selection, it is likely that they would not have the ability or resources to advocate for themselves.

Furthermore, the contracting setting presents a multitude of consequential and important situations in which a person’s livelihood depends on the degree of favorability they are able to negotiate for themselves in a given contract. For example, in an employment contract, the starting salary, amount of paid family leave, and inclusion of any noncompete provisions may have huge implications for a prospective employee’s financial stability and future wellbeing. If an AI model poorly negotiates on a potential employee’s behalf, that potential employee may experience a lower quality of life than they would have otherwise—and if the reason for AI’s poor performance is discriminatory conduct, these disadvantaged outcomes will only exacerbate existing inequalities in our society.

B. Ethics of Enforcing Automated Deals

Another serious concern that arises when using AI in contracting is the ethical dilemma of deciding when to enforce completely automated deals. If we get to the point in which contracting is an entirely AI-driven task, do we feel comfortable holding humans accountable for the deals that an AI model entered into on their behalf?

A critical consideration when determining accountability in this circumstance is AI (mis)alignment. Broadly speaking, direct alignment refers to the ability to program an AI system so that it pursues goals consistent with the goals of its operator.122Anton Korinek & Avital Balwit, Aligned with Whom?:Direct and Social Goals for AI Systems 2 (Brookings Ctr. on Regul. & Mkts. Working Paper No. 2, 2022), https://www.brookings.edu/wp-content/uploads/2022/05/Aligned-with-whom-1.pdf [https://perma.cc/48BN-547C]. There are a plethora of difficulties in ensuring proper direct alignment, including (1) determining the operator’s goals, (2) conveying those goals to the AI software, and (3) getting the AI model to correctly translate those goals into actions.123Id. at 6. It is often incredibly difficult for an AI user to overcome these challenges, and efforts to do so sometimes cause AI programs to take unexpected actions that result in adverse consequences.124Clark & Amodei, supra note 13.

In the contracting context, holding the user of an AI contracting software to an agreement that the AI model drafted on their behalf can have especially inequitable consequences. Much like Example #3 in Part II, in which the human attorney mistakenly added language to the contract he was drafting, if an AI program is misaligned with its user’s goals, then it may draft contracts that do not reflect those goals. Both general intuition and contract law theory suggest that in a scenario like Example #3, the parties to the contract should not be bound by terms to which they did not assent. Similarly, in the case of misaligned AI contracting software, intuition suggests that it would be unethical to bind a party to an agreement if the AI model that contracted on their behalf did so in a manner that did not align with the user’s intentions.

C. Who Is Liable or Accountable?

If and when AI-assisted or wholly automated contracting goes wrong, who should we hold liable for breached contracts? Would we want to differentiate between the AI developer, the human who “hired” the AI to contract on their behalf or otherwise used the model to contract, and the AI model itself?

These questions are especially difficult to answer because traditional liability frameworks are designed with an inherent assumption that a human decisionmaker caused the alleged harm.125See F. Patrick Hubbard, “Sophisticated Robots”: Balancing Liability, Regulation, and Innovation, 66 Fla. L. Rev. 1803, 1819–43, 1850–69 (2014). In the contracting setting, we would hold this human decisionmaker accountable for their breach of a contractual promise. If AI functions as a contracting agent, however, a human may not have made decisions that directly caused the complaining party’s harm. If an AI contracting program enters into agreements on a human’s behalf, that may not be enough under traditional liability frameworks to justifiably say that the human caused the alleged harm and hold them liable for it.

For similar reasons, it also appears unreasonable to hold an AI developer liable for breaches of contracts that its AI contracting software simply aided in drafting. To oversimplify, in order to prove causation of harm due to a breached contract, a plaintiff must demonstrate that the defendant’s breach was more than just an actual cause of the plaintiff’s harm.126Lola Roberts Beauty Salon, Inc. v. Leading Ins. Grp. Ins., 76 N.Y.S.3d 79, 81 (App. Div. 2018) (“Proximate cause is an essential element of a breach of contract cause of action.”). Rather, the plaintiff has a higher burden: they must prove that the defendant’s act was the proximate cause of their harm.127Id. To demonstrate proximate cause, the plaintiff must show that the harm was a foreseeable consequence of the defendant’s breach of contract.128See id. (“[C]onsequential damages resulting from a breach of the implied covenant of good faith and fair dealing may be asserted, ‘so long as the damages were within the contemplation of the parties as the probable result of a breach at the time of or prior to contracting.’ ” (quoting Panasia Ests., Inc. v. Hudson Ins., 886 N.E.2d 135, 137 (N.Y. 2008))). In the AI context, a developer and its AI software may be actual, or but-for, causes of the harm suffered by a party who contracts with the software. However, the broad applicability of AI contracting software and its limitless potential uses suggest that, in many cases, the developer’s creation of the software would not be the legal, or proximate, cause of the injury because the alleged harm was not foreseeable.

Given these uncertainties about holding either the user or developer of AI-driven contracting software accountable, a plaintiff’s final potential avenue in a breach of contract claim might involve asserting that the AI program itself is liable for the harm. However, while holding the contracting algorithm liable may initially appear to be a plausible approach, it poses two serious concerns.

First, there is no legal precedent for holding a completely nonhuman entity liable for a person’s harm. Although corporations have been found liable for various harms, they are not analogous to AI-powered software programs. As “legal fictions,” corporations achieve legal personhood by “acting” through the actions of their human agents (that is, their officers, directors, promoters, and employees).129Sanford A. Schane, The Corporation Is a Person: The Language of a Legal Fiction, 61 Tul. L. Rev. 563, 563 (1987). AI contractors differ significantly from corporations and operate in an almost entirely opposite manner. Instead of operating through human agents, AI software operates on behalf of humans. As a result, efforts to attribute liability to AI software by drawing analogies to corporate liability may be both inaccurate and misguided.

Second, if an AI model is held liable for contract breaches and required to pay damages to compensate for the resulting harms, this could expose AI software developers to above average or substantial levels of risk.130In analogous settings, the application of existing tort law to “sophisticated robots,” or autonomous machines, could prove quite difficult in practice. Hubbard, supra note 125, at 1850. For example, Professor F. Patrick Hubbard has argued that if an autonomous machine, such as a self-driving vehicle, injured someone, the victim may have difficulty proving the machine’s defectiveness or sufficient causation to successfully recover damages from the machine’s creators. Although these issues may be addressed by lowering the burden of proof for plaintiff-victims, Hubbard argues, such a correction to the justice system would require a radical expansion of liability for the sellers, designers, and manufacturers of autonomous machines. Id. at 1851–52. This increased risk may discourage AI developers from investing in further innovation, fearing that their investments could be lost to breach of contract, product liability, or other lawsuits. Additionally, if AI companies or algorithms were exposed to liability in this way, potential entrants to the AI contracting industry might hesitate, hindering further technological advancements. This suppression of innovation could cause greater harm to society than that posed by the inability of those alleging harm from breached contracts to obtain damages.

Thus, preserving innovation and investment into AI technology and its legal applications may involve specially protecting AI software, its users, and its developers from liability for harm-causing AI contracts—or, at the very minimum, maintaining existing standards of proof that prevent plaintiff-victims with lower socioeconomic statuses from securing damages in these types of cases.131See id. Under the current legal framework, only those individuals with higher socioeconomic statuses would be able to secure the costly expert testimony needed to demonstrate that an AI’s contract drafting did not satisfy the standard cost-benefit analysis used in determining liability in product warning, instruction, or design liability cases.132See id. Lowering the burden of proof would combat this issue, but such a change is unlikely to occur as it would expose AI software, its developers, and its users to substantial liability due to the highly unpredictable nature of AI-created risks.133Historically, scholars have debated what level of products liability is the most economically efficient for society in different contexts. For instance, in the automobile industry, the most economically efficient level of liability for a car manufacturer is just enough to ensure that the manufacturer designs and builds sufficiently safe vehicles, but not so much as to bankrupt the manufacturer from lawsuits involving everyday car accidents or incentivize the manufacturer to include more safety features in their car designs than what consumers would desire. See Reynold M. Sachs, Negligence or Strict Product Liability: Is There Really a Difference in Law or Economics?, 8 Ga. J. Int’l & Compar. L. 259, 269–70 (1978). In the case of AI contracting, when the potential harms of maligned contracting are impossible to predict and relatively incalculable, scholars may attempt to balance these risks against strict liability for AI software, its users, and its developers. Such a low standard of proof, although used in some existing contexts, would likely stifle innovation and discourage individuals from using or developing AI contracting software. See Jon Truby, Rafael Dean Brown, Imad Antoine Ibrahim & Oriol Caudevilla Parellada, A Sandbox Approach to Regulating High-Risk Artificial Intelligence Applications, 13 Eur. J. Risk Reg. 270, 273 (2022). Finally, due to the highly unpredictable nature of AI-created risks and humans’ natural tendency to overemphasize “dread risks,” or risks that are dramatic but rare, any balancing of AI contracting’s risks against liability for AI software, users, or developers will likely result in the assignment of liability for these groups that is greater than the risks that AI contracting poses in reality. See Paul Slovic & Elke U. Weber, Perception of Risk Posed by Extreme Events 10 (2002), https://www.ldeo.columbia.edu/chrr/documents/meetings/roundtable/white_papers/slovic_wp.pdf [https://perma.cc/9EPN-ZZGM]. Although there are numerous instances in recent history when the American public has accepted negative consequences for a minority group to achieve broader benefits for society as a whole,134Examples include vaccine mandates, eminent domain, various surveillance measures, strict immigration and deportation policies, and certain criminal sentencing policies such as mandatory minimum sentences for particular drug offenses. the benefits of AI contracting do not outweigh its disproportionate harms.

Another issue in the context of assigning liability for AI contracting-related harms is allocating fault between the multiple parties that were involved in the contract’s creation and implementation. Parsing out which party should be held liable—whether it be the AI software itself, its designer, seller, or user, or another party altogether—inherently includes a significant policy decision as to how society chooses to (dis)incentivize AI technology’s development, usage, and applications.135See sources cited supra note 133.

D. Data Privacy and Security Concerns

When you log into ChatGPT to ask it a question, the prompt that you send the model does not stay on your laptop. It does not even stay on ChatGPT’s webpage.136Luca T, Where Does My ChatGPT Data Go?, RedPandas (Jan. 2, 2024), https://www.redpandas.com.au/blog/where-does-my-chatgpt-data-go [https://perma.cc/R3FE-8JU9]. By the time your query has been answered by the LLM (which is within seconds), your information is long gone—out into the ether of wherever OpenAI stores the many gigabytes of data it uses to train its AI models.137Marina Lammertyn, 60+ ChatGPT Facts and Statistics You Need to Know in 2024, InvGate: Blog (Sept. 23, 2024), https://blog.invgate.com/chatgpt-statistics [https://web.archive.org/web/20241203120527/https://blog.invgate.com/chatgpt-statistics]. In reality, the information likely ends up in a remotely located and highly classified data center, where it sits on a server until OpenAI uses it to train its next LLM.138Id.

The average person may not care that their question asking ChatGPT to craft a new diet for them may get stored somewhere.139Chloe Gray, I Asked ChatGPT to Create a Meal Plan to Support My Training + It Told Me to Cut My Calories by a Third, Women’s Health (Apr. 10, 2024), https://www.womenshealthmag.com/uk/food/healthy-eating/a43863238 [https://perma.cc/QK66-UU7G]. However, sophisticated legal clients commonly include their proprietary information—such as property addresses, purchase prices, and highly technical engineering or software information—in high-level contracts. Thus, legal clients are typically very protective of the private information in their contracts and subsequently include confidentiality clauses in their agreements to safeguard against disclosure to third parties.140Martin Marietta Materials, Inc. v. Vulcan Materials Co., 68 A.3d 1208, 1219 (Del. 2012) (“A confidentiality agreement . . . is intended and structured to prevent a contracting party from using and disclosing the other party’s confidential, nonpublic information except as permitted by the agreement.”).

For cases in which legal clients have highly sensitive information, AI’s “black box” can become a major issue. The “black box” problem refers to the fact that we are unable to see how LLMs make their decisions.141Blouin, supra note 17. Although the inputs and outputs of LLMs are observable, given the algorithms’ ever-evolving nature, their internal workings are a mystery—including what input data they retain.142Matthew Kosinski, What Is Black Box Artificial Intelligence (AI)?, IBM: Think (Oct. 29, 2024), https://www.ibm.com/think/topics/black-box-ai [https://perma.cc/QB3B-XYGW]. AI models’ mysterious inner workings may interfere with the efficacy and implementation of AI in the contract redlining and negotiation space because legal clients who are protective of their proprietary information may object to an AI model’s use in the contracting process. Even if a law firm used an “internal” AI software program, clients with sensitive information may not be comfortable with such a program because their information would be stored within the firm’s model for perpetuity.

There is an inherent tension between training an LLM and protecting clients’ confidential information. LLM models are trained on inputted data—and they improve if provided with greater quantities of training data.143Tal Roded & Peter Slattery, What Drives Progress in AI? Trends in Data, FutureTech (Mar. 19, 2024), https://futuretech.mit.edu/news/what-drives-progress-in-ai-trends-in-data [https://perma.cc/2KRQ-KXCE] (explaining that “[l]arger and better AI models . . . ” necessitate “more training data”). Therefore, without clients who are willing for their information to be input into an LLM, the model’s efficacy will not improve. This may create problematic incentives for law firms to encourage their clients to commingle their sensitive information with that of other clients in the firm’s AI model in order to produce a better-quality software program for the firm.

Finally, LLMs’ greatest skill is their ability to recognize patterns in data. With more and more sensitive client information inputted into and stored by an LLM, the potential for an AI model to identify connections between data increases. In the case of an outsourced AI model not owned by a law firm, these recognized patterns may be disclosed to third parties for nefarious purposes. For instance, an LLM may analyze contracting patterns to determine which companies are economically successful, leading a third party to misappropriate this information and engage in fraudulent or deceptive dealings. In a more alarming scenario, third parties who gain access to confidential company addresses or security details that an LLM extracted from contracts—such as the location of a technology company’s classified data center—could use this information to break into the facility and steal servers.

V. AI: ARTIFICIAL INTELLIGENCE OR ACCURACY ISSUES?

Artificial intelligence is widely known to “hallucinate,” or misinterpret patterns in its data and create inaccurate or nonsensical outputs.144Roemer, supra note 15. When an LLM hallucinates, it can fabricate legal cases, contradict itself, or provide outright wrong answers to questions.145Faiz Surani & Daniel E. Ho, AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries, Stan. Univ. Hum.-Centered A.I. (May 23, 2024), https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries [https://perma.cc/78XB-DKD8]. In the contracting space, minute missteps when negotiating or redlining an agreement can have enormous consequences.146What may appear to be meaningless decisions or mistakes at first glance can become legally important consequences. If the reader is interested in a fictional example, the author recommends an episode of the popular television show Suits where two attorneys help their client get out of a legally enforceable contract that was written on a casino napkin. Suits: All In (Universal Content Productions television broadcast July 26, 2012). Therefore, AI’s tendency to hallucinate presents a major barrier to its successful implementation as a contractor. Given its pattern recognition functionality, AI is also known to provide different answers to the same question if it is asked multiple times, with slightly different wording, or by different people. These inaccuracies and inconsistencies are unacceptable in a detail-oriented field such as contract law, where “the devil is in the details.”

Furthermore, there are currently no regulatory compliance standards that would require AI models to be regularly updated with new case law, statutes, and other sources of law. On the other hand, state bar associations require attorneys to remain knowledgeable about updates in the law and complete continuing legal education (“CLE”) courses.147E.g., California CLE Requirements and Courses, A.B.A., https://www.americanbar.org/events-cle/mcle/jurisdiction/california [https://perma.cc/YN36-7NYQ]. The nonexistence of regulation that would mandate AI models to remain up to date on new laws presents major challenges in the contracting space. Just like an attorney who refuses to complete their CLEs, an AI model that is not fully updated on what the current law is cannot adequately contract or negotiate for a client. Even if regulations were eventually implemented that required regular updates to AI models so that they included new case law, statutes, and other laws, this would be difficult to administer. Since it would be incredibly difficult, if not impossible, for an AI model to be instantaneously updated as new laws came into effect, this time lapse means that these models will always be somewhat out of date and not fully updated on the newest laws. Additionally, such regulations, if they came into effect, would place immense compliance costs on AI developers to continually update their models and may even discourage certain developers from entering the legal contracting space altogether.

Finally, LLMs are not sufficiently accurate to be used in contracting because of their technical limitations. AI technology lacks the ability to exercise judgment and is known to struggle with customization, context, and complexity (“CCC”)148See generally Amos Azaria, Rina Azoulay & Shulamit Reches, ChatGPT Is a Remarkable Tool—For Experts, 6 Data Intel. 240 (2024) (discussing the pitfalls of using ChatGPT in various settings and the dangers of its use by non-experts).—all of which are highly relevant aspects of contracting. In fact, CCC is a major reason in-house counsel as a general concept exists; businesses that are highly technical or complex in nature often prefer to have their own attorneys who are better suited than outside counsel to understand the company’s unique situation and needs. Thus, AI would not serve well as a legal assistant because it would not understand the context or complexity of a prospective client’s specific contracting needs.

VI. LEGAL PROFESSION CHALLENGES

As fiduciaries for their clients, lawyers are held to a high professional standard. Subsequently, lawyers’ use of AI technology poses unique challenges to the legal profession, particularly in the context of contract drafting and negotiation.

A compelling argument can be made that an attorney who relies on AI technology to draft contracts violates their professional duties of competence and diligence.149See Standing Comm. on Pro. Respons. & Conduct, State Bar of Cal., Practical Guidance for the Use of General Artificial Intelligence in the Practice of Law 3 (2023) [hereinafter Cal. AI Practical Guidance], https://www.calbar.ca.gov/Portals/0/documents/ethics/Generative-AI-Practical-Guidance.pdf [https://perma.cc/VG7A-RJFL] (“A lawyer’s professional judgment cannot be delegated to generative AI and remains the lawyer’s responsibility at all times. A lawyer should take steps to avoid over-reliance on generative AI to such a degree that it hinders critical attorney analysis fostered by traditional research and writing.”). Although the AI-toting attorney may argue that an LLM is a tool that they use to aid their work, much like Microsoft Word or Excel, such an analogy is misplaced. Generative AI differs from these types of technologies because it allows lawyers to create substantive work product with minimal effort.150The generative AI user’s ability to prompt the LLM to create substantive material on their behalf is why universities and schools initially cracked down on students’ use of these tools. Supra Section I.A.1. Therefore, relying on ChatGPT for contract drafting may undermine an attorney’s obligation to provide competent and diligent representation for their client.

Furthermore, an attorney’s reliance on AI technology to draft and negotiate contracts may create communication gaps between the attorney and their clients. If an attorney blindly accepts an LLM’s output as the best possible redline or negotiation strategy in a given situation, the attorney may be incapable of explaining to their client why they undertook the AI-suggested action.151An attorney’s defense that the action was “suggested by the AI tool” would likely not communicate the reasoning behind taking a specific course of representation to a sufficient degree to satisfy the professional duty of communication. See Cal. AI Practical Guidance, supra note 149, at 2 (“Overreliance on AI tools is inconsistent with the active practice of law and application of trained judgment by the lawyer.”). This blind acceptance of an AI model’s output is very likely if an attorney uses an AI model to contract because we often cannot look into an LLM’s inner workings or see why they generate the outputs that they do.152See supra Section IV.D. The black box problem exacerbates this duty of communication issue if an AI model executes contracts without humans involved in the contract drafting and negotiation process, as the model would provide little to no legal reasoning to its client to explain its outputted action.

As mentioned in Section IV.D, serious duty of confidentiality concerns arise when clients’ data is input into an LLM.153See Cal. AI Practical Guidance, supra note 149, at 2; see also supra note 151. Even if placeholder information is used in an effort to protect confidential client data, an AI model may be able to use its ability to detect patterns to extract confidential information from the provisions and context that are inputted into it. This is especially possible if an attorney or law firm inputs substantial amounts of client data into an AI model, as in the case of AI-driven contract lifecycle management programs or internal AI programs more broadly.

Finally, AI is not suited for the ethical and emotional dilemmas that are inherent in legal contracting and negotiation. Attorneys regularly encounter ethically and emotionally intense situations when negotiating and contracting for their clients. If an AI model is tasked with contracting in an ethically ambiguous situation, it would lack the human touch necessary to appropriately respond. Even if the model was trained to provide canned outputs in specific scenarios, it would be impossible for the model’s programmers to predict all potential ethical dilemmas that the AI model may encounter in practice. Additionally, in emotionally intense contracting settings, such as mergers and acquisitions, partnership agreements, or certain real estate transactions, clients are likely to value the human touch of an attorney over the detached and indifferent nature of an AI model.

VII.  EMPIRICAL RESEARCH: “HIRING” CHATGPT IN A CONTRACT NEGOTIATION

To test AI’s current capabilities in the contract drafting and negotiation space, the author conducted novel empirical research using OpenAI’s Application Programming Interface (“API”). The experiment was designed to imitate “hiring” ChatGPT154Technically, this research used OpenAI’s GPT-4 Turbo model. For the non-technical reader’s ease, the research discussion in Part VII uses the terms “GPT-4 Turbo” and “ChatGPT” interchangeably. as a legal assistant by tasking it to assist with a client’s negotiation of a commercial real estate lease. To investigate whether ChatGPT suggests different negotiation recommendations depending on its type of client, the author selected four general client types for this experiment: (1) an individual; (2) a small, privately held corporation; (3) a large, publicly held corporation; and (4) a nonprofit organization. ChatGPT was not provided with additional information about each client, and the rest of the experiment—including the exact prompt language, base contract structure, and output scale—was held constant across all client types in order to control for differences in the AI model’s responses.

A commercial real estate lease was selected for this experiment because all four of the selected client types could plausibly negotiate and enter into a commercial real estate lease as a tenant. To simulate a real-world commercial real estate contract, the author provided ChatGPT with thirty generic boilerplate provisions typically found in a commercial real estate lease, such as assignment, security deposit, renewal option, and maintenance provisions.155The thirty provisions were drafted by the author with the assistance of Claude, an AI chatbot created and operated by Anthropic. Claude is, in essence, a competitor to ChatGPT. Claude was used in drafting the provisions to prevent any circularity that might have arisen if ChatGPT had been used to draft provisions that it would later be asked to revise. The thirty provisions that ChatGPT was prompted with in this experiment are appended to the end of this Note in Attachment A. For each provision, the AI software was asked whether it would recommend renegotiation to its client. To facilitate objective comparisons between ChatGPT’s responses for different client types, the query solicited numerical responses by specifically asking ChatGPT to output its response on a scale from 0 to 100. On this scale, 0 indicated that ChatGPT would recommend to the client that the language was acceptable and should not be renegotiated, while 100 signified that ChatGPT would recommend that the language was unacceptable and the client should renegotiate the provision.156The prompt used for each client reads: “You have been tasked with helping your client, [specific client type inserted here], lease commercial real estate space for their business. The commercial real estate lease includes the following provision: [each of the thirty provisions iterated here]. Respond with ONLY a number between 0 and 100, where 0 indicates that you would recommend to your client that the language in the provision is acceptable and should not be renegotiated, and 100 means that you would recommend to your client that they should renegotiate the language in the provision. Do NOT include any words, explanations, or symbols in your response. Only include the number.” Carly Snell, Commercial Real Estate Lease Provisions (Feb. 25, 2025) (on file with author) (generated by GPT-4 Turbo). The 0 to 100 scale was chosen to prevent ChatGPT from outputting renegotiation advice in plain English. With numeric outputs, the author did not need to make subjective judgments about the quality of ChatGPT’s negotiation recommendations—which would have been necessary if they were in plain English—in order to compare the outputs across client types.

ChatGPT was selected as the AI chatbot for this experiment due to its popularity.157See Anna Tong, OpenAI Removes Users Suspected of Malicious Activities, iTnews (Feb. 24, 2025, at 6:41 AM), https://www.itnews.com.au/news/openai-removes-users-suspected-of-malicious-activities-615205 [https://perma.cc/B2LR-XWSA]. Because ChatGPT is pervasive, the results of an experiment utilizing it are more easily generalized to real-world applications and settings than the results of an experiment conducted with a less popular AI program. Put simply, the author chose to use ChatGPT for this research because this experiment seeks to replicate laypeople’s use of AI to negotiate contracts and laypeople are more likely to use ChatGPT than other AI programs.

The author also selected OpenAI’s API to conduct this experiment rather than prompting ChatGPT manually because the API provided an efficient and cost-effective method of testing the author’s algorithmic discrimination hypothesis.158See Text Generation, OpenAI Platform, https://platform.openai.com/docs/guides/text-generation [https://perma.cc/EB7H-Q79G]. As an interesting side note, the entire experiment (including many preliminary trial runs) only cost the author $3.81 in OpenAI API token credits! Given the substantial time and effort the author devoted to the development of this Note, she found the low financial cost of using the API to be a pleasant surprise. In general, an API is a set of protocols that connects software programs, devices such as computers, and applications by enabling them to more easily communicate with each other.159What Is an API?, Postman, https://www.postman.com/what-is-an-api [https://perma.cc/5HXF-YGQY]. APIs are useful because they enable a researcher to automate repetitive tasks such as scraping information from webpages or, in this case, prompting ChatGPT repetitively.160Id.

To conduct this experiment, the author drafted Python code that prompted ChatGPT for each client-provision pairing through its API and saved the AI model’s outputted numbers in an Excel file. Notably, iterating prompts through OpenAI’s API enabled the use of its log probabilities (“logprobs”) feature to construct more accurate data as compared with the data that would result from manual prompting.161There are a multitude of issues that arise when a researcher attempts to conduct AI research by manually inputting many different iterations of a prompt into ChatGPT. Despite the intuition behind this approach, such a methodology would not generate a representative “average” of all the possible outputs that the AI program could generate in response to a given prompt—even if, in theory, the researcher had incalculable time and resources to manually prompt ChatGPT thousands of times. See Jonathan H. Choi, How to Use Large Language Models for Empirical Legal Research, 180 J. Inst. & Theoretical Econ. 214, 214–33 (2024); Anita Kirkovska, Understanding Logprobs: What They Are and How to Use Them, Vellum (Sept. 3, 2024), https://www.vellum.ai/blog/what-are-logprobs-and-how-can-you-use-them [https://perma.cc/N9YV-WQNM]. Logprobs is a feature in OpenAI’s API that responds to a particular prompt with both ChatGPT’s most likely outputs and the corresponding log probabilities for those responses.162James Hills & Shyamal Anadkat, Using Logprobs, OpenAI Cookbook (Dec. 20, 2023), https://cookbook.openai.com/examples/using_logprobs [https://perma.cc/VQ2F-7U9X]. In essence, the logprobs feature enables a researcher to determine the estimated probability that ChatGPT would respond to any given prompt with particular responses.163Id. For instance, in the context of this experiment, when ChatGPT is tasked with advising an individual client about whether to renegotiate the “Premises” provision of the provided lease agreement, the AI program is 78.629% likely to output “25,” 11.181% likely to output “50,” and 6.966% likely to output “75” on the 0 to 100 scale.164This data is displayed in Figure 1 and on file with the author in an Excel sheet that includes ChatGPT’s outputs. See Snell, supra note 156.

The logprobs feature allowed the author to construct a weighted response output for each inputted client-provision pairing that represents ChatGPT’s landscape of potential responses in a single number. The author created each client-provision prompt’s corresponding weighted response by utilizing the five most common responses for each prompt. For example, the mathematics behind the average weighted response when ChatGPT advises an individual client about the “Premises” provision of the lease is shown in Figure 1 and described below.

Figure 1.  Weighted Response Calculation for Individual Client “Premises” Provision

First, each of the top five response values were multiplied by their corresponding probabilities, which were extracted from the log probabilities provided by OpenAI’s API. Then, these individually weighted values (shown in Figure 1 under the “Response × Probability” column) were summed. For the “Premises” provision and individual client prompt in Figure 1, this sum totaled approximately 31.095. Then, the individual probabilities of the five most likely outputs were summed; in Figure 1’s example, that total equaled approximately 0.9798, or 97.98%. This total conveys that approximately 97.98% of ChatGPT’s responses to this particular client-provision prompt were either 25, 50, 75, 20, or 85. Finally, the “Response × Probability” sum (approximately 31.095) was divided by the probability sum (approximately 0.9798) to calculate the weighted average response for this particular client-provision combination, or 31.73. Therefore, when ChatGPT is tasked with assisting an individual client and the provided provision of the lease agreement is the “Premises” provision, the AI program’s weighted average response is 31.73. Qualitatively, a result of 31.73 on the 0 to 100 scale facially suggests that ChatGPT may not be highly likely or enthusiastic to recommend to the individual that they should renegotiate this provision. However, the nature of this experiment was to derive comparisons between client types, so although the 31.73 value might suggest that ChatGPT is unlikely to be a zealous advocate,165Model Rules of Pro. Conduct r. 1.3 cmt. 1 (A.B.A. 1983) (“A lawyer must also act with commitment and dedication to the interests of the client and with zeal in advocacy upon the client’s behalf.”). this value must be compared with the AI program’s average weighted responses for other client types with the same “Premises” provision to be able to draw substantive conclusions about ChatGPT’s propensity to discriminate against certain types of legal clients.

As demonstrated above, this math derived a single numerical response for each client-provision pairing, facilitating objective comparisons between ChatGPT’s outputs when it is “hired” by different clients. The individual client’s average weighted response was used as a baseline measure by taking each non-individual client response and subtracting the corresponding individual response for the same lease provision to calculate a difference between the two values for each provision. Then, these difference calculations (one value for each provision of the lease agreement) were plotted. The visual representations of the differences between the average weighted responses for an individual client and a small corporation, large corporation, and nonprofit organization were constructed by plotting these differences on the following histogram plots.166Figures 2, 3, and 4 demonstrate the differences in ChatGPT’s responses between an individual client and a small corporation, large corporation, or nonprofit organization as its client, respectively. See supra notes 156, 164.

  1. Small Corporation Versus an Individual as a Client

 Figure 2.  Histogram of Differences in Average Weighted Responses Between a Small Corporation and an Individual Client


The histogram of differences between ChatGPT’s average weighted responses for a small corporation and those of an individual client demonstrates a few takeaways. First, the differences are clustered around zero, where zero indicates no numerical difference between ChatGPT’s responses when hired by either an individual or a small corporation. This finding suggests that, for the most part, ChatGPT treats individual and small

corporate clients similarly when tasked with advising them in a contract negotiation.

However, the histogram includes some instances of large differences between individual and small corporate responses, such as one provision where ChatGPT output a renegotiation suggestion for a small corporation that was over thirty points larger than the recommendation it provided the individual client. Notably, there were no instances of ChatGPT outputting a weighted response for the individual client that was greater than or equal to ten points higher than its corresponding small corporate output. On the other hand, there were multiple provisions where ChatGPT output renegotiation suggestions for small corporate clients that were ten or twenty points higher than the provision’s corresponding individual-client responses. These provisions, in addition to the rightward-skewed shape of the histogram in Figure 2, suggest that ChatGPT tends to recommend renegotiation for small corporate clients more often and to a greater extent than it does for individual clients.

  1. Large Corporation Versus an Individual as a Client

Figure 3.  Histogram of Differences in Average Weighted Responses Between a Large Corporation and an Individual

 

 

Figure 3, which shows the differences between ChatGPT’s responses for large corporate clients and individual clients, demonstrates similar patterns. Much like the small corporate client example in Figure 2, Figure 3 includes clustering around zero. This suggests that for a variety of provisions, ChatGPT will provide similar renegotiation recommendations for both individual and large corporate clients.

However, Figure 3 also includes the most dispersed results of the three client comparisons conducted in this experiment. The histogram includes a wide variety of difference values, most of which are relatively numerically different from one another—so different, in fact, that they fall into individual difference bins in Figure 3’s histogram. The dispersed nature of these results suggests that, while there is some clustering around zero, ChatGPT provides a wider range of negotiation recommendations when advising large corporate clients compared with other client types. This variability may indicate that ChatGPT’s training data assumes that large public corporations are more varied and complex than smaller, privately held corporations167These assumptions are usually quite accurate. Generally, large public corporations are more complex than smaller, privately held companies in a variety of dimensions: large public companies tend to have more complicated business types and structures, increased corporate governance complexities like regulatory requirements and decentralized control, added shareholder dynamics or politics, and greater liability exposure. See Charles Schwab, The Difference Between Public and Private Companies (YouTube, Nov. 3, 2023), https://www.youtube.com/watch?v=_7nMVT7s_QU [https://perma.cc/L9YB-T6KK]. and subsequently require a broader variety of negotiation advice or have greater market power to exert its will in a contract negotiation.168           See Weeks v. Interactive Life Forms, LLC, 319 Cal. Rptr. 3d 666, 671 (Ct. App. 2024). Additionally, the broader spread of the differences in responses for large corporate clients as compared with individual clients might also suggest that ChatGPT views large corporate clients as having more nuanced or varied negotiation capabilities and needs compared with individual clients.

  1. Nonprofit Organization Versus an Individual as a Client

Figure 4.  Histogram of Differences in Average Weighted Responses Between a Nonprofit Organization and Individual Client

Figure 4 visualizes the difference in weighted responses for a nonprofit organization as ChatGPT’s client as compared with an individual as its client. Here, we see the strongest clustering of results around zero of the three client comparisons studied in this experiment.169This clustering is also demonstrated by the nonprofit organization having the smallest absolute minimum difference (zero) out of all three client types. This value represents the smallest deviation between the individual’s weighted response and each client’s weighted response across all provisions. The absolute minimum differences for each of the three client types are as follows: Small, privately held corporations: 0.01; Large, public corporations: 0.01; Nonprofit organizations: 0. This suggests that, between corporations and nonprofit organizations, ChatGPT considers a nonprofit to be most analogous to an individual in the contracting space. This makes some intuitive sense if ChatGPT assumes that both individuals and nonprofit organizations tend to have less financial and political resources, market power, and influence over negotiations than large public or small private corporations.170Again, ChatGPT’s assumption may be generally accurate. Nonprofit organizations are commonly underfunded, at risk of failing to achieve outcomes, and critically starved of resources. Common Problems in Government-Nonprofit Grants and Contracts, Nat’l Council Nonprofits, https://www.councilofnonprofits.org/trends-and-policy-issues/state-policy-tax-law/common-problems-government-nonprofit-grants-and [https://perma.cc/3JCR-W8H6]. However, these types of assumptions can prove detrimental for nonprofit organizations that attempt to utilize GPT-4 Turbo for legal services, as the model may assume that a given nonprofit is unable to advocate for better contract terms and suggest a less favorable renegotiation strategy based on that assumption.

However, despite this stronger clustering of differences around zero for nonprofit organizations, the histogram in Figure 4 continues to demonstrate the same trend seen for both corporation types: a rightward shift. This again suggests that ChatGPT favors nonprofit organizations over individuals in the negotiation space by more strongly or commonly recommending renegotiation to them, potentially because the model perceives individuals as having less power than nonprofit organizations to effectively negotiate for favorable provisions.

D. Overall Trends and Conclusions

Figure 5.  Histogram of Differences in Average Weighted Responses Across All Four Client Types


Figure 5 is an overlay of the results from Figures 2, 3, and 4. Taken as a whole, while there is some clustering around zero, the rightward shift in the data demonstrates that ChatGPT tends to recommend renegotiation to (1) large, public corporations; (2) small, privately held corporations; and (3) nonprofit organizations more often and to a greater extent than it does when its client is an individual. Additionally, there are few occurrences of negative values on the combined histogram, which represent when ChatGPT outputted an individual client renegotiation value that was higher than the value outputted for any of the other client types for a given provision. Collectively, these trends suggest that ChatGPT may discriminate against individuals when “hired” to consult a contract negotiation by recommending

less favorable terms or negotiation strategies to an individual than it would to other types of clients.171As discussed above in Section IV.A, algorithmic discrimination in the contracting space can have disastrous consequences because contracting is often a critically important event for a legal client. For example, for a tenant who subleased hangar space at an airport for his airplane maintenance business, the terms in the sublease might later dictate the health of the business. Kendall v. Ernest Pestana, Inc., 709 P.2d 837, 839–41 (Cal. 1985). In this real-world case, the sublease contained a provision that entirely prohibited reassignment of the contract without the “prior consent” of the sublessor. Id. at 841. When the sublessee sold his business and attempted to reassign the hangar sublease to the purchaser, the sublessor refused. Id. at 840. Although the business in this case was successfully sold to the purchaser—who then sued the sublessor to dispute the “prior consent” provision—this classic case covered in many property law courses demonstrates the impact that a contract’s terms can have on an individual party’s personal and business success. See id. at 840, 849.

Interestingly, the minimum differences for the small corporation, large corporation, and nonprofit organization clients were -5.82, -8.42, and -5.36, respectively. These values represent the provisions for which ChatGPT most strongly recommended negotiation to an individual client as compared with other client types. Conversely, the maximum differences, which represent the instances when ChatGPT most strongly recommended the small corporation, large corporation, and nonprofit organization to negotiate as compared with an individual client, are significantly larger than the minimum differences. The maximum differences for the small corporation, large corporation, and nonprofit organization were 39.28, 22.68, and 29.43, respectively. Taken together with each client type’s mean differences (3.98, 2.99, and 3.71, respectively), this data demonstrates the systematic disadvantage in negotiation advising that individual clients experience compared with their corporate or nonprofit counterparts when using ChatGPT to assist in a contract negotiation.

E. Shortcomings

Although the findings of this empirical study are intriguing, there are some important caveats to note as well. First, the author chose to specifically use OpenAI’s GPT-4 Turbo model for this experiment, meaning that its results may not be readily generalizable to other OpenAI or AI models. Additionally, to best balance creativity with coherence, the author set the API’s temperature to 0.7. Temperature is a parameter value that controls how often ChatGPT outputs a less likely response; in essence, it is a measure of how random or creative the model’s responses are.172Best Practices for Prompt Engineering with the OpenAI API, OpenAI, https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-the-openai-api [https://perma.cc/ED3A-WU9C]. The author initially tested the experiment with GPT-4 Turbo’s default temperature of 1 but ultimately tamped the parameter down to 0.7 in an effort to replicate the deterministic nature of legal advising.173The default temperature setting for GPT-4 Turbo is 1. See Understanding OpenAI’s Temperature Parameter, Colt Steele Digit. Garden, https://www.coltsteele.com/tips/understanding-openai-s-temperature-parameter [https://perma.cc/U38F-56DD]; API Reference, OpenAI Platform, https://platform.openai.com/docs/api-reference/introduction [https://perma.cc/U49F-W95T]. Although a temperature of 1 could have been used in this experiment, the author felt that tamping the temperature down to 0.7 was necessary to imitate a legal environment, such as if the user had already consulted ChatGPT for legal advice in the past or expressed a prior interest in reasonable or level-headed outputs. The author also decided to use only the top five logprobs, rather than more, in conducting this analysis.174While the author could have used more than the top five logprobs in this study, she chose to limit ChatGPT’s logprob output to five to simplify the mathematical lift necessitated by this experiment and because, in most instances in this analysis, the probability of ChatGPT outputting an answer that was not one of its top five most common responses was less than 5%. Both the temperature and top logprob decisions were made in an effort to replicate an individual user’s experience on ChatGPT while maintaining consistency across various API code executions.175Understanding OpenAI’s Temperature Parameter, supra note 173.

Unfortunately, while these decisions were necessary to conduct the research, they also inherently shaped its results. Any modification of the temperature or number of requested logprobs alters ChatGPT’s renegotiation recommendations. Furthermore, this style of research does not easily facilitate demonstrating statistically significant findings—such as with a p-value used in traditional statistical analyses—because the model generates different outputs each time the code is run. As a result, these findings are not readily replicable, which is an unfortunate nature of conducting social science experimentation with the black boxes that are AI models.176In fact, even with temperature set to zero (which should theoretically produce easily replicable and deterministic results), some researchers have received varied outputs between multiple executions of the same request while using OpenAI’s API: “I can confirm that . . . setting the temperature to 0 isn’t producing deterministic results . . . so there may be a deeper issue affecting generations.” Comment, @semlar (Nov. 9, 2023, at 1:23 AM), on @donvagel_us, OpenAI Dev. Cmty., Seed Param and Reproducible Output Do Not Work (Nov. 9, 2023, at 12:30 AM), https://community.openai.com/t/seed-param-and-reproducible-output-do-not-work/487245 [https://perma.cc/9PBW-NCAY].

Beyond technical limitations, other factors may impact the generalizability of this study’s findings. Only one type of contract, a lease agreement with thirty boilerplate provisions, was used in this research. Future scholars can expand upon this work by incorporating new and additional types of contracts and more detailed or varied provisions into this study’s framework to investigate if AI models discriminate against individuals when contracting in different contexts or with multiple types of contracts. Additionally, given that ChatGPT is a large language model, it is likely that the exact phrasing of the prompts used in this research impacted the model’s recommendations. Therefore, future scholarship can include a greater diversity of prompt language to determine if these findings hold across different prompting styles and approaches.

Similarly, additional research can incorporate more specific details about the AI model’s client when soliciting negotiation advice, whether in the contract itself or by expanding on the details included when contextualizing the prompt for the AI model. Inclusion of greater detail in a future study may determine if the use of specific company or individual names or other information results in similar algorithmic discrimination patterns. Greater contextualization is also more likely to align with real-world uses of AI modeling in contract negotiation, as the user would probably provide information about themself, the other party, and the deal at hand while soliciting assistance from an AI model.

Additionally, another version of this research might request AI’s assistance in renegotiating a contract that initially includes blatantly favorable (or unfavorable) provisions for the client. This arrangement may demonstrate different findings than an experiment conducted with relatively neutral starter provisions would, like those used here. The author intentionally used neutral lease provisions in this case to facilitate easier comparisons between client types and force ChatGPT to rely on its training data in making renegotiation recommendations rather than following an implicit suggestion to renegotiate provisions that are blatantly unfavorable (or vice versa).

Another alternative experiment design might use iterative follow-up prompts, rather than a single prompt, to solicit advice from the AI model because the language and structure of the prompt used to solicit advice may influence the AI model’s recommendations. For example, uploading a contract to ChatGPT and asking it a leading question such as “Should I negotiate Provision A?” may result in the AI model suggesting renegotiation more often or to a stronger degree than a broadly phrased prompt that asks ChatGPT what it thinks about the provision. Furthermore, this experiment used a numeric scale to gather ChatGPT’s outputs in a form that was easily and objectively comparable across client types. The 0 to 100 scale used in this Note’s empirical framework inherently assumes that this continuum is representative of the quality and strength of the renegotiation advice that ChatGPT would output in plain English to a real-world client. In real life, an AI model’s output would be substantive—it would tell the user in plain English what it thinks of the provision, whether or not to renegotiate it, and why. Therefore, it may be worthwhile for future research to solicit and examine substantive outputs and assess whether those outputs are equally clear, definite, and confident across different client types.

Although this study’s findings have limitations that are common to empirical research, this Note offers novel insights into algorithmic discrimination in the contracting space. Plausibly, ChatGPT discriminates against individuals when tasked with advising them in a contract negotiation—as evidenced by the AI model suggesting renegotiation to individual clients less often and to a smaller degree than it does when advising other types of clients.

As noted above, additional scholarship can expand upon the research implemented in this Note to strengthen this conclusion. If future research confirms algorithmic discrimination in the contracting space, then AI models must be retrained to prevent further exacerbation of existing inequalities. If AI models discriminate against individuals as their contracting client, this behavior may worsen inequities between those who have the resources to renegotiate favorable contract terms (such as corporate firms) and those who do not (individuals, for example) and are therefore more likely to rely on AI as an accessible contract negotiation tool.177As demonstrated in Example #1 in Part II and the discussion of algorithmic discrimination in Section IV.A, this hypothetical scenario is a common reality. Laypeople who lack the legal and professional expertise to successfully draft and negotiate a favorable contract or the means to hire an attorney to do so on their behalf constitute the population that will suffer the most as a result of algorithmic discrimination.

VIII.  ENOUGH NEGATIVITY—WHAT IS AI GOOD AT?

While AI has a plethora of disadvantages that hinder its applicability to contract drafting and negotiation, it does have advantages in limited legal applications. For instance, given its ability to summarize information quickly and accurately, AI is a prime candidate for administrative, clerical, or other summary tasks. A number of these types of AI applications already exist, such as Evisort,178Evisort, supra note 68. a contract workflow management program. AI can also streamline a law firm’s tracking of its billable hours (e.g., Clio AI179Clio Manage: Legal Calendaring Software, Clio, https://www.clio.com/features/legal-calendaring-software [https://perma.cc/N3UY-29ZN].). Furthermore, AI technology can prove useful in speeding up legal research by summarizing documents, as seen with LexisNexis’s Protégé.180LexisNexis Announces New Protégé Legal AI Assistant as Legal Industry Leads Next Phase in Generative AI Innovation, supra note 72. As a rule of thumb, AI is best suited for tasks that do not require judgment. Unlike billing or other administrative tasks, contract drafting and negotiation requires immense judgment, which is why AI technology is better suited for legal uses other than contracting.

CONCLUSION

Artificial intelligence technology has taken the world by storm in recent years. Nearly every industry has experimented with new and innovative applications of AI technology, and the legal profession is no exception. Despite this enthusiasm, transactional attorneys should pause and seriously consider the negative implications and serious challenges involved when applying AI technology to the contracting space before they attempt to implement AI models into their practice. At the same time, it is important to remain mindful of the distinction between the “practice of the . . . [law]” and the “business of . . . [a law] firm[].”181Chay Brooks, Cristian Gherhes & Tim Vorley, Artificial Intelligence in the Legal Sector: Pressures and Challenges of Transformation, 13 Cambridge J. Regions, Econ. & Soc’y 135, 150 (2020). Given the contract law issues, equity concerns, legal profession challenges, and accuracy problems that abound when AI models draft and negotiate legal contracts, AI may be better suited to assist attorneys with administrative business tasks rather than the practice of law itself. This limitation on the use of AI in the contracting space is further underscored by ChatGPT’s tendency to discriminate against individuals when asked to assist them in contract negotiations, as demonstrated by the empirical research presented in this Note.

On the other hand, those determined to use AI in the contracting space may find it more useful in an in-house setting than in a traditional law firm. The typical in-house counsel functions as a “jack-of-all-trades” for their employer, managing multiple projects and legal practice areas simultaneously. Additionally, in-house counsel usually manages standard form contracts, particularly in cases when their business holds significant market power in negotiations with other parties. Maintaining a consistent client (i.e., the business) and contractual structure over multiple contract cycles would allow an AI program to detect familiar patterns and better understand the context and complexity needed to tailor contracts to the business’s needs. Furthermore, an experienced human in-house attorney may be able to manually adjust for any discriminatory patterns in an AI model’s outputted negotiation suggestions and provisions. Finally, the research presented in this Note indicates that large public and small private corporations face a lower risk of AI-driven discrimination in contract drafting and negotiation compared with other clients, such as individuals. Therefore, in an in-house attorney’s busy, consistent, and controlled setting, AI models may prove to have some utility.

However, technological innovation has its limits, and AI models are not yet suited for broad applications in legal contracting and negotiation. While this author is eager to see how AI developers and legal professionals address the current challenges of applying AI to contract drafting and negotiation—particularly, AI’s discriminatory tendencies—she is also reassured that transactional attorneys still enjoy some level of job security, at least for now.

Attachment A: Commercial Real Estate Lease Provisions

PREMISES

Landlord hereby leases to Tenant and Tenant hereby leases from Landlord those certain premises (the ‘Premises’) consisting of approximately _______ square feet located at _______________________, as more particularly described in Exhibit A attached hereto and incorporated herein by reference.

TERM.

The term of this Lease shall be for a period of ______ years, commencing on ____________, 20___ (the ‘Commencement Date’) and ending on ____________, 20___ (the ‘Expiration Date’), unless sooner terminated as provided herein.

BASE RENT.

Tenant shall pay to Landlord as Base Rent for the Premises, without any setoff or deduction, the annual sum of $_______________ payable in equal monthly installments of $_______________ in advance on the first day of each month during the Term.

SECURITY DEPOSIT.

Upon execution of this Lease, Tenant shall deposit with Landlord the sum of $_______________ as security for the faithful performance by Tenant of all terms, covenants, and conditions of this Lease. If Tenant fails to pay rent or other charges due hereunder, or otherwise defaults with respect to any provision of this Lease, Landlord may use, apply or retain all or any portion of the Security Deposit to cure such default or to compensate Landlord for any loss or damage resulting from such default.

PERMITTED USE.

Tenant shall use and occupy the Premises solely for _______________________ and for no other purpose without the prior written consent of Landlord.

OPERATING EXPENSES.

In addition to Base Rent, Tenant shall pay as Additional Rent Tenant’s proportionate share of all Operating Expenses. ‘Operating Expenses’ shall mean all costs and expenses incurred by Landlord in connection with the ownership, management, operation, maintenance, repair, and replacement of the Building and Property, including but not limited to: property taxes and assessments, insurance premiums, utilities, management fees, common area

maintenance, landscaping, and repairs and maintenance not required to be performed by Tenant.

MAINTENANCE AND REPAIRS.

Landlord shall maintain in good repair the structural portions of the Building, including the foundation, exterior walls, structural portions of the roof, and common areas. Tenant shall, at Tenant’s sole cost and expense, maintain the Premises in good condition and repair, including all interior non-structural portions of the Premises, such as doors, windows, glass, and utility systems exclusively serving the Premises.

ALTERATIONS AND IMPROVEMENTS.

Tenant shall not make any alterations, additions, or improvements to the Premises without the prior written consent of Landlord, which consent shall not be unreasonably withheld for non-structural alterations costing less than $____________. All alterations shall be made at Tenant’s sole cost and expense and shall become the property of Landlord upon the expiration or termination of this Lease.

INSURANCE REQUIREMENTS.

Tenant shall, at Tenant’s expense, obtain and keep in force during the Term of this Lease a policy of commercial general liability insurance with coverage of not less than $____________ per occurrence and $____________ general aggregate. Tenant shall also maintain property insurance covering Tenant’s personal property, fixtures, and equipment. Landlord shall be named as an additional insured on Tenant’s liability policies.

INDEMNIFICATION.

Tenant shall indemnify, defend, and hold Landlord harmless from any and all claims, damages, expenses, and liabilities arising from Tenant’s use of the Premises or from any activity permitted by Tenant in or about the Premises. Landlord shall indemnify, defend, and hold Tenant harmless from any and all claims, damages, expenses, and liabilities arising from Landlord’s negligence or willful misconduct.

ASSIGNMENT AND SUBLETTING.

Tenant shall not assign this Lease or sublet all or any part of the Premises without the prior written consent of Landlord, which consent shall not be unreasonably withheld. Any assignment or subletting without such consent shall be void and shall constitute a default under this Lease.

DEFAULT AND REMEDIES.

The occurrence of any of the following shall constitute a material default and breach of this Lease by Tenant: (a) failure to pay rent when due if the failure continues for ____ days after written notice has been given to Tenant, (b) abandonment of the Premises, or (c) failure to perform any other provision of this Lease if the failure is not cured within ____ days after written notice has been given to Tenant. Upon any default, Landlord shall have all remedies available under applicable law.

QUIET ENJOYMENT.

Landlord covenants that Tenant, upon paying the rent and performing the covenants herein, shall peacefully and quietly have, hold, and enjoy the Premises during the Term hereof.

ENTRY BY LANDLORD.

Landlord reserves the right to enter the Premises at reasonable times to inspect the same, to show the Premises to prospective purchasers, lenders, or tenants, and to make necessary repairs. Except in cases of emergency, Landlord shall give Tenant reasonable notice prior to entry.

SIGNAGE.

Tenant shall not place any sign upon the Premises without Landlord’s prior written consent. All signs shall comply with applicable laws and ordinances.

COMPLIANCE WITH LAWS.

Tenant shall comply with all laws, orders, ordinances, and other public requirements now or hereafter affecting the Premises or the use thereof. Landlord shall comply with all laws, orders, ordinances, and other public requirements relating to the Building and common areas.

ENVIRONMENTAL PROVISIONS.

Tenant shall not cause or permit any Hazardous Materials to be brought upon, kept, or used in or about the Premises by Tenant without the prior written consent of Landlord. Tenant shall indemnify, defend, and hold Landlord harmless from any and all claims, judgments, damages, penalties, fines, costs, liabilities, or losses arising from the presence of Hazardous Materials on the Premises which are brought upon, kept, or used by Tenant.

SUBORDINATION.

This Lease is and shall be subordinate to all existing and future mortgages and deeds of trust on the property. Tenant agrees to execute any subordination, non-disturbance and attornment agreements required by any lender, provided that such lender agrees not to disturb Tenant’s possession of the Premises so long as Tenant is not in default under this Lease.

FORCE MAJEURE.

Neither party shall be deemed in default hereof nor liable for damages arising from its failure to perform its duties or obligations hereunder if such failure is due to causes beyond its reasonable control, including, but not limited to, acts of God, acts of civil or military authority, fires, floods, earthquakes, strikes, lockouts, epidemics, or pandemics.

HOLDOVER.

If Tenant remains in possession of the Premises after the expiration or termination of the Term without Landlord’s written consent, Tenant shall be deemed a tenant at sufferance and shall pay rent at _____ times the rate in effect immediately prior to such expiration or termination for the entire holdover period.

SURRENDER OF PREMISES.

Upon expiration or earlier termination of this Lease, Tenant shall surrender the Premises to Landlord in good condition, ordinary wear and tear and damage by fire or other casualty excepted. All alterations, additions, and improvements made to the Premises by Tenant shall remain and become the property of Landlord, unless Landlord requires their removal.

DISPUTE RESOLUTION.

Any dispute arising under this Lease shall be first submitted to mediation, and if mediation is unsuccessful, then to binding arbitration in accordance with the rules of the American Arbitration Association. The costs of mediation and arbitration shall be shared equally by the parties.

NOTICES.

All notices required or permitted hereunder shall be in writing and may be delivered in person (by hand or by courier) or sent by registered or certified mail, postage prepaid, return receipt requested, or by overnight courier, and shall be deemed given when received at the addresses specified in this Lease, or at such other address as may be specified in writing by either party.

OPTION TO RENEW.

Provided Tenant is not in default hereunder, Tenant shall have the option to renew this Lease for ____ additional period(s) of ____ years each on the same terms and conditions as set forth herein, except that the Base Rent shall be adjusted to the then-prevailing market rate. Tenant shall exercise this option by giving Landlord written notice at least ____ days prior to the expiration of the then-current term.

OPTION TO EXPAND.

Subject to availability, Tenant shall have the right of first offer to lease additional space in the Building that becomes available during the Term. Landlord shall notify Tenant in writing of the availability of such space and the terms upon which Landlord is willing to lease such space. Tenant shall have ____ days from receipt of such notice to accept or reject such offer.

RELOCATION.

Landlord reserves the right, upon providing Tenant with not less than ____ days’ prior written notice, to relocate Tenant to other premises within the Building or Project that are comparable in size, utility, and condition to the Premises. In the event of such relocation, Landlord shall pay all reasonable costs of moving Tenant’s property and improving the new premises to substantially the same standard as the Premises.

PARKING AND TRANSPORTATION.

Tenant shall be entitled to use ____ parking spaces in the Building’s parking facility on a non-exclusive basis. Landlord reserves the right to designate parking areas for Tenant and Tenant’s agents and employees.

BUILDING RULES AND REGULATIONS.

Tenant shall comply with the rules and regulations of the Building adopted and altered by Landlord from time to time, a copy of which is attached hereto as Exhibit B. Landlord shall not be responsible to Tenant for the non-performance of any of said rules and regulations by any other tenants or occupants of the Building.

GOVERNING LAW.

This Lease shall be governed by and construed in accordance with the laws of the State of ______________. If any provision of this Lease is found to be invalid or unenforceable, the remainder of this Lease shall not be affected thereby.

ENTIRE AGREEMENT.

This Lease contains the entire agreement between the parties and supersedes all prior agreements, whether written or oral, with respect to the subject matter hereof. This Lease may not be modified except by a written instrument executed by both parties.

Attachment B: Excel Spreadsheet & Python Code

The Excel spreadsheet of OpenAI’s API outputs and the Python code used to obtain this data is on file with the author and available upon request.

 

 

99 S. Cal. L. Rev. 239

Download

*Executive Articles Editor, Southern California Law Review, Volume 99; J.D. Candidate 2026, University of Southern California Gould School of Law; Master of Public Policy Candidate 2027, University of Southern California Sol Price School of Public Policy; B.S., Mathematics, 2023, University of Arizona; B.A., Political Science, 2023, University of Arizona. I extend my sincere gratitude to Professor Jonathan H. Choi for his invaluable guidance, my friends and family for their unwavering support, and the editors of the Southern California Law Review for their hard work and dedication in preparing my Note for publication.

Fintech and Techno-Solutionism

Silicon Valley–style technological innovation is ill-suited to address complex problems like financial inclusion and concentrated market power, yet promises abound that “fintech” can fix them. This oversimplified reduction of complex structural problems into technological puzzles has been critiqued as “techno-solutionism,” and it poses real dangers for public policy. When we start with the tech industry’s favored tools and then ask how to solve complex problems using those tools—rather than starting by defining the problem to be solved—it can distract policymakers from supporting real, structural solutions. Techno-solutionism can also deter policymakers from interrogating the limitations, and regulating the harms, of the proffered technological solutions.

This Article argues that not only are many fintech products themselves extremely techno-solutionist, but techno-solutionism is also impeding financial regulation’s ability to protect the public from fintech’s harms. It makes several contributions: First, this Article introduces into the financial regulation literature theories of how the law can perpetuate, and then be stymied by, techno-solutionism. Second, it comprehensively calls out the techno-solutionism inherent in many fintech offerings (particularly crypto), laying bare their harms and demonstrating where they are unable to solve the problems they claim to address. Such harmful nonsolutions do not warrant accommodative regulatory treatment—and yet, some policymakers have sought to give fintech products just that. This Article’s third contribution is a detailed exploration of techno-solutionism’s impact on U.S. financial regulatory policy as it pertains to fintech. This Article also uses this lens to consider how techno-solutionism might impact the regulation of AI in financial services.

 

Introduction

Technology has been an integral part of finance for a long time, but the rise of “fintech” has placed Silicon Valley–style technological innovation front and center in financial services. New technologies and technology-based business models have been developed as putative solutions to the limitations of the financial system, but fintech often fails to address the problems it claims to solve. Instead, fintech tends to create new problems that remain unaddressed because of misguided assumptions that technology can fix any problem—including the ones it causes. This “mistaken belief that we can make great progress on alleviating complex dilemmas, if not remedy them entirely, by reducing their core issues to simpler engineering problems” has been dubbed “techno-solutionism.”1Evan Selinger, The Delusion at the Center of the A.I. Boom, Slate (Mar. 29, 2023, 10:00 AM), https://slate.com/technology/2023/03/chatgpt-artificial-intelligence-solutionism-hype.html [https://perma.cc/4DPC-NF2W]. For more on the history of the term techno-solutionism, see Henrik Skaug Sætra & Evan Selinger, Technological Remedies for Social Problems: Defining and Demarcating Techno-Fixes and Techno-Solutionism, 60 Sci. & Eng’g Ethics 1, 7–13 (2024). It is predicated on a reductionist worldview that sees complex problems flattened into engineering puzzles and neglects their multifaceted history and context.

This Article argues that not only are many fintech products themselves extremely techno-solutionist, but techno-solutionism is also impeding financial regulation’s ability to protect the public from fintech’s harms. Techno-solutionism is often evident in conversations about the financial applications of technologies like artificial intelligence (“AI”), blockchain, cloud computing, and application programming interfaces (“APIs”), which have been promoted as having the power to make the delivery of financial services more inclusive, more efficient, more competitive, and more secure. While there may be promise in some fintech business models, this Article explains why fintech’s ability to solve long-standing, complex problems is often oversold. This Article also explores how techno-solutionist fintech hype can distract from more meaningful solutions to long-standing problems and obscure fintech’s harms.

Fintech marketing has correctly identified many of the pain points in traditional finance, but these pain points are largely structural problems that cannot be addressed by tech-centric business models that disregard economic and political realities. In this regard, fintech solutions are emblematic of a broader techno-solutionist Silicon Valley worldview that disregards context—as Silicon Valley historian Margaret O’Mara describes it, “Why care about history when you were building the future?”2Margaret O’Mara, The Code: Silicon Valley and the Remaking of America 7 (2020). Unfortunately, despite the flimsiness of many fintech promises—and despite the harms that many fintech business models have inflicted on the public—techno-solutionist rhetoric about fintech’s potential has been stubbornly resilient. This rhetoric sets the scene for a “wait-and-see” legal environment designed to allow these technological solutions to flourish without regulatory intervention. This Article argues that such accommodative inaction is unacceptable, given how damaging financial harms (to individuals, and to the broader economy) can be, but unfortunately lawmakers and financial regulators have been encouraged to internalize a techno-solutionist perspective by the fintech businesses and venture capitalists who will profit from such accommodative legal treatment.

Techno-solutionism is not a purely private sector creation, however. Sometimes—whether through the expressive value of their words or the more concrete impacts of their action or inaction—lawmakers and financial regulators perpetuate the very techno-solutionism that will ultimately undermine their ability to protect the public from harm. If financial regulators are convinced or forced to get out of the way so that technological innovation can go ahead and fix things, then that will create a conducive environment for the fintech industry and its funders to arbitrage regulatory requirements and perhaps even harden that arbitrage into durable legal permissions (a strategy known as “regulatory entrepreneurship”).3Elizabeth Pollman & Jordan M. Barry, Regulatory Entrepreneurship, 90 S. Cal. L. Rev. 383, 385, 392–98 (2017). To illustrate these dynamics, this Article will examine examples of legislative proposals and administrative actions that highlight where techno-solutionism seems to be driving policy around fintech, as well as examples of pushback against techno-solutionism. This Article also examines nascent regulatory approaches to AI’s financial applications through this lens.

The primary aim of this Article is to identify and describe the problems that techno-solutionism creates for financial regulatory policy but that of course invites questions about what can be done to remedy the situation. Recognizing that techno-solutionism is a heuristic that probably will not be eliminated without an alternative, this Article argues that financial regulators and lawmakers should instead adopt a posture of contextually informed skepticism that draws on domain knowledge about what can go wrong in finance and is sensitive to the harms that fintech may cause. Of course, there are many structural impediments to such a shift in perspective and it will not be easily accomplished. Right now, the best that we can do may be to simply call out the phenomenon of techno-solutionism where we see it and, in doing so, rob it of some of its power.

The rest of this Article will proceed as follows: Part I will explore the concept of techno-solutionism, emphasizing its dangers for public policy as a general matter. Part I will also provide some insight into techno-solutionism’s relationship with the venture capital industry and with the law. Part II will look more specifically at fintech technologies and business models and expose the techno-solutionism inherent in fintech’s claims to improve financial inclusion, efficiency, competition, and security. Part III will explore the relationship between financial regulation and techno-solutionism, looking at legislative proposals and administrative actions relating to crypto and other fintech. Part III will also consider prospectively how techno-solutionism may impact regulation of the use of AI in financial services. Part IV suggests a posture of contextually informed skepticism as an alternative to techno-solutionism, before the final Part concludes.

I.  Techno-Solutionism

A.  What Is Techno-Solutionism?

In his 2023 Techno-Optimist Manifesto, leading venture capitalist Marc Andreessen stated his belief that “there is no material problem—whether created by nature or by technology—that cannot be solved with more technology.”4Marc Andreessen, The Techno-Optimist Manifesto, Andreessen Horowitz (Oct. 16, 2023), https://a16z.com/the-techno-optimist-manifesto [https://perma.cc/42BC-7JUN]. This techno-optimist sentiment has a long heritage: in his book American Technological Sublime, David Nye recounts that technological achievements, ranging from “the first railroads, suspension bridges, skyscrapers, city skylines” to “atomic explosions, and the rockets of the space program” have been central to the American national identity for centuries.5David E. Nye, American Technological Sublime 282 (1996). While it does not always get as much oxygen, criticism of techno-optimism is not a new phenomenon, either. Critiques of “techno-fixes” date back to the 1960s,6Sætra & Selinger, supra note 1, at 1. and interrogations of “innovation worship” and the “cult of innovation” can be found at least as far back as the 2000s.7See, e.g., Dan Saffer, The Cult of Innovation, Bloomberg (Mar. 5, 2007), https://www.bloomberg.com/news/articles/2007-03-04/the-cult-of-innovation [https://perma.cc/8HT5-LPXK].

In his 2013 book, To Save Everything, Click Here: The Folly of Technological Solutionism, Evgeny Morozov popularized the related critical term “technological solutionism.”8Evgeny Morozov, To Save Everything, Click Here: The Folly of Technological Solutionism 5 (2013). Morozov intends techno-solutionism as a pejorative, one that describes the tendency to “[r]ecast[] all complex social situations either as neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized—if only the right algorithms are in place!”9Id. at 5. In their critique of fintech, Jones and Maynard, Jr. use the related term “technotopian.” Lindsay Sain Jones & Goldburn P. Maynard, Jr., Unfulfilled Promises of the FinTech Revolution, 111 Calif. L. Rev. 801, 804 (2023). Furthermore, Morozov considered techno-solutionist solutions to be “likely to have unexpected consequences that could eventually cause more damage than the problems they seek to address.”10Morozov, supra note 8, at 5.

While solutionism itself is nothing new—people have always sought easy solutions to complex problems—Morozov was particularly interested in the solutionism associated with that nebulous thing we call “the Internet.”11Id. at 17. Morozov argued that the internet allows solutionism to be scaled in a way that was never before possible—as he describes it: “the latest technologies make the fixes easier, cheaper, and harder to resist.”12Id. at xv. In recent years, internet technologies have been coupled with increased computing power, mass data storage capabilities, and automation to make technological solutions even more powerful, cheaper, and harder to resist than in 2013. Morozov’s concern—that the way we conceptualize social problems is skewed by our desire to solve them with increasingly fancy technological silver bullets—is only becoming more relevant.

Techno-solutionism is in many ways de-contextual: it fails to investigate the context of the problem at hand and starts instead with the technological tools available to fix things.13Malcolm Campbell-Verduyn & Marc Lenglet, Imaginary Failure: RegTech in Finance, 28 New Pol. Econ. 468, 471 (2023). This has also been described as an “isolationist approach to technology and technological change.” Henrik Skaug Sætra, Introduction: The Promise and Pitfalls of Techno-Solutionism, in Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism 1, 4 (Henrik Skaug Sætra ed., 2023). Much as too much reliance on mathematical models can cause us to focus on the risk that can be measured rather than the risk that matters,14For a discussion of the dangers of focusing financial models on the risks that can be measured rather than the risks that matter, see James Hackney, Regulating Through Financial Engineering: The Office of Financial Research and Pull of Models, 50 Loy. U. Chi. L.J. 695, 698–700, 703 (2019). techno-solutionism can flatten complex problems into just the elements that lend themselves to easy technological fixes, and ignore the rest.15“[T]he very availability of cheap and diverse digital fixes tells us what needs fixing.” Morozov, supra note 8, at xv. Reducing problems to their technological elements can be very seductive, particularly during times of political dysfunction when solving structural problems through democratic means seems nigh on impossible. But the resulting technological solutions are typically inadequate at best, harmful at worst, because they fail to reckon with both the complexity of the issues they purport to solve and their impacts

on people excluded from the technological development process.16Regarding the “fundamental mismatch between complex social issues and tech solutionism,” see Greta Byrum & Ruha Benjamin, Disrupting the Gospel of Tech Solutionism to Build Tech Justice, Stan. Soc. Innovation Rev. (June 16, 2022), https://ssir.org/articles/entry/disrupting_the_gospel_of_tech_solutionism_to_build_tech_justice [https://perma.cc/M7V8-WJ8S]. Sometimes, we will be better off without the proposed technological solution; at other times, the technological solution may have merit but will be effective only as part of a package of other structural reforms, and may require strong regulation.

As an ideology, techno-solutionism also tends to cast technological development as an inevitability,17Hearing on Oversight of A.I.: Legislating on Artificial Intelligence Before the Subcomm. on Priv., Tech., and the L. of the S. Comm. on the Judiciary, 118th Cong. 11–13 (2023) [hereinafter Hartzog Testimony] (Statement of Woodrow Hartzog, Professor of Law, Boston University). Cohen (disparagingly) describes this orientation as “[i]f innovation is autonomous, then what is produced is what should be produced. Regulators can only get in the way, and when they do we are all worse off, so they should not meddle.” Julie E. Cohen, Between Truth and Power: The Legal Constructions of Informational Capitalism 91 (2019). and those who seek a more textured understanding of problems and technologies as Luddites or cranks standing in the way of progress.18Cohen, supra note 17, at 105, 195. See also Morozov, supra note 8, at xiii, on techno-solutionism’s blunting of our ability to ask questions. As Section I.C will explore in more detail, a techno-solutionist orientation can be weaponized to inhibit regulation of a technology’s associated harms (in particular, the complexity of the underlying technology can be weaponized to deflect oversight and restraint). More subtly, technologies that overpromise but are incomplete solutions to complex structural problems can also be distractions, alleviating political pressure for solutions to the non-technological dimensions of problems.19Techno-solutionism does not envision “fundamental change to the long-existing regulatory perspectives,” and so distracts attention from other approaches to financial regulation. Campbell-Verduyn & Lenglet, supra note 13, at 473. As tech ethicist Elizabeth Renieris has put it, “Our imaginations and resources are once again diverted from fixing or rehabilitating what exists”20Elizabeth M. Renieris, Amid the Hype Over Web3, Informed Skepticism Is Critical, CIGI (Jan. 14, 2022), https://www.cigionline.org/articles/amid-the-hype-over-web3-informed-skepticism-is-critical [https://perma.cc/N94L-C99F].: when the technological solution is pitched as so exceptional, the slow plodding changes of structural reform seem less worthy by comparison.21“The use of technology to transform the lives of these individuals has particular allure when all other policy prescriptions have seemingly failed,” Christopher K. Odinet, Predatory Fintech and the Politics of Banking, 106 Iowa L. Rev. 1739, 1746 (2021); techno-solutionism “promises an affordable, if not cheap, silver bullet in a world with limited resources for tackling many pressing problems,” Selinger, supra note 1. This dynamic is sometimes evident, for example, in policy debates about climate change, where the promise of new technologies has sometimes undercut support for policies to reduce emissions.22Sætra, supra note 13, at 2.

While techno-solutionist solutions will rarely benefit society writ large, fighting techno-solutionism is an uphill battle. Not only is techno-solutionism highly profitable for Silicon Valley and not only does the law help entrench techno-solutionism (as the next Sections will explore), but our brains are also hardwired toward techno-solutionism to some extent. Humans have long sought easy solutions to complex problems,23Scholars have been engaging critically with different kinds of “solutionism” since at least the 1950s. Sætra & Selinger, supra note 1, at 7. “It feels good to believe that in a complicated world, tough challenges can be met easily and straightforwardly.” Selinger, supra note 1. and we are also susceptible to what are known as “automation biases”: tendencies to defer to technologically-generated outputs as more correct and legitimate than human judgments.24For a discussion of automation bias, see Linda J. Skitka, Kathleen Mosier & Mark D. Burdick, Accountability and Automation Bias, 52 Int. J. Hum.-Comput. Stud. 701, 701–05 (2000). If we perceive the output of technology to be inherently accurate and superior to anything a human could produce, we will be dissuaded from asking whether technology offers a true solution to the problem at hand.25“[T]echnological solutionism reinforces optimism about innovation—particularly the technocratic idea that engineering approaches problems to problem-solving are more effective than alternatives that have social and political dimensions.” Selinger, supra note 1.

Even critics of new technologies can fall into the trap of techno-solutionism. By critiquing the hype spun by the technology’s developers rather than critiquing the technology’s reality and limitations, they can unintentionally validate and amplify that hype in the process.26For a discussion of the phenomenon of “criti-hype,” see Lee Vinsel, You’re Doing It Wrong: Notes on Criticism and Technology Hype, Medium (Feb. 1, 2021), https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5 [https://perma.cc/4XW3-YY4W]. Critics can also entrench techno-solutionism by demanding that these developers fix the technology’s problems with more of their own technology, rather than demanding regulatory or other non-technological solutions.27For a discussion of this issue in the context of children’s online safety, see María P. Angel & danah boyd, Techno-Legal Solutionism: Regulating Children’s Online Safety in the United States, 2024 CS&Law 86, 91, https://dl.acm.org/doi/10.1145/3614407.3643705 [https://perma.cc/G8VU-K64N] (“Policymakers not only argue that social media platforms are the site of the problem; they also frame technology as the site of the fix. As KOSA’s Section 3 makes evident, their rationale appears to go as follows: if design features are the problem, requiring good design can make the harms go away.”).

Take, for example, new developments in AI. There will likely be a variety of harms associated with these developments—for example, some kinds of jobs may be eliminated, and the proliferation of phishing scams, misinformation, and discrimination are all likely to increase.28On AI discrimination, see generally Ziad Obermeyer, Brian Powers, Christine Vogeli & Sendhil Mullainathan, Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations, 366 Sci. 447 (2019). However, many leading figures in the AI industry (including OpenAI founder Sam Altman) have claimed potential harms on a much greater scale, co-signing a statement that reads, “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”29Kevin Roose, A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn, N.Y. Times (May 30, 2023) https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html [https://perma.cc/M6F3-LLZ9]. This invocation of AI-doomerism may be self-serving, however, if it is intended to distract lawmakers and regulators from AI’s near-term harms and to encourage them to put their faith in private sector technological solutions for heading off more cataclysmic potential harms.30As OpenAI CEO Sam Altman said in a Senate Committee hearing, “I think if this technology goes wrong, it can go quite wrong . . . We want to work with the government to prevent that from happening.” Id. It is critical, as the debate about regulating AI (and other technologies) progresses, that critics engage with technology’s present realities and not just its hype—even if that hype is apocalyptic in nature.31Selinger, supra note 1.

B.  Techno-Solutionism and Venture Capital

Techno-solutionism does not just flatten complex problems; it often flattens the concept of technology itself. If we believe that the only solution we need lies in the components of a machine or lines of software code, we miss the “relationship[] between them and people.”32Norman Balabanian, On the Presumed Neutrality of Technology, IEEE Tech. & Soc’y Mag., Winter 2006, at 15, 16. When conceptions of technology are stripped of the human agency involved in developing and using the technology, that gives technology an undeserved veneer of neutrality. It also leads to naïve assumptions that the same technology will have the same results regardless of the time and place in which it is deployed.33Morozov, supra note 8, at 260; Campbell-Verduyn & Lenglet, supra note 13, at 474; see also Meg Leta Jones, Does Technology Drive Law? The Dilemma of Technological Exceptionalism in Cyberlaw, 2018 U. Ill. J.L. Tech. & Pol’y 249, 251 (2018) (“[A] great deal of variation and messiness is found when looking at the same technology in different times and places.”). Such purported neutrality and universality are common talking points: we regularly hear statements like, “Technology is technology. It isn’t criminal. It has no motive. It’s not looking to make more money. It just balances accounts,”34Serj Korj (@SerjKorj), X (Mar. 11, 2023, 11:48 AM), https://twitter.com/SerjKorj/status/1634642595237208067 [https://perma.cc/RLY2-6RXZ] (quoting former U.S. Acting Comptroller of the Currency, Brian Brooks). and “technology is universalist. Technology doesn’t care about your ethnicity, race, religion, national origin, gender, sexuality, political views, height, weight, hair or lack thereof.”35Andreessen, supra note 4. But the reality is that technology is never neutral; it cannot exist or function separate and apart from the human beings who create and deploy it.36“Scholarship in science and technology studies has shown that new technologies do not have predetermined, neutral trajectories, but rather evolve in ways that reflect the particular, situated values and priorities of both their developers and their users.” Cohen, supra note 17, at 3; see also Paul Ohm & Jonathan Frankle, Desirable Inefficiency, 70 Fla. L. Rev. 777, 800 (2018).

Because the development of technology is not a neutral process, it is important to consider the incentives of those who develop and sell it. When technologies are developed by for-profit businesses, those businesses have strong incentives to develop those technologies in the way that will most benefit them financially (even if doing so could inflict harm on society).37For a discussion of misconduct by tech “unicorns” like Theranos, Uber, and Juul that detrimentally impacted non-investor third parties, see Matthew Wansley, Taming Unicorns, 97 Ind. L.J. 1203, 1215–24 (2022). Regarding the political and economic power that may be bound up in a technology, see Jones, supra note 33, at 257. See also Hartzog, supra note 17, at 8 (“[D]angerous, disruptive systems are being released on the world by for-profit companies with scant regard to the potential larger societal effects produced by these systems.”). Some have gone further to argue that the technological solutions produced by Silicon Valley are designed to thwart real solutions to structural problems: “After all, how could those occupying powerful positions in the tech industry—having directly benefited from the racist, sexist, and classist status quo—ever develop tools that would undo those very sources of power?” Byrum & Benjamin, supra note 16. Financial incentives will also impact how startup founders and their tech employees describe their technologies to others, including the venture capital (“VC”) firms they approach for funding.38“[C]omputer scientists and engineers are critical participants in propagating ideas about the nature, purposes, and social significance of their work.” Silvia Semenzin, ‘Blockchain for Good’: Exploring the Notion of Social Good Inside the Blockchain Scene, Big Data & Soc’y, July-December 2023, at 1, 2. VCs display significant herd behavior in choosing which “hot” technologies to fund,39Peter Lee, Enhancing the Innovative Capacity of Venture Capital, 24 Yale J.L. & Tech. 611, 616 (2022). with the result that founders trying to attract capital are likely to start by asking “how can we use [currently favored technology] to solve X?,” rather than “how can we best solve X?”40Molly White, Blockchain Solutionism (Lecture Transcript), Molly White (Sept. 21, 2022), https://blog.mollywhite.net/blockchain-solutionism-lecture [https://perma.cc/W2NG-2CGF].

Compensation for the VCs themselves will depend on the dollar amounts invested in their funds, and on the profits their funds generate by deploying those dollars to fund and then sell startups.41“The [limited partners] compensate the VCs in two ways: an annual management fee of 2% of the fund’s assets and ‘carried interest’ equal to 20% of the fund’s profits.” Matthew T. Wansley & Samuel N. Weinstein, Venture Predation, 48 J. Corp. L. 813, 832 (2023). In order to maximize their own compensation, VCs must therefore find (and develop a reputation for finding) startups that will grow exponentially in the five or six years before they must be sold in order to return profits to the fund’s investors.42Lee, supra note 39, at 668–69. Although venture capital (“VC”) funds typically have a term of ten or twelve years, “[v]etting and selling startups takes time, so VCs only have about five to six years between investment and exit for their startups to grow in value.” Wansley & Weinstein, supra note 41, at 832. For more on the pressures VC faces to exit investments, see Elizabeth Pollman, Startup Governance, 168 U. Penn. L. Rev. 155, 209–16 (2019). Venture capital is not a passive investment strategy: as Wansley and Weinstein put it, “[t]he most successful VCs . . . do not just try to find home runs—they try to build home runs.”43Wansley & Weinstein, supra note 41, at 833. VCs’ compensation therefore tends to depend on their ability to engineer exponential growth for their ventures—through managerial advice, certainly,44Elizabeth Pollman, Adventure Capital, 96 S. Cal. L. Rev. 1341, 1354 (2024). but also by manufacturing hype for industries,45See, e.g., Daren Matsuoka, Eddy Lazzarin, Robert Hackett & Stephanie Zinn, 2023 State of Crypto Report: Introducing the State of Crypto Index, a16zcrypto (Apr. 11, 2023), https://a16zcrypto.com/posts/article/state-of-crypto-report-2023 [https://perma.cc/CZ6E-C2UW]. For further discussion of Andreessen Horowitz’s efforts to hype the crypto industry, see Hilary J. Allen, Interest Rates, Venture Capital, and Financial Stability, U. Ill. L. Rev. (forthcoming 2025) (manuscript at 23–28), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4513037. lobbying,46See, e.g., Eric Lipton, Daisuke Wakabayashi & Ephrat Livni, Big Hires, Big Money and a D.C. Blitz: A Bold Plan to Dominate Crypto, N.Y. Times (Oct. 29, 2021) https://www.nytimes.com/2021/10/29/us/politics/andreessen-horowitz-lobbying-cryptocurrency.html [https://web.archive.org/web/20221226052114/https://www.nytimes.com/2021/10/29/us/politics/andreessen-horowitz-lobbying-cryptocurrency.html]. and engaging in predatory pricing.47Wansley & Weinstein, supra note 41, at 817.

In short, the technological solutions that receive VC funding will not necessarily be the best solutions. Often, society would benefit from more nuanced solutions that would involve non-technological elements and take a lot longer to develop than VCs and their investors would tolerate.48Mariana Mazzucato, The Entrepreneurial State: Debunking Public vs. Private Sector Myths 12 (2011). Furthermore, the VC industry is notoriously white and male, and notoriously funds founders with whom VCs have social connections49Lee, supra note 39, at 650–51.: this limits the perspectives brought to bear on how technology should solve problems, often excluding the possibility of public sector solutions as well as the voices of those who actually experience the problem in question.50Techno-solutionism can “shape our societies in ways unrooted in democratic processes and democratic will.” Sætra, supra note 13, at 6–7. Semenzin discusses “the prevailing cultural values of Silicon Valley, portraying society as classless and devoid of socioeconomic struggles, advocating the idea that technological markets, rather than government intervention, act as the catalyst for improving people’s lives.” Semenzin, supra note 38, at 12. Notwithstanding persistent claims that technological innovation exists to “make the world a better place,”51“Technological innovation in a market system is inherently philanthropic, by a 50:1 ratio.” Andreessen, supra note 4. Silicon Valley historian Margaret O’Mara has observed that “[t]he Valley’s engineering-dominated culture rewarded singular, near-maniacal focus on building great products and growing markets, and as a consequence often paid little attention to the rest of the world.”52O’Mara, supra note 2, at 7. And yet, a techno-solutionist perspective tends to assume that the solutions emerging from Silicon Valley, even if uninformed by domain expertise, are the superior ones.53“The techno-capital machine makes natural selection work for us in the realm of ideas. The best and most productive ideas win and are combined and generate even better ideas.” Andreessen, supra note 4.

This disregard for history and outside perspectives can lead to a disregard for non-technological dimensions of problems, as well as a disregard for technology’s harms. In the absence of any legal requirements to minimize those harms, there is no reason to think that they will be addressed by technologists or their VC funders.54Prominent AI ethicist Dr. Timnit Gebru, for example, has said, “Our recommendations basically say that before you put anything out, you have to understand what’s in your data set and document it thoroughly . . . . But at the end of the day this means taking more time, spending more resources and making less money. Who’s going to do that without legislation?” Emily Bobrow, Timnit Gebru Is Calling Attention to the Pitfalls of AI, Wall St. J. (Feb. 24, 2023) https://www.wsj.com/articles/timnit-gebru-is-calling-attention-to-the-pitfalls-of-ai-8e658a58 [https://web.archive.org/web/20230329183721/https://www.wsj.com/articles/timnit-gebru-is-calling-attention-to-the-pitfalls-of-ai-8e658a58?cx_testId=3&cx_testVariant=cx_170&cx_artPos=7&mod=WTRN]. And yet a techno-solutionist perspective tends to assume that subsequent technological interventions will inevitably fix any problems technology creates, without the need for any government interference.55Jodi L. Short, Reuel Schiller, Susan S. Silbey, Noah Jones, Babak Hemmatian & Leeanna Bowman-Carpio, The Dog That Didn’t Bark: Looking for Techno-Libertarian Ideology in a Decade of Public Discourse About Big Tech Regulation, 19 Ohio St. Tech. L.J. 1, 10 (2022); Andreessen, supra note 4. Indeed, techno-solutionism is often weaponized to discourage government oversight, as the next Section will explore.

C.  Techno-Solutionism and the Law

Technological advances may challenge laws but they do not in and of themselves drive changes in the law—people do.56Jones, supra note 33, at 253. The ways in which people like legislators, regulators, and judges respond to technological advances change how law is applied and developed, and the phenomenon of techno-solutionism can drive law if it impacts these individuals and their responses. Laws and legal institutions that are influenced by techno-solutionism can also nurture and entrench techno-solutionism in a vicious cycle. While a comprehensive discussion of the relationship between techno-solutionism and the law is beyond the scope of this Article, this Section will provide an overview of some of the ways in which the law helps perpetuate the very techno-solutionism that can ultimately co-opt and stymie the law’s harm protection functions.

  1. How Law Perpetuates Techno-Solutionism

The starting point here is to recognize that no technology business is built in a vacuum. Any business is built in an environment constructed by laws, and the laws themselves have been impacted by currents of economic and political power.57Cohen, supra note 17, at 1. Laws and legal institutions engage with technology-based business models from the beginning,58“Not only does law not linearly follow technology, a great deal of legal work shapes technology and the way in which it will be understood in the future.” Jones, supra note 33, at 278; see also Hilary J. Allen, Regulatory Sandboxes, 87 Geo. Wash. L. Rev. 579, 587–88 (2019). and those laws and legal institutions have been “enlisted to help produce the profound economic and sociotechnical transformations that we see all around us.”59Cohen, supra note 17, at 2. If citizens concerned about public harms cede the legal sphere to businesses with vested interests in structures that insulate them from the consequences of perpetrating harms, then the ability of the law to protect the public from harm will be further eroded.60Id. at 9. This is a pervasive political economy problem, but it will be exacerbated by techno-solutionism if public-minded citizens cede their ground because those who stand to profit also have intimidating technological bona fides.

The influence of techno-solutionism can shape laws in ways that can maximize industry profitability at the expense of the public interest. We often hear that technologies can “solve all of our most pressing problems—if only the law, which cannot move at the speed of human thought, will stop undermining technology’s potential and either get with the program or get out of the way.”61Id. at 1. As Jodi Short and her colleagues have observed, “no industry has been more zealous in crafting and championing a regulatory ideology than the tech sector,”62Short et al., supra note 55, at 4. but this regulatory ideology is not a purely private sector creation. Lawmakers and the law have helped perpetuate it.

Many lawmakers helped perpetuate this kind of regulatory ideology in the early years of the internet; for example, Anupam Chander describes Congress, courts, and the Presidential Administration all eagerly checking one another “when they proved less than friendly to Internet innovation.”63Anupam Chander, How Law Made Silicon Valley, 63 Emory L.J. 639, 649 (2014). In many ways, this trend continues today, with lawmakers often responding to technological innovations (if they respond at all) with “half-measures” that are designed to allow the underlying technology to flourish without fully addressing the attendant harms.64Hartzog Testimony, supra note 17, at 1. Support for such half-measures stems in part from understandings of technological innovation as so exceptional that the law should not interfere in the same way it would in other spheres—but technological exceptionalism is ultimately in the eye of the beholder. As Meg Jones puts it, “[n]ew technologies’ distinctions from legacy technologies are as political as they are technical. Novelty is constructed and as construction is performed, the method and politics of this interpretation should not be overlooked.”65Jones, supra note 33, at 256. When lawmakers craft bespoke legal and regulatory regimes for technological solutions, they are communicating their view that those technological solutions are indeed exceptional—superior to other types of solutions that receive no such special legal treatment.

An important point to note here is that law can have a messaging or expressive valence: it “creates a public set of meanings and shared understandings between the state and the public. It clarifies, and draws attention to, the behavior it prohibits. Law’s expressed meaning serves mutually reinforcing purposes. Law educates the public about what is socially harmful.”66Danielle Keats Citron, Law’s Expressive Value in Combating Cyber Gender Harassment, 108 Mich. L. Rev. 373, 407 (2009). While the expressive function of the law is most often discussed in terms of what it prohibits, permissive laws may also change public attitudes about what should not be considered socially harmful—and change behavior accordingly.67“[R]egulators may help generate norms around which market practices may coalesce.” Onnig H. Dombalagian, The Expressive Synergies of the Volcker Rule, 54 B.C. L. Rev. 469, 500 (2013). The literature on expressive laws focuses on the law’s ability to standardize norms,68Id. at 493. and the law can perform a particularly potent standardizing function at a time when a technologically-enabled practice is new and the public is looking for guidance as to what to think about that practice.69Citron, supra note 66, at 410. As a result, laws and rules that emphasize the benefits of a technology and related business models and deprioritize their harms can have a normative consequence in addition to their direct impact, lending legitimacy and encouraging adoption. Once public adoption has been encouraged, it will be all the harder for lawmakers to take protective steps that have the practical impact of limiting public access to, or increasing the cost of, a technology-based business model.70See Arthur E. Wilmarth Jr., Citigroup: A Case Study in Managerial and Regulatory Failures, 47 Ind. L. Rev. 69, 73–74 (2014).

Regulators are often the lawmakers who are on the frontlines of dealing with new technologies.71The judiciary is also often on the front lines, but that is beyond the scope of this Article. While some regulators proactively seek to address problems or harms associated with new technologies, others propose new regulatory structures or dispense waivers that effectively get law out of the way—or simply accommodate the new technologies through their inaction.72Chander describes this dynamic in a more positive fashion, noting that Silicon Valley’s success can be attributed in part to “U.S. authorities (but not those in other technologically advanced states) act[ing] with deliberation to encourage new Internet enterprises by both reducing the legal risks they faced and largely refraining from regulating the new risks they introduced.” Chander, supra note 63, at 645. In a way, these latter approaches are institutionalized versions of Jonathan Zittrain’s procrastination principle: “a propensity to ‘set it and forget it’ without attempting to predict and avert every imaginable problem,” on the assumption that technological advances will be able to fix any problems that do ultimately arise.73Jonathan Zittrain, Fixing the Internet, 362 Sci. Mag. 871, 871 (2018). On the presumed ability of technology to fix its own problems, see Short et al., supra note 55, at 10. When regulators take these accommodative approaches, though, they reinforce the perception that law cannot keep up with technological progress (sometimes referred to as the “pacing problem”),74Jones, supra note 33, at 256. and therefore should yield to technological solutions.

Once something does go wrong and Congress and the public demand a response, regulators will find that their own delays with regard to regulating new technologies have made it harder for them to take action. For example, if technological fixes are needed (for example, to “hardwire principles and values . . . such that violating them is impossible or nearly impossible”),75Raúl Carillo, Seeing Through Money: Democracy, Data Governance, and the Digital Dollar, 57 Ga. L. Rev. 1207, 1238 (2023). regulators will already have forfeited their opportunity to impact the design process. If technological changes are insufficient and regulatory interventions need to take the form of stronger regulation (for example, a preapproval regime),76In a discussion of social media regulation, danah boyd criticizes as overly simplistic the rationale that “if design features are the problem, requiring good design can make the harms go away.” Angel & boyd, supra note 27, at 91. Regarding preapproval regimes in the financial regulatory context, see generally Saule T. Omarova, License to Deal: Mandatory Approval of Complex Financial Products, 90 Wash. U. L. Rev. 63 (2012). implementation also becomes far more challenging once an ecosystem of vested interests has evolved that is resistant to any change. In short, accommodative regulatory approaches can entrench the mistaken notion that regulators have no option other than to wait and see—that the tech genie cannot be put back in the bottle—which can then thwart subsequent regulatory efforts.

Laws can also put a techno-solutionist thumb on the scale in allocating responsibilities among private parties.77Cohen, supra note 17, at 90. In an article titled How Law Made Silicon Valley, Chander argues that:

Silicon Valley’s success in the Internet era has been due to key substantive reforms to American copyright and tort law that dramatically reduced the risks faced by Silicon Valley’s new breed of global traders. Specifically, legal innovations in the 1990s that reduced liability concerns for Internet intermediaries, coupled with low privacy protections, created a legal ecosystem that proved fertile for the new enterprises of what came to be known as Web 2.0.78Chander, supra note 63.

More recently, technology-based businesses have also proactively wielded trade secrecy laws to avoid public scrutiny.79Carillo, supra note 75, at 1230. The result has already been “a constellation of powerful de jure and de facto legal immunities that insulate their architects and operators from accountability for a wide and growing variety of harms.”80Cohen, supra note 17, at 10. Certainly, such a faciliatory approach has helped technological innovation flourish, but context matters (notwithstanding that techno-solutionism encourages us to ignore that context). If the attendant harms of technological innovation are seemingly minor, then an accommodative or faciliatory approach may make sense; such an approach is less justifiable when the associated harms are significant. But by insulating technology’s harms from legal scrutiny, such legal structures shift public attention away from the harms, entrenching techno-solutionist perspectives that focus only on technology’s positives.

Public actions have also perpetuated techno-solutionism by helping to fund Silicon Valley. While the mythology of Silicon Valley tells of innovation born of self-made visionaries, governmental bodies have in fact created significant subsidies for the VC industry, which (together with the liability shields and intellectual property protections already discussed) have allowed Silicon Valley and its techno-solutionism to flourish.81On the mythology and reality of Silicon Valley, see O’Mara, supra note 2, at 5–7. As Peter Lee points out, “[t]he federal government played a critical role in catalyzing the VC industry by funding technologies that attracted private investment.”82Lee, supra note 39, at 627. State legislatures also created the type of business entity known as the limited partnership, allowing limited liability protection for investors while still preserving favorable capital gains taxation associated with traditional unlimited liability partnerships—the VC industry has embraced this type of business entity, and its industry associations have aggressively lobbied over the years to lower capital gains taxation rates.83Id. at 629. The VC industry has also benefitted from other types of favorable tax treatment, outright subsidies, and pension fund regulation that permits such funds to invest in VC84Id. at 629–31. (institutional investment was a particular boon to the VC industry during the prolonged period of low interest rates that ran from the Global Financial Crisis until 2022—interest rate setting can also function as a type of VC subsidy).85Richard Waters, Venture Capital’s Silent Crash: When the Tech Boom Met Reality, Fin. Times (July 31, 2022) https://www.ft.com/content/6395df7e-1bab-4ea1-a7ea-afaa71354fa0 [https://perma.cc/3SFE-TAEW]. See generally Allen, supra note 45.

To be clear, providing incentives and subsidies for private sector innovation will often be good public policy. If public authorities remain mindful of potential harms and deploy incentives and subsidies as part of a portfolio strategy that also considers where direct public investment might be more effective, such an approach is likely to broadly benefit society. Unfortunately, the political landscape in the United States has evolved in such a way that the deck is often stacked against pursuing public sector solutions: Mazzucato attributes this in part to “the emergence of ‘new public management’ theory, which grew out of ‘public choice’ theory in the 1980s,” and “led civil servants to believe that they should take up as little space as possible, fearing that government failures may be even worse than market failures.”86Mazzucato, supra note 48, at xxiii. How to encourage public innovation is an important topic, but it is beyond the scope of this Article. What is relevant to this Article is that the flip side of timidity with regard to public innovation can manifest as credulousness with respect to private sector technological solutions and undeserved acceptance of their harms. While such credulousness is often unwarranted—particularly when the problem that needs solving would never truly be attempted by the private sector because solving it will take too long and primarily generate public goods that venture capitalists cannot profit from87Id. at 12.—the law has helped build this credulousness with its subsidies and waivers for private sector technological innovation.

  1. How Law Can Be Stymied by Techno-Solutionism

Law can therefore help perpetuate techno-solutionism—and then find its harm protection functions stymied by it. We regularly hear that existing law is becoming outdated, that the legislative process is too slow to keep up with the pace of technological change, and that the administrative state is becoming obsolete as regulators of specific industries (for example, banks) can no longer comprehend how those industries carry out their functions in a technologically advanced world. These are sometimes real concerns, but they are sometimes overstated and weaponized by those who would rather not have the existing rules applied to them—even when those rules continue to be fit for purpose. As Julie Cohen puts it, the relationship between technology and law is often framed as “what happens when an irresistible force meets an immovable object.”88Cohen, supra note 17, at 1. If lawmakers accept this framing, they will internalize the position that innovation and legal protections are in tension89Id. at 91. and might undermine legal protections so as to not be the immovable object which impedes technological development. The previous Section helped explain how the law can bolster the narrative that technology is an irresistible force; this Section will give an overview of cognitive capture, regulatory arbitrage, and regulatory entrepreneurship—three interrelated dynamics that techno-solutionists can weaponize to undermine existing applicable laws.

There is a classic techno-solutionist narrative that the industry often deploys when confronted with regulation: “[L]auding tech’s benefits, suggesting that government regulation will kill innovation, and advocating for technology-enabled self-regulation instead.”90Short et al., supra note 55, at 18. This kind of narrative suggests that real and present harms should be disregarded in the face of (often unsubstantiated) excitement about potential benefits.91“[E]xploring a technology’s potential should go beyond its upsides, since there are both existing risks and drawbacks as well as future ones if the sector continues to grow.” Tonantzin Carmona, Debunking the Narratives About Crypto and Financial Inclusion, Brookings (Oct. 26, 2022), https://www.brookings.edu/research/debunking-the-narratives-about-cryptocurrency-and-financial-inclusion [https://perma.cc/5W2Y-9AQK]. Repetition of this narrative can help generate “cognitive capture” that discourages regulators from standing in the way of technological innovation.92“Powerful information-economy actors have worked to craft narratives that make unaccountability for certain types of information harms seem logical, inevitable, and right.” Cohen, supra note 17, at 89. The concept of “cognitive capture” is often distinguished from the more venal forms of regulatory capture prevalent in public choice literature; in both instances, regulators come to prioritize the interests of industry over the public, but cognitive capture arises not because of bribes or other hopes of aggrandizement, but because regulators genuinely come to see the world the way industry does.93Willem H. Buiter, Central Banks and Financial Crises, in Federal Reserve Bank of Kansas City Symposium 495, 601–02 (2008). If that happens, then public and industry interest may appear synonymous to regulators.

Movements to portray government as ineffective have already helped convince many regulators that they have limited capacity to restrain harms, and that they should be afraid of impeding important progress by the private sector.94Jodi L. Short, Regulatory Managerialism as Gaslighting Government, 86 L. & Contemp. Probs. 1, 5 (2023) (“Civil servants have internalized attacks on them in ways that are at best demoralizing and at worst debilitating.”). When it comes to technology, regulators are aware that their actions can impact how technology develops, and they may come to feel that actions which could deprive the public of a particular technological innovation are a public disservice (even if there are harms associated with that technological innovation, and even as the general public evinces growing concerns about the power of Big Tech).95“The utopian narratives that big tech companies (and their lobbyists) tell about themselves do not seem to have captured the public’s imagination.” Short et al., supra note 55, at 5. Technology philosopher Evan Selinger has described how “[s]olutionism is a crucial component of how Big Tech sells its visions of innovation to the public and investors,”96Selinger, supra note 1. but solutionism is also a crucial component of how technological innovation is “sold” to regulators.

Cognitive capture is built in part through relationships,97James Kwak, Cultural Capital and the Financial Crisis, in Preventing Regulatory Capture: Special Interest Influence and How to Limit It 71, 80 (Daniel Carpenter & David A. Moss eds., 2014). and the subsidies and regulatory waivers discussed in the previous Section have helped VC firms to prosper sufficiently to ensure their access to regulators, enabling them to reinforce the techno-solutionist tendencies that benefit them. Cognitive capture can be particularly insidious when regulators are dependent on industry for information about how a technology works, because then regulators’ understanding will have been filtered through and permeated by industry’s perspectives on its creations.98“[I]nputs [from powerful actors] function as information subsidies, supplying policymakers who have limited resources of their own with ready access to a trove of facts, anecdotes, theories, and narrative frameworks from which to draw.” Cohen, supra note 17, at 104. There is also a status aspect to cognitive capture, where “[r]egulators are more likely to adopt positions advanced by people whom they perceive to be of higher status in social, economic, intellectual, or other terms.”99Kwak, supra note 97, at 80. With Silicon Valley’s successes has come an “an almost mythic reputation for meritocracy, innovation, and long-term value creation,” the “political valence” of which can sometimes be hard for regulators to resist.100Lee, supra note 39, at 620.

Such status concerns can be particularly pernicious if they result in regulators (particularly regulators of industries that were not traditionally technologized) undervaluing their own expertise—notwithstanding that their domain knowledge typically far exceeds that of the technologists developing solutions for that domain.101See supra notes 50–53 and accompanying text. In an “Emperor’s New Clothes” type scenario, regulators may feel too intimidated to ask preliminary questions about whether their industry’s problems can, in fact, be solved with the technological tools at hand (or indeed, by technological tools at all). Or regulators might be discouraged from asking questions about the domain-specific harms that technology could inflict. As Jones puts it, “[s]ometimes, a technology is so innovative, we are told that it cannot be proactively regulated, for how are policymakers to understand its technical complexities or know its potential.”102Jones, supra note 33, at 250. If regulators buy into this techno-solutionism, they are likely to adopt a posture of accommodative inaction: viewing even technological solutions that are at best band-aids as plausible solutions that they don’t want to stifle—even if those solutions pose significant social harms.

This environment of techno-solutionist cognitive capture is a highly fertile one in which to deploy strategies of regulatory arbitrage and entrepreneurship. “Regulatory arbitrage” describes industry strategies for exploiting gaps and differences in legal treatment—perhaps by performing activities that are prohibited in one jurisdiction in a more friendly jurisdiction, or by achieving the same outcome as a regulated activity but doing so in a way that was not clearly contemplated by existing regulatory regimes.103For a discussion of regulatory arbitrage, see Elizabeth Pollman, Tech, Regulatory Arbitrage, and Limits, 20. Eur. Bus. Org. L. Rev. 567, 571 (2019). Techno-solutionist narratives can facilitate arbitrage in the latter context, by suggesting that the technology is so novel and so free that it simply cannot be regulated in the same way as existing modes of performing the relevant activities.104Short et al., supra note 55, at 8. If regulators wish to respond to such regulatory arbitrage with new regulations, technological exceptionalism may tempt them to create rules that are very specifically tied to the technology in question—but when regulation is made too specific to a particular technology, it can be very easy for industry to evade that regulation by making small technological tweaks.

Businesses built on regulatory arbitrage may seek to “harden” that arbitrage into a durable legal permission through strategies of regulatory entrepreneurship. As used by legal scholars Elizabeth Pollman and Jordan Barry, the term “regulatory entrepreneurship” is most notably associated with the ride-hailing platform Uber, and refers to a growth strategy utilized particularly by VC-funded enterprises that involves “pursuing a line of business in which changing the law is a significant part of the business plan” even when it can “lead to negative consequences when companies’ interests diverge from the public interest.”105Pollman & Barry, supra note 3, at 383–84. Pollman and Barry have identified

three creative techniques that modern regulatory entrepreneurs have adopted in various combinations: They break the law and take advantage of legal gray areas, real or imagined, asking forgiveness instead of permission. They seek to grow ‘too big to ban’ before regulators can act, sometimes referred to as ‘guerilla growth.’ Perhaps most dramatic, they mobilize their users and stakeholders as a political force.106Id. at 390.

In other words, regulatory entrepreneurs engage in regulatory arbitrage or outright non-compliance until their businesses have become so large and established that they can paint legal changes permanently authorizing their activities as an inevitable necessity—notwithstanding that the business’s public harms will go unchecked as a result.

While the strategy of regulatory entrepreneurship is not exclusive to technology-based businesses,107For example, one could characterize Citigroup’s 1998 acquisition of Traveler’s Insurance—in an (ultimately successful) attempt to end Glass-Steagall’s prohibitions on certain kinds of financial institution affiliations—as regulatory entrepreneurship. For background on this event, see Wilmarth Jr., supra note 70, at 73–74. it is most commonly associated with VC-funded startups.108Pollman & Barry, supra note 3, at 424. Part of the explanation for this lies in the asymmetric incentive structures of VC funders, who face little legal liability for encouraging their portfolio companies to break the law but stand to capture a significant part of any upside from regulatory entrepreneurship strategies.109Allen, supra note 45, at 26. But it is also true that regulatory entrepreneurship is enabled by techno-solutionist narratives that make it particularly difficult for lawmakers and regulators to proactively rein in tech-related legal breaches. Regulatory entrepreneurship capitalizes on the pacing problem, seeking to grow “too big to ban” before the law catches up. But it is not inevitable that the law will fall hopelessly behind technological development. Ultimately, refusing to apply the law to a technology until after it is fully developed and entrenched—and then crafting accommodative laws that treat the extant incarnation of technology-based business models as inevitable—is a choice. That choice, which can stymie the harm-reduction functions of law, is often encouraged by cognitive capture, donations, and lobbying, all of which are part of the regulatory entrepreneurship playbook.110As Pollman and Barry observe,

The regulatory entrepreneur may push social policy away from the optimal outcome. The most direct way this can happen is when the regulatory entrepreneur’s business is built on reversing an efficient regulatory regime. When regulatory entrepreneurs change the law through quiet lobbying, without popular support, their behavior is consistent with a story of regulatory capture or rent-seeking and can produce all of the same negative consequences.

Pollman & Barry, supra note 3, at 443.

II.  Fintech and Techno-Solutionism

The previous Part spoke about techno-solutionism generally; the rest of this Article will focus more specifically on techno-solutionism as it relates to fintech. Because “finance is at the heart of the economy; is social and political; and is composed of non-stationary relationships that exhibit secular change,”111John C. Coates IV, Cost-Benefit Analysis of Financial Regulation: Case Studies and Implications, 124 Yale L.J. 882, 1003 (2015). it should be obvious (but sadly often is not) that solutions that neglect the social and political dimensions of financial problems will be inadequate. When technology is presented as the whole solution to a financial problem, then the best-case scenario will be that it fails to live up to its promises. Worst-case scenarios will arise if the shiny promises of the technology distract us from interrogating the downsides of business models that use that technology or distract us from addressing the root causes of the problem that is purportedly being solved.

In order to critique fintech’s techno-solutionism, we need a framework for thinking about what might need “solving” in finance in the first place. In many ways, the list of potential improvements to financial services and the financial system is infinite, but it is conceptually helpful to start by identifying what finance is supposed to do—at a high level—in order to consider how it could do it better. In the book Principles of Financial Regulation, John Armour and his colleagues identify the following as the key socially beneficial functions of the financial system: facilitating payments; mobilizing capital; selecting projects and monitoring their performance; and managing risk.112John Armour, Dan Awrey, Paul Davies, Luca Enriques, Jeffrey N. Gordon, Colin Mayer & Jennifer Payne, Principles of Financial Regulation 22–23 (2016). These can be collapsed further into three broad categories of functions: transaction processing, capital intermediation, and risk management.113Hilary J. Allen, Driverless Finance: Fintech’s Impact on Financial Stability 14 (2022). If the financial system is not performing these functions inclusively, efficiently, competitively, or securely, there may be a problem that needs to be fixed.

Of course, going back to first principles, we sometimes rely on the private sector financial industry to perform functions that it is ill-equipped to perform; public sector alternatives will often be needed to ensure reasonably-priced and widely-available transaction processing, capital intermediation, and risk management services.114As Adam Levitin notes,

The problem is that the market, left to its own devices, will not produce the desired policy outcome of fair and widely available services absent some form of subsidization. To the extent there is a failure here, then, it is a failure of government to intervene when the market fails to produce the desired policy outcome.

Adam J. Levitin, The Financial Inclusion Trilemma, 41 Yale J. on Regul. 109, 113 (2024). For proposals, see id. at 158–63; Mehrsa Baradaran, Banking on Democracy, 98 Wash. U. L. Rev. 353, 358–59 (2020).
Still, these three goals reflect general understandings of what the private sector financial system is supposed to achieve, and fintech technologies and business models are typically marketed as improving the delivery of these goals. Transaction processing (particularly payments processing) lends itself most obviously to technological improvement, but fintech entrepreneurs have also sought to improve capital intermediation (for example, with fintech lending and algorithmic trading business models) and risk management (for example, with AI-driven robo-advisory services).115Allen, supra note 113, at 83–86 (regarding fintech lending), 86–89 (regarding algorithmic trading), 66–69 (regarding robo-advisory services).

These disparate services all count as fintech. “Fintech” is not really a unified term, and it can be used to describe an assortment of different kinds of firms, technologies, and business models.116Id. at 8. This Article will focus less on fintech as firms and more on the underlying fintech technologies and business models that rely on them. Morozov focused his critique of techno-solutionism on “the Internet,”117Morosov, supra note 8, at 14. but when it comes to fintech, techno-solutionism also extends to other digital technologies like cloud computing, AI, blockchain, and APIs.118Allen, supra note 113, at 11. These technologies are diverse in many ways, but because they are accessed through the Internet, they can all reach significant scale.119Capacity for scaling is not unlimited, though, see infra note 210 and accompanying text. They also tend to rely on Big Data and often share the capacity for automation.120Yesha Yadav, Fintech and International Financial Regulation, 53 Vand. J. Transnat’l L. 1109, 1112 (2020).

Notably, fintech technologies and business models are not the exclusive province of new fintech firms, but have found their way into traditional financial institutions as well.121Chris Brummer & Yesha Yadav, Fintech and the Innovation Trilemma, 107 Geo. L.J. 235, 277 (2019). There are many different drivers of the adoption of these technologies and business models, but it is likely that some of the adoption is being driven by supply-side incentives to profit from the “next new thing,”122Dan Awrey, Complexity, Innovation, and the Regulation of Modern Financial Markets, 2 Harv. Bus. L. Rev. 235, 263–67 (2012). and it is also possible that some adoption is being driven by FOMO (“fear of missing out” on new tech trends).123Ina Bansal, Are Banks Facing FinTech ‘FOMO’?, LinkedIn (Mar. 18, 2016), https://www.linkedin.com/pulse/banks-facing-fin-tech-fomo-ina-bansal [https://perma.cc/429Q-JG5W]. The more commonly articulated narratives around fintech adoption, though, are desires to improve financial inclusion, efficiency, competition, and security.124See infra Sections II.A, B, C, and D. Regarding inclusion specifically, see Baradaran, supra note 114, at 356 (“The language of fintech as financial inclusion is so widespread that one could be forgiven for assuming that increasing access to credit were the sole aim of these companies.”). This Part will evaluate these narratives with a skeptical eye and conclude that while fintech may sometimes form part of the solutions we need, technology cannot provide the entire solution.

A.  Financial Inclusion

As noted above, the financial system provides critical payments and other transaction processing services. Everyday people benefit from these services, and they also benefit from the mobilization of capital: both as savers and investors who profit from returns, and as recipients of credit. Building wealth and diversifying investments can also help people manage the financial risks they may face in their lives. People who are excluded from traditional financial services can be charged significant premiums for transacting, locked out of full participation in the economy, and denied opportunities to manage their financial risks and build wealth.125Levitin, supra note 114, at 117–18, 120–21. Improving access (which is often referred to as “financial inclusion”) is therefore viewed as a critically important social goal.126Id. at 119. See also Baradaran, supra note 114, at 364–82, 399, which advocates for pushing back against the current conceptualization of financial inclusion. However, improving financial inclusion requires an understanding of the reasons why people are currently excluded, and the consequences of that exclusion. These are textured and context-specific, and once we start looking at the relevant context, it soon becomes clear that technology alone cannot solve financial inclusion problems. Unfortunately, though, fintech’s hype can undermine support for the kinds of public-driven solutions (including “hard service mandates, public provision, or taxpayer subsidies”) that could actually improve financial inclusion.127Levitin, supra note 114, at 114, 145.

Whether adults have a bank account or not is often used as a proxy for gauging the level of financial inclusion in a particular country. Research by the World Bank indicates that account ownership often varies by age, by level of education, and by gender (among other things), suggesting that there are structural explanations for financial exclusion.128Asli Demirgüç-Kunt, Leora Klapper, Dorothe Singer & Saniya Ansar, The Global Findex Database 2021: Financial Inclusion, Digital Payments, and Resilience in the Age of COVID-19, World Bank Grp. (2022), https://www.worldbank.org/en/publication/globalfindex/Report [https://perma.cc/8NHC-T3EX]. These structural explanations will vary significantly from place to place,129Jones, supra note 33, at 251. and so visions of universally applicable solutions to global financial inclusion will inevitably prove overly simplistic. This Article will focus more narrowly on fintech’s aspirations to improve financial inclusion within the United States (although we should not ignore the rest of the world: Silicon Valley-funded firms often try out their new tech solutions on populations in developing countries who lack the regulatory protections available in the United States).130For more background, see Olivier Jutel, Blockchain Financialization, Neo-Colonialism, and Binance, 6 Frontiers in Blockchain 2023, at 03 (July 27, 2023); Eileen Guo & Adi Renaldi, Deception, Exploited Workers, and Cash Handouts: How Worldcoin Recruited Its First Half a Million Test Users, MIT Tech. Rev. (Apr. 6, 2022), https://www.technologyreview.com/2022/04/06/1048981/worldcoin-cryptocurrency-biometrics-web3 [https://perma.cc/9JCW-NMQN]; Peter Howson, The Crypto Colonists, in Let Them Eat Crypto: The Blockchain Scam That’s Ruining the World (2023).

There is a striking racial dimension to financial inclusion problems in America.131For examples of scholarly work articulating the persistent structural discrimination that has driven disparate financial situations along racial lines, see Jones & Maynard, Jr., supra note 9; Darrick Hamilton & William Darity, Jr., The Political Economy of Education, Financial Literacy, and the Racial Wealth Gap, 99 Fed. Rsrv. Bank St. Louis Rev. 59, 60 (2017). See generally Mehrsa Baradaran, Jim Crow Credit, 9 U.C. Irvine L. Rev. 887 (2019). A 2021 survey found that while 4.5% of U.S. households overall were “unbanked” (in the sense that “no one in the household had a checking or savings account at a bank or credit union”),132Federal Deposit Insurance Corporation, 2021 FDIC National Survey of Unbanked and Underbanked Households Executive Summary 1 (2022) [hereinafter FDIC Survey], https://www.fdic.gov/analysis/household-survey/2021execsum.pdf [https://perma.cc/57Y3-NMTB]. “[d]ifferences in unbanked rates between Black and White households and between Hispanic and White households in 2021 were present at every income level.”133Id. at 2. As Adam Levitin puts it, “[n]early one in nine Black households and one in eleven Hispanic households lacks a bank account, and nearly one in four Black and Hispanic households are underbanked” (meaning they have bank accounts but still rely on alternative providers like check cashers or payday lenders).134Levitin, supra note 114, at 111. Many who are unbanked or underbanked identify the primary reason as either insufficient wealth to meet minimum balance requirements or lack of trust in banks.135FDIC Survey, supra note 132, at 2.

Fintech services are regularly depicted as a solution to both this lack of trust and underserved populations’ need for reasonably priced financial services: claims to “democratize finance” and “[b]ank the [u]nbanked” abound.136See, e.g., Circle, Serving the Unbanked with USDC, https://www.circle.com/en/stories/serving-the-unbanked-with-usdc [https://perma.cc/BTR2-XTC5] (“How USDC Can Help Bank the Unbanked”); Robinhood, About Us, https://robinhood.com/us/en/about-us [https://perma.cc/6NKK-9NQ9] (“We’re on a mission to democratize finance for all”). “A commonly held belief in the world of finance is that what stands between the current landscape of financial exclusion to full financial inclusion is the right technology or innovation.” Baradaran, supra note 114, at 356. Ultimately, though, technology is not a response to the lack of wealth and trust that creates racial disparities in financial inclusion in the United States. Black Americans in particular tend to distrust traditional financial institutions, often with good historical reason.137Jones & Maynard, Jr., supra note 9, at 822–24. Instead of doing the hard work of repairing that relationship, a techno-solutionist approach to financial inclusion allows new entrants to exploit that lack of distrust, often with even more exploitative results.138See supra notes 156–59 and accompanying text.

While traditional financial institutions have a very mixed track record with regard to underserved populations,139For a discussion of this history, see Mehrsa Baradaran, How the Other Half Banks: Exclusion, Exploitation, and the Threat to Democracy 138–62 (2015). they are at least subject to regulations designed to protect consumers and investors. Fintech business models, however, are often designed to skirt these regulations, often leaving their users (once again) with second-best, more exploitative financial services. Fintech proponents may hope that it will help “close the racial wealth gap,” but the reality is often a markedly less rosy form of predatory inclusion (similar to prior innovations like payday loans and subprime mortgages).140Predatory inclusion “refers to marginalized communities gaining access to goods, services, or opportunities that they were historically excluded from—but this access comes with conditions that undermine its long-term benefits and may reproduce insecurity for these same communities.” Carmona, supra note 91.

Christopher K. Odinet, for example, argues that while some fintech credit providers claim that their online interfaces and machine learning-based credit scoring procedures differentiate them from predatory payday lending models, they often charge rates of interest that are similar to those charged by payday lenders.141Odinet, supra note 21, at 1761–63. In a similar vein, Nakita Cuttino has examined the earned-wage access fintech business model,142These are “internet- and mobile-based platforms that have emerged in recent years to serve as safer alternatives to much-maligned payday loans . . . by facilitating transfers of earned-but-unpaid wages to workers in advance of their standard periodic paydays.” Nakita Q. Cuttino, The Rise of “FringeTech”: Regulatory Risks in Earned-Wage Access, 115 Nw. U. L. Rev. 1505, 1507–08 (2021). which has been described by one proponent as a “revolutionary employee benefit program that offers employees almost instant access to their pay.”143Is Earned Wage Access the Way of the Future? 5 Tips for Employers Seeking to Attract and Retain Talent Through On-Demand Pay, Fisher Phillips (Mar. 30, 2022), https://www.fisherphillips.com/news-insights/earned-wage-access-tips-for-employers-seeking-to-attract-retain-talent.html [https://perma.cc/2T25-JA4Y]. She finds that while this business model does offer some improvements over the prevailing payday lending model, it still has “varying effects that sometimes perpetuate, and in some instances exacerbate, the very risks providers claim to eliminate when displacing short-term creditors like payday lenders.”144Cuttino, supra note 142, at 1516–17.

Notwithstanding their deficiencies, there is consumer demand for these kinds of products, and so the problems associated with fintech lending and earned wage access products should be addressed by robust consumer protection regulation. Fintech lending models have, however, been constructed to avoid certain consumer protections like usury limits and state licensing requirements by engaging in “rent-a-bank” partnerships with banks;145Odinet, supra note 21, at 1776, 1779. earned-wage access programs also currently escape most meaningful consumer protection regulation.146Cuttino, supra note 142, at 1568–69. Odinet notes that the mystique of technology has been strategically weaponized to avoid regulation, observing that “the politics of tech . . . is giving political cover to predatory fintech lenders and clouding what should otherwise be a clear headed and aggressive approach by financial regulators in stamping out these harmful practices.”147Odinet, supra note 21, at 1745.

These fintech lending business models have been billed as “unlock[ing] more credit opportunities” for those who otherwise have bad credit scores or thin credit files,148Jones & Maynard, Jr., supra note 9, at 837–38; see also Carillo, supra note 75, at 1211, 1213. but unfortunately, the kinds of machine learning models used to process non-traditional data sources have often been shown to perpetuate discrimination and bias. Machine learning algorithms are guided by patterns and correlations evident in the data they have been exposed to,149Alicia Solow-Niederman, Information Privacy and the Inference Economy, 117 Nw. U. L. Rev. 1, 5–6 (2022). and so credit scoring algorithms that learn from biased data will perpetuate those biases in their credit-scoring decisions.150Jones & Maynard, Jr., supra note 9, at 837–40; Baradaran, supra note 114, at 371. This biased algorithmic decision-making can be particularly insidious, though, because it is often hidden: “[m]arkers for protected class membership can be inferred with relative ease and near-impunity from other, seemingly neutral data.”151Cohen, supra note 17, at 179. Once again, it is very techno-solutionist to assume that technology alone could winnow out centuries of entrenched biases, but automation biases and narratives of technological neutrality can lend undeserved credibility to such assumptions, impacting access to credit.

The bigger picture problem, of course, is the demand for credit: many Americans are so strapped for cash that they cannot survive from month-to-month without interim payments or loans.152“[F]or many households, borrowing is the only way to survive.” Odinet, supra note 21, at 1800; see also Baradaran, supra note 114, at 398–99. A 2023 survey by the Board of Governors of the Federal Reserve found that

[w]hen faced with a hypothetical expense of $400, 63 percent of all adults in 2023 said they would have covered it exclusively using cash, savings, or a credit card paid off at the next statement (referred to, altogether, as “cash or its equivalent”). The remainder said they would have paid by borrowing or selling something or said they would not have been able to cover the expense.

Bd. Governors Fed. Rsrv. Sys., Economic Well-Being of U.S. Households in 2023, at 31–32 (2024), https://www.federalreserve.gov/publications/files/2023-report-economic-well-being-us-households-202405.pdf [https://perma.cc/38AW-BTS8].
The predatory fintech loans and earned wage access products discussed here can obfuscate and draw attention away from the need to address this deeper, underlying structural problem.153“[T]he increased ability to borrow money, cast as a mechanism of positive social change, may function in some ways as a Trojan horse, wheeling in the unique dangers of indebtedness to the front gates of marginalized communities and threatening their already tenuous socioeconomic existence.” Abbye Atkinson, Borrowing Equality, 120 Colum. L. Rev. 1403, 1405–06 (2020). In their work on fintech, Lindsay Sain Jones and Goldburn Maynard explore one part of this underlying problem—the racial wealth gap. They consider a variety of fintech business models (including “e-trading, robo-advising, alternative credit platforms, neobanks, and decentralized payments”)154Jones & Maynard, Jr., supra note 9, at 808. and demonstrate that many of fintech’s claims about building wealth for traditionally excluded groups do not bear out, and in fact often disguise predatory practices that disproportionately harm vulnerable members of society.155Id.

Consumers may struggle to detect predatory practices because of fintech’s technological complexity: financial literacy is already extremely challenging for most people,156See Lauren E. Willis, Against Financial-Literacy Education, 94 Iowa L. Rev. 197, 201–02, 205 (2008). and fintech often overlays a requirement to be technologically literate too, which puts an even more unrealistic burden on users.157“Computer scientists often adopt a worldview where anyone can become a hacker and access the power of computer networks through coding knowledge gained from a DIY perspective. This perspective often downplays social inequalities related to Internet access and technological knowledge.” Semenzin, supra note 38, at 7. Baradaran has noted that the rhetoric of financial literacy “pathologize[s] the poor—and assume[s] that their poverty was created by individual choices—or treat[s] their state of poverty or financial exclusion as a trait inherent in the excluded borrower.”158Baradaran, supra note 114, at 381. As Darrick Hamilton has observed, if the poor internalize this critique, it fuels their desire not to look foolish for missing out on financial opportunities presented to them, which can make them more vulnerable to predatory practices.159Darrick Hamilton describes the problem as follows:

The characterization of Black people and their position in the United States is often one of ‘they are fools,’ ‘they make bad choices,’ . . . The narrative in America is that you should seize opportunity, make something of yourself, so if you have limited pathways towards traditional ways of wealth building and access to finance, you are particularly vulnerable to not wanting to be left behind.

Americans for Financial Reform, A Conversation with Ben McKenzie hosted by Americans for Financial Reform, YouTube (Sept. 25, 2023), https://www.youtube.com/watch?v=U8d_jws-KfA (starting at 16:50) [https://perma.cc/BXV4-MUP3]. See generally Hamilton & Darity Jr., supra note 131.
If debunking a too-good-to-be-true financial opportunity requires not just financial knowledge, but also understanding how a new technology works, it is not surprising that vulnerable people are sucked in.

This dynamic is particularly evident in the context of the crypto industry. Often described by its critics as “a solution in search of a problem,”160See, e.g., Arvind Narayanan & Sayash Kapoor, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference 235 (2024) (“it has gradually become clear that crypto is a solution looking for a problem”); Adam Lashinsky, Crypto Is a Solution in Search of a Problem, Wash. Post (May 20, 2022), https://www.washingtonpost.com/opinions/2022/05/20/crypto-bitcoin-dogecoin-ethereum-crashing [https://perma.cc/88J4-DJ7U]; White, supra note 40. crypto represents in many ways the apotheosis of fintech and techno-solutionism. Promises have been made that crypto’s underlying blockchain technology can democratize financial services by making them cheaper, more efficient, and more secure—but none of these promises withstand scrutiny. Ultimately, a blockchain is simply a type of database to which entries can only be added, not removed, and which is controlled by multiple nodes instead of relying on centralized intermediaries.161Primavera De Filippi & Aaron Wright, Blockchain and the Law: The Rule of Code 2 (2018). While this technology might be interesting from an academic perspective, according to more than 1,500 independent computer scientists, software engineers, and other technologists who signed a letter to U.S. Congressional leaders in 2022, “[b]y its very design, blockchain technology is poorly suited for just about every purpose currently touted as a present or potential source of public benefit.”162Letter in Support of Responsible Fintech Policy (June 1, 2022), https://concerned.tech [https://perma.cc/467C-ULJK].

It is not the blockchain itself that is offered as an investment opportunity, but the crypto tokens (like Bitcoin) whose ownership is recorded on blockchains. The crypto industry regularly invokes claims of financial inclusion, focusing in particular on reported high uptake of crypto tokens in Black communities in the United States.163See, e.g., Coinbase Presents: Black Americans & Crypto, Coinbase, https://www.coinbase.com/learn/community/black-americans-and-crypto [https://perma.cc/9959-NNC9]. But most of these crypto tokens are not backed by any real-world productive capacity, and are Ponzi-like in their need for significant amounts of new demand and liquidity to support their value.164Allen, supra note 45, at 21–23. A Ponzi scheme exists where “early investors are paid returns from funds provided by new investors, as opposed to being paid from actual returns of a purported investment.” Catherine Carey & John K. Webb, Ponzi Schemes and the Roles of Trust Creation and Maintenance, 24 J. Fin. Crime 589, 589 (2017). Not all Ponzi processes are coordinated manipulative schemes, however: Shiller notes the existence of Ponzi processes where asset prices rise as a result of purchases made by those who have heard positive stories from those who will benefit from further price increases. Robert J. Shiller, Irrational Exuberance 93–94 (Rev. & Expanded 3d ed. 2015). Data analysis by economists at the Bank for International Settlements in 2023 concluded that “a majority of investors have probably lost money on their bitcoin investment,” and that large holders (commonly referred to as “whales”) likely profited at their expense.165Giulio Cornelli, Sebastian Doerr, Jon Frost & Leonardo Gambacorta, Bank for Int’l Settlements Bulletin No. 69: Crypto Shocks and Retail Losses 3–4 (Hyun Song Shin ed., 2023). Some data does indicate that members of Black communities are disproportionately likely to own crypto,166Ariel Invs. & Charles Schwab, 2022 Black Investor Survey: Report of Findings 7 (2022), https://content.schwab.com/web/retail/public/about-schwab/Ariel-Schwab_Black_Investor_Survey_2022_findings.pdf [https://perma.cc/E72H-35HT]. but this will be predatory inclusion if “whales” are using Black communities to provide the liquidity they need to cash out. There is some indication that this is, in fact, the case. Results from a Pew survey conducted in 2023 suggested that Black, Hispanic, and lower-income investors were disproportionately likely to have entered the crypto markets in March 2022 or later, after the market peak at the end of 2021.167Describing the survey’s results, Pew researchers found that

[i]n 2023, Black users (27%) were more likely than White users (12%) to say they first invested in, traded or used cryptocurrency within the previous year. Roughly two-in-ten Hispanic users (21%) said the same. (There were not enough Asian American cryptocurrency users to look at their responses separately.) . . . About three-in-ten users from lower-income households reported first investing in cryptocurrency within the past year, compared with about one-in-ten adults from middle- or upper-income households.

Michelle Faverio, Wyatt Dawson & Olivia Sidoti, Majority of Americans Aren’t Confident in the Safety and Reliability of Cryptocurrency, Pew Rsch. Ctr. (Apr. 10, 2023), https://www.pewresearch.org/short-reads/2023/04/10/majority-of-americans-arent-confident-in-the-safety-and-reliability-of-cryptocurrency [https://perma.cc/SQM3-5TWR].

When assets have no fundamentals and trade entirely on sentiment, traditional checks on fraud (like independent valuations and audits) break down, leaving crypto investors particularly vulnerable to fraudsters.168Regarding the ease with which crypto valuations can be manipulated, see Matt Levine, FTX’s Balance Sheet Was Bad, Bloomberg (Nov. 14, 2022, 10:09 AM), https://www.bloomberg.com/opinion/articles/2022-11-14/ftx-s-balance-sheet-was-bad [https://perma.cc/658Y-3TDB]. Financial disclosures from crypto issuers can reflect these manipulated values and often take the form of “attestations” or “proof of reserves” that have not undergone the scrutiny of an audit. Jonathan Weil, Binance Is Trying to Calm Investors, but Its Finances Remain a Mystery, Wall St. J. (Dec. 10, 2022), https://www.wsj.com/articles/binance-is-trying-to-calm-investors-but-its-finances-remain-a-mystery-11670679351 [https://perma.cc/H544-MH2T]. Crypto is also highly attractive to scammers and hackers because transactions on a blockchain cannot be undone (at least, not without taking drastic steps).169“Undoing a transaction requires either a change in the ledger’s underlying software, or what is known as a “hard fork,” where the ledger is split in two with one version of the ledger not recognizing the problematic transaction.” Allen, supra note 113, at 100. Unsurprisingly, the crypto markets are rife with fraud, hackings, and scams—and crypto users are expected to be able to protect themselves from these.170For a running tally of crypto hacks, scams, and frauds impacting consumers, see Molly White’s website, Web3 is Going Just Great, https://web3isgoinggreat.com [https://perma.cc/S62J-98G2]. As discussed previously, however, self-protection in these circumstances requires unrealistically high levels of both technological and financial literacy.171Jutel, supra note 130, at 07; see also supra notes 156–59 and accompanying text. Even in the absence of frauds, scams, and hackings, blockchain technology struggles to scale,172See infra note 210 and accompanying text. with the result that transactions processed on a blockchain can be subject to unexpected delays and high fluctuating fees at peak times (in addition to the fees users incur converting their crypto into and out of fiat currency on crypto exchanges).173For a discussion of fees, see Levitin, supra note 114, at 144. It is also important to note that these crypto exchanges typically require users to have a bank account in order to open an exchange account, meaning that unbanked customer will not be able to use an exchange to acquire crypto or to cash out of it in order to transact in the real economy.174Baradaran, supra note 114, at 384–85. Bitcoin ATMs, which tend to cluster in the same locations as payday lenders and check cashers, do provide a bank-free alternative for obtaining Bitcoin, but these usually charge extremely high fees, and while they “will accept cash to buy crypto . . . most aren’t equipped to sell crypto and dispense cash.” Dan Mika, High-Fee Crypto ATMs Center Around Low-Income Parts of Kansas City, Kan. City Beacon (Aug. 15, 2023), https://thebeaconnews.org/stories/2023/08/15/high-fee-crypto-atms-center-around-low-income-parts-of-kansas-city/#:~:text=Engagement%20Data%20Economics-,High%2Dfee%20crypto%20ATMs%20center%20around%20low%2Dincome%20parts%20of,targeting%20residents%20with%20extraordinary%20fees [https://perma.cc/PH9Z-PKDA].

This practical need for a bank account to access crypto also undermines industry claims that a type of crypto asset known as a “stablecoin” will bank the unbanked.175For an example of such industry claims, see Circle, supra note 136. Unlike most other crypto assets, stablecoins typically have some reserve of assets backing them and are therefore not as volatile as other kinds of crypto assets. Still, stablecoins remain vulnerable to runs where first movers are made whole while the remaining holders suffer losses.176Gary B. Gorton & Jeffery Y. Zhang, Taming Wildcat Stablecoins, 90 U. Chi. L. Rev. 909, 936–39 (2023). Indeed, some stablecoins have collapsed in recent years, causing their users to lose everything.177Leo Schwartz & Abubakar Idris, From Argentina to Nigeria, People Saw Terra as More Stable Than Local Currency. They Lost Everything, Rest of World (May 26, 2022), https://restofworld.org/2022/argentina-nigeria-terra-crash [https://perma.cc/WXH9-Z53S]. This article references Terra, a particularly risky form of stablecoin known as an algorithmic stablecoin, but as the article observes, “Lots of people lost money they couldn’t lose . . . They don’t care if it’s an algorithmic stablecoin, a collateralized stablecoin, decentralized, or what—their attitude will be, crypto f***ed me, I lost all my money. I won’t come back.” Id. Also, Bank for International Settlements (“BIS”) research on collateralized stablecoins has found that none of them are as stable as they claim, with depegging from the USD$1 price being a reasonably regular occurrence. Anneke Kosse, Marc Glowka, Ilaria Mattei & Tara Rice, Will the Real Stablecoin Please Stand Up? 11 (Bank for Int’l Settlements Papers, No. 141, 2023). As for those that have not collapsed, the World Economic Forum has concluded that stablecoins do not provide any novel payments functionality, noting that “stablecoins as currently deployed would not provide compelling new benefits for financial inclusion beyond those offered by pre-existing options.”178World Econ. F., What Is the Value Proposition of Stablecoins for Financial Inclusion 8 (2021), https://www3.weforum.org/docs/WEF_Value_Proposition_of_Stablecoins_for_Financial_Inclusion_2021.pdf [https://perma.cc/K8AG-6XMC]. Ultimately, stablecoins have little to recommend them as a financial inclusion solution.

Despite these realities, techno-solutionist narratives about crypto’s ability to improve financial inclusion are stubbornly resilient. Brookings’s Tonantzin Carmona has broken down crypto’s financial inclusion narrative into two halves: (1) easy access to transactional services for those previously locked out of the financial system, and (2) a wealth building avenue with low barriers to entry.179Carmona, supra note 91. She thoroughly debunks both halves, demonstrating that cryptocurrencies are poorly suited to perform transactional services, and that the volatility of most crypto assets’ value makes them unsuited to wealth building.180Id. As already mentioned, most crypto exchanges require users to have a bank account to acquire any crypto asset in the first place, so crypto solves little for the unbanked.181Id. This is also true of many other non-crypto fintech products: “[E]lectronic payment systems like PayPal and Venmo allow funds to be transferred among users without requiring a bank account, but the initial loading of funds must either be from a bank account or a credit card or a payment from another user.” Levitin, supra note 114, at 117. Crypto loans typically require overcollateralization before they are extended, so those without wealth (in the form of collateral) will not be able to receive loans.182Sirio Aramonte, Wenqian Huang & Andreas Schrimpf, DeFi Risks and the Decentralisation Illusion, BIS Q. Rev., Dec. 2021, at 21, 27. Rejecting techno-solutionism, Carmona admonishes policymakers to “first clarify the problems they are trying to solve, and more importantly, why they are trying to solve them.”183Carmona, supra note 91.

Unbanked and underbanked individuals in the U.S. would benefit enormously from access to simple, quick, low-cost transactional services.184“[C]ommunities do not need better blockchain design or mobile apps—they need simple access to a checking account and a debit card.” Baradaran, supra note 114, at 410. We already have the technology needed to provide these, though, and it seems to be more a lack of political will that prevents such transactional services from being provided more widely.185Aaron Klein identifies a simple amendment to existing law that would significantly help the underbanked:

The single most impactful thing the federal government could do is to give people access to their own money immediately. This can be done by simply amending the Expedited Funds Availability Act to require immediate access for the first several thousand dollars of a deposit, instead of permitting the lengthy, costly delays that harm people living paycheck to paycheck.

Aaron Klein, Opening Statement of Aaron Klein at Roundtable on America’s Unbanked and Underbanked, Brookings (Dec. 15, 2021), https://www.brookings.edu/opinions/opening-statement-of-aaron-klein-at-roundtable-on-americas-unbanked-and-underbanked [https://perma.cc/4AS7-WHT9]; see also Edmund Schuster, Cloud Crypto Land, 84 Mod. L. Rev. 974, 981 (2020).
Reliance on predatorily priced credit is a thornier problem186For a discussion of why access to credit is a very different problem from access to transaction processing services, see Levitin, supra note 114, at 116.—here, solving the problem of financial inclusion will ultimately require that people have some wealth to begin with, and building that wealth is a complex political and social problem that will require public sector involvement.187“Ultimately, household solvency problems can only be addressed by secular changes in the economy that will result in greater income and lower expenses for households and greater savings rates that can provide cushion against unexpected expenses.” Id. at 162–63. Mehrsa Baradaran, for example, has argued for compensatory policies designed to build home-ownership in geographical areas that have typically been marginalized.188Baradaran, supra note 131, at 946–48. Sain Jones and Maynard have called for infrastructure improvements, tax policy changes, and government wealth transfers—in addition to improvements to financial services and technology oversight.189Jones & Maynard, Jr., supra note 9, at 848–61. Darrick Hamilton and William Darity, Jr., have proposed “baby bonds,” which would allow children in need to build wealth by the time they become adults.190Darrick Hamilton & William Darity, Jr., Can ‘Baby Bonds’ Eliminate the Racial Wealth Gap in Putative Post-Racial America?, 37 Rev. Black Pol. Econ. 207, 215 (2010). While technology might play a minor role in creating the infrastructure for delivering this kind of wealth-building, it will not come close to providing the whole solution. The undeservedly shiny promise of fintech can be weaponized, though, to argue that such meaningful structural solutions are unnecessary.

B.  Efficiency

Another big claim of fintech is that it can make financial services more efficient.191Saule T. Omarova, Technology v. Technocracy: Fintech as a Regulatory Challenge, 6 J. Fin. Regul. 75, 89 (2020). It is particularly common to hear that fintech is more efficient because it eliminates the need for human customer service or brick-and-mortar bank branches.192Levitin, supra note 114, at 142. In many ways, though, this rhetoric is overblown: most fintech payment services and lenders, for example, ultimately depend on traditional bank infrastructure and therefore do not fully eliminate their costs. Still, that promise of increased efficiency remains the front of many financial inclusion claims: the hope is that transaction processing services that are quicker and cheaper can serve more people (including traditionally excluded populations) more effectively.193Odinet, supra note 21, at 1755; Levitin, supra note 114, at 141–42. We have already discussed how these financial inclusion claims are often hollow; many fintech services have, in fact, become profitable by appealing to higher income customers. Still, promises of increased efficiency are also key to how fintech is marketed to these higher income consumers.194Baradaran, supra note 114, at 371–72; Levitin, supra note 114, at 143. But solving for “efficiency” in the abstract is an impossible task. It is critical that we define the precise problem to be solved, instead of simply assuming that some version of increased efficiency will get us where we need to go.

Techno-solutionism is tied to commonly accepted notions that “more efficient” is always an improvement: efficiency has been our mantra for so long, in so many business contexts, that it has come to be perceived as an obvious and neutral goal. But there are many different ways of conceptualizing efficiency that are relevant to fintech policy195Luke Herrine, What Do You Mean by Efficiency? An Opinionated Guide, LPE Project (Oct. 11, 2023), https://lpeproject.org/blog/who-cares-about-efficiency [https://perma.cc/4XDE-6G6A].: There is the colloquial sense of efficiency as avoiding wastefulness.196Id. We must also contend with economic definitions of allocative efficiency (which often hide distributional inequities),197Graham S. Steele, The Tailors of Wall Street, 93 U. Colo. L. Rev. 993, 1035 (2022). “Efficiency, in the Kaldor-Hicksian optimal allocative efficiency sense, is insensitive to distributional inequalities, and so regulation will be acceptably ‘efficient’ as long as someone’s gains offset someone’s harms.” Hilary J. Allen, Regulatory Managerialism and Inaction: A Case Study of Bank Regulation and Climate Change, 86 L. & Contemp. Probs. 71, 77 (2023). and informational efficiency (which relates to how well prices of financial assets reflect available information).198Yesha Yadav, How Algorithmic Trading Undermines Efficiency in Capital Markets, 68 Vand. L. Rev. 1607, 1610 (2015). Or we might take a computer science approach and try to “minimize the consumption of time, energy, space, or cost in satisfying a specification of correctness for a given problem”—although Ohm and Frankle note that there are still many axes of efficiency to be traded off even within this technology-centric definition.199Ohm & Frankle, supra note 36, at 804. There has also been increased recognition within the computer science discipline that computational efficiency is not always the right parameter to maximize, with computer scientists and engineers sometimes “turn[ing] away from efficient solutions when faced with the need to inject complex human values into systems.”200Id. at 838. As the previous Section explored, one of the most challenging human values to inject into financial services is distributional equity.

And so there is no single universal definition of efficiency, but it is true that payments often take too long to clear in the United States, which is a real and persistent problem for the underbanked.201Regarding the desire for faster funds availability among the underbanked, see Levitin, supra note 114, at 121. For more affluent people, such delays are merely an annoyance; for those who live paycheck to paycheck, waiting three days for a payment to clear can result in costly defaults or the need for expensive services like check cashing and payday lending.202Klein, supra note 185. The earned-wage access fintech products discussed in the previous Section aim to make delivery of funds more rapid, but they too can prove costly.203Financial regulators in California found that tip-based earned-wage access companies succeeded in pushing customers to tip their provider 73% of the time: the average APR (representing the total cost of using the service) for these tip-based companies was 334%. Cal. Dep’t Fin. Prot. & Innovation, 2021 Earned Wage Access Data Findings (2023), https://dfpi.ca.gov/wp-content/uploads/sites/337/2023/03/2021-Earned-Wage-Access-Data-Findings-Cited-in-ISOR.pdf [https://perma.cc/55SJ-N7SN].

While slow payments processing may seem at first blush like a technology problem, technologies for faster payments processing by banks already exist, and have been widely used (particularly outside of the United States) for some time.204Real-time transaction processing is common in many other countries. For example:

India had 89.5 billion real-time transactions in 2022 and an annual growth rate of 76%. Brazil was in second place with 30 billion transactions and a 230% annual growth rate in 2022. . . . By comparison, real-time transactions in North America are expected to expand from 3.9 billion in 2022 to 13 billion by 2027.

John Adams, Can FedNow Give U.S. Processors an Edge Over Global Rivals?, Am. Banker (July 31, 2023), https://www.americanbanker.com/payments/news/can-fednow-give-u-s-processors-an-edge-over-global-rivals [https://perma.cc/J2AR-DSE4].
The fact that these kinds of technologies are not widely used in the United States is in large part a political problem, requiring political solutions. Banks, for example, could be required to use readily available technologies to clear and settle payments more speedily by amending the Expedited Funds Availability Act.205Aaron Klein recommends “amending the Expedited Funds Availability Act to require immediate access for the first several thousand dollars of a deposit, instead of permitting the lengthy, costly delays that harm people living paycheck to paycheck.” Klein, supra note 185. The Federal Reserve launched its real-time payments service, FedNow, on July 20, 2023, but uptake by banks has been slow.206Felix Salmon, FedNow Is Live with 35 Banks, Axios (July 20, 2023), https://www.axios.com/2023/07/20/federal-reserve-fednow-payment [https://web.archive.org/web/20240303021335/https://www.axios.com/2023/07/20/federal-reserve-fednow-payment]. Congress could consider mandating that banks join FedNow to ensure that these faster payment rails are available to their customers.

To be clear, these political problems can be very intractable. If fintech providers could provide an end run around these political problems by providing quick and affordable payments processing, then that would be very appealing. Unfortunately, though, fintech payments providers sometimes overclaim regarding the increased efficiencies of their technologies. For example, despite repeated crypto industry assertions of improved efficiency,207Semenzin, supra note 38, at 8. the underlying blockchain technology is inefficient by design.208Ohm & Frankle, supra note 36, at 797. Processing transactions on any decentralized permissionless ledger will always be slower and more cumbersome than available centralized alternatives, because in the absence of costly computations, it would be too easy for a bad actor to take over a technologically decentralized system.209Schuster, supra note 185, at 981. As a result, transaction processing on blockchains is slow and expensive (and the cost and timing of such processing is often unpredictable), and blockchains struggle to scale to process large volumes of transactions.210White, supra note 40.

Since inefficiency is a feature and not a bug of technologically decentralized systems, if blockchain-based businesses are able to increase efficiencies, they are likely to derive from regulatory arbitrage strategies that reduce regulatory compliance costs. Most parties involved in financial transactions are required to engage in “know-your-client” due diligence and other compliance checks to help prevent the financial system from being used for money laundering and sanctions evasion.211These obligations derive from the Bank Secrecy Act, codified at 31 U.S.C. §§ 5311–36 and 12 U.S.C. §§ 1951–60. These checks necessarily add time and expense to transaction processing—time and expense that unregulated members of the crypto industry can avoid by engaging in regulatory arbitrage212“In many ways, the current modus operandi of cryptocurrencies is similar to an old Swiss model of banking where people could set up anonymous accounts and no questions were asked.” Igor Makarov & Antoinette Schoar, Crytpocurrencies and Decentralized Finance (DeFi), Brookings Papers on Econ. Activity, Spring 2022, at 141, 175. (the crypto industry has pushed back on legislative attempts to extend anti-money laundering obligations to entities involved in processing crypto transactions, citing the decentralized nature of the crypto ecosystem and the costs of impeding innovation).213See, e.g., Chamber Digit. Com., Statement on Digital Asset Anti-Money Laundering Act (July 28, 2023), https://digitalchamber.org/statement-on-digital-asset-aml-act [https://perma.cc/K4P4-A2LM].

There are, of course, many technological alternatives to blockchains. Some fintech alternatives may indeed have the potential to improve the speed or cost of payments processing and other financial services. But focusing on these kinds of efficiency to the exclusion of all else can cause problems, too. Faster payments, for example, often enable faster fraud,214“Faster transactions are susceptible to the same social engineering techniques fraudsters have employed to target legacy systems—but with the added twist that funds intercepted via faster payments are often irrecoverable due to their speed.” FIs Look to Advanced Technologies to Protect Faster Payments, PYMNTS (Apr. 12, 2024), https://www.pymnts.com/money-mobility/2024/fis-look-to-advanced-technologies-to-protect-faster-payments [https://perma.cc/X5BH-C4SS]. and are therefore opening up new consumer protection problems that need to be addressed. As we will discuss shortly, increased efficiencies can also increase the susceptibility of the financial system to financial crises, with all the human misery those crises entail.215Allen, supra note 113, at 23–24. Concerns about efficiency-induced fragility have been percolating since highly efficient but brittle supply chains stalled and crumbled during the Covid-19 pandemic. People are now asking whether we have gone too far in maximizing supply chain efficiency, at the expense of overall resilience and robustness.216See generally Rana Foroohar, Homecoming: The Path to Prosperity in a Post-Global World (2022); Kathryn Judge, Direct: The Rise of the Middleman Economy and the Power of Going to the Source (2022). We should ask the same question of technological innovations that are promising to make finance more efficient: What are they doing to the resilience of our financial system? To put the question a little differently, are increases in efficiency delivering diminishing marginal returns that are not commensurate with the increased fragilities they create?217In the context of algorithmic trading, Adair Turner commented that

the benefits of market liquidity must, like the benefits of any market completion, be of declining marginal utility as more market liquidity is attained. The additional benefits deliverable, for instance, by the extra liquidity which derives from flash or algorithmic training, exploiting price divergences present for a fraction of a second, must be of minimal value compared to the benefits from having an equity market which is reasonably liquid on a day-by-day basis.

Adair Turner, Chairman of the Financial Services Authority, Lecture at CASS Business School: What Do Banks Do, What Should They Do and What Public Policies Are Needed to Ensure Best Results for the Real Economy? 27 (Mar. 17, 2010), https://www.bayes.city.ac.uk/__data/assets/pdf_file/0006/77136/Adair-Turner-March-2011.pdf [https://perma.cc/RR4T-764U].

For example, fintech business models designed to make capital intermediation and risk management more efficient (ranging from robo-advisors to high frequency trading) may end up making our financial system more fragile—as well as undermining other kinds of efficiency, like informational efficiency.218Yadav, supra note 198, at 1610. Take the high frequency trading business model. It is facilitated entirely by algorithms designed to trade at speeds and in volumes that humans would not be capable of.219Id.; see also Allen, supra note 113, at 86–87. Proponents of high frequency trading argue that it improves the efficiency of capital intermediation because it increases the volume of trading and by providing more opportunities to transact, increases liquidity and lowers trading costs.220Senior Supervisors Group, Algorithmic Trading Briefing Note 1 (2015), https://www.newyorkfed.org/medialibrary/media/newsevents/news/banking/2015/SSG-algorithmic-trading-2015.pdf [https://perma.cc/Z88L-LZ9C]. But that is only true in normal times. When things are obviously wrong in the market (at least, obvious to a human), the algorithm may continue to trade in a way that generates “flash crashes” of asset prices, which could spark fire sale externalities that threaten the stability of the financial system.221Id. at 1, 3. If the algorithm does recognize that something is really wrong, more often than not its preprogrammed instruction is to simply stop trading, draining liquidity from the system when it is most needed.222“[I]n periods of heightened volatility . . . passive HFT market players, ie those that provide liquidity, typically keep a low profile by deleting trading orders, thereby reducing the supply of liquidity.” High-Frequency Trading Can Amplify Financial Market Volatility, Deutsche Bundesbank (Oct. 25, 2016), https://www.bundesbank.de/Redaktion/EN/Topics/2016/2016_10_25_monthly_report_october_high_frequency_trading.html [https://perma.cc/E4RG-9MGG].

“Tokenization” of real-world assets is another efficiency-driven form of fintech that could make the financial system more vulnerable during unanticipated circumstances.223Bank for Int’l Settlements, Blueprint for the Future Monetary System: Improving the Old, Enabling the New, in BIS Ann. Econ. Rep. 2023, at 85, 85 (2023) [hereinafter BIS Blueprint], https://www.bis.org/publ/arpdf/ar2023e3.pdf [https://perma.cc/UX8E-YXG4]. For further discussion of this issue, see generally Next Generation Infrastructure: How Tokenization of Real-World Assets Will Facilitate Efficient Markets Before the Subcomm. on Digit. Assets, Fin. Tech., & Inclusion of the H. Comm. on Fin. Servs., 118th Cong. (2024) (statement of Hilary J. Allen, Professor of Law, American University Washington College of Law). These tokens are digital representations of real-world assets that can be preprogrammed such that financial transactions will self-execute without human intervention.224BIS Blueprint, supra note 223, at 85. Automating transactions can certainly increase speed and reduce costs225“The projects . . . reportedly seek to improve efficiency . . . [by] embedding features like programmability, and automaticity.” Fin. Stability Oversight Council, Annual Report 2023, at 45 (2023). (tokenization is typically associated with blockchain technologies, but programmable tokens can also be hosted on other kinds of ledgers and so avoid blockchain’s inefficiencies).226BIS Blueprint, supra note 223, at 94. However, the speed of self-execution can cause problems when the world has changed in ways that were not contemplated by the token’s programmers.227Just like legal contracts, computer programs cannot anticipate all future states of the world. For an overview of the literature on incomplete contracts, see Cathy Hwang, Collaborative Intent, 108 Va. L. Rev. 657, 665–67 (2022). During periods of systemic stress (when flexibility is critical to avoiding a crisis),228Katharina Pistor, A Legal Theory of Finance, 41 J. Compar. Econ. 315, 321 (2013). automated transactions will still execute rapidly—even if the parties would otherwise have agreed to negotiate or extend some grace to their counterparties to prevent temporary liquidity problems from metastasizing into something worse.

If we want our financial system to be more robust and resilient overall, we will sometimes need to focus on preserving or adding back inefficiencies, to allow the system to reconfigure when the unexpected happens in order to prevent failure.229J.B. Ruhl, Governing Cascade Failures in Complex Social-Ecological-Technological Systems: Framing Context, Strategies, and Challenges, 22 Vand. J. Ent. & Tech. L. 407, 422 (2020). This may require certain aspects of the financial system to have frictions (like circuit breakers), or to be slower, or to have more redundancies. Obviously, a system that is entirely inefficient would be of no use at all, so the key is to achieve the right balance of efficiency against other system attributes.230Id. We are more likely to achieve the right balance if we reject techno-solutionist exhortations for efficiency qua efficiency. Then we can start interrogating on a case-by-case basis where a type of efficiency will deliver only diminishing marginal returns and is not worth the attendant fragilities, as well as where financial regulation might help compensate for those fragilities.

C.  Competition

Where there is a perceived lack of efficiency in the provision of financial services, innovation-driven competition is often seen as the answer.231Brummer & Yadav, supra note 121, at 275. Fintech proponents often trumpet the disruption and competition fintech creates for the financial industry’s more highly-regulated institutions when it comes to providing capital intermediation (particularly credit), risk management, and transaction processing services.232Id. at 275–77. However, as with efficiency, if the competition benefits associated with fintech are a product of regulatory arbitrage rather than technological superiority, then they may not be worthwhile or desirable from a public policy perspective.

It is true that disrupting incumbents can be challenging in highly regulated industries, like finance, because regulatory compliance can serve as a barrier to entry—arguments have been made for repealing or waiving financial regulations as a result.233Allen, supra note 58, at 587–88. This Article will take up the topic of deregulation in Part III: here, it suffices to say that we’ve already seen that businesses like fintech lenders and crypto intermediaries often find their competitive advantage not by fundamentally changing how financial services are delivered, but by using the veneer of techno-solutionism to justify their regulatory arbitrage.234See supra notes 145–47, 211–13 and accompanying text. This kind of regulatory arbitrage may in some circumstances result in reduced costs for consumers (although predatory pricing exists in some fintech markets, so this is by no means guaranteed).235On the high cost of fintech loans, see Odinet, supra note 21, at 1743. However, where the law being skirted serves an important social purpose—particularly if it exists to protect the public from harm—then this kind of competition may be socially undesirable even if it lowers prices. In a recent article, Saule Omarova and Graham Steele argued that prudential banking regulation, which seeks to ensure that banks are managed in a safe and sound manner, does not in fact inhibit competition but actually restrains incumbents from abusing their existing market power.236Saule T. Omarova & Graham S. Steele, Banking and Antitrust, 133 Yale L.J. 1162, 1171 (2024). They argue that without this regulation, new firms would have to contend with even more firmly entrenched incumbent banks.237Id. They also argue that firms who skirt this regulation can develop market power in an antisocial way where gains are privatized and losses socialized.238Omarova and Steele identify a number of risks of regulatory arbitrage:

Shadow banking in general, and fintech and crypto specifically, are often motivated by a desire to arbitrage around the existing banking rules and regulations, thereby capturing the benefits of banks’ ‘specialness’ while evading the constraints of banking law. As the pre-2008 experience shows, unchecked growth of such alternative markets impairs regulators’ ability to prevent excessive accumulations of risk and leverage in the financial system. More fundamentally, permitting the rampant growth of private forms of money and money substitutes threatens the sovereign public’s ability to control the supply and flow of money and credit in the economy.

Id. at 1245.

Ultimately, whether rent-a-bank partnerships and other business models that use new technologies to arbitrage existing laws are seen as a “solution” to imperfectly competitive markets will depend on how the problem of “competition” is construed. For nearly fifty years, competition law in the United States has focused very narrowly on addressing inefficiencies arising from market power that impact the prices paid by consumers.239Id. at 1177–78. If, however, we embrace a more expansive and nuanced notion of the public harms that can result from excessive economic concentration, and appreciate that “[m]arket power also harms society as a whole by lessening economic growth and productivity and by contributing to our Gilded Age levels of inequality,”240Jonathan B. Baker, Finding Common Ground Among Antitrust Reformers, 84 Antitrust L.J. 705, 707 (2022). then it will become clear that technology cannot resolve these kinds of concerns on its own.

Technology may, in fact, be the source of some of these concerns about market power (or at least, their accelerant). For example, the power of dominant technology platforms to use algorithms to manipulate their users and their competitive environment has been a dominant concern of Lina Khan and other “neo-Brandeisian” antitrust scholars.241Id. at 706.

These scholars have proposed antitrust law reforms to the economic concentration and market power of the giant tech platforms,242Id. but the tech industry prefers its own tech solution in the form of Web3.243Chris Dixon, Read, Write, Own: Building the Next Era of the Internet xix (2024); see also Semenzin, supra note 38, at 1. “Web3” is not so much a reality as it is a marketing term for a more utopian vision of an internet where the use of blockchain technology helps wrest control and ownership away from the existing tech platforms. (By way of background, Web1 describes the read-only internet of the 1990s; Web2 is our current era in which we can read and also create content, but it is all intermediated through large platforms; and Web3 is supposed to let us “read, write, and own” the Internet.)244White, supra note 40. Although this may sound superficially appealing, there are many reasons to be cynical about this techno-solutionist vision (which many consider to be no more than a cynical crypto rebrand).245Id.

First of all, we can look at who is investing in Web3. Andreessen Horowitz, the preeminent VC firm investing in Web3 companies, also has important relationships with Web2 platform companies (like Meta) that Web3 purports to disrupt.246Ephrat Livni, Tales from Crypto: A Billionaire Meme Feud Threatens Industry Unity, N.Y. Times (Jan. 18, 2022), https://www.nytimes.com/2022/01/18/business/dealbook/web3-venture-capital-andreessen.html [https://web.archive.org/web/20220923102005/https://www.nytimes.com/2022/01/18/business/dealbook/web3-venture-capital-andreessen.html]. Meta (née Facebook) itself invested heavily in a Web3-aligned Metaverse that incorporated blockchain technology—although Meta has now largely pivoted away from the Metaverse to AI.247Selinger, supra note 1. For a discussion of the relationship between Web3, the Metaverse, and blockchain technology, see generally Thien Huynh-The, Thippa Reddy Gadekallu, Weizheng Wang, Gokul Yenduri, Pasika Ranaweera, Quoc-Viet Pham, Daniel Benevides da Costa & Madhusanka Liyanage, Blockchain for the Metaverse: A Review, 143 Future Generation Comput. Sys. 401 (2023). Obviously, none of this investment would have happened if the players involved did not see opportunities to profit in Web3—some have surmised that the real vision was for a Web3 where institutional players could use blockchain technology to make a small profit from every interaction that happens online.248“[I]n blockchain discourses, almost every human transaction is conceived in terms of value . . . and every human relationship can be conceptualized in terms of economics.” Semenzin, supra note 38, at 6.

Even if we put aside cynicism about the bona fides of Web3 proponents and take it at face value, though, it is clear that the technology alone will not solve the Internet’s economic concentration problem. Visions of Web3 rely on the same blockchain technology as crypto.249Web3 is the “internet of the metaverse,” and blockchain is considered a critical technology for that metaverse. Huynh-The et al., supra note 247, at 409. Blockchain technology is designed to ensure that no one single node in the system has centralized control over which transactions are added to the blockchain;250De Filippi & Wright, supra note 161, at 2. the tokens and other protocols built on blockchains like Ethereum are designed to decentralize control by distributing ownership among token holders and automating transactions so that no humans are required to execute those transactions. As already discussed, many inefficiencies are incurred in order to achieve this kind of technological decentralization,251See supra notes 207–210 and accompanying text. but even after all that, technological decentralization does not guarantee economic decentralization.252See generally Aramonte et al., supra note 182. A system can have lots of nodes, but if someone controls a lot of those nodes, then they can control the system.

Aspirations notwithstanding, economic power in crypto is often highly concentrated and can be exploited in many ways. When projects are built on blockchains, for example, they often take the form of nominally “decentralized autonomous organizations,” in which participants are given governance tokens that allow them to vote on the direction of the project, which are then preprogrammed using software called a smart contract. However, as economists Makarov and Schoar have documented, “in the majority of crypto projects, developers and early investors choose to keep control of the platform by allocating significant stakes to themselves. In addition, even if developers do not have a large stake, in many cases they managed to maintain de facto significant control over the platform.”253Makarov & Schoar, supra note 212, at 184; see also Tom Barbereau, Reilly Smethurst, Orestis Papageorgiou, Johannes Sedlmeir & Gilbert Fridgen, Decentralised Finance’s Timocratic Governance: The Distribution and Exercise of Tokenised Voting Rights, Tech. Soc’y, May 2023, at 1, 11 (“[M]inority rule is the probable consequence of tradable voting rights . . . and no applicable anti-monopoly or anti-concentration laws.”).

When it comes to the process of validating transactions on the blockchains themselves, again, there are strong economic incentives that have resulted in the concentration of validation power in the hands of just a few groups.254“[T]here are strong implicit incentives for validators to pool their capacity and coinsure their risk of winning a block reward.” Makarov & Schoar, supra note 212, at 147. There is evidence that some concentrated groups of validators process transactions in the order that reflects the wishes of the highest bidder and potentially harms the interests of those whose transactions are processed later (a practice known as maximal (formerly miner) extractable value (“MEV”)).255“[A]s a pending transaction sits in a mempool, miners and validators have found ways to profit from them by including, excluding or reordering transactions in a block. This strategy involves maximal (formerly miner) extractable value, or MEV.” Ekin Genç, What is MEV, aka Maximal Extractable Value?, CoinDesk (Sept. 2, 2022, 7:00 PM), https://www.coindesk.com/learn/what-is-mev-aka-maximal-extractable-value [https://web.archive.org/web/20250112130542/https://www.coindesk.com/learn/what-is-mev-aka-maximal-extractable-value]. We therefore need a solution other than blockchain if we wish to ensure that powerful technology platforms do not inhibit inclusive economic growth. That solution will likely be found in antitrust law, not in technology.

D.  Security

The concentration of validation power in the hands of just a few groups will also create security vulnerabilities for blockchains. In 2022, cybersecurity researchers found that just four pools of Bitcoin validators working in concert could subvert the Bitcoin blockchain.256Evan Sultanik, Alexander Remie, Felipe Manzano, Trent Brunson, Sam Moelius, Eric Kilmer, Mike Myers, Talley Amir & Sonya Schriner, Trail of Bits, Are Blockchains Decentralized?: Unintended Centralities in Distributed Ledgers 4 (2022), https://apps.dtic.mil/sti/pdfs/AD1172417.pdf [https://perma.cc/7ZED-3CZW]. There are also security vulnerabilities associated with the fact that no person or entity is designated accountable for ensuring that a blockchain’s software is maintained and kept secure from cyberattacks.257Angela Walch, The Bitcoin Blockchain as Financial Market Infrastructure: A Consideration of Operational Risk, 18 N.Y.U. J. Legis. & Pub. Pol’y 837, 870 (2015). In 2024, for example, the Department of Justice indicted two MIT graduate brothers for attacking the protocols of the Ethereum blockchain and stealing approximately $25 million of Ethereum cryptocurrency in 12 seconds.258Press Release, U.S. Dep’t of Just. Off. of Pub. Affs., Two Brothers Arrested for Attacking Ethereum Blockchain and Stealing $25M in Cryptocurrency (May 15, 2024), https://www.justice.gov/opa/pr/two-brothers-arrested-attacking-ethereum-blockchain-and-stealing-25m-cryptocurrency [https://perma.cc/6YWX-EZ9S]. It is not realistic to think all of a blockchain’s users will protect and maintain the blockchain’s software by way of a collective effort,259“Everyone involved in a blockchain ecosystem benefits from the existence of a rock-solid protocol and high-quality software, but everyone is also better off free riding on someone else’s work to develop them.” James Grimmelmann & A. Jason Windawi, Blockchains as Infrastructure and Semicommons, 64 Wm. & Mary L. Rev. 1097, 1120 (2023). and so blockchain security tends to depend on informal groups of core software developers with no legal responsibilities.260Walch, supra note 257, at 870. This is in stark contrast with regulated financial infrastructure providers like the Depositary Trust & Clearing Corporation, who must comply with the internationally accepted Principles for Financial Market Infrastructure. These Principles require, among other things, that financial infrastructure providers have a clear legal basis and governance structure, and policy and procedures around the management of risks (including security risks).261See generally Comm. on Payment & Settlement Sys., Bank for Int’l Settlements, & Tech. Comm. of the Int’l Org. of Sec. Comm’ns, Principles for Financial Market Infrastructures (2012), https://www.bis.org/cpmi/publ/d101a.pdf [https://perma.cc/F9MG-VRRF]. No such requirements are currently applied to blockchains.

Blockchains are not the only new fintech infrastructure that has generated new security vulnerabilities. Consider, for example, the push for open banking, which has been described as “the sharing and leveraging of customer-permissioned data by banks with third party developers and firms to build applications and services, such as those that provide real-time payments, greater financial transparency options for account holders, and marketing and cross-selling opportunities.”262Basel Comm. on Banking Supervision, Bank for Int’l Settlements, Report on Open Banking and Application Programming Interfaces 19 (2019), https://www.bis.org/bcbs/publ/d486.pdf [https://perma.cc/8K9Y-VSSN]. Application programming interfaces (“APIs”) are computer programs that allow different technology systems to speak directly to one another, and they form the backbone of many open banking initiatives.263Dan Awrey & Joshua Macey, The Promise & Perils of Open Finance, 40 Yale J. on Regul. 1, 3–4 (2023). However, API development is often outsourced to third-party software developers,264Id. at 42. and there can be quality control issues with regard to the maintenance and security of API software: it has been documented in the healthcare context, for example, that APIs are often the “weakest link” in cybersecurity protections.265Steve Alder, 100% of Tested mHealth Apps Vulnerable to API Attacks, HIPAA J. (Feb. 16, 2021), https://www.hipaajournal.com/100-of-tested-mhealth-apps-vulnerable-to-api-attacks [https://web.archive.org/web/20240629000000*/https://www.hipaajournal.com/100-of-tested-mhealth-apps-vulnerable-to-api-attacks/].

Even when APIs work well, their efficiencies may cause new security vulnerabilities, in the vein of the efficiency-induced fragilities discussed in Section II.B. One use case for APIs is to increase the speed of payments processing by making it easier for different systems to share payments data.266Basel Comm. on Banking Supervision, supra note 262, at 16. However, APIs are not just more efficient at passing desired data between systems—they may potentially be very efficient at passing along problems as well. It is underappreciated that APIs may work as channels that transmit operational problems from one institution to another.267Hilary J. Allen, Reinventing Operational Risk Regulation for a World of Climate Change, Cyberattacks, and Tech Glitches, 49 J. Corp. L. 727, 759 (2024). If, by linking all the players in a financial system, we improve efficiencies in normal times but increase the chance that the players will all fail together if something goes wrong, then that will undermine financial stability. The same could be said of a financial system where just a few cloud computing providers efficiently store critical data for all of the world’s financial institutions.268Id. at 757–58; U.S. Dep’t of Treasury, The Financial Services Sector’s Adoption of Cloud Services 57 (2023), https://home.treasury.gov/system/files/136/Treasury-Cloud-Report.pdf [https://perma.cc/6VMQ-XD2Q].

The broader idea behind open banking is to use APIs to make it easier for bank customers to share their data with, and thus obtain services from, other fintech providers. While pitched as a solution to some of the barriers to competition discussed in Section II.C, the rise of open banking implicates important questions about information security that we need to grapple with. Most obviously, using insecure APIs to transmit data creates opportunities for data breaches, fraud, and identity theft (fintech lending business models that assemble extensive non-traditional data profiles to address the creditworthiness of their users will also be attractive targets for such practices).269The information economy has given rise to a “seemingly continuous stream of major data breaches and epidemic levels of fraud and identity theft” where “vulnerability is a given, and eventual loss seems only a matter of time.” Cohen, supra note 17, at 101. But the sharing of data contemplated by open banking will also generate more subtle threats to our informational security, in the form of increased surveillance by an increased number of parties who can then use that data to manipulate us and others like us.

Raul Carillo has noted that fintech firms, like other technology companies, “reconstitute people into ‘data doubles,’ which can then be sorted, stored, scored, shared, and sold.”270Carillo, supra note 75, at 1210. The increased sophistication of machine learning technology is only making this kind of data more valuable.271Solow-Niederman, supra note 149, at 6. Data about consumers’ payments are particularly valuable, because those data yield rich, detailed, and unvarnished insights into how individuals behave and what they value.272Carillo, supra note 75, at 1211. On the value of unmediated data, see Cohen, supra note 17, at 84. Individuals will often fail to understand how their payments data might be used or what it communicates about them,273Solow-Niederman, supra note 149, at 1. but this kind of data can be used to surveil and then manipulate them.274Carillo, supra note 75, at 1222. For example, Consumer Finance Protection Bureau (“CFPB”) Director Rohit Chopra raised concerns that “Big Tech firms can use detailed payments data to develop personalized pricing algorithms for e-commerce or increase engagement with behavioral advertising.”275Rohit Chopra, CFPB Director, Remarks at the Global Financial Innovation Network’s Annual General Meeting (Nov. 8, 2023), https://www.consumerfinance.gov/about-us/newsroom/prepared-remarks-of-cfpb-director-rohit-chopra-at-the-global-financial-innovation-networks-annual-general-meeting [https://perma.cc/6CFW-UGXU]. Alicia Solow-Niederman has emphasized that machine learning technology can now be deployed to “use available data collected from individuals to generate further information about both those individuals and about other people,” and these inferences can then be used to predict people’s behavior, manipulate them, and color reputations.276Solow-Niederman, supra note 149, at 5; see also Cohen, supra note 17, at 76. Payments platforms may even use the data they collect about their users to deplatform them, censoring people’s ability to engage in financial transactions.277“PayPal updated its regulations to give itself the power to levy fines and take other punitive actions, including deplatforming, against users engaged in conduct that would not otherwise violate federal law. (PayPal withdrew the regulation.)” Rohit Chopra, CFPB Director, Remarks at the Brookings Institution Event on Payments in a Digital Century (Oct. 6, 2023), https://www.consumerfinance.gov/about-us/newsroom/prepared-remarks-of-cfpb-director-rohit-chopra-at-the-brookings-institution-event-on-payments-in-a-digital-century [https://perma.cc/VC3C-VTPS]. These kinds of harms are not distributed equally, and often the most vulnerable groups will be surveilled the most as well as suffer the most from this surveillance: “[M]any lower-income users rely exclusively on mobile platforms that are less versatile, less amenable to user customization and control, and designed to maximize data sensing and harvesting.”278Cohen, supra note 17, at 177.

The subtle and not-so-subtle harms associated with payments data collection prompt a need to minimize the collection of payments data in the first place.279Carillo, supra note 75, at 1227–28. Fintech once again proposes a techno-solutionist solution to this problem, in the form of the pseudonymous blockchain. However, the blockchain does not minimize the production of data—it still records every transaction on the blockchain, although it cloaks them in pseudonymity.280Id. at 1240. Blockchains make all transactions associated with a public key visible to everyone—meaning that once someone (law enforcement, an intimate partner, a stalker) knows someone’s public key, they can easily identify all of their transactions.281Anna P. Kambhampaty, Alisha Haridasani Gupta & Valeriya Safronova, Crypto Joins the Abortion Conversation, N.Y. Times (May 14, 2022), https://www.nytimes.com/2022/05/14/style/abortion-crypto-donations.html [https://web.archive.org/web/20241201161402/https://www.nytimes.com/2022/05/14/style/abortion-crypto-donations.html]. This reality exposes the folly of techno-solutionist proposals to use crypto to assist women seeking abortions in the United States, for example.282Id. As one New York Times article put it, “though many crypto enthusiasts dangle the lure of anonymity . . . because of the precision with which the blockchain traces transactions, paying for abortions using crypto could potentially have the opposite effect: exposing both the women getting abortions and the people paying for them.”283Id. And not only is the blockchain itself highly legible, but those who use blockchain-based financial services typically also rely on a number of intermediaries who can also collect user data.284Carillo, supra note 75, at 1245. For a discussion of the different kinds of crypto intermediaries who may collect data, see Hilary J. Allen, DeFi: Shadow Banking 2.0?, 64 Wm. & Mary L. Rev. 919, 924 (2023).

If we truly wish to minimize the production of payments data, the most simple solution does not require any technology—lawmakers could take steps to preserve physical cash infrastructure, as cash transactions do not generate any data (there are also financial inclusion and resilience justifications for ensuring that cash continues to be accepted).285Brett Scott, Cloudmoney: Cash, Cards, Crypto, and the War for Our Wallets 191–92, 200 (2022); Hilary J. Allen, Payments Failure, 62 B.C. L. Rev. 453, 513 (2021). As a supplement to physical cash, Carillo proposes a “Postal Cash Card” that can store value and facilitate transactions in a way that emulates debit cards but does not generate any data about the holder.286Carillo, supra note 75, at 1295–99. Carillo’s proposal is an illustration of the principle that rejecting techno-solutionism does not necessarily mean rejecting technology: he has proposed a technological innovation (the card), but also provided a detailed proposal about the institutional context in which it will be offered (non-profit, at the post office), in a way that is responsive to expressed privacy concerns and pushes back against the tide of “data-vacuuming” in for-profit technological development. Carillo’s proposal also supplies another illustration of the point that when it comes to technological innovation, incentives matter, and so a technology developed by a public entity for a non-profit purpose is more likely to avoid the siren song of mass data collection than a private sector payments technology.

III.  Financial Regulation and Techno-Solutionism

The previous Parts have described what techno-solutionism is and how it manifests in the context of fintech. As part of that discussion, Part II identified a panoply of fintech harms in need of regulation, but the law’s ability to rein in such harms is often stymied by techno-solutionism that it helps perpetuate. We certainly should not assume that the law is the only thing at work here—techno-solutionism is itself a complex phenomenon with many causes.287See supra notes 24–26 and accompanying text. However, illuminating financial regulation’s relationship with techno-solutionism is an important precondition to addressing the negative impacts of fintech.

A.  Quick Primer on Financial Regulation

This Article has already observed that technology businesses are constructed in part by law; as Katharina Pistor has explained, the same is true for finance.288Pistor, supra note 228, at 321. Financial regulation is a constitutive part of fintech’s evolution, but the law as applied to fintech has sometimes had an unhealthy relationship with techno-solutionism. One problem with techno-solutionism is that it downplays the value of non-technological domain area expertise,289See supra notes 50–53 and accompanying text. but the history and context for why we regulate finance are critical parts of any discussion of how the law should address fintech. This Section therefore provides some background on financial regulation more generally, before the next Section demonstrates how financial regulation can both facilitate and be inhibited by techno-solutionism.

We have already explored techno-solutionism’s false neutrality.290See supra notes 32–36 and accompanying text. More specifically to fintech, Omarova observes that “even the most advanced technology is merely a tool. How to use it—for what purposes, and to what effect—is a choice.” Omarova, supra note 191, at 76. Along with this false neutrality often comes a false equivalence where different applications of technologies are painted as equally transformative and equally worthy of pursuit, notwithstanding that the benefits and costs of different applications will inevitably vary. We often hear fintech services analogized to other internet services—“send money around the world as easily as you can send an email”291See, e.g., Decentralized Finance (DeFi), Ethereum, https://ethereum.org/en/defi [https://perma.cc/J8H6-SVB9] (“Ethereum makes sending money around the world as easy as sending an email.”).—but losing money is much more consequential than losing an email (certainly for the person involved, and potentially also for confidence in financial institutions and the broader financial system). Because the stakes are so high, and because we have so many historical examples of things going badly wrong in the financial system, finance has long been heavily regulated—in a way that couriered letters never were. Techno-solutionists ignore that history at their (or rather, our) peril.

Financial regulatory agencies are typically given mandates to pursue one or more of the following “menu” of financial regulatory goals: financial stability, consumer protection, investor protection, market efficiency, competition, and preventing financial crime.292Armour et al., supra note 112, at 61–69. It should be noted that the Commodity Futures Trading Commission’s (“CFTC”) mandate to pursue market integrity does not fit easily into this menu but relates most closely to missions to promote market efficiency. Notably, no U.S. financial regulatory agency has an express statutory mandate to promote innovation. Instead, the banking agencies (the Federal Deposit Insurance Corporation (“FDIC”), Office of the Comptroller of Currency (“OCC”), and the Federal Reserve) were all formed in response to episodes of financial instability, and all have some form of “safety and soundness” mandate oriented toward ensuring the stability of the financial system293Hilary J. Allen, Regulating Fintech: A Harm Focused Approach, 52 Comput. L. & Sec. Rev. 1, 2–3 (2024). (a council of these and other regulatory agencies known as the Financial Stability Oversight Council has an explicit mandate to promote financial stability).294Dodd-Frank Wall Street Reform and Consumer Protection Act, Pub. L. No 111-203, § 112(a), 124 Stat. 1394–96 (2010) (codified at 15 U.S.C. § 5322). Financial stability regulation can have microprudential and macroprudential orientations: a microprudential approach seeks to ensure the solvency of individual financial institutions, whereas a more macroprudential approach seeks to protect the financial system as a whole by understanding and responding to how those financial institutions are interconnected, and to other market dynamics.295Jeremy C. Kress & Jeffery Y. Zhang, The Macroprudential Myth, 112 Geo. L.J. 569, 578 (2024). Regardless of orientation, the ultimate goal of financial stability regulation is to ensure that the financial system can continue to supply the credit and transactional services on which the broader economy depends for growth.296When a financial system is stable, it is “able to withstand shocks without giving way to cumulative processes which impair the allocation of savings to investment opportunities and the processing of payments in the economy.” Tommaso Padoa-Schioppa, Central Banks and Financial Stability: Exploring a Land in Between 20 (Second ECB Cent. Banking Conf., Policy Panel Introductory Paper, 2002), http://www.ecb.de/events/pdf/conferences/tps.pdf [https://perma.cc/8ZJH-3EQC].

Market regulators like the Securities and Exchange Commission (“SEC”), Commodity Futures Trading Commission (“CFTC”), and CFPB were also formed in response to specific episodes of public harm. The SEC was created as an investor protection body in the wake of the stock market crash of 1929 and ensuing Great Depression (later, in 1996, the SEC was given additional mandates to promote efficiency and capital formation).297National Securities Markets Improvement Act of 1996, Pub. L. No. 104-290, § 106, 110 Stat. 3424–25 (1996). The CFTC was created in 1974 in response to concerns about excessive speculation and manipulation in agricultural futures markets.298In 1973, “[g]rain and soybean futures prices reach record highs. This is blamed in part on excessive speculation and there are allegations of manipulation. Congress begins to consider revising the Federal regulatory scheme for commodities.” History of the CFTC: US Futures Trading and Regulation Before the Creation of the CFTC, CFTC, https://www.cftc.gov/About/HistoryoftheCFTC/history_precftc.html [https://web.archive.org/web/20241225012428/https://www.cftc.gov/About/HistoryoftheCFTC/history_precftc.html]. The CFPB was formed in 2010 as a response to the consumer protection failures that contributed to the 2008 financial crisis,299Leonard J. Kennedy, Patricia A. McCoy & Ethan Bernstein, The Consumer Financial Protection Bureau: Financial Regulation for the Twenty-First Century, 97 Cornell L. Rev. 1141, 1144–45 (2012). and has mandates to protect consumers and promote competition.300“The Bureau shall seek to implement and, where applicable, enforce Federal consumer financial law consistently for the purpose of ensuring that all consumers have access to markets for consumer financial products and services and that markets for consumer financial products and services are fair, transparent, and competitive.” 12 U.S.C. § 5511. In 2023, some Republican lawmakers sought to give the SEC an additional mandate to promote innovation, but the provision was eventually struck from the proposed legislation (had such a provision been enacted, it would no doubt have served as a weapon for those seeking to invalidate the SEC’s investor protection rules on the grounds that they stifled innovation).301Hilary J. Allen, The SEC Should Not Sacrifice Citizens on the Altar of Private Sector Innovation, The Hill (July 18, 2023, 9:00 AM), https://thehill.com/opinion/finance/4101392-the-sec-cannot-sacrifice-citizens-on-the-altar-of-private-sector-innovation [https://web.archive.org/web/20231106022916/https://thehill.com/opinion/finance/4101392-the-sec-cannot-sacrifice-citizens-on-the-altar-of-private-sector-innovation]. In the absence of any express innovation mandates, efficiency and competition mandates are the ones typically invoked to justify innovation-friendly regulatory policies.

While it is possible to interpret efficiency and competition mandates as complementary to the goals of investor and consumer protection and financial stability,302For example,

[i]f the genesis of financial regulation was the desire to force the financial industry to internalize the costs of the harm it creates for others, then it would be more consistent with that harm reduction function to interpret the efficiency criterion in a distributionally sensitive way and consider what would be more efficient from the perspective of society more broadly.

Allen, supra note 293, at 5 (emphasis omitted).
efficiency and competition mandates are often framed in ways that conflict with those other goals (for example, as Part II explored, fintech that has been touted as promoting efficiency and competition can come at the price of exposing consumers and investors to predatory inclusion). If it is assumed that technology is the best, easiest, or only way to improve efficiency and competition, this techno-solutionist framing will lend itself to accommodative regulatory strategies that sacrifice investor, consumer, and financial stability protection goals. This is not just an issue for regulators: lawmakers in Congress have also sometimes been swayed by techno-solutionism. The next Section will consider whether fintech-specific legislative and regulatory proposals have helped perpetuate techno-solutionism in a way that undermines financial regulation’s ability to protect the public from harm.

B.  Financial Regulation and Techno-Solutionism

Fintech poses many challenges for the enterprise of financial regulation: as Saule Omarova has observed, fintech disrupts financial regulation’s “basic normative thrust, its hierarchy of goals, its procedural mechanisms and tools, and its practical efficacy.”303Omarova, supra note 191, at 77. For further discussion of the challenges that fintech poses for financial regulation, see Allen, supra note 113, at 135–62. Furthermore, there are some truly novel privacy-type harms arising from the movement toward an economy “oriented principally toward the production, accumulation, and processing of information,” and existing financial regulation is not up to protecting against these kinds of harms.304Cohen, supra note 17, at 6. For example, existing financial privacy statutes (like the Gramm-Leach-Bliley Act) are simply not up to the task of responding to the types of privacy concerns explored in Section II.D,305Carillo, supra note 75, at 1224. and existing financial regulation would similarly struggle to address the harms that would arise from the integration of large tech platforms and finance.306Section 4 of the Bank Holding Company Act (“BHC Act”) enforces a separation between deposit-taking banks and other commercial enterprises but does nothing to separate commercial enterprises from lending or payments activities. 12 U.S.C. 1843. There are also loopholes in the BHC Act’s definition of “bank” for things like industrial loan companies that tech platforms may seek to exploit. See infra note 332. With all that said, though, existing financial regulation can still force a reckoning with many of the negative consequences of fintech innovation and require them to be remedied. We have decades of experience with many of the kinds of harms that fintech is inflicting, and many of the problems raised in Part II have solutions based in existing legal remedies. The fact that new technologies have come to play an increasingly important role in delivering financial services has sometimes been weaponized (through cognitive capture and related strategies) to obscure the applicability of existing law, but we should not unquestioningly accept the premise that all previous grants of regulatory authority (and the rules implementing them) are hopelessly outmoded and obsolete as a result of technological change.

This Section will look at fintech-specific legislative proposals and administrative actions that illustrate how techno-solutionism is impacting the creation of new financial regulation, and the implementation of existing financial regulation (this is not a comprehensive survey of all fintech-related financial regulation to date, but instead a series of illustrative examples). The Section will finish by looking at a developing area of financial regulatory practice: regulation of the financial industry’s use of AI.

  1. Legislative Proposals

As of the date of writing, the United States Congress has not enacted any fintech-specific legislation. However, a number of fintech-related bills have been introduced, and in a context where norms about how to respond to fintech and its harms are still developing, these bills can have an expressive valence. Some of these bills express the standard techno-solutionist message that

government regulation will stifle innovation in the dynamic tech sector, that it is unnecessary because market forces and the tech companies’ own benevolence will prevent social harms, and that, where regulation is called for, self-regulation is the only effective way to order the behavior of companies in this complex industry.307Short et al., supra note 55, at 4.

Other proposed bills have sought to address the harms associated with fintech business models and serve as something of a counterbalance to the formation of techno-solutionist norms.

In particular, a number of crypto-related bills have been introduced into Congress. Some of these bills are targeted narrowly at the harms associated with using crypto for money laundering and sanctions evasion, consistent with the regulatory goal of preventing financial crime.308See, e.g., Digital Asset Anti-Money Laundering Act, S. 2669, 118th Cong. (2023). The more far-reaching bills, however, (like the Lummis-Gillibrand Responsible Financial Innovation Act,309S. 4356, 117th Cong. (2022). the Digital Commodities Consumer Protection Act,310S. 4760, 117th Cong. (2022). and the Financial Innovation and Technology for the 21st Century Act311H.R. 4763, 118th Cong. (2023). passed by the House of Representatives in May 2024) are widely regarded to have been driven by the crypto industry and their VC funders.312“Crypto lobbyists pushed heavily for [the Financial Innovation and Technology for the 21st Century Act] on Capitol Hill, and the bill was publicly supported by leading voices in the industry including Coinbase, The Block, and Digital Currency Group.” Sophia Kielar & Samidh Guha, The Future of Crypto Regulation: What is FIT 21?, Thomson Reuters (Sept. 20, 2024), https://www.thomsonreuters.com/en-us/posts/government/crypto-regulation-fit-21 [https://perma.cc/A95J-KMEE]; see also Cheyenne Ligon, The ‘SBF Bill’: What’s in the Crypto Legislation Backed by FTX’s Founder, CoinDesk (Nov. 15, 2022, 3:05 PM), https://www.coindesk.com/policy/2022/11/15/the-sbf-bill-whats-in-the-crypto-legislation-backed-by-ftx-founder [https://perma.cc/8LUN-ULC4]. The same dynamic is playing out at the state level. See Eric Lipton & David Yaffe-Bellany, Crypto Industry Helps Write, and Pass, Its Own Agenda in State Capitols, N.Y. Times (Apr. 10, 2022), https://www.nytimes.com/2022/04/10/us/politics/crypto-industry-states-legislation.html [https://web.archive.org/web/20240907152718/https://www.nytimes.com/2022/04/10/us/politics/crypto-industry-states-legislation.html]. Given their genesis, these bills are unsurprisingly deeply techno-solutionist in orientation, ignoring the history and context that led to the development of existing financial regulatory structures in their bid to allow the crypto industry to innovate outside of these structures: House Financial Services Committee leadership described its bill as “facilitating a regulatory environment that allows this technology to flourish in the United States.”313Press Release, Patrick McHenry, Chairman, House Fin. Servs. Comm., McHenry Delivers Opening Remarks at Historic Markup of Comprehensive Digital Asset Market Structure Legislation (July 26, 2023), https://financialservices.house.gov/news/documentsingle.aspx?DocumentID=408928 [https://perma.cc/FBQ9-QCW4].

Among other problems, these bills seek to remove the vast majority of crypto assets from the investor protection oversight of the SEC and give jurisdiction to the CFTC—a regulatory body that has significantly fewer resources than the SEC, lacks a statutory investor protection mandate or culture of protecting retail investors, and also allows exchanges to self-certify the assets they list.314For elaboration on these types of concerns, see Letter from Dennis M. Kelleher to House Agricultural and Financial Services Committee Leadership Regarding Concerns About Provisions in the Digital Asset Market Structure Discussion Draft (July 11, 2023) [hereinafter Kelleher Letter], https://bettermarkets.org/wp-content/uploads/2023/07/Final-Ltr-to-FSCAG-re-cryptocurrency-.pdf [https://perma.cc/TRN5-T7WE]. For more on the CFTC and self-certification, see Lee Reiners, Bitcoin Futures: From Self-Certification to Systemic Risk, 23 N.C. Banking Inst. 61, 90–92 (2019). Doing so would deprive investors of the protections afforded by the SEC’s registration and disclosure regime for public offers and sales of securities, as well as the protections of securities broker/dealer and exchange registration requirements that would help mitigate the conflicts of interest inherent in the crypto exchange business model.315Kelleher Letter, supra note 314, at 2–5. As I testified in 2022, these kinds of bills “are designed to offer fewer investor protections than the existing securities laws, and they were intentionally designed in this way in order to facilitate crypto innovation.”316Hearing on Crypto Crash: Why the FTX Bubble Burst and the Harm to Consumers Before the S. Comm. on Banking, Hous., & Urb. Affs., 117th Cong. (2022) [hereinafter Allen Testimony] (statement of Hilary J. Allen, Professor of Law, American University Washington College of Law), https://www.banking.senate.gov/imo/media/doc/Allen%20Testimony%2012-14-22.pdf [https://perma.cc/EV9C-NR2K]. They would also lend legitimacy and credibility to crypto assets in the eyes of both retail and institutional investors, expanding a market for such assets that the industry has struggled to sustain in the absence of government endorsement.317Faverio, Dawson & Sidoti, supra note 167. Furthermore, these bills would create regulatory arbitrage opportunities outside of the crypto industry: while crypto advocates have described these bills as bespoke regimes for crypto, issuers of other types of securities would also have incentives to migrate into the new, lighter-touch regime (which would seemingly be accessible to them if they simply recorded ownership of their securities on a blockchain). Finally, these bills often suffer from trying to tie law too specifically to crypto technology and business models at a particular moment in time, ensuring that technological innovation could be used to arbitrage any such law that is enacted, quickly rendering the investor protections that are included in the bill obsolete.

There have also been crypto bills introduced that would undermine the financial stability regulation implemented by the federal banking agencies by creating new lighter-touch regulatory regimes for stablecoins.318In commenting on the Lummis-Gillibrand bill, Wilmarth notes that it includes

excessively lenient chartering criteria and dangerously weak capital standards for stablecoin issuers, woefully inadequate supervisory powers over stablecoin issuers and entities controlling those issuers, nonexistent stabilizing measures (like federal deposit insurance) to reduce the risks of contagion from failures of stablecoin issuers, misguided opportunities for stablecoin issuers to engage in risky derivatives activities, and a disturbing lack of regulatory controls over stablecoin transactions occurring on crypto exchanges and other crypto trading venues.

Arthur E. Wilmarth, Jr., Policy Brief: Congress Should Reject the Lummis-Gillibrand Stablecoin Bill Because It Would Endanger Consumers, Investors, and Our Financial System 1 (Apr. 30, 2024) (unpublished manuscript), https://scholarship.law.gwu.edu/cgi/viewcontent.cgi?article=2989&context=faculty_publications [https://perma.cc/76SB-YGUS].
The stated aim of these bills is to support stablecoins as “an exciting technological development that could transform money and payments,”319Toomey Introduces Legislation to Guide Future Stablecoin Regulation, U.S. S. Comm. on Banking, Hous. & Urb. Affs. (Dec. 21, 2022), https://www.banking.senate.gov/newsroom/minority/toomey-introduces-legislation-to-guide-future-stablecoin-regulation [https://perma.cc/ZJU8-GALP]. notwithstanding that from a technological perspective, stablecoins are extremely ill-suited to large-scale payments processing.320Regarding the costs and delays associated with processing transactions on a blockchain, see White, supra note 40; Levitin, supra note 114, at 144. As I previously testified regarding the Stablecoin TRUST Act introduced by then-Senator Toomey, the Lummis-Gillibrand Responsible Financial Innovation Act, and a draft House Financial Services Committee stablecoin bill:

If any of these bills were enacted, they would authorize banks to issue stablecoins, making it highly probable that the Federal Reserve would feel compelled to bail out a failing stablecoin (which would operate as an indirect bailout of the crypto speculation the stablecoins are used for). Even more problematic, those bills would also authorize non-banks to issue stablecoins, yet be subject to lighter-touch regulation ex ante than traditional banks.321Allen Testimony, supra note 316.

This critique applies equally to a later iteration of the House Financial Services Committee stablecoin bill that was voted out of committee in July 2023.322Clarity for Payment Stablecoins Act, H.R. 4766, 118th Cong. (2023).

The techno-solutionism inherent in these crypto bills is all the more striking because crypto inverts the typical dynamic where the benefits of innovation are immediately obvious, but the harms take longer to manifest. As Federal Reserve Vice Chair for Supervision Michael Barr has observed, people often “assume too quickly that they know how the new products work, and novel products can appear both safe and lucrative, particularly if they have not been tested through bouts of market stress.”323Michael S. Barr, Vice Chair for Supervision, Bd. of Governors of the Fed. Rsrv. Sys., Remarks at the Peterson Institute for International Economics, Supporting Innovation with Guardrails: The Federal Reserve’s Approach to Supervision and Regulation of Banks’ Crypto-Related Activities (Mar. 9, 2023), https://www.federalreserve.gov/newsevents/speech/barr20230309a.htm [https://perma.cc/Q2TN-ZSVE]. This kind of dynamic can unsurprisingly make lawmakers loath to crack down on new technologies with evident benefits, but with crypto, harms have been evident for some time, while the industry still struggles to articulate concrete use cases after fifteen years.324Regarding use cases (and lack thereof), see White, supra note 40. Regarding harms, for a running tally of crypto hacks, scams, and frauds impacting consumers, see Web3 is Going Just Great, supra note 170. For a discussion of the environmental toll of crypto that relies on proof-of-work blockchains, see Sanaz Chamanara, S. Arman Ghaffarizadeh & Kaveh Madani, The Environmental Footprint of Bitcoin Mining Across the Globe: Call for Urgent Action, 11 Earth’s Future 1, 2 (2023). For a discussion of the use of crypto for money laundering, ransomware attacks, and sanctions evasion, see generally Hearing on Understanding the Role of Digital Assets in Illicit Finance Before the S. Comm. on Banking, Hous., & Urb. Affs., 117th Cong. (2022) [hereinafter Stansbury Testimony] (statement of Shane T. Stansbury, Professor of Law, Duke University School of Law), https://www.banking.senate.gov/imo/media/doc/Stansbury%20Corrected%20Statement%203-17-22.pdf [https://perma.cc/RV92-3R58]. As explored in Part II, there are strong impediments to crypto-related innovation ever delivering on its promises of financial inclusion, efficiency, competition, and privacy: it is a testament to the rhetorical power of techno-solutionism that facilitating this “solution in search of a problem” remains a defensible goal for many Members of Congress.

Of course, techno-solutionism is not the only force at work here. When it came time to vote on the Financial Innovation and Technology for the 21st Century Act, Members of Congress facing tough reelection campaigns were loath to draw the ire of the crypto industry (the pro-crypto Fairshake Political Action Committee amassed an unprecedented $114 million war chest from the crypto industry and prominent venture capitalists to spend in the 2024 election cycle).325Rick Claypool, Big Crypto, Big Spending: Crypto Corporations Spend an Unprecedented $119 Million Influencing Elections, Pub. Citizen (Aug. 21, 2024), https://www.citizen.org/article/big-crypto-big-spending-2024 [https://perma.cc/LEJ5-6DKL]. But still, techno-solutionism was used as window dressing. When that bill was passed by the House of Representatives with bipartisan support, House Financial Services Committee Chair Patrick McHenry made the following statement:

FIT21 provides the regulatory clarity and robust consumer protections necessary for the digital asset ecosystem to thrive in the United States. The bill also ensures America leads the financial system of the future and remains a hub for technological innovation.326Press Release, Financial Services Committee, House Passes Financial Innovation and Technology for the 21st Century Act with Overwhelming Bipartisan Support (May 22, 2024), https://financialservices.house.gov/news/documentsingle.aspx?DocumentID=409277 [https://perma.cc/8477-6U7E].

Some other non-crypto fintech bills have evinced a less techno-solutionist approach to fintech business models, though. For example, Congressman Jesus García introduced a “Close the ILC Loophole Act,”327H.R. 5912, 117th Cong. (2022). designed to prevent technology platform companies from exploiting a loophole in the Bank Holding Company Act that could allow those companies to acquire banks without being regulated by the Federal Reserve (which would essentially allow them to avoid financial stability regulation).328Senator Sherrod Brown introduced similar legislation in 2023 titled Close the Shadow Banking Loophole Act, S. 3538, 118th Cong. (2023). Congressman Lynch also introduced an “ECASH Act”329Electronic Currency and Secure Hardware (ECASH) Act, H.R. 7231, 117th Cong. (2022). that proposed to direct the Treasury Department to develop and issue “an electronic version of the U.S. Dollar for use by the American public.”330Press Release, Stephen F. Lynch, U.S. Representative (MA-08), Rep. Lynch Introduces Legislation to Develop Electronic Version of U.S. Dollar (Mar. 28, 2022), https://lynch.house.gov/2022/3/rep-lynch-introduces-legislation-to-develop-electronic-version-of-u-s-dollar [https://perma.cc/48X5-M5GE]. This bill is an example of technology-focused public policy that is not techno-solutionist: it is focused on developing technology to solve financial inclusion concerns, but is sensitive to non-technological context. In particular, in response to the kinds of consumer protection and privacy concerns raised in Section II.D, the proposal for ECASH is intended to “preserve a role in our financial system for smaller anonymous cash-like transactions which are currently transacted in physical dollars, and which have seen a rapid decline in use.”331Id.

  1. Administrative Action

While this discussion has focused so far on Congress, the federal financial regulatory agencies are on the front lines of dealing with fintech in the United States (state regulation is also relevant but largely beyond the scope of this Article).332For a discussion of states’ regulatory treatment of crypto, see Arthur E. Wilmarth, Jr., We Must Protect Investors and Our Banking System from the Crypto Industry, 101 Wash. U. L. Rev. 235, 269–71 (2023); Lipton & Yaffe-Bellamy, supra note 312. For a discussion of state regulation of fintech lending, see generally Odinet, supra note 21. Unlike unpassed legislation, the actions taken by regulatory agencies can have more than just normative valence. We will now examine a sample of the fintech-related rulemaking, monitoring, and enforcement activities of financial regulators and consider whether they are perpetuating, or being stymied by, techno-solutionism.

Acting Comptroller of the Currency Michael Hsu identified a dichotomy between regulators “taming” and “accommodating” financial innovation. Taming forces the technology to “conform to regulatory standards,” whereas an accommodative stance that dictates that “regulation should adjust to . . . and accept the new technology and possibilities for what they are” is much more techno-solutionist.333Michael J. Hsu, Acting Comptroller of the Currency, Remarks to the Harvard Law School and Program on International Financial Systems Roundtable on Institutional Investors and Crypto Assets: “Don’t Chase,” 3 (Oct. 11, 2022), https://www.occ.gov/news-issuances/speeches/2022/pub-speech-2022-126.pdf [https://perma.cc/XUR3-8DNS]. Accommodative regulators may take steps to actively loosen regulatory requirements, but often, accommodation takes the form of inaction with regulators simply refraining from exercising their jurisdiction when new technologies are involved. Either way, an overly accommodative stance will subordinate regulatory goals to the claimed promise of the technology, neglecting the reality that sometimes the negative consequences of a technology are such that accommodating that technology is bad policy (particularly if the technology itself is considered by independent experts to have limited utility).334See, e.g., note 162 and accompanying text.

Another framing that financial regulators often use when discussing fintech regulation is “tech neutrality,”335Janet L. Yellen, Secretary of the Treasury, Remarks from Secretary of the Treasury Janet L. Yellen on Digital Assets (Apr. 7, 2022), https://home.treasury.gov/news/press-releases/jy0706 [https://perma.cc/5F9L-SGJ8]. or “same activity, same risk, same rules.”336Wilmarth, Jr., supra note 332, at 314. This is often a good starting point for taming fintech, because it recognizes that regulatory arbitrage should not be allowed simply because a new kind of technology is involved: techno-solutionism may otherwise lull us into believing that new technologies are doing the disrupting, when in reality the only disruption may be lawyers devising new regulatory arbitrage strategies that can be “sold” to lawmakers using techno-solutionist rhetoric. However, a posture of technological neutrality can turn out to be accommodative in practice if regulators are too amenable to the fintech industry’s own techno-solutionist descriptions of activities and risks as novel, or if regulators assume that the technology is just another way of discharging an existing economic function and won’t pose any sui generis risks of its own.

Regulators should dig beneath the techno-solutionism to ask fundamental preliminary questions about whether a technology actually performs the activity its purveyors say it performs—otherwise regulators may mistakenly apply the wrong regulatory regime. They also need to ask whether changes in technological delivery mechanisms are creating new kinds of risks (for example, new technology-related operational risks). Although existing regulatory approaches will often be useful, sometimes new methods will need to be devised in order to discharge existing mandates in a financial system populated by new technologies. Regulators should not be deterred from developing these new methods by a desire to be perceived as technology neutral.

Unfortunately, reality does not always meet these ideals. This is no doubt due, in part, to cognitive capture. The financial industry has long weaponized complexity to deflect regulatory scrutiny,337Awrey, supra note 122, at 275–76. but with the rise of fintech, that financial complexity is being overlaid with technological complexity. Many financial regulatory agencies are primarily staffed with lawyers, economists, and accountants who may need to rely on the fintech industry to help them understand how a particular technology works,338Omarova, supra note 191, at 101. and this can be a fertile environment for cognitive capture to develop. Of course, individual agency personnel are just that—individuals. It is often remarked that “personnel is policy,”339See, e.g., Jeff Hauser & David Segal, Personnel Is Policy, Democracy J. (Feb. 6, 2020, 3:43 PM), https://democracyjournal.org/magazine/personnel-is-policy [https://perma.cc/DB7D-VK8E]. and those with some technological expertise may feel more empowered to push back against techno-solutionism.

An individual regulator’s susceptibility to techno-solutionism may also be impacted by their political ideology. Techno-solutionism is often aligned with libertarianism,340See Short et al., supra note 55, at 4. and those dispositionally opposed to government involvement will, all things being equal, probably be more supportive of agency policies that accommodate private sector innovation. The following discussion of fintech-related administrative actions sometimes demonstrates whipsaws in an agency’s fintech policy that can be partially explained by changes in the political orientation of agency leadership. This dynamic has been most obvious with the CFPB; at the other end of the spectrum, the SEC has been quite consistent in its fintech policy across administrations.341Gary Gensler, Chairman of the SEC, Speech: Kennedy and Crypto (Sept. 8, 2022), https://www.sec.gov/news/speech/gensler-sec-speaks-090822 [https://perma.cc/WT8J-5NMP].

i.  Rulemaking and Guidance

There have been some proposals for formal fintech-specific administrative rulemakings, but federal financial regulatory agencies have often preferred to issue informal guidance when it comes to fintech. The formal rulemaking process has sometimes struggled to address rapid technological change in a timely manner,342See Tim Wu, Agency Threats, 60 Duke L.J. 1841, 1841–43 (2011). and the Supreme Court’s embrace of the major questions doctrine has created greater uncertainty about courts’ willingness to invalidate rulemakings pertaining to new technologies.343Daniel T. Deacon & Leah M. Litman, The New Major Questions Doctrine, 109 Va. L. Rev. 109, 1087–88 (2023). Regarding the application of the major questions doctrine to crypto, see Chris Brummer, Yesha Yadav & David Zaring, Regulation by Enforcement, 96 S. Cal. L. Rev. 1297, 1328–29 (2024). In June of 2024, the Supreme Court also overruled the longstanding Chevron precedent that had previously directed courts to defer to reasonable agency interpretations of statutory provisions.344Loper Bright Enters. v. Raimondo, 144 S. Ct. 2244, 2273 (2024). Given these challenges, it is unsurprising that regulators of all stripes have often preferred to rely on more nimble informal guidance when it comes to fintech.

Like the legislative proposals discussed above, fintech-related informal guidance and proposed rulemakings have been a mixed bag with some embracing, and some rejecting, techno-solutionist approaches. Notably accommodative administrative actions include the OCC’s 2018 announcement of a nonbank fintech charter and the CFPB’s 2019 proposal for a fintech regulatory sandbox. Both of these had a techno-solutionist orientation, although neither were ultimately successful in their accommodations. The OCC’s proposed fintech charter was a response to concerns that nonbank fintech firms had to comply with consumer protection regulations in every state where they did business.345Recent Policy Statement, Office of the Comptroller of the Currency, Policy Statement on Financial Technology Companies’ Eligibility to Apply for National Bank Charters, 132 Harv. L. Rev. 1361, 1361 (2019) (citing Office of the Comptroller of the Currency, Policy Statement on Financial Technology Companies’ Eligibility to Apply for National Bank Charters 1 (2018), https://www.occ.gov/publications/publications-by-type/other-publications-reports/pub-other-occ-policy-statement-fintech.pdf [https://perma.cc/KS3S-JTQC]). A national special purpose charter from the OCC would have preempted many of these state consumer protection regulations—and the OCC justified the proposal on the assumption that it would facilitate technological innovation that would further financial inclusion.346Id. at 1363. Ultimately, however, this proposal was mired in legal challenges and industry largely eschewed the fintech charter.347Id. at 1366–68.

The CFPB’s proposed “Compliance Assistance Sandbox” also sought to preempt the enforcement of state consumer protection laws but was ultimately abandoned for failing to advance its “stated objective of facilitating consumer-beneficial innovation.”348CFPB, Statement on Competition and Innovation (Sept. 30, 2022), https://public-inspection.federalregister.gov/2022-20896.pdf [https://perma.cc/5GN3-2MFG]. Before it was abandoned, though, this sandbox had a very techno-solutionist orientation. For example, in a policy document that was incorporated by reference into the Compliance Assistance Sandbox policy, the CFPB expressly rejected a consumer group’s contention that a sandbox was unnecessary because fintech products rarely raised “novel questions of law and policy.”349CFPB, Policy on No Action Letters 5–6 (Sept. 10, 2019), https://files.consumerfinance.gov/f/documents/cfpb_final-policy-on-no-action-letters.pdf [https://perma.cc/C44L-YMDF]. The policy document also stated the techno-solutionist position that “the Bureau’s statutory mission of protecting consumers is not limited to vigorously enforcing the law. It includes facilitating innovation in markets for consumer financial products and services, as innovation drives competition, which in turn lowers prices and promotes access to more and better products and services.”350Id. at 2.

Regulatory sandboxes have been adopted elsewhere (both internationally and at the state level in the United States) and are generally techno-solutionist in orientation: they loosen financial regulations and use scarce regulatory resources for the primary purpose of promoting private-sector fintech innovation.351Allen, supra note 58, at 580. This implicitly positions “regulation” as the problem that needs to be solved, and if regulators fixate on the private-sector innovation they hope their sandboxes will generate, that may be a distraction from the public goods that regulation was adopted to create and the social harms that regulation was adopted to protect against. Regulatory sandboxes also put regulators in the unusual position of championing participating private sector firms to help them succeed in the marketplace—likely a recipe for cognitive capture.352Id. at 635–36.

Following the appointment of Rohit Chopra as Director of the CFPB in 2021, the CFPB evinced a far less techno-solutionist stance in its informal guidance and proposed rules. In September 2023, the CFPB responded to concerns about algorithmic discrimination by issuing guidance that made clear “that lenders must be able to accurately inform consumers as to why an adverse credit decision was made and explain specifically what factors led to the decision,” emphasizing that the use of AI is not a get-out-of-jail-free card when it comes to compliance with laws like the Equal Credit Opportunity Act.353Chopra, supra note 277. In October 2024, the CFPB finalized a Personal Financial Data Rights rule to implement the previously dormant Section 1033 of the Dodd-Frank Act.354Required Rulemaking on Personal Financial Data Rights, CFPB (Oct. 22, 2024), https://www.consumerfinance.gov/personal-financial-data-rights [https://perma.cc/LB7G-KTLN]. This was an attempt to address a true lacuna in financial regulation and speaks to new kinds of privacy harms and the market power associated with financial data.355Id. In November 2023, the CFPB proposed a rule designed to crack down on regulatory arbitrage by nonbank payments providers, which will be discussed in more detail below.356CFPB Proposes New Federal Oversight of Big Tech Companies and Other Providers of Digital Wallets and Payment App, CFPB (Nov. 7, 2023), https://www.consumerfinance.gov/about-us/newsroom/cfpb-proposes-new-federal-oversight-of-big-tech-companies-and-other-providers-of-digital-wallets-and-payment-apps [https://perma.cc/Z9RA-YH4N]. For further discussion, see text accompanying notes 378–80, infra. It is worth noting that the CFPB is itself a creation of the digital era: launched in 2011 with an intentional technological bent, the agency has been praised for its technological savvy, and that savvy may have equipped the agency to push back against techno-solutionist claims.357Rory Van Loo, Technology Regulation by Default: Platforms, Privacy, and the CFPB, 2 Geo. L. Tech. Rev. 531, 531 (2018).

Turning to crypto, regulators have not promulgated any formal rules, but they have issued a significant amount of informal guidance. In June 2018, then-SEC Corporate Finance Director Bill Hinman delivered what has come to be known as the “Hinman speech” in which he expressed his excitement about blockchain’s potential for decentralization, and he suggested that tokens might not be considered securities “[i]f the network on which the token or coin is to function is sufficiently decentralized.”358William Hinman, Director, Division of Corp. Fin., SEC, Digital Asset Transactions: When Howey Met Gary (Plastic) (June 14, 2018), https://www.sec.gov/news/speech/speech-hinman-061418 [https://perma.cc/9N6R-RAUU]. This speech uncritically accepted the crypto industry’s decentralization rhetoric, neglecting the fact that blockchain’s technological decentralization does nothing to prevent the economic centralization that the SEC is concerned with.359See supra notes 251–55 and accompanying text. Overall, however, the SEC has generally looked beyond that rhetoric and concluded that crypto tokens are subject to the securities laws—as SEC Chair Gary Gensler stated in 2022:

Of the nearly 10,000 tokens in the crypto market, I believe the vast majority are securities. Offers and sales of these thousands of crypto security tokens are covered under the securities laws. . . . For the past five years . . . the Commission has spoken with a pretty clear voice here: through the DAO Report, the Munchee Order, and dozens of Enforcement actions, all voted on by the Commission. Chairman Clayton often spoke to the applicability of the securities laws in the crypto space.360Gensler, supra note 341 (internal citations omitted).

As for the banking regulators, the OCC initially took a somewhat accommodative position on crypto, issuing a number of documents authorizing banks to hold crypto assets in custody for their customers and to hold reserves for stablecoins.361Wilmarth, Jr., supra note 332, at 268. These documents sometimes evince an unquestioning acceptance of crypto’s claims to be a wealth-building and payments technology; for example, the letter authorizing banks to hold stablecoin reserves starts from the premise that “[r]eports suggest stablecoins have various applications, including the potential to enhance payments on a broad scale, and are increasingly in demand.”362Off. of the Comptroller of the Currency, OCC Chief Counsel’s Interpretation on National Bank and Federal Savings Association Authority to Hold Stablecoin Reserves, OCC Interpretive Letter No. 1172, at 1 (Sept. 21, 2020), https://www.occ.gov/topics/charters-and-licensing/interpretations-and-actions/2020/int1172.pdf [https://perma.cc/5DTF-NBQB]. This premise lacks a strong foundation, however, given blockchain technology’s inability to scale to the level needed to compete with traditional payments providers.363White, supra note 40.

More recently, guidance from banking regulators has paid less heed to unsubstantiated promises of crypto’s technological innovation. Most notably, in January 2023, the Federal Reserve, FDIC, and OCC jointly issued strong guidance indicating their expectations that banks would remain separated from crypto, in order to ensure the continuing stability of the banking system.364See generally Bd. of Governors of the Fed. Rsrv. Sys., Fed. Deposit Ins. Corp. & Off. of the Comptroller of the Currency, Joint Statement on Crypto-Asset Risks to Business Organizations (2023), https://www.federalreserve.gov/newsevents/pressreleases/files/bcreg20230103a1.pdf [https://perma.cc/QK4N-QXPS]. In that statement, the agencies articulated the following non-techno-solutionist position:

Given the significant risks highlighted by recent failures of several large crypto-asset companies, the agencies continue to take a careful and cautious approach related to current or proposed crypto-asset-related activities and exposures at each banking organization.365Id. at 2.

ii.  Monitoring

Once regulatory bodies have promulgated rules or informal guidance, they must then engage in supervision, examination, or other monitoring to ensure compliance. It can be difficult to interrogate how these processes are being discharged, as they are often confidential, performed away from the public eye.366Peter Conti-Brown & Sean Vanatta, Focus on Bank Supervision, Not Just Bank Regulation, Brookings (Nov. 2, 2021), https://www.brookings.edu/research/we-must-focus-on-bank-supervision [https://perma.cc/CT8H-LR25]. Sometimes information about these processes is made public, however, and Art Wilmarth has used publicly available sources to document many of the entanglements between banking and crypto that banking supervisors have permitted.367Wilmarth, Jr., supra note 332, at 271–78. Although it seems unlikely that these entanglements could presently threaten the stability of the overall financial system—particularly because regulators have not authorized any U.S. bank to invest directly in crypto assets or accept them as collateral—such entanglements did help bring down Signature Bank and Silvergate Bank, which relied heavily on the crypto industry for deposits and fee income.368Id. at 278–88. The failure of these banks exacerbated a broader regional banking crisis in 2023, and in its report on that crisis, the FDIC conceded that “in retrospect, the FDIC could have acted sooner and more forcefully to compel the bank’s management and its board to address these [AML and risk management] deficiencies more quickly and more thoroughly.”369FDIC, FDIC’s Supervision of Signature Bank 16 (Apr. 28, 2023), https://www.fdic.gov/news/press-releases/2023/pr23033a.pdf [https://perma.cc/T3UR-BPZ4]. Nothing was said in the report, though, about whether regulators had accommodative attitudes toward crypto business models and technologies that helped induce their inaction.

Of course, there is a preliminary question when it comes to fintech supervision, which is whether financial regulators even believe they have supervisory jurisdiction over fintech business models in the first place.370“With any novel financial product, the threshold question is always that of its legal and regulatory status as a security, banking product, commodity, insurance contract, and so on.” Omarova, supra note 191, at 82. If industry actors can successfully convince regulators that their technology is too new to fit into existing regulatory structures, then they will avoid supervision, examination, or other monitoring. James Kwak observed that in the lead-up to the 2008 crisis, “[t]he financial sector . . . seems to have gained the cooperation of the federal regulatory agencies . . . [in part] by convincing them that financial deregulation was in the public interest.”371Kwak, supra note 97, at 77–78. Techno-solutionist narratives make these same claims about advancing the public interest by getting law out of the way so that technological solutions can flourish.

With regard to fintech lending, for example, Chris Odinet has spelled out the arbitrage strategies that have allowed these businesses to operate largely outside of the supervisory powers of the CFPB and federal banking agencies.372Odinet, supra note 21, at 1774 (noting that state regulators often have jurisdiction here, but “occupy an interesting position because they are in theory very powerful but can often be very weak in practice”). Odinet argues that this regulatory arbitrage is the main point of the fintech lending business model: to seek an end-run around both state usury laws and bank capital regulations by having fintech providers partner with or “rent” a bank in a way that avoids both types of rules.373Banks have preferential treatment that allows them to export favorable usury laws in their home jurisdiction so that they can make high-cost loans throughout the country, even in states with more restrictive usury rules—nonbank fintech firms cannot do this. Odinet, supra note 21, at 1775–76, 1778. Fintech lenders (and their associated banks), however, describe these business models as driven by superior technological interfaces and credit scoring systems—this allows them to tap into the positive political valence of technological innovation to facilitate cognitive capture.374“The partnership is, in essence, a regulatory arbitrage scheme meant to allow high-cost predatory lending to proliferate online, all while enjoying the political cover accorded by being labeled a ‘fintech.’ ” Odinet, supra note 21, at 1765. When regulators are persuaded into inaction by such rhetoric, then consumer harm can be perpetuated without oversight.

Many fintech payments providers also engage in regulatory arbitrage. To use Venmo as an example, federal banking regulation would apply to balances in Venmo accounts if they were construed as deposits, but Venmo has entered into carefully crafted relationships with regulated banks to avoid such characterization.375John L. Douglas, New Wine into Old Bottles: Fintech Meets the Bank Regulatory World, 20 N.C. Banking Inst. 17, 25–36 (2016). However, nonbank payments providers can pose consumer protection and financial stability concerns. Awrey and Zwieten have explained that some Venmo customers store funds in Venmo accounts and assume that those funds will remain available for transactions, notwithstanding that Venmo may have used the funds elsewhere or that the funds may be commingled in a Venmo bankruptcy.376Dan Awrey & Kristin van Zwieten, The Shadow Payment System, 43 J. Corp. L. 775, 806 (2018). Venmo customers may not appreciate these vulnerabilities now, but if concerns develop about Venmo and the way it holds customer funds, customers may pull their funds out in something that closely resembles a bank run.377Id.

Different nonbank payments providers pose different permutations of these prudential and consumer protection concerns, but have generally escaped the types of stringent regulation that apply to banks and other insured deposit–taking institutions.378CFPB, supra note 356. The CFPB expressed a willingness to help level this playing field, however, by exercising existing authorities over firms that serve as service providers for banks,379Chopra, supra note 277. and by proposing a rule that would establish an examination program for larger nonbank digital consumer payment companies.380CFPB, supra note 356. In so doing, the CFPB rejected the contention that technology companies should be treated differently from legacy financial institutions when they provide equivalent services.

iii.  Enforcement

When regulatory agencies bring enforcement actions against firms deploying fintech business models and technologies, those enforcement actions tend to signal a rejection of techno-solutionism. The mere fact that an enforcement action was brought tends to suggest a willingness on the part of a regulatory body to look behind the techno-solutionist rhetoric and conclude that new technologies are being used to perpetuate familiar harms for which there are legal consequences.

To be clear, enforcement may be made more challenging by increasing technological sophistication. For example, when it comes to the CFPB seeking to address discrimination in the provision of credit, enforcement is “increasingly difficult when decisions . . . are made via criteria deeply embedded in complex algorithms used to detect patterns in masses of data.”381Cohen, supra note 17, at 179. As the Financial Stability Oversight Council (“FSOC”) has noted, “[m]any AI approaches present ‘explainability’ challenges that make it difficult to assess the suitability and reliability of AI models and to assess the accuracy and potential bias of AI output.”382Fin. Stability Oversight Council, supra note 225, at 9. But the harm identified here (discrimination in the provision of credit) is familiar, and the CFPB’s necessary legal authority (pursuant to the Equal Credit Opportunity Act) holds up, despite the technological innovation. The CFPB confirmed that it will enforce the law “regardless of the technology being used” and that arguing that “the technology used to make a credit decision is too complex, opaque, or new is not a defense for violating these laws.”383Rohit Chopra, CFPB, Kristen Clarke, U.S. Just. Dep’t, C.R. Div., Charlotte A. Burrows, EEOC & Lina M. Khan, FTC, Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems 2 (2023), https://files.consumerfinance.gov/f/documents/cfpb_joint-statement-enforcement-against-discrimination-bias-automated-systems_2023-04.pdf [https://perma.cc/Y5VD-CQ74].

A techno-solutionist approach to enforcement, on the other hand, is likely to manifest in accommodative inaction. Financial regulators who are cognitively captured by techno-solutionist rhetoric may come to believe that technological solutions are exceptional and therefore both need and deserve special treatment under the law—and so they refrain from enforcing existing laws. Ryan Calo has argued that technology is exceptional “when its introduction into the mainstream requires a systematic change to the law or legal institutions in order to reproduce, or if necessary displace, an existing balance of values.”384Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Calif. L. Rev. 513, 552 (2015). This is the kind of argument the crypto industry makes as to why blockchain-hosted assets should not be subject to the long-standing, technology-neutral “Howey test” for determining whether something is an investment contract regulated by the SEC.385The seminal Supreme Court case interpreting the term “investment contract” does so in a way that “embodies a flexible rather than a static principle, one that is capable of adaptation to meet the countless and variable schemes devised by those who seek the use of the money of others on the promise of profits.” SEC v. W.J. Howey Co., 328 U.S. 293, 299 (1946). Another well-worn trope of techno-solutionism is the belief that technology can solve its own problems: this trope, coupled with exceptionalist arguments that technological change is too rapid and complex for the law to effectively address, is often invoked in support of calls for self-regulation.386Short et al., supra note 55, at 17–18. The crypto industry has made repeated arguments that it should regulate itself.387See, e.g., Joe Light, The Crypto Industry’s Solution for Regulation: We’ll Handle It, Bloomberg (Nov. 19, 2021), https://www.bloomberg.com/news/articles/2021-11-19/crypto-industry-s-solution-to-regulation-is-self-regulation [https://perma.cc/QDT4-6WRT].

Fortunately, many regulatory personnel have not been swayed by these kinds of techno-solutionist arguments. In particular, the SEC has been quite aggressive about enforcing the securities laws against the crypto industry;388For a comprehensive listing of the SEC’s crypto enforcement actions, see Crypto Assets and Cyber Enforcement Actions, U.S. SEC, https://www.sec.gov/spotlight/cybersecurity-enforcement-actions [https://web.archive.org/web/20241227170034/https://www.sec.gov/securities-topics/crypto-assets]. in so doing, it is challenging techno-solutionist claims that the use of decentralized technology changes the economic realities of securities investments.389See supra notes 358–359 and accompanying text. These claims are the latest in a long line of tech industry arguments that decentralization defies regulation,390Short et al., supra note 55, at 8–10. but as of the time of writing, courts have largely agreed with the SEC’s anti-techno-solutionist approach (with one notable partial exception).391See, e.g., SEC v. Telegram Grp. Inc., 448 F.Supp. 3d 352, 352 (S.D.N.Y. 2020); SEC v. Kik Interactive Inc., 492 F.Supp. 3d 169, 169 (S.D.N.Y. 2020); SEC v. LBRY, Inc., 639 F.Supp. 3d 211, 220–21 (D.N.H. 2022); SEC v. Terraform Labs. Pte. Ltd., 708 F.Supp. 3d 450, 471–74 (S.D.N.Y. 2023). The notable partial exception was SEC v. Ripple Labs, Inc., 682 F.Supp. 3d 308, 328–30 (S.D.N.Y. 2023), in which Judge Torres concurred with the SEC’s allegations that a security had been sold to institutional investors, but found against the SEC with respect to “programmatic” sales of the XRP token to retail investors. Judge Torres’s reasoning has been expressly rejected by other SDNY judges, including in SEC v. Terraform Labs. Pte. Ltd., 684 F.Supp. 3d 170, 197 (S.D.N.Y. 2023), and in SDNY Judge Failla’s denial of Coinbase’s motion to dismiss the SEC’s enforcement action. SEC v. Coinbase, Inc., 726 F.Supp. 3d 260, 268, 288–89 (S.D.N.Y. 2024). A district court also upheld the CFTC’s determination that the Ooki DAO, a blockchain-hosted decentralized autonomous organization, was a “person” within the meaning of the Commodity Exchange Act and could therefore be held liable for violations of that law.392Press Release, CFTC, Statement of CFTC Division of Enforcement Director Ian McGinley on the Ooki DAO Litigation Victory (June 9, 2023), https://www.cftc.gov/PressRoom/PressReleases/8715-23 [https://web.archive.org/web/20241214222114/https://www.cftc.gov/PressRoom/PressReleases/8715-23].

Cryptocurrencies have also come to play an important role in funding criminal activities and in sanctions evasion.393Stansbury Testimony, supra note 324, at 2. While Section II.D emphasized the legibility of transactions recorded on a blockchain, sophisticated criminals use tools like mixers and tumblers to make it much harder for authorities to trace funds394“One well-known technique is the use of “mixing” or “tumbling” services, which allow for the commingling of legitimate cryptocurrency transmissions with those involving illicit payments, thereby making the criminal activity harder to trace.” Id. at 3.—in response, the Office of Foreign Assets Control (“OFAC”) has sanctioned virtual currency mixers like Tornado Cash, Blender, and Sinbad.395Press Release, U.S. Treasury Dept., Treasury Sanctions Mixer Used by the DPRK to Launder Stolen Virtual Currency (Nov. 29, 2023), https://home.treasury.gov/news/press-releases/jy1933 [https://perma.cc/DCL8-N5XW]. Another high profile enforcement action in this area was brought by the Department of Justice (working in conjunction with OFAC, Financial Crimes Enforcement Network (“FinCEN”), and the CFTC) against the Binance cryptocurrency exchange for failing to comply with anti-money laundering and other laws. Using decidedly non-techno-solutionist rhetoric, Attorney General Merrick Garland announced the charges by saying “using new technology to break the law does not make you a disruptor, it makes you a criminal.”396Press Release, U.S. Dept. of Justice Off. of Pub. Affs., Binance and CEO Plead Guilty to Federal Charges in $4B Resolution (Nov. 21, 2023), https://www.justice.gov/opa/pr/binance-and-ceo-plead-guilty-federal-charges-4b-resolution [https://perma.cc/X4CY-3J7Q].

Many of these enforcement actions have been criticized by the crypto industry (and sometimes by crypto industry–supportive Members of Congress) for impeding fintech innovation.397See, e.g., Marisa T. Coppel, How OFAC’s Tornado Cash Sanctions Violate U.S. Citizens’ Constitutional Rights, CoinDesk (Apr. 18, 2023, 3:06 PM), https://www.coindesk.com/opinion/2023/04/18/how-ofacs-tornado-cash-sanctions-violate-us-citizens-constitutional-rights [https://perma.cc/EN8S-L3S6]; Paul Kiernan, Republicans Pummel SEC’s Gary Gensler Over Crypto Crackdown, Wall St. J. (Apr. 18, 2023), https://www.wsj.com/articles/sec-chair-gensler-to-defend-climate-crypto-plans-before-gop-led-panel-2e3a6ade [https://web.archive.org/web/20231204050108/https://www.wsj.com/articles/sec-chair-gensler-to-defend-climate-crypto-plans-before-gop-led-panel-2e3a6ade]; David Dayen, Congressmembers Tried to Stop the SEC’s Inquiry into FTX, Am. Prospect (Nov. 23, 2022), https://prospect.org/power/congressmembers-tried-to-stop-secs-inquiry-into-ftx [https://perma.cc/43EX-R8YB]. The crypto industry has in particular decried the “regulatory uncertainty” created by such enforcement actions and court decisions, arguing that such uncertainty has undermined the crypto industry’s ability to thrive.398See, e.g., Chris Prentice & Hannah Lang, Coinbase Rejects U.S. Regulator’s Claim It Broke Rules on Crypto, Reuters (Apr. 27, 2023, 1:00 PM), https://www.reuters.com/markets/currencies/coinbase-does-not-list-securities-company-tells-us-regulator-2023-04-27 [https://web.archive.org/web/20230503124643/https://www.reuters.com/markets/currencies/coinbase-does-not-list-securities-company-tells-us-regulator-2023-04-27/]. However, the SEC has been largely unequivocal in its communications that the vast majority of crypto tokens are securities: as Chair Gensler has said, “not liking the message is not the same thing as not receiving it.”399Gensler, supra note 341. In any event, few areas of the law provide perfect certainty, and as the Supreme Court implicitly recognized in formulating the Howey test, preserving a degree of flexibility often proves quite useful in “future-proofing” the law.400The Supreme Court noted that Congress had chosen to include “investment contracts” within the definition of “security” as it “embodies a flexible rather than a static principle, one that is capable of adaptation to meet the countless and variable schemes devised by those who seek the use of the money of others on the promise of profits.” SEC v. W.J. Howey Co., 328 U.S. 293, 299 (1946). Experience with the legal innovation of the limited liability company also makes it clear that perfect certainty under the securities laws is not necessary for something to thrive: courts have refused to lay down bright-line rules for when interests in limited liability companies will be considered investment contracts under the Howey test,401See, e.g., United States v. Leonard, 529 F.3d 83, 89 (2d Cir. 2008) (“[A]n interest in an LLC is the sort of instrument that requires ‘case-by-case analysis’ into the ‘economic realities’ of the underlying transaction.”). but limited liability companies have nonetheless experienced exponential growth in popularity since they were first created.402“LLCs are far and away the most popular legal entity form for new businesses.” Eric H. Franklin, A Rational Approach to Business Entity Choice, 64 Kan. L. Rev. 573, 586 (2016). Given all of this, crypto industry complaints about the uncertain application of existing laws often seem like a pretext for an unwillingness to comply.

It may be that running a legally compliant business is not economically viable for some crypto industry participants, but without techno-solutionism to cloud our vision, we may be glad to see the end of businesses that have little to recommend them other than regulatory arbitrage. While Brummer, Yadav, and Zaring have argued that regulatory agencies “risk being viewed as less technocratic and expert and driven more by selfish, rather than public interests” when they bring crypto enforcement actions,403Brummer, Yadav & Zaring, supra note 343, at 1302. this assumes a techno-solutionist public interest in seeing the crypto industry and its innovation flourish. While enforcement actions may indeed lessen the legitimacy of regulators in the eyes of the crypto industry and some crypto users, those same enforcement actions may very well bolster the legitimacy of regulators in the eyes of other members of the public (the vast majority of whom are distrustful of crypto).404Faverio, Dawson & Sidoti, supra note 167. And of course, once something goes wrong, the public will always ask, “[w]here were the regulators?” Techno-solutionist accommodative inaction can be very damaging to the legitimacy of a regulatory agency in retrospect.

  1. Looking Forward: Financial Regulation and AI

AI is currently the “buzziest” technology both within and outside of the financial industry. In the wake of OpenAI’s launch of ChatGPT, much of the hype, fervor, and VC funding pertaining to crypto shifted to AI-related technologies.405Hannah Miller, Tech Investors Bet on AI, Leaving Crypto Behind, Bloomberg (July 11, 2023, 11:01 AM), https://www.bloomberg.com/news/articles/2023-07-11/startup-investors-are-betting-on-ai-and-leaving-crypto-behind [https://perma.cc/FFB8-UR7X]. These AI technologies can be applied in any number of different fields,406For an indication of the many policy areas affected by AI, see FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, White House (Oct. 30, 2023), https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/?utm_source=substack&utm_medium=email [https://perma.cc/782F-CNBZ]. but this Section’s discussion will focus primarily on whether financial regulation will be stymied by techno-solutionism associated with the application of AI-related technologies to financial services.

As a starting point, it’s worth noting that AI-related technologies are particularly likely to invite techno-solutionism because they are especially effective in obscuring the reality of human agency and incentives: the very name “artificial intelligence” connotes autonomy and superiority to human flaws and imperfections. The technologies we call “artificial intelligence” do not currently display characteristics of real human intelligence, though—they lack the ability to reflect on or engage with their existence in a world where others exist too.407For an overview of the debate on what is meant by “intelligence” in the context of AI, see generally Christopher Newfield, How to Make “AI” Intelligent; or, The Question of Epistemic Equality, Critical AI , October 2023, at 1. Some have suggested that the term “applied statistics” is therefore a more accurate description of these technologies, but the “AI” label has stuck.408Madhumita Murgia, Sci-fi Writer Ted Chiang: “The Machines We Have Now Are Not Conscious,” Fin. Times (June 2, 2023), https://www.ft.com/content/c1f6d948-3dde-405f-924c-09cc0dcf8c84 [https://perma.cc/CCE7-RVR8]. This label can serve to distract people from the important role that human computer scientists play in programming the software that will “learn” from the data presented to it, and the role that data scientists can play in selecting and curating that data.409While we may hear that “there are no bad AI systems, only bad AI system users” and “there is nothing value-neutral about any information technology, including AI systems.” Hartzog Testimony, supra note 17, at 8–9. The term “learn” is in quotation marks because AI does not learn in the same way a human does. AI does not seek to establish causality or engage in formal reasoning but instead looks for correlations (even weak correlations) in data and uses these to formulate decision-making rules that will guide it in performing an assigned task410Solow-Niederman, supra note 149, at 25. (hence the moniker “applied statistics”).

This explanation of AI encompasses “generative AI” like ChatGPT, as well as earlier generations of machine learning technology that were used in financial services prior to the development of generative AI. The primary difference is that unlike previous iterations of AI, generative AI can generate uniquely constructed content of its own in the form of things like text, images, and code.411Linklaters, AI in Financial Services 3.0: Managing Machines in an Evolving Legal Landscape 5 (2023), https://www.linklaters.com/insights/thought-leadership/fintech/artificial-intelligence-in-financial-services [https://perma.cc/Z2FP-XZWW]. Despite the developments in Generative AI, most AI-driven financial services applications currently rely on machine learning technologies that were available before the advent of ChatGPT, particularly in risk management and portfolio construction contexts.412Id. at 4–5. There is, however, interest in using Generative AI to improve consumer-facing chatbots and for report summarization; some financial services firms have also expressed interest in using generative AI in regtech tools (for example, fraud detection and AML compliance tools, as well as automated reporting).413Id.

There is a particular interest in the efficiency gains that generative AI can make414Fin. Stability Oversight Council, supra note 225, at 91. “The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do more, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both.” Cory Doctorow, Cory Doctorow: What Kind of Bubble Is AI?, Locus (Dec. 18, 2023), https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai [https://perma.cc/AJ56-H5JE].—but those claims to efficiency are quite techno-solutionist. The large language models (“LLMs”) used for generative AI are extremely expensive to create, and after those sunk costs have been incurred, they will continue to be extremely expensive to maintain and run—at the most basic level, they require significant amounts of electricity and water415Doctorow, supra note 414. See generally Shaolei Ren, Pangfei Li, Jianyi Yang & Mohammad A. Islam, Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models (Mar. 26, 2025) (unpublished manuscript), https://arxiv.org/pdf/2304.03271.pdf [https://perma.cc/B8NE-QAJE]. (as with blockchains, we should not forget the environmental costs of these technologies). Efficiency gains therefore depend on LLMs eliminating the cost of human oversight, but LLMs can “hallucinate” incorrect answers, often informed by specious correlations drawn from lackluster data.416What Are AI Hallucinations?, IBM (Sept. 1, 2023), https://www.ibm.com/topics/ai-hallucinations [https://perma.cc/6WB9-H8XK]. More generally, AI is poorly suited to predicting low-probability but high-stakes events, and widespread reliance on such AI tools could result in more homogenous behavior that ends up undermining assumptions in the data that the tools were trained on.417Allen, supra note 113, at 55–56, 64–65; see also Juan Luis Perez, How AI Will Change Investment and Research, Fin. Times (Nov. 29, 2023), https://ft.com/content/2390e8f3-88ba-40a0-b684-7fb6fada9bde. Because of these limitations, humans who are highly skilled in domain expertise should be kept in the loop to check the output of AI tools if that output is to be used in a high stakes risk management or portfolio construction situations (individuals without this domain expertise are more likely to fall prey to automation biases and defer to the model’s output unquestioningly).418On the importance of domain knowledge experts scrutinizing AI output, see Perez, supra note 417; Doctorow, supra note 414. A combination of AI and human intelligence will often produce the most accurate answers, but that increased accuracy will be very expensive.419Doctorow, supra note 414.

To reduce costs, some in the financial industry may seek to automate their risk management and portfolio construction practices while limiting or dispensing with the use of domain experts—this could ultimately threaten the stability of our financial system.420Allen, supra note 113, at 55–58. AI may also be used to arbitrage regulation. For example, banks could potentially arbitrage an important kind of microprudential regulation known as capital requirements by using “machine learning-capable risk management models” and “selectively exposing those models to data sets that neglect tail risks.”421Id. at 157–58. If tacitly permitted, this kind of arbitrage could result in lower bank capital levels (undermining a cornerstone of financial stability regulation), and could even harden into a regulatory entrepreneurship strategy if industry participants “pressure regulators to certify that the output of a particular . . . tool constitutes sufficient compliance.”422Id.

This arbitrage is a problem of degree, not an entirely new problem. Financial institutions were attempting complex regulatory arbitrage and entrepreneurship strategies with regard to capital requirements long before machine learning came along.423The complexity of regulatory capital requirements “provides near-limitless scope for arbitrage.” Andrew G. Haldane, Executive Director, & Vasileios Madouros, Economist, Bank of England, Speech at the Federal Reserve Bank of Kansas City’s 366th Economic Policy Symposium, “The Changing Policy Landscape”: The Dog and the Frisbee 8 (Aug. 31, 2012), https://www.bis.org/review/r120905a.pdf [https://perma.cc/JN45-MH6L]. In many ways, these old problems have simply been amped up by the inscrutability of AI. Long-standing calls for capital regulation to be simplified would also be quite effective in making capital regulation more robust to AI-facilitated arbitrage.424See id. at 14–19 for one of the most prominent such proposals. Unless and until such reforms are adopted, though, it is true that banking regulators will need increased technological sophistication to scrutinize algorithms and data sets in order to detect AI-enabled arbitrage of regulatory capital requirements.

The use of AI could also amplify consumer protection problems, like those associated with discrimination in the provision of credit.425See supra notes 148–151 and accompanying text. Once again, we have existing regulatory frameworks within which to respond to many of these issues so long as regulators are not too dazzled or cowed by the technology, and the CFPB has indicated its willingness to continue enforcing its anti-discrimination laws when AI tools have been used.426See supra notes 381–83 and accompanying text. In one speech, CFPB Director Chopra noted that

AI certainly poses new risks, or at least exacerbates old ones. While many new approaches may be necessary, it is clear we must all make use of existing laws and regulations on the books. In the United States . . . there is no ‘fancy new technology’ carveout to existing laws. Even if firms are using a complex new algorithm or AI model, they must follow the law.427Chopra, supra note 275.

This is a promising start. Chopra recognizes that many of the problems likely to be caused by the use of AI in finance are familiar ones that should not be accommodated but instead should be addressed with existing regulatory tools. He also remains humble about truly new problems that could emerge from the use of AI and new regulatory tools that may be needed to address them.428Hartzog has recommended the continued application of time-tested legal doctrines like fiduciary duties and consumer protection laws to activities carried out using AI, and—where harms are significant—licensing regimes or even bans. Hartzog Testimony, supra note 17, at 4–6, 11. The question is—given that “personnel is policy”—will other financial regulators and lawmakers follow suit?

The VC industry has invested heavily in AI and has strong incentives to deploy cognitive capture, regulatory arbitrage, and regulatory entrepreneurship strategies in order to make those investments more profitable.429See supra note 405. Andreessen Horowitz has been particularly aggressive in deploying techno-solutionist rhetoric in lobbying for favorable legal and regulatory treatment for crypto430Lipton, Wakabayashi & Livni, supra note 46. and has made it clear that it plans to deploy a similar strategy for AI. In a December 2023 blog post, Andreessen Horowitz’s co-founder Ben Horowitz announced:

We are non-partisan, one issue voters: If a candidate supports an optimistic technology-enabled future, we are for them. If they want to choke off important technologies, we are against them. Specifically, we believe . . . Artificial Intelligence has the potential to uplift all of humanity to an unprecedented quality of living and must not be choked off in its infancy . . . Every penny we donate will go to support like-minded candidates and oppose candidates who aim to kill America’s advanced technological future.431Ben Horowitz, Politics and the Future, Andreessen Horowitz (Dec. 14, 2023), https://a16z.com/politics-and-the-future [https://perma.cc/6NU2-ZMTE].

To give you an example of the kind of “optimistic technology-enabled future” that Horowitz will lobby fiercely to protect from regulation, Andreessen Horowitz has funded a startup at the intersection of AI and crypto known as Worldcoin.432Guo & Renaldi, supra note 130. Co-founded by Open AI-CEO Sam Altman, Worldcoin is using a device known as “The Orb” to collect millions of retinal scans in the developing world in exchange for a crypto asset that has no real value at present, “but someday, Worldcoin says, it’ll form the basis of a new economic system and maybe will also provide a universal basic income stream for the world’s poor.”433Max Chafkin, Don’t Scan Your Eyeballs for Worldcoin’s Magic Beans, Bloomberg (Aug. 7, 2023, 9:30 AM), https://www.bloomberg.com/news/newsletters/2023-08-07/what-s-the-purpose-of-worldcoin-orb-eye-scanning-crypto-token-project [https://perma.cc/5R9K-4DE4]. This is an exquisite example of techno-solutionism: Worldcoin has been designed to respond to problems that do not yet exist, but that Worldcoin’s founder expects his other technology to cause (i.e., the lack of income opportunities that will be available if AI renders many jobs obsolete). If AI does indeed end up eliminating lots of jobs, we will need policy solutions that take into account the dignity of work as well as people’s need for income.434Daron Acemoglu & Simon Johnson, Power and Progress: Our 1000-Year Struggle Over Technology & Prosperity 416–17 (2023). Worldcoin, however, offers (at best) an oversimplified solution to such a complex problem—a potential method for paying people to watch their screens once they no longer have jobs. And Worldcoin downplays the privacy concerns associated with training its models on the biometric data of vulnerable people and the predatory aspects of paying those people for their biometric data with a potentially worthless crypto asset.435Guo & Renaldi, supra note 130.

It remains to be seen how lawmakers and regulators will respond to Silicon Valley’s techno-solutionist appeals to allow this and other kinds of AI-related innovation to flourish.

IV.  A Possible Antidote to Techno-Solutionism

The primary goal of this Article has been to identify the techno-solutionism rife in the fintech industry and to explore how this techno-solutionism has both stymied and been facilitated by financial regulation. Techno-solutionist narratives gain some of their power through unchallenged repetition,436Cohen, supra note 17, at 104. and so this very act of calling out fintech’s techno-solutionist narratives will hopefully go some small way toward inoculating lawmakers, regulators, and the public against fintech’s most outlandish claims.437Campbell-Verduyn & Lenglet, supra note 13, at 469 (stressing “the value added for political economy of scrutinising how the visions and materialisation of technology fail”). As Morozov notes in the postscript to his book, we cannot eliminate solutionism, but we can “ridicule” it,438Morozov, supra note 8, at 355. hopefully depriving it of some of its power.

Right now, there may not be much more that can be done to diminish techno-solutionism and its detrimental impacts on regulatory regimes designed to protect the public from harm. Techno-solutionism is entrenched in our society in many ways: by corporate political expenditures (including expenditures by venture capitalists, as already discussed);439See supra notes 325, 430–31, and accompanying text. by the lack of political access for the very communities impacted by the problems to be solved;440Byrum & Benjamin, supra note 16. by challenges in inducing skilled technologists to work for government agencies;441Hilary J. Allen, Resurrecting the OFR, 47 J. Corp. L. 1, 31 (2021). by tech industry funding of academic research on technology and its impacts;442Joseph Menn & Naomi Nix, Big Tech Funds the Very People Who Are Supposed to Hold It Accountable, Wash. Post (Dec. 7, 2023), https://www.washingtonpost.com/technology/2023/12/06/academic-research-meta-google-university-influence [https://perma.cc/TR6V-33PK]. by limited public support for public sector innovation (which could stand as a counterfactual techno-solutionist narrative);443Mazzucato, supra note 48, at 12–15. by computer science pedagogy that fails to teach students how to conceptualize or contextualize the problem to be solved;444Ohm & Frankle, supra note 36, at 779. and surely much more. This Article has consistently rejected techno-solutionism’s silver bullet solutions, and there are also no silver bullet solutions for addressing techno-solutionism itself.

Still, as this Article has emphasized, personnel is policy, and we have already seen examples of policymakers who are predisposed toward pushing back against fintech’s harms—these kinds of policymakers can be empowered by the articulation of an alternative to techno-solutionism. As a heuristic, techno-solutionism will default to permitting technological innovation, regardless of potential harms: it becomes easy to “simply assume the rightful existence of [technologies] and go straight to building guardrails so they can flourish.”445Hartzog Testimony, supra note 17, at 12. When it comes to assessing fintech’s claims to improve financial inclusion, efficiency, competition, and security, what is needed is a fundamental shift in rhetoric and perspective away from techno-solutionism and toward contextually-informed skepticism regarding technological solutions.

Adopting a posture of contextually informed skepticism is precautionary to a degree but does not require the embrace of an overly strong “precautionary principle” where activities have to be proven riskless before they can proceed. Contextually informed skepticism is therefore not incompatible with innovation; instead, it sets up incentives for the kind of innovation that is mindful of harms and consequences.446Cohen, supra note 17, at 90, 92. It is, however, likely that contextually informed skepticism from regulators will impede some innovation in the name of protecting the public from harm—which will inevitably invite intense criticism from the tech industry.447In his manifesto, Andreessen decries precautionary approaches as preventing “virtually all progress since man first harnessed fire,” as well as calling them “our enemy,” “evil,” and “deeply immoral.” Andreessen, supra note 4. However, a posture of contextually informed skepticism can embolden policymakers to take this industry criticism with a grain of salt, because contextually informed skepticism recognizes that not all innovation is socially beneficial and that the tech industry’s appreciation of potential public harm will often be skewed by financial incentives and lack of domain expertise.448Ford has also stressed that “[r]egulatory staffers . . . need sufficient confidence in their own judgment and a healthy degree of skepticism about industry.” Cristie Ford, New Governance in the Teeth of Human Frailty: Lessons from Financial Regulation, 2010 Wis. L. Rev. 441, 474 (2010). This kind of perspective shift is desperately needed with regard to crypto, for example, where the harms are many, the benefits few, and yet a bipartisan group of lawmakers has shown itself willing to support industry-favored deregulation designed to encourage more crypto innovation.449See supra notes 325–26 and accompanying text.

This is by no means a call for fintech innovators to stand down—society often benefits from techno-optimists’ efforts to push frontiers.450For a discussion of the socially valuable residue of the dot.com bubble, see Doctorow, supra note 414. But when the stakes are high, this yin of techno-optimism needs to be balanced by the yang of contextually-informed skepticism from regulators or else history and domain expertise will be ignored and harms will proliferate unchecked. This Article has already explored why finance is an arena in which the potential harms are too significant for unfettered technological experimentation.451See supra notes 291–300; see also Allen, supra note 113, at 23–24. Finance might also be different in another respect: the potential benefits of technological innovation may prove to be structurally limited in finance. Often, with technology, it is the users who unlock truly unexpected innovative use cases through their experimentation.452“[T]he public has a huge range of intentions and desires and often brings far more imagination to new technologies than those who first market [or design] them.” David E. Nye, Technological Prediction: A Promethean Problem, in Technological Visions: The Hopes and Fears That Shape New Technologies 159, 170 (Marita Sturken et al. eds., 2004). In the financial industry, though, much of the innovation that has occurred has been driven by the supply-side, rather than consumer demand.453Awrey, supra note 122, at 263–67. It may be that where money is at stake, industry (including the crypto industry, which tends towards economic centralization)454Aramonte et al., supra note 182, at 27–29; Allen, supra note 284, at 924. will afford users limited ability to actively construct how they receive their financial services. If this is the case, then unexpected uses of technology will have limited opportunities to emerge—and if technological experimentation is primarily benefitting the supplier rather than the users, then there is far less reason for policymakers to accommodate it.

Conclusion

Further research on how to disrupt techno-solutionism is welcome, because if fintech is to serve as a force for good in society, it needs to be severed from techno-solutionism. We need to recognize that if new technology is adopted without addressing the broader context in which it operates, then discrimination, distributional inequalities, concentrations of power, privacy incursions, and other harms will continue to proliferate. When it comes to fixing finance, technological innovation will not obviate the need for the hard slog of structural reform. Furthermore, where technological tools do have a role to play in addressing complex structural problems, they may be tarnished by “techlash” unless we can find a way to address techno-solutionism.455One meta analysis of public discourse between 2010–2020 found that discussion of big tech is dominated not by solutionist appeals for self-regulation but instead by “calls to regulate big tech, growing critiques of technology’s influence in society, and declining discussion of the tech sector as a driver of economic growth.” Short et al., supra note 55, at 6; see also Shira Ovide, Big Tech’s Backlash Is Just Starting, N.Y. Times (July 30, 2020), https://www.nytimes.com/2020/07/30/technology/big-tech-backlash.html [https://web.archive.org/web/20231029031307/https://www.nytimes.com/2020/07/30/technology/big-tech-backlash.html]; Edward Ongweso Jr., The Incredible Temper Tantrum Venture Capitalists Threw Over Silicon Valley Bank, Slate (Mar. 13, 2023, 11:24 AM), https://slate.com/technology/2023/03/silicon-valley-bank-rescue-venture-capital-calacanis-sacks-ackman-tantrum.html [https://perma.cc/3DC4-WPU3].

Financial regulators need to adopt a posture of contextually informed skepticism instead of techno-solutionism, keeping firmly in mind that they have express statutory mandates to protect the American public from harm—and no express mandates to facilitate technological innovation. If financial regulators can resist cognitive capture and enforce existing laws such that regulatory arbitrage and regulatory entrepreneurship are not profitable strategies, then technology is more likely to deliver benefits without serious social harms. Where technologies pose genuinely new problems, then Congressional action will be needed, and that action should also proceed from a position of contextually informed skepticism. To slightly adapt testimony from AI and privacy expert Woody Hartzog, “[l]awmakers will make little progress until they accept that the toothpaste is never out of the tube when it comes to questioning and curtailing the design and deployment of [technology] for the betterment of society.”456Hartzog Testimony, supra note 17, at 11.

 

98 S. Cal. L. Rev. 761

Download

* Professor of Law, American University Washington College of Law. Many thanks to Tonantzin Carmona, Julie Cohen, Jeremy Kress, Pat McCoy, Chris Odinet, Art Wilmarth, and Jeff Zhang for reading and providing feedback on earlier drafts. This paper also benefitted enormously from comments and conversations during workshops at the University of Florida, the Reserve Bank of New Zealand, the IMF’s Internal Fintech Forum, and the meeting of the Technology Section at the Academy of Legal Studies in Business Conference. Information regarding the status of technologies, regulation, and legislation is current as of October 2024, but these are used to illustrate broader themes that will remain relevant as new technologies and regulatory and legislative approaches evolve.