Renovating Federal Housing Law to Help Protect Tenants with Disabilities

Many individuals with disabilities contact landlords to inquire about rental housing only to learn that the landlord’s dwelling units are inaccessible. And federal anti-discrimination laws applicable to private rentals are often unhelpful. First, Title III of the Americans with Disabilities Act (“ADA”) applies only to the public areas of rental housing complexes and does not extend to dwelling units. Second, the Fair Housing Act (“FHA”) requires persons with disabilities, who have a median household income far below the national average, to pay for any structural modifications needed to facilitate their use of housing even though such retrofitting costs several thousand dollars on average. Third, it is often unclear whether landlords or their properties receive federal financial assistance that subjects them to the Rehabilitation Act of 1973 (“Rehab Act”), so individuals with disabilities may find it difficult to enforce landlords’ obligation to implement and pay for reasonable modifications under this statute. People with disabilities thus lack equal access to rental housing and cannot fully participate in American society. But the ADA, FHA, and Rehab Act were all enacted with the goal of integrating those with disabilities into public life.

Congress can address this persistent housing inequality by renovating the ADA, FHA, and Rehab Act to eliminate their coverage gaps. These incremental changes to federal law make sense as a policy matter because they will shift the cost of accessible rental dwellings from individuals with disabilities—who tend to have low incomes—to wealthy corporate property managers that can better absorb such expenses. And freeing people with disabilities from the economic constraints of their disability will help them live independently and in turn facilitate their development of a personal identity and full integration into their communities. This increased visibility of individuals with disabilities in everyday life will enhance the diversity of the American social fabric, which is an important step in reducing anti-disability attitudes and prejudices that too often impact interactions between people with disabilities and their nondisabled peers.


Fractionalization to Securitization: How the SEC May Regulate the Emerging Assets of NFTs

Blockchain technology opened the world to a variety of new technological advances that reshaped the way humans interact and transact with one another. One of the most recent and trending applications of blockchain technology is non-fungible tokens or “NFTs.” NFTs are unique digital tokens encoded on a blockchain that represent ownership of specific digital assets such as artwork, collectibles, videos, domain names, and so forth. NFTs can be thought of as certificates of authenticity. Although NFTs resemble cryptocurrencies, NFTs are non-fungible. This means that no two tokens are identical, and they are not interchangeable with one another. They are valuable because each comes with a unique digital signature or ledger that allows it to be easily authenticated, verified, and transferred. This has completely revolutionized the way people trade different assets, and many NFTs are sold online for millions of dollars. Additionally, NFTs can come in different forms, ranging from whole NFTs of digital artwork or real property to fractionalized NFTs (“f-NFTs”) that break up ownership of an NFT into multiple “shards” so a larger number of people can own a piece of a single digital asset.

NFTs are a new and influential technology that can have far-reaching implications for current securities law, intellectual property law, and other legal areas. In securities law, NFTs have established a new way for people to invest and gain returns from digital assets. This has disrupted the legal and financial sectors and created new risks for investors such as fraud and hacking. With the recent rise of NFTs as potential investment assets comes the possibility of government regulation to protect investors. The growing use of NFTs alerted many regulators, such as the Securities and Exchange Commission (“SEC”), to the possibility of regulating these digital assets as some type of security. However, regulatory and securities laws struggle to keep pace with emerging innovations and financial technologies like NFTs. Much of the SEC’s limited guidance focuses on cryptocurrencies and blockchain technology generally, with little that specifically addresses NFTs as securities. Leaders in the industry have requested no-action letters, but the SEC remains silent. Leaders believe “it would be a lot easier to operate in an environment where sensible ground rules are laid out that allow for innovation.” NFT creators, buyers, and exchange platforms must rely on general SEC regulations of other digital assets to guide their decision-making and avoid regulation. Given these issues, it is important for the SEC to provide guidance on NFTs to protect and inform potential investors, while also ensuring issuers can properly develop NFTs and NFT platforms without the fear of strict regulation.

This lack of guidance stems from the fact that many regulators are divided on whether NFTs can be classified as an “investment contract” security or regulated by the SEC. The Securities Act of 1933 defines a “security” as “any note, stock, . . . bond, debenture, . . . [or] investment contract.” In 1946, the U.S. Supreme Court developed a four-pronged test in SEC v. Howey to clarify whether an asset is an “investment contract” security. The Howey test holds that a contract, transaction, or scheme is an “investment contract” when an individual (1) makes an investment of money (2) in a common enterprise (3) with a reasonable expectation of profit (4) derived from the efforts of others. While some argue that NFTs are not an “investment contract” security under the Howey test because they do not satisfy either the second or fourth prong, others believe that fractionalized NFTs could pass all four prongs. There has been little in-depth legal research and analysis that focuses specifically on f-NFTs as securities and the potential regulatory framework that could control this digital asset. The legal field must catch up with rapid technology developments and take a revised look at current regulations to see how they can be applied to NFTs. To analyze if the SEC can regulate NFTs, two main questions need to be addressed: (1) whether certain NFTs can be classified as a security under federal law; and (2) if NFTs are securities, how SEC requirements can be applied to best protect the public’s interests.

To answer these questions, this Note will apply the Howey test to f-NFTs and identify the risks and opportunities of regulating them as securities to better understand how to protect investors while also allowing for the innovation of digital assets. This Note will first conclude that NFTs can be “investment contract” securities and satisfy the four Howey prongs if they are fractionalized. First, when purchasing f-NFTs, buyers make an investment using digital currency that is considered “money.” Second, having an NFT tied to the success of a company or celebrity, or having multiple fractional interests in an NFT that are shared by a pool of investors, are investments in the “common enterprise” of that individual company, celebrity, or whole NFT. Third, f-NFTs have a “reasonable expectation of profit” given that they are easily traded on secondary markets and promoted as a unique way to “unlock liquidity.” Lastly, an f-NFT’s financial return can be derived from the efforts of platforms or issuers to maintain or improve the f-NFT market and support the popularity or price of the digital asset. This Note will then explain that even if f-NFTs are deemed securities, the SEC will need to adopt a clearer regulatory framework for f-NFT issuers, buyers, and platforms by modernizing established regimes of other digital assets. The SEC may have trouble regulating issuers or buyers of f-NFTs because the decentralized networks of f-NFTs already provide a form of digital “registration” that gives sufficient information to investors and prevents fraud through the easily verifiable digital ledgers of an f-NFT’s transactions. However, a platform that creates and trades f-NFTs may be a security “exchange” under federal law, and thus the SEC may be able to place some modified regulations on these f-NFT platforms, such as notice and disclosure requirements or compliance with capacity, integrity, and security standards, which ensure f-NFT and investor protection.

Part I provides a general overview of NFTs by explaining the blockchain technology that powers them. This Part illustrates what NFTs are, how they work, the concept of fractionalizing NFTs, and the principal applications and potential importance of NFTs in the financial markets. Part II lays out the underlying securities law—in particular SEC v. Howey—and the SEC’s current regulatory framework for other blockchain-based financial assets such as cryptocurrencies and digital tokens. Part III applies the Howey test to f-NFTs to show that they can be classified as securities and bolsters this argument by comparing f-NFTs to a digital asset (DAO Tokens) that the SEC has previously determined to be an “investment contract.” Part IV analyzes the pros and cons of regulating certain NFT issuers, buyers, or exchange platforms and provides recommendations for an NFT regulatory framework using comparisons to other developed digital asset platforms. Part V provides a preliminary exploration of existing regulatory models like those that govern traditional stocks and Real Estate Investment Trusts (“REITs”) and how they can be applied to f-NFTs. 

I.  NFT BACKGROUND: A TECHNICAL OVERVIEW OF NFTS

A.  TECHNICAL ASPECTS OF NFTS

An NFT is a cryptographic unit of data or digital signature stored in a “blockchain” that represents the ownership of a unique digital asset or real-life object. Since they use blockchain technology, NFTs are typically bought and sold online with cryptocurrency. NFTs are similar to cryptocurrencies such as Bitcoin and Ethereum because they all use blockchain technology to create a digital object (currency or token) using units of data on a digital ledger. The only difference is that digital currencies are meant to be fungible, in that one Bitcoin is the same as and interchangeable with another Bitcoin, while an NFT is meant to be non-fungible, in that each one is one-of-a-kind and not exchangeable with another NFT. The underlying data of an NFT is unique because there can only be one owner, and that person is the only one who can access or transfer that NFT. This non-fungibility and use of blockchain allow NFTs to have a built-in proof of ownership that is easily authenticated, create exclusivity, and allow for verified transfers.
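The contrast between fungible currencies and non-fungible tokens can be pictured with a short, illustrative Python sketch. The classes and method names below are hypothetical and do not correspond to any blockchain’s actual interface: a cryptocurrency-style ledger tracks only interchangeable balances, while an NFT-style registry keys each token to a unique identifier held by exactly one owner who alone can transfer it.

# Illustrative sketch only, not a real blockchain client or token standard.

class FungibleLedger:
    """Cryptocurrency-style ledger: units are interchangeable, so only balances matter."""
    def __init__(self) -> None:
        self.balances: dict[str, float] = {}

    def transfer(self, sender: str, receiver: str, amount: float) -> None:
        if self.balances.get(sender, 0.0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] = self.balances.get(sender, 0.0) - amount
        self.balances[receiver] = self.balances.get(receiver, 0.0) + amount


class NFTRegistry:
    """NFT-style registry: each token ID is unique and has exactly one owner."""
    def __init__(self) -> None:
        self.owner_of: dict[int, str] = {}

    def mint(self, token_id: int, owner: str) -> None:
        if token_id in self.owner_of:
            raise ValueError("token ID already exists")  # non-fungible: no duplicates
        self.owner_of[token_id] = owner

    def transfer(self, token_id: int, sender: str, receiver: str) -> None:
        if self.owner_of.get(token_id) != sender:
            raise PermissionError("only the current owner can transfer this token")
        self.owner_of[token_id] = receiver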

B.  BLOCKCHAIN TECHNOLOGY

NFTs rely on blockchain technology, which creates a secure, decentralized network for transactions of various digital assets. The blockchain is essentially a “chain” of “blocks,” each containing specific information regarding a digital asset and its transactions that is then stored on a digital, secure, peer-to-peer ledger. An NFT’s underlying data is stored in the form of a “smart contract” and a unique identification hash, bundled together in “blocks” that are all “chained” together in a distributed network. A smart contract is defined as “a computerized transaction protocol that executes terms of a contract” and is meant to minimize fraud and transaction costs. In other words, smart contracts are programs stored on a blockchain that automatically execute certain terms of a contract once certain predetermined conditions are met. Each blockchain “block” contains three components: (1) data, (2) the hash of the block, and (3) the hash from the previous block. The hash is a digitally generated string of digits and letters used to identify each block in a blockchain structure and acts as a type of unique fingerprint. The data for an NFT “block” includes a “smart contract” that points to where an NFT is located on the internet and how to retrieve it, dictates the terms of a transaction, provides a verification of ownership, and holds a ledger of the token’s ownership history and transaction record.
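The three-part block structure described above (data, the block’s hash, and the previous block’s hash) can be approximated in a few lines of illustrative Python. This is a simplified sketch: real blockchains add consensus rules, digital signatures, and Merkle structures that are omitted here, and the field names are assumptions for illustration only.

import hashlib
import json

def block_hash(data: dict, prev_hash: str) -> str:
    """A block's 'fingerprint': a digest of its contents plus the prior block's hash."""
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list[dict], data: dict) -> list[dict]:
    """Add a new block that points back to the previous block via its stored hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64  # genesis block has no parent
    block = {"data": data, "prev_hash": prev_hash, "hash": block_hash(data, prev_hash)}
    return chain + [block]

# Each NFT event (a mint, a sale) is recorded as the "data" of a new block.
chain: list[dict] = []
chain = append_block(chain, {"event": "mint", "token_id": 42, "owner": "alice"})
chain = append_block(chain, {"event": "sale", "token_id": 42, "from": "alice", "to": "bob"})

Because each block stores the previous block’s hash, tampering with an earlier transaction would change every later hash, which is what makes the ledger tamper-evident and easy to verify.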

FIGURE 1:  NFT Blockchain Sequence Diagram

An issuer creates an NFT by deploying code to develop a specific type of “smart contract” that contains a blockchain address, typically on the Ethereum Blockchain, where the smart contract resides. Later, when someone buys or sells an NFT, the blockchain automatically creates a new “block” and a new hash for this block to add this new transaction to the “chain.” The blockchain is essentially recording a “change of state” to the NFT in which the “smart contract” updates its internal ledger and changes the structure of the NFT’s underlying blockchain to reflect the transfer of the NFT to and from different addresses. In short, whenever an NFT is sold, this new ownership is noted as a “new block” in the blockchain ledger, and the digital hash of that NFT is changed.

When purchasing an NFT, you are only buying exclusive access to the unit of data that contains the NFT’s location and are relying on the issuer’s obligation to ensure authenticity. You do not gain any property rights in the actual digital asset, such as intellectual property rights (the right to copy, the right to destroy, and so forth). Similar to buying a painting, when buying an NFT, you are only buying display rights or the right to say that you own it, but nothing else. You are mostly buying a digital certificate of ownership and authenticity or unique access to a digital object, not the actual digital object itself. In other words, you own the one-of-a-kind map of where the NFT is located and are the only one who has access to it. The underlying NFT is typically hosted or located on a regular Hypertext Transfer Protocol (“HTTP”) Uniform Resource Locator (“URL”) web address on the internet or on an InterPlanetary File System (“IPFS”) hash, which is a “system designed for hosting, storing, and accessing data in a decentralized manner.” Using a regular HTTP web address is typically very risky given that a server owner could easily change the underlying content of that particular address and completely erase the actual NFT content that was originally purchased. However, when housing an NFT on IPFS, the NFT gets assigned a unique content identifier (“CID”) hash that links to the data in the IPFS network. Using an IPFS CID hash, as opposed to an HTTP URL, allows someone to find the NFT based on its content rather than by its location on a server. Thus, if the content of the NFT is changed, the original CID link would break and create a new one.
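The practical difference between location addressing (an HTTP URL) and content addressing (an IPFS CID) can be shown with a minimal sketch. Real IPFS CIDs use multihash and multibase encodings rather than raw SHA-256 hex digests, so the function below only approximates the idea.

import hashlib

def content_id(content: bytes) -> str:
    """Address derived from the content itself, not from where the content is stored."""
    return hashlib.sha256(content).hexdigest()

original = b"<original NFT artwork bytes>"
cid = content_id(original)          # recorded with the NFT when it is minted

# A host can silently swap the file behind an HTTP URL and the URL still "works,"
# but under content addressing the replacement bytes hash to a different identifier,
# so the original CID no longer resolves to the tampered content.
tampered = b"<replacement bytes>"
assert content_id(tampered) != cid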

Even though NFTs only give a type of “bragging rights,” they provide various advantages that have changed the tech and financial markets. The benefits of an NFT are that it is easy to authenticate its originality, establish its exclusivity, and transfer the asset. The permanent digital ledger inherent in an NFT acts as a record of ownership and allows for easy traceability across the blockchain network so that the original creator or past owners can be easily traced through their past transactions. This has made NFTs a highly valuable avenue to establish verified ownership over assets such as digital artwork, digital trading cards, video highlight reels, social media posts, collectibles, and even real property. An NFT also has the unique capability to internally incorporate royalty agreements into its “smart contract,” where it automatically carries out an agreed-upon payment system whenever the NFT is licensed, resold, or used for some particular purpose. This has provided content creators with new ways to continuously and easily monetize their work through NFTs. Lastly, NFTs have created a new way for people to invest their money in digital assets. With billions of dollars recently being poured into the NFT market, many investors have flocked to these digital assets as a potential high-risk investment strategy. However, NFTs have become the target of some security breaches and hacking due to their novelty and outdated or inefficient security protocols. Additionally, the value of NFTs and their potential returns can be volatile and speculative because they are only worth as much as other people are willing to pay for them. An NFT’s appreciating value seems to be derived either from its creator or its scarcity. Thus, depending on these two factors, investors could either win the jackpot by reselling their NFT for a large gain or end up with a worthless digital asset and a large loss.

C.  FRACTIONALIZATION OF NFTS

One major innovation that has disrupted the way people view and use NFTs as investments is the concept of fractionalizing NFTs. Fractional NFTs, or “f-NFTs,” break an NFT into pieces, or “shards,” which can be subsequently traded and sold in the market at a lower price than the NFT as a whole. F-NFTs represent a fraction of the larger digital asset, allowing an investor to share a partial interest in an NFT with other investors. Given that NFTs are routinely sold individually for thousands or millions of dollars, f-NFTs democratize these investments such that average investors can now purchase a smaller portion of a high-priced NFT. F-NFTs opened up access to NFT markets and allowed more people to invest in these new digital assets.

There are currently multiple platforms that facilitate the creation and trading of f-NFTs, such as Niftex, Fractional.art, and DAOfi. These f-NFT platforms allow owners to break NFTs into multiple shards and sell them at an initial fixed price. The shards can subsequently be traded in an open market on the platform. On Niftex, an f-NFT is created through a four-step process: (1) “Owners of NFTs create fractions (‘shards’) by choosing issuance and pricing”; (2) these fractions are then put on sale on the platform at a fixed price for two weeks or until they sell out; (3) once the fixed sale period ends, the fractions can be traded on a secondary market; and then (4) a whole NFT can be fully retrieved by purchasing all of the shards through the platform’s special “Buyout Clause.” This Buyout Clause is embedded within an f-NFT’s smart contract and gives f-NFT investors who own a particular percentage of an NFT’s shards the opportunity to purchase the remaining shards and thereby own the whole NFT. F-NFT platforms have also incorporated the ability to automatically give issuers a portion of the newly created f-NFT shards or to pay them some type of “curator fee.”
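A rough sketch of this fixed-price-sale-plus-buyout flow might look as follows; the class name, method names, and fifty-percent buyout threshold are hypothetical placeholders rather than Niftex’s actual contracts or parameters.

from dataclasses import dataclass, field

@dataclass
class FractionalizedNFT:
    token_id: int
    total_shards: int
    fixed_price: float                      # per-shard price during the initial sale
    buyout_threshold: float = 0.5           # share of shards needed to trigger a buyout
    holdings: dict[str, int] = field(default_factory=dict)

    def buy_at_fixed_price(self, buyer: str, n: int) -> None:
        """Step 2: shards sell at a fixed price until the issuance is exhausted."""
        if sum(self.holdings.values()) + n > self.total_shards:
            raise ValueError("not enough shards remaining in the fixed-price sale")
        self.holdings[buyer] = self.holdings.get(buyer, 0) + n

    def can_trigger_buyout(self, holder: str) -> bool:
        """Step 4: a large enough holder may buy the remaining shards for the whole NFT."""
        return self.holdings.get(holder, 0) / self.total_shards >= self.buyout_threshold

# Example: fractionalize one NFT into 1,000 shards, sell some at the fixed price,
# then check whether a holder can invoke the buyout clause.
fnft = FractionalizedNFT(token_id=42, total_shards=1000, fixed_price=0.05)
fnft.buy_at_fixed_price("alice", 600)
fnft.buy_at_fixed_price("bob", 100)
print(fnft.can_trigger_buyout("alice"))  # True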

NFT issuers and platforms have become very creative in the ways in which they utilize and develop this digital asset. One theory is that platforms could put numerous NFTs into one basket and sell f-NFTs of that basket as an investment product or security (“f-NFT bundles”). While some people do not think a traditional NFT could be a security, an f-NFT may be deemed a security under U.S. securities law. SEC Commissioner Hester Peirce warned issuers of f-NFTs that “the whole concept of an NFT is supposed to be non-fungible [meaning that] in general, it’s less likely to be a security,” but if issuers sell fractional interests in NFTs or NFT bundles, “you better be careful that you’re not creating something that’s an investment product—that is a security.” Peirce argued that “the definition of a security can be pretty broad,” and thus f-NFTs could fall within the SEC’s definition of a security and be subject to some form of regulation. With the high costs of a single NFT, the growing availability of blockchain platforms in the mainstream, and the large development of decentralized finance and decentralized applications, “the continued fractionalization of NFTs is almost inevitable.”

II.  LEGAL BACKGROUND: DEFINING “SECURITIES”

The main statutes governing securities regulation are the Securities Act of 1933 (“Securities Act”) and the Securities Exchange Act of 1934 (“Exchange Act”). While the Securities Act mostly deals with the issuance of securities, the Exchange Act governs exchanges, brokers, and trading on secondary markets. Together, these statutes establish a registration and disclosure regime that requires any offer or sale of securities to be registered with the SEC and requires issuers of securities to provide accurate and complete disclosures of material information regarding their securities offering or company. These requirements provide key information to investors so that they can make the most informed decisions. The consequences of being subject to these registration and disclosure requirements include filing documents with the SEC whenever securities are sold, such as a Form S-1 registration statement, and filing continuous, periodic reports regarding the company’s business operations and financials, such as Form 10-K, Form 10-Q, or Form 8-K. These statutes, along with regulatory rules, provide definitions and tests to help determine whether an asset is a “security” or an organization is an “exchange” that is subject to federal regulation.

A.  SECURITIES ACT OF 1933

The Securities Act makes it illegal for an issuer to offer or sell any unregistered security within interstate commerce unless the security is exempt from registration. This statute defines an “issuer” as “every person who issues or proposes to issue any security,” where “person” includes “an individual, a corporation, . . . [or] any unincorporated organization.” It also provides a broad definition of different types of assets that could be considered securities under U.S. federal securities law. This definition specifically includes “investment contracts,” which can be seen as a catch-all term for any type of asset that behaves and feels like a security. Thus, it is sometimes difficult to determine if something falls within the definition of a security.

B.  SEC V. HOWEY

In SEC v. Howey, the U.S. Supreme Court created the Howey test to help clarify what an “investment contract” security is under the Securities Act. The defendant, W.J. Howey Company, sold real estate contracts for orange groves in Florida for a fixed price per acre. Howey then encouraged purchasers to set up service contracts in which they would lease the land back to the company to farm the orange groves, and in exchange the buyers would receive a share of the profits. The Supreme Court held that these orange grove service contracts were “securities,” because purchasers were buying shares in Howey’s profits from the orange groves through these service contracts, not the actual orange groves themselves. The Court developed a four-pronged test in which “an investment contract for purposes of the Securities Act means a contract, transaction or scheme whereby a person invests his money in a common enterprise and is led to expect profits solely from the efforts of the promoter or a third party.” There have since been a variety of cases that helped develop and clarify each of the four Howey prongs. The first prong of “an investment in money” does not need to be in the form of cash and can be satisfied using a different form of contribution or investment, such as cryptocurrency.

The second prong of “in a common enterprise” requires that the fortunes of the investor be linked to the success of the overall venture or enterprise. “Fortunes” refers to the “profits” (and benefits) or “losses” (and costs) that occur from a certain asset and that affect a person’s position. There needs to be a kind of commonality or relationship, either among investors or between the “promoter” and investors, in which the investor depends on the actions and decisions of the promoter of the asset. A promoter is defined as any individual or organization that helps found and organize the business or enterprise of an issuer of any security or that receives ten percent or more of any class of the issuer’s securities or proceeds from the sale of such securities as consideration for their services or property. Federal courts have typically required that there be either “horizontal commonality” or “vertical commonality” for an asset to satisfy the “common enterprise” prong. Horizontal commonality is defined as the relationship between investors and a pool of other investors. There is commonality when an individual investor’s fortunes are tied to the fortunes of other investors in a common venture by the pooling of assets, usually combined with the pro-rata distribution of profits. Vertical commonality is defined as the relationship between the promoter and the body of investors. Commonality exists when there is a connection between the fortunes (strict vertical commonality) or efforts (broad vertical commonality) of the promoter and the fortunes or efforts of the investors. This type of commonality does not require a pooling of funds.

The third prong of “a reasonable expectation of profits” requires investors to realize some form of appreciation on the development of the asset or participate in the earnings resulting from the use of investors’ funds. The SEC defines “profits” as “capital appreciation resulting from the development of the initial investment or business enterprise or a participation in earnings resulting from the use of purchasers’ funds.” Courts also include “dividends, other periodic payments, or the increased value of the investment” in the definition of profits. However, the SEC notes that “price appreciation resulting solely from external market forces (such as general inflationary trends or the economy) impacting the supply and demand for an underlying asset generally is not considered ‘profit’ under the Howey test.” This prong is very fact-sensitive, and the SEC looks at several factors, like the trading of the asset on secondary markets, identity of the buyers, and marketing efforts, to determine whether an asset satisfies this prong.

Finally, the fourth prong of “from the efforts of others” is satisfied when the promoter or issuer of an investment creates or supports the market for these assets or the value of the asset is dependent on the promoter’s efforts in generating demand. In Howey, the Supreme Court understood that the Securities Act’s definition of a “security” is broad, so it argued that “[f]orm was disregarded for substance and emphasis was placed upon economic reality.” Thus, when determining whether something can be considered a security, one needs to focus on the specific circumstances, facts, and economic impact of the particular asset. For an asset such as digital currencies or tokens, this test may consider factors such as the token’s design, issuance, and how it interacts with its platform or blockchain. Depending on how an NFT is created, structured, marketed, and sold or distributed, such NFTs could be deemed securities. This would mean that any sale of this NFT would be subject to the existing securities law framework.
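Because the Howey analysis is conjunctive, it can help to picture it as a simple checklist. The sketch below is a toy illustration, not legal advice or the SEC’s actual methodology; it merely records the four prongs as yes-or-no facts about a hypothetical offering and reports whether all of them are present.

from dataclasses import dataclass

@dataclass
class OfferingFacts:
    investment_of_money: bool        # prong 1: value exchanged (cash, crypto, etc.)
    common_enterprise: bool          # prong 2: horizontal or vertical commonality
    expectation_of_profit: bool      # prong 3: buyers reasonably expect gains
    efforts_of_others: bool          # prong 4: gains depend on a promoter or third party

def may_be_investment_contract(facts: OfferingFacts) -> bool:
    """All four prongs must be satisfied; failing any one defeats the classification."""
    return (facts.investment_of_money and facts.common_enterprise
            and facts.expectation_of_profit and facts.efforts_of_others)

# A collectible bought for display might fail prongs 2 through 4, while a fractionalized
# NFT marketed for returns and supported by a platform might satisfy all four.
collectible = OfferingFacts(True, False, False, False)
fractional_nft = OfferingFacts(True, True, True, True)
print(may_be_investment_contract(collectible), may_be_investment_contract(fractional_nft))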

C.  CURRENT CASE LAW 

Although there are few settled cases regarding whether certain digital assets are securities, there are a couple of key cases making their way through the court system. One of the leading cases being decided is SEC v. Ripple Labs, Inc., in which the SEC filed an enforcement action against Ripple Labs for selling crypto tokens that the SEC believed were unregistered securities. The SEC argues that Ripple Labs failed to register its offer and sale of about $600 million of its digital asset called XRP to retail investors, which was used to finance the business. The SEC stated that XRPs were investment contract securities because purchasers of XRP invested in a common enterprise, given that XRP’s demand is tied to Ripple’s success or failure in propelling its trading, and because Ripple publicly promised investors that it would “undertake significant entrepreneurial and managerial efforts to create a liquid market for XRP,” which would in turn increase its uses, demand, and price and lead reasonable investors to expect profits from XRPs. Another notable case that provides arguments for and against regulating NFTs as securities is the class action lawsuit filed against Dapper Labs. Dapper Labs created the National Basketball Association’s (“NBA”) Top Shot, which sells NFTs of NBA highlights or “Moments” that can be bought or sold using the blockchain and marketplace Dapper Labs developed. This class action argues that Dapper Labs is selling securities due to how it operates its resale marketplace and promotes the value of its NFTs. The plaintiffs allege that Moments were sold with “the expectation of profit” where “[t]he reality is that the growing fanatical NBA Top Shot database is all about the investment, speculation and appreciation of the Top Shot NFTs and the NBA Top Shot Marketplace.” However, the plaintiffs conceded that NBA Top Shot’s Service Terms of Use state that users “are using NFTs primarily as objects of play and not for investment or speculative purposes.” NBA Top Shot is promoting the NFTs as collectibles as opposed to investments, which weighs in favor of the NFTs not being securities. Nevertheless, some argue that NBA Moments may still be “investment contracts” because Top Shot creates and maintains the sole marketplace for these NFTs and thus could be an unregistered exchange.

D.  SECURITIES EXCHANGE ACT OF 1934 

Once an asset is deemed a “security,” the SEC and the Exchange Act impose numerous regulatory requirements on the “exchanges” or platforms that facilitate the trading of those assets. Section 5 of the Exchange Act makes it unlawful for any broker, dealer, or exchange to effect any transaction in a security unless the exchange is registered as a national securities exchange under section 6 of the Exchange Act or an appropriate exemption applies. Registration as a national securities exchange requires any person or entity that offers or sells securities to the public to provide “full and fair disclosure” through the delivery of a statutory prospectus that contains information necessary to give prospective purchasers the proper opportunity to make an informed investment decision. Under the Exchange Act, an “exchange” is defined as any organization or group of persons (whether incorporated or unincorporated) that maintains or provides “a market place or facilities for bringing together purchasers and sellers of securities” or conducts functions commonly performed by stock exchanges. The Code of Federal Regulations attempts to clarify when an entity must register as a national securities exchange and provides a functional test to assess whether an entity meets the definition of an “exchange” under the Exchange Act. Rule 3b-16(a) states that an organization, association, or group of persons is considered to constitute or maintain an “exchange” if it (1) “brings together the orders for securities of multiple buyers and sellers” and (2) “uses established, non-discretionary methods (whether by providing a trading facility or by setting rules) under which such orders interact with each other.” Rule 3b-16(b) then lays out what is excluded from the definition of an exchange. The SEC has argued that when analyzing whether a “system operates as a marketplace and meets the criteria of an exchange under Rule 3b-16(a),” one must look to “the activity that actually occurs between the buyers and sellers—and not the kind of technology or the terminology used by the entity operating or promoting the system.” Thus, any trading system that meets the definition of an exchange under Rule 3b-16(a), and is not excluded under Rule 3b-16(b), must register as a national securities exchange or operate pursuant to an appropriate exemption.

Exempted entities do not need to register as a national securities exchange under section 6. Rule 3a1-1(a)(2) states that an organization, association, or group of persons is exempt from the definition of “exchange” if it is operating as an alternative trading system (“ATS”) and is in compliance with Regulation ATS. ATSs are SEC-regulated electronic trading systems, often operating as “dark pools,” that match orders from buyers and sellers of securities. The SEC released a report regarding its adoption of new rules and amendments that allow ATSs to “choose whether to register as national securities exchanges, or to register as broker-dealers and comply with additional requirements under Regulation ATS, depending on their activities and trading volume.” ATSs typically face fewer and simpler regulations than national securities exchanges but still have some requirements, such as registering as a broker-dealer, giving notice of initial operations or material changes, providing fair access, keeping records, complying with capacity, integrity, and security standards, safeguarding customer funds and securities, and meeting other reporting requirements.

E.  RULES, REGULATIONS, AND GUIDANCE FROM AGENCIES

In addition to statutes, issuers and platforms of digital assets also rely on statements, reports, and frameworks from the SEC and other regulatory bodies to guide their decisions. As digital assets grew in popularity, the SEC took notice and came out with formal and informal statements regarding its views on cryptocurrencies and tokens. In 2018, SEC Chairman Jay Clayton testified before a Senate committee arguing that cryptocurrencies could be structured as securities products subject to federal securities laws and warned that certain Initial Coin Offering (“ICO”) structures could implicate securities registration requirements. More recently, at the Security Token Summit 2021, Peirce warned issuers of NFTs to be cautious when they create f-NFTs because when used in certain creative ways, they could create a security that is subject to regulation.

The SEC created a branch in 2018 called the Strategic Hub for Innovation and Financial Technology (“FinHub”) to coordinate and respond to emerging financial technology (“fintech”); serve as a public resource by consolidating, clarifying, and communicating the SEC’s views and actions related to fintech innovation; and inform policy research in these areas. In 2019, FinHub published an SEC document called Framework for ‘Investment Contract’ Analysis of Digital Assets, which provided details on how the SEC applies the Howey test to analyze whether digital assets could be considered an “investment contract” security. This is one of the few documents available to guide digital asset creators and platforms.

Although guidance from the SEC regarding digital assets is sparse, there are some cases and reports from the SEC. For example, the SEC issued an enforcement order against the creator of EtherDelta, which provided a marketplace for bringing together buyers and sellers of digital asset securities through the combined use of an order book, a website that displayed orders, and a smart contract run on the Ethereum blockchain. The order found that EtherDelta violated section 5 of the Exchange Act because it operated as an unregistered exchange for digital asset securities using blockchain technology. This is one of the main SEC actions analyzing whether a platform that houses digital assets can be an unregistered securities exchange. Other regulatory bodies have provided reports of their research into the intersection of digital assets and securities law. For example, the Congressional Research Service (“CRS”) published a report containing a broad outline of how federal securities laws and regulations apply to cryptocurrencies, ICOs, and NFTs.

The SEC also published the Decentralized Autonomous Organization (“DAO”) Report, which discusses U.S. federal securities laws and their applicability to the new paradigm of “virtual organizations or capital raising entities that use distributed ledger or blockchain technology to facilitate capital raising and/or investment and the related offer and sale of securities.” The purpose of this report of investigation is to “advise those who would use a [‘DAO Entity’], or other distributed ledger or blockchain-enabled means for capital raising, to take appropriate steps to ensure compliance with the U.S. federal securities laws.” Slock.it created The DAO, which is a “for-profit entity whose objective was to fund projects in exchange for a return on investment.” DAO Tokens represented a type of “crowdfunding contract” that would help raise “funds to grow [a] company in the crypto space.” The DAO offered and sold DAO Tokens in exchange for Ether (“ETH”), a virtual currency used on the Ethereum Blockchain, and the proceeds from these sales were used to fund projects. DAO Token holders had the right to vote on these projects and were entitled to a share of any anticipated earnings from the projects The DAO funded. The DAO platform also had a group of individuals called “Curators” who were given “considerable power” to perform “crucial security functions” and maintain “ultimate control over what projects would be submitted to, voted on, and funded by The DAO.” In applying the Howey test to the DAO Token, the SEC’s DAO Report found that the tokens met the criteria of a security and that The DAO was required to register as an exchange under Rule 3b-16.

Even though there is some guidance for blockchain technologies generally, the SEC has not yet provided any guidance regarding NFTs specifically. Given this limited guidance, many people have requested that the SEC provide regulatory clarity with respect to NFTs so that they know how to proceed. These requests for guidance come in the form of “no-action” letter requests that encourage “the SEC to engage in a meaningful discussion of how to regulate FinTech companies and individuals that are creating NFTs that may be deemed digital asset securities and the platforms that facilitate the issuance and trading of NFTs.” The existing securities framework provides a “crude mechanism” for regulating NFTs, and the SEC needs to reevaluate or reapply these old frameworks to new financial technologies to establish sustainable guidance and prevent NFTs from becoming the “Wild West” of digital investments.

III.  HOWEY TEST: ARE F-NFTS SECURITIES?

Although there are few articles or regulations specifically addressing NFTs, the current view is that NFTs may not be an “investment contract” security that can be regulated by the SEC because an NFT may gain its value through its uniqueness, as opposed to “a common enterprise” (second Howey prong), and any profits realized through an NFT may be derived from regular supply and demand, as opposed to the “efforts of others” (fourth Howey prong). However, to determine an NFT’s ability to be categorized as a security, regulators need to focus on the “economic reality” and specific circumstances, such as how society defines the NFT’s value, how it is utilized, or how it is marketed. On one hand, if the purchaser is a collector and the NFT’s value comes from its uniqueness and artistry, the main purpose of buying the asset is to “consume” it by enjoying its aesthetics; the NFT may also be marketed as allowing buyers to join the ranks of premier owners and connoisseurs of unique digital objects. In such a scenario, an NFT is less likely to be a security. For example, some people may buy a Pudgy Penguins NFT from OpenSea (an NFT exchange website) because they think it is adorable and just want to look at it or display it as a profile picture on social media. On the other hand, if the purchaser is an investor and the NFT’s value comes from its ability to gain a return on investment, the main purpose of buying the asset is to sell it later for a profit; or if it is marketed as an asset that will appreciate in value to give a substantial return, then an NFT is more likely to be a security. Some purchasers’ main goal in buying a Pudgy Penguin may be to increase their capital. In the end, NFTs may gain value from both their uniqueness and their ability to provide a return on investment.

Another prevailing view is that fractionalizing NFTs could create a type of security that is subject to regulation. F-NFTs could be an investment contract under the Howey test depending on the facts and circumstances of the particular f-NFT, such as if you put multiple NFTs into one basket and then sell f-NFTs out of that basket. Although the SEC has yet to initiate any enforcement action against creators or platforms that facilitate the offer and sale of f-NFTs, the SEC and courts have held in many cases that fractional interests in an asset can be a security even if the individual asset itself is not. This Part applies the four prongs of the Howey test to analyze whether an f-NFT can be an “investment contract” security and compares f-NFTs to the DAO Token, which has already been deemed a security.

A.  “MAKES AN INVESTMENT IN MONEY”

F-NFTs most likely satisfy the first prong of the Howey test given that people buy f-NFTs using cryptocurrency. The SEC argues that most digital assets, such as f-NFTs, pass the first Howey prong because they are purchased through an exchange for value. It does not matter that this exchange for value is in the form of digital currency such as cryptocurrency. Courts have held that an “investment of money” does not need to be in the form of cash, and thus purchasing something with cryptocurrency, as is the case with NFTs or f-NFTs, would satisfy this definition. When comparing f-NFTs to the DAO Token, both of these digital assets make an “investment in money” because both purchasers of the DAO Token and f-NFTs use ETH, the digital currency used on the Ethereum blockchain, to buy their respective digital assets.

B.  “IN A COMMON ENTERPRISE”

A traditional NFT may not pass this second Howey prong because its value stems from its uniqueness—not a common enterprise—and there may not be a relationship between the seller or promoter of an NFT and a buyer or investors in that NFT. However, the SEC’s FinHub stated that a “common enterprise” typically exists for investments in digital assets because the fortunes of individual purchasers of digital assets are tied to other investors or tied to the success of the promoter’s efforts to expand a digital asset platform. Also, courts have determined that the “common enterprise” prong is a distinct element of an investment contract analysis and “does not require vertical or horizontal commonality per se.” Thus, there are some arguments that f-NFTs may pass the second prong and have a common enterprise.

Horizontal commonality can be shown for f-NFTs through the fact that if a person owns a partial ownership interest in an underlying NFT, the value of this shard is tied to the fortunes of all the owners of the other shards of that fractionalized NFT. If the value of the underlying NFT increases, the value of each of its shards also increases. Thus, a common enterprise can be found through the relationship between an investor of an f-NFT and the pool of other investors who share ownership of the same fractionalized NFT. One of the very reasons to fractionalize an NFT is to enable smaller investors to “pool resources” together to purchase a smaller interest in an NFT and share in the returns of the whole NFT. This is similar to the investors in the DAO Token who pooled together ETH to help The DAO fund large projects with the hope of a return on their investments. Both the DAO Token and f-NFTs can satisfy horizontal commonality by pooling investors’ assets and tying their interests together. Also, an NFT can be part of a series of similar NFTs, like a collection of artworks by the same person, where the value of one will rise and fall along with the value of the others in the series. The fortune of one NFT investor in the series may be tied to the increase and decrease in fortune of the other NFT investors in the same collection. 

F-NFTs may also satisfy the vertical commonality requirement, given the relationship between the original issuer of the f-NFTs (promoter) and all the purchasers of the f-NFTs (body of investors). A common enterprise exists under broad vertical commonality when the investors are dependent on the promoter’s efforts or expertise for their increased returns. For f-NFTs, a common enterprise may exist because the success of f-NFT investors gaining returns is dependent on f-NFT companies making the effort to fractionalize or bundle different NFTs and maintain the platform to protect f-NFTs and keep trading running. Additionally, strict vertical commonality can be established if f-NFT platforms gain some type of fee percentage from their efforts in fractionalizing and selling f-NFTs. Thus, if f-NFT platforms actively manage or charge fees for handling these assets, then the fortunes of f-NFT platforms are connected to the fortunes of the f-NFT investors. When f-NFT investors succeed, so does the f-NFT company.

Even certain whole NFTs may pass the vertical commonality test. For example, many college and professional athletes have been creating NFTs of themselves through digital artwork, highlight reels, and other digital assets. These NFTs may satisfy the “common enterprise” requirement because the value of the NFT would depend on the rise and fall of the athlete’s career and how much effort that athlete put into increasing their popularity. If the particular athlete who is issuing an NFT does better professionally in their sport or increases in popularity, then the value of their NFT may also increase. In other words, the fortunes of the owners of the athlete’s NFT would rise in correlation with the fortunes and career of the athlete. The same argument can also be made for NFTs from specific artists or celebrities, such as Beeple or Martha Stewart. Investors in Beeple’s NFTs have their fortunes tied to the efforts of Beeple and his other artworks. The value of an investor’s Beeple NFT will benefit from Beeple and his other artwork becoming more popular or valuable. Thus, there are good arguments that f-NFTs fulfill the second Howey prong.

C.  “WITH A REASONABLE EXPECTATION OF PROFIT”

F-NFTs can satisfy the third Howey prong if purchasers buy f-NFTs with the expectation that they will realize some type of gain or profit. Given that this prong is heavily fact-sensitive, the SEC provided a list of characteristics that make it more likely for a digital asset to fulfill the “reasonable expectation of profits” prong. F-NFTs seem to satisfy three of the characteristics listed: (1) the digital asset is “transferable or traded on or through a secondary market or platform,” (2) the issuer continuously “expend[s] funds from proceeds or operations to enhance the functionality or value of the network or digital asset,” and (3) the digital asset is marketed or promoted in a way that would cause a purchaser to have an expectation of profits. To determine whether an f-NFT can be classified as a security under this prong, one needs to focus on the transaction itself and the way the digital asset is offered and sold.

The first characteristic that increases the likelihood of f-NFTs fulfilling the third Howey prong is the fact that investors can transfer or trade these assets on secondary markets or online blockchain platforms. The ability to sell or buy NFTs or f-NFTs on secondary markets such as OpenSea provides proof that the investor may expect to realize some type of return or appreciation on the digital asset through secondary trading. This is much like how DAO Token holders were able to monetize their investments in DAO Tokens by reselling and trading them on various secondary trading platforms and markets.

The second characteristic that leans in favor of f-NFTs satisfying the third prong is the fact that f-NFT platforms may “provide essential managerial efforts that affect the success of the enterprise, and investors reasonably expect to derive profit from those efforts.” The more effort f-NFT issuers make to increase the demand for or value of the digital asset, the more likely the f-NFT is to carry a “reasonable expectation of profits.” Different cases have clarified that efforts to “increase the demand or value” include when issuers or platforms (1) create and manage an “ecosystem” for the digital asset that allows it to increase in value, (2) develop the network to inspire creative uses of its assets, or (3) add a new functionality using the proceeds from the token’s sales. First, f-NFT platforms like Fractional.art and Niftex made “essential managerial efforts” to increase the demand for or value of f-NFTs by taking continuous, active steps to fractionalize NFTs and make them more accessible to more investors. This created a new ecosystem where average investors could pool their funds together to share in the gains of valuable NFTs. Second, fractionalization networks inspired new creative uses such as bundling various NFTs together and selling f-NFTs of this bundle. The value of these f-NFTs would be dependent on the values of all the individual NFTs that the issuer chooses to place in the basket. Lastly, f-NFT platforms created a new functionality for NFTs by adding the ability to fractionalize one NFT into multiple shards. This allows purchasers to buy smaller interests in many different NFTs to diversify their collection, thus minimizing the volatility of this digital asset and increasing the potential returns.

This view of f-NFTs can be compared to the DAO Token that satisfied the third Howey prong because the proceeds from selling the DAO Tokens were used to fund different proposed projects in which holders had the potential to gain a share of the profits from these projects. Also, much like how f-NFT platforms have created an ecosystem for the fractionalized assets, The DAO created a type of ecosystem for its “crowdfunding contracts.” While one may argue that f-NFT platforms are not using the proceeds from selling their tokens to directly improve their network, another may argue that f-NFT platforms collect fees from transactions that occur on their platform and then use these fees to maintain a secure network for f-NFT purchasers. Thus, this characteristic may depend on how the specific f-NFT platform is managed.

The third characteristic that makes f-NFTs more likely to provide a “reasonable expectation of profit” is the way in which f-NFTs are marketed to potential buyers. The SEC provided a list of ways a digital asset could be marketed that weigh in favor of the third Howey prong. F-NFTs may satisfy four of these methods: (1) the “intended use of the proceeds from the sale of the digital asset is to develop the network or digital asset”; (2) a key selling feature of f-NFTs is the ability to readily transfer it; (3) “[t]he potential profitability of the operations of the network, or the potential appreciation in the value of the digital asset, is emphasized in marketing or other promotional materials”; or (4) there is an available market for trading the digital asset or the issuer promises to create or support a trading market. F-NFTs can satisfy these marketing characteristics, and many of them are also found in the DAO Token. 

First, although current f-NFT platforms do not directly market that proceeds from f-NFT sales will be used to develop the network, one can assume that these platforms use the fees they collect from sales to maintain the network and allow for continuous fractionalization of NFTs. Second, the fact that f-NFTs are marketed as being easily transferable on platforms such as Niftex, Fractional.art, or DAOfi leans in favor of there being an “expectation of profit.” This is similar to the DAO Token, which was promoted as being readily available to buy and sell on “a number of web-based platforms that supported secondary trading.” Third, certain f-NFT platforms emphasize that these assets are a unique and better way to unlock liquidity, gain greater exposure and price discovery for your NFTs as fractions on the open market, trade NFTs with lower cost and greater diversification, get access to a variety of unique and iconic digital assets with low price thresholds, or provide liquidity for shard markets and earn transaction or curator fees. These platforms focus on f-NFTs’ ability to increase exposure of a particular NFT in a market and diversify one’s investments in NFTs to spread out the risk of a single NFT losing value. Increased exposure and diversification can increase an f-NFT’s profitability, and a platform’s emphasis on this promotes an f-NFT’s appreciation in value. However, f-NFTs may simply be marketed as an easier, more accessible way for the average investor to partake in the NFT market. If this is the case, it is less likely that f-NFTs satisfy the third Howey prong. The DAO platform emphasized its potential profitability by marketing it as an investment where purchasers could share in the profits of the proposed projects the DAO Token funded and thus gain a return on their initial investment. Although this is not exactly similar to how f-NFTs’ profitability was marketed, both seem to promise their purchasers some type of liquidity. Fourth, f-NFT platforms provide a readily available market for the trading of various f-NFTs. Creators or purchasers of f-NFTs can easily sell or buy these assets on different websites. These platforms support an f-NFT trading market by providing information regarding how the platform and fractionalization process operates and how the underlying technology works, a “frequently asked questions” section, a link to create or buy and sell f-NFTs, ways to “join the community,” and so forth. This is similar to the DAO Token issuers who supported a trading market for their token by developing a website, a link to detailed information regarding The DAO entity’s structure and source code, and a link to buy DAO Tokens; providing information on how The DAO operated; soliciting media attention; and posting on online forums.

A counterargument is that traditional NFTs are less likely to be a security because purchasers of traditional NFTs buy them for their artistry or bragging rights, proving that NFTs gain their value from their uniqueness, scarcity, or collectible status—not from any expected profits. An NFT’s value may just be based on the normal market forces of supply and demand, which is not considered “profit.” The SEC also notes that digital assets are less likely to satisfy the Howey test if “[a]ny economic benefit that may be derived from appreciation in the value of the digital asset is incidental to obtaining the right to use it for its intended functionality.” The intended functionality of an NFT may just be bragging rights or display rights, such as displaying a rare NFT artwork as your profile picture on your social media account. Thus, when an NFT increases in value, this may just be incidental to using the asset for its intended functionality of bragging rights. Also, if an f-NFT is marketed in a way that focuses on its role as a piece of digital artwork or a collectible, and not as an opportunity to gain any returns, this may work against f-NFTs being a security. For example, some platforms market f-NFTs as a way to create more accessibility to the NFT market and not necessarily as a way to increase one’s returns. Regulators will need to analyze the specific characteristics of certain f-NFTs and f-NFT platforms to determine whether they satisfy the third Howey prong.

D.  “THROUGH THE EFFORTS OF OTHERS”

Some argue that although an NFT may provide the purchaser with a reasonable expectation of profits, this increase in financial returns is not derived from the “efforts of others” and instead comes from the NFT’s own scarcity and uniqueness. Thus, it may be more difficult to argue that an NFT satisfies the fourth and final Howey prong, which requires the asset’s increase in value to come from the “efforts of others.” While a traditional NFT may not fulfill this prong given that its value comes from its uniqueness, an f-NFT may be an exception because its value is derived from the efforts of the f-NFT platforms or issuers who support the f-NFT market. The fourth Howey prong is satisfied if an f-NFT issuer supports a market for f-NFTs or the value of these assets depends on the issuer’s efforts in generating demand. Thus, if an NFT issuer or exchange puts in the work to develop the platform and increase buyers, and the purchasers reasonably expect a return based on this work, then an NFT may pass this last prong.

The SEC Framework for “Investment Contract” Analysis of Digital Assets lays out two key questions to consider when determining whether a digital asset can satisfy the “efforts of others” prong: (1) does the purchaser reasonably expect to rely on the efforts of an “Active Participant,” and (2) are those efforts “the undeniably significant ones, those essential managerial efforts which affect the failure or success of the enterprise”? To help answer these questions, the SEC provided a list of six characteristics that lean in favor of a digital asset fulfilling the fourth Howey prong. While none of the characteristics are dispositive, they provide a good framework to help determine when a digital asset gains its value through the “efforts of others.” F-NFTs may satisfy some of the characteristics and thus satisfy the last Howey prong.

The first characteristic is that an issuer is “responsible for the development, improvement (or enhancement), operation, or promotion of the network, particularly if purchasers of the digital asset expect an [issuer] to be performing or overseeing tasks.” Platforms that issue f-NFTs may have this characteristic because they are responsible for promoting the f-NFTs on their platforms, bringing more buyers onto their networks, and improving their networks by offering more products such as f-NFT bundles or automatic royalties embedded in smart contracts. These development efforts can increase the value of the actual platform and thus increase the value of the f-NFTs traded on that specific platform. Also, if a platform markets f-NFTs as producing profit based on royalty payments or f-NFT bundles, purchasers may expect that the issuers are putting in some type of managerial efforts to oversee the asset and increase its value. The value of an f-NFT could come from the efforts of a person or entity promoting, selling, choosing, developing, and managing different f-NFT royalties or bundles. This is similar to the DAO Token Curators who managed different projects for investors to create returns by deciding what projects would be submitted to, voted on, and funded by DAO Token holders. DAO investors relied on the “managerial and entrepreneurial efforts” of the Curators to manage The DAO network and project proposals because the creators of The DAO represented that they “could be relied on to provide the significant managerial efforts required to make The DAO a success.” 

The second characteristic is that the issuer performs essential tasks or responsibilities, as opposed to “an unaffiliated, dispersed community of network users (commonly known as a ‘decentralized’ network).” This reference to a “decentralized” network may work against f-NFTs being deemed a security because they are inherently run on a “decentralized” network. One can argue that the blockchain technology, smart contracts, and digital ledger perform the “essential tasks or responsibilities” for f-NFTs as opposed to the issuer or platform. However, the DAO Token was still deemed a security even though it utilized blockchain technology, and smart contracts performed tasks for the usage of the DAO Tokens. Although f-NFTs are run on a “decentralized” network, issuers can perform essential tasks such as fractionalizing NFTs, using their expertise to bundle NFTs, or maintaining the network to ensure the f-NFTs are protected.

The third characteristic is that an issuer “creates or supports a market for, or the price of, the digital asset,” which can include (1) “control[ing] the creation and issuance of the digital asset,” or (2) “tak[ing] other actions to support a market price of the asset, such as by limiting supply or ensuring scarcity” through activities like buybacks. Issuers of f-NFTs, such as Niftex and Fractional.art, may embody this characteristic because issuers set the original fixed price of an f-NFT when they initially fractionalize an NFT, and many f-NFT platforms have some type of “buyout” provision which lets f-NFT investors purchase the remaining shards to gain ownership of the full NFT. This buyout provision is similar to a buyback because the original f-NFT issuer can buy back the whole NFT, which can subsequently support a market price of the f-NFTs. Also, as more NFTs are bought and sold on a platform, the rarity and scarcity of a specific NFT may increase, which then affects the price of that NFT. Thus, if f-NFT platforms support the growth of their platforms to include more f-NFTs or other products, then these platforms can create a market for and support the price of f-NFTs. A counterargument is that an NFT’s lack of exchangeability with other NFTs impedes its ability to be classified as a security. Traditional securities increase their value from price fluctuation and exchangeability, but due to its uniqueness, an NFT increases in value only through appreciation and not through exchangeability. This issue may be limited with f-NFTs, whose value is tied to other types of price fluctuations.
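To make the buyout mechanics concrete, the following Python sketch models a fractionalized NFT with a buyout of the remaining shards; the class, field names, and prices are hypothetical illustrations and are not drawn from any platform’s actual contract logic.

from dataclasses import dataclass, field

@dataclass
class Fractionalization:
    total_shards: int
    shard_price: float                            # fixed price set by the issuer at fractionalization
    holdings: dict = field(default_factory=dict)  # holder -> number of shards held

    def outstanding(self, buyer: str) -> int:
        """Shards held by everyone other than the prospective buyer."""
        return sum(n for holder, n in self.holdings.items() if holder != buyer)

    def buyout(self, buyer: str, offer_per_shard: float) -> float:
        """Buyer purchases all remaining shards to reassemble the whole NFT."""
        cost = self.outstanding(buyer) * offer_per_shard
        self.holdings = {buyer: self.total_shards}
        return cost

frac = Fractionalization(total_shards=1000, shard_price=0.05,
                         holdings={"issuer": 600, "alice": 250, "bob": 150})
print(frac.buyout("issuer", offer_per_shard=0.08))  # issuer "buys back" the whole NFT for 32.0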

The fourth characteristic is that the issuer has a “lead or central role in the direction of the ongoing development of the network or the digital asset.” By simply maintaining the f-NFT network, these platforms are providing an active management role that contributes to the development and stability of f-NFTs and f-NFT networks. Since the actual NFT is typically hosted on external URLs or IPFS, some caution that NFT networks must be maintained to ensure that NFTs sold on the platform do not disappear, buyers do not lose their purchases, and NFTs do not lose their value. This dynamic can create a system in which “the value of the art is tethered to the value of the platform hosting it.” The managerial efforts of the NFT platforms would be directly tied to the value of the NFTs because if the NFT platforms are not run properly or are shut down, the value of the NFTs decreases or disappears altogether. An issuer can also take a lead role in continuously developing f-NFTs if the issuer is an artist, athlete, celebrity, or company, and the value of their f-NFT is tied to that specific issuer’s popularity or the efforts they undertake to grow their popularity. A purchaser of such an f-NFT is not buying the underlying artwork but is instead purchasing the right to gain profits from the increased popularity of the creator, whether it be an artist like Beeple or an athlete like Patrick Mahomes. People may invest in NFTs with the hope that the creator increases in fame, which can then increase the profits from the particular NFT. For example, many college athletes are creating their own NFTs, and as an athlete’s career progresses to professional sports, the value of that NFT could increase exponentially. NFTs issued by corporations or influential public figures may also satisfy the “efforts of others” prong. For example, Nike recently announced its plan to sell “digital shoes,” which resemble NFTs of its iconic shoes; Martha Stewart also created an NFT collection consisting of digital art of her home décor. Nike and Martha Stewart may have a central role in the ongoing development of their respective NFTs because as they put in effort to continuously grow the popularity and profitability of their brand, their NFTs may also grow in value. If an NFT is tied to a specific company or person, the NFT’s value relies on the efforts of that issuer to increase their popularity, which will in turn help develop the underlying NFT.

The fifth characteristic is that the issuer has “a continuing managerial role in making decisions about or exercising judgment concerning the network or the characteristics or rights the digital asset represents.” Some examples of what constitutes a “managerial role” include: “determining whether and where the digital asset will trade,” having “responsibility for the ongoing security of the network,” and “making other managerial judgements or decisions that will directly or indirectly impact the success of the network or the value of the digital asset generally.” The DAO Curators had a large managerial role over the DAO Token—and its potential value—because investors relied on the Curators’ expertise to monitor the operation of The DAO, safeguard their funds, and determine when proposed contracts should be put to a vote to fund projects. F-NFT platforms may serve this “managerial role” through providing ongoing security for the network. For example, f-NFT platforms must manage their networks to prevent any hacking attempts or fraud that could steal funds during an NFT transaction or destroy the linkage to the underlying NFT. This is similar to how The DAO and its Curators were relied on for “failsafe protection” and for protecting the system from “malicous [sic] actors.” Current f-NFT platforms have yet to show how their managerial decisions can significantly impact the success of f-NFTs, given that they do not have Curator-type workers who actively control f-NFTs. However, if f-NFT platforms sold f-NFT bundles, investors would have to rely on the platform’s judgment for what types of NFTs were being pooled together in a bundle and sold as f-NFTs. The platform’s expertise may then affect the value of the f-NFT bundle, and it would be more likely that f-NFTs had continuous management from others.

The sixth characteristic is that “[p]urchasers would reasonably expect the [issuer] to undertake efforts to promote its own interests and enhance the value of the network or digital asset” where the issuer has a stake in the digital asset and can realize its own gain from the digital asset or monetize the value of the digital asset. Issuers or creators of f-NFTs may satisfy this characteristic because they can program a smart contract to automatically charge a type of royalty or curator fee any time an f-NFT is resold or used in a specific way. This enables the issuer to monetize the value of the digital asset and promote its own interests in the digital asset. Some platforms such as Niftex have also automatically programmed their f-NFT smart contracts to set aside five percent of an NFT’s fractions for the artist. In this system, instead of taking a cut every time a fraction is traded on the open market, the creator shares in the profits simply by owning some of the shards. It seems that f-NFT issuers may promote their own interests and enhance the value of the digital asset, because the higher the value of the asset, the more money they can make off their own shards.
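The following Python sketch illustrates the two mechanisms just described, an artist reserve carved out at fractionalization and a fee routed to the issuer on each resale; the percentages and function names are hypothetical and are not drawn from Niftex’s or any other platform’s actual smart-contract code.

ARTIST_RESERVE = 0.05      # five percent of shards set aside for the creator
CURATOR_FEE_RATE = 0.025   # hypothetical fee charged on each resale

def fractionalize(total_shards: int, creator: str) -> dict:
    """Mint shards and automatically reserve a fixed share for the creator."""
    reserved = int(total_shards * ARTIST_RESERVE)
    return {creator: reserved, "for_sale": total_shards - reserved}

def resale(price: float, seller: str, issuer: str) -> dict:
    """On each secondary sale, route a curator fee back to the issuer."""
    fee = price * CURATOR_FEE_RATE
    return {seller: price - fee, issuer: fee}

print(fractionalize(10_000, "artist"))                   # {'artist': 500, 'for_sale': 9500}
print(resale(200.0, seller="alice", issuer="platform"))  # {'alice': 195.0, 'platform': 5.0}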

Whether f-NFTs satisfy the fourth Howey prong will once again come down to the specific facts of how the f-NFT is marketed to purchasers and the specific platform or issuer. However, given the various SEC characteristics taken together and their application to f-NFTs, there may be a good argument that f-NFTs can gain their value from the “efforts of others.” When analyzed under the four Howey prongs and compared to other established digital asset securities, f-NFTs can be considered securities.

IV.  HOW CAN NFTS BE REGULATED?

Even if f-NFTs can satisfy all the Howey prongs and be classified as a security, the question still remains whether the SEC should regulate these digital assets and what regulatory framework should be adopted. The SEC cautioned that as financial technologies continue to innovate, market participants (such as f-NFT buyers, sellers, and platforms) may be conducting activities that fall within the SEC’s jurisdiction, such that their transactions, or the persons and entities involved, may be subject to registration, regulation, or oversight. The SEC can regulate three different types of actors: (1) buyers of a security, (2) sellers or issuers of a security, and (3) platforms facilitating exchanges. If f-NFTs are designated as securities, buyers, sellers, or platforms involved in f-NFT sales made without registration may be subject to penalties, registration requirements, or periodic reporting obligations to the SEC.

The SEC needs to determine what types of regulations it can impose on buyers, sellers, and platforms of f-NFTs. This Part analyzes the risks and opportunities of regulating f-NFTs under the existing regulatory framework and how regulations can be applied to the three different actors within the NFT space to recommend a new, modified framework better suited for this digital asset.

A.  REGULATION OF BUYERS

The SEC regulates buyers of securities by allowing only certain “accredited investors” to purchase unregistered securities, which typically are subject to fewer requirements and regulations. SEC Regulation D (“Reg. D”) governs unregistered securities and sets out the exemptions from the requirement to register with the SEC. Under Rule 501(a) of Reg. D, accredited investors can be institutional investors and entities such as banks, mutual funds, insurance companies, or pension plans; insiders within an issuer such as officers or directors of the issuer of the securities; or wealthy natural persons such as those with a net worth of greater than $1 million, excluding primary residence and mortgage, or those with an annual income of greater than $200,000 for each of the last two years ($300,000 jointly with one’s spouse).
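As a rough illustration of the natural-person thresholds just described, the Python sketch below checks eligibility; it simplifies Rule 501(a) by ignoring the institutional and insider categories and the requirement that the income threshold be met in each of the two most recent years.

from typing import Optional

def is_accredited_natural_person(net_worth_excl_home: float,
                                 annual_income: float,
                                 joint_income: Optional[float] = None) -> bool:
    """Simplified sketch of the Rule 501(a) wealth and income tests for individuals."""
    if net_worth_excl_home > 1_000_000:      # net worth test, excluding primary residence
        return True
    if joint_income is not None and joint_income > 300_000:
        return True
    return annual_income > 200_000

print(is_accredited_natural_person(net_worth_excl_home=1_200_000, annual_income=90_000))   # True
print(is_accredited_natural_person(net_worth_excl_home=400_000, annual_income=150_000,
                                   joint_income=280_000))                                  # False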

The policy behind restricting who may purchase certain securities under this regulation is to protect less-knowledgeable individual investors, who may not have the financial stability to absorb the high risks of investing in unregistered securities, while also promoting investment in risky entrepreneurial ventures. Accredited investors are treated differently from the general public because they are sophisticated enough to bear the risks, are more knowledgeable, or have the money to hire someone like a financial advisor to help them make informed decisions. Given that f-NFTs may be unregistered securities, the SEC could regulate f-NFT buyers by only allowing accredited investors to purchase them. However, it may be difficult to prevent people from buying a certain digital asset on a decentralized and easily accessible platform. This would mean that every time an f-NFT was created or sold, an issuer or platform would have to go through the time-consuming and costly process of ensuring that every purchaser complies with the definition of an accredited investor. The whole purpose of fractionalizing NFTs was to make these digital assets more accessible to average investors. Thus, it seems counterintuitive to place a new barrier in front of average investors and their ability to participate in this emerging market. The accredited investor regulation is meant to protect average investors from more risky activities, but there may be other ways to prevent harm to less-knowledgeable investors than completely cutting them off from these new assets, such as requiring NFT platforms to provide easily accessible and relevant information regarding trading NFTs and maintaining certain security protocols to protect f-NFT investors and their funds. Thus, it is unlikely that the SEC could or should place any regulations on buyers of f-NFTs.

B.  REGULATION OF SELLERS OR ISSUERS

The SEC may be able to place registration requirements on the initial creators or issuers of f-NFTs. Under section 5 of the Securities Act, any issuer offering or selling a non-exempt security in interstate commerce must register that security with the SEC. These registration requirements serve two main goals: (1) to provide investors with financial and other material information regarding the securities being offered or sold and (2) to prohibit and minimize fraud, deceit, misrepresentations, and other dangers in the sale of securities. Requiring issuers to provide information regarding their assets to investors through the SEC increases the likelihood that investors will make well-informed decisions and provides a certain standard to minimize fraudulent sales. If f-NFTs are deemed to be securities, the individual or entity that initially fractionalizes the NFT and sells these f-NFTs may be considered an issuer under section 5 and thus be subject to SEC requirements such as filing a registration statement and periodically disclosing material information.

The SEC has cracked down on digital assets and ICOs by bringing and winning enforcement actions against a variety of issuers who have offered and sold digital assets that were deemed securities and were not registered pursuant to the Securities Act. In 2019, the SEC brought two high-profile enforcement actions against Kik Interactive Inc. (“Kik”) and Telegram Group Inc. (“Telegram”) arguing that the Kik and Telegram tokens were sold to investors as unregistered securities and thus violated federal securities law. The courts applied the Howey test and found that both tokens were securities because the funds from the token sale were used for operating the companies’ respective ecosystem and messaging apps, the tokens were marketed to prospective investors as a way “you could make a lot of money,” and the value of the investments depended on the companies’ respective efforts to develop their messaging apps. While some issuers of digital assets like cryptocurrency were subject to registration requirements, other issuers of digital assets such as tokens for a membership rewards program (TurnKey Jet, Inc.) or tokens for video game currency (Pocketful of Quarters, Inc.) were given “no-action” letters from the SEC promising that it would not take any enforcement action against these issuers for selling the digital assets without registration. The SEC concluded that these rewards and video game tokens were not securities because none of the funds from the token sales were used to develop the issuer’s platform, the tokens were immediately usable for their intended functionality (purchasing air charter services or gaming) at the time they were sold, token transfers were restricted to only the company’s internal “wallets,” and both tokens were marketed in a way that emphasized the functionality of the token for consumption.

Given the uncharted territory of f-NFTs, it is difficult to apply the regulation of issuers to the creators of f-NFTs. Although selling f-NFTs may look like a type of ICO, there may be policy reasons not to require registration every time creators wish to fractionalize their NFT. Registering the sale of an asset is a time-consuming and costly process, and requiring extensive disclosures seems unnecessary when the costs of registration may outweigh its benefits and undercut the accessibility of the f-NFT marketplace. The main goals of these registration requirements are to provide investors with sufficient information regarding the f-NFT and to prevent fraud. However, f-NFTs’ blockchain and smart contract technology may satisfy these goals without the need for costly registration. Many platforms already display relevant information regarding an NFT right next to the image of the NFT. This information typically includes a description of the NFT, the total supply of fractionalized shards, the valuation, and some type of table showing all the transactions of that specific NFT, the date on which each sale occurred, the buyers and sellers for each sale, and the price at which it was sold. Thus, potential purchasers can already easily see the relevant financial information regarding the assets to help them make an informed decision. Also, since each f-NFT has a digital ledger that automatically records every transaction and every buyer and seller of that f-NFT, it can be easier to fend off certain types of fraud and to authenticate true ownership. F-NFTs’ blockchain technology, decentralized network, and easy authentication process can help satisfy the goals that registration requirements aim to reach.
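For illustration, the platform-style disclosure described above might be represented by a record like the following Python sketch; the field names, values, and addresses are hypothetical and do not reflect any particular platform’s data schema.

listing = {
    "description": "Fraction of a one-of-one digital artwork",
    "total_shards": 10_000,
    "valuation_eth": 120.0,
    "transactions": [
        # (date, buyer, seller, shards, price per shard in ETH)
        ("2022-01-15", "0xBuyerA", "0xIssuer", 500, 0.010),
        ("2022-02-03", "0xBuyerB", "0xBuyerA", 200, 0.014),
    ],
}

for date, buyer, seller, shards, price in listing["transactions"]:
    print(f"{date}: {buyer} bought {shards} shards from {seller} at {price} ETH each")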

One may argue that if an f-NFT is being sold by a specific entity, artist, or athlete, and the value of that f-NFT is tied to that entity or individual’s external success, then the issuer may need to provide disclosure regarding the entity or individual. For example, would a professional athlete’s f-NFT issuance require a registration statement about their professional sports career? Brands and public figures such as Nike and Martha Stewart have recently announced their digital asset plans such as Nike’s “digital shoes” and Martha Stewart’s NFT collection of digital images depicting her home decor and designs. Thus, if a company or brand is issuing an NFT or f-NFT, it seems more likely that the SEC may impose registration requirements and disclosures regarding that specific company or brand. Even if the SEC decides to impose registration requirements for the initial fractionalization of an NFT, there should be exemptions for small NFTs of little value or where there is a low number of shards in the initial fractionalization. For example, Rule 504 of Reg. D provides an exemption from registration requirements for companies that issue a small amount of securities, so long as they do not sell more than $10 million worth of securities in any twelve-month period. This rule could easily be applied or adapted to fit small sales of f-NFTs such that issuers would not be required to register the sale of their f-NFTs if the total value of the sale was below a certain threshold. The SEC will need to balance the costs of the registration requirements for initial f-NFT issuers with the need to promote or encourage new markets and assets and not stifle innovation and creativity.

C.  REGULATION OF PLATFORMS OR EXCHANGES 

Although it may be more difficult to regulate buyers or the initial creators of f-NFTs, it may be more reasonable to focus securities regulation on f-NFT platforms or networks that provide for the fractionalization of NFTs and manage the secondary market trading of these digital assets. If an f-NFT platform such as Niftex, Fractional.art, or DAOfi satisfies the definition of an “exchange” under Exchange Act Rule 3b-16(a)’s test, then it will need to register with the SEC under section 6 of the Exchange Act as a national securities exchange or be exempt from registration, such as by operating as an alternative trading system (“ATS”) in compliance with Regulation ATS. The registration requirements for exchanges apply regardless of whether the issuing entity is a decentralized autonomous organization rather than a traditional company, whether the asset is purchased using virtual currency rather than traditional paper currency, or whether it is distributed through ledger technology rather than in certificated form.

Under Exchange Act Rule 3b-16(a), an entity is an “exchange” if it (1) “brings together orders for securities of multiple buyers and sellers,” and (2) uses “established, non-discretionary methods.” The SEC clarifies this two-pronged functional test by stating that a system “brings together orders” “if it displays, or otherwise represents, trading interests entered on the system to system users” or “if it receives subscribers’ orders centrally for future processing and execution.” The SEC also explains that a system uses “established, non-discretionary methods either by providing a trading facility or by setting rules governing trading . . . among the multiple buyers and sellers entering orders into the system.” These methods include a computer system in which orders interact, a “trading mechanism that provides a means or location for the bringing together and execut[ing] of orders,” or rules that impose execution procedures or priorities on orders.

Recently, this test was applied to EtherDelta, an online trading platform that allows buyers and sellers to trade digital assets such as Ether and ERC20 tokens in secondary market trading. The SEC issued an enforcement order finding that EtherDelta violated section 5 of the Exchange Act because digital tokens traded on the platform were securities and the EtherDelta platform was an unregistered “exchange” that was transacting in securities. This enforcement action found that EtherDelta satisfied the criteria of an “exchange” under Exchange Act Rule 3b-16(a) because it (1) operated as a marketplace for bringing together the orders of multiple buyers and sellers of a digital asset that was considered a security under the Howey test “by receiving and storing orders in token in the EtherDelta order book and displaying the top 500 orders (including token symbol, size, and price) as bids and offers,” and (2) “provided means for orders to interact and execute through the combined use of the EtherDelta’s website, order book, and pre-programmed trading protocols on the EtherDelta smart contract.” The EtherDelta website also had numerous features that were similar to online securities trading platforms, such as providing access to the EtherDelta order book, sorting the tokens by price and color, and providing account information, market depth charts, lists of users’ confirmed trades, daily transaction volumes per token, and fields for users to input deposits, withdrawals, and trading interests. Many of these features are similar to the online trading platforms of f-NFTs. When applying this functional test to f-NFT platforms and comparing them to EtherDelta, it seems that f-NFT platforms can satisfy Rule 3b-16(a)’s two requirements.

First, f-NFT platforms bring together multiple buyers and sellers onto a single network to transact orders of f-NFTs. F-NFT platforms satisfy the “multiple buyers and sellers” aspect since there is a wide variety of f-NFT issuers and multiple buyers who can purchase these f-NFTs. These platforms satisfy the aspect of “bringing together” people to “transact orders” because they not only provide a place to fractionalize NFTs but also create and maintain marketplaces for users to trade their f-NFTs. Platforms typically receive and store f-NFT orders in a ledger on the Ethereum blockchain that keeps track of all the transactions of a specific f-NFT, much like the EtherDelta order book. All of these orders and f-NFTs are easily displayed on f-NFT platforms where users can see any past f-NFT transactions and execute orders to buy or sell these digital assets. Similar to EtherDelta, f-NFT platforms like Fractional.art also display the top orders and include information such as the token name, number of fractions, and price.

Second, f-NFT platforms use a decentralized network that acts as a trading facility and sets rules for any f-NFT transaction through the underlying smart contracts that these platforms embed in the f-NFTs. Like EtherDelta, current f-NFT platforms provide a network or trading facility for orders to interact and execute through their individual websites such as Niftex, Fractional.art, or DAOfi, their digital ledgers, and their pre-programmed smart contracts with embedded trading protocols. These websites provide the “means or location” for bringing together users and executing orders for f-NFTs. Also, smart contracts use execution procedures and priorities to impose rules and determine the terms for any f-NFT transaction on the network. Smart contracts can confirm the validity of the transactions and set the conditions of the order by checking certain information, such as whether the f-NFT contains a valid cryptographic signature, if the f-NFT comes with some type of royalty, if there is a buyout option, or if there is some type of curator fee. These characteristics provide the “established, non-discretionary methods” that govern how f-NFT orders interact with each other.
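A rough Python sketch of the kinds of order checks just described appears below; real platforms would perform such checks on-chain in smart-contract code, and the field names here are hypothetical rather than any platform’s actual schema.

def inspect_order(order: dict) -> dict:
    """Gather the conditions a pre-programmed trading protocol might check before execution."""
    return {
        "signature_valid": order.get("signature_valid", False),  # valid cryptographic signature
        "has_royalty": "royalty_rate" in order,                   # does the f-NFT carry a royalty?
        "has_buyout_option": order.get("buyout_option", False),   # is a buyout provision attached?
        "curator_fee": order.get("curator_fee", 0.0),             # fee owed to the curator, if any
    }

order = {"signature_valid": True, "royalty_rate": 0.05,
         "buyout_option": True, "curator_fee": 0.01}
checks = inspect_order(order)
print(checks)
print("execute" if checks["signature_valid"] else "reject")  # only validly signed orders execute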

If an f-NFT platform is considered an “exchange,” it could still escape registration requirements if it satisfies one of the exemptions in Exchange Act Rule 3a1-1(a). It is unlikely that an NFT trading platform would fall under the 3a1-1(a)(1) (exemption for an ATS operated by a national securities association) or 3a1-1(a)(3) (exemption for an ATS not required to comply with Regulation ATS pursuant to Rule 301(a) of Regulation ATS) exemptions. However, one could analyze whether an f-NFT trading platform could be considered an ATS that complies with Regulation ATS and thus fits into the 3a1-1(a)(2) exemption for ATSs. This exemption would allow f-NFT exchanges to register as a broker-dealer, which has lower regulatory costs and fewer notice and reporting requirements, instead of as a national securities exchange. Although operating under this exemption would still come with some notice, reporting, and recordkeeping requirements, it could prevent f-NFT platforms from spending even more time and money on registering as a national securities exchange and dealing with periodic disclosures.

Although digital asset trading platforms resemble traditional exchanges or alternative trading systems, regulators may need to adjust the regulatory framework, much like they did for ATSs, to account for differing characteristics of blockchain-based exchange platforms. Differences between digital asset exchanges and national securities exchanges can include transparency, fairness, and efficiency. The decentralized aspects of f-NFT platforms may provide their own form of protection that is as transparent, fair, and efficient as, or more so than, the regulations the SEC would impose. Thus, the SEC could adopt a new regulatory framework for exchange platforms of digital assets such as f-NFTs that requires less registration or fewer requirements than a national securities exchange and recognizes the fraud and misrepresentation protection that a blockchain platform already affords.

A decentralized platform may be better than SEC-imposed regulation at detecting fraud and protecting users on these types of f-NFT platforms. First, these platforms’ “decentralized” and public nature provides fairness because no one entity controls the network, and therefore anyone can easily access and interact on the platform and all transactions are verified by others on the network. Second, “decentralized” exchanges provide efficiency because the blockchain technology allows them to easily show users “verified business logic [in a publicly verified smart contract],” which a centralized exchange could not do. Third, f-NFT platforms provide transparency because while traditional exchanges hold a user’s funds with an “exchange owner,” decentralized ones hold those funds through easily verifiable and public digital ledgers that also contain a list of all transactions for a specific f-NFT, including the buyer, seller, and price. The cryptography embedded in f-NFTs makes everything in a sense “registered” through the digital ledger, and all transactions are verified through the whole blockchain network. Thus, the sale of these digital assets may not need SEC regulation.

Even in decentralized networks, there is still a chance of hacking, fraud, and loss that may be mitigated through government regulation. Just as the SEC modernized the regulatory framework to “better integrate alternative trading systems into the national market system,” the SEC may need to modernize the regulatory framework again to integrate NFT trading systems and digital asset sales. For example, the SEC may adopt a new regulatory framework that requires an f-NFT exchange platform to provide or display either convenient one-time reports or more burdensome regular reports on its security protocols and how it deals with bad actors such as hackers that manipulate code to steal the proceeds of an NFT sale. The SEC may also implement a limiting framework, similar to how Regulation ATS requirements are limited to the subset of ATSs that account for a certain large percentage of the total trading volume of any security. For example, the SEC could require registration only for f-NFT exchanges that account for a large volume of the overall traded f-NFTs. This may ensure investor protection from large actors while still allowing for innovation through smaller actors. The SEC can also require platforms to comply with certain capacity, integrity, and security standards to ensure f-NFT investors’ funds and assets are protected, given that an f-NFT’s value may be tied to the platform’s ability to maintain and retrieve the NFT.

SEC Commissioner Hester Peirce’s proposal for a “safe harbor” for digital assets and exchanges offers a glimpse of a new framework that can provide guidance for digital asset issuers and exchanges. Peirce proposed a regulation in which digital asset issuers would be allowed to begin distributing their tokens broadly if they provide disclosures such as plans for the network and who is behind the network. These issuers would then have three years from a token’s initial distribution to develop the network before they would be subject to any securities laws. This three-year safe harbor allows issuers of digital assets to be exempt from SEC regulation for a certain time period and prevents their digital asset from being immediately classified as a security. It also gives digital asset creators time to set up their networks without government regulation and establish whether their digital asset can be classified as a security. This framework may allow creators to innovate digital and financial assets while continuing to protect investors. At the end of the day, the SEC needs to balance “encourag[ing] market innovation while ensuring basic investor protections.”

V.  PRELIMINARY EXPLORATION OF EXISTING REGULATORY MODELS

The SEC has existing regulation for non-digital securitized products, such as traditional stocks in companies or REITs, which may be applicable to f-NFT products and provide regulators with a starting point from which to develop regulations specific to f-NFTs. 

When an individual or entity initially fractionalizes and issues their f-NFTs, it could be called an “Initial Fractionalization Offering,” or “IFO.” An IFO, in which the issuer sells multiple shards of the same NFT to multiple buyers, is similar to an IPO or ICO, in which the issuer sells multiple shares or tokens of the same company to multiple buyers. F-NFTs can be treated like shares of stock in the original whole NFT, and the sale of these f-NFTs can be the same as selling a share in an individual company. Thus, instead of developing a whole new set of regulations for f-NFTs, regulators can just look at existing securities laws for traditional stock sales and apply them to f-NFT sales. The rules governing traditional, non-digital securities such as stocks could be slightly modified to better apply to f-NFT sales. For example, f-NFT creators could be required to register their f-NFT sale or IFOs with the SEC by filing a modified Form S-1 that contains information regarding the past performance of the NFT such as its trading history, information regarding the performance of other similar NFTs if the NFT is part of a collection, or information regarding the company or individual creating the NFT. Providing the financial disclosures required by traditional IPOs may be more difficult for traditional NFTs because there is not any managerial or financial information behind a regular NFT besides its intrinsic or artistic value. However, NFTs from a particular brand, celebrity, or company would have an easier time producing accurate managerial and financial disclosures or material information regarding an NFT because these brands and celebrities typically have established financials or data regarding their performance, such as how popular a brand is or the performance statistics of an athlete. For example, Martha Stewart could be required to disclose managerial and financial information regarding her retail company if she tries to issue another NFT collection of her home décor, or Patrick Mahomes could be required to disclose information regarding his football statistics or other brand deals if he issued more NFTs. Thus, traditional registration requirements for issuing stock could be particularly appropriate for a celebrity or company that issues f-NFTs or NFTs and uses the proceeds from the sales to develop their brand or business.

REITs are another securitized product with an established regulatory structure that can be applied to f-NFT regulation. REITs are entities that own and typically operate various “income-producing real estate or real estate-related assets,” such as office buildings, apartments, shopping malls, hotels, or warehouses. In addition to other requirements, a REIT must have at least seventy-five percent of its total assets invested in real estate, be managed by a board of directors, and distribute at least ninety percent of its taxable income to shareholders annually in the form of dividends. REITs register and file reports with the SEC, can list and trade their shares on a public stock exchange, and allow investors to invest in and own shares of multiple large-scale, income-producing real estate properties without actually having to buy the real estate. In other words, REITs take a collection of commercial real estate assets, bundle them together in one company, and then sell shares of that company to investors so they can reap the benefits of owning commercial real estate. Issuing shares of a REIT is like issuing fractional shares of a basket of NFTs. For example, one way to issue f-NFTs is to take multiple whole NFTs, bundle them together in one large NFT basket, and then sell f-NFTs or fractional shares of that basket (“f-NFT bundles”) to investors so they can own shares in multiple NFTs. Just as REITs sell investors shares of a basket of real estate investment properties, f-NFT bundles sell investors shares of a basket of NFTs.

Given these similarities, securities regulations that apply to REITs may also translate and apply to f-NFTs. Most REITs are registered with the SEC and publicly traded on a stock exchange. Under the Securities Act, REITs are required to register their securities using Form S-11 to make disclosures regarding the REIT’s management team and other significant information and make regular SEC disclosures such as quarterly and yearly financial reports. Regulators could follow this existing regulatory model from REITs and impose similar requirements for fractional shares of NFT bundles. Form S-11 requires REIT issuers to disclose information detailing the price of the deal, how the REIT plans to use the proceeds, certain financial data like trends in revenue and profits, descriptions of the real estate, operating data, information on its directors and executive officers, and other data. These types of requirements could easily be adapted to regulate f-NFT bundles by creating a new form, similar to Form S-11, for issuers of f-NFTs to file with the SEC. For example, issuers of f-NFT bundles could be required to file a form like Form S-11 that discloses information like the price of each f-NFT; how the individual or entity issuing these f-NFT bundles plans to use the proceeds, such as to purchase more NFTs to add to the bundle; a description of the NFTs currently in the bundle, such as whether they are part of a trending collection; certain financial data of each NFT, such as its past transactions or price; information on the individuals managing the bundle, such as their credentials or how they have managed digital assets in the past; and so forth.
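As a purely hypothetical sketch of what such a filing might capture, the following Python record mirrors the disclosure items listed above; the field names and values are illustrative and do not correspond to any actual SEC form.

bundle_disclosure = {
    "shard_price_eth": 0.02,                              # price of each f-NFT
    "use_of_proceeds": "acquire additional NFTs for the bundle",
    "bundle_contents": ["NFT A (part of a trending collection)", "NFT B"],
    "per_nft_financials": {"NFT A": {"last_sale_eth": 60.0, "prior_sales": 4}},
    "managers": [{"name": "Example Curator", "experience": "prior digital asset funds"}],
}

for item in bundle_disclosure["bundle_contents"]:
    print("Disclosed bundle asset:", item)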

However, REITs are different from f-NFTs in that REIT investors earn a share of the income produced through the rent or mortgage interest from the commercial real estate, while f-NFT investors can only earn a share of the increased value of the underlying NFT. It may be possible that an NFT’s smart contract could charge money to anyone who views the particular NFT and then automatically distribute these proceeds out to the f-NFT investors as a type of dividend, but this has yet to be seen. Thus, REIT regulations may not translate perfectly to regulating f-NFT bundles since it is difficult to see how f-NFT bundles would file quarterly and yearly financial statements regarding just NFTs. REITs can provide financial disclosures regarding the profits and losses of their various real estate properties, but f-NFTs do not have similar financials besides the increase and decrease in value of the various NFTs within the bundle. However, as described above, there may be more financial information when f-NFTs are issued by specific celebrities or companies. Regulators will need to determine how f-NFTs can disclose financial information to best inform investors. Additionally, there has been a surge of investors buying Metaverse Real Estate, which is real estate in virtual worlds bought and sold using NFTs and cryptocurrency. People can now go onto virtual real estate platforms such as SuperWorld where they can buy a plot of land in the form of an NFT and then share in any of the commerce that happens on that piece of property. These types of real estate NFTs would be able to charge rent or gain interest on these virtual properties and thus could then distribute income out to the NFT owners, much like REITs, and be subject to similar regulation. This analysis is outside the scope of this Note, but it is a relevant issue that regulators will need to face in the future. Nevertheless, REITs can provide a baseline to help regulators analyze and develop ways to regulate different types of f-NFTs and NFTs.

CONCLUSION

Given the foregoing analysis, f-NFTs can be deemed an “investment contract” security under the Howey test, and the SEC may be able to regulate the issuers or exchanges that facilitate their fractionalization and trading. F-NFTs satisfy the four Howey prongs because (1) f-NFT buyers make an investment using money in the form of cryptocurrency; (2) this investment is in a “common enterprise” where the fortunes of the buyer are tied to the successes of either other fractional investors of one NFT or the brand or celebrity that issued the NFT; (3) buyers have a “reasonable expectation of profit” because f-NFTs are traded on secondary markets and promoted as a unique liquidity opportunity; and (4) these financial returns are derived from the efforts of issuers to support the popularity and price of an f-NFT and platforms to maintain and develop f-NFT exchanges and marketplaces.

If f-NFTs or NFTs are deemed securities, the SEC can use the existing regulatory models of digital currencies, traditional stock, and REITs to create initial regulations of a continuously developing digital asset. Due to the wide variety of f-NFTs and the ways in which they are owned and operated, regulators will have difficulty developing one standard that applies broadly. However, by comparing issuers and exchanges of f-NFTs or NFTs to existing securitized products, one can apply slight modifications to established regulations and require disclosures such as an NFT’s transaction history or how an issuer and exchange will use the proceeds from the sale.

Hopefully, this analysis will appeal not only to the legal field and regulators but also to the average investor who is interested in buying, selling, or understanding new digital assets like NFTs. The legal field and the government must face the current issues with NFTs and their classification and regulation as a financial instrument in order to protect investors while also allowing for the innovation of new financial technologies.

 

96 S. Cal. L. Rev. 253


*  J.D., University of Southern California Gould School of Law, 2023. B.A., University of California, Los Angeles, 2019.

Ditching Daimler and Nixing the Nexus: Ford, Mallory, and the Future of Personal Jurisdiction under the Corporate Consent and Estoppel Framework

While personal jurisdiction is intended to assess whether a defendant should be forced to defend a lawsuit in a location due to the defendant’s contacts with that forum, the doctrine has shifted to require the plaintiff to show a connection to the forum, even if the defendant otherwise has substantial contact with it. In its 2014 decision Daimler AG v. Bauman, the Supreme Court further limited the personal jurisdiction of corporate defendants in the spirit of curtailing forum shopping. But the Court’s 2021 decision concerning personal jurisdiction, Ford Motor Co. v. Montana Eighth Judicial District, and the Court’s granting of certiorari in Mallory v. Norfolk Southern Railway Co. cast doubt on the viability of Daimler. The 2021 Ford decision marks the beginning of an expansion of personal jurisdiction for corporate defendants. Justices Thomas, Sotomayor, and Gorsuch have expressed concerns over the protections afforded to corporate defendants under current doctrine. This Note elaborates on that skepticism. It traces the history of personal jurisdiction to reveal that the doctrine originates from the corporate consent and estoppel model—the very model at issue in Mallory. This Note argues that, absent guidance from Congress, courts must apply the original model—one that is inconsistent with Daimler and the nexus requirement. Finally, this Note argues that returning to the pre-Daimler and pre-nexus era produces favorable policy: it removes baseless corporate protections under the guise of the Fourteenth Amendment, clarifies the murky application of the doctrine in internet and stream of commerce cases, opens more fora for plaintiffs to allow free-market considerations to shape state law, and leaves the door open for Congress to legislate if it deems it necessary.

 

INTRODUCTION

Ask any athlete, and they will confirm the importance of home-field advantage. Over a large sample size, home teams win between 55% and 60% of National Football League games. A similar phenomenon takes place in Major League Baseball. In the National Basketball Association, the numbers are usually higher, at around 65% home-team wins. Needless to say, if offered a choice, teams would prefer to play at home. The same is true for litigants. The Constitution recognizes that the litigation forum, the location in which a lawsuit is permitted to take place, is limited. Where a defendant may be sued is determined by where the defendant is subject to a court’s “personal jurisdiction.”

Traditionally, a defendant was subject to personal jurisdiction in a particular location if the defendant was “at home” in that location. But the definition of where a corporate defendant is “at home” has changed dramatically. Prior to 2014, corporate defendants were “at home” in any location in which they engaged in “continuous and systematic” contact. But under the Supreme Court’s 2014 decision Daimler AG v. Bauman, corporate defendants are now “at home” only in the locations in which they (1) maintain their headquarters or (2) are incorporated. Consequently, in order for a plaintiff to sue a corporate defendant outside of these two locations, the plaintiff must comply with a significantly more complicated framework, the most perplexing aspect of which is the “nexus” requirement: to sue a defendant away from the defendant’s “home” and ensure that the defendant’s due process rights are not offended, the plaintiff must show a connection between the selected location and the plaintiff’s lawsuit.

This relatively new doctrine produces peculiar results. Masquerading as due process, the doctrine inordinately shields corporations from having to defend lawsuits in locations where they previously would have been required to do so. For example, current doctrine forbids Michigan plaintiffs from suing a New York company in California but permits an identical lawsuit in the same venue for the same injuries based on the same conduct by California-residing plaintiffs. Moreover, the doctrine forbids a Florida-residing plaintiff from suing a Texas corporation in Florida, even though the corporation was registered to do business in Florida; had an agent for service of process in Florida, a distributor in Florida, and a plant in Florida; had been sued for similar claims in Florida; and had itself initiated lawsuits in Florida. In other words, in locations where the defendant is not “at home,” current doctrine erroneously assesses the plaintiff’s connection to the litigation forum in determining whether the defendant’s due process rights have been violated. The scenarios described above, and other recent Supreme Court decisions, illuminate how far astray from its origins personal jurisdiction doctrine has drifted.

In 2021, the Court handed down its decision in Ford Motor Co. v. Montana Eighth Judicial District Court, which revealed that at least three sitting Supreme Court Justices are skeptical of the current personal jurisdiction doctrine, arguing that it provides too much protection for corporate defendants under the guise of the Fourteenth Amendment’s Due Process Clause. In April 2022, the Court also granted certiorari to address the corporate consent and estoppel model head on. This model, described in further detail below, suggests that if a corporation registers to conduct business in a forum, it implicitly consents to jurisdiction in that forum and is estopped from arguing otherwise. This Note expands on the justices’ concerns and offers a way forward consistent with the way personal jurisdiction has historically been understood.

This Note will illustrate that the modern personal jurisdiction doctrine—and the nexus requirement in particular—was improperly created to curtail forum shopping. It will then show that while Congress has passed statutes limiting or expanding jurisdiction in other contexts, and has narrowed jurisdiction of federal courts through venue statutes, it has not done the same to limit personal jurisdiction. Therefore, the sole consideration for personal jurisdiction is due process. And under the Due Process Clause, personal jurisdiction is based on the corporate consent and estoppel model, which inquires only into the corporate defendant’s contacts with the selected forum—it is not so concerned with the plaintiff’s connection to the forum. Accordingly, Daimler and the nexus requirement are inconsistent with this traditional model. This Note will also show how a reversion to this model of personal jurisdiction will clarify the doctrine’s application to cases involving internet sales and the “stream of commerce.”

This Note begins by synthesizing the genesis and evolution of personal jurisdiction doctrine, discussing first the nineteenth century norms and moving into how Supreme Court jurisprudence has developed under the lens of the Fourteenth Amendment. The next Part of this Note narrows in on the relatively new distinction between general and specific personal jurisdiction and the “nexus” requirement that has attached to the latter. The Note continues by listing reasons the nexus requirement is troublesome and difficult to apply, given the narrowing of the “at home” definition for corporate defendants. Finally, it ends with a preview of where the Court may be heading: given the granting of certiorari in Mallory, the Court appears to be in favor of reverting to the corporate consent and estoppel model and determining personal jurisdiction through assessing the defendant’s connection to the selected forum alone, consequently ditching Daimler and nixing the nexus requirement.

I.  BACKGROUND

A.  An Explanation of Personal Jurisdiction

Personal jurisdiction refers to the power a court has to make rulings relating to a party. Practically, it refers to the location in which a plaintiff may sue a defendant and hold the defendant to answer for that lawsuit. If a defendant is subject to personal jurisdiction in a particular location, known as a “forum,” the defendant must respond to the lawsuit, and any decision impacting the defendant can be enforced in other jurisdictions. If a case is in state court, personal jurisdiction answers the question “which state’s court system?” If the case is in federal court, personal jurisdiction answers the question “the federal court in which state?”

Personal jurisdiction analysis is twofold: statutory and constitutional. States are free to pass statutes defining the personal jurisdiction of their state courts. These are referred to as “long-arm statutes,” as they extend or retract how far the “arm” of their court system can reach. Under Rule 4(k)(1)(A) of the Federal Rules of Civil Procedure, a federal court applies the long-arm statute of the state in which it is located. In practice, a federal court in California will first determine whether there exists personal jurisdiction over a defendant under California’s long-arm statute. After making a determination under the long-arm statute, the court would turn to the constitutional analysis of personal jurisdiction.

The constitutional analysis of personal jurisdiction is based on the Due Process Clause of the Fourteenth Amendment. The analysis involves considerations of state sovereignty, federalism, and fairness. Because most states have long-arm statutes that permit personal jurisdiction to the limits of the Constitution, the personal jurisdiction analysis often blends into just a constitutional question. As such, courts in states with to-the-limits-of-the-Constitution long-arm statutes will only undertake a single analysis and have to answer one question: Is the exercising of personal jurisdiction in this forum consistent with the Due Process Clause? The remainder of this Note focuses only on the constitutional analysis of personal jurisdiction.

B.  The Distinction Between General and Specific Personal Jurisdiction

Another concept crucial to the understanding of this Note is the distinction between two kinds of personal jurisdiction: “general (sometimes called all-purpose) jurisdiction and specific (sometimes called case-linked) jurisdiction.” The former refers to a forum in which any plaintiff can bring any cause of action against the defendant. The latter is a forum in which, under current doctrine, plaintiffs may only bring causes of action that “arise out of or relate to” the forum. The specific personal jurisdiction requirement that the claim “arise out of or relate to” the forum is known as the “nexus” requirement because the plaintiff must show a “nexus” between the claim and the selected forum.

An example may help illustrate how the doctrine functions. Suppose a defendant is subject to general jurisdiction in Delaware. In that situation, a plaintiff from New York may sue the corporation in Delaware, even if there is no relation between the claim and Delaware—that is, even if the wrong alleged in the complaint took place in Maine. By contrast, suppose that the same defendant is not subject to general personal jurisdiction in Delaware. In the event that the wrong alleged in the complaint took place in Maine, the courts in Delaware would not have personal jurisdiction over the defendant and would not be able to adjudicate the dispute—this is because the plaintiff is unable to show a “nexus” between the claim and the selected forum in Delaware.

C.  The Venue Statutes

Besides the due process requirements that plaintiffs must comply with in deciding where to file a lawsuit, Congress has passed statutes narrowing potential venues for litigation. Specifically, Congress has outlined three locations in which a civil action may be brought: (1) where a defendant “resides”; (2) where a “substantial part of the events or omissions giving rise to the claim occurred”; and (3) if there is no venue that fits (1) or (2), wherever the defendant is subject to the court’s personal jurisdiction.

In situations where a plaintiff files a lawsuit in a location in which the defendant is subject to personal jurisdiction, Congress permits defendants to nevertheless file motions to transfer venue or dismiss the case. Congress envisioned two main reasons to permit a transfer of venue despite compliance with the requirements of personal jurisdiction. The first reason is when the plaintiff complies with the requirements of personal jurisdiction but does not comply with the requirements of the venue statute. For example, suppose that a corporation is headquartered in San Francisco, California (which is in the Northern District of California) and finds itself to be the defendant in a federal-law dispute with an employee over conduct that took place in San Diego, California (which is in the Southern District of California). If the employee files suit in the Central District of California, the defendant corporation may request a transfer to either the Northern or Southern District of California because, while the corporation is subject to personal jurisdiction in California, the Central District of California is an improper venue (it is not the venue where the defendant corporation is located, and it is not the location where a “substantial part of the events or omissions giving rise to the claim occurred”).

The second reason is for convenience. That is, even if a plaintiff complies with the requirements of personal jurisdiction and with the requirements of the venue statutes, a defendant may nevertheless request and be granted a motion to transfer venue if “the interest of justice” so demands. In making the discretionary determination to transfer a case for convenience purposes, courts consider the following factors, among others: the relative ease of access to sources of proof, the cost of obtaining the attendance of required witnesses, the administrative difficulties flowing from court congestion, and the local interests of having controversies decided where they took place. For example, suppose a plane company is headquartered in Great Britain and flies a plane in Scotland. The plane’s parts were manufactured in Pennsylvania and Ohio. While the company was flying the plane in Scotland, the plane crashed and killed everyone on board. The heirs of the passengers sued the plane company in Pennsylvania. The Pennsylvania court could dismiss the case under a forum non conveniens theory, concluding that the case should be tried in Scotland. In reaching this conclusion, the court would note that the crash had occurred in Scotland; the crash investigation had been conducted in Scotland; the witnesses were in Scotland; and the pilot’s estate, the plane’s owners, and the charter company were all located in Scotland.

The venue statutes supplement personal jurisdiction doctrine. Importantly, though, they are acts of Congress and not judge-made interpretations of the Due Process Clause of the Fourteenth Amendment. As for the forum non conveniens doctrine, it has a common law background and existed before the ratification of the Fourteenth Amendment. This is crucial for originalist judges who believe that the Court should apply common law doctrines only if they existed at the time of the ratification of the amendment at issue. Of course, should Congress desire to narrow or expand the jurisdiction of federal courts and permit more or fewer fora for plaintiffs to file lawsuits, Congress is free to do so.

  2. Recent Supreme Court Doctrine

The Supreme Court has historically been fractured in its personal jurisdiction doctrine: the Justices seem to agree on dispositions but not on the underlying reasoning for them. In March 2021, the Court handed down its decision in Ford Motor Co. v. Montana Eighth Judicial District Court. That decision doubled down on the Court’s previous personal jurisdiction decision, Bristol-Myers Squibb Co. v. Superior Court, in which the Court required plaintiffs to show a nexus between their claim and the forum state in order to establish specific personal jurisdiction over a defendant corporation that has long been established in the forum selected for the litigation. In both Ford and Bristol-Myers Squibb, the defendant corporation was not subject to general jurisdiction in the forum state despite its significant market presence there; in other words, the defendant corporation had purposefully availed itself of the forum state, and arguably had continuous and systematic contact with the forum, but nevertheless was not “at home” there. In Ford, a plaintiff purchased a malfunctioning car outside of Montana, yet she was permitted to sue Ford in Montana because Montana was the plaintiff’s home state. In Bristol-Myers Squibb, a group of plaintiffs from Michigan was not permitted to sue in California (even though a group of California residents was permitted to sue there) because the Michigan plaintiffs had no connection to California. The plaintiffs’ place of residency was thus determinative: absent a connection between the plaintiffs and the forum, exercising personal jurisdiction over the corporate defendant would have offended the corporation’s right to due process under the Fourteenth Amendment. Under both the first personal jurisdiction case decided after the ratification of the Fourteenth Amendment, Pennoyer v. Neff, and the revamped “minimum contacts” test of International Shoe, both Ford and Bristol-Myers Squibb would arguably have been permitted to proceed in the selected fora. This Note will explain how the doctrine has evolved to the point that it can no longer be reconciled with these landmark cases.

II.  THE STAKES AND HISTORY OF PERSONAL JURISDICTION
  A. The Stakes of Personal Jurisdiction

Before delving into the history of personal jurisdiction and its development across two centuries, it is necessary to explain why it has been an area of such fierce contention. Personal jurisdiction is not about geography, not about which physical courthouse may entertain a controversy. Rather, it is about who adjudicates that controversy.

  1. The Concern of State Judges and Congress’s Statutory Remedy

In most states, state judges are elected by the general public. Accordingly, a case pending in state court is adjudicated by a judge subject to at least some public pressure. The case also has the potential of being tried before a jury composed of individuals from that state. These factors may create disadvantages for an out-of-state corporation, especially if the plaintiff is from the forum state. Hence the saying, “Though the courtroom be an adversarial arena, [the judge] is more than a referee . . . more than a linesman. [The judge] is the game.”

Congress has addressed the fairness concerns of defendants being sued in state courts outside their place of residence through the mechanism of federal court removal. The process of removal, a product of congressional statute, allows defendants to move a case from state court (where judges are usually elected and plaintiff-friendly state procedural law is likely to apply) to the generally more defendant-friendly federal court if certain criteria are met. One such criterion is the existence of “diversity jurisdiction.” Diversity jurisdiction exists when the litigating parties are citizens of different states. Corporations are citizens of the state in which they are incorporated and the state in which their headquarters is located. Notably, though, diversity jurisdiction is permitted only in cases of “complete diversity,” which requires that no party on one side of the litigation’s “v” be a citizen of the same state as any party on the other side. Given the complete diversity requirement, plaintiffs will oftentimes strategically sue along with a co-plaintiff from the same state as the defendant in order to preclude removal under diversity jurisdiction. Congress has taken steps to address these concerns as well. In class actions, defendant corporations rely on the Class Action Fairness Act, another congressional statute, which allows defendants to remove a case to federal court so long as the amount in controversy exceeds $5 million and there is diversity of citizenship. The Class Action Fairness Act does not require complete diversity.

  2. The Concern of Forum Shopping and Congress’s Inaction

Then there is the issue of “forum shopping.” This term refers to plaintiffs seeking fora that offer the choice-of-law and substantive-law combinations most favorable to their case. Plaintiffs also prefer to file claims in their hometown jurisdictions, where juries and judges are more likely to be sympathetic to the hometown plaintiff. Put another way, plaintiffs will choose to sue in locations where the law the court applies is most favorable to them and where courtroom decisionmakers are more likely to favor them. One prominent example of the implications of forum shopping is the application of anti-SLAPP laws in various states and their availability in federal court. SLAPP stands for “strategic lawsuit against public participation.” An anti-SLAPP motion is a state-law procedural device available in many states that allows a defendant to repel and quickly dismiss lawsuits that threaten the defendant’s exercise of free-speech rights on matters of public concern. When such a motion applies, the burden shifts to the plaintiff to show a likelihood of prevailing in the lawsuit. Without such a showing, the plaintiff’s case is dismissed. Because anti-SLAPP motions are not creatures of federal law, different circuits have different interpretations of when they can apply in federal court: some circuits permit the invocation of state anti-SLAPP motions in federal court in diversity cases while others do not. For litigants, the difference between these circuits can mean extra litigation costs and a higher likelihood of settlement. Accordingly, where a lawsuit is filed is a crucial strategic decision that plaintiffs make.

Congress has not fully addressed forum shopping concerns by statute. While Congress has required certain claims to be litigated exclusively in federal court, it has provided few guidelines about which federal court plaintiffs must file in. This is where personal jurisdiction comes in. Personal jurisdiction’s roots are grounded in the Constitution alone, but its newfound application serves in part to curtail forum shopping. The tension between personal jurisdiction doctrine’s roots and its modern significance, along with Congress’s failure to curtail forum shopping by statute, is the premise of this Note.

  B. The History of Personal Jurisdiction
  1. The Consent and Estoppel Model for Corporations

In the nineteenth century, corporations were subject to personal jurisdiction only in the state in which they were incorporated because they did not have the privilege to exist in other states. Other states could agree to recognize a corporation by a process called comity. As part of comity, states could require corporations to consent to being subject to the personal jurisdiction of the state in which they are licensed to conduct business. Accordingly, the estoppel model took form: if a corporation exercised corporate privileges in a state, it would be estopped from arguing that it was not subject to the personal jurisdiction of that state.

This model arose in the 1800s to address the “injustice” that would result if a corporation could not be sued in a forum where it does business but is not headquartered. States passed statutes that required corporations to consent to being sued in the state in exchange for the privilege of doing business there. One of the first cases to recognize this model was Ex parte Schollenberger. The Pennsylvania statute at issue in Schollenberger required corporations to appoint an agent to receive service of process, which would have “the same effect as if served personally on the company within the State.” The statute in question did not explicitly grant jurisdiction, but the Court held that

if the legislature of a State requires a foreign corporation to consent to be ‘found’ within its territory, for the purpose of the service of process in a suit, as a condition to doing business in the State, and the corporation does so consent, the fact that it is found gives the jurisdiction, notwithstanding the finding was procured by consent.

A few years later, the Court explicitly held that this model was constitutional.

Importantly, the consent and estoppel model did not originally require a nexus between the litigation and the forum. Instead, courts have held that the corporation’s consent to be sued subjects the corporation to general jurisdiction in the forum. For example, in Pennsylvania Fire Insurance Co. v. Gold Issue Mining and Milling Co., an insurance company based in Pennsylvania conducted business operations in Missouri and, as required by Missouri law, appointed a Missouri in-state agent for service of process. The insurance company contracted with an Arizona company to insure its buildings in Colorado. After the Colorado property was struck by lightning and significantly damaged, the Arizona company sued the Pennsylvania insurance company in Missouri over the Colorado contracts. The Pennsylvania insurance company argued that it was not subject to personal jurisdiction in Missouri because the contracts did not involve Missouri whatsoever; that is, there was no “nexus” between Missouri and the plaintiff’s claim. The Court disagreed, explaining that “the construction of the Missouri statute thus adopted hardly leaves a constitutional question open.” The appointment of an agent to receive service in Missouri, the Court held, showed the insurance company’s consent to be sued in Missouri. This line of reasoning continued in at least three other cases.

  2. The Erosion: Shift from Corporate Consent to Corporate “Presence”

The explicit corporate consent model could no longer hold up after the Court, in International Textbook Co. v. Pigg, held that states could not impede interstate commerce by preventing out-of-state corporations from exercising corporate privileges in their states. Put another way, the Court forbade states from denying corporations permission to conduct business within their borders. As such, corporations no longer affirmatively consented to being subject to the personal jurisdiction of the states in which they engaged in business activities. To preserve the doctrine, the Court, in International Harvester Co. v. Kentucky, held that when a corporation was “present” in a jurisdiction, it was subject to the personal jurisdiction of that forum through, presumably, an implied consent.

Corporate “presence” proved to be a tricky term to define. Nevertheless, the remnants of the consent model held up well. Pennsylvania maintained its jurisdiction-by-consent framework and was the only state to explicitly inform corporations of what they were agreeing to by doing business in the state. Under Title 42, Section 5301(a) of the Pennsylvania Consolidated Statutes, registration to do business in Pennsylvania (which foreign corporations are required to do) constitutes consent to general jurisdiction in Pennsylvania courts. For a time even after International Shoe, courts continued to enforce the consent and estoppel model in Pennsylvania. For example, the Third Circuit in Bane v. Netlink held that there was no need to conduct a personal jurisdiction analysis (that is, to assess whether the defendant had systematic and continuous contact in the forum) because the defendant corporation had consented to general personal jurisdiction in the state by virtue of the Pennsylvania statute. The court distinguished that situation from another Third Circuit case, Provident National Bank v. California Federal Savings and Loan Association, where the defendant had not registered to do business in Pennsylvania.

However, in late 2021, the Pennsylvania Supreme Court struck down the law requiring out-of-state corporations to submit to jurisdiction as a condition of registering to do business in the state, finding that the statute was incompatible with the Fourteenth Amendment, as interpreted in Daimler. The Pennsylvania Supreme Court’s decision highlights the split over the constitutionality of such statutes. A number of other state high courts have reached similar conclusions, rejecting the constitutionality of jurisdiction-by-consent statutes. A number of others have reached the opposite conclusion, finding that such statutes are constitutional. Still other state high courts have relied on state law to resolve the question. In April of 2022, the Supreme Court granted certiorari to review the Pennsylvania Supreme Court’s decision.

But what about states whose statutes do not explicitly inform defendant corporations that they will be subject to general personal jurisdiction in the state? Nearly every state requires foreign corporations to appoint an agent to receive service of process in the state. Courts were split as to whether these schemes subjected corporations to general personal jurisdiction in the state. Minnesota, for example, has a statutory scheme that allows service of process on a foreign corporation through service on the Minnesota Secretary of State. In that situation, though, the service is valid “only when based upon a liability or obligation of the corporation incurred within this state or arising out of any business done in this state by the corporation prior to the issuance of a certificate of withdrawal.” Various Minnesota state and federal courts have interpreted these statutes as creating consent to general jurisdiction for registered foreign corporations. In Knowlton v. Allied Van Lines, the Eighth Circuit held that the Minnesota statute requiring a registered agent within the state creates general jurisdiction in that state when process is served on that agent. In particular, the court noted that “[t]he whole purpose of requiring designation of an agent for service is to make a nonresident suable in the local courts,” and, as such, “appointment of an agent for service of process . . . gives consent to the jurisdiction of Minnesota courts for any cause of action, whether or not arising out of activities within the state.”

A nearly identical phenomenon has occurred in Iowa. Iowa federal courts, relying on Knowlton, found that an Iowa statute is “almost identical to that of Minnesota.” As such, even though it does not explicitly address jurisdictional consequences of registration, the statute confers general jurisdiction in Iowa courts. The same has been held to be true in Kansas and New Mexico. The Georgia Supreme Court reaffirmed the concept as well. And even after International Shoe fundamentally changed the personal jurisdiction analysis, several circuit courts continued to hold that consent by registration obviated the due process analysis and that states could exercise general jurisdiction based on that consent. This is not to say that there are no federal circuits holding to the contrary. While six circuits have found jurisdiction-by-consent statutes to be constitutional, five circuits reached the opposite conclusion. And two circuits avoided the constitutional question. These decisions are all in flux, given the Supreme Court’s decision in 2022 to grant certiorari and review the Pennsylvania statute.

But “presence” and “consent” are two distinct ways of submitting to jurisdiction. Putting aside the question of whether a corporation “consents” through registering to do business—the question that the Supreme Court will aim to answer in Mallory—there is a simpler way to determine the existence of personal jurisdiction: assessing whether the corporation has engaged in systematic and continuous contact in the forum state. The Supreme Court’s guidance in Ford sheds light on where the Court may be heading on the “presence” front. The hallmarks of due process in the context of the consent and estoppel model are reciprocity and fairness. Ford seemed to reiterate the underlying theme of “reciprocal obligations” between a defendant and the forum as the basis for what makes the exercise of personal jurisdiction “fair.” In that case, because Ford Motor Company enjoyed “the benefits and protections” of state law while doing business in the forum, “allowing jurisdiction in these cases treats Ford fairly.”

  3. The Fourteenth Amendment: Personal Jurisdiction and the Current Doctrine

With the ratification of the Fourteenth Amendment in 1868, the Supreme Court saw fit to provide guidance on personal jurisdiction under a now-federalized due process standard. In Pennoyer, the Court held that a court may exercise personal jurisdiction over a party only if that party was served with process in the state seeking to adjudicate the controversy. As explained above, this ruling was consistent with the consent and estoppel model and the subsequent corporate presence model.

Even after Pennoyer was overruled and replaced with the “minimum contacts” test in International Shoe, the Court remained true to the spirit of the corporate consent and estoppel model. Under the new International Shoe standard, a defendant becomes subject to the personal jurisdiction of a state with which it engages in “minimum contacts.” The test was later refined in Hanson v. Denckla to define “minimum contacts” as contacts that demonstrate a defendant’s “purposeful availment” of the jurisdiction. In other words, a corporate defendant becomes subject to the personal jurisdiction of a forum if it takes a purposeful action to benefit from the privilege of doing business in that forum. Similarly, under World-Wide Volkswagen Corp. v. Woodson, the foreseeability of causing injury in a particular location is not, by itself, enough to subject a corporation to the personal jurisdiction of the courts in that location. Therefore, while the International Shoe test, along with its refinements in Hanson and World-Wide Volkswagen, departed from the Pennoyer service-of-process test, it remained consistent with the consent and estoppel model and the corporate presence model. “Minimum contacts” and “purposeful availment” became the tests for determining whether a corporation was “present” in a forum such that it should be subject to the personal jurisdiction of that forum. Foreseeability of injury, by contrast, is not synonymous with corporate presence and therefore was not a basis for personal jurisdiction: a corporation cannot be “present” in a location, or be said to have “consented” to jurisdiction there, based on foreseeability of injury alone. Applying the new test, the Court in McGee v. International Life Insurance Co. found that a California court could subject a Texas insurance company to its personal jurisdiction even though the insurance company had only a single policy contract with a California resident. The corporation was found to have been present in California because it entered into a contract directly with a California resident.

The adherence to the origins of the consent and estoppel and corporate presence models did not last. In Burger King v. Rudzewicz, the Supreme Court subtly revised its test for personal jurisdiction beyond McGee and bifurcated what was previously a one-step “minimum contacts” test. The Court fractured the original intention of International Shoe, holding that personal jurisdiction can be established only if two elements are met: (1) the defendant engaged in minimum contacts with, or purposeful availment of, the forum state; and (2) subjecting the defendant to jurisdiction would not offend “traditional notions of fair play and substantial justice.” The Court, in a split decision, later created five factors by which to determine whether establishing personal jurisdiction over an out-of-state defendant would violate “traditional notions of fair play.”

Two concurrences are most telling of just how far this doctrine has drifted. Justice Brennan’s concurrence in Asahi v. Superior Court argued that a defendant’s placing of a product into the stream of commerce may very well satisfy the minimum contacts prong but that it would not satisfy the “fair play and substantial justice” prong. That is, showing minimum contacts is not enough. Justice John Paul Stevens, in concurrence, agreed that jurisdiction would be “unreasonable and unfair,” but he did not join Justice O’Connor’s opinion, in part because the Court should not have even considered minimum contacts. He wrote that “it is not necessary to the Court’s decision. An examination of minimum contacts is not always necessary to determine whether a state court’s assertion of personal jurisdiction is constitutional.” Minimum contacts, however, is the key framework under which corporate presence is determined.

III.  THE HISTORY OF THE NEXUS REQUIREMENT

Under current doctrine, a defendant is subject to the specific personal jurisdiction of a forum if the controversy “arises out of” or “relates to” the defendant’s contact with the forum state. This was first hinted at in Shaffer v. Heitner, in which the Court held that quasi in rem jurisdiction (jurisdiction based solely on the presence of a defendant’s property in the forum) is insufficient on its own to establish personal jurisdiction. In Shaffer, plaintiffs filed a shareholder derivative suit against a corporation and its executives. The basis for personal jurisdiction in the selected forum was the defendants’ property in the forum. The Court held the following:

The presence of property in a State may bear upon the existence of jurisdiction by providing contacts among the forum State, the defendant, and the litigation, as for example, when claims to the property itself are the source of the underlying controversy between the plaintiff and defendant, where it would be unusual for the State where the property is located not to have jurisdiction. But where, as in the instant quasi in rem action, the property now serving as the basis for state-court jurisdiction is completely unrelated to the plaintiff’s cause of action, the presence of the property alone, i.e., absent other ties among the defendant, the State, and the litigation, would not support the State’s jurisdiction.

The Court further explained that

although the presence of the defendant’s property in a State might suggest the existence of other ties among the defendant, the State, and the litigation, the presence of the property alone would not support the State’s jurisdiction. If those other ties did not exist, cases over which the State is now thought to have jurisdiction could not be brought in that forum.

In making its determination, the Court acknowledged that it was backtracking from the “long history of jurisdiction based solely on the presence of property in a State” by now requiring a “relationship among the defendant, the forum, and the litigation” in order to establish personal jurisdiction. The Court thus engaged in a policy analysis in departing from the traditional doctrine. It did so presumably to curtail the shareholders’ forum shopping, despite the fact that the defendants were “present” in Delaware by virtue of their property in the state. As such, the Court looked beyond the original understanding of personal jurisdiction. Under the “long history of jurisdiction,” personal jurisdiction could be established “based solely on the presence of property in a State.” Even under the “minimum contacts” test from International Shoe, if a defendant owns property in a state, then that defendant has the minimum contacts necessary to subject it to personal jurisdiction in that forum. Given that Congress has provided no guidance on jurisdiction besides the venue statutes, the proper remedy for defendants faced with lawsuits in locations where they prefer not to litigate is to seek to transfer the case to a more appropriate venue.

In subsequent cases, the Supreme Court attempted to assert that the nexus requirement is, in fact, rooted in the original understanding of the Fourteenth Amendment’s due process standard. In Daimler AG v. Bauman, the Court explained that the concept of “reciprocal fairness” between corporations and the states in which they conduct business implies a nexus requirement. The Court never attempted to argue that the nexus requirement is rooted in the Pennoyer test, but it quoted a passage from International Shoe in support of its argument:

The exercise of th[e] privilege [of conducting corporate activities within a State] may give rise to obligations, and, so far as those obligations arise out of or are connected with the activities within the state, a procedure which requires the corporation to respond to a suit brought to enforce them can, in most instances, hardly be said to be undue.

But the reliance on International Shoe for this proposition is not entirely accurate. While International Shoe blessed the exercise of jurisdiction in cases where the suit arose out of the defendant’s contact with the state, it explicitly left open the possibility of the exercise of jurisdiction without such a nexus requirement:

While it has been held in cases on which appellant relies that continuous activity of some sorts within a state is not enough to support the demand that the corporation be amenable to suits unrelated to that activity, there have been instances in which the continuous corporate operations within a state were thought so substantial and of such a nature as to justify suit against it on causes of action arising from dealings entirely distinct from those activities.

Historically, regarding the personal jurisdiction of corporations, there were instances in which a nexus requirement was explicitly rejected, that is, situations in which the exercise of jurisdiction was upheld despite the lawsuit not arising from the defendant’s contact with the forum.

While the origins of the nexus requirement concern the connection between the defendant’s presence and the litigation filed against it, the nexus requirement has now shifted to require the plaintiff’s connection with the forum state as well. The case cited by recent decisions for this proposition is Helicopteros Nacionales de Colombia, S.A. v. Hall. But importantly, Helicopteros’s understanding of “general jurisdiction” differs from what the term means today. Helicopteros specifically maintains that

[e]ven when the cause of action does not arise out of or relate to the foreign corporation’s activities in the forum State, due process is not offended by a State’s subjecting the corporation to its in personam jurisdiction when there are sufficient contacts between the State and the foreign corporation.

The Court cited Perkins v. Benguet Consolidated Mining Co. for this proposition. In Perkins, the Court found that a foreign corporation not incorporated or headquartered in Ohio could be subject to general jurisdiction in Ohio in a suit filed by a nonresident of Ohio when the cause of action did not arise out of or relate to the forum because “the foreign corporation, through its president, ‘ha[d] been carrying on in Ohio a continuous and systematic, but limited, part of its general business,’ and the exercise of general jurisdiction over the Philippine corporation by an Ohio court was ‘reasonable and just.’ ” In other words, Helicopteros does not require a nexus between the litigation and the forum so long as there is a “continuous and systematic” presence of the corporation in the forum. Presumably, then, Helicopteros requires the plaintiff to show that the litigation “arises out of” or is “related to” the forum only in situations where the defendant is not “continuous[ly] and systematic[ally]” present in the forum.

The root of the confusion regarding the nexus requirement is that it was created before the concepts of “specific” and “general” jurisdiction existed or were properly defined. The Helicopteros Court, relying on Perkins, held that if a corporation’s presence in a forum was “systematic” and “continuous,” then the corporation would be subject to general jurisdiction in that forum such that no nexus is required at all. This is no longer the case today. A major reason is a prophetic article written by two Harvard Law School professors, which influenced the Court significantly. Those professors coined the terms we now use, “general jurisdiction” and “specific jurisdiction,” and defined them in nearly the same way the current doctrine does. The Court in Daimler adopted the policy proposed by the article, holding that a corporation is subject to general jurisdiction only in its place of incorporation and its principal place of business. There is one crucial problem with the article, however: it is not premised on the Due Process Clause of the Fourteenth Amendment; instead, it is premised on crafting the best policy for adjudicating disputes, and it counts forum shopping and convenience for the parties among its major supporting considerations. But the underlying rationale for personal jurisdiction is not convenience or effective policy; those are considerations Congress ought to weigh in statutes dictating proper venues for litigation. The sole consideration in personal jurisdiction jurisprudence is due process.

After the terms general and specific jurisdiction were given their current definitions, the Court in Bristol-Myers Squibb applied the Helicopteros rule without considering the Helicopteros Court’s understanding of general jurisdiction. As a result, it muddied the waters significantly. In Bristol-Myers Squibb, a group of consumers of the drug sued its manufacturer in California state court. Some of the plaintiffs were not California residents. The Court held that the non-California plaintiffs could not sue in California because there was no nexus between their litigation and the forum; the non-California plaintiffs’ claims did not “arise out of” or “relate to” California. Put another way, the Court held that being sued in California by one group of plaintiffs violated the defendant’s due process rights while being sued there by another group of plaintiffs, for the same cause of action arising from the same set of events, did not; the differentiating factor was the plaintiffs’ place of residency. This analysis of the plaintiffs’ connection to the forum is the current understanding of personal jurisdiction, specifically the nexus requirement.

Notably, for the sake of judicial economy, the Bristol-Myers Squibb litigation was consolidated through multi-district litigation, commonly referred to as “MDL,” and pretrial proceedings for both groups of plaintiffs took place jointly in California. Through the MDL process, the two groups of plaintiffs could litigate only pretrial issues together in California without regard to personal jurisdiction. Courts have struggled with the application of personal jurisdiction to MDL proceedings. While personal jurisdiction in MDL is outside the scope of this Note, this set of events illustrates that courts take no issue with altering personal jurisdiction doctrine to promote judicial economy and the MDL process, yet they will continue to unnecessarily protect corporate defendants by rigidly upholding the nexus requirement in cases that are not large enough to consolidate through the MDL process.

IV.  ISSUES WITH THE CURRENT DOCTRINE

As the above discussion shows, personal jurisdiction doctrine has strayed from its Fourteenth Amendment roots and has drifted into a means of curtailing forum shopping. In many circles, this reason alone is enough to demand alteration. However, as I explain below, not only is the current doctrine inconsistent with the original understanding of personal jurisdiction, but it also causes complications in the context of internet sales and stream of commerce cases. In this Part, I detail the current doctrine’s shortcomings. In the following Part, I preview a direction the Court may be heading: a reversion to the original understanding of personal jurisdiction based on the corporate consent and estoppel model.

  A. It Provides Corporations with Protections They Are Not Entitled to

While the inconsistency with prior case law and the historical application of personal jurisdiction doctrine are by themselves sufficient to question the nexus requirement in its current form, the present standard is also problematic from a policy perspective. It provides corporations with additional protections not mandated by the Constitution—and nonexistent under statute—under the guise of due process.

Take Ford as an example. The Court held that the Ford Motor Company can be subject to personal jurisdiction in Montana for a case involving Ford Explorer vehicles because it sells Ford Explorers in Montana such that it “cultivated a market” there. But, presumably, if Ford sold different models in Montana and did not sell Explorers, there would be no jurisdiction over the plaintiff’s case in Montana because requiring Ford to answer a complaint in Montana under those circumstances would violate the Due Process Clause. This framing of the “market” being “cultivated” is shaky at best. Does it matter which year Ford started selling the Explorer in Montana? What if the model in question was older than the models Ford has sold in Montana? Does the trim of the model matter? What about the model’s color?

The Court also held that the plaintiffs’ contacts with Montana are determinative. The Court held that if Ford sells Explorers in Montana, then Montana can decide any case involving an Explorer accident within its borders, regardless of how the vehicle got there, so long as the plaintiff has a connection to Montana. So, no matter how extensive Ford’s contacts with Montana might be, the determinative factor is the plaintiff’s connection with the forum. But what difference does it make to Ford whether the Explorer crash took place in Montana or in Idaho? If Ford already has contact with Montana significant enough that it has cases pending there, Ford would not need to incur any additional expense to defend itself in Montana. Requiring Ford to defend one lawsuit in Montana while allowing Ford to dismiss an identical lawsuit solely on the basis of the plaintiff’s place of residency and connection to Montana is perplexing. Requiring Ford to defend the first lawsuit is no more a violation of Ford’s due process rights than requiring Ford to defend against the second.

Similarly, Bristol-Myers Squibb emphasized that the Michigan plaintiffs’ suing the defendant corporation in California violated the defendant’s due process rights because the Michigan plaintiffs “did not ingest Plavix in California.” Nevertheless, a group of plaintiffs from California was permitted to sue in California for the same cause of action relating to the same drugs. The only difference between the two groups of plaintiffs is where they ingested the drugs. But that fact should not have been determinative. What difference does it make whether a Texan brings his pills on a California vacation and ingests them there or ingests the pills in Texas? It is odd to argue that allowing jurisdiction over the vacationing Texan’s claim comports with the defendant’s due process rights while allowing jurisdiction over the otherwise-identical claim of the Texan who stayed home does not. Figures 1 through 4 below illustrate how the doctrine plays out.

 

Figure 1. Nexus with Forum State Through Plaintiffs’ Residence

Figure 2. Nexus with the Forum State Through Plaintiffs’ Vacation

Figure 3.  No Nexus Despite Defendant’s Continuous

Figure 4.  Scenarios Analyzed Under the Current Doctrine

Scenario #1: The manufacturer purposefully availed itself of CA (it sold pills in CA), and the plaintiffs have a “nexus” to CA (they bought and ingested the pills in CA). Conclusion: CA courts have personal jurisdiction over the CA plaintiffs’ claims.

Scenario #2: The manufacturer purposefully availed itself of CA (it sold pills in CA), and the plaintiffs have a “nexus” to CA (they ingested the pills in CA). Conclusion: CA courts have personal jurisdiction over the TX plaintiffs’ claims.

Scenario #3: The manufacturer purposefully availed itself of CA (it sold pills in CA), but the plaintiffs have no “nexus” to CA (they did not buy or ingest the pills in CA). Conclusion: CA courts do not have personal jurisdiction over the TX plaintiffs’ claims.

The Ford decision also raises questions about the general jurisdiction framework. The Court seems to erode that concept, perhaps unintentionally. If Ford can “cultivate a market” in a forum, then it can be subject to personal jurisdiction for claims relating to that forum, so long as the plaintiff has a connection to the forum as well. As explained above, the “market” being “cultivated” can prove to be a difficult term to define. And requiring the plaintiff’s connection to the forum results in illogical and arbitrary grants and denials of jurisdiction, as illustrated in the above figures.

Under current general jurisdiction jurisprudence, a corporation is subject to general jurisdiction wherever it is “at home,” which has been held to mean its place of incorporation and its headquarters. But why is it any more consistent with due process for a plaintiff residing in Idaho to sue General Motors (incorporated in Delaware and headquartered in Michigan) in Delaware or Michigan than in Texas, where the company has had a factory and has done business since 1954?

The Court’s attempt to show that Ford cultivated a market in Montana begins to bleed into the general jurisdiction framework. It would be a much simpler and more predictable test to ask whether Ford has “minimum contacts” with Montana such that it has “purposefully availed” itself of the privilege of doing business there and, consequently, is subject to personal jurisdiction in Montana.

Justice Sotomayor pointed this out in her Daimler concurrence, noting that limiting general jurisdiction to a corporation’s principal place of business and its place of incorporation would lead to “deep injustice.” She pointed out that “the majority’s approach unduly curtails the States’ sovereign authority to adjudicate disputes against corporate defendants who have engaged in continuous and substantial business operations within their boundaries.” She then called into question the special protections corporations would receive under the newly defined due process requirements: “Put simply, the majority’s rule defines the Due Process Clause so narrowly and arbitrarily as to contravene the States’ sovereign prerogative to subject to judgment defendants who have manifested an unqualified ‘intention to benefit from and thus an intention to submit to the[ir] laws.’ ”

There is some indication based on Ford, the Court’s personal jurisdiction case from 2021, that at least some of the Justices are questioning the existing precedent. In particular, Justice Gorsuch’s Ford concurrence, which was joined by Justice Thomas, expressed skepticism of the “at home” test for corporations regarding general jurisdiction. He wrote, “[I]t seems corporations continue to receive special jurisdictional protections in the name of the Constitution. Less clear is why.”

  B. Difficult Application to Internet Sales Cases

The current doctrine does not adequately address how courts should apply it to cases involving internet sales. When it comes to determining purposeful availment, courts look to whether online conduct was purposefully directed at the forum state. Courts also use a sliding scale to determine whether the contacts constitute purposeful availment. For example, if a website is passive because it only advertises or posts information without any option for users to interact with it, the website may not provide a basis for personal jurisdiction. On the other hand, if the website involves making transactions or entering into contracts through knowing and repeated transmission of files over the internet, personal jurisdiction seems more likely. If the website falls in between these two categories of interactivity, the level of the interactivity and the nature of the website must be examined. In other words, the greater the commercial nature and interactivity associated with the website, the more likely the website operator engaged in purposeful availment of the forum state.

As a corollary to the sliding scale, courts have also recognized that tortious conduct that takes place online can subject a defendant to personal jurisdiction. If the defendant’s actions were intentional, uniquely or expressly aimed at the forum state, and caused harm in the forum state, personal jurisdiction is proper there, because the defendant is said to have “purposefully directed” actions at the forum state. A refined test examines whether the defendant knew and intended the consequences of its actions to be felt in the forum state, not just that the defendant knew where the plaintiff lives. That is, if the mention of the state is incidental and not included for the purposes of having the consequences felt in the forum state, there is likely no personal jurisdiction there.

For example, suppose that an Idaho newspaper, which distributes only in Idaho and the bordering towns in Washington State, publishes a story defaming a California celebrity. Can it be said that the newspaper intended the consequences of its story to be felt in California, given that it does not distribute in California? The newspaper has no contacts with California, so how can it be said that the newspaper purposefully availed itself of the privilege of doing business in California?

The Ford case adds an element that muddies the waters even more. What if a corporate defendant “cultivates a market” in a forum? Under Ford, plaintiffs would be permitted to sue in that forum so long as they have a nexus to that market. In the above hypothetical, would the California celebrity be permitted to sue in Washington State because of the market the newspaper cultivated there? Also, as explained above, it is difficult to define the product for which a company cultivates a market, and the framing of the market being cultivated is highly malleable. For example, does Amazon cultivate a market for delivery in California? Or is the cultivated market analyzed product by product, as it was in Ford? Assuming the latter, what is the justification for looking at the plaintiff’s connection to the forum to assess the due process rights of the defendant?

  C. Difficult Application to Stream of Commerce Cases

There is no agreed-upon framework by which to address stream of commerce cases. A “stream of commerce” case refers to a situation where a manufacturer sells products to a regional distributor and the regional distributor sells the products elsewhere. For example, assume that a car company manufactures its cars in China and then sells the fully manufactured cars to a distributor in California but to no distributor in Oregon. Then assume that the California distributor sold the cars to a dealership in Oregon, and an Oregon resident bought a car from that dealership. If the car malfunctions, may the Oregon resident sue the manufacturer in Oregon? The question is whether the car manufacturer engaged in minimum contacts with, or purposefully availed itself of doing business in, Oregon through the stream of commerce that brought its product to Oregon.

Justice White, in dicta in World-Wide Volkswagen, suggested that personal jurisdiction may exist over a manufacturer in a forum even if the manufacturer itself did not sell in that forum; he wrote that personal jurisdiction would exist in such a situation only if the sale in the forum was “not simply an isolated occurrence, but arises from the efforts of the manufacturer or distributor to serve, directly or indirectly the market for its product.”

The Court has been unable to agree on what these instructions mean in practice. Justices Breyer and Alito understand them to refer to the number or substantiality of the sales in the forum. Justice O’Connor’s plurality opinion in Asahi held that there would be personal jurisdiction over a defendant manufacturer only if Justice White’s criteria were satisfied and the manufacturer engaged in “additional conduct . . . [that indicates] an intent or purpose to serve the market in the forum state.” This could include designing the product for the forum state, advertising the product in the forum state, or establishing channels for providing regular contact with consumers in the forum state. Justice Brennan, writing separately in Asahi, indicated that if the manufacturer foresees and benefits from the contact with the forum, personal jurisdiction is satisfied, even without an intentional act targeting the forum.

The Court’s split continued in McIntyre, in which Justice Kennedy’s plurality opinion held that a foreign manufacturer that sold products to a U.S. distributor was not subject to personal jurisdiction in the states to which the distributor subsequently distributed them. Justice Ginsburg dissented and wrote that, in her view, there is personal jurisdiction in a state when a manufacturer chooses a distributor who distributes to the entire United States. Justice Breyer’s concurrence, joined by Justice Alito, explained that personal jurisdiction should depend on the number of products sold in the state.

Figure 5 synthesizes current stream of commerce doctrine:

Figure 5.  Current Stream of Commerce Doctrine

The Ford case presented a slightly nuanced version of the hypothetical discussed above. In Ford, the vehicle that malfunctioned was designed, manufactured, and sold outside of Montana. Later resales and relocations by consumers brought the vehicle to Montana, where it malfunctioned. As such, it was only through the “stream of commerce” that the particular vehicle at issue was brought to Montana. The Court united in holding that Ford’s advertising, selling, and servicing of its vehicles in Montana constituted sufficient purposeful availment, and it held that the plaintiffs had a sufficient nexus to the forum simply because the car malfunctioned in Montana, even though they did not purchase the vehicle there. However, if the stream of commerce had instead carried the vehicle, and the plaintiffs, to the neighboring state of Idaho, then presumably there would be no nexus and no personal jurisdiction in Montana, even though the facts, and Ford’s purposeful availment of the Montana forum, would be identical. It is unclear why Ford’s due process rights would be violated if an Idaho plaintiff sues Ford in Montana but would not be violated if a Montana plaintiff who purchased Ford’s product in Wisconsin and drove it to Montana sues in Montana.

Figure 6.  The Ford Litigation

V.  WHERE THE COURT IS HEADED AFTER FORD

A reversion to the constitutional underpinnings of personal jurisdiction doctrine means removing the corporate protections now available under the guise of the Fourteenth Amendment. The easier case is for removing the requirement that plaintiffs show a nexus between their claims and the forum when suing a corporation in a forum in which the corporation has systematic and continuous contact. As explained above, the nexus requirement came into being with the explicit understanding that it applies only if the corporate defendant has no systematic and continuous contact with the forum. In other words, the case is easy for overruling Daimler and abandoning its narrow understanding of general jurisdiction, finding instead that general jurisdiction exists wherever a corporation implicitly consents to personal jurisdiction through systematic and continuous contact.

However, some Justices may propose going a step further and removing the distinction between specific and general jurisdiction altogether, as that distinction is inconsistent with the Court’s original understanding of personal jurisdiction. Accordingly, plaintiffs would be permitted to pursue causes of action in any forum in which a corporation engages in minimum contacts sufficient to constitute purposeful availment, without showing a nexus to the litigation. Corporate defendants would be permitted to transfer cases under the venue statutes alone.

In situations where a corporation purposefully availed itself of a forum only in a previous, one-off occurrence, a claim brought in that forum would have to allege conduct that took place during that period of purposeful availment. That is, a corporation would not be able to retroactively undo its purposeful availment.

I have already explained why this model is consistent with the original understanding of personal jurisdiction. In many circles, this reason alone would be sufficient to adopt it. However, in this Part I detail why a reversion to this original understanding is good policy as well. It increases fora for plaintiffs, makes for a more predictable personal jurisdiction doctrine (especially in cases involving internet sales and the stream of commerce), and leaves room for Congress to act should it find the need to.

  A. Removing the Distinction Between General and Specific Personal Jurisdiction

One avenue of development post-Ford envisions significantly expanding general jurisdiction as it is understood today. This would mean overruling the holding in Daimler, which permits general jurisdiction over a corporation only in its principal place of business and its place of incorporation. As explained above, the case for reverting to pre-Daimler jurisprudence, under which general jurisdiction existed in each location where a corporation engaged in continuous and systematic contact with the forum, is easy, because that was the original understanding of personal jurisdiction. Nothing in the original understanding of personal jurisdiction, the early cases dealing with the doctrine, or the Fourteenth Amendment compels affording corporations protection from defending lawsuits in fora other than their place of incorporation and headquarters. Such protections limiting personal jurisdiction can come only from statutes, and Congress has not legislated in the arena of personal jurisdiction.

However, assessing personal jurisdiction solely through the lens of purposeful availment reveals that the concept of general jurisdiction is unnecessary, especially after it was eroded in Ford. Ford held that if a corporation systematically serves a market, and the plaintiffs are from that forum state, it is as if there is general jurisdiction for those specific plaintiffs in the forum. But if a corporation is already prepared to defend against lawsuits in a particular jurisdiction, it does not offend due process rights to require the corporation to defend against all lawsuits in that jurisdiction, subject to transfer of venue “in the interest of justice.”

Furthermore, as Douglas D. McFarland points out in his scholarship,

The original, unpolished International Shoe test is a one-step, unitary test. A court is not required to find “minimum contacts” and “fair play and substantial justice.” Neither is a court required to find “minimum contacts” or “fair play and substantial justice.” The opinion requires a court find “minimum contacts with [the state] such that the maintenance of the suit does not offend ‘traditional notions of fair play and substantial justice.’ ”

Given this understanding, it becomes clear that it is more consistent with the Due Process Clause and International Shoe to allow jurisdiction over all claims in a forum where the defendant corporation has engaged in “minimum contacts.” There need be no distinction among the types of claims permitted to originate in that forum. In other words, personal jurisdiction in a forum should be defined by defendant, not by claim. If a defendant is subject to personal jurisdiction in a particular location, then that defendant should be subject to personal jurisdiction in that location for all claims and should be permitted to transfer cases only under the guidelines provided by Congress. The following figure is identical to Figure 4 above, referencing the three scenarios in Figures 1, 2, and 3, but it also shows how the ultimate conclusion would change should Daimler be overruled.

 

Figure 7.  Scenarios Analyzed Under the Consent and Estoppel Model

Scenario #1: The manufacturer purposefully availed itself of CA (it sold pills in CA), and the plaintiffs have a “nexus” to CA (they bought and ingested the pills in CA). Under current doctrine, CA courts have personal jurisdiction over the CA plaintiffs’ claims. Without Daimler, CA courts have personal jurisdiction because the manufacturer purposefully availed itself of California law.

Scenario #2: The manufacturer purposefully availed itself of CA (it sold pills in CA), and the plaintiffs have a “nexus” to CA (they ingested the pills in CA). Under current doctrine, CA courts have personal jurisdiction over the TX plaintiffs’ claims. Without Daimler, CA courts have personal jurisdiction because the manufacturer purposefully availed itself of California law.

Scenario #3: The manufacturer purposefully availed itself of CA (it sold pills in CA), but the plaintiffs have no “nexus” to CA (they did not buy or ingest the pills in CA). Under current doctrine, CA courts do not have personal jurisdiction over the TX plaintiffs’ claims because there is no nexus. Without Daimler, CA courts have personal jurisdiction because the manufacturer purposefully availed itself of California law.

  B. Clarifying the Doctrine for Internet Sales Cases

A straightforward and predictable test for personal jurisdiction solves issues relating to internet sales cases. Internet sales would be analyzed in the same way as all other sales cases: if the seller does business in the forum, then the plaintiff should be permitted to sue the seller in that forum. Doing business means selling a product in that forum. If a seller wants to avoid being subject to personal jurisdiction in a particular forum, then it can choose not to sell in that forum.

Take, for example, a 2021 Third Circuit case involving a lawsuit against Imgur and Reddit, two internet companies, alleging that the companies were complicit in the unauthorized use of the plaintiff’s likeness when a photo of her in a convenience store began circulating on their websites in an advertisement for erectile dysfunction and dating websites. Living in Pennsylvania, the plaintiff decided to sue Imgur and Reddit in Pennsylvania, despite knowing neither the convenience store’s location nor how the image had been posted online. Both companies conceded that they had purposefully availed themselves of the privilege of doing business in Pennsylvania. They nevertheless argued that their minimum contacts with Pennsylvania were not related to the litigation; in other words, they argued that there was no nexus between the plaintiff’s claim and the forum. The Third Circuit agreed with the District Court’s dismissal for lack of personal jurisdiction.

There are troubling implications to this holding. First, a plaintiff is required to do additional research before the opening of discovery to determine where online harm originated. The court found unconvincing the argument that personal jurisdiction is proper in Pennsylvania because that is where the harm took place. Second, the court’s attempted distinction from Ford draws an arbitrary line. Just as in Ford, in which the motor company “systematically served a market in Montana and Minnesota for the very vehicles that the plaintiffs allege malfunctioned and injured them in those States,” here, the internet companies systematically served a market for the very product that was used to cause the harm: the platform on which the unauthorized posting of the plaintiff’s photo took place.

In other words, distinguishing the type of product that the market was “systematically served” with is unpredictable and malleable. A much more straightforward approach would be to look only at whether Reddit and Imgur continuously and systematically served the market, which they in all likelihood did. And even if they did not continuously and systematically serve the market, they certainly had minimum contacts with Pennsylvania such that they purposefully availed themselves of the state. The burden would then be on the defendant corporations to move for a transfer of venue. The presumption should be that due process is not violated because of the companies’ purposeful availment within the forum. If the corporate defendants seek to transfer venue, they would need to provide evidence for why there is a more suitable venue.

Accordingly, removing the nexus requirement and analyzing personal jurisdiction solely through purposeful availment—and assessing whether the online activity is in fact a purposeful availment—resolves the issue. To be clear, a company may “cultivate a market” in a forum without ever setting foot in that forum. As such, in the case described above, the defendants Imgur and Reddit would be subject to personal jurisdiction in the selected forum because of their admitted purposeful availment and presence in the forum.

This approach is consistent with how courts have analyzed analogous distribution cases: under current doctrine, a corporate defendant can expect to be subject to personal jurisdiction in a venue in which a “substantial number of copies are regularly sold and distributed.” In Keeton v. Hustler Magazine, Inc., the Supreme Court upheld the exercise of jurisdiction in New Hampshire over a nonresident magazine publisher defendant. The Court reasoned that although the magazine publisher had a nationwide audience and had not targeted the forum particularly, it should reasonably anticipate an action “wherever a substantial number of copies are regularly sold and distributed.” The same should be true when it comes to internet sales. Therefore, if a corporation wants to avoid being subject to personal jurisdiction in a particular state, it may cease its business operations in that state.

  2. Clarifying the Doctrine for Stream of Commerce Cases

A reversion to the original understanding of personal jurisdiction would simplify the analysis in stream of commerce cases. Removing the nexus requirement shifts the analysis solely to determining whether the defendant purposefully availed itself of a forum. Courts would not ask whether the plaintiff’s alleged harm has a connection to the forum; such questions are venue concerns, not due process concerns.

Instead, courts would assess, as the Court did in Ford, whether the manufacturer “cultivated a market” in the forum state such that it is fair and just to require the defendant to defend a lawsuit in that jurisdiction. As with internet sales, if a manufacturer does not want to be subject to personal jurisdiction in a particular state, it may direct its distributor not to distribute products into that state. Absent such an instruction, if the distributor supplies products to a state, the manufacturer has minimum contacts with that state that constitute purposeful availment. Under the original understanding of personal jurisdiction, any plaintiff would be permitted to sue the manufacturer in that state, irrespective of whether the plaintiff’s cause of action arises from the manufacturer’s contacts. The manufacturer would then be permitted to seek transfer of the case under the venue statutes.

  3. Addressing Forum Shopping

It is clear that reverting to the original personal jurisdiction doctrine would open the door to forum shopping. As an initial matter, one must ask whether the negative effects of forum shopping warrant such significant constitutional maneuvering to counter the practice. Perhaps a free market that permits forum shopping is beneficial, as some scholars have argued. Forum shopping may cause beneficial competition among states to alter their laws if they want to stimulate businesses. Just as a company considers taxes, state law, and other factors, so too should it weigh whether it is willing to be subject to personal jurisdiction in a state where it wants to maintain a presence.

Some scholars have pointed out that the possibility of forum shopping provides judges with incentives to make the law more pro-plaintiff and that these judges’ actions have the possibility of creating wide-ranging effects, given that their courts will likely attract a disproportionate share of cases. Professor Dan Klerman points to several examples of this phenomenon taking place, the most prominent being the patent-plaintiff-friendly Eastern District of Texas and plaintiff-friendly mass tort jurisdictions such as Madison County, Illinois. Both of these venues have seen a dramatic uptick in the number of claims filed there.

As a result of these observations, scholars conclude that “[c]onsideration of forum selling helps justify constitutional constraints on personal jurisdiction. Without constitutional limits on jurisdiction, some courts are likely to be biased in favor of plaintiffs in order to attract litigation.” However, these policy considerations are for Congress to weigh. The solution to these concerns is not judge-made constitutional limits on jurisdiction, because the Constitution is silent on forum shopping. Instead, the solution may be statutory limits on jurisdiction.

It bears mentioning that parties engage in forum shopping in drafting forum-selection and choice-of-law clauses, which require any dispute arising from a transaction to be filed in a particular forum and governed by particular law. If the Constitution prohibits forum shopping, it presumably prohibits forum shopping no matter the context and regardless of whether both parties engage in it. Given that courts have consistently upheld forum-selection and choice-of-law clauses, it cannot be said that forum shopping is per se unconstitutional.

More fundamentally, it is important to remember that personal jurisdiction is rooted in due process. Those who argue that it is the Court’s, rather than Congress’s, job to curtail forum shopping assert that impartial judging is a core concept of due process and, as such, personal jurisdiction is the proper route to address these concerns. However, this argument fails to consider that corporations can avoid being subject to personal jurisdiction wherever they fear the judging would not be impartial. Therefore, so long as the defendant purposefully availed itself of a forum, the defendant should be prepared to face a lawsuit in the forum, irrespective of whether the Constitution permits forum shopping.

Furthermore, should forum shopping cause such significant burdens, or should the public demand reform, Congress has authority to act. Congress’s venue statutes currently permit parties to seek transfer of venue in cases of forum shopping, and Congress has permitted removal to federal court specifically to address bias in state courts. In cases where the defendant is subject to personal jurisdiction in a forum, the defendant may, if it is more convenient for witnesses or collecting evidence, move to transfer the case to a different jurisdiction. Various articles have also proposed statutes codifying personal jurisdiction.

  4. Miscellaneous Considerations

There is merit to the argument that corporations should be permitted to organize their business strategically to avoid lawsuits in unfavorable locations. This is especially true in situations where the removal statutes do not permit a corporate defendant to remove a proceeding to federal court. The direction the Court is heading comports with the notion of strategic business organization. Corporations may choose to engage in business in locations by considering whether the risk of liability is worth the profits of doing business in the forum. Just as corporations assess tax, employment law, and various other factors, so too should personal jurisdiction be another factor. This potential future course would undoubtedly increase the fora in which a business may be sued, and it may encourage states to pass laws that are more plaintiff-friendly. The free market should correct any radical laws because corporations can choose whether to engage in commerce in a particular forum based on the laws that forum enacts.

True, without Daimler, corporations that engage in internet sales would be subject to personal jurisdiction in many more locations than they otherwise would have been. But these corporations can decide as a matter of corporate policy not to sell to individuals located in a certain jurisdiction if they do not wish to defend lawsuits there. To be clear, a corporation would not need to suspend access to its passive website in certain locations to avoid being subject to personal jurisdiction there. Making a website available solely for consumer browsing (not purchases) in a certain location would not constitute “purposeful availment”: the website would be passive in nature, and the corporation’s contact with website visitors would result from the unilateral actions of the browsing users, which the Court has already held insufficient to establish purposeful availment. For similar reasons, a corporate defendant would not be subject to personal jurisdiction in a forum if, by the stream of commerce, one of its products makes its way into a state where the corporation does not “serve [the] market.” Therefore, to avoid being subject to personal jurisdiction in certain locations, a corporation can decide not to cultivate a market in the locations where it wants to avoid defending lawsuits.

Another consideration is the discretionary nature of transfer of venue and choice of law. Review of personal jurisdiction is a question of law conducted de novo; transfer of venue, by contrast, is discretionary and reviewed only for abuse of discretion. The concern is that plaintiffs would be encouraged to forum shop and to choose venues that are less willing to transfer cases out of their districts. While a valid concern, it is not one that should factor into a constitutional analysis of personal jurisdiction; Congress may feel compelled to alter the venue statutes instead. And despite the discretionary nature of venue transfer, appellate courts have not been shy about reversing denials of transfer, even under the abuse of discretion standard.

CONCLUSION

This Note began with an analogy to sports teams preferring to play in front of their home crowds. There is no question that teams have such a preference. But denying that preference does not violate anyone’s rights. Surely the Los Angeles Lakers, because the team plays in the National Basketball Association, must play away from home across the nation, including in front of less-than-welcoming Boston fans when they face the Celtics.

When a corporation conducts business in a particular location, it avails itself of that location. Under the traditional corporate consent and estoppel model, the privilege of conducting business creates a reciprocal obligation on the corporation to subject itself to the jurisdiction of that location, irrespective of who sues it there.

While personal jurisdiction purports to assess whether a defendant should be forced to defend a lawsuit in a forum due to the defendant’s contacts with that forum, the doctrine has shifted to requiring the plaintiff to show a connection to the forum, even if the defendant has substantial contacts with the forum. This Note has explained the history and development of personal jurisdiction doctrine and shown how the Court has narrowed where corporate defendants are “at home.” Consequently, the Court requires the plaintiff to satisfy the nexus requirement when suing in locations besides the corporation’s “home.” In doing so, this Note revealed that the evolution of personal jurisdiction doctrine is a departure from the traditional corporate consent and estoppel model and is a result of the Court substituting its judgment for Congress’s regarding the need to curtail forum shopping. It offered a prediction of where the Court may be headed: toward an expansion of corporate personal jurisdiction—by ditching Daimler and nixing the nexus requirement.

 

96 S. Cal. L. Rev. 207


*  University of Southern California Gould School of Law, 2023. B.A., University of Southern California, 2020.

Toward a New Fair Use Standard: Attributive Use and the Closing of Copyright’s Crediting Gap

A generation ago, Judge Pierre Leval published Toward a Fair Use Standard and forever changed copyright law. Leval advocated for the primacy of an implicit, but previously underappreciated, factor in the fair use calculus—transformative use. Courts quickly heeded this call, rendering the impact of Leval’s article nothing short of seismic. But for all of its merits, Leval’s article failed to acknowledge or consider the salience of another largely underrecognized and heretofore unnamed factor: attributive use. 

This Article attempts to address this oversight, particularly when viewed in light of the current law of crediting in the twenty years since Dastar Corp. v. Twentieth Century Fox Film Corp., the Supreme Court decision that permanently foreclosed the most common method by which creatives had previously vindicated their crediting interests—the Lanham Act’s prohibition on false designations of origin. After assessing the recent body of empirical work highlighting both the quantitative and qualitative importance of attribution to authors and the value of crediting to consumers, investors, and the broader public, the Article scrutinizes the current state of attribution rights to argue that, post-Dastar, the remaining legal mechanisms for securing crediting, including private contracting, have proven insufficient.

To address this crediting gap in the law, the Article considers, but rejects, calls to overturn Dastar or enact an independent general attribution right under the Copyright Act. Instead, I propose a more modest solution that needs no congressional action. Like transformative use, attribution promotes progress in the arts by motivating and incentivizing authorial production. Moreover, as this Article’s careful exegesis of the relevant case law demonstrates, issues of crediting have long shaped the contours of the fair use defense. As such, I advocate for the formal adoption of attributive use as an express consideration in the fair use calculus. The Article therefore builds on Leval’s influential work and calls for the formulation of a new fair use standard that more closely calibrates the defense with the utilitarian goals of our copyright regime.

INTRODUCTION

A.  Pierre Leval’s Toward a Fair Use Standard and Copyright’s Crediting Gap

Thirty years ago, Pierre Leval penned what would become one of the most influential pieces of legal scholarship of the past generation. As a federal judge who had by then served for twelve years on the Southern District of New York, Leval crafted Toward a Fair Use Standard after watching the Second Circuit eviscerate two of his copyright decisions, including his finding that a biographer had made fair use of J.D. Salinger’s unpublished letters. In reflecting upon these repudiations, Leval critiqued his erstwhile approach to the fair use calculus as excessively ad hoc and sought, instead, to fashion a series of governing principles to guide application of the doctrine in future cases. In so doing, Leval scrutinized the metes and bounds of the copyright monopoly holistically and posited that infringement claims and fair use defenses both serve the overarching utilitarian goal of the copyright regime, which “stimulate[s] activity and progress in the arts for the intellectual enrichment of the public.” With the premise that fair use is not an exception to copyright protection but, rather, a part of the design of copyright law to encourage creativity, he then identified transformative, or productive, use—use that “employ[s] the [original] matter in a different manner or for a different purpose”—as a driving concern in the calculus.

Toward a Fair Use Standard quickly precipitated a sea change in the way courts approached application of the fair use doctrine. Only a few years after its publication in the Harvard Law Review, the Supreme Court drew heavily on Leval’s article in famously holding that the transformative nature of 2 Live Crew’s unauthorized parody of Roy Orbison’s Pretty Woman insulated the song from infringement liability under the fair use doctrine. In the process, the Supreme Court elevated the standing of Leval’s work, enshrining it as a seminal tome on copyright law—one that took its rightful place beside the actual text of section 107 of the Copyright Act in guiding fair use determinations. Toward a Fair Use Standard continues to enjoy a prized place in the copyright firmament. In 2021, in its first fair use pronouncement since Campbell v. Acuff-Rose, the Supreme Court liberally sprinkled citations to Leval’s article throughout its opinion determining that Google’s exploitation of Oracle’s copyrighted Sun Java application programming interface (“API”) constituted fair use. Transformation once again lay at the heart of the analysis, as the Court posited that Google’s actions helped “expand the use and usefulness of Android-based smartphones. . . . [by] creat[ing] a new platform that could be readily used by programmers” to develop new programs in the Android environment.

It should come as no surprise then that, in the words of one observer, “Leval’s commentary on the centrality of transformativeness in interpreting fair use decisively changed the way the copyright doctrine was interpreted. He leveraged the forum successfully to accomplish what he had been unable to accomplish thereto in judicial decision-making.” This view is not merely anecdotal or impressionistic. Spurred by Leval’s article, the rapid rise of transformation as a crucial, if not decisive, factor in fair use decisions is nothing short of stunning. A recent empirical study determined that almost ninety percent of cases now ultimately turn, at least in part, on determinations of transformative use.

Thus, with Toward a Fair Use Standard, Leval achieved what most authors of law review articles can only dream of. Of course, his deserved reputation as a thoughtful jurist no doubt assisted in propelling his proposal, and his article’s placement in the venerable Harvard Law Review did not hurt either. But, above all, his prescient thoughts on the limitations on copyright protection embodied in the fair use doctrine made eminent sense in an era when courts were just beginning to grapple with the digital implications of a Copyright Act written before the advent of the modern internet.

To be sure, Leval’s work is not without its critics—in industry, on the bench, and in the bar. These interventions have largely questioned the primacy that Leval’s article and interpreting courts have given to transformative use. Yet for all of its merit, Leval’s article wholly ignored one area of grave importance in both the utilitarian logic of copyright law and, implicitly, the extant jurisprudence on fair use: attribution. Crediting serves as a prime motivator for authorial production, goes to fundamental issues of equity in our copyright regime, and has enjoyed a tacit (if never express) role within the fair use calculus. Nevertheless, it finds no place in Leval’s article, a fact that Leval’s critics have ignored as well. Indeed, even though Leval dedicated a portion of his article to pondering (and rejecting) the value of “other” fair use factors not expressly detailed in section 107’s text—including good faith, artistic integrity, and privacy—he never expressly discussed or even implicitly addressed the issue of crediting. In short, attribution appears to play no role in Leval’s analysis of fair use.

Of course, crediting was not Leval’s focus. Nevertheless, this Article attempts to address and assess this oversight, particularly when read in light of the current law of crediting in the twenty years since the Supreme Court announced its decision in Dastar Corp. v. Twentieth Century Fox Film Corp., which permanently foreclosed the most common method by which creatives had previously vindicated their crediting interests—the Lanham Act’s prohibition on false designations of origin. Specifically, this Article proposes to supplement Leval’s work—which led to the formal adoption of transformative use as a critical part of the first factor in the fair use analysis—by advancing a proposal for the explicit introduction of attributive use into the fair use balancing test.

B.  Giving Credit: Toward an Attributive Fair Use Standard

Give credit where credit is due. It is a principle widely embraced in our social norms. But like many of the things we learned in kindergarten, adherence to the precept is far from perfect. Moreover, while the exhortation remains a universal aspiration, it enjoys little legal bite. Indeed, the lack of development in the law of crediting is nothing short of surprising. As Jane Ginsburg has argued, “Of all the many counter-intuitive features of US copyright law—and they abound—the lack of an attribution right may present the greatest gap between perceived justice and reality.”

If the Copyright Act and our broader intellectual property regime seek to serve their constitutionally mandated purpose—to promote progress in the arts—by incentivizing the creation of works of authorship, they should ideally respond to what actually motivates creators. To be sure, the exclusive rights of reproduction, distribution, public performance, public display, and derivatization secured for authors under section 106 of the Copyright Act appeal to authorial incentives in at least two ways. First, they serve utilitarian interests by providing monetary rewards to creators through the licenses required to exploit their work. Second, they promote natural law and the dignitary interests of authors by enabling them to decide whether (and under what terms) their works are made available to the public at all. But the rights of reproduction, distribution, public performance, public display, and derivatization are not the only entitlements that galvanize creators. Specifically, authors gain value—both monetary and otherwise—through other mechanisms. For example, building one’s brand and reputation for creative excellence—achieved only through attribution—is a powerful means toward earning long-term economic rewards and satisfying the dignitary interests that can also motivate authors. As a result, traditional copyright enforcement is not necessarily profit maximizing for creators, and there is often a disconnect between how creators feel about the unauthorized exploitation of their work and how distributors/publishers might feel about it. Meanwhile, authors who may value attribution over enforcement of their copyrights are not necessarily immune to the temptations of the marketplace or more noble than the rest of us. To be sure, some authors may create solely to fulfill their own needs or to edify, amuse, or impact others, and they may merely seek recognition rather than profit. But there is also a monetary component to proper attribution. In the long run, attribution promotes one’s name and its standing, a phenomenon that spurs economic demand in a variety of forms—whether it is for further creative production, appearances, endorsements, or ancillary activities. Indeed, attribution is part and parcel of the economic equation of copyright and its incentive structure.

With all of this in mind, if our intellectual property regime serves to encourage progress in the arts by motivating and incentivizing authors, the absence of attribution rights would appear to leave the regime wanting. The Lanham Act, which for a time served as a powerful vehicle for protecting attribution rights, can no longer do so as a result of the Supreme Court’s decision in Dastar two decades ago. Meanwhile, for a variety of reasons we shall explore, alternative legal theories for protection have proven inadequate to provide for general crediting rights. Thus, current law provides little protection for crediting. And it is this crediting gap—what it means, how it came into existence, and how it might be solved—that is the focus of this Article.

Before proceeding further, two important caveats bear mentioning. As a preliminary matter, it is important to lay out what this Article means when it talks about giving credit. Put in the traditional parlance of moral rights, crediting issues can take two general forms. First, there is the positive right of attribution—the ability to have one’s name associated with one’s work. Second, there is the negative right of attribution—the ability to prevent having the work of another falsely attributed to you. The former right is about giving credit when credit is due and forms the subject matter of this Article. The latter phenomenon, while important and ripe for further analysis, is about misattribution and therefore falls outside of the scope of this analysis.

In addition, as its title suggests, this Article seeks to build directly on Pierre Leval’s influential article. Indeed, it is no less ambitious than Leval’s piece: it aims to highlight the fact that considerations of attributive use already permeate (with good reason) the jurisprudence applying and interpreting section 107 of the Copyright Act, and it seeks to alter the way that courts formally frame the fair use calculus going forward. At the same time, however, the author also understands that he is no Pierre Leval and, as detailed infra, is prone to delusions.

With these caveats in mind, the Article’s analysis begins by scrutinizing the value of crediting. Rather than resting on the mere intuition that attribution matters, Part I delves into both the quantitative and qualitative literature on crediting to determine just how and to what extent crediting fuels authorial motivations and serves broader societal interests. We start with an anecdote to illustrate how authorial reactions to infringement both with and without attribution can differ radically. In the process, we identify and critique the peculiar disconnect between our current legal regime—which fetishizes protection against infringement over failure of attribution—and the economic and dignitary interests of at least a sizeable percentage of creators. Next, we examine the burgeoning scholarship in law, economics, psychology, and organizational behavior to assess and interrogate the value of attribution to creators. As we will see, a growing body of empirical work supports the intuition that crediting matters—a lot—and, in fact, authors are often willing to forgo substantial amounts of compensation in return for securing attribution. As such, crediting can and does play a primary role in motivating authorial production. At the same time, a fulsome attribution regime would not merely serve authorial interests. As I argue, it would also inure to the benefit of consumers, investors, and society at large by promoting the efficiency of resource allocation in intellectual property-driven fields (thereby benefiting investor welfare and broader economic interests in optimized markets operating with superior information), reducing consumer search costs, advancing the organizational integrity and coherence of literary and artistic endeavors, and even enhancing public support for the protection of intangible rights such as copyright by bringing the legal regime governing creative works into greater harmony with norms of equity and by humanizing creativity-driven products.

Having established the value of crediting to both authors and the public, this Article turns its attention to assessing the current state of attribution law. Part II therefore begins by exploring the rationale and implications of the Dastar holding and detailing the ways in which the decision effectively ended the ability of creators to bring attribution-related reverse passing off claims under the Lanham Act. Next, we identify the crediting gap left in Dastar’s wake by examining what alternative theories of liability remain to vindicate crediting interests post-Dastar and how said theories have fared in the intervening two decades. In the process, we scrutinize the extant jurisprudence on false advertising claims under the Lanham Act, attribution claims under the Visual Artists Rights Act (“VARA”), claims for falsification and removal or alteration of Copyright Management Information (“CMI”), state unfair competition law, and private contracting. As this Article’s analysis suggests, these theories are insufficient to protect the crediting rights of the vast majority of creators. False advertising claims can, at best, only provide relief to famous authors and only in circumstances of material reliance by consumers in purchasing decisions. VARA claims suffer from myriad subject matter constraints that make the protections available to only a small corner of the creative universe (certain types of visual art works that are not works made for hire, only in originals or prints of two hundred or fewer, potentially only in digital form). Claims pertaining to falsification, removal, or alteration of CMI have a high double-scienter requirement that has made relief unlikely. Meanwhile, the statutory scheme regarding CMI primarily serves the goal of infringement prevention rather than the protection of any independent interests that authors may have in crediting. Finally, state unfair competition laws have suffered either from federal preemption under Dastar or from the fact that they are viewed as coterminous with Lanham Act protections.

In the end, therefore, we are left only with private contracting for relief. And while a few notable industries (such as Hollywood and academia) have implemented meaningful attribution regimes, private ordering suffers from the leverage and bargaining disparities inherent to contractual solutions. Indeed, as we demonstrate, the history of private crediting systems is riddled with instances where power dynamics trump actual origination. Drawing on several notable examples where crediting abuses fell along racial, gender-based, and socioeconomic fault lines, I argue that continued reliance on such systems could have particularly deleterious implications for social justice issues in intellectual property. In short, therefore, there exists a sizeable crediting gap—a vast disconnect between the high value of attribution to authors and the public and the low value given to it by way of legal protection. Moreover, without the legal vesting of more robust attribution entitlements, ongoing reliance on the pure operation of the marketplace for crediting determinations could continue to have vexing consequences.

Part III considers potential reform to the current state of affairs. Although I caution that social value should not always translate into legal mandate and that good norms do not always make good law, the particular inadequacies of the extant crediting regime and the social and economic (rather than private or familial) interests at play warrant examination of potential legal solutions. To that end, I evaluate but reject two of the most significant mechanisms for change: reversal of the Dastar holding by legislation amending the Lanham Act and passage of an affirmative cause of action under the Copyright Act to provide for crediting rights. As I argue, despite its shortcomings, Dastar revealed the poor fit that the Lanham Act—with its focus on consumer confusion—ultimately provided for attribution protection. Moreover, even if an amendment to the Lanham Act were limited to situations involving works still under copyright protection (so as to avoid the issue of erstwhile rightsholders with expired copyrights attempting to extend their monopoly over creative works through trademark law), it would still raise significant concerns about the potentially onerous scope of crediting requirements and the fine line between providing proper attribution and triggering false endorsement claims. Meanwhile, amendment of the Copyright Act to provide for an affirmative crediting claim has its own shortcomings. In particular, this Article examines the way in which the allocation of a formal crediting entitlement could stifle licensing efforts in the marketplace, a result exacerbated by the combined impact of the endowment and creativity effects—behavioral phenomena that have caused economists to question traditional neoclassical assumptions about entitlement allocations in recent years. Finally, as a practical matter, I also observe the particular difficulties of pursuing legislative change.

Instead, I propose a more modest solution, and one that I argue is already an implicit part of the existing jurisprudence: formal accounting of attributive use as part of the fair use calculus. In conducting a careful exegesis of the extant case law on fair use, I argue that courts have often woven consideration of attributive use into the first, fourth, and “fifth” factors—a move that the influential Second and Ninth Circuits have blessed. Further, I argue that the implementation of an attributive use subfactor makes doctrinal sense given both the utilitarian and equitable functions of the fair use doctrine and that such a consideration is strongly supported by our existing copyright clearance norms. Thus, just as Pierre Leval identified transformative use as a critical but underappreciated consideration in the fair use calculus, I make a similar argument with respect to attributive use. In the process, I call for crediting to take its place alongside commercial and transformative considerations in courts’ assessment of the first fair use factor—the purpose and character of use. In this way, I advance an incremental, but important, step toward recognition of the value of crediting while also avoiding some of the broader concerns that a general right of attribution—whether achieved under the Lanham Act or the Copyright Act—might present.

I.  WHY CREDITING MATTERS

A.  One Author, Two Moments

I begin my assessment of the importance of attribution by reflecting upon the perspective of one author—this author—on what motivates creative enterprise. Such a focus is admittedly biased and may be completely unrepresentative. But it illustrates how the current state of affairs in copyright law—where infringement receives stiff punishment but failure to credit receives none—can be inadequate to protect the motivating interests of at least some creatives. Specifically, two recent incidents involving the use of my published work provoked strong, but diametrically opposing, reactions within me. And while I do not claim that my attitude toward these events reflects on how typical authors might respond, my contemplations are nonetheless instructive as to how some authors might experience issues related to infringement and crediting.

Not long ago, while doing some research on the dirty underbelly of the piratical dark web, I came across a site that resembled a veritable Library of Babel, providing free access to a remarkable collection of digital books to all comers. While publishers would not hesitate to characterize this “celestial jukebox” of books as a cesspool of wanton infringement, I felt compelled, as a good academic, to investigate further before drawing any definitive conclusions. So, in an act of curiosity and thorough vanity, I punched my own name into the site’s search box. To my surprise, a beautiful e-book edition of one of my tomes popped up, available freely to all who had interest in it. I can neither confirm nor deny that I immediately downloaded a copy, but some context might help explain why I may have made a decision to do so.

A number of years earlier, I had posed what I thought was an innocent request of the publisher of one of my forthcoming books: I sought a final PDF copy of the work for my records and personal use. The response was rapid and reproachful. “We don’t do that, John.” The implication was clear: they had to protect against piracy, even if it meant denying authors copies of their book in digital format. Instead, they offered me a compromise: the first three chapters. Since they made it clear this was not a negotiation, I took what they gave me. Fast forward a decade and it should be easy to understand why I may have been elated when this website offered me what my own publisher had denied me: a final, electronic form of my book without any encryption or digital rights management (“DRM”) associated with it.

My book’s appearance on the website also pleased me for an entirely different and more fundamental reason. As a delusional academic, I dream of my ideas getting attention, impacting the way people might approach or think about an issue and, ultimately, influencing policy. So I naturally fantasized about individuals (at least one or two!) potentially stumbling on this website, finding my book, and then reading it when they otherwise may have never known about my work or ponied up the cash necessary to buy it. The royalties I see from my writing are trivial and economically irrelevant. Instead, I want my books to reach as many people as possible (that is, more than my mother) and I want to maximize their exposure. Whether that is accomplished through sales or piracy, I care not. After all, as science fiction writer and librarian Eric Flint once put it, “The real enemy of authors— especially midlist writers—is not piracy . . . It’s obscurity.”

So, far from making me angry, the discovery of my book pirated online for anyone to read resulted in nothing but sheer delight. Yes, this was infringement of my copyright, pure and simple. But I was all smiles.

Around the same time, I had another experience related to one of my publications—but, this time, what occurred was far less welcome. One day, I received an email from a local law school about an upcoming distinguished lecture. While I might usually give only fleeting attention to such a notification, this one caught my eye because of the topic, which just so happened to be the exact subject matter of my first book, written in 2008. So I naturally took an interest and thought about attending. But as I read further, my curiosity turned to disappointment. The talk was about a recently released book whose description was, almost word for word, based on the book I had written. And then I saw the author’s name—one I recognized.

I had met the author a decade earlier, when she was a graduate student attending a talk I had given about my own book. I distinctly remember her chatting with me after my lecture and expressing how much she had enjoyed and appreciated my book. It turns out those comments weren’t mere puffery. She had proven that by writing her own version. As I read the full description of her lecture and then found her recently released book, I was struck by how the summation literally encapsulated my own book in its entirety.

Imitation is the sincerest form of flattery, I told myself in a failed attempt to downplay the anger I felt. But more than flattery, I wanted acknowledgement. Of course, no one’s work is wholly original. But her book came uncomfortably close to mimicry of mine. It was not just drawing on or borrowing ideas to build and expand on my book; it was literally taking my entire work and rehashing it as purportedly original material. Specifically, and most egregiously of all, the use of my work was wholly without proper credit. And, to add insult to injury, as the email before me indicated, she was now giving a distinguished lecture at a local university that had never shown the least bit of interest in my work.

Even if I were inclined to pursue some kind of legal remedy against her, there was none readily forthcoming. Because of the nature of the use, an infringement claim would be difficult to make. Meanwhile, current law provides no general right of crediting or attribution. Admittedly, I did have a form of extralegal relief; if I wanted to pursue the matter, university policies against plagiarism and the failure to properly credit sources offered some remedies. Certainly, I could have notified the author’s publisher and her university-employer to trigger potential investigations. But, at the end of the day, such an effort would be purely punitive and would not undo the real damage I had already suffered; the book, after all, was already published. So, in the end, I concluded that, instead of going through the pain of a vindictive letter-writing campaign that would only waste my own time, I would work out my issues far more constructively: by writing a law review article.

These two incidents—close in time—provided a remarkable study in contrasts. In the first matter, I encountered the wholesale piracy of my work, and I found myself not merely indifferent but hopeful. After all, I had received credit for my work and the work’s unauthorized distribution helped disseminate my ideas more broadly. I would take whatever boost I could get. The website in question had undoubtedly infringed my work, but I was perfectly content to let that happen. In the second instance, while it was arguable whether the subsequent author had infringed my work, she had indisputably failed to give me proper credit for my work—upon which she had heavily drawn—and had, in my view, violated basic norms of attribution. I was disturbed and troubled by what had happened.

While I found the second incident far more offensive than the first, the law saw things differently. I possessed a colorable claim for infringement if I were inclined to fight the piracy of my book. By contrast, I had little hope of a legal remedy for my fellow academic’s abysmal failure to provide me appropriate attribution. In short, our copyright regime provided no shortage of remedies for an injury that I cared little about—infringement. By sharp contrast, it provided no remedy for something that more directly motivates my production of content—crediting and recognition.

As a result of these two incidents, I began to wonder whether I was the kind of author the Copyright Act wanted to encourage in the first place. As a writer, I meet the definition of what the Framers referred to as “authors” in the Intellectual Property Clause of the Constitution, and my work comes under the subject matter of the Copyright Act. But I am also not the kind of author who makes a living (or even seeks to make a living) on the sale of my works. Consequently, my incentives might be quite different from someone whose income solely or largely comes from authoring works—the kind of author who might care substantially more about piracy.

That said, however, the vast majority of authors make little to no money from their work. Some, of course, may still pursue the craft, in part, for potential future riches. But, for many, remuneration is far lower on the list of their motivations than other factors, such as attribution and recognition. As Laura Heymann points out,

[F]or many creators, particularly individual creators, the profit motivation is not paramount. Rather, the creator is motivated most by the public knowledge that she is the creator—by attribution of the work to her. Indeed, as others have noted, such creators value wide dissemination of their work over compensation, and so benefit from the fair use doctrine and, even, the movement of their work to the public domain, both of which ensure that their work reaches as large an audience as possible.

As my reaction to the two incidents indicates, I belonged to this class of authors. And, by failing to reflect the importance of attribution and recognition as a motivating factor in the production of creative content, the existing copyright regime appeared not to know members of this class very well and not to be fully responsive to their incentives and needs. For a utilitarian regime dedicated to progress in the arts, this curious result invites further investigation.

B.  The Empirics of Attribution

There is no doubt that our moral sensibilities strongly support the practice of proper attribution, and common sense tells us that authors value crediting and recognition as well. As Heymann has posited, “[I]t seems safe to conclude that the two things that virtually all creators desire is to receive credit when appropriate and to eliminate the suggestion of association when it is not.” But before taking these assumptions to heart based on mere intuition, it is worth scrutinizing them more closely. While the value of attribution has traditionally received scant attention in the academic literature and little empirical testing, all of that has changed in recent years as an emerging body of data and experimental work has provided overwhelming support for the notion that attribution serves a vital role in motivating and incentivizing creatives.

One of the largest innovations and behavioral experiments ever to take place in the creative world occurred with the launch of the Creative Commons some twenty years ago. Founded by law professor Larry Lessig, computer scientist Hal Abelson, and literary advocate Eric Eldred, the Creative Commons sought to give creators the ability to opt out of the protection-heavy default rules of copyright, which automatically vest in authors the exclusive right to control reproduction, distribution, public display, public performance, and derivatization of their works for a period of their lifetime plus seventy years after their death. Such rights spring into existence for all original works of authorship fixed in a tangible medium, regardless of formalities. In subverting these default protections and the “permission culture” that they serve and support, Creative Commons allowed authors to make their works available to the public to promote educational access and spur further creativity by increasing the pool of works from which others can freely build without the need for costly licenses. By ceding their works to the Creative Commons, creators opt into a different regime, where all rights are not reserved. Thus, under various Creative Commons licenses, they can make work available for use without payment—for noncommercial purposes only, for the creation of new derivative works, or for any purpose whatsoever.

The notable success of the Creative Commons and the particular manner in which it has operated illustrate two important points. First, millions of creators have deeded hundreds of millions of creative works to the Creative Commons. As Eric E. Johnson has put it, this fact illustrates “the contemporary existence of an attitude held by at least a significant number of people that the full panoply of copyright entitlements is not important to them.” Second, while many, but not all, authors want to stop infringement of their works, virtually all authors want attribution, and the operation of the Creative Commons provides empirical support for this view. As the data collected over the past twenty years show, authors putting their work on the Creative Commons almost always choose to condition any use on one requirement: proper attribution. For at least a certain set of creators, therefore, the right of attribution trumps the right of exploitation and the ability to receive license fees from the use of one’s works.

Even aside from the Creative Commons, the widespread sharing, rather than exclusive reservation, of intellectual property rights in many sectors illustrates the strong role social validation can play in promoting creative enterprise. Though long underappreciated in the intellectual property literature, this widespread “non-market form of exchange,” characterized by sharing, is particularly attractive for an enormous body of works that may not enjoy clear-cut commercial profitability but are also not entirely valueless. In these sharing regimes, such as open-source software licensing pools and microstock photography collections, pecuniary gain is largely forgone but, quite notably, attribution is retained and reputational satisfaction constitutes a key part of the value proposition for creators, as they derive “a feeling of satisfaction and a sense of social connectedness out of sharing.”

The thriving of the “sharing” economy and of the Creative Commons—where millions of creators are eager to opt out of the default protections of the Copyright Act, but only as long as they continue to receive recognition for their creative efforts—should come as no surprise. Indeed, recent experimental work has validated the intuition and experience that suggest that creators place significant weight on crediting and recognition. Notably, researchers Christopher Sprigman, Christopher Buccafusco, and Zachary Burns have conducted a series of empirical tests that mimic conditions of real-world bargaining and are meant to put a tangible monetary value on attribution rights. In the first experiment, they found that 180 casual photographers were, in the aggregate, willing to receive far less payment for publication of their work when it came with, rather than without, attribution. These findings were even more pronounced in their second experiment, which involved professional and advanced amateur photographers. In short, these tests produced robust results, leading Sprigman, Buccafusco, and Burns to conclude that, on average, creators actually value attribution and the receipt of recognition for their work more than getting paid and that they are “willing to sacrifice financial benefits to obtain [attribution].”

Beyond the valuable case study provided by Creative Commons and the experimental evidence that has quantitatively established the worth of crediting to authors, important qualitative ethnographic and observational work has also supported the stock that creators put in attribution. For example, in her comprehensive qualitative study of innovation, in which she conducted dozens of interviews with creatives and intellectual property professionals across a wide variety of industries, Jessica Silbey concluded that attribution serves as a primary motivator for creative enterprise. As she notes, “[T]he interviews are replete with expressions of how attribution and integrity are crucial to the work’s optimal promotion and dissemination, whether or not for profit, because they safeguard and manage the development of professional identity and audience.” Similarly, in her sweeping survey on the legal and normative standards of attribution across a wide range of industries—including Hollywood, journalism, political speechwriting, software, advertising, graphic design, science, and medicine—Catherine Fisk has also documented the significant value creators place on crediting. As she puts it, “Attribution is foundational to the modern economy” and, as such, “greater legal recognition of attribution rights is desirable.” 

Finally, recent literature in the field of organizational behavior and psychology has emphasized the crucial role of crediting in nurturing innovation and promoting perceptions of fairness in creative environments. For example, Teresa Amabile, a leading theorist on creativity and innovation, has highlighted how proper credit allocation can motivate employees to work harder and enhance productivity. In short, a burgeoning body of work in the social sciences has strongly supported the intuition that crediting matters—a lot.

C.  The Societal Value of Attribution

But in focusing largely on the impact of attribution on creatives—both in the way that crediting incentivizes innovation and how it serves the dignitary interests of authors—this emerging literature has actually understated the case for attribution rights. Quite critically, attribution does not merely serve authorial interests. Rather, it also benefits other players in the marketplace for creative works and advances broader societal interests.

First, crediting advances the efficacy of the marketplace, particularly in an information economy dominated by the production of intellectual property. Generally speaking, free and open exchange of relevant information facilitates the optimal functioning of markets by improving the efficiency of allocation decisions. Information about inputs, such as labor, guides the dedication of scarce resources. Crediting provides actionable data about the labor involved in the production of intellectual property—data that are often difficult to obtain elsewhere, or at least not without substantial additional cost. Indeed, “because it is difficult to measure worker knowledge directly in the way that the ability of the typists and machinists of the industrial economy could be tested simply by watching them perform a task,” credit is particularly valuable in an information economy. A reliable, accurate, and comprehensive crediting regime can therefore dramatically advance interests in the efficient allocation of resources in creative enterprises. Crediting, after all, provides vital information to financiers of those enterprises about the nature and quality of a particular author’s work.

Second, crediting advances the interests of those who consume creative works and other forms of intellectual property. Authorship represents a form of branding akin to trademark, and accurate authorship labeling helps promote many of the basic goals of the trademark regime, which serves consumers (and not just authors) by “reduc[ing] the customer’s costs of shopping and making purchasing decisions” and “help[ing] assure a producer that it (and not an imitating competitor) will reap the financial, reputation-related rewards associated with a desirable product.” Heymann has highlighted the value that a meaningful crediting regime provides even to nonauthors. Instead of calling for an attribution right that recognizes an inherent moral right authors might have in proper attribution, she calls for what she dubs “authornymic” attribution, the recognition of crediting for the sake of “organizational integrity”—a “reader-centered” law that ensures that “reader responses [to creative works] will be informed and minimizes the likelihood of confusion a consumer of creative commodities might otherwise experience.” In this way, she argues, a law of attribution is vital to supporting “efficient literary consumers” who can have “some confidence that the works that we read—and later draw on for our own creative activity—are situated within a coherent literary structure.”

Third, an attribution regime can also promote public respect for intellectual property law. It does so in at least two different ways. As Stephanie Plamondon Bair argues in her study of the role of fairness in copyright law, when we align copyright law more closely with public perceptions of equity, we heighten the regime’s legitimacy in the eyes of society. Given the strong popular support for the norm of attribution, a copyright system that protects crediting rights bolsters respect for the regime itself. Separately, the act of putting a real face (or, at least, a name) behind creative works humanizes them and can help buttress support for the intellectual property rights that protect them. As Catherine Fisk explains, such a task is particularly important “[i]n a world of corporate production, and in particular skepticism about corporate production.” After all, it is no secret why corporate interest groups bring relatable artists to the forefront when making pitches for greater protection and rights enforcement, especially in the war on piracy—even when those artists are not the real rightsholders. When the music industry sought to apply pressure to Google to provide more favorable use fees for the exploitation of music on YouTube, it had the likes of Taylor Swift and U2 sign on to an open letter that was used to drum up public support for the cause and to lobby Congress for reform. And when the Motion Picture Association (“MPA”) sought an alternative to an unpopular litigation campaign against piracy, it put together testimonial advertisements that highlighted the ways in which piracy hurt those people whom we know only as lines at the end of the credit roll. Thus, attribution promotes the very operation of the intellectual property regime by giving it a human face that legitimizes the sometimes impersonal and intangible rules it enforces. In an era where digital technology has made mass piracy on a global scale all too easy, this function is perhaps of greater value now than ever before.

All told, therefore, our common sense tells us that crediting is deeply important to authors, a position backed by the emerging social science literature on the subject. Meanwhile, a proper attribution regime also has critical benefits for the efficient functioning of the marketplace for creative works and thus strongly benefits consumers and investors as well. Despite all of this, however, as we have alluded to, the law provides shockingly little protection for crediting rights. This state of affairs has grown particularly dim in the past two decades in the wake of the Supreme Court’s decision in Dastar, a subject to which we now turn.

II.  THE LAW’S SIZEABLE CREDITING GAP

A.  Dastar and the Decline of Crediting Law

Although we have established the important value of attribution—to creators, investors, and the public as a whole—we are left with a strange conundrum: the law of crediting is surprisingly thin and underdeveloped. Indeed, it is counterintuitively so, as the wholesale absence of any broad law of attribution runs counter to the assumptions of many in the creative community. As Silbey reported for her survey of artists and authors, “Many interviewees were stunned to learn that copyright law does not require attribution or prohibit misattribution.”

That said, for a period of time in the recent past, rightsholders enjoyed one particular means of crediting protection: a direct vehicle for legal redress when their creative works were being used by others without proper attribution. Specifically, a line of case law had emerged that considered improper crediting of someone else’s work as one’s own to constitute a “false designation of origin . . . or false or misleading representation” actionable under section 43(a) of the Lanham Act. In these cases, courts found that, in the words of Thomas McCarthy, the Lanham Act “has progressed far beyond the old concept of fraudulent passing off, to encompass any form of competition or selling which contravenes society’s current concepts of ‘fairness.’ ” Such a capacious reading of the Lanham Act allowed for recognition of a cause of action for reverse passing off—when someone passes off the goods or services of another as their own—and, therefore, provided a viable claim against those who failed to provide credit.

In 1981, this reading of the Lanham Act received the blessing of the Ninth Circuit for the first time, a move that propelled it to widespread acceptance. In Smith v. Montoro, the Ninth Circuit held that the failure to credit an actor for his role in the movie Convoy Buddies (and, in fact, the substitution of his name with that of another actor in both the film credits and advertising material) constituted reverse passing off under section 43(a). As a matter of public policy, the court opined, such conduct was “wrongful” in that “the originator of the misidentified product is involuntarily deprived of the advertising value of its name and of the goodwill that otherwise would stem from public knowledge of the true source of the satisfactory product.” The Montoro decision proved widely influential, and within a few short years, federal courts throughout the country were entertaining attribution-related claims under section 43(a) for reverse passing off. But all of that changed in 2003 when the Supreme Court announced its decision in Dastar.

It was in the shadow of Montoro and its progeny that the Dastar controversy began. To commemorate the fiftieth anniversary of the ending of World War II, Dastar Corporation had decided to put out a new video set titled World War II Campaigns in Europe. The collection made extensive, but unauthorized, use of a television series based on President Dwight Eisenhower’s book, Crusade in Europe. Twentieth Century Fox had owned the copyrights to this program until it inadvertently failed to renew them and the show fell into the public domain. As a result, Fox could not sue Dastar for infringement of its Crusade in Europe television series to prevent publication and distribution of World War II Campaigns in Europe. So, like many other entities who have lost their erstwhile rights in one of our intellectual property regimes, Fox turned to a neighboring intellectual property regime upon which to rest its claims. It made a Lanham Act claim instead.

The procedural posture of the case was unusual and suggested something significant was afoot by the time it got to the Supreme Court. Both the district court and Ninth Circuit upheld the reverse passing off claim. Indeed, the Ninth Circuit thought so little of the issue’s overall significance that the decision was unpublished. The Supreme Court, of course, typically grants certiorari to only a tiny fraction of cases; so, it certainly raised eyebrows when the Court granted certiorari to a seemingly routine and mundane decision that the Ninth Circuit did not even bother to designate for publication. The action presaged the Court’s view that the unpublished decision from the Ninth Circuit missed something fundamental and significant, about which the Court appeared ready to opine.

In its decision, the Supreme Court unanimously reversed the Ninth Circuit and rejected Fox’s attempt to use a Lanham Act claim for false designation of origin as a means of preventing Dastar’s reproduction of an audiovisual work (to which Fox had previously owned the copyright) that had fallen into the public domain. In so holding, the Court warned against the risk of creating a “species of mutant” intellectual property protection that would impede the public’s right to make unfettered use of creative works that no longer enjoy copyright protection.

As the old saw goes, hard facts make bad law. Fox’s gambit to eschew the “limited times” requirement in copyright law by ginning up trademark claims against Dastar struck a nerve with the Court, and the case came before it at a particularly opportune time (as far as Dastar was concerned). As Justin Hughes points out, the close proximity of the Dastar decision to the holding in Eldred v. Ashcroft suggests that the former may have intentionally served as a “2003 Term counterweight” to the latter, which rejected concerns about the public domain in declining to find a twenty-year extension of copyright terms unconstitutional. Indeed, concerns about aggrieved former rightsholders, like Fox, attempting to circumvent copyright’s carefully calibrated balance between private protection and public access expressly animated the Dastar decision. Specifically, the Court sought to thwart future efforts by lapsed copyright holders to make disingenuous use of trademark law to assert monopolistic control over the exploitation of works that had fallen into the public domain, in contravention of the very intent of the copyright regime and its (constitutionally mandated) policy of allowing ownership over creative works to eventually expire so that the public may make free use of them.

Thus, under Dastar, the Supreme Court found that the reference to “origin of goods” in the Lanham Act could not be read to mean the authorial origins of a work; instead, it referred only to the physical source of the embodiment of that work in tangible products. As the Court rationalized, the reference to “origin of goods” in section 43(a) was “incapable of connoting the person or entity that originated the ideas or communications that ‘goods’ embody or contain. Such an extension would not only stretch the text, but it would be out of accord with the history and purpose of the Lanham Act and inconsistent with precedent.” So, even if Dastar had failed to give proper credit to the intellectual source(s) of the materials contained in its video collection, this did not, and could not, constitute a violation under the Lanham Act. All that mattered for purposes of section 43(a) was that there was no false designation of the origin of the actual physical video collection. Since Dastar literally published and distributed the video collection, self-attribution was entirely proper as far as the Lanham Act was concerned. As such, Fox had no actionable claim for false designation of origin.

At the same time, however, the Court’s holding reached more broadly than necessary to achieve the laudable goal of protecting the public domain. By grounding its ruling in a reading of the Lanham Act that definitively excluded the intellectual wellspring of a product from the meaning of “origin,” the Court precluded attribution claims under section 43(a) for all creative works. Thus, in the past two decades, courts have generally rejected all such claims, whether they apply to public domain works (as in Dastar) or works still under copyright protection (unlike Dastar). In the process, therefore, Dastar eliminated relief for those seeking remediation of a harm quite distinct from unlawful reproduction, distribution, display, or performance of a creative work: the act of not giving credit to its original author. In one fell swoop, “the Court swept away close to twenty-five years of precedent that held that failure to give credit to an entertainment product such as a film or song, or providing misleading credit, was a violation of trademark law.”

In part, two other concerns can explain and warrant broader application of the holding to all creative works, not just ones in the public domain. First, the Court noted the difficult position in which an attribution-related reverse passing off claim could put manufacturers of products containing creative works. “On the one hand,” notes the Court, “they would face Lanham Act liability for failing to credit the creator of a work on which their lawful copies are based; and on the other hand they could face Lanham Act liability for crediting the creator if that should be regarded as implying the creator’s ‘sponsorship or approval’ of the copy.” In other words, if Dastar had put out its video set and kept the original credits to Fox, Fox could have sued Dastar for violating the Lanham Act for direct passing off by suggesting that Fox sponsored or approved Dastar’s product. Meanwhile, because it had removed Fox’s name, Dastar now faced a claim for failure to attribute under a theory of reverse passing off. If the Court had affirmed the availability of attribution-related passing off claims, the resulting quagmire could stifle the use of works—both those in the public domain (for which no licensing is required) and those still under copyright protection (where lawful copyright clearance might leave a licensee subject to exposure for a Lanham Act violation).

Second, the Court raised its concern that an attribution requirement could leave distributors of copyrighted content with a duty to credit that might grow impossibly burdensome and impractical. The opinion put a fine point on the scope of crediting that a broader reading of section 43(a)’s “designation of origin” reference would compel by assessing the type of attribution that might be required to distribute the film Carmen Jones. As the Court posited, to avoid liability for reverse passing off under Montoro and its progeny, a distributor might have to give attribution “not just to MGM, but to Oscar Hammerstein II (who wrote the musical on which the film was based), to Georges Bizet (who wrote the opera on which the musical was based), and to Prosper Mérimée (who wrote the novel on which the opera was based).” Determining origin could amount to a complicated task. To illustrate this point, the Court looked no further than the case at hand, opining that

[w]hile Fox might have a claim to being in the line of origin, its involvement with the creation of the television series was limited at best. Time, Inc., was the principal, if not the exclusive, creator, albeit under arrangement with Fox. And of course it was neither Fox nor Time, Inc., that shot the film used in the Crusade television series. Rather, that footage came from the United States Army, Navy, and Coast Guard, the British Ministry of Information and War Office, the National Film Board of Canada, and unidentified ‘Newsreel Pool Cameramen.’ If anyone has a claim to being the original creator of the material used in both the Crusade television series and the Campaigns videotapes, it would be those groups, rather than Fox.

Interestingly, the Court’s language on this issue referred only to the context of uncopyrighted works, noting that “[w]ithout a copyrighted work as the basepoint, the word ‘origin’ has no discernable limits.” But unless the reference to uncopyrighted works meant works that had never enjoyed copyright protection in the first place, it is unclear why this problem would be greater with once-copyrighted works that have fallen into the public domain as opposed to works still under copyright protection.

While these rationales offer substantive justification to eliminate attribution-related reverse passing off claims through the Lanham Act in all instances—not just claims relating to public domain works—the holding in Dastar was not without its significant problems. First, Dastar suffered from a seemingly significant incongruity with the purpose of the federal trademark regime. If the goal of the Lanham Act is, indeed, consumer protection, the Supreme Court’s central holding in Dastar—that the Lanham Act’s reference to origin means the source of an actual physical product and not the wellspring of the idea or intellectual property embodied in a particular product—fails to reflect the reality of what factors animate consumer behavior, particularly with respect to intellectual property. Crediting is not just important to authors; it is vital to the public’s decision-making process when it comes to consuming entertainment content. As Mary LaFrance has pointed out, contrary to the ultimate thrust of Dastar, which held that trademark law only protects against misidentification of the maker of the actual product rather than the ideas behind it,

in the case of literary works or entertainment works, the identity of the actual author, performer, or creative overseer may frequently be more crucial to the consumer’s purchasing decision, than the identity of the party that manufactured the physical embodiment [because] the identity of key creative participants is often viewed as a source indicator that is an important predictor of the quality or content of the goods. 

Indeed, Dastar creates an unusual result for physical products containing intellectual property, as it provides protection to the designation of origin about which consumers arguably care the least. To put a finer point on it, consumers do not care if the movie they are watching was printed on Kodak film or released by Warner Brothers; they care about the fact that it was directed by Martin Scorsese or written by Charlie Kaufman. Readers do not care about whether Random House or Harper Collins was responsible for the paper and ink on which a book appears; they care about whether the book was written by J.K. Rowling or Thomas Pynchon. Music listeners do not care if the album was issued by SubPop or Merge Records; they care about whether it contains performances by Spoon or The Mountain Goats. The disconnect between the law’s protections and this reality could not be more stark or problematic.

Most importantly, for a large swath of creatives, Dastar all but eliminated hope for securing crediting rights through legal claims. Admittedly, the Supreme Court took pains to caution that its decision had not necessarily eliminated all means to vindicate attribution rights and that Dastar did not speak to alternative causes of action to enforce crediting under common, state, and federal law, including other theories (such as false advertising) available under the Lanham Act. But as one practitioner euphemistically noted in the wake of Dastar, the remaining options relied on “creative lawyering.” This turned out to be shorthand for shots in the dark that have little chance of working. For as we shall analyze in great detail infra, Dastar marked a significant inflection point in the state of attribution rights—significantly curtailing (if not altogether eliminating) the ability of most creators to receive credit under the law.

B.  The State of Crediting Rights in the Two Decades Since Dastar

In his post-Dastar assessment of the state of attribution rights written in 2007, Justin Hughes argued that the crediting gap left by Dastar was not as wide as commonly believed. “[I]f we work through all the possibilities, the practical hole created by Dastar may be operatively modest,” he contended. 

Dastar creates a gap in protection for those works and circumstances where there is a failure of appropriate attribution and no cause of action under VARA, under state moral rights laws, under 17 U.S.C. § 1202 for failure to include copyright management information, or under state unfair competition laws in states where the courts hold that Dastar should not control, and where contract law does not establish a framework to protect attribution.

Hughes’s assessment has proven excessively sanguine, unfortunately. Although the legal theories he cited as alternative bases for protection may be numerous, they are qualitatively impoverished and provide scant (if any) relief in the vast majority of situations. Moreover, in the nearly two decades since Dastar, the significant size of the crediting gap has become manifest as the jurisprudence of the intervening years has made clear how little bite these alternative legal theories provide for the vindication of crediting interests.

1.  False Advertising Claims Under the Lanham Act

The very cause of action the Supreme Court cited as continuing to provide attribution protection post-Dastar—the Lanham Act’s prohibition on false advertising or misrepresentations of fact—has proven feeble in this regard. While Dastar expressly foreclosed the possibility of attribution-related claims under section 43(a)(1)(A), it did not altogether eliminate the ability to vindicate crediting rights under the Lanham Act. Since the Dastar holding only opined as to the meaning of “origin” in the statute (which is invoked in section 43(a)(1)(A), referring to “confusion . . . as to the origin”), the remaining provisions of the Lanham Act that did not employ that word could still have application to attribution-related issues. This was true for the Lanham Act’s cause of action for false advertising that, under section 43(a)(1)(B), created liability for anyone who “misrepresents the nature, characteristics [or] qualities . . . of . . . goods, services, or commercial activities.” In fact, Dastar expressly pointed to this provision as one ground for relief that may still be possible following the decision. As the Court noted,

If, moreover, the producer of a video that substantially copied the Crusade series were, in advertising or promotion, to give purchasers the impression that the video was quite different from that series, then one or more of the respondents might have a cause of action—not for reverse passing off under the “confusion . . . as to the origin” provision of § 43(a)(1)(A), but for misrepresentation under the “misrepresents the nature, characteristics [or] qualities” provision of § 43(a)(1)(B). 

That said, such a path has proven less than promising, and the Court’s supposition that such relief might be forthcoming was too optimistic, at best—or disingenuous, at worst. First, despite Dastar’s seemingly express exhortations to the contrary, subsequent courts have found that the holding in Dastar actually prevents both section 43(a)(1)(A) and section 43(a)(1)(B) claims on similar facts. Second, and more fundamentally, false advertising claims face additional hurdles not present in an attribution claim under section 43(a)(1)(A). These impediments would be difficult for most plaintiffs seeking vindication of an attribution right to clear. For example, many courts require competitor standing to bring a false advertising suit. Until the Supreme Court recently resolved a circuit split, false advertising claims were, in many circuits, per se limited to actual commercial competitors. Even now, standing remains a significant issue. As the Supreme Court noted in Lexmark International, Inc. v. Static Control Components, Inc., false advertising plaintiffs must show that they “fall within the zone of interests” protected by the statute and must have suffered a harm proximately caused by the act of false advertising. But to fall within the zone of interests, the plaintiff must “allege an injury to a commercial interest in reputation or sales.” Since consumers typically do not lose sales or suffer an injury to reputation, the new standard makes it exceedingly unlikely that consumers can bring a false advertising claim. While plaintiffs need not be direct competitors anymore to bring a claim under section 43(a)(1)(B), they still generally need to be competitors of some sort.

Finally, false advertising is actionable under section 43(a)(1)(B) only if the statement is false on its face or the misrepresentation is material, that is, relied upon in consumers’ purchasing decisions. This consumer reliance requirement makes eminent sense for false advertising claims, but it makes less sense when dealing with issues of attribution, which should be, first and foremost, about vindicating the rights of authors to receive credit for their works rather than protecting the public from being deceived in material consumption decisions. Moreover, while the most famous and acclaimed of authors may survive such a materiality requirement, the vast majority will have a far more difficult time.

2.  Attribution Claims Under the Visual Artists Rights Act

On the surface, VARA would appear to provide significant protection for the attribution rights of authors. Codified in section 106A of the Copyright Act, VARA offers creators an independent cause of action “to claim authorship of [their] work,” and “to prevent the use of his or her name as the author of any work of visual art which he or she did not create.” VARA claims are eligible for recovery of both statutory damages and attorneys’ fees and, to make matters even better for putative plaintiffs, unlike for infringement claims, an author does not even need to timely register the work in question as a condition for these remedies. Thus, a cursory examination of VARA might elicit hope for the vindication of crediting. But a closer look reveals just how profoundly limited the rights under VARA are. 

First, as the very name of the legislation makes clear, VARA’s attribution rights only encompass works of visual art. As such, the Act fails to apply to large swaths of subject matter otherwise protectible under the Copyright Act, including writings, music, and other important works. But the limits do not end there, as the attribution right does not even attach to all forms of art that might be characterized as visual in nature. Rather, the statute covers only paintings, drawings, prints, sculptures, and photographs produced solely for exhibition purposes. It therefore excludes the most commercially important form of visual art—film. It also does not apply to any “poster, map, globe, chart, technical drawing, diagram, model, applied art, . . . book, magazine, newspaper, periodical, data base, electronic information service, electronic publication, or similar publication” or any “merchandising item or advertising, promotional, descriptive, covering, or packaging material or container.” In addition, all works made for hire fall entirely outside of VARA’s protections. Finally, for the narrow category of visual art works to which VARA might apply, the attribution right only attaches to original versions of those works or limited editions thereof issued in sets of “200 copies or fewer that are signed and consecutively numbered by the author.” At the end of the day, therefore, VARA’s attribution right only applies to a limited set of visual art works that are not prepared as works made for hire. In short, VARA provides no crediting protection for the vast majority of authors.

3.  Falsification and Removal/Alteration of Copyright Management Information Claims Under the Digital Millennium Copyright Act

Introduced into law with the passage of the Digital Millennium Copyright Act in 1998 (“DMCA”), the provisions of the Copyright Act that make it unlawful to falsify, alter, or remove copyright management information, which includes any authorship and copyright ownership data accompanying a work, would seemingly serve as a powerful vehicle to vindicate attribution rights. But while these provisions—codified in 17 U.S.C. § 1202 (“section 1202”)—constitute the sole protection granted to authorship information in all (rather than VARA’s narrow subset of) copyrighted works, their reach is deliberately constrained. Among other things, the structure of the two causes of action provided under section 1202—a claim for falsification of copyright management information (“CMI”) and a claim for removal or alteration of CMI—makes clear that the protections therein are subservient to the goal of fighting infringement and not any inherent value that may come from crediting. In other words, the guiding principle behind section 1202 is preventing further infringement, not vindicating an author’s very real, but potentially separate, interest in crediting. As such, section 1202 fails to provide a meaningful right to crediting for authors.

Specifically, a claim for falsification of CMI requires that plaintiffs show that defendants “knowingly and with the intent to induce, enable, facilitate, or conceal infringement . . . provide[d] copyright management information that is false.” Similarly, a claim for removal/alteration of CMI requires that plaintiffs show that defendants “intentionally remove[d] or alter[ed] copyright management information . . . knowing, or . . . having reasonable grounds to know, that it will induce, enable, facilitate or conceal an infringement.” Thus, both falsification and removal/alteration claims have a strict double scienter requirement, necessitating that plaintiffs demonstrate that defendants acted with a particular mens rea—that is, knowingly and with intent to facilitate infringement.

This onerous scienter requirement is significant in at least three ways. First, it contrasts markedly with the complete absence of any scienter requirement in matters of direct copyright infringement. Specifically, infringement has always been a strict liability tort, where a defendant’s state of mind is wholly irrelevant to the issue of liability. By sharp distinction, to prevail on an attribution claim through section 1202, a plaintiff must make not one but two (if not three) showings regarding the defendant’s state of mind.

Second, by conditioning attribution relief on an intent to facilitate infringement, section 1202 firmly grounds its protections in service of the fight against infringement rather than any broad vindication of crediting rights. This position is further buttressed by the fact that section 1202 only protects CMI that is “conveyed in connection with copies or phonorecords of a work or performances or displays of a work.” Furthermore, some courts have even read the legislative history and intent behind section 1202 to preclude application of falsification and removal/alteration claims to nondigital works. According to the logic of these courts, section 1202’s primary purpose—fighting the scourge of piracy in the online environment because of the unique ease of digital infringement—compels such a limitation on section 1202 claims. Such a position, however, leaves attribution rights outside of the digital environment unaddressed.

Third, and relatedly, the dual scienter requirement makes it extraordinarily difficult to prevail on a section 1202 claim. To state a cognizable removal/alteration claim, for example, a plaintiff must demonstrate that either a work “came into Defendant’s possession with CMI attached, and Defendant intentionally and improperly removed it” or a work “came into Defendant’s possession without CMI attached, but Defendant knew that CMI had been improperly removed, and Defendant used the [work] anyway.” A plaintiff may be unable to show how or in what form a work came into the defendant’s possession in the first place and, even if they can, it is rare to have sufficient evidence showing that the removal/alteration was undertaken specifically with the intent to facilitate infringement. For example, an erroneous belief about the copyright status of an image can preclude a finding of the knowledge required to state a claim under section 1202. Meanwhile, even intentionally cropping out a copyright notice from an image is insufficient to meet the intent to facilitate requirement.

Not surprisingly, therefore, CMI claims are frequently adjudicated as a matter of law based on the failure to adequately make even a threshold showing of knowledge and intent. Consider, for example, the difficulties that an author might face in bringing a section 1202 claim even against someone who both knowingly and intentionally crops an image to cut out the authorship information. Even assuming such authorship information qualifies as actionable CMI, there are myriad reasons (that may have nothing to do with concealing infringement) to crop it out. Among other things, the person making use of the image could claim to have cropped the image for aesthetic purposes, because of inherent space limitations for the usage, or without any idea that they were removing CMI. In all of these instances, authors may have legitimate, if not strong, interests in seeing uses of their work include attribution. Yet such failures to credit would be unactionable under section 1202.

4.  Attribution Rights Under State Unfair Competition Law and Other Common Law Theories

Although there was initially some optimism about attribution rights remaining available under state law post-Dastar, such hopes have proven misplaced. First, in many states, such as California, courts have interpreted unfair competition protections as coextensive with the Lanham Act. Thus, if Dastar renders attribution claims no longer viable under the Lanham Act, such claims must necessarily also fail under state unfair competition law. Second, in the wake of Dastar, both Tom Bell and Michael Landau suggested that copyright preemption issues raised by the decision could preclude use of state or common law theories to protect attribution rights. These predictions turned out to be correct, as courts have regularly read Dastar in such a manner. In fact, even prior to Dastar, some courts viewed state unfair competition claims seeking credit as preempted.

The absence of clear legal protections for crediting has led some plaintiffs to rely (often futilely) on a veritable smorgasbord of common law theories in an attempt to cobble together some basis for relief. For example, when a Cornell graduate student (Antonia Demas) sued a member of her advisory committee (Professor David A. Levitsky of the School of Human Ecology) for improperly taking credit for research she had conducted into the nutritional habits of elementary schoolchildren, using that research to obtain a significant grant without her name, and then actively and publicly rebuffing her allegations of wrongdoing, she did not bring a claim under the Lanham Act for misattribution. Instead, she was left reciting the common law’s greatest hits in her complaint by claiming liability for misappropriation, fraud, breach of contract, breach of fiduciary duty, negligence, tortious interference with prospective economic advantage, defamation, and intentional infliction of emotional distress. While circumstances may make it possible to prevail on one of these theories, victims of crediting abuse face an uphill battle in meeting all of the required elements of such common law claims.

5.  Private Contracting: The Promise and Perils

With false advertising, VARA, CMI falsification/removal/alteration and unfair competition claims providing little relief, creators are left with private contracting to do the work of crediting. There is no doubt that, in some industries, private contracting and even social norms have gone a long way toward ensuring proper crediting. But significant lacunae remain and, even where private contracting and norms do provide for crediting, it is not always reflective of authorial contributions. As such, reliance on private systems to govern crediting is insufficient to provide for appropriate attribution rights.

There are some fields where private contracting has given rise to deeply nuanced and vigorously patrolled crediting requirements. Two paradigmatic examples are Hollywood and academia. In the movie industry, collective bargaining has helped level the playing field between the studios and talent, and the operative guild agreements have insisted on getting even the most minute of credits done correctly. Though it is not without its flaws, the system has worked relatively well. And even if it might seem a tad onerous to any member of the public who has sat through the credits of a motion picture, those credits instill in industry professionals a sense of pride over their brief moment of acknowledgement on the silver screen. Just as importantly, by making the contributions of industry professionals publicly legible in databases such as IMDb.com, the regime also ensures that those individuals can reap the reputational and economic benefits of their credits, or, to give a notable example, help avoid the ruin that might come from an unfair attribution. To wit, from 1968 through 2000, the Directors Guild of America allowed aggrieved directors who believed a studio or other producer had butchered their movie in unimaginable ways to petition to have their directorial credit replaced with the fictional “Alan Smithee” pseudonym, lest the final product sully the real director’s good name.

In academia, strong attribution norms have given rise to anti-plagiarism codes, which have the bite of law and are frequently enforced against offenders in university disciplinary proceedings. Though not perfect, the carrot of the norm and the stick of disciplinary proceedings have served to ensure generally robust crediting practices.

But in other industries without collective bargaining or finely tuned attribution codes, where crediting is just as important and billions of dollars are on the line, there are no such formal crediting regimes. In such endeavors, crediting decisions are often left to general norms and individual negotiations. As a result, crediting often becomes more about power than actual contribution. As Catherine Fisk points out about most fields of entertainment, “Apart from the guild-controlled screen credit system, the credit system for other creative and technical people in entertainment seems to be more governed by norms, charity, and power than by law.” One notable example of this is producer credits, which are not governed by collective bargaining. As a result, such credits are notoriously corrupt, and a veritable “prestige market” for production credits exists. Meanwhile, even in the guild crediting systems of the Screen Actors Guild-American Federation of Television and Radio Artists, the Writers Guild of America, and the Directors Guild of America, power relations frequently trump creative contributions in determining attribution rights. As Fisk notes,

Because the guild agreements limit the number of people who can be credited in some roles on any one film, power relations among various possible contenders for credit affect who is listed. Individual workers with significant bargaining power (actors, directors, writers, and producers) negotiate for specific treatment on each project, which may or may not reflect the same level of artistic contribution as compared to others who receive a similar type of credit on a different film or who receive the same credit (or no credit) on the same film.

The absence of legal protection for attribution rights outside of private contracting has profound consequences for distributive justice. When viewed through the prisms of race, gender, or socioeconomic disparities, crediting practices have a particularly troubling history. Simply put, those who are not white, male, or wealthy have far too often struggled to receive credit, even when they indisputably authored work. This is because crediting is as much (if not more) about power dynamics and contractual leverage as it is about origination. As K.J. Greene has poignantly noted, 

Top directors, such as a Spike Lee or Steven Spielberg, will have no problem obtaining credit [by exercising their bargaining power in negotiations to contract for it], but anyone else dependent on a contract to secure credit will likely lose out. . . . [W]hile Dastar, on its face, seems completely neutral on the subordination issue, it actually promotes greater subordination; despite the Oprah’s and Denzel’s of the world, Blacks, women, and other minorities still occupy the bottom of the totem pole in entertainment hierarchies, making them the most vulnerable to misattribution abuses.

Greene’s concern is not speculative or hypothetical. Unfortunately, it is widely reflected in the history of scientific and creative enterprise.

Consider, for example, the systematic undervaluing and underrecognition of innovations by women. In the sciences, the phenomenon even has its own term—the Matilda effect—and it is no less prevalent in the world of arts and letters. To take a few illustrative examples, Margaret Keane was the actual painter of the “big-eyed waifs” long credited to her husband, Walter; Elizabeth Magie created the game of Monopoly, not Charles Darrow; and although attributed to Marcel Duchamp, Fountain—the infamous urinal that rocked the art world in 1917—was likely the work of Elsa von Freytag-Loringhoven. In short, crediting is often about who has the leverage (and, in the cases of some swindlers, the gall) to claim authorship, not who really created a work.

The dogged persistence of disparities in attribution has far-reaching consequences, exacerbating existing gender gaps in a number of professions, including the law. For example, Jordana Goodman’s empirical study of crediting practices for patent attorneys, which examined a set of over 200,000 patent applications and office action responses before the United States Patent and Trademark Office from 2016–2020, found an alarming divergence between “attribution and presence” for female patent attorneys, even when accounting for nongendered partner-associate power differentials, years of practice, and other relevant experience. In the field of computer software, for instance, Goodman estimates that female attorneys suffered a thirty-one percent shortfall in crediting. As she concludes, the “lack of equitable attribution perpetually disadvantages women, negatively impacts their career progression, and likely creates an insurmountable chasm between their capabilities and their prestige.” Ultimately, such practices “contribute[] to women’s systemic underrepresentation at top leadership levels throughout the United States,” a state of affairs presided over by current private ordering regimes such as the workflow structure of modern law firms.

The problematic dynamics in leaving crediting to private contracting are on full display in the music industry. As Fisk points out, “in music there is a not uncommon practice of people who do not contribute to the writing of a song being ‘cut in’ on songwriting credit.” The practice is not always nefarious, of course. Peter Jackson’s Beatles documentary, The Beatles: Get Back, provides a notable example. As the film’s exhaustive studio footage capturing the crafting of the title song makes crystal clear, the work was the singular product of Paul McCartney’s musical ingenuity. But the song’s writing credits—“Lennon/McCartney”—tell a very different tale. In this case, a desire to keep an uneasy (though ultimately unsustainable) peace and to honor the duo’s (soon-to-be dissolved) songwriting partnership came at the expense of accuracy. Less innocuously, however, there are myriad instances where crediting practices reflect power more than creative contribution and cut along disturbing gender or racial fault lines. To take one example, Little Richard coauthored the classic Tutti Frutti with Creole songwriter Dorothy LaBostrie. However, as if it were not bad enough that handlers cajoled him into selling his publishing rights to his record company for a proverbial song (a meager fifty dollars), he also provided songwriting credit to a party that likely had nothing whatsoever to do with the authorship of the song—one that may have been a pseudonym for the owner of Richard’s record label (who reaped the royalties, which continue to be earned on the song to this day). As Greene documents, Richard’s experience was no outlier; it was par for the course. And as he observes, “The fact that minority artists received less protection—or in many cases no protection—for their compositions undermines the incentive theory of intellectual property laws. Many Black artists received little or no economic reward for their creations. Others certainly received less than what they should have.”

Even beyond issues of race, gender, and socioeconomic status, crediting often reflects relational and power dynamics that may have nothing whatsoever to do with real creative contributions, such as the tenured professor receiving sole authorial credit for a work that includes substantial contributions from graduate students, the law firm partner who has no problem enjoying attribution for the work of a junior associate, or the senator whose “words” are actually those of a speechwriter. Indeed, Spenser Clark’s deep dive into the crediting practices on Hold Up, one of the songs from Beyoncé’s acclaimed concept album Lemonade, illustrates this point in the world of pop music. As Clark explains, Hold Up’s title and some of its lyrics come from a line that Ezra Koenig, the lead singer of Vampire Weekend, had once tweeted (which, itself, was based on a lyric from Maps, a song by indie rockers the Yeah Yeah Yeahs) and subsequent lyrics Koenig had developed in the studio with Beyoncé’s noted producer, Diplo, while they were working with a loop from an Andy Williams song. Beyoncé ultimately gave Koenig and the Yeah Yeah Yeahs songwriting credit on Hold Up and Diplo a producing credit, but, notably, Williams received no credit at all—either as a songwriter or producer. As Clark concludes,

Oftentimes [artist crediting] choices are not based in law, but rather more intangible considerations like the desire to maintain relationships with creators they wish to work with in the future. Andy Williams’ song, for example, was released in 1963, and therefore [Beyoncé] Knowles was probably less concerned with that relationship as she was with other, more relevant artists.

Notably, in the music industry (just as in some other creative fields), crediting is not only of reputational or ethical significance; it also determines payment of royalties related to the exploitation of sound recordings and musical compositions.

Finally, besides the power dynamics inherent in the private negotiation of credits, the fundamental constraints of contracting also limit how far it can go in ensuring proper attribution. Crediting claims that rest on negotiated obligations require privity for enforcement, and the realities of the marketplace dictate that not all uses of one’s work will be by individuals or entities with whom an author could or would contract. Indeed, this is precisely why we do not leave protection against unauthorized reproduction or other unlawful uses of copyrighted works to contracts and, instead, have infringement claims available that require only access and substantial similarity—no privity and, indeed, no knowledge. All told, therefore, while contracting, especially via collective bargaining, has enjoyed some success in certain industries, private law has not proven sufficiently robust to ensure that crediting rights are adequately protected in the many areas of the information economy where they matter vitally to authors, investors, and consumers.

III.  REFORM

A.  Questioning Attribution Rights: Why Good Norms Do Not Necessarily Make Good Law

Our legal regime’s present crediting gap—the yawning chasm between the high value of attribution and the surprising absence of safeguards in our existing system to protect the practice—would seem to suggest a manifest need for reform. But before wholeheartedly embracing the adoption of some kind of credit-mandating legal regime, it is worth pausing to consider that best practices do not always translate into righteous laws. In other words, while giving credit might be the right thing to do, that does not necessarily mean we should legally require it, either broadly or in limited contexts. To put it bluntly, not all norms need the bite of law. For example, compliance with some norms—like thanking a gift giver—is more meaningful when it results from volition rather than compulsion. Moreover, to limit the scope of potential government intrusion into personal affairs, we do not want the law to microregulate every aspect of human existence. Thus, it is important to approach with a healthy amount of skepticism any effort to expand the law to regulate behavior that was not previously squarely within the aegis of our legal regime.

For example, despite moral entreaties against them, prevarications mostly lie outside of the scope of legal regulation—and with good reason. As Judge Alex Kozinski explained in one of the most mordant and entertaining paragraphs ever to appear in the Federal Reporter, such a societal choice honors the freedom of human expression, regardless of moral valence, and serves greater First Amendment interests. “Living means lying,” Kozinski famously posited (in words that are in the public domain and thus amenable to extended quotation):

Self-expression that risks prison if it strays from the monotonous reporting of strictly accurate facts about oneself is no expression at all. Saints may always tell the truth, but for mortals living means lying. We lie to protect our privacy (“No, I don’t live around here”); to avoid hurt feelings (“Friday is my study night”); to make others feel better (“Gee you’ve gotten skinny”); to avoid recriminations (“I only lost $10 at poker”); to prevent grief (“The doc says you’re getting better”); to maintain domestic tranquility (“She’s just a friend”); to avoid social stigma (“I just haven’t met the right woman”); for career advancement (“I’m sooo lucky to have a smart boss like you”); to avoid being lonely (“I love opera”); to eliminate a rival (“He has a boyfriend”); to achieve an objective (“But I love you so much”); to defeat an objective (“I’m allergic to latex”); to make an exit (“It’s not you, it’s me”); to delay the inevitable (“The check is in the mail”); to communicate displeasure (“There’s nothing wrong”); to get someone off your back (“I’ll call you about lunch”); to escape a nudnik (“My mother’s on the other line”); to namedrop (“We go way back”); to set up a surprise party (“I need help moving the piano”); to buy time (“I’m on my way”); to keep up appearances (“We’re not talking divorce”); to avoid taking out the trash (“My back hurts”); to duck an obligation (“I’ve got a headache”); to maintain a public image (“I go to church every Sunday”); to make a point (“Ich bin ein Berliner”); to save face (“I had too much to drink”); to humor (“Correct as usual, King Friday”); to avoid embarrassment (“That wasn’t me”); to curry favor (“I’ve read all your books”); to get a clerkship (“You’re the greatest living jurist”); to save a dollar (“I gave at the office”); or to maintain innocence (“There are eight tiny reindeer on the rooftop”).

. . . .

Even if untruthful speech were not valuable for its own sake, its protection is clearly required to give breathing room to truthful self-expression, which is unequivocally protected by the First Amendment. . . . If all untruthful speech is unprotected, as the dissenters claim, we could all be made into criminals, depending on which lies those making the laws find offensive. And we would have to censor our speech to avoid the risk of prosecution for saying something that turns out to be false. The First Amendment does not tolerate giving the government such power.

We not only insulate certain forms of morally suspect speech from legal liability, but also certain acts. So while we believe cheating on a spouse is repugnant to one’s marital vows, in most states no civil liability attaches to unfaithfulness, and it is largely irrelevant in most divorce proceedings. Thus, while we may have a broad societal consensus that some actions constitute moral wrongs, we do not necessarily criminalize, or impose civil liability on, all of those wrongs.

That said, crediting directly ties to a matter of significant public, rather than solely private or familial, interest. As we have detailed, attribution rights not only strike at the core of the utilitarian function of the copyright regime—advancing progress in the arts by incentivizing the production of creative work—but also implicate other social benefits tied to economic and cultural interests. Indeed, in the conclusion to her exhaustive survey of crediting regimes in a wide variety of industries involved in scientific and cultural production, Fisk determines that, while private law and norms have provided some protection for attribution rights, the current state of affairs warrants, if not compels, some type of legal intervention. But Fisk also cautions that any reform should supplement, but not supplant, existing practices. Since private ordering and norms have functioned with some success, there appears to be wisdom in approaching attribution rights with a disinclination to implement any new regime that is overly onerous or excessively undermines flexibility. With that caveat in mind, we turn to assess several proposals for reform.

B.  The Problem with Overturning Dastar and Amending the Lanham Act

The most immediate and obvious reform measure for addressing the crediting gap created by Dastar and its progeny would involve overturning Dastar’s core holding. For example, as Justin Hughes has suggested, such an action could occur through legislation that amends the Lanham Act to define origin as including the intellectual source of creative works that have not yet fallen into the public domain. In other words, congressional action could restore the availability of attribution-related claims for reverse passing off for works still under copyright protection. But such legislation would also respect the Supreme Court’s rightful concern about preventing erstwhile rightsholders with expired copyrights from attempting to perpetuate their monopolistic stranglehold on the exploitation of creative works by turning the Lanham Act into a “species of mutant copyright law that limits the public’s ‘federal right to “copy and to use” ’ expired copyrights.” However, even if the legislation is carefully crafted to apply the holding of Dastar only to works with expired copyrights, such a proposal might create more problems than it solves. Among other things, the Lanham Act is a poor fit for the vindication of attributive interests in creative works, and, even prior to Dastar, those inadequacies and fissures showed.

The ability of litigants to vindicate attribution rights through the vehicle of the Lanham Act has always been less than ideal—the Ninth Circuit’s Montoro decision and its progeny notwithstanding. Indeed, Bobbi Kwall argued this very point in 2002—just before the Dastar ruling—when she highlighted at least three ways in which the extant jurisprudence of the time stunted attribution claims, even under the Lanham Act. First, competing interpretations of section 43(a) by the federal courts in different jurisdictions had created a patchwork of inconsistent requirements that hampered the viability of reverse passing off claims for misattribution. Second, courts had sometimes even found such claims preempted under section 301 of the Copyright Act. Finally, and potentially most problematically, section 43(a)’s ultimate focus on consumer confusion and the prevention of deception led courts to “become preoccupied with different manifestations of ‘falsity’ at the expense of [protecting] an author’s personality and reputational interests.” Thus, dignitary injuries to an artist from a lack of attribution, or even speculative injuries as to the future harm that a lack of recognition may bring, are not cognizable under a section 43(a) claim. So, for example, in 1999, the Fifth Circuit affirmed summary judgment for a record label on a section 43(a) claim for reverse passing off based on the record label’s alleged failure to credit the authors of a work it had digitally sampled. Even though it acknowledged the lack of attribution, the Fifth Circuit still denied the claim on the basis that the plaintiffs could not demonstrate a genuine issue of likelihood of confusion. Read strictly, section 43(a)’s requirement of a showing of likelihood of consumer confusion would threaten most attribution claims, especially those that stem from smaller uncredited uses of a work and even larger uses of works by authors who are not sufficiently well known to meet the threshold of consumer confusion necessary to sustain a claim under section 43(a). In other words, the Lanham Act’s conditioning of liability on consumer confusion inherently and significantly narrows the breadth of any protection for attribution it might otherwise provide. Overruling Dastar would do nothing to address this issue.

Meanwhile, although restoration of attribution-related reverse passing off claims for works still under copyright protection might address the Supreme Court’s concern about the potential private recapture of public domain works, it would not address another problem that undergirded the rationale of Dastar: the catch-22 of crediting. As the Dastar Court pointed out, attribution rights can mire users of copyrighted works in a damned if you do, damned if you don’t scenario. On one hand, if they do not provide credit, they might face claims for failure to attribute. On the other hand, if they do attribute, they might face accusations of a type of passing off—effectively engaging in a form of unwanted attribution that the credited party regards as connoting sponsorship, endorsement, or affiliation with the product. This, in turn, can produce liability under the Lanham Act.

Meanwhile, though the anticompetitive implications of reverse passing off claims related to attribution are most pressing when a work is otherwise in the public domain, the ability of such a cause of action to stifle legitimate uses of works under copyright protection also bears consideration. Specifically, the Supreme Court’s anxieties about a mutant form of copyright law apply more broadly than the re-copyrighting of works that have fallen into the public domain; they apply with equal force to how a crediting regime could entangle and ensnare all sorts of unwitting users of copyrighted works, including properly licensed ones, for failure to make proper crediting. Attribution requirements can be onerous, particularly if we return to the pre-Dastar state of affairs under the Lanham Act, where it was unclear just how much crediting might be required to avert potential reverse passing off claims. Indeed, in the unanimous Dastar opinion, Justice Scalia cited the “serious practical problems” that would result from an attribution requirement without carefully circumscribed limits. After detailing the exhaustive list of potential credits that Dastar would have had to give if the Court had found that the Lanham Act’s reference to “origin” required attributions to all of the originators of “the ideas or communications that ‘goods’ embody or contain,” Scalia quipped that it made no sense to interpret the Lanham Act as requiring a search “for the source of the Nile and all its tributaries.”

A restoration of the pre-Dastar state of the law for works still in copyright could adversely impact the rights of legitimate users of copyrighted works and enable a result similar to the one Fox sought to achieve in pursuing its claims in Dastar. An example illustrates this point. If a producer-rightsholder grants a distributor rights to its work and then the distributor properly sublicenses those rights to an exhibitor, there is no issue of copyright infringement and, under our current regime, the exhibitor would feel secure in exploiting the work. But if attribution-related reverse passing off claims are restored under the Lanham Act, all manner of mischief could ensue, undermining the exhibitor’s properly granted exploitation rights if a purported “source” does not receive the attribution they believe they deserve in conjunction with the exploitation. The broad scope of who or what might constitute a “source”—as illustrated by Dastar’s infamous passage about the “Nile and all its tributaries”—makes this clear.

Thus, it may be with good reason that the period of time during which courts recognized an attribution-related claim for “reverse passing off” was relatively short. Although there were occasional outlier decisions in the distant past, “[w]idespread acceptance of [such] a cause of action began around 1980”—meaning that creators enjoyed access to such a claim for less than a quarter century and questions about the practice abounded during that era. Several theorists, for example, argued that use of the Lanham Act in this (admittedly sympathetic) context stood on shaky, if not wholly unjustifiable, legal grounds. Although he acknowledged that the right of attribution was “a commercially valuable right,” Randolph Stuart Sergent asserted that such a claim failed to serve the Lanham Act’s purported goals and was inappropriate under section 43(a). In presciently anticipating Dastar, he bemoaned the power of Montoro-like claims to serve as “a tool for controlling the sale of the underlying product [in a manner that would] reduc[e] marketplace competition . . . to the immediate detriment of consumers.” Meanwhile, John Cross argued that, although “[a]llowing [a] plaintiff to recover for reverse passing off certainly ‘feels’ right,” it is worth noting that “vague feelings of impropriety . . . are not enough to justify a cause of action.” In his analysis, imposing liability under the Lanham Act for reverse passing off failed “to prevent or cure any meaningful consumer deception” and undermined the delicate balance between encouraging innovation and promoting competition by allowing original sources to monopolize works that are either ceded to or eventually fall into the public domain by operation of copyright and patent law. These concerns remain for any effort to overturn Dastar.

In short, even before Dastar, the Lanham Act simply did not provide consistent protection for authorial crediting. As such, merely undoing Dastar would not provide a proper fix for vindicating attribution rights. Although there is much to criticize about Dastar—the void it has created in the law of crediting and its shaky factual premise—there are also compelling reasons to leave the primary holding of Dastar undisturbed and to eschew reliance on the Lanham Act as a means to vindicate attribution rights. Indeed, if Dastar achieved any good, perhaps it was in taking the issue of authorial attribution out of the scope of the Lanham Act, where it represented a square peg being forced into the proverbial round hole.

C.  The Challenges with Creating an Independent Attribution Claim Under the Copyright Act

Other scholars have considered whether it might make sense to amend the Copyright Act to provide for a general attribution right. Jane Ginsburg, for one, has advanced such a proposal. While the idea certainly has a great deal of merit, it also suffers from some significant shortcomings. On the positive side, Ginsburg’s proposal seeks to resolve this surprising lacuna in American intellectual property jurisprudence by finally granting creators a general right of attribution. Meanwhile, Ginsburg proposes that the duration of the attribution right match the copyright term—thereby averting instances of attribution liability for the use of public domain works. For reasons that we have also advanced, she recognizes the importance of taking attribution rights outside of the Lanham Act given that attribution should be recognized regardless of proof of economic harm or consumer confusion. She also attempts to address potential issues regarding the unwieldy and uncertain scope of attribution obligations pre-Dastar by limiting the affirmative right of attribution to just legal authors and performers.

But Ginsburg’s proposal has some significant difficulties. While legal authorship is often singular, performers can number into the thousands. Thus, the inclusion of performers in the attribution requirement could create the specter of liability for the unwitting. More broadly, crediting of authors is not always practicable. Although Ginsburg addresses this concern by suggesting that the statute would be subject to a standard that incorporates a “reasonableness criterion,” such ambiguity is arguably the last thing that copyright law needs. After all, copyright users already have the remarkable illegibility of the fair use analysis with which to contend. A crediting requirement with no restraint other than “reasonableness” would add just another unfortunate layer to the copyright thicket of licensing and clearance requirements that already stifle creative activity.

Indeed, the very example that Ginsburg uses to tout the salutary and nimble nature of an attribution requirement grounded in the ambiguous notion of “reasonableness” demonstrates the dangers of such a regime. As she writes,

[A] requirement to identify all authors and performers may unreasonably encumber the radio broadcast of a song, but distributed recordings of the song might more conveniently include the listing. This may be particularly true of digital media, where a mouse click can provide information even more extensive than that available on a printed page.

Admittedly, with its relative dearth of space limitations, digital media makes it arguably reasonable, at least from a layout standpoint, to credit any number of authors and performers. But such a requirement can quickly become onerous. Consider a professor teaching a Russian history course who wants to screen excerpts from Alexander Sokurov’s acclaimed experimental drama Russian Ark, a ninety-six-minute film shot in just a single take one night at the Hermitage with a cast of more than two thousand actors and three orchestras. Though such an action would likely constitute fair use, meaning the professor could engage in the use without the hassle of payment and permission and without fear of infringement liability, Ginsburg’s proposal would place the professor in jeopardy of a different kind of liability: failure to attribute.

Ginsburg’s proposal also lacks any fair use defense, a point emphasized when she explains that “the test of reasonableness in this context is not the same as for fair use. The question is not whether the use should be prevented or paid for, as it is when fair use is at issue, but whether the use, even if free, should acknowledge the user’s sources.” While such a move is welcome from an equity point of view—for all too long, copyright law has devalued the creative contributions of performers—it bodes less favorably for those making use of copyrighted works. It is hard enough to prevail on a fair use claim; under Ginsburg’s proposal, users would also have to contend with exposure to an independent and separate cause of action turning on the reasonableness of their crediting practices.

In addition, entitling creators to an affirmative right of attribution could have a surprisingly adverse impact on the functioning of intellectual property licensing markets. For example, while they acknowledge the critical importance of crediting and recognition to authors (a position backed up by their own empirical experiments), Sprigman, Buccafusco, and Burns have suggested that the indisputable value that creators place on attribution should not automatically lead to legislative enactment of an affirmative attribution right. As they caution, the operation of a default right of attribution, even if waivable, could result in significant inefficiencies in the licensing market. Most obviously, transaction costs would increase. But less obviously, the combined impact of the endowment and creativity effects—which can cause irrational overvaluation of the intellectual property rights held by authors in their creative output—can make licensing transactions increasingly unlikely and more burdensome.

Grounded in the public interest and the efficient functioning of licensing markets, this argument warrants further examination and should give pause to any hasty enactment of attribution legislation. To understand why, an examination of the emerging literature in behavioral economics is in order. Specifically, in recent years, psychologists and economists have observed a phenomenon dubbed the “endowment effect,” wherein the subjective valuation an individual will give a particular object increases significantly when the individual possesses that object, even for a limited time. As a consequence of this effect, individuals will “demand much more to give up an object than they are willing to spend to acquire it.” Although not without its critics, this result appears to subvert neoclassical economic theory, which assumes that an individual’s willingness to pay (“WTP”) for a good should equal the willingness to accept (“WTA”) compensation for the loss of the good. In a now-classic experiment, Kahneman, Knetsch, and Thaler found that randomly assigned buyers valued a particular mug at three dollars, on average. By sharp contrast, randomly assigned owners of the very same mug required substantially more money (seven dollars, on average) to part with it. In short, the owners’ loss in divesting themselves of the mug was valued at more than twice the buyers’ gain in acquiring the exact same mug. Thus, under the endowment effect, most people appear to require a much higher price to part with a product to which they hold a legal entitlement (that is, through possession or ownership) than they would pay to purchase the very same product.

As it turns out, the endowment effect can be especially pronounced and dangerous in matters dealing with intangible property such as copyright. The tendency toward overvaluing endowed goods is amplified when measurements of value are more subjective, and the lack of fungibility for creative works can exacerbate holdout problems and make completion of licensing deals more difficult. This leads to what James Surowiecki and others have billed as the “permission problem.” And the impact is not merely the stifling of creative rights of scholars, critics, satirists, and others. Since the endowment effect raises the price otherwise demanded for access to a copyrighted work, “members of society do not enjoy the increased access to art that the copyright law is designed to provide.”

It is at this point that Sprigman, Buccafusco, and Burns’s findings become particularly salient. They found that the endowment effect was particularly extreme when creators engage in transactions involving their own work. This so-called “creativity effect”—what Sprigman, Buccafusco, and Burns refer to as “the tendency of creators of goods to assign higher value to their works not only compared to would-be purchasers of the goods, but relative also to mere owners (that is, subjects who had not created but merely been given the works, as in previous studies)”—can badly “magnify the valuation anomalies associated with the endowment effect. The creativity effect drives creators’ WTA even further away from buyers’ WTP, and in doing so it makes deals over creative goods more difficult to reach.” The data from Sprigman, Buccafusco, and Burns’s work therefore suggests that vesting an affirmative attribution right in creators could serve as a significant impediment to the licensing market and further complicate and stifle the ability of would-be licensees to reach deals for the use of creative content—a cost that impacts consumers of copyrighted works as well as the vast number of authors who draw upon preexisting content to create transformative works. As a result, they conclude that an affirmative attribution right would ultimately not serve the public weal and could have a disruptive effect on commerce.

But, perhaps most damningly, the biggest drawback to an independent claim for attribution under the Copyright Act is not whether it would make for good law but, rather, whether it would be feasible to pass such legislation in the first place. To illustrate this point, it is worth considering a few salient points about the history of copyright law in our country. It took almost 100 years for the United States to accede to the terms of the Berne Convention of 1886, which, since 1928 and per Article 6bis, requires member states to recognize a right of attribution. When the United States finally acceded to Berne in 1988, the House Report on its implementation concluded that a patchwork of existing laws in the United States already provided sufficient protection for attribution to meet Berne’s minimum standards. The availability of Lanham Act relief for reverse passing off in situations of misattribution was key to this conclusion. Nevertheless, Congress passed a narrow right of attribution under VARA shortly thereafter in 1990, which, as we have discussed, does not cover the vast majority of creative works and provides only scant protection. Furthermore, since Dastar, there has been no meaningful effort in Congress to undo its holding, making a legislative fix unlikely, at best. As this timeline illustrates, the odds of congressional intervention to add a broad attribution right to the Copyright Act—particularly given how constrained the attribution claim embedded in VARA ultimately became when it was finally passed in 1990—do not seem particularly good.

D.  A Modest Proposal: Locating Attributive Use in Section 107

With this analysis in mind, we turn our attention to a modest proposal that I believe would not require legislation and, in fact, already reflects the jurisprudence on fair use: the recognition by courts of attributive use as an express subfactor in the application of the fair use defense to allegations of copyright infringement. This proposal advances the cause of attribution rights in an incremental, but significant, manner; provides flexibility for courts to adapt the concept to contexts and emerging technologies; and bolsters norms of crediting in a way that can lay the framework for future (and bolder) changes in the law.

Moreover, the proposal builds on the important work done by Pierre Leval with his article Toward a Fair Use Standard some three decades ago. Just as Leval argued that transformative use was already, and had good reason to be, playing an important role in fair use determinations, I argue the same with attributive use. In that spirit, as the title of this Article suggests, I advocate a move toward a new fair use standard. As our exegesis of the extant jurisprudence on fair use reveals, attributive use already has an implicit place in the fair use calculus. I argue that courts should lean into this reality and make attribution an explicit consideration in their factor one analysis on the purpose and character of the use. Just like transformative use, which advances the utilitarian aim of the copyright regime to promote progress (by enabling the creation of new work), attributive use serves a key role in the copyright regime by helping advance progress in the arts (by appealing to the incentivizing function of crediting). So, under this scheme, as part of their factor one analysis, future courts would consider: (1) whether a use is commercial; (2) whether a use is transformative; and (3) whether a use is attributive. In short, attributive use would take its place with commercial and transformative use as key factors in determining the purpose and character of a defendant’s unauthorized exploitation of someone’s copyrighted work.

Admittedly, having attribution function only as part of an affirmative defense to infringement still leaves crediting as a tail, wagged by the infringement dog. But this solution avoids the numerous complications posed by either an affirmative attribution right in the Copyright Act or an undoing of the Dastar holding. Under such a proposal, the exploitation of works used with permission can continue to be governed by licensing terms that call for proper attribution as appropriate and meaningful, thereby leaving existing crediting regimes in place and enabling further development of new ones. But for unlicensed works, an attributive use subfactor will provide significant encouragement of crediting while not requiring it in every instance and leaving some flexibility around the issue, so that courts can consider the context of a particular use to decide whether attribution is valuable, meaningful, or practicable under the circumstances. As a result, crediting will not become an absolute requirement, thereby addressing the significant concerns that would come from a broad attribution right. Meanwhile, for public domain works, there will be no concern about attribution because such works would not be subject to a fair use defense since their exploitation is, per se, noninfringing. The proposal thus averts rightful concern about erstwhile copyright holders using crediting requirements to achieve perpetual protection for works that fall into the public domain.

Moreover, to avoid making attribution overly onerous, the crediting at issue could be limited to legal authorship. As even Ginsburg admits, attribution requirements can be burdensome, potentially causing a problem that she characterizes as the “most practical of all”: a regime that mandates “tiny print or endless film credits that no one will look at anyway.” To Ginsburg, criticisms about the potential burdens of crediting requirements are exaggerated. As she opines,

[D]ifficulties in determining whether a contributor at the fringes of a creative enterprise should be denominated an “author” or “co-author” should not obscure attribution claims where authorship is apparent. Moreover, where the creators are multiple, business practice may assist in identifying those entitled to authorship credit. That the resulting credits may not attract most readers’ or viewers’ attention does not warrant forgoing them altogether.

But there may also be a simpler response to these objections. Specifically, as Ginsburg herself admits, “Our caselaw has enough trouble, in the joint works context, identifying who is an author.” This is certainly true, but it is also worth noting that, as a result of this difficulty, courts have shown themselves extraordinarily loath to recognize joint authorship. Indeed, numerous doctrines, such as the strict reading of the mutual intent requirement, have emerged from courts to avert recognition of joint authorship. So, on a practical level, the problem of endless attribution seems quite solvable by considering crediting not of all creative contributors, but of the legal authors—a designation that courts have gone out of their way to make singular and, consequently, quite knowable (despite the many flaws in the way courts define legal authorship). In other words, given that courts already carefully circumscribe the notion of legal authorship in order to avoid the messiness of joint authorship and the accompanying headache it may cause in the fracturing of rights, attribution rights that are limited to recognition of legal authorship are not quite as complex as objectors may suggest.

All told, this solution draws on and expands, with some important alterations, a proposal once presented briefly by the late Greg Lastowka at the end of his article considering the (morbid) state of attribution rights post-Dastar. After bemoaning the extant law’s lack of protection for crediting, Lastowka proposed a corrective step: congressional amendment of section 107 to incorporate attribution as an explicit fifth factor in the fair use analysis. I tweak Lastowka’s proposal for two reasons. First, the addition of an express fifth factor would require legislative amendment, making change less likely (as I have documented with the difficulty in passing any affirmative attribution right in the Copyright Act). Indeed, as the influence of Leval’s 1990 article suggests, change through the common law is both swifter and more likely. Leval, of course, achieved a dramatic change in the way courts have approached the fair use analysis in the past three decades by emphasizing the importance of a factor that had received scant explicit consideration before: transformation. Second, analytically speaking, I argue that attribution already resides in the existing four factors without the need to add a fifth. Most significantly, as I shall detail, courts have both explicitly and implicitly considered attribution in the fair use calculus in the past, often as part of assessing the purpose and character of the use (factor one). Building on the occasional, but unpredictable, judicial solicitude toward attribution as a part of the fair use balancing test, I argue that, normatively, such a move makes a great deal of sense.

1.  Attributive Use and the Existing Fair Use Calculus

As Lastowka argued, courts have occasionally drawn on attribution as a factor in the fair use calculus. But, as he cautioned, 

[w]hat these cases demonstrate is not that attribution is regularly considered by courts as a factor in the fair use analysis. This is most certainly not the case. The cases merely illustrate that in certain cases, plaintiffs and defendants have been successful in persuading courts to incorporate evidence about attribution into a fair use analysis.

Lastowka may have understated matters, however. Indeed, a careful exegesis of the relevant jurisprudence—including noted decisions from the two circuits (the Second and the Ninth) that most prominently opine on copyright law, as well as consideration of the broad attributive practices in clearance norms—strongly suggests that attribution is already a guiding factor in the fair use calculus and, either explicitly or implicitly, is playing a (rightful) role in fair use determinations. As such, the proposal advanced here calls for overt recognition of attribution as a key subfactor in how courts weigh the purpose and character of a use. 

The fair use doctrine finds its origins in Justice Joseph Story’s influential 1841 opinion in Folsom v. Marsh. Eventually codified in section 107 of the 1976 Copyright Act, fair use typically involves weighing four statutory factors to determine whether an unauthorized use of a copyrighted work is excused from infringement liability. These factors include:

(1) the purpose and character of the use, including whether such use is of a commercial nature . . . ; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and (4) the effect of the use upon the potential market for or value of the copyrighted work.

However, with its use of open-ended language, the text of section 107 suggests that the four listed factors are not exhaustive of the considerations a court may undertake. As a result, courts have always had the freedom to introduce other relevant factors to their fair use analysis. Indeed, some have accepted the invitation, including many that have made attribution and crediting practices a consideration.

2.  The Role of Attribution in the First, Fourth, and “Fifth” Fair Use Factors

In Haberman v. Hustler Magazine, Inc., for example, a Massachusetts district court drew on “equitable considerations” as a fifth factor and found that the defendant’s attribution practices supported a fair use defense against infringement claims for unauthorized reproduction of two fine art photographs in a magazine. Specifically, the defendant’s fair use claim was substantially aided by the fact that it made “no effort . . . to palm [the photos] off as anything other than [the photographer’s] creations.” Thus, in some cases, attribution can and has become a part of the fair use calculus through an unofficial “fifth factor.”

That said, courts do not necessarily have to resort to the introduction of a fifth factor to make room for crediting. In fact, numerous decisions have integrated attributive use into their analysis of the existing four factors. This body of case law suggests that attribution already has a place (and voice) in the existing four fair use factors—particularly the first (“purpose and character of the use”) and fourth (“market harm”).

Melville and David Nimmer, for example, have argued that attribution can and should be a proper consideration in the first factor, as it speaks to the nature and character of the use being made by a defendant. Numerous courts have subscribed to this view, whose application is illustrated in Williamson v. Pearson Education, Inc. In the suit, Pearson Education offered up a fair use defense to infringement claims stemming from its publication of a book featuring unauthorized quotation of a number of passages from a prior work on General Patton’s leadership principles. Drawing on Harper & Row, Publishers, Inc. v. Nation Enterprises’s instruction to consider the “propriety of the defendant’s conduct” as part of the nature and character of a defendant’s use of a copyrighted work, the court found that the first fair use factor favored defendants because, among other things, they were “not attempting to pass [the] fact-gathering off as their own. Rather they are crediting [the plaintiff] as the source of the factual information that defendants use to construct some of the arguments in their book.” Similarly, in Rubin v. Brooks/Cole Publishing Co., a court identified the propriety of the defendant’s conduct as one of three subfactors under the “purpose and character of the use” consideration and found this factor favored the defendant since the defendant had “credited [the plaintiff] clearly and favorably in the text.” In another infringement case—one involving unauthorized use of a portion of a report about a hydropower facility—a federal court found that the defendants’ acknowledgement of the source of the original work helped the first fair use factor “weigh[] heavily in favor of a finding of fair use.”

It is not just the first factor that makes room for attribution. While “good faith” is the most common doctrinal vehicle through which crediting finds a voice in the first factor, economic considerations give crediting a voice in the fourth factor’s market harm analysis. Specifically, as Rebecca Tushnet has argued in the context of fan fiction, giving attribution attenuates the possibility of market harm. As she reasons, “Correct attribution helps prevent confusion and preserves the market for the official product and bears an indirect relation to the fourth fair use factor.” Tushnet’s view is not merely aspirational; it is also already reflected in some cases. In Richard Feiner & Co. v. H.R. Industries, Inc., for instance, a New York federal district court declined to grant a fair use defense to The Hollywood Reporter/HRI for its unauthorized use of a photograph of Laurel and Hardy in a feature spread on special effects and stunts. The failure to attribute the photograph to its author played an important role in the court’s calculus. “HRI’s use of the photograph without attribution to Feiner represents to the world that the photograph is in public domain,” the court concluded, “thus potentially impairing Feiner’s future revenue both in income and in the costs of protecting its rights.” Based on this logic, the court found significant market harm in HRI’s actions and concluded that the fourth fair use factor militated against the defendant. Indeed, the Feiner case stands in contrast to Nuñez v. Caribbean International News Corp., in which a different member of the media—a Puerto Rican newspaper named El Vocero—also published an article making unauthorized use of a photograph—an image from the modeling portfolio of the former Miss Puerto Rico Universe 1997. In Nuñez, crediting played a role in the fair use analysis under both factors one and four. On the first factor, the court considered good faith, which favored El Vocero since it had “attributed the photographs to Núñez.” On the fourth factor, the court pointed to the fact that “the only discernible effect of the publication in El Vocero was to increase demand for the photograph”—a consequence doubtless buttressed by the credit that El Vocero provided to Nuñez, which enabled future licensees to know whom to approach for permissions to use the image.

3.  Harper & Row’s Good Faith Admonition

With all of this said, it is important to acknowledge that the consideration of “good faith,” which courts have often used to raise attributive concerns, has come under fire in recent years. On the surface, this might suggest increasing judicial resistance to the factoring of crediting in the fair use calculus. But a closer examination of this trend says otherwise. 

The clearest expression of the invitation to consider good faith in fair use determinations (and the basis upon which some courts have invoked attribution) came in Harper & Row, in which the Supreme Court dictated, in seemingly absolutist terms, that “fair use presupposes ‘good faith.’ ” Despite the blusterous verbiage, the exhortation remained inchoate, as the case gave little guidance as to just what constituted “good faith.” In that particular instance, the Court was referring to how The Nation’s knowing exploitation of a purloined manuscript that remained unpublished demonstrated bad faith because the magazine had usurped the copyright holder’s valuable commercial rights of first publication. The Court then grafted this assessment of propriety onto its consideration of the first, second, and fourth fair use factors in finding that The Nation’s actions had no refuge in the defense. Notably, the matter of attribution (as a form of good faith or otherwise) had nothing whatsoever to do with that case.

In the intervening years, the Supreme Court has walked back Harper & Row’s language about good faith. In both of its most recent fair use pronouncements—Campbell in 1994 and Oracle in 2021—the Court has expressly downplayed the consideration. In Campbell, the Supreme Court acknowledged a split in persuasive authorities as to whether fair use must necessarily presuppose good faith. In Oracle’s dicta, the Court cited to Leval’s work to express its skepticism as to whether good faith should ever be a factor in fair use determinations. Ultimately, however, the Oracle Court eschewed taking a definitive position on the issue, leaving the weight (if any) given to good faith squarely to lower courts to determine. As the Court mused, “We have no occasion here to say whether good faith is as a general matter a helpful inquiry.” Nevertheless, it would not be a stretch for lower courts to view the commentary in Campbell and Oracle as dampening enthusiasm for further use of a general good faith consideration in the fair use calculus.

The evolving concern about consideration of “good faith” as dictated in Harper & Row is compelling—at least in the manner in which the Harper & Row, Campbell, and Oracle Courts used the term, where they considered whether an alleged infringer had engaged in related wrongdoing, such as exploiting a purloined manuscript or proceeding to use a work despite being denied permission. After all, if someone asks for, and is denied, permission but proceeds with a use anyway, it is debatable whether that demonstrates good faith, bad faith, or nothing at all. As Elina Lae points out, courts have gone both ways on this issue—a fact that speaks to the unworkability of the factor. Or if a defendant genuinely believes that their actions constitute fair use and does not ask for permission, it is hard to understand why that should be held against them in the very determination of whether something is fair use. Indeed, while conceding the potential utility of a good faith consideration in the fair use calculus, the Ninth Circuit long ago pointed out that “to consider [a defendant] blameworthy because he asked permission [and went ahead with the use after not receiving it] would penalize him for this modest show of consideration. Even though such gestures are predictably futile, we refuse to discourage them.”

More pointedly, when one reads the Copyright Act holistically, the introduction of considerations of propriety into the fair use calculus would appear unbalanced, particularly given that courts do not typically weigh such factors in determining putative rightsholders’ entitlement to copyright protection. As Leval has forcefully argued,

Copyright seeks to maximize the creation and publication of socially useful material. Copyright is not a privilege reserved for the well-behaved. Copyright protection is not withheld from authors who lie, cheat, or steal to obtain their information. If they have stolen information, they may be prosecuted or sued civilly, but this has no bearing on the applicability of the copyright. Copyright is not a reward for goodness but a protection for the profits of activity that is useful to public education. The same considerations govern fair use.

In the end, therefore, when viewing fair use as an integral part of the utilitarian goal of our copyright regime, the ultimate focus on progress in the arts would appear to preclude consideration of the subjective mental state of a defendant or a defendant’s general intentions. 

But, quite critically for our purposes, the discussion about good faith in Harper & Row, Campbell, and Oracle has never once touched on the issue of attribution. And although both Campbell and Oracle cited to Leval’s admonition to decouple fair use from good behavior, his entreaty also had nothing to do with attributive practices. Indeed, as Leval reasons, fair use should be focused on progress in the arts. As we have detailed, that is precisely the reason why attribution should be considered, while other good faith factors, such as a defendant’s state of mind or general intentions, should not. Thus, despite the concern about good faith expressed in Campbell and Oracle and by Leval, these contemplations do not speak to the issue of crediting. And, if anything, the case law’s continued emphasis on promoting a vision of fair use dominated by the utilitarian goals of the copyright regime favors the consideration of attribution (though perhaps not general good faith) in the fair use calculus.

4.  The Existing Role of Attribution in Second and Ninth Circuit Jurisprudence

All the while, attribution has played and continues to play a role in the existing body of fair use jurisprudence in the influential Second and Ninth Circuits—from which the plurality, if not majority, of copyright caselaw stems. While the decisions that explicitly invoke attribution do not take the next step of formally mandating a role for crediting in the enumerated factors, when read together they suggest that such a role—much like the transformative use element identified by Leval in Toward a Fair Use Standard—would be consistent with the existing fair use rubric. Indeed, if district courts in these circuits hewed more closely to this extant jurisprudence, attribution would enjoy more overt recognition as a fair use factor.

The precedent of the Second Circuit is replete with cases where attribution has played a key, if not decisive, role in the fair use calculus. Weissmann v. Freeman is a particularly instructive case because it stemmed from an academic dispute where credit and recognition, rather than significant profits, were at stake. In the suit, a professor at the Albert Einstein College of Medicine, Heidi S. Weissmann, took legal action against her colleague and erstwhile collaborator, Leonard Freeman, for copyright infringement, claiming that Freeman had unlawfully reproduced fifty copies of one of her articles for use in a special review course he was giving at the Mount Sinai School of Medicine. Ordinarily, such a relatively de minimis and educational use of a work would not give rise to infringement litigation. But one key fact made this case atypical: Freeman had deleted Weissmann’s name from the paper and, after adding three additional words to the title, put his own name on it. Although Weissmann might have pursued a claim for reverse passing off (albeit with some potential hurdles), she focused on a copyright infringement claim. Though the lower court had held Freeman’s use was a fair one, the Second Circuit reversed.

Despite claims that Freeman’s use was noncommercial and for academic purposes, his failure to credit Weissmann (and his substitution of his own name as the author) proved outcome determinative in the Second Circuit, and the court cited the crediting issue in its analysis on the first, second, and fourth fair use factors. On the first factor, the court determined that, in wresting credit from Weissmann, Freeman’s motivations were commercial in nature as he profited from the use, which enhanced his professional reputation. As the court noted, “Particularly in an academic setting, profit is ill-measured in dollars. Instead, what is valuable is recognition because it so often influences professional advancement and academic tenure.” On the second factor, the court noted the importance of weighing the incentives for continued creation of scholarly work and thereby linked attribution and progress in the arts. As it reasoned, crediting provided the likes of Weissmann “with an incentive to continue research—an endeavor that, if successful, would and has led to professional and monetary benefits [and that courts] need to uphold those incentives necessary to the creation of works such as [the article].” Finally, on the fourth factor, the court saw clear market harm because “[i]n scholarly circles such as, for example, the small community of nuclear medicine specialists involved in this suit, recognition of one’s scientific achievements is a vital part of one’s professional life.” In short, the issue of crediting permeated the entire fair use analysis in Weissmann. In the end, the court also emphasized that “[n]o case was cited—and [they] found none—that sustained [a fair use] defense under circumstances where copying involved total deletion of the original author’s name and substitution of the copier’s.” This observation alone suggests the implicit role of attribution (and the disfavor with which misattribution is viewed) in the extant fair use jurisprudence.

Rogers v. Koons, one of the most well-known copyright decisions from the Second Circuit, also emphasizes the role of attribution in fair use determinations. The Rogers controversy started when prominent modern artist Jeff Koons found inspiration in a cheap tourist postcard featuring Art Rogers’s Puppies, a photograph of a couple and their dogs posing in Rockwellian tranquility. Koons appropriated the depiction without Rogers’s permission and accentuated various elements to create a sculptural work, String of Puppies, that satirized suburban American aesthetic sensibilities. Rogers, unamused by Koons’s actions, sued for copyright infringement and Koons claimed fair use. The district court rejected Koons’s defense, holding that, among other things, his activities were not sufficiently transformative because they did not criticize or comment upon Rogers’s original photograph. On appeal, the Second Circuit affirmed, noting that, “though the satire need not be only of the copied work and may . . . also be a parody of modern society, the copied work must be, at least in part, an object of the parody, otherwise there would be no need to conjure up the original work.” Because Koons purportedly did not “need” the original work to make his expressive point, there could be no fair use. The result of the case was a debacle for Koons. In addition to damages, he was ordered to pay the plaintiff’s attorneys’ fees.

To critics of the decision—which came down before Campbell, with its trumpeting of Leval’s Toward a Fair Use Standard and its advocacy for the primacy of an expansive notion of transformative use—the court gave short shrift to the transformative use that Koons had arguably made of Rogers’s original work. Indeed, one could argue that the Second Circuit implicitly reversed itself in its next case involving Jeff Koons and appropriationist art, in which it reached the exact opposite conclusion.

That said, the absence of proper crediting served as a critical factor in justifying the court’s rejection of Koons’s fair use defense. Specifically, in considering the first factor—the purpose and character of the use—the court drew on notions of attribution in two separate ways to weigh against a finding of fair use. First, employing the “good faith” criterion, the court pointed to Koons’s decision to remove a copyright notice from one of Rogers’s notecards that he sent to Italian artisans when he commissioned fabrication of his statue based on the Rogers photograph. As the court concluded, this action “suggests bad faith in defendant’s use of plaintiff’s work, and militates against a finding of fair use.” Second, the court concluded that the failure to credit supported its finding that Koons’s work was not commenting sufficiently on the original work to qualify for the heightened protection usually given to parody. As the court argued, Koons’s use failed to make the public aware of the original work, and the resulting lack of proper crediting undermined a key reason why certain types of commentary—attributive ones—get fair use protection. “If an infringement of copyrightable expression could be justified as fair use solely on the basis of the infringer’s claim to a higher or different artistic use—without insuring public awareness of the original work—there would be no practicable boundary to the fair use defense,” noted the court.

Koons’ claim that his infringement of Rogers’ work is fair use solely because he is acting within an artistic tradition of commenting upon the commonplace thus cannot be accepted. The rule’s function is to insure that credit is given where credit is due. By requiring that the copied work be an object of the parody, we merely insist that the audience be aware that underlying the parody there is an original and separate expression, attributable to a different artist. This awareness may come from the fact that the copied work is publicly known or because its existence is in some manner acknowledged by the parodist in connection with the parody. 

Courts within the Second Circuit continue to pick up this language from Rogers to emphasize the importance of crediting in the fair use calculus. For example, in a recent published decision, the Southern District of New York drew upon Rogers’s bad faith analysis to find that this subfactor within the “purpose and character of the use” weighed in favor of the plaintiff when the defendant removed the copyright mark, thereby robbing the use of attribution. All the while, at least one Second Circuit decision, NXIVM Corp. v. Ross Institute, has mandated that good faith be considered by any court conducting a fair use analysis.

The Ninth Circuit has also cited the failure to provide authorial attribution as an important factor weighing against fair use. In Marcus v. Rowley, the court treated attribution as part of the first fair use factor in a manner consonant with Harper & Row’s good faith edict. Thus, the defendant’s failure to provide credit weighed against a finding of fair use. As in Weissmann, the complete absence of any pecuniary harm and the use of the work in an academic setting did not trump the failure to credit. Meanwhile, in Narell v. Freeman, the author of a best-selling novel, Cynthia Freeman, was caught having lifted, word-for-word, portions of her work from a previously published book by Irena Narell about the history of Jewish migration to San Francisco. Although the Ninth Circuit ultimately found that the misappropriated sections were largely factual in nature and not protectible expression, it nevertheless addressed the issue of fair use. And while it found Freeman’s actions protected by the fair use doctrine, it held that the first factor “weigh[ed] heavily against Freeman because she did not acknowledge in her work that she had consulted Our City in writing Illusions.”

The Narell court cautioned that acknowledgment would not, by itself, excuse infringement. And that has certainly proven true in other cases. For example, when a luxury fashion designer pushed back against claims that it had infringed a photograph of a model wearing its clothing, it advanced its fair use defense by pointing out that it had credited the photographer. The court demurred, noting that 

Defendant has not pointed to any precedent supporting its theories that because Defendant credited the photographer in the caption of the Photograph or because Plaintiff hired the model to wear Defendant’s clothing, Defendant has a right to use the Photograph. Simply put, attribution is not a defense against copyright infringement.

Even there, however, the court reaffirmed the value of attribution in the fair use calculus by citing to Narell favorably and noting that, while “acknowledgement does not itself excuse infringement,” a “finding [of] failure to properly attribute copyrighted material weighs against fair use.”

5.  The Implicit Role of Attribution in Transpurposive Use Cases

Beyond the case law from the Second and Ninth Circuits explicitly embracing attribution as a factor, there are instances where crediting has played an implicit role in fair use determinations. This is particularly true in a body of recent cases involving technology-related transpurposive uses excused by courts under the defense. So, for example, in Kelly v. Arriba Soft Corp., the Ninth Circuit blessed the unauthorized creation of thumbnail versions of online images to facilitate search engine functionality. This judgment was famously reaffirmed in Perfect 10, Inc. v. Amazon.com, Inc., when the Ninth Circuit found that Google’s creation of thumbnails for its search engine was also protected.

The Kelly suit resulted from Arriba’s practice of crawling websites and indexing them by creating thumbnail versions of images that it found on those sites. Arriba would then provide these thumbnails to its users, as relevant, in results on its search engine. Photographer Leslie Kelly, a chronicler of the American West in his images, objected and sued for infringement. The Ninth Circuit, however, held that Arriba’s activities constituted fair use, and the attributive nature of Arriba’s use helped tip both the first and fourth factors in Arriba’s favor.

On the first factor, the court placed great weight on the fact that the use of the photographs was both transformative and attributive. Unlike the original images, which were intended to inform and provide visual enjoyment, Arriba’s thumbnails served no aesthetic purpose and, instead, functioned solely as “a tool to help index and improve access to images on the internet and their related web sites.” Here, the attributive characteristics of Arriba’s thumbnails became important, as they were instrumental to the transformative functionality they embodied. As the court noted, clicking on thumbnails would produce an “Images Attributes” page that would provide a link back to the original page, which was the photographer’s home page, where sourcing information and attribution would naturally follow. If the thumbnails did not produce sourcing and attribution, they would not serve a true indexing purpose and, consequently, they would not promote the locational function of search engines.

As for the fourth fair use factor, the court found no market harm precisely because of the link-back attribution feature: “By showing the thumbnails on its results page when users entered terms related to Kelly’s images, the search engine would guide users to Kelly’s web site rather than away from it.” Steering users to the home webpage for the original source of the photographs would then provide them with crediting and attribution information and, if they were interested, the ability to license. Although the Kelly court never invoked the language of crediting, attribution played an implicit role in permitting Arriba’s creation and use of thumbnails under the fair use doctrine. If the thumbnails had been created without an ability to link back to the original source (where attribution would be given), they would have been neither “transpurposive” nor free of market harm.

Similarly, when evaluating whether Google’s creation of cached copies of websites and the images of them for its search engine would be excused from infringement liability, the court in Field v. Google Inc. drew on prior Ninth Circuit authority and the open-ended language of the Copyright Act to weigh, as a fifth factor in the fair use calculus, “a comparison of the equities.” Notably, the equities and good faith that mattered to the court revolved around Google’s decision to provide a link back to cached websites—a key mechanism for digital attribution.

6.  Attribution and Copyright Clearance Norms

Finally, the copyright norms we hold most dear as a society also support attribution as a mitigating factor in practices that might otherwise constitute infringement. For example, although the practice of quotation literally reproduces and distributes copyrighted material without permission, it is, notwithstanding the antics of the James Joyce Estate, universally viewed as per se fair use. Indeed, the decision that gave rise to the fair use defense in the first place, Folsom v. Marsh, announced the obviousness of this proposition when Justice Story wrote that “no one can doubt that a reviewer may fairly cite largely from the original work, if his design be really and truly to use the passages for the purposes of fair and reasonable criticism.” Quotation inherently involves attribution. The use of inverted commas around a sentence directly indicates that the sentence’s origins lie with a third party and not with the author of the text that one is reading. Critically, those inverted commas are then accompanied by some form of attribution. Admittedly, the attribution is part of the transformative function of quoting. After all, criticism or commentary about someone else’s work cannot occur unless that other work is identified. But the equitable nature of a quotation—giving attribution—distinguishes it from the act of taking someone else’s words without permission in the exact same way but without attribution—an act that can amount to infringement. Or, to put it in the form of a Zen-like koan:

Q: What is a quotation without quotation marks?

A: An infringement.

When using portions of someone else’s work, a key differentiator between infringement and fair use is the act of attribution.

Common beliefs about copyright law reflect our mores (even when they do not entirely match up with the law), and these popular notions have also long viewed attribution as a factor in what actions should or should not constitute infringement. As any copyright practitioner knows, popular misconceptions about fair use abound. Members of the public will often cite a particular bright-line threshold of use (no more than five percent of a work, for example) as providing insulation from infringement liability; they will claim that, so long as one does not profit from unauthorized exploitation of a copyrighted work, it constitutes fair use; or they will contend that any educational use automatically constitutes fair use. Of course, each of these absolute statements is wrong. But the beliefs are all based on a kernel of truth: the less one uses of a work, the less one profits from an unauthorized exploitation, and the more academic a use, the more likely it is to constitute fair use. The same dynamic is at play with another popular myth: that giving proper credit can insulate you from infringement liability for the unauthorized exploitation of someone else’s work. While this notion is obviously incorrect, it is not entirely without basis. Although it may be infringing just the same, a credited use is indubitably more fair than an uncredited use, as the former better complies with societal norms against plagiarism and advances authorial interests in attribution. As such, the former is more likely (even if marginally so) than the latter to enjoy a viable fair use defense.

For example, consider a recent controversy at the intersection of intellectual property and cultural appropriation. In 2021, Addison Rae Easterling, a popular social media influencer, appeared on The Tonight Show to “teach” Jimmy Fallon eight of the most popular dances trending on TikTok and other social media sites. Although the dances can enjoy copyright protection as choreographic works, the routines were not licensed from their creators, and, to make matters worse, Easterling and Fallon originally failed to give any credit at all to the originators of the dances. As Kamilah Moore, an entertainment attorney and the chairperson of the California Assembly’s Reparations Task Force, has noted, this misstep was particularly pernicious because of its racial valence: it “received major backlash” because white TikTok superstars “have often gained notoriety and received millions of views by parroting dance routines primarily created by Black creators and other creators of color” while failing to provide “proper attribution in the marketplace.” Easterling and Fallon’s unauthorized recreation of the dances could well constitute fair use, but there is a vast difference between an unlicensed use that is also uncredited and one that is at least attributive. Recognizing this and responding to public pressure, Fallon subsequently attempted to make amends by hosting the dance creators in a later broadcast of his show. Given the importance of correcting for persistent gaps in crediting that troublingly fall along racial, gender, and socioeconomic lines, as well as acknowledging the economic value of proper attribution, a fair use regime that incorporates attribution as a factor would incentivize users of intellectual property to at least recognize the cultural influencers from whom they draw.

As this survey of the fair use landscape and extant jurisprudence has shown, acknowledged or not, crediting is already an implicit fair use factor. The only remaining question is to what degree it is or should be. That query provides ample fodder for future scholarship, as the weight accorded to attributive use, just like the other fair use factors in section 107, would be something left firmly to the discretion of judges, who can consider the particular context in an adaptive and holistic manner.

CONCLUSION

The Supreme Court decided Dastar almost twenty years ago. Since that time, the sky has admittedly not fallen. As such, it would be hyperbole to suggest there is some kind of pressing emergency to address the absence of attribution rights. Nevertheless, as we have detailed, the lack of substantive legal protection for crediting in the post-Dastar era has impoverished the responsiveness of our intellectual property regime to the interests and motivations of authors. With this in mind, change through legislation would face difficult odds. After all, it took almost a century for the United States to accede to the Berne Convention and, when it did, that only sparked the most minimal of efforts to comply with Berne’s attribution requirements through the relatively limited scope of VARA. The most significant legislative efforts on attribution in the past generation came through the DMCA’s provisions on the falsification, removal, and alteration of CMI. But even there, concern over the facilitation of infringement, rather than vindication of crediting interests, drove the statute—a fact made clear by the claims’ onerous double-scienter requirements. As such, it would be disingenuous to fail to recognize that the prospects for legislative action for broader crediting protection are dim, at best. Moreover, there are good reasons to doubt whether the benefits of providing affirmative attribution rights either by overruling Dastar or by providing an independent cause of action for crediting under the Copyright Act would outweigh the risks. Instead, change can come in another, more incremental fashion—one that averts key dangers of an affirmative cause of action for crediting, has grounding in the existing jurisprudence, and could occur without legislative action.

A generation ago, Pierre Leval’s Toward a Fair Use Standard forced a first major change in fair use jurisprudence. I argue for a second. Like the implementation of transformative use, attributive use fits comfortably into the existing jurisprudence on fair use, speaks to the purpose and character of the use and even market harm factors, and ultimately supports the underlying purpose of the overall copyright regime: promotion of progress in the arts. Since fair use only represents a defense to infringement, such an admittedly restrained approach still tethers attribution to infringement and, as such, it does not represent a complete victory for the independent value of crediting. Nevertheless, as the success of Leval’s appeal to transformative use shows, such an entreaty for change can have an impact. Moreover, if implemented, recognition of attributive use would promote further development of a culture of crediting, a particularly important move at the dawn of the digital age, when individuals make (infringing) use of copyrighted works thousands of times per day. Ultimately, therefore, the attributive use doctrine could not only enable fair use to operate in a way that is more consistent with the utilitarian design of our copyright regime, but also help build support for further recognition of attribution rights in the future.

 

96 S. Cal. L. Rev. 1

Download

A.B. Harvard, J.D. Yale Law School. Paul W. Wildman Chair and Professor of Law, Southwestern Law School; Visiting Professor of Law, University of California, Los Angeles (“UCLA”) School of Law.

The Invention of Antitrust

The long Progressive Era, from 1900 to 1930, was the Golden Age of antitrust theory, if not of enforcement. During that period courts and Progressive scholars developed nearly all of the tools that we use to this day to assess anticompetitive practices under the federal antitrust laws. In a very real sense, we can say that this group of people invented antitrust law.

The principal contributions the Progressives made to antitrust policy were (1) partial equilibrium analysis, which became the basis for concerns about economic concentration, the distinction between short- and long-run analysis, and later provided the foundation for the development of the antitrust “relevant market”; (2) the classification of costs into fixed and variable, with the emergent belief that industries with high fixed costs were more problematic; (3) the development of the concept of entry barriers, contrary to a long classical tradition of assuming that entry is easy and quick; (4) the distinction between horizontal and vertical relationships and the emergence of vertical integration as a competition problem; and (5) price discrimination as a practice that could sometimes have competitive consequences. Finally, at the end of this period came (6) theories of imperfect competition, including the rediscovery of oligopoly theory and the rise of product differentiation as relevant to antitrust policy making.

Subsequent to 1930, antitrust policy veered sharply to the left. Then, two decades later it turned just as sharply to the right. Eventually it moderated, reaching a point that is not all that far away from the Progressives’ original vision.

INTRODUCTION

The long American Progressive Era to the New Deal, roughly 1900 into the early 1930s, was the formative age of antitrust policy. During this period a diverse group of policy makers developed nearly all of the analytic tools that antitrust law uses today to evaluate business practices or market structures thought to be anticompetitive. For all intents and purposes, they invented antitrust law. In fact, after decades of experimentation we are reclaiming much of it. The extraordinary Progressive influence on antitrust policy was at least partly a historical coincidence. The passage of the Sherman and Clayton Acts and the development of techniques for evaluating practices tracked extraordinary developments in technology as well as social and economic thought. Antitrust policy would have looked very different had it developed a half century earlier.

The Progressive Era antitrust movement was both political and economic. It reflected the emergence of new interest groups as well as new sources of economic concern and theoretical developments. The emergent interest groups were large multistate business, the trade association movement dominated by small business, consumers, and labor. The new sources of concern were industrialization, the rise of modern distribution, the labor movement, and the increasing importance of consumers as market participants. The new theoretical developments were the rise of marginalist economics and industrial organization theory, which provided competition analysts with a set of tools like none they had before.

The legislative debate leading up to the Sherman Act can hardly be characterized as a dispute about economic theory. That came later as litigants and courts looked for tools that would enable them to assess practices in a coherent way. Consistent with the economic-focused language of the Sherman Act itself, the tools that emerged were mainly economic, although they were applied by non-economist lawyers and judges. The record of their engagement with the law is impressive; judges routinely used them even if they were not aware of their economic origins or technical meaning. Nearly all of these developments placed antitrust theory on an expansion course that prevailed until the reaction against the New Deal found a voice in the neoliberalism of the 1940s, particularly as expressed by the Chicago School. Even so, the neoliberal revolution adopted most of these tools, although it modified some of them and rejected a few.

The Progressives are occasionally caricatured as people who really did not care about costs and productivity but were concerned exclusively about bigness as such. That could not be further from the truth. By and large the Progressives appreciated the fact that the trusts had lower costs than smaller firms and did not want to punish them for that. In fact, they were fairly obsessed with efficiency and cost cutting. That obsession extended even to Louis Brandeis, a strong proponent of business efficiency even as he railed against large firms. He campaigned for “Taylorism,” or scientific management, as a way of limiting price increases. One antinomy in Brandeis’s work was his persistent failure to acknowledge the relationship between greater efficiency and larger size, even though contemporary economists clearly did.

The numerous and varied participants in the Chicago Conference on Trusts, discussed below, favored lower costs and were also concerned about higher prices. They worried that exclusionary practices might be a vehicle for achieving them and making market dominance permanent.

I.  THE CHICAGO CONFERENCE ON TRUSTS

The Progressive Era was heavily preoccupied with the rise of larger firms, or the “trust” problem. The initial reaction was an eclectic range of views about what to do about them, or whether to do anything at all. The gigantic 1899 Chicago Conference on Trusts, hosted by the Civic Federation of Chicago, is an exceptional window into the contemporary mindset because it reflected this diversity of views. Its personnel and proceedings, which were published in 1900, represented every interest group that had a stake in policy about the trusts. Some participants were invited by the conference managers, while others were invited by the governors of individual states. The speakers included politicians, economists, lawyers, social scientists and statisticians, industrialists, labor union leaders, insurance company representatives, and even clergy.

This diverse group identified a number of phenomena that explained the rise of the trusts and that either justified or damned them. Some argued that the trusts were entirely the consequence of economies of scale or scope and as such were an engine of economic progress that should be left alone. Others argued that potential competition and new entry would always be present to discipline monopoly pricing, thus mitigating any concerns. Many others saw the trusts as harmful and blamed their rise on deficiencies in state corporate law. They debated about a national corporation act as a potential solution. Others both blamed and defended tariffs or unethical business actors.

Within this amalgamation of concerns the Sherman Act itself was hardly dominant. In fact, it played a surprisingly small part, and the speeches tended to emphasize its deficiencies more than its strengths. Henry Rand Hatfield’s well-known contemporary account of the Chicago Trust Conference is very likely responsible for the view that the economists who spoke were nearly all opposed to the Sherman Act. A fair reading of the proceedings suggests two quite different splits. First was the division of those who thought that the trusts were efficient and harmless from those who regarded them as threatening. Contrary to Hatfield’s view, a clear majority believed that the trusts presented a serious problem. Second was the question of the best legal tools for confronting them. Here, Hatfield’s point has more traction. As correctives, corporate law and tariff reform were at least as prominent as the Sherman Act, and many of the speakers professed strong disappointment in Sherman Act litigation to that time. Although the speakers were hardly unanimous, the strongest consensus around a single view was that the trusts should be controlled by changes in corporate law.

Prior to the Chicago Conference, the Civic Federation had sent a questionnaire to participants. The summaries contained in the Proceedings say nothing about the methodology, but there were 554 respondents to 69 questions. The respondents were described as “trusts, wholesale dealers, commercial travelers’ organizations, railroads, labor associations, contractors, manufacturers, economists, financiers, and public men.” A separate list, or circular, was sent to a smaller but overlapping group of lawyers, economists, and “public men.” The description of the survey also fails to indicate whether respondents were limited to one answer or could select multiple answers. Nor does it specify how recipients of the questionnaire were selected or how they were distributed over the various interest groups. These omissions largely reflected the state of public opinion research at the time. In any event, nothing suggests that this was anything more than an informal questionnaire distributed broadly to invitees.

David Kinley, a professor from the University of Illinois, reported on the results. Three quarters of the participants overall believed that the trusts injured consumers. Two-thirds of the respondents regarded the trusts with “apprehension.” Most respondents on the main questionnaire believed that the trusts resulted in higher prices. However, 90% of the respondents on the second Circular, which was more focused on academics, lawyers, and government officials, believed that the effect of the trusts was to reduce costs. Two-thirds of the respondents on this second list also believed that consumers would benefit. Interestingly, roughly two-thirds of the respondents overall believed that labor organizations should be treated as all other trusts, while one-third took an unspecified “opposite view.” This suggests that the idea of a labor “exemption” from antitrust law did not have popular support in 1900. Tellingly, this occurred after the federal courts had begun using the Sherman Act as a powerful strike-breaking device. Evidently, most of the participants did not object.

The survey concluded with a very general question: “What shall be done with combinations?” The answers were all over the place, with pluralities going to unspecified “legislation” (61 respondents), “let alone” (60) and the third highest specific proposal going to “Tariff revision” (45). “Antitrust” did not appear on the list, except to the extent it may have been included in unspecified legislation. Twenty-six respondents preferred government ownership or control of natural monopolies, and even fewer (10) supported “Stricter Limitation on Corporate Powers.” No specific proposal other than “let alone” received 10% of the votes, and it received only 10.8%. The list of options did not include any that were obviously related to morals or ethics, although 123 responses were classified as “miscellaneous,” with no specification of their content.

At the time of the conference, the Sherman Act was nearly ten years old and had produced two important Supreme Court decisions condemning railroad cartels. Even here, the very small number of comments on the railroad cartel decisions were more negative than positive. One complaint was that the railroad cartel cases did not authorize the courts to set reasonable rates, but only to condemn bad agreements. Another was that the Trans-Missouri railroad cartel case, which had adopted a per se rule against price fixing, had largely “expunged” the rule of reason from the law.

By 1900 the Sherman Act had also been used aggressively several times against labor unions, a development that was both praised and condemned by participants. In nearly all of the labor cases the plaintiff had been the United States, thus inviting debate about what should be government policy toward labor union activities. P.E. Dowe, statistician of the Anti-Trust League, declared that while the cost of living within the last two years had increased some 12–16%, wages had risen by less than 3%. Nevertheless, as noted above, there was little support for labor antitrust immunity. Overall, while attitudes toward labor changed significantly between 1890 and 1914 when the Clayton Act was passed, most of this was not yet reflected in the conference proceedings.

Other conference participants criticized the Supreme Court’s very first Sherman Act decision, United States v. E.C. Knight Co., which had concluded that Congress lacked the constitutional authority to control intrastate manufacturing simply because the goods were destined for interstate shipment. That provoked the view that the country “must have a constitutional change if the general government is to deal with the trust problem.” Another speaker praised the railroad cartel decisions as well as E.C. Knight for developing the distinction between intrastate and interstate trusts. Many commentators expressed concerns about federalism, but most were of the nature that while the states had a primary role in combatting trusts they could not control interstate companies without federal assistance.

Several conference participants spoke about the role of costs. Many recognized that the trusts tended to reduce costs. Even the “Great Commoner” William Jennings Bryan acknowledged the cost reductions but protested that nothing ensured that these savings would be passed on in the form of lower prices. “A trust, a monopoly, can lessen the cost of distribution. But when it does so society has no assurance that it will get any of the benefits . . . .” Similarly, others indicated a concern for higher prices. For example, John M. Stahl of the Farmers’ National Congress acknowledged that the trusts had lowered costs but accused them of setting anticompetitively high prices. Some participants defined competition in terms of cost reduction.

Critics later faulted the Chicago Conference for failure to make specific recommendations, and state governors called a second conference for that purpose which met in St. Louis later in 1899. It issued a number of recommendations, but its proceedings were apparently never published and it received little attention from the press. It was dominated by state attorneys general who focused largely on corporate law remedies. Its recommendations either duplicated those already contained in the Sherman Act or else called for corporate law modifications limiting the power of corporations to do business in more than a single state.

The path of antitrust development that took place in subsequent years leading up to the Clayton Act in 1914 was much more focused than the conference debates, mainly because many alternatives dropped away. The move for a national incorporation statute or expanded state corporate law remedies ran out of gas. Debates over the tariff remained, but no legislation ever linked them to trusts as such.

The role of labor became more controversial after 1900, with distinctive positions emerging by the 1912 presidential election. The 1912 Democratic Party platform called for protection of labor organizing so that “members should not be regarded as illegal combinations in restraint of trade.” The Republican platform was silent on that issue, although it did advocate for preservation of high tariffs as a means of protecting workers’ wages. High tariffs, it should be noted, protected producers directly, and labor only if producers passed on some of their gains in the form of higher wages. The Progressive Party, with Theodore Roosevelt as its head, called for an end to labor injunctions but did not mention a substantive antitrust immunity. The Democrats’ 1912 election victory very likely accounts for the insertion of a labor immunity into the Clayton Act, now section 6.

Debates over good morals in business behavior are of course never ending, but the concerns were never reflected in the text of an antitrust statute. Rather, when it passed the Clayton Act in 1914, the Progressive-dominated Congress doubled down on the use of exclusively economic language. The Act condemned conduct when it threatened to “substantially lessen competition” or “tend to create a monopoly.”

One thing that emerges powerfully in the proceedings of the conference is that, even though the participants represented a wide variety of political beliefs as well as professions, for a clear majority of them the dominant concern was with the power of the trusts to set high prices or drive rivals out of business. But there were some exceptions. Of the roughly seventy participants whose statements were published, a half dozen emphasized political or social concerns either in addition or as an alternative to the economic ones. The most prominent in Progressive circles was economist Henry Carter Adams, at this time a statistician for the Interstate Commerce Commission. Adams spoke at some length about rising concentration and economic power, as well as the deficiencies of state corporate law. However, he also complained about the “general social and political results of trust organizations” that must be considered. “For the preservation of democracy there must be maintained a fair degree of equality in the social standing of citizens,” he observed, and wondered whether the rise of the trusts was consistent with that. He concluded:

I would not claim, without discussion, that the trust organization of society destroys reasonable equality, closes the door of industrial opportunity, or tends to disarrange that fine balance essential to the successful workings of an automatic society; but I do assert that the questions here presented are debatable questions, and that the burden of proof lies with the advocates of this new form of business organization.

He also suggested that the trusts might have outsize political influence.

Dudley Wooten, then a member of the Texas legislature, agreed, arguing that the trusts were antidemocratic perversions brought about by selfishness. Aaron Jones, a leader of the national Grange, a populist political organization of farmers, observed that the sugar trust made political contributions to the Republican Party in Republican-controlled states and to the Democrats in Democrat-controlled states. John W. Hayes, General Secretary of the Knights of Labor, saw a political war between the power of the state and the power of the trusts, as did Edward W. Bemis from the Bureau of Economic Research. However, Bemis also praised chain stores for offering low prices and distinguished them from the trusts. While the trusts cut prices selectively in order to drive out rivals, the department store “furnishes alike to all the advantage of lower prices, which are rendered possible by the economies of a big business.” William Dudley Foulke, a prominent journalist and political activist for Progressive causes, argued that “the political and social effects of monopoly are far more menacing to society than its economic results.”

For more conservative political activist George Gunton, by contrast, politics were present but pulling the other way: politicians were being urged to abandon sound economic principles of “industrial freedom” in order to vote the “arbitrary paternalism” of harsh regulation of the trusts.

Following the Chicago Conference, Progressives began to focus more narrowly on the antitrust laws and the discipline of economics as the preferred tool for dealing with the trusts. While political and moral rhetoric about the trusts has always been present, there is little evidence that it provided substantial guidance for policy making. The dominant tool became marginalist economics, then in its infancy, and the darling of the younger generation of political economists in the United States. Most of these were Progressives with a much stronger bias in favor of government intervention than their predecessors had supported.

The principal tools that emerged were (1) partial equilibrium analysis, which became the basis for concerns about economic concentration, the distinction between short- and long-run analysis, and later came to justify and provide support for the concept of antitrust’s “relevant market”; (2) classification of costs into fixed and variable, with the emergent belief that industries with high fixed costs were more problematic; (3) development of the concept of entry barriers, contrary to a long classical tradition of assuming that entry by new firms is easy and quick; (4) the distinction between horizontal and vertical relationships and the emergence of vertical integration as a competition problem; and (5) price discrimination as a practice that could have competitive consequences. Finally, toward the end of this period came (6) theories of imperfect competition, including the rediscovery of oligopoly theory and the rise of product differentiation as relevant to antitrust policy making.

II.  MARGINALIST ECONOMICS AND MARKET REVISIONISM

The antitrust movement in the United States coincided with a far-reaching revolution in economics. The marginalist revolution has unfortunately been seriously undervalued in history writing about antitrust, mainly because so many historians did not understand it and failed to appreciate its implications. Nevertheless, the fact remains that one cannot understand the set of tools that Progressive antitrust policy makers deployed without understanding their underlying economics. By the 1930s nearly all economists were marginalists.

The classical political economists had seen value as inhering in goods or the labor that went into making them. They tended to assess costs and benefits by looking at averages, which were necessarily taken from the past. They also tended to believe that capital would flow naturally toward profit and that the only practical impediment was government licenses or other restrictions. In sharp contrast, marginalists saw value as willingness to pay or accept for the next, or “marginal,” unit of something. As a result, their perspective on value was forward looking. Further, for the marginalists market entry was a dynamic concept whose ease and likelihood varied greatly from one market to another.
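To restate the marginalist conception of value in later notation, offered only as an illustration and not as anything the early marginalists themselves wrote: if U(q) denotes the total utility (measured in money terms) that a consumer derives from q units of a good, then the value of the next unit is the marginal utility U'(q), and the most the consumer will pay for that unit, her demand price, is

\[
p(q) = U'(q),
\]

so that value is determined prospectively, by the next increment, rather than by averages drawn from past production.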

Three features of marginalism account for both its influence and the resistance to it. One was that marginalist analysis enabled various values governing demand, supply, or economic movement to be “metered,” or quantified, in ways that classical political economy could not do. This feature also made marginalist economics much more technical, with increasing informational demands, but also promised to give Progressive Era economists capabilities far beyond those of their predecessors. Second, and relatedly, marginalism expanded the use of mathematics in economics, to a degree unknown by the classical political economists. This became a particularly attractive feature to younger economists and social scientists looking to add rigor and expertise to their disciplines. It also accounts for some of the resistance from older economists. A third feature was that marginalism undermined the classical view that markets are competitive unless the state creates monopoly. Under marginalism competitiveness was a matter of degree, and only a small percentage of markets satisfied the conditions for perfect competition. As a result, marginalism began to make a broad and unprecedented case for selective state intervention in the economy.

A.  Markets as Human Institutions: Coercion

The classical political economists saw the world of commercial relationships in binary terms. For private arrangements people were either free or bound. Aside from government constraint, the boundaries of obligation were defined by contract, property, and tort law. Value inhered in things or the labor used to produce them, and people either purchased or not. Setting aside public obligations, within that world people were free to make their own economic decisions unless a contract, property right, familial hierarchy, or sovereign command bound them. That bond was particularly strong because the common law principle of liberty of contract refused to set very many contracts aside. Further, the classical tradition regarded the market itself as a part of nature. Francis Wayland’s popular textbook on political economy defined the discipline in 1886 as “a branch of true science,” and by science “[he] mean[t] a Systematic arrangement of the laws which God has established.”

By contrast, one prominent feature of the late nineteenth century was its fascination with change—in everything from biological evolution to physics to mechanics. The historian Howard Mumford Jones described the period as the “Age of Energy.” It was only natural that economists would develop marginalism, with its forward-looking concept of value that focused on change and the next thing rather than on averages from the past. “Equilibrium” became the steady state to which all change aspired but seldom reached. Motion rather than stasis was the natural order of things.

Marginalism began with the premise that value is a measurable expression of human choice. Value depended on willingness to pay or willingness to forego. Further, marginalism distinguished among goods depending on costs, availability, and preference. One corollary was the increasing belief that markets were not all the same and did not all function equally well. This opened the way for more substantial if selective intervention to correct market deficiencies.

This change in the conception of markets from a pure product of nature to a created human institution was perhaps Progressive economics’ most important contribution. Markets became imagined as human creations and not merely a reflection of permanent natural laws. Their design was a product not only of preference but also of state policy, which could be for good or for ill. As the institutionalist progressive economist John R. Commons put it in his important book on law and capitalism, the evolution of economic phenomena was artificial, more like “that of a steam engine or a breed of cattle, rather than like that of a continent, monkey or tiger.”  Further, the “phenomena of political economy” are in fact “the present outcome of rights of property and powers of government which have been fashioned and refashioned in the past by courts, legislatures and executives through control of human behavior by means of working rules, directed towards purposes deemed useful or just by the law-givers and law interpreters.”

An outpouring of literature stretching from the 1890s through the early decades of the twentieth century developed aspects of this view that markets are “created” rather than simply present in the natural world. One manifestation was unprecedented economic concern with the distribution of wealth as a legitimate target of state policy because, after all, the state was responsible for it in the first place. Progressive economist Richard T. Ely argued in his two-volume book on the common law and the distribution of wealth that the legal system itself was strongly biased against the poor. The coercive rules of property and contract relinquished power to those who already had it. In a review, Cambridge economist Charles Percy Sanger concluded that “the most salient fact is the mass of evidence which shows how hostile the constitution of the United States, as interpreted by judges, is to the poor or the public.”

 A related consequence that had more salience for antitrust policy was the idea that markets themselves could be coercive instruments that limited human freedom. Columbia Professor Robert Hale, another Progressive who was one of the earliest economists to be hired onto a law school faculty, expressed this idea for an entire generation. In an article entitled “Coercion and Distribution in a Supposedly Non-Coercive State,” he observed that the economic systems that had been developed by classical economists gave lip service to freedom. In reality, however, their systems are “permeated with coercive restrictions of individual freedom, and with restrictions, moreover, out of conformity with any formula of ‘equal opportunity’ or of ‘preserving the equal rights of others.’ ”

Many of these newly discovered concerns about market coercion showed up in public law—things such as greater protection for labor from onerous wage agreements, prohibitions of child labor, women’s suffrage, the progressive income tax, and eventually the expansive safety net programs of the New Deal. But they also affected competition policy. For example, the law of vertical restraints became increasingly aggressive, particularly in its protection of small retailers. It abandoned the common law’s very benign rules in favor of virtual per se illegality for most distribution agreements that limited dealer behavior, along with aggressive rules for vertical mergers. The classical conception that new entry would always be around to discipline monopoly unless the government prevented it gave way to one that saw markets themselves as forestalling new competition. The idea of competition itself came increasingly under attack, and not from socialists, who did not believe in it. Rather, the attack came from neoclassically trained economists who realized that the viability of competitive markets depended on several assumptions that did not invariably obtain.

B.  Partial Equilibrium Analysis

Marginal utility theory permitted the creation of tools for determining the relationship between costs and either competitive or monopoly prices within a firm. By itself, however, it was not able to assess how competition works among multiple firms or what the conditions are for achieving it. That required additional theory about interactions among firms.

Partial equilibrium analysis permitted people to group firms producing similar products into “markets” on the assumption that the interactions of firms within the same market were much more important for evaluating competition than the interactions (or lack of them) among firms in different markets. Cambridge University Professor Alfred Marshall, the first great marginalist industrial economist, borrowed this approach from the science of fluid mechanics: for goods within the same market, prices and demand would flow toward equality, but not across the market’s boundaries.

In 1890 Marshall brought the ideas of marginal utility and equilibrium together in a way that made the analysis of market behavior both tractable and useful. First, he developed what came to be known as the Marshallian demand curve, illustrating the inverse relationship between price and output of a single commodity. The downward slope of the demand curve is driven entirely by the next, or “marginal,” buyer’s willingness to pay for one unit of that commodity. The model ignored choices people might make about different commodities, even though in a world of limited budgets such choices were relevant.

Marshall was not the first marginalist, but he did turn marginalism into a practical tool of competition analysis. He explained that he had

come to attach great importance to the fact that our observations of nature, in the moral as in the physical world, relate not so much to aggregate quantities, as to increments of quantities, and that in particular the demand for a thing is a continuous function, of which the “marginal” increment is, in stable equilibrium, balanced against the corresponding increment of its cost of production.

For example, a firm would calculate a selling price by comparing the additional cost that producing and selling one more unit would entail with the additional revenue that the sale would bring in.
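In later textbook notation, offered as a simple illustration rather than as anything Marshall himself wrote, this calculus is the familiar profit-maximization condition: if R(q) is the revenue and C(q) the cost of producing q units, the firm chooses output where

\[
\frac{d}{dq}\bigl[R(q) - C(q)\bigr] = 0
\quad\Longrightarrow\quad
R'(q) = C'(q),
\]

that is, it expands sales until the marginal revenue of one more unit just equals its marginal cost.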

Marginalism provided a partial theory of individual firm behavior, but not so obviously a theory of firm interaction and competition. In order to do that, Marshall needed a mechanism for identifying who in the economy competes with whom. This was in contrast to contemporaries such as Leon Walras and to Marshall’s own successor as professor of political economy at Cambridge, Arthur Cecil Pigou, who were more concerned with the economy as a whole. Today this division roughly corresponds to the line between macroeconomics and microeconomics.

Marshall’s concern was to make economic analysis more manageable by focusing on those firms that competed with one another in an obvious way. He realized that everything in an economy affects everything else, but the most important influences can be identified and tracked. In the influential eighth edition of Principles, published in 1920, Marshall observed that informational demands made it necessary for people, with their “limited powers” to “go step by step.” They would have to break up “a complex question, studying one bit at a time and at last combining [their] partial solutions into a more or less complete solution of the whole riddle.”

He described his solution, which came to be known as partial equilibrium analysis, this way:

The forces to be dealt with [in the economy are] so numerous, that it is best to take a few at a time; and to work out a number of partial solutions as auxiliaries to our main study. Thus we begin by isolating the primary relations of supply, demand and price in regard to a particular commodity. We reduce to inaction all other forces by the phrase “other things being equal”: we do not suppose that they are inert, but for the time we ignore their activity. This scientific device is a great deal older than science: it is the method by which, consciously or unconsciously, sensible men have dealt from time immemorial with every difficult problem of ordinary life.

This focus on individual industries quickly took over the entire field of business economics, or “industrial organization,” as a distinct area of economic inquiry. Industrial organization theory seeks to determine the conditions under which a particular industry attains equilibrium. Today, antitrust has become a substantially microeconomic discipline, certainly in litigation if not always in theory.

Marshall set industrial organization economics on the path of studying industries individually by identifying goods, which he termed “commodities,” that were sufficiently similar that they could be said to compete with each other. He borrowed from Augustin Cournot the definition that a “market” is the “whole of any region in which buyers and sellers are in such free intercourse with one another that the prices of the same goods tend to equality easily and quickly.”

This assumption had numerous implications that were relevant to antitrust. One was to invite questions about exactly how to identify who was in such a market and who was not. A second was to consider whether the identity of the firms in this grouping changed over time. The concept of “entry barriers” explained the likelihood that firms would cross this line, coming in when profits were high. A third was to make the analysis of relationships among competitors, or “horizontal” relationships, very different from the analysis of vertical or other relationships. A fourth was a search for the conditions that either furthered or undermined competition once such a group of firms or their commodities had been defined.

Marshall clearly realized that in reality there is no such thing as a single market that is completely isolated from the rest of the economy. Partial equilibrium analysis, as it came to be called, was no more than a working assumption—although a very important one for making economic analysis manageable. The idea that groupings of similar (competing) commodities should be industrial economics’ principal subject of study had a profound influence on antitrust policy. One of the most important antitrust tools to come out of this focus was the idea of the “relevant market,” or the grouping of sales whose products and prices are strongly influenced by one another.

The late nineteenth century was the golden age of engineering and science, including social science and economics. Marshall borrowed his ideas about markets, movement and equilibrium straight from Newtonian physics: “When two tanks containing fluid are joined by a pipe, the fluid, even though it be rather viscous, which is near the pipe in the tank with the higher level, will flow into the other,” he wrote in 1890. Further, “if several tanks are connected by pipes, the fluid in all will tend to the same level . . . .”

While he appeared to be discussing fluid mechanics, Marshall was actually speaking of the principle of economic substitution at the margin, which he defined as the tendency for prices within a single market “to seek the same level everywhere,” just as fluid does in connected tanks. Further, “unless some of the markets are in an abnormal condition, the tendency soon becomes irresistible.” Within this model a “market” was a closed system in which fluids moved naturally toward equality. A different market would be a different enclosed system, with no flow from one system to the other. Further, as soon as one relaxed the assumption that resources would move freely and quickly from any place of low utility to any place of higher utility, it became prudent to investigate where such movements could be expected to occur, when they would be less likely, and what obstacles stood in the way.

Irving Fisher, who was to become one of America’s most important early marginalists, used his Ph.D. program at Yale in the 1890s to construct a “utility machine.” The machine illustrated with fluids controlled by pumps and valves how prices within the same market flowed to an equilibrium, but did not flow across market boundaries.

The utility machine was thought to be so innovative that it was scheduled for display at the 1893 Columbian Exposition in Chicago but was destroyed en route. Other American economists also used illustrations derived from fluid mechanics to illustrate the equilibrium of prices in a market.

Figure 1. Irving Fisher’s Utility Machine (1893)

Sources: Timothy Taylor, Photos of Fisher’s Physical Macroeconomic Model, Conversable Economist (Oct. 25, 2016, 8:06 AM), https://conversableeconomist.blogspot.com/2016/10/photos-of-fishers-physical.html [https://perma.cc/N64M-VLR7].

Marshall’s conclusion that the fluids in a tank would flow to a level equilibrium, even though they were “rather viscous,” presaged another development in marginalist economics: the idea of friction, or “costs of movement,” in the words of Marshall’s successor Pigou. This idea was later narrowed and refined to become “transaction costs.” The idea was simply that the costs of moving resources to an equilibrium varied from one market situation to another, and in some cases these costs prevented the movement altogether. As a result, one feature of some markets was “chronic disequilibria,” as Joseph Schumpeter later observed. Another result was increasing awareness that these costs could interfere with a market’s movement toward competition. These concerns were reflected in the increasing attention toward barriers to entry, in contrast to the historical classical assumption of free entry.

Marginalist industrial economics also broke the bond that had always existed between classical political economy and laissez faire policy—at least until significant neoliberal pushback occurred in the 1940s. The classicists had been strenuous opponents of government intervention in the economy, but the new Progressives were not. Indeed, Marshall himself moved significantly to the left as he grew older. As the technical study of market competition under marginalist principles developed, economists became increasingly concerned about defining the conditions for “perfect” competition. Accompanying this came the realization that the conditions are in fact quite strict. Nearly all markets deviated from them, although some more than others. One thing that marginalism provided was a set of tools for measuring these deviations, provided that the data were available. Antitrust policy in turn became a tool for examining certain industry structures and practices in order to determine whether they were anticompetitive and, if so, whether they could be corrected by the legal system.

C.  Industrial Concentration

The idea of a correlation between the number of firms in a market and its degree of competitiveness dates back to Cournot, a French mathematician who wrote in the mid-nineteenth century. In Cournot’s model, as the number of effective competitive players in a market becomes smaller, the margin between price and marginal cost increases until it reaches the monopoly level with a single firm. For more than a century, the relationship between industrial concentration and competitive performance has been an important component in competition policy, both at the legislative level and more specifically in merger policy. Nevertheless, its role has been controversial.
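In a modern textbook rendering of Cournot’s insight (not his own notation, and offered only as an illustration), if n identical firms with marginal cost c compete in quantities and market demand has elasticity e, each firm’s equilibrium price-cost margin satisfies

\[
\frac{P - c}{P} = \frac{1}{n\,e},
\]

so the margin shrinks toward zero as the number of firms grows and rises to the monopoly level, 1/e, when n equals one.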

“Concentration” refers to the number of firms in a market and, under most measures, their size distribution. A market is said to be more concentrated as the number of firms goes down or as the size distribution is more lopsided. In order to have a measure of industrial concentration someone needs to have a concept of a market, or “industry,” and that is why partial equilibrium analysis was an essential premise.
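One widely used modern measure of this sort, offered only as an illustration of how the size distribution enters (the index itself postdates the Progressive Era), is the Herfindahl-Hirschman Index, the sum of squared market shares:

\[
HHI = \sum_{i=1}^{n} s_i^{2},
\]

which equals 1/n when all n firms are the same size and rises toward 1 (or 10,000 when shares are expressed as percentages) as sales become concentrated in fewer hands.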

Around the turn of the century, marginalist economists began to examine the relationship between market structure and industry performance. As early as 1888 Gunton used data from the U.S. Census of Manufactures to conclude that over the previous half century, industrial concentration in some markets had grown significantly. For example, the cotton industry census data from 1830 and 1880 showed that during that interval the amount of capital invested in the industry grew fivefold, the amount of production more than tenfold, but the number of firms had actually shrunk from 801 to 756. The data also showed that the amount of capital invested per worker had roughly doubled, indicating that the firms were becoming more capital intensive. Gunton also identified railroading, telegraphing, petroleum production, and sugar as showing greatly increased concentration.

Gunton’s conclusions were not addressed to competitiveness. He never discussed the relationship between the number of firms in a market and the threat of oligopoly or collusion. He observed that some had complained that the “concentration of capital tends to increase prices” but found no evidence of it. Rather, he found that most of the facts “point the other way.” Prices in most of the industries that had experienced higher concentration had actually gone down rather than up. He also rejected the argument that “although these trusts have constantly resulted in reducing prices,” still greater saving would result “should the government run the business.” He then concluded that the large firms were fundamentally a good thing.

Increasingly, however, economists and competition lawyers became less sanguine. Boston attorney Lionel Norman lamented that industrial concentration was increasing at an alarming rate. Cornell economist Jeremiah Jenks and Walter Clark, a professor of mathematics and economics, were also much more pessimistic, as were Progressive economists Ely and Edwin R.A. Seligman. Looking at the business landscape just after the turn of the century, Seligman concluded that the “study of modern business enterprise thus becomes virtually a study of concentration.” He also relied heavily on data from the U.S. Census of Manufactures, which showed rapidly increasing concentration around 1900 and a significantly greater number of “combinations,” or firms that had attained their large size by merger. All but one of the top twenty-five combinations had been formed between 1890 and 1904. On effects, he noted both the possibility of lower costs and higher profits. He also noted that higher profits did not necessarily mean higher prices, because higher output and lower prices could also be profitable. He seemed particularly troubled by the fact that the trusts earned higher margins, even if they sold at lower prices.

Progressive railroad economist and Harvard Professor William Z. Ripley also undertook a comprehensive examination of industrial concentration data derived from the Census of Manufactures. The two census figures he found to be most informative were those of the number of firms in each consecutive five-year census period and the value of their gross product. He concluded that in 142 of the 322 industries grouped in the census, the number of firms had declined, and there had been significant increases in per firm output. He was able to group industries by their tendency toward monopoly, simply by examining the trend toward increased concentration. “Concentration varies more or less directly with the degree of monopolization,” he concluded.

These writers generally assumed a correlation between the data contained in the Census reports and the “markets” that Marshall referred to for partial equilibrium analysis. In fact, the census data correlated very poorly. For example, one classification in the 1909 Census of Manufactures was “[f]urniture and refrigerators,” which included both metal and wood furniture of all kinds, as well as wooden iceboxes and metal refrigerators, which were first coming into commercial use. A metal refrigerator did not compete very much with an upholstered chair, which did not compete very much with a wooden bed. This very poor fit between industry census data and antitrust markets has served to weaken conclusions about industry competitiveness from census classifications—something that a few Progressive economists realized already at the turn of the century. This poor correlation has remained to this day as a problem with the measurement of industrial concentration through the use of census data. The classifications are better today than they were a century ago, but they still are not well designed to address this problem. Nevertheless, data of this type have been in continuous use to produce measures of industry competitiveness ever since the late nineteenth century.

The Chicago School largely rejected the significance of concentration data, opting for a position more like Gunton’s that the aggregation of large firms resulted mainly in greater efficiency and lower prices. Numerous other scholars from the mainstream and further left have disagreed. In the mid-1970s, the debate produced an influential conference collecting representatives from both sides. The resulting book hardly put the debate to rest, however, and census-driven concentration data continue to find a controversial but important place in debates about American competitiveness. For example, the Biden Administration’s 2021 executive order on American competitiveness lamented declining competition and relied on concentration data to make the point.

D.  Fixed Costs and Equilibrium

Both marginalism as a theory of value and Marshall’s theory of equilibrium made cost classification essential. In fact, for Marshall, the cost problem produced significant frustration. Competition drives prices to marginal cost, which, by definition, is the cost incurred for each incremental change in output. But if hard competition drives prices to marginal cost, then how could a firm pay off its other costs?

Marshall used the term “marginal cost” to describe the immediate additional cost that a firm faced when it increased output by a single unit. In a chapter on the “Equilibrium of Normal Demand and Supply,” he observed that under what he called “free competition” prices would be driven to a level very close to marginal cost, and this would become a stable equilibrium.

Marshall’s theory of marginal cost was an effort to determine how firms decide on prices. He observed that prices are related to costs but not all costs are the same. Some costs seem to be quite unrelated to a firm’s decision about what price to charge, at least over the short run. This included administrative costs as well as depreciation on plant and durable equipment. In calculating whether a particular price is immediately profitable, the firm largely ignores these costs. Marshall identified “total cost” as the sum of these supplemental costs plus marginal costs. In the short run each additional sale would add to a firm’s profit so long as it was at a price that exceeded the firm’s marginal costs.

Marshall never used the terms “fixed costs” or “variable costs.” He devoted an entire chapter to “cost of production,” which spoke of “prime costs,” “total costs,” and “marketing costs.” The words “prime” and “direct” were almost always used as references to what we would call variable costs. Within prime costs he included “the (money) cost of raw material used in making the commodity and the wages of that part of the labour spent on it which is paid by the day or the week.” He excluded salaries such as are paid to management because these did not vary with output over the short run.

Marshall observed that for goods that require a “very expensive plant” the “[s]upplementary” cost is a “large part of their [t]otal cost.” As a result, a “normal price” “may leave a large surplus above their [p]rime cost.” In today’s terminology, in order to be profitable a business with high fixed costs would have to charge a premium above its variable costs. He also observed what would become a significant problem for establishing equilibrium in markets with high fixed costs. “[I]n their anxiety to prevent their plant from being idle” producers may “glut the market.” If they “pursue this policy constantly and without moderation,” price may be so low “as to drive capital out of the trade, ruining many of those employed in it, themselves perhaps among the number.” When firms are under “keen competition” this urge becomes inevitable, and firms “whose business is of this kind . . . are under a great temptation” to sell “at much less than normal cost.”
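A minimal sketch in modern notation, not Marshall’s own terms: if a firm carries fixed cost F, a constant variable cost v per unit, and sells q units at price p, it breaks even only when

\[
q\,(p - v) - F \ge 0
\quad\Longleftrightarrow\quad
p \ge v + \frac{F}{q},
\]

so the price must carry a premium over variable cost large enough to spread the fixed cost over the units sold; when price is competed down toward v, that premium disappears and the plant cannot pay for itself.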

Marshall’s problem was getting an equilibrium that would sustain a market that was both competitive and had high fixed costs—an increasingly prominent feature of industrial production. By his eighth edition in 1920, Marshall had come up with a largely unsatisfactory biological model to explain how firms with significant fixed costs might attain equilibrium. Firms were like trees in a forest, he explained. They have individual lifecycles, and thus come and go, and some never survive infancy. This organic metaphor never fit very well into the emergent neoclassical model of equilibrium that looked strictly at the mathematics of profit-maximization.

 During the formative years of antitrust policy in the United States, a “fixed cost controversy” drawn from Marshall’s model of competition dominated important debates about the appropriate roles of competition, antitrust policy, and regulation. In industries such as the railroads or heavy steel manufacturing, the argument went, “ruinous” competition would occur because firms would be forced to cut their prices toward marginal cost, leaving insufficient revenue to pay off their fixed costs. One equilibrium solution was the emergence of monopoly, perhaps by merger. Others were collusion or price regulation. These concerns were very likely a major contributing factor to the great merger wave that occurred around the turn of the twentieth century. Antitrust lawyers representing cartel defendants in markets with high fixed costs repeatedly asserted a “ruinous competition” defense to price fixing, but the federal courts consistently rejected it, as they do today.

On the other side, several of the more left-leaning Progressives denied that there was any such thing as chronic overproduction. By rejecting the defendants’ arguments, the Supreme Court was effectively adopting the Progressives’ position. That was ironic, because the principal judicial architect of that position was Justice Peckham, also the author of Lochner v. New York. He could hardly be classified as a left-leaning Progressive. Peckham’s opinion in the Joint Traffic case expressed strong doubts about the ruinous competition argument, concluding that the principal consequence of very low rates was increased demand, which would in turn produce a larger supply. One possibility, of course, was that Justice Peckham did not fully understand the implications of high fixed costs.

Justice Peckham’s clever response to the defense in the Addyston Pipe case was that, whether or not competition was ruinous, the defendants themselves could not be trusted to set a price no higher than necessary to prevent it. In fact, they had set prices so high as to deprive the public of the advantages of any competition at all. The Court cited cost evidence developed in the lower court that the reasonable cost of the defendants’ pipe, including a fair profit, did not exceed $15 per ton and could have been delivered profitably to Atlanta for $17 to $18 per ton. The bid price was actually $24.25 per ton. That statement at least suggested that one judicial response to a ruinous competition defense could be a judicial inquiry into costs, but the Court never went down that rabbit hole. It simply rejected the defense outright, as it has done ever since.
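As a rough arithmetic illustration of the Court’s figures, taking the midpoint of the delivered-cost range it cited:

\[
\frac{24.25 - 17.50}{17.50} \approx 0.39,
\]

that is, the cartel’s bid ran roughly 35% to 43% above a delivered price that already included a fair profit.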

Theorizing about the behavior of firms with high fixed costs became a central focus of early antitrust literature, as well as the early American economic literature on the theory of industrial organization. It also proved to be a general attack on the model of perfect competition.

Prior to the development of imperfect and monopolistic competition models in the early 1930s the principal Progressive theorist of fixed costs was the institutionalist economist John Maurice Clark. Clark found the existence of significant fixed costs, which he termed “overhead” costs, to be a disruptor of the standard notion of the equilibrium of supply and demand under competition. The problem, as he noted, was that in the short run of immediate demand, price and output are determined by demand and marginal cost, but in the presence of fixed costs an equilibrium covering total costs could be attained only over some longer run. High fixed costs continuously produced “irregularities” that threw the relationship between demand and supply out of balance, with some periods of excess capacity and others of excessive demand. Echoing Marshall, he observed that “where overhead costs are a substantial item, the perfect theoretical equilibrium is not found.”

The implications, as Clark worked them out, were chronic overproduction, because any price above short-run marginal cost would serve to reduce the deficit in payment of fixed costs. Another result was that price discrimination became a profitable strategy to the extent that a firm was able to maintain higher prices on established demand while bidding a lower price for new sales. One characteristic of price discrimination as a solution to the problem of high fixed costs is that when it occurs it results in increased output. Clark concluded that there was nothing inherently anticompetitive or even suspicious about most instances of price discrimination. They were simply a mechanism that firms used to sell individual batches of product at a profit-maximizing (or loss-minimizing) price. That view has very largely persisted within antitrust policy.

The Marshall equilibrium problem ultimately went away when economic models began to incorporate product differentiation, particularly in the theory of monopolistic competition. The principal problem had been Marshall’s assumption that all sellers in competition sold identical “commodities.” As a result, firms competed only on price. When differences in the product or even the terms of sale were incorporated, it became possible to have equilibrium without relying on any non-economic theorizing about the nature of the firm. The significance of this debate, which occurred almost entirely during the Progressive and New Deal eras, is difficult to exaggerate. It gave us much of our theory about equilibrium in industrial markets, analysis of costs, and theories about the limits of competition and the appropriate scope of regulation. It also fueled the Harvard School view that markets differ from one another, and antitrust policy thus requires intense factual queries into particular industries and practices.

E.  Market Failure and Regulation

The fixed cost controversy strongly supported Progressives’ suspicions that markets were not as inherently benign as the classical political economists had believed. However, some worked better than others. Antitrust for its part is dedicated to the proposition that markets can be made to work tolerably well on their own with only selective intervention. In other cases, however, the roots of failure are so deep that ordinary market forces are ineffective.

Increased appreciation of market diversity led to a more general theory of market “failure,” championed by Pigou. Pigou developed the idea of a “divergence” between private and social costs, or “externalities” that private bargaining could not correct. For example, a negative externality might occur when a polluting refiner was not required to compensate downwind neighbors for its air pollution. By contrast, a positive externality occurred when the inventor of a new product could not effectively prohibit people from copying it. In the first case the result would be too much pollution; in the second case it would be too little invention.

 The idea, which became more technically expressed in the 1950s, was that in a few markets sustainable competition is impossible without state intervention. The goal of regulation became to emulate competitive outcomes in these markets. Adams had anticipated a version of that argument already in the 1880s, arguing that competition was not sustainable in industries with declining costs because the emergence of monopoly was inevitable.

The Progressive Era then saw an outpouring of literature on regulation as a corrective for market failure, much of it focused on transportation and public utilities. Among the most important contributions was Joseph Beale and Bruce Wyman’s 1906 book on railroad regulation. They made two important observations. The first was that monopoly provisions in corporate charters for railroads and bridges had been common at least since the early nineteenth century. The argument that Justice Story articulated for them already in 1837 was that monopoly privileges were essential to attract investment into public utility markets, which were distinctive because of the amount of investment they required. However, Beale and Wyman observed a second rationale, which was “virtual monopoly”—namely, that the cost structure of these industries required a monopoly. Further, they argued, this was the “true ground” for regulation of monopolies. “[W]here competition prevails it regulates the conduct of business by its own processes, but monopoly requires the intervention of the law of the land . . . .”

This neoclassical theory of regulation has since formed the basis of core regulatory theory in the United States, as well as one of its most controversial features: cost-of-service rate making. The idea of market failure expanded significantly in the 1930s and after, bolstered in significant part by the Depression. Regulation moved far beyond the relatively narrow neoclassical conception of market failure even to the idea that markets themselves cannot be trusted to distribute goods or services in an efficient, egalitarian manner.

Both Progressive and New Deal regulatory theories were aggressively assaulted in the 1960s and 1970s by Chicago School critics such as George J. Stigler. His critique completely ignored natural monopoly or other structural characteristics thought to justify regulation. Rather, he substituted a theory based entirely on political capture—namely, that regulation is nothing more than interest group purchase of regulatory favors from legislatures or government agencies. Stigler never even mentioned declining average costs or natural monopoly. In fact, the only costs he discussed were the cost of operating the political process, including the costs to lobbyists or political operatives of obtaining favorable legislation. He argued, for example, that the costs of an exclusionary occupational license are small when distributed over each member of society, but the license can produce enormous gains for those seeking such protection. In sum, Stigler’s model completely divorced the theory of regulation from firm costs or market structure; it was purely political.

That Chicago School effort substantially failed. It never generated a theory with significant explanatory power outside the realm of badly designed regulation that could be explained only by political influence. For example, it could not explain why public utilities are subject to price regulation at the retail level while groceries in every state are sold competitively, except perhaps by offering that the utilities had better lobbyists. To be sure, the Chicago School did make some important contributions at the margins—mainly by hammering home the proposition that regulation can lead to harmful capture and that there are good reasons to be on guard against overreach. In addition, regulatory fervor led to excessive controls that did more harm than good. For that, however, the usable critiques came from centrists such as then-Professor Stephen Breyer or Cornell economist and Chair of the Civil Aeronautics Board Alfred E. Kahn.

F.  Price Discrimination

Price discrimination, which technically refers to selling to two or more customers at different ratios of price to cost, has always produced divisions in antitrust policy, most typically between economists and non-economists. Lawyers often view it with suspicion, something like race or gender discrimination. By contrast, economists have always tended to be more circumspect, and more inclined to divide it up into different varieties. Even a Progressive institutionalist economist such as John Maurice Clark discussed it in relatively benign terms. Minnesota economist and eventual Director of the United States Census Edward Dana Durand probably stated the consensus view among Progressive economists. In a critique of the Clayton Act, he observed that price discrimination “is an all but universal practice and is not necessarily injurious or calculated to bring about a monopoly.” However, he also observed that price discrimination could be a strategy of selective predatory pricing used to drive competitors out of the market.

Most of the economic foundations for our understanding of price discrimination developed during the Progressive Era as an outgrowth of marginal analysis. The principal originator of the modern theory was Pigou. Pigou divided price discrimination into three types, which he named first-, second-, and third-degree price discrimination. First-degree, or “perfect,” price discrimination is an analogue of perfect competition: it never exists in the real world but is an important tool for analysis. Under it, a seller sells every unit at each customer’s reservation price, or the highest price that customer is willing to pay. The result is that output is at the competitive level, but all of the gains from trade go to producers rather than consumers.

Second-degree price discrimination occurs when the seller adopts a discriminatory pricing formula and the buyer “chooses” its price by selecting how to purchase. A quantity discount schedule is one prominent example. The purchaser can obtain a lower price by buying more. A discount for early booking is another.

In third-degree price discrimination the seller preselects categories of customers based on certain observed characteristics and charges them different prices—for example, one price for commercial users and another for residential users.
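To put Pigou’s taxonomy in compact form (the notation here is purely illustrative and is not drawn from Pigou or from the sources discussed in this Part):

\[
\text{first degree: } p_i = r_i \text{ for each buyer } i; \qquad
\text{second degree: } p = p(q), \text{ with the buyer selecting } q; \qquad
\text{third degree: } p = p_g \text{ for every buyer in group } g,
\]

where \(r_i\) is buyer \(i\)’s reservation price, \(p(q)\) is a posted schedule such as a quantity or early-booking discount, and \(p_g\) is a uniform price charged to an observable class of customers such as commercial or residential users. Under the first-degree benchmark every unit valued at or above marginal cost is sold, so output reaches the competitive level while the entire surplus accrues to the seller.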

United States antitrust law has never developed general antitrust rules governing price discrimination. Section 2 of the Clayton Act, subsequently amended by the Robinson-Patman Act, addressed a practice that it called “price discrimination.” But the set of practices that statute reached often had little to do with economic price discrimination. Rather, the statute simply condemned price differences. The Progressives did often identify predatory price discrimination as one of the evils brought about by the trusts, particularly Standard Oil. The result was the original section 2 of the Clayton Act, which the Robinson-Patman Act later amended. The original statute was intended to reach a particular form of predatory pricing widely attributed to the Standard Oil Company as well as others. The House Judiciary Committee report on the provision indicated that its purpose was to target the practice of large corporations using local price cutting intended to destroy a competitor. In a 1923 decision, the Second Circuit described the condemned practice this way:

[P]rior to the enactment of the Clayton Act a practice had prevailed among large corporations of lowering the prices asked for their products in a particular locality in which their competitors were operating for the purpose of driving a rival out of business. Such lowering of prices was maintained within the particular locality while the normal or higher prices were maintained in the rest of the country; and this practice was continued until the smaller rival was driven out of business, whereupon the prices in that locality would be put back to the normal level maintained in the rest of the country. The Clayton Act was aimed at that evil.  

The statute did not explicitly require that the lower price be below cost, but that was largely the way it came to be interpreted. The Supreme Court initially construed the statute broadly without discussing any requirement of below-cost pricing. Further, the statute’s express limitation to “commodities” meant that it could not apply to things such as railroad rates, which were one of the biggest targets of price discrimination concern.

John Maurice Clark’s important 1923 book on fixed costs made a convincing argument that, setting aside differences in bargaining relationships or customer sophistication, price discrimination is largely a consequence of fixed costs. A firm with a heavy fixed cost investment needs to keep its output up, and any sale at a price greater than incremental costs will improve its bottom line. As a result, it tries to retain legacy customers at higher prices while bidding lower prices for new or spot market sales. When a firm has excess capacity, these pressures are great.
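A stylized arithmetic example may make Clark’s logic concrete (the numbers are hypothetical). Suppose a plant carries overhead of $1,000 per period, incremental cost is $5 per unit, and the firm sells 100 units to established customers at $12. If it can also bid $7 for 100 additional spot-market units without disturbing the $12 price to its legacy customers, its contribution toward overhead is

\[
100 \times (12 - 5) \;+\; 100 \times (7 - 5) \;=\; 700 + 200 \;=\; 900,
\]

as against only \(200 \times (7 - 5) = 400\) if it had to charge the lower price across the board. The discriminatory structure increases output and shrinks the deficit on fixed costs, which is why Clark treated such pricing as a predictable response to heavy overhead rather than as inherently predatory.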

This explanation of price discrimination was already known in the railroad industry by Clark’s time. Forty years earlier, Yale economist and eventual Yale president Arthur Twining Hadley had made a similar observation in justifying railroads’ policies of charging different freight rates for different commodities depending on shippers’ willingness to pay. By doing this the railroads were able to maximize output. Given their high fixed costs, this meant that the average cost of transportation went down.

The Robinson-Patman Act was passed in 1936, subsequent to the period under discussion here. It made no attempt to address the problem of fixed costs. The statute condemned many of the things that Clark’s analysis had explained as causing no competitive harm. In any event, the Robinson-Patman Act was a complete misfire. The concern motivating the statute was the emergence of large chain stores such as A&P, which had become the nation’s largest grocer. A&P drove many smaller grocers out of business, mainly because it was vertically integrated and also because it was able to purchase in large quantities, enabling it to undersell small grocery stores. The Robinson-Patman Act ignored vertical integration and scale economies and identified the problem entirely in terms of a firm’s insistence on charging some buyers lower prices than others.

The statute completely failed to limit vertical integration because of its requirement that both the higher priced and lower priced transactions be “sales.” The courts consistently held that a “sale” refers to a transfer of goods from one firm to a different firm. The vertical passage of a good from a firm to its wholly owned store or other subsidiary was not a “sale.” The Act did condemn a few large suppliers, such as Borden, for selling milk to large grocers at a lower price than to small grocers. Further, because the statute’s prohibitions were directed at the seller’s side of a “sale,” it did not effectively reach powerful buyers such as A&P itself. The statute did contain a buyers’ liability provision, almost as an afterthought, which was never very effective.

From the Progressive Era through the New Deal, the antitrust analysis of price discrimination was spotty and indeterminate. Indeed, it remains indeterminate to this day. We have never developed a good theory for generalizing about the competitive effects of price discrimination. The consensus of economists today is probably not much different from what it was in the 1920s and 1930s—namely, most instances are competitively harmless, particularly if the discrimination tends to increase output.

G.  Monopoly Power and Structure: Potential Competition, Barriers to Entry, and the Relevant Market

In 1890, when the Sherman Act was passed, legal doctrine did not have a coherent conception of market power as a measurable phenomenon. Economics was not much further along. Judicial decisions contained plenty of discussions of “monopoly,” virtually always in relation to patents or other grants of exclusive rights. In most cases “monopoly” was simply assumed from the existence of the exclusive grant itself. For example, many nineteenth-century decisions spoke of the “patent monopoly,” as if the relationship between the two terms was automatic. All of the references to patents in the Chicago Conference used the term this way. The law dealing with various aspects of monopoly came essentially from three sources: patent and copyright law, the common law of unfair competition and contracts in restraint of trade, and state corporation law. None contained a market power requirement, and power was generally either assumed or irrelevant.

Estimation of market power by reference to the share of a relevant market, as it is used today in antitrust cases, was a relatively late arrival. Today it has become so conventional that we regard it as routine, and in 2018 a divided Supreme Court mistakenly concluded as a matter of law that it is the only way to assess power in a vertical case. Since the existence and measurement of market power present questions of fact, the Court’s conclusion was not only technically incorrect, it was also a dictatorial intrusion of policy into fact finding. Econometric tools for assessing market power, such as the Lerner Index, were actually developed prior to judicial usage of the “relevant market” in antitrust analysis. Today econometric methods often produce better results than traditional measurement. Further, the use of econometric devices is fundamentally inconsistent with the model of perfect competition. The firms within a perfectly competitive market have no power to price above marginal cost unless they collude. Implicit in the Lerner Index, and later in the development of more sophisticated econometric tools for assessing the power of individual firms, is that the firms are not operating in perfectly competitive markets.
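For readers who want the formal statement, the Lerner Index is conventionally written (in its standard textbook form, not as quoted from any source discussed here) as

\[
L \;=\; \frac{P - MC}{P},
\]

where \(P\) is the firm’s price and \(MC\) its marginal cost. Under perfect competition \(P = MC\), so \(L = 0\); for a profit-maximizing firm facing downward-sloping demand, \(L\) equals the inverse of the absolute value of the firm’s elasticity of demand. A positive index therefore presupposes exactly what the text observes: a firm that is not operating in a perfectly competitive market.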

1.  Potential Competition and Barriers to Entry

The belief that trusts both promised lower costs and threatened higher prices at least partly explains the heavy focus in the early antitrust literature on “potential competition” as a disciplinary tool. In 1895, Gunton optimistically described potential competition as a force “that is ever waiting to step in where large profits warrant the risk.” Even a dominant trust would not charge monopoly prices if the looming threat of competition was sufficient to keep its prices down. Classical political economists had always assumed that any attempt to charge monopoly prices would invite new competitive entry that would force prices back to the competitive level. About the only things that would prevent this were government restrictions on entry, including patents.

In his 1884 critique of traditional political economy, Ely, who was to become one of the most prominent Progressive economists, caricatured the classical assumptions of easy market entry, which he described as “the absolute lack of friction in economic movements. Not only do capital and labor move with perfect ease from place to place and from employment to employment, but this . . . is accomplished without the slightest loss.” Under this image of the economy, Ely continued:

The silk manufacturer diverts his capital into another employment like the construction of locomotives with precisely the same facility with which he turns his family carriage horse from an avenue into a cross street, while the Manchester laborer on a moment’s warning finds a suitable purchaser for his immovable effects and without expense or loss of time transfers himself to London where employment is at once offered him at the rate of wages there current. Equality of profits and equality of wages flowed naturally from these assumptions. 

By contrast, the emerging discipline of industrial economics began to consider how long this might realistically take, what market factors determined the speed and scope of new entry, and how much power incumbent firms had to throw obstacles in the way. As Adams admonished in his book on trusts, “[t]he point at issue is whether the public is justified in placing sole reliance upon potential competition, active competition having disappeared.”

Privately created barriers emerged as a concern of antitrust law early in the Progressive Era. Those concerns were undoubtedly heightened by the Progressives’ increased sensitivity to the natural coercive power of markets. The Supreme Court recognized one such barrier already in 1904. In an early private action under the Sherman Act, it condemned a guild rule that limited membership and effectively prohibited market participation by tile layers who were not members of the defendant organization. Members of the association were prohibited from dealing with non-members. As Justice Peckham noted in his opinion for a unanimous Court, the association’s rules prohibited dealers from acquiring tile “upon any terms” from members of the guild, and all of the manufacturers in the area were members.

A few years later, in the American Tobacco case, the Court referred to a dominant firm’s vertical integration and market foreclosure as creating “perpetual barriers to the entry of others into the tobacco trade.” Some lower courts were less concerned. For example, in United States v. Quaker Oats Co., the court rejected the government’s claim of attempt to monopolize, noting that the product at issue, packaged rolled oats, was a commodity produced by many firms, and that the defendant had no reasonable means of excluding them.

Most of the participants in the multi-disciplinary proceedings of the Chicago Conference on Trusts saw potential competition as crucial to any assessment of the likelihood of monopoly. They disagreed about its effectiveness. The debates reveal that the classical assumption of free entry had become controversial. For example, Jenks was a skeptic. He acknowledged the existence of potential competition as a disciplinary force but doubted that the power of the large trusts to charge high prices would be effectively controlled.

Attorney A. Leo Weil was less concerned. He observed that the trusts generally reduced costs and prices, but if there were any tendency toward price increases, potential competition from new firms would tamp them down. Further, this new entry could be expected to occur “unless the laws of trade are to be reversed.” Statistician Joseph Nimmo observed that as a consequence of the revolution in railroad transportation, the range of potential competition was much wider than it had been previously. Economist James R. Weaver from DePauw University was even less concerned. He suggested that potential competition “rarely fails” to aid the consumer. Accumulations of capital were easily assembled, and those who controlled it stood “ready to enter any specific field of production, whenever the profits of that industry offer sufficient inducement.” Further, it was well known that at the present time entrepreneurs were sitting on “a great mass of idle capital.” As a result, “to avoid this new competition, prices must be lowered or profits shared with the consumer.” Francis B. Thurber, the President of the United States Export Association, believed that the trusts merely moved competition to a higher and more beneficial level:

If a combination of capital in any line temporarily exacts a liberal profit, immediately capital flows into that channel, another combination is formed, and competition ensues on a scale and operates with an intensity far beyond anything that is possible on a smaller scale, resulting in breaking down of the combination and the decline of profits to a minimum.

John Bates Clark, the most prominent economist among the Conference participants, was much more skeptical. In theory, he observed, “potential competition . . . is the power that holds trusts in check,” but “[a]t present it is not an adequate regulator.” The “potential competitor encounters unnecessary obstacles when he tries to become an active competitor.” He mentioned patents as one obstacle, but refused to endorse abolition of the patent system. He was also more cynical about the railroads, which he regarded as using manipulation of shipping rates as a device for deterring potential competition. Clark also blamed selective price discrimination—or the power of the trusts to exclude entrants by charging unreasonably low prices in that particular portion of the market where new entry was threatened. A particularly pernicious form of price discrimination was selective predatory pricing:

The ability to make discriminating prices puts a terrible power into the hands of a trust. If . . . it can sell goods at prices that are below the cost of making them, while it sustains itself by charging high prices in a score of other fields, it can crush me without itself sustaining any injury. If, on the other hand, it were obliged, in order to attack me, to lower the prices of all its goods, wherever they might be sold, it would be in danger of ruining itself in the pursuit of its hostile object. Its losses would be proportionate to the magnitude of its operations. 

This observation became the theory under which original section 2 of the Clayton Act was passed in 1914—namely to prevent firms from using selective, geographically limited discounts to drive rivals out of business. Finally, Clark opposed tariffs because their higher costs deterred the potential competitor “from becoming an actual one.”

Several years later Clark was even more pessimistic. At one time potential competition may have been more effective at keeping prices down, he acknowledged, but today that power had largely been eliminated by incumbent firms’ use of selective preferential rates, local discrimination, and exclusionary agreements. Clark then gave a strong endorsement to the Sherman Act, although he believed that more was necessary, including a federal law chartering corporations and an “industrial commission” designed to examine the competitiveness of individual large firms. Further, he would impose on them “a burden of proof,” first to show that they do not dominate the entire market and, secondly, to show “that the way is so open for the entrance of more that prices cannot become extortionate.”

Adams agreed in a 1903 essay on the trusts, as did Boston lawyer Robert L. Raymond. Raymond argued what came to be a common position held by Progressives—that potential competition was natural and ordinarily to be expected, but that dominant firms could devise practices that would prevent or limit its operation. He also observed that potential competition did not “instantaneously” become actual competition. Rather, “even with abundant capital one cannot erect a steel manufacturing plant or a sugar refinery until considerable time has elapsed.” This delay, he observed, gave dominant firms an opportunity to behave strategically. He also warned, however, that competition policy should not go further; it had to preserve the “true economic value” that the large combinations promised while also preserving the power of potential competition to limit their prices. Progressive economist Ely, who published his book on monopolies and trusts simultaneously with the Chicago Conference, doubted the efficacy of potential competition as a device for disciplining monopoly. He concluded that “[n]o evidence has been adduced of the sufficient action of potential competition in the case of monopoly.”

Clark returned to this problem in The Control of Trusts, a book he had originally published in 1901. For subsequent editions he was joined by his son, John Maurice Clark. The revised edition was even more pessimistic than John Bates’ original, very likely reflecting John Maurice’s more institutionalist leanings. “When the first edition of this work was issued, so called potential competition had shown its power to control prices,” the Clarks lamented, but

[t]he potentiality of unfair attacks by the trust tended to destroy the potentiality of competition. Under these conditions it was and is clearly necessary to disarm the trusts—to deprive them of the special weapons with which they deal their unfair blows. It is necessary to repress the specific practices referred to and so to enable every competitor who, by reason of productive efficiency, has a right to stay in the field, to retain his place and render his service to the public. 

As a result, they concluded, while experience has shown that “potential competition is a real force, it has also shown that it is a force which can be easily obstructed.” A few years later, John Maurice Clark argued that potential competition was an unlikely discipline for monopoly in markets with “heavy permanent investment”—that is, with high fixed costs. In such cases, he noted, incumbent firms will be holding excess capacity and be able to expand their own output in response to new entry. Knowing this, potential competitors will not wish to make a significant investment in entry. Further, he observed, prospective entrants into such a market would realize that total output would be higher when their own production was added in, and thus prices lower. So what appeared to be profitable entry before might not be so later.

The Clarks’ work developed the basic model that emerged by mid-century for monopolization cases and that prevails today. That judge-made formulation required a showing of both monopoly power and anticompetitive practices. This model retained faith that in a market that is not restrained by either the government or private action, new entry could be expected to maintain competition. The problem for the antitrust laws was anticompetitive practices that forestalled competitive entry before it could occur or become effective. “A merely possible mill which as yet does not exist may forestall and prevent monopolistic acts,” the Clarks conceded, but only provided that the way is “quite open for it to appear.”

Writing in 1911 about the ongoing government cases against Standard Oil and American Tobacco, Raymond observed that the firms’ growth had depended on the suppression of potential competition. In American Tobacco, the district court condemned a trust agreement that involved a group of the same shareholders’ acquiring interests in multiple companies. The court acknowledged the defense that potential competition would discipline any monopoly because the combination itself did not involve any sort of market exclusion, and it believed that argument to be worthy of “serious consideration.” But entry would take some time, the court observed, and the “objection is to present and not future conditions.”

By contrast, in the 1918 United Shoe Machinery (“USM”) merger case the Supreme Court refused to condemn the union of several shoe machinery makers into what became the USM Company. The government’s argument was that the merged companies were potential competitors who could have turned into actual competitors but for the merger. The case thus invited a tradeoff question that remains to this day: some mergers increase productive efficiency by enabling a firm to do things at lower cost, but in the process they may eliminate competition that might have developed had the merger not occurred.

The USM union was a merger of complements, and the district court had concluded that the individual companies were not in competition with one another at the time of the merger. Justice Holmes had actually elaborated on that conclusion several years earlier in a decision that approved the original merger. He also observed that the participating firms had not been competitors but rather were makers of complements. One firm produced lasting machines, another welt-sewing machines, and others outsole-stitching machines and heeling machines. It was not the purpose of the Sherman Act to “reduc[e] all manufacture to isolated units of the lowest degree.” In this case “the combination was simply an effort after greater efficiency.” He compared the merger to a situation in which a single firm was created to make “every part of a steam engine,” rather than using the antitrust laws to force “one to make the boilers and another to make the wheels.”

In the American Can case, which condemned the can-making trust but declined to break it up, the court also cited potential competition as the reason for being cautious about the remedy. The court observed that the American Can Company, given its large size and multiple plants, was highly efficient and made good cans. Further, the record revealed “that there are many ways in which a large and strong can maker can serve the trade, and a small one cannot.” In any event, the defendant’s power to restrain competition was limited by “a large volume of actual competition and to a still greater extent by the potential competition” from which it cannot escape. For example, when the defendant raised its price—perhaps prematurely believing that it had destroyed enough rivals—new competitors quickly re-emerged. It became “apparently profitable for outsiders to start making cans with any antiquated or crude machinery they could find in old lumber rooms.” At that point the defendant became so desperate that it actually started buying cans from its rivals, even though these were “very badly made.” Many of these were later destroyed.

The language of potential competition evolved into the modern doctrine of “barriers to entry,” a term that came into common use at mid-century. An entry barrier could be either a natural or a fabricated obstacle that made it more difficult for new competitors to enter the market. The Supreme Court first used the term in the American Tobacco case, when it referred to the defendant’s acquiring control of numerous “seemingly independent corporations, serving as perpetual barriers to the entry of others into the tobacco trade.” More specifically, the Court referred to the defendant’s acquisition of plants “not for the purpose of utilizing them, but in order to close them up and render them useless,” and also to noncompetition clauses placed on sellers that kept them from re-entering the market. A few years later a district court quoted this language in condemning Eastman Kodak for monopolization by acquiring around twenty companies and assembling all of the components of the photography industry. The phrase did not find much use in the economic literature until the 1940s, followed by significant expansion in the 1950s. It entered the mainstream antitrust literature after Joe S. Bain’s pioneering work on barriers to entry in the 1950s.

2.  From Potential Competition to the Relevant Market

As long as confidence was high that potential competition could be trusted to control prices, the precise definition of the market in which firms operated was relatively unimportant. Even monopolists could be kept in check if potential competition was robust. The assumption of robust potential competition explains both why early antitrust decisions involving dominant firms were not particularly fussy about market definition and also why they tended to emphasize detailed litanies of exclusionary practices. Monopolization was all about harmful conduct intended to exclude rivals.

 As confidence in the efficacy of potential competition waned, however, it became more important to know the number and robustness of a firm’s actual competitors. Any discipline of monopoly would come primarily from them. As John Maurice Clark observed in 1923, for most markets “it is inherently impossible to have industry effectively governed by potential competition alone.”

Concerns about potential competition are inherently dynamic. They ask about where a market is going, rather than how it may appear at this moment. In fact, accounting for movement and the ability to make useful predictions about it is one of the most challenging questions of antitrust policy. Classical economists assumed markets were competitive unless the government intervened because they focused so completely on the long run. The fact that monopoly might be dissipated by new market entry is certainly reassuring. Eventually such a market may reach an acceptably competitive equilibrium, but how long will that take, and who will be affected along the way? Focusing on macroeconomics in the 1920s, John Maynard Keynes ridiculed the optimistic faith of many economists that eventually the economy would move to a healthier equilibrium. In contrast stood the policy maker’s more immediate concerns about time. He famously concluded that the “long run is a misleading guide to current affairs. In the long run we are all dead.” Further, focusing on the long run makes economics worthless as a policy tool: “Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is long past the ocean is flat again.”

The “relevant market” in antitrust analysis emerged as a device for trading off these static and dynamic concerns. First of all, it revealed who was competing with whom in the present instant. If the market was well defined and included consideration of entry barriers, it also estimated what was likely to change over time. The evolving concern was with how rivals and customers would respond to a future price increase above competitive levels.

The idea of a “relevant market” is entirely a creature of partial equilibrium analysis. While that proposition is uncontroversial, it was not commonly acknowledged in the antitrust literature until Oliver Williamson began talking about antitrust policy and welfare tradeoffs in those terms in the 1960s. As Marshall had observed, in selecting a market economists should group sales of close substitutes and then make a working assumption that those within the grouping affect one another’s behavior, but that firms outside of the group do not. Marshall also realized that this was a simplifying assumption rather than a literal picture of a situation in which the elasticity of substitution between goods in the same market is infinitely high, while the elasticity of substitution between goods inside and goods outside the market is zero. Today we commonly say that to the extent a market is “well defined” these two conditions come closer to applying.

Assessing antitrust practices by reference to the “market” in which they occur naturally produced several questions about delineation and measurement. The most obvious one was how to identify the particular grouping of firms to which the analysis should be applied. Marshall himself paid scant attention to the issue. He identified the grouping of sales in a particular market as a “commodity.” His favorite example was tea. In that case, sales of tea constituted the relevant market. He gave only a little thought to questions about whether tea competed with coffee or water, or even the extent to which a coffee producer might switch to tea in response to a higher price. He did conjecture at one point that a failure in the coffee harvest might lead to an increase in demand for tea. He made a similar conjecture about beef and mutton. He also noted that questions about “where the lines of division between different commodities can be drawn must be settled by the convenience of the particular question under discussion.” For some purposes, he acknowledged, we might even treat Chinese and Indian teas as different.

Early Sherman Act cases took roughly the same approach, never putting a fine point on market definition. For example, neither the 1911 Standard Oil decision nor the American Tobacco decision discussed the boundaries of the “market” under consideration. In Standard Oil, the Court referred repeatedly to “petroleum and its products,” without saying anything about what that might include. In American Tobacco, the Court did observe that the defendant produced a number of products, including “cheroots, smoking tobacco, fine cut tobacco, snuff and plug tobacco.” The Court did discuss some vertical practices that involved specific products. For example, the defendant also tried to control sales of licorice paste, an essential ingredient in plug tobacco, in order to exclude rivals. The Court never spoke of any of these products as relevant markets, or considered whether they were in the same or different markets.

The American Can decision a few years later described a large litany of bad practices but said virtually nothing about the scope of the market, other than to refer to it as “cans.” The court gave no thought to such questions as whether glass bottles, which were also widely used for preserving food, were in the same market. Such questions arose regularly after mid-century.

The International Shoe case, decided in 1930, included a brief discussion of the proper delineation of a product market. It also reflected the emergence of product differentiation as a factor in market analysis. The FTC challenged a merger of two manufacturers of dress shoes. McElwain made more expensive, attractive, and “modern” shoes entirely of leather. International made cheaper shoes that included some non-leather components. Without discussing the scope of the market, the Court did credit the defendants’ testimony that there was “no real competition” between the two firms.

Estimating market power today by reference to a share of a “relevant market” is not a pure exercise in static partial equilibrium analysis. In Marshall’s model, one examined equilibrium in the market under study on the assumption that the price and output of everything else remained constant. However, he also acknowledged that this assumption often fails to obtain in the real world:

[T]he demand schedule represents the changes in the price at which a commodity can be sold . . . other things being equal. But in fact other things seldom are equal over periods of time sufficiently long for the collection of full and trustworthy statistics . . . . This difficulty is aggravated by the fact that in economics the full effects of a cause seldom come out at once but often spread themselves out . . . .

A price increase naturally invites other sellers to move into the price increaser’s market territory and customers to defect away. These substitutions upset the equilibrium and, within Marshall’s model, continue to occur until the equilibrium is restored. To the extent the market is more rigorously defined and the market share of the price increaser is higher, the movements would take longer or be less likely to occur.

The 1940s and 1950s saw a significant expansion in antitrust usage of relevant markets to estimate market power. Judge Hand’s discussion in the Second Circuit’s 1945 decision in United States v. Aluminum Co. of America has become well known. The first Supreme Court decision to contain a significant discussion about the scope of a relevant market was United States v. Columbia Steel Co. in 1948. It concluded that the market alleged by the government was too narrow. First, the area of effective competition was larger than the government claimed. Second, the two firms actually made different although somewhat overlapping types of steel. On a 5–4 vote, it dismissed the complaint. Justice Douglas’s dissent (joined by Justices Black, Murphy, and Rutledge) contained almost no discussion of the relevant market except to dispute the majority’s conclusion that the acquired firm’s three percent share of the purchasing market under consideration was insubstantial.

The chronology of these concerns is revealing because of what it says about the declining faith in potential competition to solve monopoly problems. As noted previously, as of 1899 even monopoly was not a matter of concern for some participants in the Chicago Trust Conference because potential competition could be trusted to keep prices down. Subsequently, greater doubts about the disciplinary effects of new entry naturally led to increased concerns about just how competitive the market was when entry was disregarded. By the 1930s, courts in most antitrust cases involving large firms harbored significant doubts about the ameliorating effects of potential competition. That explains the rising importance of market definition in antitrust cases.

3.  The Rise of Structuralism and the Diminishing Importance of Conduct

As Chief Justice White observed in the 1911 Standard Oil decision, the monopolization offense required bad conduct and not mere monopoly status. His reasoning was that section 1 of the statute actually forbade “all means of monopolizing trade, that is, unduly restraining it by means of every contract, combination, and so forth.” To this, section 2 of the Sherman Act sought,

if possible, to make the prohibitions of the act all the more complete and perfect by embracing all attempts to reach the end prohibited by the first section, that is, restraints of trade, by any attempt to monopolize, or monopolization thereof, even although the acts by which such results are attempted to be brought about or are brought about be not embraced within the general enumeration of the first section. 

The lower court had spoken much more clearly: section 2 should require a restraint of trade as embraced by section 1, but the difference was that “[o]ne person or corporation may offend against the second section by monopolizing, but the first section contemplates conduct of two or more.” That is in fact the distinction that modern courts have adopted.

What the statute did not do, Chief Justice White continued, was condemn “monopoly in the concrete,” or the mere status of being a monopolist.

At that point, however, the Chief Justice cast his entire reasoning and perhaps even his mental acuity into doubt with his infamous argument that because “reason was resorted to” in deciding earlier cases, the law reached only unreasonable actions. That dubious rationale presaged the more formal recognition of a rule of reason seven years later.

All of this was in pursuit of a larger point, as the Chief Justice elaborated, which was to shunt aside the argument that a court could not constitutionally divest an innocent firm of its property without compensation simply because it was a monopolist. Property owners had no right to engage in restraints on trade. Rather, the statute was directed to “particular acts,” even though these were inferred only “generically” from the statutory language. That is, requiring wrongful acts—even though the statute did not explicitly list them—was essential to the statute’s constitutionality. At that point the opinion turned to a detailed summary of Standard Oil’s conduct.

The Clayton Act developed this theme further by its enumeration of specific acts that threatened to create monopoly—namely selective and discriminatory predatory pricing, tying and exclusive dealing contracts, and anticompetitive mergers. Nothing in the Clayton Act even hints at possible condemnation of monopoly without fault; indeed, its added specificity points in the other direction.

It is thus not surprising that Progressive Era monopolization cases often read like tort cases—with an extensive discussion of conduct, accompanied by relatively thin treatment of market structure and power. This period preceded the structuralist revolution that would occur in the late 1930s and 1940s. Indeed, some commentators from the period wrote of the monopolization offense as if it did not contain a market power requirement at all, but only guilty conduct. After World War II antitrust policy as led by industrial economists completely flipped that script.

Already in the 1930s some industrial economists began to study the monopoly problem by looking at the types of structures most likely to produce it. In 1937 Harvard industrial economist Edward S. Mason observed that in “recent years economic thinking on the subject of monopoly has taken a radically different trend.” It began with the observation that “monopoly elements” of conduct were apparent in the “practices of almost every firm.” As a result, policy makers were increasingly required to make “distinctions between market situations all of which have monopoly elements.” For that, conduct alone provided little basis for differentiation. The important differences were not the conduct but rather the markets in which the conduct occurred. He noted an emerging distinction between “restriction of trade” and “control of the market.” If economics was to make a contribution to the problem of monopoly, Mason observed, it must move beyond practices and descriptive accounts of anticompetitive behavior and look for structural features that made markets more or less conducive to monopolization.

The development of imperfect competition theories in the early 1930s forced a shift in focus toward the particular market structures that made noncompetitive outcomes more likely. Some of the foundational work was done earlier. For example, in the 1920s economist John Maurice Clark looked at the manifold sources of economies of large plant size. The economies, which resulted from technology and engineering, were inherent in certain industries. In addition, the presence of high fixed (“overhead”) costs provided an explanation for price discrimination, showing it to be typically but not invariably procompetitive. Clark also discussed “economies of combination,” showing how high fixed costs and large plant size made markets more conducive to both horizontal and vertical control arrangements. In such industries “large-scale production, combination, and monopoly or restricted competition are all more or less bound together, and all occur in the same class of industries.” Everything in Clark’s book pointed in the direction of analyzing competition problems by assessing the particular structural characteristics of each firm, emphasizing the extent and nature of fixed costs.

Clark’s book was too technical to have widespread public appeal, but it did both reflect and lead an important set of developments in the field of industrial economics. Antitrust policy became more interested in the types of market structures that made noncompetitive outcomes more likely. Enforcement policy followed these developments, culminating in massive monopolization cases brought against capital intensive firms in the 1930s and after, including Alcoa and USM. Both decisions emphasized market structure and market definition and de-emphasized conduct. Indeed, both toyed with but did not ultimately embrace the idea of monopoly “without fault”—or that certain dominant firms should be broken up simply because they are too big. In Alcoa, Judge Learned Hand discussed the possibility of a presumption that a firm that had acquired a ninety percent market share was behaving unlawfully. It could defeat that presumption, however, by showing that monopoly had been “thrust upon it,” or that it was merely the “passive beneficiary” of monopoly. A few years later in the USM case, Judge Wyzanski characterized Alcoa as suggesting that a firm with an overwhelming market share monopolizes whenever it “does business.” That was as close as American antitrust law ever came to a rule of no fault monopolization.

With those decisions the courts entered the era of antitrust structuralism, which in its strongest form made evidence of bad conduct almost but not quite irrelevant. That largely ended the Progressive Era’s tort theory of monopolization.

III.  THE EMERGENCE OF VERTICAL COMPETITION POLICY

A.  “Competition,” Horizontal and Vertical

Progressives were the first to examine vertical practices and vertical integration systematically as competition problems. While some law of vertical contracting practices existed prior to that, almost none of it was concerned with competition. The Progressive accomplishment was noteworthy, because vertical business practices have historically been the most poorly understood in antitrust and have provoked the most controversy. Articulate writers have argued for both of the extreme rules: per se illegality and per se legality. The Progressives in fact opted for a highly defensible middle ground that has proven to be very durable.

Progressive Era contributions to the law of vertical integration and restraints were formative but also modest. Mainly, they focused on the relationship between vertical integration or vertical contracting and realistic threats of monopoly. Subsequently the antitrust law of vertical business relationships veered to the left and became very aggressive, condemning many practices where harm to competition was never seriously threatened. Later it changed course again, veering very far to the right and developing rules of virtual nonliability in Chicago School academic writing. The case law never went quite that far. Since 2000 or so it has been moderating once again. The rule of reason that is currently the law for nearly all vertical practices is in between, although somewhat closer to a rule of nonliability. This is at least partly because the courts have made it so difficult for plaintiffs to win rule of reason antitrust cases.

The thoroughly conventional distinction that antitrust and economics make today between “horizontal” and “vertical” practices is actually a fairly recent development. Today most of our antitrust rules of illegality are driven by it: horizontal restraints are more suspicious than vertical ones. Unlike horizontal agreements, vertical agreements do not increase the effective market share of the participants. The means by which horizontal price fixing agreements reduce market output are more obvious and better understood than for vertical price agreements. Vertical arrangements have a greater potential to produce cost savings.

Classical political economists and most lawyers prior to the 1910s or so did not see these distinctions. They tended to see competition as “rivalry,” and the vertical rivalry that might occur between a buyer and a seller, or employer and employee, counted as “competition” just as much as the rivalry between two competitors. For example, in the 1888 edition of his popular text on political economy, MIT economist Francis Walker defined competition as “the operation of individual self-interest among buyers and sellers.”

Marshall did only a little better in Principles of Economics. On horizontal competition, he focused almost entirely on the theory of monopoly, or single firms that accounted for all sales in a market. In a footnote he spoke briefly about “partial monopoly,” which he described as a firm whose wares were better known than those of other firms. Marshall’s chapter on “The Theory of Monopolies” largely assumed exclusivity and focused on how the monopolist determines its output and price when there is no threat of entry. He did mention that a vulnerable monopolist, such as a railroad threatened by new competition, would very likely charge a lower price in order to protect its trade. Never once in the 750 pages of the first edition did Marshall mention cartels or price fixing. While he drew his theory of marginalism from Cournot, he never discussed Cournot’s very influential theory of oligopoly. He did mention the rise of the American trusts in his Eighth Edition in 1920, seeing them largely as an alternative to German cartels and ultimately describing them as “treacherous.” He also saw the evil of the trusts as “narrowing . . . the field of industry which is open to the vigorous initiative of smaller businesses.” None of these discussions mentioned vertical integration or restraints.

Marshall’s relatively infrequent expressions about “competition” seem almost amateurish today—for example: “The strict meaning of competition seems to be the racing of one person against another . . . .” He complained that the term “competition” has “gathered about it evil savour, and has come to imply a certain selfishness and indifference to the well being of others,” and that “unrestrained competition” produced suffering. He spoke of competition as “glorified individualism.” He also lamented that machine production had led to undesirable competition that, “like a huge untrained monster,” led to weakness and disease. He blamed this on excessive British protection for liberty of contract. Marshall made the same complaint about labor, where he saw unfettered competition as driving wages to subsistence levels.

Marshall also had little to say about vertical integration and vertical relationships, and nothing about their impact on competition. His few mentions focused on labor. For example, he distinguished horizontal movement of workers from one firm to another from vertical movement, or promotion within a firm. Speaking again of labor, he also discussed the “vertical” competition that existed between skilled and unskilled workers who performed the same task. He concluded that for workers competition was both vertical and horizontal. First, they competed vertically for advancement within the firm. Second, they competed horizontally by movement from one employer to another. In Chapter 8, entitled “Industrial Organization,” he used the term “integration” a single time, in a biological metaphor. He defined it as “a growing intimacy and firmness of the connections between the separate parts of the industrial organism.” Late in his life, in his much less prominent and overly long book on Industry and Trade (1919), Marshall began exploring some of the differences between horizontal and vertical expansion.

Prior to 1910 or so, courts also viewed “competition” in terms that did not distinguish the horizontal from the vertical. Often the reference was to the “competition” that exists between the two parties to a bargain, with the seller wishing to receive as much as possible while the buyer wished to pay as little as possible. For example, John D. Park & Sons Co. v. Hartman, one of the earliest Sherman Act challenges to resale price maintenance, spoke of the practice as “protecting the seller of property against the competition of the buyer.” The Supreme Court of Oklahoma treated resale price maintenance agreements as a form of noncompetition covenant, used to protect “the seller of the property against the competition of the buyer.” Today, of course, we would characterize the relationship between a buyer and a seller as vertical, at least in most cases.

Even Justice Holmes, whose grasp of economics was better than that of most contemporary judges, spoke of competition interchangeably as horizontal or vertical. While a Justice on the Supreme Judicial Court of Massachusetts, he had defined competition in a tort case as “not limited to struggles between persons of the same class” but rather as applying “to all conflicts of temporal interests.” He continued, offering a purely vertical illustration:

One of the eternal conflicts out of which life is made up is that between the effort of every man to get the most he can for his services, and that of society, disguised under the name of capital, to get his services for the least possible return.

In keeping with more modern views, in 1908 the Supreme Court of Illinois rejected that characterization, describing it as “fanciful and far-fetched.” It then concluded that an employer and its unionized employees could not be said to be in “competition” with one another, even though their interests clearly diverged.

Holmes also dissented from the U.S. Supreme Court’s decision condemning resale price maintenance. The Court had reasoned that resale price maintenance was a restraint on alienation that served to eliminate competition among dealers in the sale of Dr. Miles’s brand of medicines. Holmes responded that the competition of “conflicting desires” between buyer and seller should be sufficient to discipline prices for most goods that were not essential, and Dr. Miles’s medicines were not. If a good was not essential (Holmes’s example was “short rations in a shipwreck”), the price would be set by the “competition” between the seller’s wish to charge more and the buyer’s wish to pay less. In the Northern Securities merger case he dissented from the majority’s condemnation of a merger to monopoly under section 1 of the Sherman Act. The “act says nothing about competition,” he observed. He then described the litany of common law situations characterized as contracts in restraint of trade and concluded that the facts of the present case did not fit into any of them. The idea that elimination of competition between firms that had previously been rivals might result in higher prices did not obviously trouble him.

With one implicit exception, the Sherman Act itself never distinguishes vertical from horizontal practices. The exception is the reference to “contracts . . . in restraint of trade” in section 1 of the Act. As Justice Holmes pointed out in his Northern Securities dissent, at common law that phrase referred to “contracts with a stranger to the contractor’s business, . . . which wholly or partially restrict the freedom of the contractor in carrying on that business as otherwise he would.” Justice Holmes gave as an example the British decision in Mitchel v. Reynolds. The lessor of a building to be used by the plaintiff as a bakery promised not to open a competing bakery in the vicinity. Noncompetition agreements such as these are vertical because they are formally between the seller (lessor) and buyer (lessee) of property or in other situations between an employer and an employee. Nevertheless, the agreement also has a horizontal effect to the extent that its purpose is to limit the competitive choices of the promisor. In Mitchel, the lessor had promised the lessee that he would not enter into business in competition with the lessee.

Even the Clayton Act, passed in 1914, ignored vertical competition issues with one limited exception. That was section 3, which prohibited the sale of commodities on the “condition or understanding” that the buyer not deal in a competitor’s goods. This of course became the basis for the modern law of tying and exclusive dealing. Even here, however, while the law condemned a vertical agreement, the impact was horizontal. The concern was agreements that limited competition from rivals. Further, its historical focus was on patent license agreements in which it was thought that patentees used ties to extend their patent beyond its lawful scope. The Clayton Act did not seek to expand the law of purely vertical restraints that limited only the sales of a manufacturer’s own product.

Section 2 of the original Clayton Act prohibited price discrimination directed at rivals, a form of predatory pricing. That was a purely horizontal practice. The provision was amended in 1936 as the Robinson-Patman Act so as to reach so-called “secondary line” price discrimination, or the charging of two different prices to two different customers, favoring the customer who paid the lower price. These 1936 amendments effectively turned it into a predominantly vertical statute. Ever since, the Act has distinguished “primary line” (horizontal) and “secondary line” (vertical) violations. However, the first was entirely a creature of the original 1914 Act, while the second was developed by the 1936 amendments.

 Likewise, the original Clayton Act provision condemning mergers reached only those that limited competition “between” the merging firms—that is, mergers of competitors. It was amended and extended to vertical mergers in 1950, as it appears today. 

In sum, while the Clayton Act greatly expanded upon the Sherman Act and the Supreme Court largely interpreted it that way, it was concerned almost exclusively with horizontal practices. It became “vertical” only through amendments passed in the mid-1930s and after. Outside the law of resale price maintenance, which did not have a well-developed economic rationale other than the concern for restraints on alienation, competitive concerns about vertical integration had not yet emerged. While the Sherman Act’s concern with contracts in restraint of trade and the Clayton Act’s concern with tying were both vertical, today we would characterize both as “interbrand” restraints. That is, they were vertical contracts aimed at limiting horizontal competition.

At the same time, however, section 3 of the Clayton Act and the 1917 Motion Picture Patents case became important vehicles for developing a theory of anticompetitive vertical practices that expanded greatly in the 1930s. Notably, however, the practice in that case was substantially horizontal, directed at insulating the patentee’s films from the films offered by rivals.

B.  Progressive Economics and Vertical Integration

One important chapter in Adam Smith’s Wealth of Nations was entitled “That the Division of Labour is Limited by the Extent of the Market.” Smith’s point was that larger markets permit greater specialization because businesses are able to depend more on exchange rather than internal supply. In isolated villages in Scotland, “every farmer must be butcher, baker and brewer, for his own family,” and in such towns it is hard even to find a professional carpenter or mason. From Smith’s insight that larger markets lead firms to rely more on others for certain inputs, Stigler fashioned a theory that small markets provide an impetus for internal vertical integration. As markets grow larger, firms have more opportunities to buy rather than make. But to turn that argument into one that saw Adam Smith as developing a general economic theory of vertical integration was a stretch.

During antitrust’s early years the idea of competitively harmful vertical practices was almost entirely absent from economics as well as law. Very little discussion of the issue occurred prior to the twentieth century. In the 700 pages of proceedings of the Chicago Conference on Trusts neither lawyers nor economists ever once discussed vertical integration or vertical practices as a competition problem. A few years later federal courts first began addressing resale price maintenance under the Sherman Act.

Exploration of vertical business relationships and competition policy began to enter economics literature in the early twentieth century, although somewhat haphazardly. Notwithstanding their heightened concern about the trusts, Progressive economists did not see vertical integration or vertical control as a threat. The early discussions spoke of it in very benign terms. In 1901 William F. Willoughby, a political scientist and lawyer who taught at both Harvard and Princeton, concluded that the competitive effects of vertical integration were overwhelmingly positive. Speaking of Andrew Carnegie’s steel company, he concluded that the “policy of the company” in integrating vertically was “not in attempting to lessen outside competition, but in seeking to bring about a more perfect organization and integration of its own properties.” Overall, he believed, the principal reason that firms integrated vertically was to ensure themselves of adequate and timely supply in the event of shortages.

 Progressive economist and President of the University of Wisconsin Charles Van Hise’s 1912 book on the Trust Problem spoke a single time of “vertical combination.” He was referring to vertical integration in the steel industry, but drew no conclusions about vertical integration generally. John Bates and John Maurice Clark’s important 1914 book on The Control of Trusts never discussed vertical practices or vertical integration at all. That omission is significant because at the time the father, John Bates, was one of the most prominent economists in the country and a leading marginalist. More generally, their book was a fierce indictment of the trusts.

In his 1919 book on Industry and Trade, written near the end of his life, Marshall did speak several times about the “vertical expansion” of firms into markets for supply or distribution. He noted, for example, that firms sometimes integrated vertically in order to avoid the effects of upstream cartels. His only sustained discussion of vertical integration was in relation to firms that did so in order to assure sources of supply or distribution, and he spoke of it entirely in benign terms. A few other economists did talk about vertical integration, mainly to emphasize the efficiencies that vertical control made possible.

John Maurice Clark’s 1923 book on fixed (“overhead”) costs did contain a more detailed discussion of vertical integration. He spoke briefly of “vertical combination” in the steel industry and more generally in a chapter entitled “Economies of Combination.” He described it as “the combination under one management of successive stages in a chain of productive operations.”

Clark cast the vertical integration problem as one of managing information and fixed costs: “The employer’s knowledge of his own needs and of the conditions of his own business is an expensive industrial asset . . . .” Further,

[A]nother gain from integration arises, in the shape of great reliability in the supplying of materials. The two concerns adapt their processes to each other, and the supply of materials, both in quality and regularity, can be more carefully suited to the needs of the user than they would be if the two were independent concerns . . . .

As a result, “[a]nother thing that is saved is all the work of negotiation, bargaining, higgling, stimulating demand (on the part of the seller) . . . and much of the other work of buying and selling, which could be reduced to a matter of routine.” He described this as “an overhead outlay which is capable of being enormously reduced by vertical combination.” Clearly Coase was not the first to observe that internal integration is a way of avoiding the costs of using the market. Clark’s own contribution was mainly to observe that high fixed costs and product differentiation exacerbated problems of market coordination of upstream and downstream levels.

During the Depression the economic treatment of vertical practices did an about-face, becoming much more critical, minimizing the role of cost savings or even finding them harmful, and focusing on problems of monopoly. One of the most pessimistic was economist Arthur R. Burns’s 1936 book The Decline of Competition, which was heavily influenced by the theory of monopolistic competition. He presented vertical integration as inherently monopolistic and as strong evidence that competition was in decline.

C.  Progressives and the Emerging Law of Vertical Integration

In distinguishing vertical from horizontal practices, the difficult part was to determine how a firm’s control of a vertically related market affected competition. As previously noted, economists of the day were keenly aware that vertical integration could reduce costs. So were many courts. Already in 1866, a British decision observed that one effect of a railroad’s acquisition of a colliery was to reduce the cost of coal necessary for its operations.

The courts were also aware of foreclosure threats but did not generally find them decisive. In 1886, the Supreme Court held that a railroad that had integrated into express freight delivery services had no obligation to provide equivalent services for an independent delivery company. Justices Miller and Field dissented. Given that the delivery service was a complement to the railroad, they observed, the effect of the refusal would be to exclude competing express companies from the markets served by that railroad. There was no relevant antitrust law or even an Interstate Commerce Act, which was passed a year later. Rather, they would have found a duty under the common law of common carriers. A few years later the first Justice Harlan wrote the opinion for a unanimous Court declaring that an exclusive dealing contract between a railroad and a provider of sleeping cars was not contrary to public policy or common law. The action did not rely on any federal statute.

Speaking of noncompetition covenants, which are a form of vertical exclusive contracting, Judge Taft’s 1898 antitrust opinion in United States v. Addyston Pipe & Steel Co. noted that they could sometimes be harmful. They might injure the parties by depriving them of opportunities; or they might deprive the public of services that would be valuable and thus discourage enterprise. In addition, he gave two reasons more directly related to competition policy: they might “prevent competition and enhance prices,” and they “expose the public to all the evils of monopoly.” For its part, the common law approved the great majority of vertical agreements with the exception of some noncompete agreements. In any event, Judge Taft’s statements in Addyston Pipe were dicta, because the case involved only naked horizontal price fixing.

That analysis still left many questions open. For example, how does one account for the fact that vertical arrangements may simultaneously reduce costs and exclude rivals? One of these things seems beneficial and the other harmful. Further, how much weight should be given to the common law’s traditional strong protection for liberty of contract and the freedom to trade? Those concerns loomed large in cases involving resale price maintenance and other vertical restraints, where the freedom to trade came to be the freedom to be free from restrictions on distribution. As the Supreme Court reiterated in a 1919 decision declining to find an agreement to engage in resale price maintenance, the purpose of the Sherman Act is to “preserve the right of freedom to trade.”

Historically the common law did recognize limitations on business firms’ vertical integration by contract, but the concerns did not relate to competition policy. First was the common law policy against restraints on alienation, which courts had used regularly to decline enforcement of certain types of contracts. Later on, antitrust decisions cited this policy as a rationale for using the Sherman Act to condemn vertical contractual limitations on resale, including resale price maintenance. The Supreme Court cited concerns about restraints on alienation in an antitrust case as recently as 1967, when it declared territorial restraints on dealers to be per se antitrust violations.

Second, many exclusive dealing and similar contracts were incomplete because they did not specify price or quantity. The common law itself exhibited a strong preference for “one off” contracts that contemplated sales with precise terms covering all important elements. Here, the most frequently challenged practice was requirements contracts, which later came to be called exclusive dealing. Under them, a purchaser promised to purchase its needs for a product from the seller but did not state the quantity. Through the early twentieth century such contracts were routinely struck down, not because of concerns for competition, but because the contracts lacked specificity. As the New York Court of Appeals declared in 1921, while a contract did not necessarily need to specify a precise amount, the quantity must be able to be “determined by an approximately accurate forecast.” This rule threatened the early development of business franchising, because franchise agreements were by nature open-ended as to price, quantities, and even other terms of dealing. Within a few years such open-ended contracts were to become a routine and essential part of franchised dealership networks. This occurred largely as a result of contract law’s developing doctrine of the good faith purchaser.

In his influential 1920 treatise on contracts, Harvard’s Samuel Williston approved of the common law’s restrictive interpretation. He also suggested a workaround, however, that revealed that competition policy was not the driving concern. As a general rule, he concluded, a promise to sell a purchaser’s needs without precise specification of the number is “not sufficient consideration” to make an enforceable contract. He then added, however, that the contract could be made enforceable if the buyer promised to purchase all of its needs from the seller. Thus “the promise of a seller not to manufacture except for the buyer, or the promises of a buyer not to buy except from a particular seller” was adequately supported. Williston’s statements, amply supported by case law, reflected that the common law around 1920 ran in just the opposite direction from the subsequently emerging antitrust rule: contracts of this kind were enforceable at common law only if they were exclusive. By contrast, under antitrust law exclusive contracts were looked at with ever-increasing suspicion.

Another concern that the case law reflected and that did breach the boundary into antitrust policy was when contractual restraints were included in patent or copyright licenses. Initially the courts refused to enforce many such agreements under patent law, using a variety of doctrines intended to limit the power of patentees to impose restrictions on patented articles once they had been sold. For example, in its influential decision in Wilson v. Simpson, forty years prior to the Sherman Act, the Supreme Court held that a patentee could not require purchasers of its wood planing machine to purchase its own unpatented disposable blades. In Adams v. Burke, the Supreme Court refused to enforce a condition imposed by the manufacturer/patentee of coffin lids limiting the geographic area where the lids could be used for a burial. That restriction, the Court held, was not “within the monopoly of the patent.” In Bobbs-Merrill Co. v. Straus, it refused to enforce a resale price maintenance agreement contained in a book copyright license, three years before the Supreme Court applied the antitrust laws in the Dr. Miles decision. The decision did not cite the antitrust laws. Long prior to the passage of the antitrust laws, the Supreme Court was routinely denying enforcement to vertical restrictions contained in patent or copyright licenses. Much of this doctrine eventually found its way into antitrust law.

These decisions did not consider anything about competition in distribution, but only whether the restrictive license provision fell outside the scope of the intellectual property grant. Eventually, however, the patent decisions did generate some pushback on competition grounds. One example was Judge (later Justice) Horace Lurton’s 1896 opinion in Heaton-Peninsular Button-Fastener Co. v. Eureka Specialty Co. The seller of a patented button-fastening machine prohibited purchasers of the machine from using it with any except its own unpatented fasteners, one of which connected each button to a garment. In modern terms we would characterize this arrangement as a variable proportion tying arrangement. In addition to a dispute over the reasonable scope of the patent license in which the restriction was placed, the purchaser made an argument “based upon principles of public policy in respect of monopolies and contracts in restraint of trade.” The gist was that “public policy forbids a patentee from so contracting with reference to his monopoly as to create another monopoly in an unpatented article.” Judge Lurton responded by noting that the tying clause served the useful purpose of measuring usage of the machine in order to determine the royalty.

In 1912 a divided Supreme Court relied heavily on the Button-Fastener case to hold in Henry v. A.B. Dick Co. that the maker of a patented office copying machine could tie its own unpatented paper, stencils, and ink to the machine. By this time Judge Lurton had been elevated to the Supreme Court and wrote the opinion. The Sherman Act had now been passed, but the Court rejected the contention that it prohibited this kind of agreement. Rather, the Court noted the general rule of “absolute freedom in the use or sale of rights under the patent laws.”

The Henry decision proved to be too much. Congress responded two years later with section 3 of the Clayton Act, which prohibited ties of goods “whether patented or unpatented,” provided that harm to competition was shown. That is, competition law rather than the appropriate scope of the patent became the driver. With that statement, the law of tying migrated from patent law into antitrust law. Section 3 became the first antitrust statute specifically targeting a vertical restraint. The statute actually went further, prohibiting not only absolute ties but also discounts or rebates conditioned on tying. However, it did not condemn all ties or even all patent ties, but only those that threatened to “substantially lessen competition or tend to create a monopoly.” Indeed, it is hardly clear that the Clayton Act would have condemned the button and office copier ties that had provoked Congress to act. Both were of common commodities and very likely caused no harm to competition.

 In 1917 the Supreme Court overruled Henry in condemning a tying arrangement involving the Edison motion picture projector. It was sold subject to a patent license agreement that prohibited users from showing any films other than the seller’s own. By the time of the litigation, separate patents on the film had expired. The Court read the license restriction as effectively attempting to continue the film patent’s exclusivity by tying the film to the patented projector. While the decision generally relied on patent law, the Court quoted the new Clayton Act provision as confirming its conclusion. Unlike Henry, the Motion Picture Patents case did involve a serious threat of monopoly in the infant motion picture industry. After 1930 the tying decisions were not so circumspect and began condemning competitively harmless ties.

Decisions such as Motion Picture Patents never spoke of vertical practices, but the decision did indicate judicial recognition of downstream control of films as a monopoly problem, at least in the area of patents. The concern in this case was that a patented film projector and control of film could become the lever for control of the motion picture industry. In the 1930s this concern about vertical practices as a tool of monopoly became prominent in the literature of industrial economics. In the motion picture industry itself, it eventually led to a near obsession with vertical integration reflected in the 1948 Paramount decree.

Resale price maintenance—a so-called intrabrand restraint because it does not limit competition with rival products—received the harshest treatment of all. Today we are inclined to think that tying arrangements present greater potential for competitive harm than do resale price maintenance agreements. In 1907 Judge Lurton, still on the Sixth Circuit, held that an agreement between a proprietary medicine manufacturer and its various distributors and resellers stipulating their resale price was not enforceable because it was a contract in restraint of trade. There were no antitrust issues. The difference between this case and his own previous decision in the Button-Fastener case was that the medicines in question may have been protected by a trade secret, but they were not patented. Four years later the Supreme Court agreed in Dr. Miles Medical Co. v. John D. Park & Sons Co. It referenced the Sherman Act only to conclude that earlier decisions refusing to apply it had all involved patented products.

Federal antitrust case law did not refer to a practice as “vertical” until the 1930s. In 1934 a district court opinion in the Sugar Institute case spoke about the possibility that “vertical organization of distribution agencies” might result in “a lower price to the ultimate consumers.”

More explicit judicial recognition of a distinction between horizontal and vertical practices emerged a little later, and from an unlikely source. After the Dr. Miles decision holding resale price maintenance unlawful, small business interest groups began a “fair trade” movement to permit individual states to opt out of federal law and permit resale price maintenance within their borders. After some state attempts to do so contrary to federal law, Congress yielded to an intensive campaign of small business groups led by the National Association of Retail Druggists, which had drafted a “model act” for Congress to adopt. Congress responded with the Miller-Tydings Act in 1937. President Roosevelt opposed the bill and threatened to veto it, but he caved to political pressure at the last moment.

Miller-Tydings authorized states to approve resale price maintenance within their borders, but it invited considerable dispute about its scope. While it never used the terms “vertical” or “horizontal,” it did contain a proviso that it did not immunize agreements among manufacturers, producers, and wholesalers. The scope of this immunity had to be determined judicially. Because the proviso was triggered by state legislation, it was interpreted mainly by state courts, which very largely concluded that the statute exempted “vertical” agreements but not “horizontal” ones. For example, the North Carolina Supreme Court explained in 1939:

The agreements authorized by the law are vertical, between manufacturers or producers of the particular branded commodity and those handling the product in a straight line down to and including the retailer; not horizontal, as between producers and wholesalers or persons and concerns in competition with each other . . . .

The Supreme Court eventually confirmed this view as a matter of federal antitrust law in Schwegmann Bros. v. Calvert Distillers Corp., concluding that the statute did “not authorize horizontal contracts, that is to say, contracts or agreements between manufacturers, between producers.”

By the early 1930s the law of vertical practices had developed to a place not all that different from where it is today, save for the treatment of resale price maintenance. Tying arrangements were addressable under antitrust, but liability was very largely limited to firms that had dominant market shares or where foreclosure percentages were high. In addition to Motion Picture Patents, the IBM tying case of 1936 found a tie of IBM’s computation machine and its data cards to be unlawful on a market share that exceeded eighty percent. By contrast, General Motors’ (“GM”) tie of car repairs to its original equipment parts was approved when the court concluded that the tie was essential for quality control and that there was plenty of competition in any event. Other decisions also approved ties when the markets in question were competitive.

The same was true of the law of exclusive dealing, which condemned the practice when it realistically threatened to perpetuate market dominance. In a decision applying the Clayton Act to exclusive dealing, the Court noted that the supplier controlled roughly forty percent of the dress pattern outlets in the country and that the exclusive agreement in question threatened to create several local monopolies. There was no antitrust law of vertical territorial restraints until the Supreme Court addressed the issue in the 1960s in White Motor Co. v. United States. Justice Douglas held for the Court that it was too early to say whether such restraints should be condemned per se. Resale price maintenance, which remained unlawful per se, was the outlier.

The law of vertical mergers and ownership vertical integration cut a similar path. The courts condemned it when it appeared to create or preserve monopoly, but generally required evidence of market dominance or foreclosure. For example, the judicial condemnations of vertical integration in the American Tobacco, Corn Products, Kodak, and Keystone Watch decisions were all predicated on at least an assumption of dominant market shares. On the other hand, the court refused to condemn United States Steel’s integration into distribution facilities, finding that the integration improved efficiency and reduced costs and uncertainty. In affirming, the Supreme Court cited evidence that it was cheaper for the defendant to combine several operations in a single facility and that this combination would enable it to compete more effectively in the world market.

In its unanimous antitrust decision in Eastern States Retail Lumber Dealers Ass’n v. United States, the Court even intervened to protect ownership vertical integration in the lumber industry. The defendants were classic examples of Progressive Era small businesses who relied on the mantle of “fair trade” to protect themselves from larger vertically integrated firms. In this case they organized a boycott, agreeing among themselves that they would not purchase lumber at wholesale from anyone who had vertically integrated into retailing; the Court condemned the boycott. The decision never used the words “vertical” or “integration.” Rather, the boycott was cast in terms of wholesalers who sold directly to customers rather than exclusively to the defendant retailers.

D.  Growing Fears of Vertical Control After World War II

The law of vertical relationships began to go off the rails in the 1940s, and for a confluence of reasons. One of course was the Great Depression and the dramatic rise of small business as an interest group following World War I. Another was President Franklin D. Roosevelt’s appointment of Thurman Arnold to be head of the Department of Justice Antitrust Division, turning it into a potent antitrust and anti-patent tool. The development of influential models of imperfect competition also had considerable influence.

In its International Salt tying decision in 1947, the Supreme Court applied both the Sherman and Clayton Acts to condemn a non-foreclosing tie involving a common staple—salt—that was not realistically capable of being monopolized. The case effectively migrated patent act tying policy into antitrust law by holding that the defendant’s patents on its salt injecting machine created a presumption of market power sufficient to condemn that tie. It also watered down the Clayton Act requirement that an unlawful tie must “substantially lessen competition” by holding that proof of competitive harm did not require foreclosure—something that would have been impossible to show, given that the tied product was ordinary salt. Rather it was enough to show that the tying contracts covered a significant amount of salt. In this case that was approximately $500,000 per year.

From that point tying law was used aggressively to condemn competitively harmless practices that the Court did not understand. Nor did it need to, because the per se rule for tying that the Court adopted created a strong presumption of illegality without competitive analysis. The Court relied on Justice Frankfurter’s dicta in the 1949 Standard Stations exclusive dealing case that “[t]ying agreements serve hardly any purpose beyond the suppression of competition.” That dicta served to make the Court more hostile toward tying arrangements than it was toward exclusive dealing.

In its 1949 Standard Stations decision, the Supreme Court expanded the rules against exclusive dealing to prohibit Standard Oil of California from engaging in “single-branding,” or insisting that its franchised gasoline stations pump only its own gasoline. Standard Oil’s contracts covered 6.7% of the gasoline sold in California. The Court’s condemnation of the practice was too much for Justice Douglas, otherwise an aggressive antitrust enforcer, who predicted in his dissent that requiring franchised gasoline stations to sell multiple brands of gasoline would force the refiners to build their own stations, thus eliminating the smaller dealers altogether.

One effect of these decisions was a long-standing hostility toward tying arrangements, although that hostility never extended to exclusive dealing to the same degree. That distinction does not make a great deal of sense. While a tie requires a dealer to carry a specific second product as a condition of obtaining the first, exclusive dealing excludes a particular product from the dealer’s entire business. For example, under tying a dealer that sells GM cars might be required to repair them using GM parts. By contrast, under exclusive dealing the dealer would be prohibited from selling non-GM cars altogether. While outcomes vary with facts, often the amount of market exclusion produced by exclusive dealing exceeds the amount produced by tying. In any event, the per se rule for tying was not a creature of the Progressive Era, but rather of the late 1940s.

The courts also became more aggressive about vertical integration by merger and even by new entry. In fact, vertical integration almost became a suspect category. After the merger law was amended in 1950 so as to reach vertical as well as horizontal mergers, the Court applied it liberally to situations where foreclosures were not in the 40% and above range that Progressive courts had condemned, but as low as 3% or 4% in cases before the Supreme Court, or barely over 1% in the lower courts. Internal vertical expansion earned similar treatment. For example, some decisions condemned automobile makers’ distribution of cars through wholly owned dealerships rather than contracting with independents. While the Mt. Lebanon Motors decision observed that the law of exclusive dealing required market power, the requirement was met when the court defined the market as “Dodge automobiles [sold] at the retail level in Allegheny County,” thus guaranteeing that Chrysler’s market share would be 100%.

Numerous decisions in the 1960s and 1970s prohibited nondominant firms from doing any more than switching to self-distribution rather than relying on independent dealers. None of these decisions has survived today.

CONCLUSION

In 1933, two disruptive books appeared that presented the theories of imperfect and monopolistic competition. One was written by Cambridge University’s Joan Robinson, and the other by Edward Chamberlin from Harvard. Both books reflected the Progressives’ increased skepticism about the benign qualities of markets. In the process they also paved the way for significantly more aggressive enforcement.

The theories of imperfect and monopolistic competition immediately became influential in academic circles. They gradually evolved into a single set of theories that today go by the name of imperfect competition. Whether incidentally or as a result, antitrust policy began to veer left, often past all reasonable boundaries, condemning efficient practices where the creation of monopoly was virtually impossible.

This increased level of antitrust enforcement subsequently provoked a fierce neoliberal reaction, mainly from the Chicago School. It was prominently represented in the writing of George J. Stigler and, a little later, Robert Bork. The Chicago School fought an ultimately losing battle to present imperfect competition models as untestable or incoherent. An empirical renaissance in economics, mainly in the 1970s and after, refuted that critique. Today imperfect competition models clearly dominate the microeconomic literature as well as antitrust law, and their empirical robustness is well established.

The most general result has been a shift back toward the center. Today antitrust policy sits between the aggressiveness of the Roosevelt Court on one side, which often condemned competitively harmless practices, and the decaying remnants of the Chicago School on the other. Against this the Progressive response—aggressive in its own time but quite moderate today—has proven to be surprisingly durable.

96 S. Cal. L. Rev. 129


James G. Dinan University Professor, University of Pennsylvania Carey Law School and the Wharton School. Thanks to Erik Hovenkamp and Matthew Panhans for valuable comments.

Fifty Ways to Leave Your Lover: Doing Away with Separation Requirements for Divorce

Despite the evolution of no-fault divorces, which were intended to remove certain barriers to divorce and essentially make any divorce filed inevitable, many jurisdictions prescribe a waiting period before eligibility for divorce, during which there must be a demonstrable period of separation. In support of findings of fact and conclusions of law about whether the divorcing couple has established a separation, some jurisdictions will ask whether the couple has lived in the same abode and, if so, will inquire about the divorcing couple’s roles and choices vis-à-vis one another—for example, preparing meals for one another or engaging socially with one another. Other jurisdictions will make explicit inquiries into whether a couple has had sex with one another. Probing into families’ living arrangements and adults’ sexual choices does real and particular harm to marginalized social groups, and doing so defies the liberty and privacy interests of families and couples. In explicating this litany of critiques, this project attempts to avoid the trap that family law scholarship can too easily fall into; namely, criticizing doctrine “on a low level of abstraction” and rushing to a proposed reform. This piece, therefore, offers a taxonomy of the harm that separate and apart requirements cause—paying particular attention to the ways in which these laws are classist, heteronormative, gendered, and racially charged—and illuminates how constitutionally precarious such laws are. The project is ambitious as it attempts to situate and expose the deep-seated problems of separate and apart requirements as reflective of the deep-seated flaws in family law jurisprudence generally. The piece offers a comprehensive analysis and investigation of separate and apart requirements, and it serves as an invitation to further conversation and exploration of the themes raised herein.

Based on the author’s practice experience as much as her scholarship, the proposal insists that where couples are struggling deep in the heart of the matter about their choices—the good ones and the mistakes—they do not need or desire a judicial officer to ask them to wait or to organize their life a certain way before allowing them to divorce. Nothing and no one is served by insisting on some normative view about what the end of a marriage looks like and requiring some time period for performance of that view. The proposal in this piece joins a growing chorus of practitioners, judges, and scholars talking about administrative divorces. The distinct voice in this piece advocates for administrative divorce as a procedural decoupling of divorce from any underlying or attendant economic and custody issues. The piece motivates this argument based on the premise that allowing families to proceed in this way will enhance the self-determination of families in transition and promote use of the courts when, and only when, the families determine that court involvement in matters of children and economics will improve their stability.

INTRODUCTION

The problem is all inside your head, she said to me

The answer is easy if you take it logically

I’d like to help you in your struggle to be free

There must be fifty ways to leave your lover

Paul Simon knew full well that there are 50 Ways to Leave Your Lover, yet many jurisdictions insist on just one. That one way looks something like this: decide you are unhappy, unsafe, or unstable in your marriage. Leave the marital home or somehow excise your spouse from it. Pay for that additional rent or mortgage or count on the fact that your spouse can and will. File some paperwork with the court and wait. Wait a long time. Pay a lawyer. Pay a lawyer a lot of money. While you are waiting and paying you are still married, but you are also not really married. So do not resume living with your spouse, even if there is room in that property for you. If you do find yourself back in the house (but goodness, please don’t) do not socialize unduly with your spouse. You may not be sure what that looks like, but just please refrain from it. Do not share meals with your spouse. Certainly do not sleep with your spouse. Never. Not if you are living together or if you have moved out. Eventually, go to court. See a judge. Let the judge know that you followed these rules.  

This Article takes up those rules, namely jurisdictions’ requirements that couples live separate and apart and wait out arbitrary waiting periods to be eligible for no-fault divorce. Despite the evolution of no-fault divorces, which were intended to remove certain barriers to divorce and essentially make any divorce filed inevitable, many jurisdictions prescribe a waiting period before eligibility for divorce, during which there must be a demonstrable period of separation. In support of findings of fact and conclusions of law about whether the divorcing couple has established a period of separation, some jurisdictions will ask whether the couple has lived in the same abode and, if so, will inquire about the divorcing couple’s roles and choices vis-à-vis one another—for example, preparing meals for one another or engaging socially with one another. Other jurisdictions will make explicit inquiries into whether a couple has had sex with one another. These inquiries invade the privacy of families concerning their living arrangements and of adults concerning their sexual choices. Moreover, the requirements of separateness and the inquiries they inspire do real and particular harm to certain social groups. This Article critiques these requirements as being classist, heteronormative, gendered, and racially charged and suggests that they defy constitutional protections. The Article ends by proposing a process that protects the dignity of divorcing couples and better provides predictability and stability for families in transition.

The primary argument for separate and apart requirements posits that separateness is a proxy for establishing that the decision to leave one another is mutual and voluntary or at least that one spouse has given the other a very clear indication that they want out. To the extent that one regards marriage as a contract, a meeting of the minds as to a modification of its terms or its termination makes a certain sense. But the requirements of separateness and the inquiries they inspire are superfluous and odd, given several realities of divorce: first, under no-fault divorce, no one has to prove any particular transgression; and, second, the contestations in divorce are rarely if ever about the divorce itself—rather, disagreements concern custodial, property, and support disputes.

A second argument, more tenuous than the first, to justify these requirements is anchored in the belief that marriage is a primary source of stability and security for children, families, and society. Divorce, the argument goes, is a destructive life event that couples should avoid, delay, or undertake painstakingly slowly. Yet the passage of time and the greater visibility of families and couples not hiding their choices and arrangements have debunked the myth that marriage is the only available and functional means of raising children and ordering a civil society. Meanwhile, the time periods and requirements embedded in many separate and apart requirements are deeply destabilizing and burdensome.

Moreover, the requirements for separateness burden certain social groups in particular. To begin, living—and parenting—separately prior to final orders for support and division of assets is challenging if not impossible for those who are under-resourced or living in poverty; these economic realities impact women in particular. Moreover, the obsession with what is happening behind closed doors and what those intimate and interpersonal choices might tell the public about a couple’s desires or capacities is deeply rooted in heteronormative thinking and reasoning that applies rigid binaries to gender, gender performance, sexuality, and family constellations. Many expressions of self, love, and family do not match rigid constructions of how to “do” family. Further, inquiries into sex or home life are a particular violation to women, members of the LGBTQ+ community, and people of color, as classes of people whose sexuality and home life are too often distorted or weaponized against them. Meanwhile, the requirements appear contrary to constitutional protections. A full accounting of harms—both shared and specific—and a survey of the constitutional concerns reveal that separate and apart requirements defy the very expectations we ought to have for family policies. They do not extend the dignity and respect to couples and families that they deserve, and they do not scaffold the predictability and stability that divorcing couples and families need.

Part I of this Article will explore the context of divorce—who is divorcing and why people leave marriages. This Part also offers a primer as to how the process and requirements for divorce are situated in the history of divorce. Part II will clarify and expand upon the harm done by separate and apart requirements generally and the intrusion they inspire. The Part will begin with an overview of the toll that pursuing divorce takes and how separate and apart requirements compound these burdens. This Part also seeks to situate these harms in the context of the disenfranchisement experienced by those whom the law subordinates or fails to anticipate, as well as the particular psychological harm to subordinated communities brought on by invasions of privacy and judgment about lifestyle. To the extent that Part II describes how separate and apart requirements complicate the lived experience of families, Part III introduces the legal doctrine that should challenge the existence of the requirements themselves.

Part III outlines preliminarily the substantive due process right to be free from the burden of separate and apart requirements and inquiries. Specifically, the Part will illuminate an intersection in the Venn diagram of family law—namely, in the overlay of intimacy cases, right to marry cases, and family rights cases that suggest separate and apart requirements are on shaky constitutional ground. This Part will be in conversation with scholars calling for a right to sexual privacy and a right to unmarry, and it is meant as an invitation to further and future analysis. Preliminary analysis is offered here in this inchoate form to illuminate how clumsily and carelessly we define and defend family as a matter of law. It is not just that separate and apart clauses cause or exacerbate psychic and sociological harm—the risk of this harm exists and persists even where the law appears to be on precarious constitutional footing. In many respects, the Article agrees with Martha Minow’s assessment from almost thirty-five years ago that there is “an incoherent jurisprudence about families, [because it is] a jurisprudence tugged and pushed by other concerns.”

The final Part of the paper will turn to a consideration of what really matters to families: (1) being afforded dignity and respect; and (2) stability and predictability for ordering finances and property and raising children. Interestingly, these are the public policy concerns cited in support of, but not actually served by, divorce law. Part IV will offer prescriptions that eschew dogmatic and political views of marriage and actually serve familial interests. First, jurisdictions must do away with separate and apart requirements. Second, jurisdictions should bifurcate the adjudication of divorce in a prompt administrative proceeding, allowing for subsequent adjudication or alternative dispute resolution of custodial, property, and support disputes.   

This Article attempts to avoid the trap that family law scholarship can too easily fall into; namely, criticizing doctrine “on a low level of abstraction” and rushing to a proposed reform. This Article offers a taxonomy of the harm that separate and apart requirements cause, paying particular attention to the unique harm to those whose experience of the law is too often invisible. The Article also illuminates how constitutionally precarious such laws are. The project, then, is both ambitious and insufficient, as it attempts to situate and expose the deep-seated problems of separate and apart requirements as reflective of the deep-seated flaws in family law jurisprudence generally. The Article is intended, therefore, to serve as an invitation to further conversation and exploration of the themes raised herein.

I.  YOU JUST SLIP OUT THE BACK, JACK: GETTING DIVORCED

People get divorced for all sorts of reasons along a spectrum: from mistaken compatibility to situations that pose health and safety risks to spouses or children. All along this spectrum, there may be elements of neglect of self or partner in the marriage, or unkindness or sorrow or even deceit and scandal, but there is also room for collaboration or planning for a next chapter and a changed future. Whatever the reason, or whatever the conduct of the spouses involved, states now universally recognize the importance of letting people out of unhappy marriages. And roughly forty to fifty percent of American marriages will end in divorce. Even where a party can now rely on no-fault grounds for divorce, in many jurisdictions they must establish eligibility under the jurisdiction’s separation requirements. A separation requirement refers to the amount of time two spouses must live separately to be eligible for a divorce. Separation requirements range from sixty days to five years. These requirements affect when a party can file and start the clock regarding when the matter will actually be heard or finalized. Moreover, in many cases, these waiting periods are not just a matter of running the clock; rather, the period of separation has to have demonstrable features of separation to satisfy the court that the matter is ripe for divorce.

Judges in Pennsylvania, for example, may seek evidence that the spouses began to lead independent lives, may inquire about whether spouses have stopped sharing a bedroom and whether they have had sex, may ask how much time a spouse spent in the marital home, and may question whether the spouses shared meals. In the District of Columbia, as in Pennsylvania, a couple is permitted to remain in the same marital home pending divorce, but the court will make explicit inquiry into whether the couple has had sex with one another and may additionally inquire about whether and when the spouses began to use separate bedrooms and how household finances were managed. In Maryland, couples are not permitted to cohabitate at all, which does not forestall inquiry into the spouses’ sexual relationship; rather, Maryland courts will make a direct inquiry regarding sex. Two spouses who have had sex will be deemed to be cohabitating regardless of a reality of separate abodes. The existence of separation periods and the depth of inquiry required in some jurisdictions are vestiges of confounding Victorian principles and the conspiring paternalism of the state when it comes to divorce.

A.  Current Requirements in Conversation with the Confounding History of Divorce

Historically marriage was a matter of “status and cultural location” as well as economic security (of dependent women) and property rights (of men). As stated by the Supreme Court in 1888,

Marriage is something more than a mere contract, though founded upon the agreement of the parties. When once formed, a relation is created between the parties which they cannot change; and the rights and obligations of which depend not upon their agreement, but upon the law, statutory or common. It is an institution of society, regulated and controlled by public authority. 

Divorce, then, was about restructuring one’s economic life and one’s public standing. By extension, divorce was quite public and political. Perhaps the greatest indication of this was the manner of seeking a divorce, namely through a petition to the legislature. Divorce by legislature meant a popularly elected branch of government would decide whether a marriage harmed the spouses and the community to such an extent that the harm justified ending the marriage. The move from legislative halls to courtrooms did not really render divorces particularly more available, any more private, or any less political. Separate and apart periods and requirements reflect the longstanding insistence that marriage has an awful lot to do with the performance of normative roles and obligations, so divorce is only an option if there has been a failure to execute these roles. Proving oneself eligible for divorce inspires the same theater of early divorce law and continues to invite or advance a regime of judicial paternalism.

1.  Marriage as Performance of Obligation

Where the legislature might have asked itself if a given marriage violated public policy, the courts addressed divorce as an adversarial process concerning a breached marital contract. The terms of the contract flowed between husband and wife reflecting normative values. The conventional story of marriage in the nineteenth and early twentieth centuries was this: man and woman meet, perhaps based on love, but more likely based on a courtship promoted and controlled by the involved families. Man marries and stays man; woman marries and becomes wife, a person no longer entitled to an independent legal identity. Husband assumes the legally and culturally assigned role of provider and protector for this now vulnerable creature. Wife agrees to obedience and sexual submission in order to birth children and tend to a home.  

Even as divorces left legislative halls, the jurisprudence of divorce still reflected a second social contract that flowed between the couple and the state. The state had an interest in reinforcing predictable, regulated (gendered) expectations of support and obligation. The prevailing notion was that this paradigm policed virtue and upstandingness. As sole benefactor to his family, a man would be sober and productive. As keepers of the hearth, women would be too busy performing, or too grateful to be performing, their manifest destiny as mothers to notice their disenfranchisement. Children would be fed and clothed and sheltered. Conduct such as adultery, desertion, or cruelty—and eventually habitual drunkenness and use of illicit drugs—became acceptable common grounds for divorce because such conduct amounted to an obvious breach in the promises flowing not just from husband to wife, but also between married couple and state.

2.  Divorce as Theater

Because parties were forced to comply with divorce law’s requirements for specific action or omission by a spouse, “one-sided evidentiary hearings[] and feigned testimony became common.” Litigants continued to make public performances in courts concerning the appropriateness of their divorcing, just as they had before the legislature. Indeed, litigants would “blithely relate[] prefabricated stories of their spouses’ ‘extreme cruelty’ destroying their marriage.” The following situation seems humorous in retrospect, but is in actuality a maddening example of what happens when there is such a gulf between law and society. In New York, couples staged elaborate farces, complete with paid actors hired, for example, to play a mistress who would substantiate an adultery ground for divorce. Attorneys and judges played along. The truly affluent skipped the show and headed to Reno for “quickie” divorces. When New York, one of the last states to allow no-fault divorces, finally did so, the two “chief evils the new divorce law was designed to eliminate” were the “collusive or fraud-ridden divorce actions” and “out-of-state divorces based upon spurious residence and baseless claims.” If a party failed to keep up the guise of the performance, or a judge was unwilling to accommodate a novelty or stretch in the arguments litigants made, this could forestall the divorce. And of course, any party not willing to concede grounds or agree to the divorce could lock a miserable couple together in perpetuity.

Eventually, the chasm between what relationships actually looked like and what divorce laws and jurisprudence required grew too huge and too public to ignore. The nineteenth-century Women’s Movement had animated questions about women’s roles and capacity that challenged the prevailing notion about family. These reformers raised consciousness regarding women’s property rights and a desire to dismantle male domination, “alter[ing] the notion of the husband/father as the legal representative for the family in public and commercial realms.” Decades later, in the 1960s, a powerful second wave of feminism supplied further fodder for divorce reform. These twentieth-century feminists were outspoken in naming the privilege and subordination of the varying roles and opportunities available to men and women in marriage and the public realm (for example, the privileged public role of man as employee versus the subordinated private role of woman as caretaker). They mounted resistance to the subordination of private roles and to a woman’s default position in them. Emerging notions that women might have myriad ways of deciding whether and how to be wives and mothers lent themselves to reforms that facilitate movement in and out of marriage. This cultural revolution, combined with the chorus of litigators, judges, and families who were growing tired of the system-inspired collusion and fraud, piloted a transition to no-fault divorces.

The timeline for states adopting no-fault divorce statutes tracked with Supreme Court cases chipping away at laws that carried or reflected assumptions that women would marry, marry young, and remain dependent in their marriages. The first state to adopt no-fault divorce was California in 1969, and the last state was New York in 2010. To this day, only seventeen states have true no-fault statutes whereby the parties cannot raise fault. While the concept of fault eroded or became of second-order importance in the divorce law of most states, requisite periods of separation did not erode, and normative requirements for what constitutes appropriate levels of disconnectedness arose. Parties’ abilities to satisfy the court that they have lived separately turn on their abilities to perform as expected.

The standards before a family court judge are notoriously subjective. The best interest of the child standard in custody matters, for example, asks judges to make determinations concerning a child’s welfare and happiness. On the margins—where one parent is unfit or dangerous—this may be an easy call, but, in the vast majority of cases, judges are trying to parse facts about things like parental involvement, a “child’s adjustment,” and “the wishes” of the parties and the child in order to make predictive determinations about what arrangements will best serve the needs of a child. These determinations inevitably turn on a judge’s opinion of the parents—opinions which are, in turn, based on the judge’s own observations and values. Parties who do not play the predictable and expected part of mother as nurturer and father as provider can struggle in custody determinations. Similarly, judges’ expectations and values about roles and behavior inevitably also come to bear when judges consider the conduct and choices of parties when making findings of fact and conclusions of law about whether or not couples’ marriages have in fact broken down irretrievably. Trials and colloquies on these issues, after all, seek to unearth the couples’ “innermost beliefs.” Perhaps because this task is so impossible and offensive, some jurisdictions pretend to have turned instead to “objective” evidence of separateness to prove the claim that the marriage is over. And yet even here these inquiries can include questions about sex and particularized questions about shared meals, sleeping arrangements, and social engagements when a couple continues to share a residence. Just as with other aspects of family law, couples whose performance is outside of a subjective norm can and will struggle to convince the court that they are eligible for divorce.

One appropriately wonders if any of these inquiries into intimate and familial choices are necessary given that parties can plead in plain and simple language that yes, in fact, through “no fault” of either party, there has been an irretrievable breakdown of the marriage. Surely the inquiries are of no consequence when neither party contests the divorce, and even where one spouse contests the divorce, under a no-fault regime, this spouse cannot successfully defend against the divorce itself for long: if one spouse wants out, they will get out. And yet in all scenarios—from uncontested divorces to those where someone will impotently contest an inevitable divorce—many courts can and do inquire into the conditions or level of separation before entering an order of divorce.

3.  Judicial Paternalism

The early days of judicial divorce and the jurisprudence of fault invited an era of judicial paternalism in which parties aired the failures of their spouse to act in accordance with norms, and the court’s orders were a mechanism to replace the failed male head of household. Subsequent divorce reform allowing for no-fault divorce shifted the “regulatory” energy or emphasis away from locating blame on one individual and towards the “internal aspects of family life.” The need for parties to demonstrate that they have lived separate and apart generally, and the companion assumption that this means something, particularly about the way a couple shares bed and board, tell us that models of judicial paternalism are alive and well.

In twenty-nine states, parties must invite the court to enter their home to determine if they and their soon-to-be-ex-spouse are behaving as if the marriage is truly over. The suggestion, in turn, that evidence of sex acts or contributions to a shared residence is sufficient proof of an intact marriage reflects antiquated and gendered visions of marriage, namely that marriage is nothing more than the exchange of sexual services and housewifery for the support of bed and board. It is also a reminder that marriages have always been a vehicle for the state to police interpersonal relationships and regulate society: “Marriage defines normality. It is the standard against which all other relationships are judged. Societies promote and expect marriage. And governments use marriage to police social groups.”

The creation of family courts itself signals a sense that what the court and judge were doing was intervening in the family, not merely presiding over a breached contract or brokering terms for a party wishing to modify their marital contract. Family courts as originally conceived were meant to “mend, and if possible cure, sick marriages,” ending them only “if cure was hopeless.” Judges, then, became marriage doctors or conciliators. Still today in many jurisdictions, when one files through no-fault grounds, it triggers not just a separation period and the inquiry into separateness already discussed, but it can also trigger a requirement that the couple undergo mandatory “counseling” sessions. In Pennsylvania, for example, the court does not have to order counseling but may do so on information and belief that there is a reasonable prospect of reconciliation. So, if a judge determines that the couple really meant that they wanted to be divorced when they filed for divorce, the judge may decline to order them to counseling. But if the judge decides—what exactly?—that one spouse might still be invested in the marriage and should be able to use state resources to pursue their disinterested spouse, or that either spouse has not thought it through? Then the judge can order the parties into counseling. And counseling to what end? To reconcile? To “play nice”? This is not counseling. This is social engineering. It is well studied that in order for clinical intervention to be successful, particularly in family counseling, each member must come to the counseling with a sense of autonomy, choice, and insight.

Of final and considerable concern, courts’ paternalistic “fact” finding into separateness seeks to ask and answer heteronormative and gendered questions about family composition and choices. The burden of the courts’ voyeurism is not something experienced or borne equally across all populations; rather, it is a practice that disproportionately subordinates people of color, women, the poor, and members of the LGBTQ+ community. Meanwhile, the intrusion into divorcing couples’ intimate choices and shared living arrangements, and its disparate impact on subordinated populations, is at odds with family rights doctrine and the evolution of privacy rights in marital and non-marital homes.

II.  SHE SAID IT GRIEVES ME TO SEE YOU IN SO MUCH PAIN: BURDEN AND HARM FROM INTRUSION INTO INTIMACY AND FAMILY

While there is some variation across jurisdictions, one can articulate a “typical” process to secure a divorce. One spouse will file a complaint and another will answer, or the two will file a joint complaint; the matter will be marked for a preliminary hearing, at which point the court and the parties chart a path for the divorce, which may include discovery deadlines and a series of court appearances; thereafter follows a final hearing, which may or may not be contested, so this hearing may be an evidentiary hearing or more of a colloquy with the parties; and finally, finally, finally the court will enter an order pronouncing the couple divorced and addressing issues of custody, support, and property. Yet, even within this similar arc of a divorce case, the experience of a given family will differ depending on the predilections of their jurisdiction or the circumstances of the litigants. A rudimentary Google search tells us that on average it will take a couple about one year to secure a divorce. In those jurisdictions that require a waiting period before filing for divorce, however, the timeline will be one year plus that waiting period; in Maryland, for example, practitioners will set clients’ expectations to contemplate a total wait of about two years before the divorce is final. These timelines have lengthened substantially during the COVID-19 pandemic and the ensuing strain on state courts’ capacity and flexibility to administer matters remotely.

A.  Social Emotional and Financial Costs of the Divorce Process

Scholarship about divorce includes studies tracking divorce rates, attempting to predict why couples divorce, and describing how they fare in the years after divorce. Dwarfing that scholarship are the multitude of articles and studies about how children fare after a divorce. In contrast, there is very little writing and research about the experience—the trajectory and social emotional states—during the years that pass when couples are waiting to file for divorce and moving through the courts to secure a divorce. We do know that leaving a marriage is a significant life stress. “For many people, marital separation means substantial financial upheaval, the renegotiation of parenting relationships and co-parenting conflict, changes in friendships and social networks, moving locally or relocating cities, as well as a host of psychological challenges, including re-organizing one’s fundamental sense of self: Who am I without my partner?”

We also know that the divorce process imposes financial strain on families. Divorces are costly. There are filing fees associated with the process. People using an attorney pay for that assistance—sometimes as much as $400 per hour. Divorces may involve consultation with accountants, therapists, or other professionals, none of whom work for free. Divorcing may require refinancing homes and cars to adjust ownership of that property. Couples may face moving costs or additional rents and payments to set up separate homes. The particular time periods and separate and apart requirements of certain jurisdictions are deeply destabilizing and burdensome. The waiting periods and the requirements of separate and apart make the cost of divorce more immediate or pronounced. Protracted divorce proceedings mean lost wages or use of personal leave for multiple court appearances, as well as the risk of job loss for missing work time and the cost of childcare expenditures. 

Financial strain and the protracted timeline for divorce map onto a sea of logistical and existential difficulties that are already part of divorce for families. One difficulty surrounds the public airing of private matters. We are socialized—and in fact, the law affirms in many respects—that our marriages are confidential places. Yet, the divorce process invites, even requires, an invasion of this privacy. When the issue of privacy breaches in divorce is discussed publicly or in legal discourse, the discussion usually centers on situations in which one spouse may have crossed an ethical, if not legal, line in accessing information to buttress their claims for a divorce. The invasions I refer to here, in contrast, are invasions solicited by the court and the legal process itself. Even leaving aside the particular inquiries of separate and apart, divorce itself as it is conceptualized and adjudicated asks litigants to discuss a breakdown of a (previously) private domain. It may further require discussion or examination of child rearing and finances. Now layer onto this the particularized inquiry of some separate and apart jurisdictions: last sexual encounters, sleeping arrangements, or the nature of shared meals and social engagements. In almost any other context, sex, money, and child rearing are hallowed grounds. These are issues that one may not have reason or comfort enough to discuss with anyone at all, or only with close friends; and yet now the divorce requires a public airing, all while insisting that the subject matter of the litigation is not to locate or determine any one person’s fault. As shall be discussed in more detail below, privacy is an important concept in one’s sense of self and sense of control, so the confusing breaches of it take a human toll.

Additionally, families arriving in divorce court are not on happy or easy footing to begin with. They have experienced interpersonal stressors or have had pressures outside the marriage spill over into the marriage, which have triggered the marital conflict. When the divorce process itself introduces new sources of stress and strain—financial, logistical, psychological, and otherwise—it taxes the very families who are already struggling to maintain a sense of collaboration and problem-solving. Where deterioration of the social fabric is absolute, the inability to abide each other, let alone work with one another, presents a particular problem for families with children. The prevailing wisdom is that (absent issues of abuse or parental unfitness) children benefit from access to and care by both parents, and so the presumption at law is one of joint custody. Essentially, divorcing parents will need to “deal” with one another regarding the care of their shared children. Childcare is not the only matter that requires cooperation or compromise during a divorce. Divorcing couples must make decisions about property distribution and support or risk the court making the decision for them. Even in situations in which couples are willing and able to communicate and contribute to the joint enterprises of raising children or structuring post-divorce households, navigating these scenarios requires heightened intentionality and care in order to avoid or minimize discord. This work is exhausting. Enter separate and apart requirements—requirements that exacerbate all of the sources of stress and tension described above. 

B.  Burdens Are Not Evenly Held

While any household or divorcing couple risks facing the burdens described above, the risk of exposure to the burdens or the depth of experience of each burden is not evenly borne by each family and couple. This is because not all families are resourced, respected, and accounted for in a way that provides them political and social power. It is worth starting by pointing out the ways in which differently situated families’ actual passages through the divorce process will be different; from there, we will move to consideration of the more nuanced aspects of social and political differentiation as it impacts families’ relative treatment in, and experience of, the divorce process. To begin then, it is not uncommon for different types of cases to be “tracked” differently, with separate judges for each type of case and distinct tracking orders that reflect the different realities of the pace and nature of the litigation. For families with fewer means, and particularly for those without counsel, pretrial events become the occasion for negotiation and mediation, much of which can be happening before the judge’s eyes or with the judge’s involvement. Where there are breakdowns or confusion regarding temporary orders, there are no attorneys to turn to for assistance, so the parties will seek the assistance of the court. In contrast, for parties with means, many pretrial court appearances are quite pro forma. The attorneys for the parties submit or discuss the private separation agreements that the divorcing spouses have agreed to in out-of-court negotiations or mediations. Parties produce evidence and ask and answer questions in depositions or interrogatories. Court appearances can be an occasion for announcing what is known, what has been done, and what has been decided. 

The effect is to offer people of means the opportunity, at least, for the vision of divorce that Elizabeth Cady Stanton herself had wanted when she advocated for marriage to be considered a private agreement between the parties that the parties themselves could terminate, with only state acknowledgment of the termination. What she argued against is what people of lesser means arguably endure: supervised marital relations by surrogate governmental heads of household. Yet, for certain families, the entire scaffolding for divorce invites judicial involvement and threatens judicial paternalism. All this, in turn, maps onto the public discourse about the divorce “problem.” One hears claims that feuding parents should stay together for the sake of the children, that revaluing the idea of marital service and obligation would improve family life, and that marrying and not divorcing would lift women and children out of poverty. Conservative pundits have laid blame for all manner of social problems on the thresholds of “broken homes.” “[M]arriage, rather than a shift in public priorities, [is] the solution to poverty, violence, homelessness, illiteracy, crime, and other problems.” It is in this context of punitive and judgmental rhetoric and under the eye of judicial paternalism that families are asked to declare their choices about whether they have lived together, how much they have communed with one another if they have lived together, and whether or not they have had sex with one another. There is a risk that subordinated and under-resourced families will have a particularly difficult or strained experience in such a divorce process. This is not only unfair on a systemic level for a society that strives for justice, but it is painful on a personal level for those individuals whose families and needs are ignored, mischaracterized, or marginalized.

The state’s intrusion into sexual and familial choices is a story told in race, class, gender, and sexuality, yet the state will declare its laws neutral. The critique herein is twofold: first, to notice the inadequacy or stubbornness of the law; and second, to take the time to name the psychic collateral consequences of our subordinating jurisprudence. An example will help here. Let us consider the law of rape. It is well studied that when Black women report rape, their accusations are under-investigated and under-prosecuted. Yet, in other contexts, the law is swift and careless in its intrusion into Black communities for the purpose of criminalizing the behavior of Black bodies. Kimberlé Crenshaw explains how, therefore, a Black woman may be reluctant to call the police even when she has been raped or assaulted due to an unwillingness to subject her private life to the “scrutiny and control of a police force that is frequently hostile” to the Black community. Will her account be heard as an assault as clearly as it would be if it had been reported by a white woman? Will her assault and the violation of her sanctity be deemed as intolerable as they would be had they been reported by a white woman? This has led, then, to the reality of Black women underreporting violations of their bodies. It has confirmed in the hearts and minds of many in the Black community that the law, for them, is not about protection and safety. There is also lasting psychic harm to Black women, and ongoing risks to their bodily safety.

One can follow a similar path in analyzing separate and apart requirements and inquiries. To begin, requirements and inquiry around separate and apartness are manifestations of not believing—not believing that a family is considering or preparing itself appropriately for divorce; not believing their declarations that a marriage is over. Not being believed takes a psychic toll. Second, these laws require probing into private spheres, and often, sexual choices. In this way, the law is primed—designed?—to alienate, ignore, or suppress classes of people, because not everyone’s sexual dignity is held in positive regard, and because the law is tethered to heteronormative arrangements for what the private family sphere is “supposed” to look like. Moreover, inquiry in search of “proof” of separation invades a person’s sense of privacy. Here, I do not refer to privacy in the constitutional sense (though I will do so in later sections of this Article) but rather in the ways in which individuals understand, hold, and value their privacy. Privacy is an elusive concept: “Privacy is associated with liberty, but it is also associated with privilege (private roads and private sales), with confidentiality (private conversations), with nonconformity and dissent, with shame and embarrassment, with the deviant and the taboo . . . and with subterfuge and concealment.” Perhaps as a consequence, people perceive invasions of privacy differently and bear those invasions differently. We are not all situated similarly in terms of the treatment we receive, the ways we are heard, the sense people make of our lives, and our experience of normative expectations. As described above, rhetoric about divorce is already punitive and judgmental. Black, Indigenous, and people of color (“BIPOC”), LGBTQ+, and under-resourced families, meanwhile, are asking for divorces in the context of their own stigmatization, discrimination, and associated psychic pain. They are asking for divorces in the context of specific stigmatization and discrimination directed at their families and sexuality.

1.  Stigma & Discrimination

Discrimination contributes to poor health outcomes and specifically affects mental health when the experience alters “one’s perception of self and their surroundings.” One’s stigmatized social status can create “unique minority stressors” for stigmatized and disadvantaged populations. People of color, specifically, are “stressed by individual, institutional, and cultural encounters with racism.” Specific encounters with racism may take the form of aversion, harassment, discrimination, hostility, and violence. These encounters and experiences can be the source of affirmative trauma, or the cumulative experience of them can lead to toxic stress responses. Unsurprisingly, studies suggest that these race-based stressors have an impact on BIPOC’s psychological and physical health.

LGBTQ+ people also suffer from individual and institutional discrimination. LGBTQ+ people may suffer from stigmatization by individuals and institutions, which can in turn provoke self-stigma. LGBTQ+ people are specifically subjected to stigmas based on perceptions of illegitimacy: gay and lesbian individuals do not participate in legitimate relationships; transgender persons do not express their gender in a legitimate way. Researchers have identified different categories of stigma. “Felt stigma” is the knowledge of society’s perception of you. Felt stigma can motivate LGBTQ+ persons to “constrict their range of behavioral options (e.g., by avoiding gender nonconformity or physical contact with same-sex friends) and even to enact sexual stigma against others.” Felt stigma may encourage some LGBTQ+ people to conceal their identity or socially isolate. Another manifestation of self-stigma is “internalized sexual stigma . . . . Internalizing sexual stigma involves adapting one’s self-concept to be congruent with the stigmatizing responses of society.” Finally, stigmas of a different flavor plague women, and particularly poor women. Since time immemorial, women looking to leave marriages have been cast as lustful and deviant. To this day, poor women, in particular, are subject to commentary about their being imprudent and reckless. Consider, for example, the double standard of marriage as something that is necessary or ideal for mothering. A white celebrity, in all her staged glory and living in an environment buttressed by endless support and resources, can tell a story of her personal redemption and strength in her decision to be a single mother. The figure of the “welfare mom,” however, is scrutinized as having subjected herself, her children, and society at large to her irresponsible decision to mother alone. Moreover, numerous studies have confirmed that the accumulation of stress present in a life of poverty has adverse health and mental health outcomes.

Withstanding the domination and control of racist, gendered, or heteronormative systems interferes with one’s esteem and mood states. It also frustrates one’s locus of control. In psychology, a locus of control refers to one’s perception that they control what happens to them and around them. Someone with a strong internal locus of control can believe and actualize that they are the master of their own destiny. Individuals with external loci of control are left with the feeling that the world happens to them and that they are powerless to chart or change their path. The requirements of divorce risk adding to the cumulative stress, stigmatization, and loss of control already experienced by vulnerable families. Additionally, any judgment or rejection during a divorce proceeding about not getting the separation “right” follows a litany of experiences and systems that tell BIPOC, LGBTQ+, and under-resourced families that they are not getting family “right.”

2.  Getting Family and Sex “Right”

Consider, specifically, the requirement for inquiry into a couple’s decision to cohabitate during a period of separation. As previously discussed, anyone might be annoyed or embarrassed by offering a virtual stranger in open court an account of their bed and board choices, but these requirements and inquiries present particular insult to subordinated populations. Separate and apart inquiries specifically are an intrusion into the inner workings and decision-making in a private realm. For subordinated populations in particular, this private realm is a last bastion of dignity. The experience of the intrusion into a private sphere can be particularly painful and acute for those who weather subordination in the public sphere.

As Crenshaw so astutely surmised, 

There is . . . a more generalized community ethic against public intervention, the product of a desire to create a private world free from the diverse assaults on the public lives of racially subordinated people. The home is not simply a man’s castle in the patriarchal sense, but may also function as a safe haven from the indignities of life in a racist society.

Racism’s chronic external, public assaults on dignity create resistance to, or sensitivity about, inquiry and critique of private family decisions that are nuanced and, therefore, more susceptible to racist misinterpretations and biased reasoning. People of color are not alone in distrusting inquiries into private realms or experiencing heightened discomfort during such inquiries. The LGBTQ+ community has borne bias in many spheres of life. For too many members of the LGBTQ+ community, rejection and judgment started in their homes and families. The rejection of LGBTQ+ children in homes and in school can turn violent. Judgment and hostility in the workplace or public spaces are common too. Far too often, LGBTQ+ families are cast as deviant, illegitimate, or confusing to children. The home spaces and families designed by some members of the LGBTQ+ community are an expression of what is required to build the family or keep it safe from heteronormative hostility. Family design can also reflect a conscious decision to reject gendered norms for a family’s financial and social arrangements. These families may feature partners and children connected in diverse ways. Historic lack of protection—or affirmative criminalization—for the family ordering of LGBTQ+ families leaves many such families with legacies of perceived vulnerability, a perception that can be particularly acute during divorce. Preliminary research regarding same-sex couples, for example, suggests that these couples feel a “heightened social scrutiny at the time of a relationship’s end.” LGBTQ+ families are not alone in structuring families that do not fit a rigid heteronormative paradigm—male head of household, female companion, children. Under-resourced communities, foreign-born families, and Black families are all more likely than white affluent families to live in multigenerational homes. There are racial and ethnic disparities in marriage matters as well. Lastly, there are growing disparities by class concerning modalities for child rearing. When families operate outside of the norm, they raise the hackles of our system of supervision: a class-based system of white, heteronormative supervision.

Finally, consider where the inquiry into the private family sphere includes specific inquiry about sex. Domination and control of sex and sexuality is an old tool in the arsenal of oppression. Consider, for example, that there was a time when the law did not acknowledge marital rape as a crime. This was because sex was an “essential obligation of marriage,” and sex between married people was “private.” Bound up in the protection of male entitlement to sex and freedom from scrutiny regarding how they pursued it was systemic acceptance of the domination of women. Eventually, the mantle of marriage could not disguise the violence of rape and the public came to see the law’s willful ignorance of the violence as tantamount to support. The change in law, in turn, better reflected and resisted the dominance and control inherent in rape and acknowledged that dominance and control is no less dangerous and damaging in the context of a marriage. Legacies of domination and control explain why women, BIPOC, and LGBTQ+ persons face the most abuses of their sexual privacy and are vulnerable to critique of their sexual choices in public spheres. 

Anti-racist scholars have also demonstrated the “sexualized nature of racial oppression.” Since the time of slavery, when Black women were reduced “to a sexual object, an object to be raped, bred or abused,” and onward, Black women’s sexuality has been co-opted and weaponized against them. Meanwhile, the hyper-sexualization of Black men is “one of the most prevalent stereotypes in white America’s racial mythology.” Indeed, in family court proceedings and the child welfare context, one still sees that the sexual stereotypes of Black men and women result in the “devaluation” of mothers and the stereotype of the absent father. The scrutiny of Black parents generally, and their sexuality specifically, is ingrained into our definition of the worthy and unworthy poor.

LGBTQ+ communities, meanwhile, have experienced state-sponsored hostility regarding private, consensual sexual expression for centuries. Consider “sodomy laws” for example, which “do not merely express societal disapproval; they go much further by creating a criminal class.” Sodomy laws provide a particularly clear example of how the law is often clumsy and mean in its desire and attempts to define, understand, and regulate relationships. Separate and apart laws are no exception to this general rule. Laws and procedures that require probing into family constellations and sexual choices are not neutral or kind—not by design and not in effect. Meanwhile, how we define and dignify intimacy between people, and to whom we extend corollary rights to privacy and liberty, matters. It has significant implications for equality. When we home in on considerations of liberty and privacy interests, what also becomes clear with separate and apart laws, beyond the fact that they are bastions of bias and unkindness, is that they are not obviously even permissible.

III.  SHE SAID IT’S REALLY NOT MY HABIT TO INTRUDE: INTIMACY, MARRIAGE, AND FAMILY

The legal grounds for doing away with separate and apart requirements and their invasive inquiries are hiding in the shadows where many rights important to families and those in relationships also hide. Our Constitution does not articulate positive rights, rights securing access to a given thing—education or housing or health care, for example. The quintessential statement of rights in the U.S. Constitution—the Bill of Rights—articulates a series of negative rights, or limits on the government. The Ninth Amendment does, however, remind us that the enumeration of certain rights “shall not be construed to deny or disparage others retained by the people.” And so, against a scaffolding of governmental restraint and in combination with an explicit invitation to recognize rights of the people, we see a liberty interest the “exactness” of which is difficult to define, but which “[w]ithout doubt . . . denotes not merely freedom from bodily restraint but also the right of the individual to . . . establish a home and bring up children,” and an articulation of a privacy right “formed by emanations” from other constitutional guarantees. The articulation and application of these rights have been vital to those in families and relationships. These rights, as discussed in the context of intimacy cases, right to marry cases, and family rights cases, suggest that separate and apart requirements are on shaky constitutional ground.

A.  Privacy: Intimacy

In 1965, the Supreme Court asked itself if our society could tolerate the police searching the “sacred precincts” of a marital bedroom for evidence of use of contraceptives. It answered its own question, declaring that “[t]he very idea is repulsive.” The Court’s language in Griswold v. Connecticut, describing the image of a police officer in the bedroom in order to regulate the intimacy of two adults, was not hyperbolic rhetoric. Rather, the description was reminiscent of the actual encounter that Mr. and Mrs. Loving had with police in their bedroom in 1958 and a foreshadowing of state action to come. In 1982, an officer entered Michael Hardwick’s home with a (moot and invalid) warrant for his arrest on another matter, and, upon seeing him in his bedroom having sex, arrested him. Then, in 1998, police entered John Lawrence’s home on a report of a “weapons disturbance,” saw John Lawrence and Tyron Garner having sex, and arrested them. All might have been lost for Lawrence, as it was in Bowers v. Hardwick, had the Court not recognized that the issue before it concerned “the most private human conduct, sexual behavior, and in the most private of places, the home.” In so doing, the Court finally agreed that the question provoked by a law regulating sex between consenting adults was not a question of what an individual was doing in the privacy of his own bedroom, but rather what the state was doing there. Griswold, Lawrence v. Texas, and their progeny tell us that the bedroom becomes a proxy for “the exercise of . . . personal rights.” These cases also confirm that these rights exist within, but also extend beyond, marital relationships.

Danielle Keats Citron argues more specifically that it is “time to conceptualize sexual privacy clearly and to commit to protecting it explicitly.” Citron’s advocacy concerns civil and criminal liability for those who attack and assault the sexual dignity of individuals through any range of behaviors, including nonconsensual pornography, coerced sex, nonconsensual capture of nude images, and so forth, but her analysis affirms concepts important for the issue at hand. Citron defines sexual privacy as “the behaviors, expectations, and choices that manage access to and information about the human body, sex, sexuality, gender, and intimate activities.” She argues that sexual privacy combines principles of equality, intimacy, and sexual agency and that recognition of such a right and protection under it allows people to “author [their] intimate lives and be seen as whole human beings rather than as just . . . intimate parts or innermost sexual fantasies.” Protecting the self-disclosure and vulnerability inherent in sex upholds principles of dignity and equality. While the concept of sexual privacy is still being developed and litigated, cover for the literal and figurative “bedroom” at issue in separate and apart inquiries is undeniably located in the penumbra of privacy interests. The privacy rights here clearly establish that the state is not, when it comes to consenting adults, permitted to intrude on who is having sex and to what end.

One cannot help but notice how these principles of privacy around sexual intimacy erode completely in the context of separate and apart requirements. Indeed, in the context of divorce, at least, the needle has moved away from the early and deeply influential articulations of why privacy matters. Samuel Warren and Louis Brandeis, in their seminal contributions to the conversation of privacy, repeatedly emphasized protection of “thoughts, sentiments, and emotions,” not just the body and property. They made their impassioned case for privacy following publicity, and specifically photography, of a wedding at which many of the Boston elite were present. To their thinking, by photographing the wedding and making those pictures available for public view, the press was laying bare “the sacred precincts of private and domestic life.” If photographing marital joy was so troubling to early proponents of privacy, how can seemingly superfluous inquiry at the time of a divorce not seem problematic? Consider, for example, Bergeris v. Bergeris, a case from 2012—not 1812—in which we see a court probing the interactions of a couple to determine whether and what type of phone sex they had. The probing occurred despite Maryland ostensibly being a no-fault jurisdiction. The probing occurred despite the procedural posture of the case, in which Ms. Jeanine Bergeris sought and received a protective order against Mr. Bergeris, and both parties had—at varying times in the history of the case—sought limited or absolute divorces from one another. And the scrutiny of “sexually explicit telecommunications” forestalled the divorce of this couple, despite their having been locked in litigation for two years, during which time one or both of them was seeking a divorce.

Privacy for sexual intimacy is not the only substantive right important to families hanging out in the shadow of liberty interests. One can see declaration after declaration that “[t]here . . . exist[s] a ‘private realm of family life which the state cannot enter.’ ” As stated in Carey v. Population Services International, “[w]hile the outer limits of [the right of personal privacy] have not been marked by the Court, it is clear that among the decisions that an individual may make without unjustified government interference are personal decisions ‘relating to marriage, procreation, contraception, family relationships, and child rearing and education.’ ” Accordingly, the Court has admonished laws that abridge the freedom of personal choice in matters of family life. The family realm, as a site for making choices for and about one’s family, has been afforded both substantive and procedural protection. 

B.  Family Rights

Families’ privacy and liberty interests can link in important ways to their survival. This liberty interest has translated into the law affording families’ choices dignity, respect, and a wide berth. Family survival has, in turn, always included the notion of change or restructuring. Any suggestion that families journeying through a divorce are no longer families or will no longer be families once the divorce is finalized is intellectually dishonest and demeaning. To begin, any argument that the end of a marriage means the end of a family does not track with common sense or with the Court’s recognition of many non-nuclear or bi-modal families. Moreover, statutes adjacent to divorce, namely support, child custody, and property distribution statutes, confirm that divorced families will still be tied to one another through continued coordination, support, or cooperation, even while each spouse will be entitled to independence from the marriage. Property distribution statutes, for example, do not just consider spouses’ past contributions to marital property and past acquisitions of assets and income to design equitable distributions, but also consider spouses’ future opportunities for acquisition of assets and income, and forward-looking needs in terms of providing care for any children. Custody statutes will ask about the living arrangement and structure of care to which children are already accustomed while also asking questions about a parent’s willingness and capacity to shape new and presumptively shared custody arrangements going forward. Alimony statutes call out spouses’ past contributions to the achievements of the other or running of the household, while also enumerating forward-looking considerations of spouses’ abilities to find employment or achieve financial independence. In this way, the jurisprudence around care of children, support, and division of property reflects the complex reality of divorced families: while the pathways of two divorcing individuals are diverging, there is a history that binds them and a tomorrow that involves them both.

Prior to any restructuring contemplated above, there is a limbo period during which spouses are contemplating divorce or are in the process of negotiating or litigating a divorce. Couples at this stage are still married. And what the couple is doing at this stage is making choices. These choices might include choices about how to spend money, how to organize their affairs, and how to care for children and prepare them for their new reality. Couples’ status as (still) married and the choices they are confronting provoke liberty interests. Direct and easy application of family rights doctrine should forestall court inquiry into the private realm of their family dealings. Parents of a child, for example, may make decisions to continue to share physical space and even a degree of intimacy as part of a larger vision of how to provide the best care for a child during a time of emotional and financial upheaval. This ability to decide how to raise one’s child is a clearly constitutionally protected interest. Families’ interest in childrearing, their interest in protecting their family, and their interest in creating social and legal order were considerations Justice Kennedy named as buttressing the right to marry. He wrote: “marriage is inherent in the concept of individual autonomy”; marriage is an “intimate association,” a “union unlike any other in its importance to the committed individuals”; “the right to marry . . . safeguards children and families”; and finally, “marriage is a keystone of the Nation’s social order.” Taken together, these four principles put flesh on the bones of the interests bound up in marriage.

C.  Right to Marry

In Obergefell, the Court had relatively recent occasion to write its latest love letter to the institution, agreeing with sentiments from courts before it that marriage is the “relation . . . most important” in life, and that freedom to marry is a “vital personal right[] essential to . . . happiness.” And yet, our nation has a shameful history of denying access to marriage for all sorts of reasons. Until appallingly recently, many states had anti-miscegenation laws on their books. When, in 1967, Loving v. Virginia declared such laws unconstitutional, fifteen states in addition to Virginia had similar laws. And there was not a clear, unencumbered pathway for same-sex couples to marry until 2015.

Loving, Obergefell, and their progeny clarify and confirm that the state may not “significantly interfere” with decisions to enter a marriage, but it is well understood that states can and do regulate marriage, both in terms of one’s entrance into it and exit from it. With few remaining constraints, however, you can decide to be married and you can—relatively immediately—be married. In contrast, there is no prohibition against interference regarding the decision to divorce. The only restrictions on states are that they cannot deny access and opportunity to be heard to end the marriage and they must extend full faith and credit once a jurisdiction has pronounced a divorce. Thus, states police both the gateway to marriage and the gateway to divorce, but they are neither the same gate nor do they swing with equal ease or open to equal breadths. 

Those arguing for a “right to unmarry” take issue with the fact that the process to divorce is so encumbered, as compared to the process to marry. 

The government promotes marriage by making it fast and easy, at least if it’s your first marriage. In states like Nevada, you can even get married on the spot. By contrast, divorce is slow and burdensome. It can take many months and inevitably requires many filings. Unlike marriage, which is essentially a ministerial act, divorce typically requires legal representation, multiple filings, court appearances, and considerable expense. You can get married on a lark, but getting divorced is always a bear.

They argue—quite convincingly—that all four Obergefell principles regarding marriage apply to a right to a prompt divorce. In its argument that the fundamental right to marry “must apply with equal force to same-sex couples,” the majority opinion relied upon four principles: (1) individual autonomy; (2) intimate association; (3) the promotion of familial relationships; and (4) social order. In articulating these principles, the Obergefell Court declared that marriage “draws meaning from related rights of childrearing, procreation, and education” and that choices about marriage “shape an individual’s destiny.” Proponents of the right to unmarry suggest that “[i]f it offends autonomy and dignity” to prohibit a given marriage, then surely it “offends autonomy and dignity” to bind someone to a marriage they no longer wish to be part of, particularly where that bind constricts their ability to marry another. 

One can see that protecting the “choices” and “destiny” of those in a marriage means nothing—and in fact sets us back hundreds of years—if we then limit the acceptable choices to only those that reflect a willingness to stay bound to a marriage no matter the consequences to safety, psychology, or finances. But a stronger, or additional, argument might thread the needle a little differently. One can argue that the principles in Obergefell apply directly to a divorcing couple because, during a divorce proceeding, a couple is married. The operation of laws confirms this simple truth: until parties are actually divorced, they are married. They cannot, for example, remarry in the interim without risking the subsequent marriage being deemed polygamous. Property acquired before a divorce is final can be deemed marital property, and property disposed of before the divorce can be seen as a party dissipating assets. Couples engaged in divorce proceedings should, therefore, be entitled to privacy in any intimate association they choose to maintain and deference to their sound discretion concerning child rearing and creation of stability and predictability, because doing so will indeed serve to preserve social and legal order. 

These constitutional mandates taken in combination with one another are suggestive of a substantive due process right to be able to maintain privacy and demand state deference to familial decision-making during a divorce. These protected rights of family liberty and privacy should foreclose parties from having to submit to a hearing about their choices to engage in sex, share meals, or occupy similar space. There may also be equal protection challenges embedded in the pronounced burdens that separate and apart laws place on particular social groups. But what both the typology of harms and the analysis of the rights and interests show more generally is that separate and apart requirements are baffling and problematic. They are a “solution” in search of an actual problem, one that meanwhile ignores the legitimate needs of families in transition. Perhaps this should surprise no one, because family law jurisprudence and the associated rights and obligations so rarely reflect the needs and interests of families and the individuals that make up those families; rather, they are a reflection of prevailing political forces and wills. Divorce, in particular, is “a lightning rod for deep-seated political anxieties that revolve[] around the positive and negative implications of freedom.” And we have a long and particularly brutal history of disregarding or distorting the familial rights and interests of subordinated groups.

IV.  MAKE A NEW PLAN, STAN

If we are to design laws and procedures that protect families, it is worth pausing to name, even in the most general sense, what matters to families. What appears to matter—sociologically, psychologically, and historically speaking—is (1) stability and predictability for raising their children and ordering finances and property; and (2) being afforded dignity and respect. Separate and apart requirements, and the invasive inquiries that some requirements provoke, defy these interests. They deny some families a path to the clean, expeditious exit that they need, despite it being “socially and morally undesirable to compel a couple whose marriage is dead to remain subject to its bonds.” For other families, they force social arrangements that feel premature, unnatural, or disadvantageous to a family’s plan for transition. In contrast, the proposal herein better protects and reflects a commitment to stability and predictability, and to dignity and respect. Preliminarily, divorce law must be rid of separate and apart requirements. From there, one could more easily contemplate divorce as an administrative and civil—not judicial—matter, just as marriage is a civil and administrative matter. Divorce could then be bifurcated from subsequent adjudication of custodial, property, and support disputes where the circumstances and needs of a family require adjudication of those matters.

A.  Stability and Predictability

In a manner relatively consistent since the Victorian era, divorce has been cast as the cause of many social ills—sex-crazed men and women, unrestricted by a commitment to have sex only in order to have children and then raise children together; vagabond children left by the aforementioned parents; impoverished women-led households. Today, conservative pundits insist that the decline in marriage is the cause of children struggling in school, financial strife, and even crime and violence. Rather than unearth the complicated social realities and psychology that contribute to struggling marriages or undertake public policy reform to address social ills, it is far easier to posit marriage as “the solution to poverty, violence, homelessness, illiteracy, crime, and other problems.”

Yet, far from causing all social ills, divorce actually provides important social and financial recalibrations for many families. By way of example, let us begin with an honest look at the ubiquitous claim of many pro-marriage and antidivorce activists: what about the children?!? Divorce obviously affects children, and studies have rather consistently concluded that the event of a divorce will produce measurable anxiety and depression for many children. What early studies failed to ask, however, was how long or pronounced that suffering would be; and moreover, how evidence of anxiety, depression, or clinical antisocial behavior was linked to the quality of family life prior to divorce. More nuanced studies reveal that children in marriages marked by high dysfunction suffer, so their antisocial behavior decreases when parents dissolve unhappy marriages. Other studies suggest a positive relationship between divorce and measures of increased resilience across time. Paying attention to the experiences and outcomes for high-conflict families is particularly important when one considers the reality that divorce is also a means of escape from affirmatively abusive environments. Approximately twenty-five percent of divorces are initiated in response to domestic violence. There is also an inverse correlation between divorce rates and domestic violence rates: “In the first five years after the adoption of no-fault divorce, divorce rates did indeed rise, but the domestic violence rates fell by about 20 to 30 percent, and wives’ suicide rate fell by 8 to 13 percent.”

Even in situations that do not overtly threaten one’s personhood, divorce can increase predictability and stability because it opens the door to structuring parenting and finances. It is no secret that divorce can create economic hardship and that this hardship disproportionately affects female headed households, but it is not true that financial independence and competence would have been assured if the marriage had remained intact. Similarly, it is true that custodial disputes can be contentious and painful, but it is not true that marriages produce equal and adequate parenting. Oftentimes, divorce recognizes the truth that some people reach more authentic or sustainable parenting and financial arrangements when they are apart. And indeed, it is only through divorce and not within an intact marriage that parties have a legal right to equitable distribution of property, claims for support, and claims of custody that are severable from that of the child’s other parent. It is only in the context of divorce and not marriage that parties can seek intervention of the court to help them act on these rights. 

Whether the assistance of the court or an internal reckoning gets them there, divorces can and do change people’s choices regarding their parenting and decisions to work or pursue training. After studying nearly fourteen hundred families, Mavis Hetherington describes custodial parents learning to round out their skills as parents; for example, a parent may learn more about discipline and control or strive for greater kindness and softness when the parent cannot offload some aspect of parenting on the other parent and must develop the skills themselves. She also writes about a group she calls “divorce activated fathers” who “begin to do all the things they were too busy to do before divorce or had relegated to their wives,” for example, soccer games and school plays. Other individuals studied reported that divorce ignited an opportunity or a motivation to go back to school, find work, or switch jobs. For many of these divorcees, these changes in their roles and ways they conceived of themselves in their families led to self-discovery and empowerment. 

B.  Dignity and Respect

Perhaps precisely because divorce is a pathway to refiguring one’s social, financial, and custodial relationships, leaving a marriage can be an important expression of agency and self-determination. Self-Determination Theory looks to human experience to understand what motivates people and posits that the ultimate goal for any intervention is inspiring a person’s optimal functioning. The theory suggests that three basic psychological needs are associated with increases in wellbeing: autonomy, competence, and relatedness. Autonomy refers to the need to have an independence of being; competence is defined as the desire to “master one’s environment”; and relatedness refers to the desire for meaningful social interactions. Scholars have described marriage as an “expressive resource” and commitment to marriage as an expression of association and personhood. David Cruz argues that the ability to hold oneself out in a relationship recognized by civil law, and not just by social reality, is an expressive resource. He made this case in 2001 to argue for same-sex marriage, but the notion easily applies to divorce as well. A decision to divorce and an appeal to have that divorce recognized by law is an expression of needs and choices about the continuation of a union with another person and the associated intermingling of finances, property, child rearing, and habitation. Limiting an individual’s expression inside marriage to only those decisions and behaviors that commit to the marriage constrains one’s self-determination. Divorce can be a valid expression of one’s autonomy, decision-making, and desire for a different manner or source of relatedness. Divorce allows people the opportunity to resituate themselves in new relationships or outside of any relationship at all. Additionally, in ways unavailable to them in an intact marriage, those seeking divorce are able to pursue orders for equitable reallocation of property or for support that may alter financial and power dynamics with their spouses in important ways.

It is not just the ability to divorce that matters for one’s self-determination; freedom from public scrutiny regarding the mode and manner of separation while divorcing has implications for one’s self-determination and mental health as well. Much of divorce reform acknowledges that judges have no business probing into families’ personal issues, because their doing so is beside the point if the couple themselves have declared their marriage “dead” and because such probing produces “gut-wrenching pain” and injures families. Moreover, such probing is also fated to skew toward normative bias or whimsy, which in turn tends to ignore, silence, or diminish the voices of those who fall outside the dominant narrative. Nothing defeats a sense of autonomy and competence like being told “you might have meant x, y, or z by your words and actions, but we find your interpretation of your own words and actions less valuable than our own interpretation of your words and actions.” Further, certain requirements of separation ask families to distort or hide their truth by either rearranging themselves or avoiding certain disclosures, or they incentivize families to misrepresent themselves. Truth-telling, meanwhile, promotes social emotional health. Research suggests that truth-telling strengthens the connection between our prefrontal cortex—our “adult” brain—and our limbic brain—our “child” brain. Truth-telling is literally good for our brains.

What if, instead of substituting our judgment about what the end of a marriage looks like or incentivizing distortion to satisfy our judgment, we simply said “okay” when a party said, “I wish to no longer be bound by marriage to this person”? What if our process reflected the reality that when couples are struggling deep in the heart of the matter about their choices—the good ones and the mistakes—they do not need or desire a judicial officer to ask them to wait or organize their lives a certain way before deciding to divorce or while undertaking the divorce? What if we could avoid forcing “efficacious resolution of economic issues and custody” to take a back seat to the timeline of divorce by decoupling these issues from the right or opportunity to divorce? We would then be free to conceptualize alternative dispute resolution options that could improve outcomes on the thorny issues of custody and economics.

C.  Prompt Administrative Divorce

This Article proposes that divorces should be administratively and promptly issued upon the filing of a request by an aggrieved party to a marriage, so that, thereafter, issues of support, equitable distribution, and custody can drive the manner and mode of adjudication or alternative dispute resolution. First, one must note that this proposal flips the script on divorce, suggesting that a pronouncement of a divorce can be disaggregated from, and precede, final resolution of matters of property, custody, and support. Currently, in all states, a divorce starts when a party files a complaint and serves it on the other, after which the parties enter a “kind of purgatory,” a “pendente lite stage.” During this time, the court holds hearings or the parties submit agreements regarding temporary decisions on parenting, support, and use of or access to certain marital property (such as homes or cars) while the case is pending. After (in some cases) protracted discovery or (in all cases) protracted waits due to court congestion, matters are calendared and heard in final hearings, or negotiated final agreements receive judicial blessing. The meat of these final hearings and agreements is the minutiae and nuance of the same issues that were handled initially and temporarily: custody, support, and property.

The embedded notion is that a party cannot be divorced until matters of custody, support, and property are squared away. But why? It is a social and legal myth that parties cannot negotiate and contract regarding support and property or negotiate and mediate about parenting children outside of the bonds of marriage. As a matter of law, it is actually possible, for example, to grant a divorce and table or calendar matters of custody for a final hearing. Socially, we know full well that many children are raised in households by unmarried parents or are raised in and by two separate households. Moreover, even in the current regime, the litigation of custody, support, and property issues often persists and survives after the dissolution of marriage via motions to modify or motions to compel.

In addition to disaggregating the divorce itself from resolution of corollary matters, this proposal combines shortened waiting periods with freedom from requirements during whatever waiting period remains. Any proposal to promptly issue divorces and do away with separate and apart requirements can find comfort in the fact that plenty of states do not have them, and the world appears to have kept on spinning. States such as Alaska, Nevada, New Hampshire, Wyoming, South Dakota, and Idaho have short waiting periods for divorce. These same states do not have involved requirements for demonstrating separation in order to qualify for the divorce. When one cross-checks “quickie” divorce states against other states, one is struck by the fact that nothing is striking. To begin, aside from Nevada, which has a distinct explanation for being an anomaly, divorce rates in these other low-bar states are on par with other states, and even Nevada is not alone in being a high-divorce-rate state. But, then again, separation periods and normative standards for periods of separation are supposedly about creating predictability for children and economic and psychological security for families. So then, surely the citizens of Alaska, Nevada, New Hampshire, Wyoming, South Dakota, and Idaho must be floundering in a state of civic and familial chaos. But no. No, they are not: measures of social services consumption, child welfare statistics, school performance data, and so forth are all unremarkable compared to other states with more stringent divorce requirements.

The question then becomes what, if anything, should the requirements be for the administrative pleadings and requests for divorce. Here again, we see examples of jurisdictions offering opportunities for “summary dissolution,” “streamlined dissolution,” or “simplified dissolution” to offer parties efficient, less public, and more cost-effective dissolution of their marriage. These processes still require judicial approval, but they do not contemplate a trial and instead invite parties to craft their own agreements. States allowing these dissolutions may impose limits on assets, requests for spousal support, or length of marriage, or they may be limited to cases in which there are no children. France has taken this approach one step further and allowed matters that can be handled summarily to move forward without judicial involvement at all. The same is true in Australia, where uncontested divorces with no children can be obtained by administrative procedure through the mail. Denmark similarly allows for an administrative procedure in uncontested cases.

An expeditious administrative process supports the goal of creating a system that respects the families using it, as well as the goal of creating predictability and security for those families. To begin, an administrative process that separates requests to divorce from requests for the court’s assistance with property, support, and custody better reflects several important realities beleaguering the family courts and harming the families who are forced to engage with the courts. Family courts are overcrowded and inefficient, and they deal with vast numbers of pro se litigants. As compared to represented parties, pro se litigants are more likely to have substantive or procedural missteps such as missed deadlines or deficient filings. As a family court judge and two practitioners put it,

This perfect storm created by a void of knowledge of procedural, substantive, and evidentiary law on the part of individuals stuck in a system to deal with unhappy, very personal, and, at times, highly conflicted matters results in an unnecessary overuse of judicial resources and a growth of the backlog in the court’s docket.

Under this new proposal, certain divorce scenarios would come off the court docket altogether, clearing room for those matters that require more time and attention. Other matters could come before the court, not automatically upon the filing of a divorce, but rather when the families’ own needs and energies direct them to file. Some parties may feel they need court intervention to understand, negotiate, and contract around their property interests, support needs, or child custody issues; some parties may not. Some parties may choose to merge and incorporate settlement agreements into court orders, while other parties may not.

Courts and legislatures recognize that the more process a law requires or inspires, the greater the delay, and that the greater the delay, the greater the agony. Social science literature—and, if we are honest with ourselves, our own lived experience—buttresses this conclusion. People do not like to wait, and waiting for an uncertain period of time, for an uncertain outcome, is the worst kind of waiting. People become agitated and irrational under these conditions; “[w]aiting in ignorance creates a feeling of powerlessness, which frequently results in visible irritation and rudeness.” The psychology of waiting is often studied in connection with customers waiting in line, so one must consider how these same psychological tendencies toward frustration and anger will be amplified when the matter at hand—a divorce—is more socially and emotionally fraught than a trip to a customer service center. Consider, for example, patients asked to wait for medical procedures due to COVID-19 protocols and barriers. These patients showed marked symptoms of mental distress as they waited to undergo their procedures. To add insult to injury, the psychic toll did not just cause suffering in the patients; it also adversely impacted their trust in the health care system. As it turns out, such findings translate to the legal system generally and to family courts specifically. Delay upon delay, with uncertainty about the divorce itself, risks exacerbating the social-emotional stress a family is under and threatens to diminish both litigants’ ability to work together and their confidence in the legal system.

Simplifying and truncating the line between wanting a divorce, filing for a divorce, and getting a divorce has advantages for almost every type of couple whom the family court sees. The law lacks teeth on the issue of divorce itself; so much of divorce law is actually about regulating or apportioning property and money among family members to support the individuals and bimodal family constellations arising out of breakdowns in the married, nuclear family. Some couples struggle financially to create separate households or face contentious divorces. Without the benefit of advocacy and assistance, separation periods are rarely productive and simply impede these couples’ full opportunity to use the resources of the court and the force of the law to plan and prepare for a new future. A prompt administrative divorce clears the way for these couples to immediately seek final resolution of the matters that will help them prepare financially and logistically for their next chapter. In contrast, some resourced and represented couples are able to negotiate agreements about property, support, and child support. These couples do not need active involvement of the court to broker agreements, but they need the court’s prompt attention to finalize them. For these resourced couples, the murkiness created by periods of separation prior to divorce can sow confusion about which holdings or debts are marital property. Clearly delineating the moment of divorce from the period of negotiation and possible adjudication regarding property and support interests clarifies which choices and conduct were “marital” and which were not. Still other couples have “simple” cases in which they own little to no property and have no children. Despite shifts in divorce law leaving the courts with little to no authority over the matter of divorce itself, these couples must queue up, adding to the clogged docket, missing work for interim court appearances, and waiting—sometimes years—for a pronouncement of what they themselves have known all along: their marriage is over. The current divorce system not only creates all these inefficiencies and delays, thus forestalling family problem-solving, but it also decentralizes the problem-solving.

Court filings automatically trigger the involvement of bureaucratic authority, which can be demoralizing or unnecessary for many families. Administrative trends reflect a jurisprudential reality that divorce seems less like a “legal matter in need of adjudication and more a private matter subject to administrative regulation.” Administrative divorce trends also mirror the parallel growth of alternative dispute resolution (“ADR”). Many states, even those without summary dissolutions on the books, allow families to use ADR such as mediation to settle their disputes before seeking judicial approval. Such practices promote parties’ self-determination over personal matters and their everyday lives. One argument is that court processes and the role of judges are often about rewarding and punishing or incentivizing and barring. These frames contemplate one winner and one loser. The frames are not comfortable or appropriate for people trying to share property equitably or contemplate some sort of partnership to raise children. Others point out that the process of re-conceptualizing family relationships in a new family system is slow, iterative, deeply personal, and ideally collaborative. The divorce process is not currently designed with the space to negotiate, grieve, try, fail, and try again. A system that reduces dockets and narrows issues before the tribunal might better foster the opportunity to design systems and processes more respectful of, and responsive to, families’ needs.

CONCLUSION

The origin story of separate and apart requirements is a legacy of the bizarre and lasting stalemate between what people were doing inside unhappy or unhealthy marriages and the technical lawful authority to divorce. Surely now we can begin to narrow that divide between law and society, because after all, the train has left the proverbial station.

About half of all Americans over the age of 18 are married, but an increasing number of them have been married before. Over the past ten years, the number of cohabiting adults over the age of 50 has increased dramatically, from 2.3 to 4 million. More than one-quarter of married men in their seventies report having had an intimate relationship with someone other than their spouse. The grey divorce rate—the divorce rate for those age 50 and older—doubled from 1990–2015, although it remains significantly lower than the rate for those under 50. Almost a third of Americans (30%) have a step- or a half-sibling. . . . As many as 5% of Americans are polyamorous, having serious intimate relationships with more than one person at the same time. Approximately 40% of children are born to unmarried mothers, but more than a third of those mothers are cohabiting at the time they give birth.

“[S]tatistics have normative as well as empirical implications.” One can see these implications playing out on our TV screens, in our neighborhoods, our schools, our work, and our social spaces. Families increasingly “do” family all sorts of ways—ways that suggest that family is a beautifully nuanced thing. Bound up in this is the reality that marriage must also be a beautifully complicated thing—or at least something that is about more than sex and meals. It is illogical to hover the magnifying glass over these issues when a party seeks to end their marriage. Moreover, this piece suggests that doing so is acutely painful and harmful to certain populations and constitutionally precarious. A simple conclusion flows from all of this: these requirements do not make sense. They require performance and permission that are neither necessary in life nor acceptable in law.

96 S. Cal. L. Rev. 77

Download

* Assistant Clinical Professor, Boston College Law School, Director of Interdisciplinary Practice and Family Law Professor. With thanks to the members of the Boston College Summer Writers’ Workshop for their comments on this project; and thanks also to Laura Robinson for her tireless enthusiasm, even for my most tedious asks; and to Karen Breda, Boston College Law School Librarian and Lecturer, who, I am convinced, could find anything anyone ever asked for.

Justice Breyer’s Friendly Legacy for Environmental Law

Environmentalists did not cheer President Bill Clinton’s decision in May 1994 to nominate then-First Circuit Judge Stephen Breyer to fill Justice Harry Blackmun’s seat on the Supreme Court. Just the opposite. Many instead expressed serious concerns about Breyer’s impact on environmental law were he to be confirmed, and openly questioned whether a Justice Breyer might be “hazardous to our health.” This Article considers whether, in light of Justice Breyer’s actual record over the past twenty-seven years on the Court, environmentalist concerns about him at the time of his nomination were realized. The Article concludes they were not. Justice Breyer was instead friendly to environmental protection concerns even if he fell shy of being an unqualified friend on the bench. In almost all of the most important environmental cases of the past twenty-seven years, he was a reliable vote joining the majority in the big cases environmentalists won—often providing the critical fifth vote. And although Justice Breyer was, on a handful of occasions, a less reliable vote in dissent when liberal justices sounded the alarm in the big cases environmentalists lost, in none of those cases was his vote dispositive of the outcome. For this reason, although environmentalist concerns at the time of Justice Breyer’s nomination were reasonable, and the views underlying them had the potential to cause the very problems environmentalists identified, those concerns proved largely insignificant in actual application. Finally, Justice Breyer’s actual record on the Court suggests the wisdom of rethinking what it means to be a “dream” justice for environmental law. Most simply put, the best Justice for environmental law may not be a Justice who always votes in favor of the outcome favored by environmentalists in individual cases.

INTRODUCTION

Environmentalists did not cheer President Bill Clinton’s decision in May 1994 to nominate then-First Circuit Judge Stephen Breyer to fill Justice Harry Blackmun’s seat on the Supreme Court. Just the opposite. While Justice Breyer had his defenders, environmentalists mostly expressed serious concerns about Justice Breyer’s impact on environmental law were he to be confirmed. And some denounced him on that ground, worrying that he might be “hazardous to our health”—a concern strikingly similar to that which had been expressed a year earlier by attorneys advising President Clinton when the President had first considered Justice Breyer for a vacancy on the Court. Yet, ironically, although those concerns had abruptly derailed Justice Breyer’s nomination only hours before its expected announcement in 1993, they became a major reason why the President chose Justice Breyer over the President’s first choice for the nomination in 1994. Senate Republican leaders threatened to wage an all-out campaign against the President’s first choice, Secretary of the Interior Bruce Babbitt, because of his reputation as an unabashed environmentalist. And that same Republican leadership promised smooth sailing if Justice Breyer were instead the nominee because Justice Breyer had expressed concern about unduly costly environmental protection requirements and therefore was perceived, unlike Babbitt, as pro-business.

Justice Breyer’s recent retirement from the Court after twenty-eight years provides an opportune moment to reflect on his legacy for environmental law based on his actual record—as reflected in the votes he has cast and the opinions he has written. More specifically, this Article considers whether Justice Breyer’s record on the Court confirms or contradicts the expectations of his supporters and detractors more than a quarter century ago.

To that end, the Article is divided into three Parts. Part I reviews the events surrounding Justice Breyer’s nomination and the role that environmental law then played in securing both his nomination and confirmation. Part I includes discussion of previously undisclosed information long buried in the official archival papers of President Clinton related to the decision not to nominate Justice Breyer in 1993. Part II reviews Justice Breyer’s record in environmental cases before the Court, with special emphasis on opinions he wrote in those cases, whether majority, concurring, or dissenting.

Part III considers whether, in light of Justice Breyer’s actual record on the Court, environmentalist concerns—shared by some advising the White House—about Justice Breyer were realized. Although those concerns, identified by White House advisors and environmentalists alike in 1993 and 1994, were understandable, they have proven largely insignificant in actual application. In the vast majority of environmental law cases heard by the Court during Justice Breyer’s tenure, Justice Breyer both displayed heightened sensitivity to environmental protection concerns and voted in a manner sympathetic to environmentalists, without expressing any concern about environmental protection requirements being too demanding. And, in those relatively few cases in which his concern with unduly stringent environmental law was relevant to a legal issue’s resolution, Justice Breyer’s views made no difference to the outcome of the case; nor did he write an opinion of the Court in any of those cases, at most writing a separate concurring opinion of no binding legal effect. In only one relatively unimportant case, one that defied categorization along liberal or conservative ideological lines, did he ever supply the decisive fifth vote against environmentalists, likely because the Court has been consistently dominated by at least five more conservative Justices ever since he joined the bench, so his vote was rarely needed to rule against environmentalists.

I.  WHITE HOUSE ADVISOR CONCERNS AND ENVIRONMENTALIST OPPOSITION TO JUSTICE BREYER’S NOMINATION

President Clinton’s decision to nominate then–First Circuit Judge Breyer to fill Justice Harry Blackmun’s seat in May 1994 was remarkable because only one year earlier, the President had decided at the last minute not to nominate Judge Breyer to fill Justice Byron White’s seat on the Court. In June 1993, Judge Breyer had been the expected nominee, bolstered by the strong support of Massachusetts Senator Ted Kennedy. But in what was dubbed by the New York Times as “A Surprise Choice,” Clinton instead tapped D.C. Circuit Judge Ruth Bader Ginsburg to fill the opening.

According to press accounts at that time, Judge Breyer’s nomination stumbled in the final day and hours because of reports that he had failed to pay Social Security taxes on the wages of a part-time housekeeper. Judge Breyer’s problem was an especially sensitive one for the White House because the President had recently suffered repeated embarrassment when his two successive choices to serve as the first female Attorney General were abandoned because of problems with their domestic household employees: the first had reportedly failed to pay taxes for a childcare provider, and the other had failed to inform the White House that she had once hired an illegal alien as a household employee (although, in her defense, that hiring was not itself unlawful when it occurred). And, as much as the Clinton White House sought to distinguish Judge Breyer’s circumstances, the specter of excusing, in a man, conduct that resembled what had only recently derailed two female cabinet picks apparently cratered Judge Breyer’s nomination. Judge Breyer also reportedly interviewed poorly with the President not long before Ginsburg was announced—which Judge Breyer’s supporters blamed on painkillers he was taking while recovering from a serious bicycle accident for which he had recently been hospitalized.

But what was not reported at the time is that White House concerns about Judge Breyer were also substantive in nature. In early June 1993, White House Counsel Bernie Nussbaum asked Joel Klein, a highly skilled and trusted D.C. private sector attorney, to conduct a confidential review of both Judges Ginsburg and Breyer to assist the President’s decision. Over the course of just a few days, Klein orchestrated detailed reviews of the record of both judges by forty lawyers at six law firms. And Klein provided Nussbaum a comparative analysis of the two judges in memos dated June 10 and June 11—precisely the time period when President Clinton pivoted away from Judge Breyer in favor of Judge Ginsburg, whose Senate confirmation Klein then championed in his new role as Deputy White House Counsel.

The reviews were devastating to Judge Breyer’s prospects. As summarized by Klein to Nussbaum, “Judge Ginsburg more closely meets the President’s articulated standards for the Supreme Court than does Judge Breyer.” Her “work has more of the humanity that the President highly values and fewer of the negative aspects that will cause concern among some constituencies.” “She has written more, and consistently, about the human condition and the plight of the disadvantaged, and she has done so with obvious conviction and commitment.” In a draft memo to Nussbaum on Judge Breyer, while describing Judge Breyer as “a brilliant jurist” with “the potential to rank with the most distinguished judges in our past,” Klein also described the “dispassionate” nature of his writing and how “he does not wear his heart on his sleeve.” Klein added that Judge Breyer’s views on government regulation, such as environmental risk regulation, were “conservative” and that, in a recent book he had authored on risk regulation, he had proposed “a government-wide cost/benefit approach” akin to what Republicans then favored and to the regulatory reforms supported by the prior presidential administration.

Those attorneys who reviewed Judge Breyer’s writings for Klein stressed Breyer’s apparent lack of sensitivity to the human stakes of economic regulation and environmental protection requirements. In one memo dated June 8, and sent to Klein on June 9, two reviewers described the “bloodlessness” evidenced by Judge Breyer’s penchant for “analyz[ing] every problem he is considering within a framework [so] bounded by economic theory or rules of logic that the result seems devoid of emotion and even . . . humanity.” The reviewers harshly contrasted Breyer’s writings with those of Bruce Babbitt—the candidate favored by environmentalists who was then serving as Secretary of the Interior—which were described as “lucid, exuberant and wide-ranging.” A second memo, prepared on June 7, similarly described Judge Breyer as a “cold fish,” “bring[ing] no passion or insight” and as so “lack[ing] of vigor in his jurisprudence that one suspects he does not have (or refuses to utilize) any innate sense of justice.” The memo concluded that Judge Breyer was “certainly a judicial conservative” and “[c]onservatives will be thrilled if Judge Breyer is appointed. . . . Nothing in Judge Breyer’s opinions suggests that he would be a great Supreme Court Justice.” “In no way is he a ‘man of the people,’ as some other candidates have been.”

These concerns had a clear impact. Then-Associate White House Counsel Ron Klain made an explicit reference to the concern in his June 11, 1993, memorandum to White House Counsel that listed the questions that the President could ask Judge Breyer in their interview. One question asked Judge Breyer to respond to the claim that his “writings suggest an over-emphasis on economics: putting a cost on lives, for example.” Another question more pointedly asked him to “respond to the criticism that his opinions are ‘bloodless.’ ” President Clinton’s interview with Judge Breyer that same day did not go well, and a few hours later, Klein called Judge Ginsburg to let her know she should be available for a meeting with President Clinton. She met with the President two days later, Sunday, June 13, and the President called her later that same night to offer her the job.

What resurrected Judge Breyer’s prospects a year later and secured his nomination by President Clinton was the President’s desire to avoid a Senate confirmation battle. Clinton’s apparent first choice in 1994, as it had been in 1993, was Bruce Babbitt. Liberals in the Democratic Party strongly endorsed Babbitt in 1994, as did environmentalists, because of his progressive views and his championing of environmental protection causes both as Governor of Arizona and as Interior Secretary. Indeed, environmentalists had a year earlier been so enthusiastic about Babbitt that some had actually opposed his nomination to the Court to replace Justice White because they did not want to lose his leadership at Interior—in retrospect a decision they may well regret.

But it was that same environmentalist enthusiasm for Babbitt that ended up sinking his possible nomination to replace Justice Blackmun in 1994. When the White House let it leak to the news media that the President had settled on Babbitt and would announce his nomination shortly, Republican Senate leadership preemptively announced that they would vigorously oppose Babbitt—because of his reputation as an ardent environmentalist. Simultaneously, the Republican Senate minority leader, Senator Bob Dole, predicted “smooth sailing” were President Clinton to nominate Judge Breyer instead. The President blinked, and Judge Breyer became the nominee. Judge Breyer also overcame the doubts Clinton had formed upon meeting him the summer before, namely that he lacked energy, by literally taking a run with the President along the National Mall to establish his physical stamina for the job. The White House’s concerns of a year earlier, regarding his lack of humanity and his affinity for regulatory reform, apparently disappeared—these qualities had been transformed from a political liability into a political virtue.

Most liberal Democrats in Congress muted their displeasure with the Judge Breyer choice, presumably to avoid breaking publicly with their own President, following twelve years of Republican administrations. But some of the more progressive Democrats and hardcore environmentalists did not shy away from sharply criticizing the nominee, revealing their obvious frustration that the President had let pass a potentially historic opportunity to have an acclaimed environmentalist join the High Court.

The focus of their criticism was a theme common to Judge Breyer’s scholarship, work experience, and judicial opinions: support for reform of excessively burdensome regulations. These were the same concerns that advisors to the White House had stressed in 1993. As counsel to the Senate Judiciary Committee, on leave from Harvard Law School in the late 1970s, Breyer had worked effectively in a bipartisan fashion with both Democrats and Republicans in support of legislation that deregulated the airline industry. He favored such deregulation on the ground that regulation imposed unnecessary costs on industry and impeded the operation of free market forces that could, on their own, lead to better products and services at lower prices than burdensome government regulation might achieve.

Indeed, Breyer had so impressed Senate Republicans with his support of regulatory reform that they endorsed President Carter’s nomination of Breyer to serve on the First Circuit even though that nomination occurred on November 13, 1980—a time when a nomination for a life-tenured position should have been dead on arrival in Congress. After all, the date of the nomination was only nine days after Carter had lost the Presidency to Ronald Reagan and the Democrats had lost the Senate to the Republicans. Confirming Breyer to the First Circuit during a congressional lame-duck session would accordingly mean the elimination of an important federal appellate court vacancy that would otherwise have been available for a Republican President and a Republican Senate to fill a couple of months later. Yet, because of Senator Kennedy’s clout and significant Republican leadership support for Breyer, rooted in his work on regulatory reform as a Senate staffer, Breyer was confirmed as a federal appellate judge less than a month later in December 1980.

As an appellate judge, moreover, Judge Breyer continued to be a proponent of regulatory reform, including for environmental protection rules. In both his judicial rulings and his extra-judicial writings, Judge Breyer expressed concern about the possible harm caused by irrational environmental regulations with compliance costs that far exceeded their benefits.

One of Judge Breyer’s most prominent opinions for the First Circuit was United States v. Ottati & Goss, in which the court upheld a trial court ruling that had rejected the proposed remedy by the Environmental Protection Agency (“EPA”) to clean up a hazardous waste site on the ground that the public health benefits did not warrant the high cleanup costs. The ruling was remarkable at the time because it was so unusual for a court not to defer to EPA’s judgment about the extent of cleanup needed to reduce risks from hazardous wastes. Although the force of Judge Breyer’s opinion for the First Circuit was a bit muted because the appellate court was simply affirming the trial court’s ruling against EPA—concluding that “[w]e cannot say that the district court was ‘clearly erroneous’ or unreasonable”—his opinion seemed to join the lower court in ridiculing EPA’s decision to base its cleanup remedy on the proposition that a child would eat contaminated soil over a sustained period of time. And, while declining to impose sanctions on EPA, Judge Breyer’s opinion gratuitously took the occasion to “wonder about the government’s priorities in the face of other, apparently more serious, environmental demands for ‘cleanup’ time and effort.”

Judge Breyer, moreover, did far more than just author the opinion. Outside of his judicial role, he trumpeted its policy themes regarding how environmental risks should be regulated. He used the Ottati & Goss case as the basis for his 1992 Oliver Wendell Holmes Lectures at Harvard Law School, which he then published the following year as a book entitled Breaking the Vicious Circle: Toward Effective Risk Regulation. In that book, Judge Breyer identified why and how government regulation of risks, including environmental risks, had a tendency to require excessive expenditures to reduce the “last ten percent” of risks, here too referring to the facts of the Ottati & Goss case as an illustrative example. Judge Breyer more broadly posed the question whether the acceptable level of environmental risk was best determined by a political process vulnerable to accommodating the public’s tendency to overreact to environmental risks. And, answering his own question, Judge Breyer concluded that such public policy questions were better answered by expert, technical agencies removed from the pressure of politics and popular opinion.

What simultaneously made Judge Breyer’s book so popular with the Republican Party and regulated business and so unpopular with Democratic Party progressives and environmentalists was its embrace of the rhetoric of regulatory reform. Regulatory reform had in fact been an express and significant part of the agenda of the administration of President Jimmy Carter and EPA when Judge Breyer first endorsed it in his work at the Senate in 1980. Regulatory reform then was a more benign political issue, and it enjoyed support on both sides of the political aisle. In March 1978, President Carter issued an executive order, developed in part by EPA leadership, that sought to “reform” the process for developing “significant regulations” in order to eliminate regulations that “impose unnecessary burdens on the economy.” Political appointees at EPA during the Carter administration favored opportunities to employ economic analysis to ensure that the Agency was directing its limited resources to the most serious environmental issues and taking advantage of market incentives to reduce pollution in general. The Reagan administration announced from the outset that it would similarly champion a regulatory reform agenda that took more account of the costs of environmental protection.

But by 1993, the term “regulatory reform” had become a highly partisan one, tainted by efforts during three Republican administrations to cut back on environmental protections under the guise of cost-benefit analysis and economic efficiency. Reagan administration officials at the Office of Management and Budget and EPA used the rhetoric of regulatory reform and cost-benefit analysis but in a wholly skewed fashion to justify deregulation based on exaggerated estimates of regulatory costs coupled with underassessments of the benefits of environmental protection. Environmentalists vehemently opposed those efforts. Even prominent supporters of President Reagan openly commented at the time that his environmental appointees had so bungled the regulatory reform effort that they had undermined it. The President’s own chair of the Council of Economic Advisors, a stalwart champion of regulatory reform, publicly declared: “We will be lucky if by January 1985, we are back where we were in 1981 in terms of the public’s attitude toward” regulatory reform.

That is why Judge Breyer’s promotion of regulatory reform rhetoric in his 1993 book set off alarm bells throughout the environmental community, and among those reviewing his writings for the White House Counsel in 1993, to a degree that would not have happened in the late 1970s during the Carter administration. But, in light of how much the political debate had shifted since 1980, when Breyer was working on deregulation for the Senate Judiciary Committee, his 1993 publication was either politically tone-deaf or deliberately designed to position Judge Breyer for promotion as a justice with bipartisan support. With Judge Breyer, the former is a distinct possibility. Whatever the actual motivation, however, the publication of Judge Breyer’s book coincided with the openings of two seats on the Supreme Court in successive years and played a central role in whether he would be nominated and confirmed.

Republicans and the business community became his cheerleaders while environmentalists expressed serious concerns. The latter’s criticism could be scathing. “[I]t is clear that [Judge Breyer] is no fan of health and environmental regulation.” If he “had been a member of Congress, he would not have supported many of the current health and environmental statutes.” They accused him of blithely accepting the economic analysis of right-wing think tanks to belittle the risks addressed by government regulation, minimizing “the risks posed by toxic chemicals in the environment,” “reject[ing] a policy of erring on the side of safety . . . because it leads society to spend too many dollars chasing after what he believes to be trivial risks,” failing to recognize the limits of cost-benefit analysis, and of “even accept[ing] the highly dubious ‘richer is safer’ argument against stringent regulation of activities that pose health and safety risks.”

Consumer advocate Ralph Nader pulled no punches in testifying against Judge Breyer’s confirmation. Nader described Judge Breyer as an “extremist.” According to Nader, Judge Breyer was “ridden with fantasy” and “insensitive on the ground to the health and safety needs of the American people.” Judge Breyer, Nader concluded, “appears to seriously question many health and safety laws that he will be expected to interpret impartially as a Justice of the Supreme Court.”

There were, of course, supporters too. The renowned scholar Professor Cass Sunstein, a close professional colleague of Judge Breyer, casebook co-author, and the leading legal scholar in support of the central role of cost-benefit analysis for rational regulation, testified in favor of Judge Breyer’s nomination. More moderate legal academics contended that Judge Breyer would be a “friend” to environmental law and that, even though “in sheer numbers, his rulings against environmental groups probably exceed his rulings in their favor”—only because they lose most of their cases—the judge “ha[d] shown a sensitivity and appreciation for environmental issues.”

Only one environmental public interest organization affirmatively supported Judge Breyer’s nomination—the Conservation Law Foundation, headquartered in Massachusetts—perhaps because of his favorable ruling in a case it had brought to clean up Boston Harbor, because of geographic allegiance to Judge Breyer, or, perhaps even more likely, because of institutional loyalty to his principal political sponsor, Massachusetts Senator Ted Kennedy. But even that organization’s letter was noticeably understated. It was only two paragraphs long and addressed merely “To Whom It May Concern.” The most the organization’s director could muster in his opening sentence was that “Stephen Breyer has fashioned a remarkable record on environmental matters that have come before the First Circuit Court of Appeals.” The word “remarkable” is, of course, itself remarkable for what it does not say in a letter that purports to be an endorsement.

During his own Senate testimony, Judge Breyer plainly sought to assuage concerns by walking back the deregulatory import of some of his writings. While conservative Republican Senator Strom Thurmond stressed how he “was pleased to learn of [Judge Breyer’s] concerns with excessive regulation,” Judge Breyer asserted that the “role of economics” was necessarily “much more limited” in application to “health, safety, and the environment . . . because, there, no one would think that economics is going to tell you how [much] you ought to spend helping the life of another person.” He also characterized the book as “a plea . . . not to cut back by 1 penny this Nation’s commitment to health, safety, and the environment” but only to “reorganiz[e] that commitment” to ensure that money was spent on saving real lives rather than “on the statistical life that might not exist.”

The Senate voted overwhelmingly in favor of Judge Breyer’s confirmation to join the Court. Eighty-seven senators voted in favor, only nine were opposed, and four did not vote. And the only opposing votes came from a smattering of conservative Senate Republicans. No liberal Democrats challenged their own President’s nominee, notwithstanding any misgivings they might have harbored.

II.  JUSTICE BREYER’S RECORD IN ENVIRONMENTAL LAW CASES BEFORE THE COURT

Justice Breyer’s environmental law record consists of his votes in individual cases and his written opinions in a subset of those cases. During the past twenty-eight years on the Supreme Court, Justice Breyer has written more than five hundred opinions: (1) about two hundred opinions for the Court; (2) approximately one hundred concurring opinions; (3) just shy of two hundred dissenting opinions; and (4) approximately thirty opinions dissenting and concurring in part.

The Justice has written relatively few opinions in environmental cases, mostly for the straightforward reason that the Court does not decide that many environmental cases. Environmental law is, at least numerically, a relatively small part of the Court’s docket, at least so long as one defines “environmental law” narrowly, as I do for purposes of this inquiry.

My narrower approach considers only those cases that arise in a fact pattern in which environmental protection concerns are at stake and those stakes are not wholly incidental to the legal issue raised. That definition sweeps in both cases involving the construction and application of classic environmental laws like the National Environmental Policy Act and cases involving general, cross-cutting legal issues, such as the scope of congressional commerce authority, arising in a setting where environmental protection is at stake. A broader, and perfectly fair, contrary approach would be to consider all cases that raise legal issues that, although not arising in an environmental protection setting in the case then before the Court, are likely to have significant implications in future cases that do. Consistent, however, with my belief that there is a discernible “environmental” dimension to environmental law that is important for judges (and Justices) to consider, this Article relies on the narrower definition instead.

Based on that narrower definition, I have identified sixty-three “environmental law cases” decided by the Court during Justice Breyer’s tenure; he participated in sixty-one of them, having recused himself from the two cases in which his brother, also a federal judge, had participated in the lower courts. This subset of cases fulfilled two conditions: (1) each case raised legal issues in the environmental protection context; and (2) that context was not wholly incidental to the legal issue being considered by the Court. For instance, I excluded from my sample a case like Alaska v. United States, a 2005 original action case in which the State of Alaska and the United States were disputing ownership over certain submerged lands in Glacier Bay. I also excluded the Court’s recent decision in BP P.L.C. v. Mayor of Baltimore, concerning the scope of judicial review of a district court ruling not to allow removal of a state court case to federal court. In neither of those cases, nor in those like them, do the environmental stakes play any non-incidental role in the Court’s resolution of the legal issue to be decided. But I included cases like Article III standing and regulatory takings cases because, as I have elaborated in previous scholarship, the environmental dimension of those cases should bear on the application of the relevant legal standards even if, as described below, individual Justices and sometimes a majority of the Justices too often fail to grasp its relevance.

A.  Justice Breyer’s Votes

In 2000, I published an article that tried to develop a rough quantitative basis for comparing how individual Justices voted in environmental law cases and for assessing whether certain Justices and the Court as a whole were more or less responsive to the need for environmental protection. The article argued more broadly in support of the thesis that there was a uniquely “environmental” dimension to environmental law relevant to judicial decision making—for instance, how such concerns might provide a proper basis for rethinking what constitutes a “concrete injury” for Article III standing purposes, a “property right” in natural resources for regulatory takings purposes, an “intelligible principle” for nondelegation doctrine purposes, an “economic activity” in Commerce Clause analysis, or the degree of judicial deference owed a federal agency in both technical assessments and statutory interpretations. While concluding that the Court overall had displayed “apparent apathy or even antipathy towards environmental law,” I found that some individual Justices had shown more sensitivity than others to how environmental protection concerns could be relevant to how the legal issues before the Court should best be decided.

My 2000 analysis relied on a scoring system somewhat analogous to that employed by the League of Conservation Voters in scoring members of Congress on environmental matters, but now applied to each Justice. A Justice was awarded one point for each outcome that I classified as “pro-environmental protection,” and each Justice then received an “EP score” equal to the percentage of pro-environmental votes the Justice cast out of the environmental cases in the sample in which that Justice participated; a Justice who voted for the pro-environmental outcome in two of every three such cases, for example, would receive a score of roughly 66.7. Although the exercise was nominally quantitative in its ultimate yield, I freely admitted at the time the “inevitabl[e] arbitrariness and sometimes downright foolishness in attempting any such ‘pro’ or ‘anti’ policy assignments to Supreme Court rulings, especially assignments that purport to be binary in nature.”

The problems are obvious. First, I defined as “pro-environmental” the legal position favored by environmentalists in each case. An environmental advocate, however, trying to win a particular case may in fact be making an argument that leads to a win in that case but to losses in other future environmental cases. For instance, the advocate may be arguing against deferring to an expert agency’s judgment because, in the case before the Justices, an argument against such deference may be needed to secure a win. But if the advocate prevails in that case, the precedent established may cause environmentalists in the future to lose far more than they win if it turns out that judicial deference to agency expertise is more advantageous to environmental protection concerns over the longer term.

Second, the legal position favored by environmentalists in a particular case may be very weak on the merits and warrant rejection. There is no necessary correlation between a legal argument favoring environmental protection and its being a meritorious argument. Not every argument in favor of environmental protection is necessarily a strong legal argument that a judge or Justice should accept.

Indeed, there is good reason to worry that the Justices tend to hear cases in which the legal arguments favoring environmental protection are disproportionately weaker. As I have detailed elsewhere, the Court’s decision-making at the jurisdictional stage for most of the past fifty years that define the modern environmental law era has been skewed against environmentalists. The vast majority of the cases in which the Court has granted review are cases in which the position favored by environmentalists prevailed in the courts below. The Court has taken relatively few cases in which the environmentalists lost and then sought the Court’s review, especially those in which the federal government was the prevailing party. The Court has, in effect, cherry-picked the cases in which environmentalists may have won based on potentially weak arguments while not being similarly ready to review cases in which business interests have won on weak grounds.

I accordingly warned in 2000 against drawing any conclusions based on EP scores apart from those at the two extremes—either very high or very low. Because of the obvious limits to such scoring, only such extreme discrepancies in scores might offer a fair basis for positing that the Justice in question was more or less “likely to rule in favor of or against an environmentally protective outcome because of that outcome’s environmental dimension.” Those same limits are also a reason not to be surprised when even the highest EP score is not that high—the potential result of a skewed merits docket.

In 2000, Justice Breyer had served on the Court for only six years, and his EP score of 66.6 at that point was in fact one of the highest of those then on the Court. It far exceeded Justice Scalia’s strikingly low score of 13.8 and the scores of Justices Thomas (20.0) and Kennedy (25.9). But, of course, his score came nowhere close to that of Justice Douglas, who retired from the Court in 1975. An environmentalist hero and former member of the Board of the Sierra Club, Douglas boasted a score of 100—apparently no matter the legal issue presented, and perhaps even the relative strength of the competing arguments, Douglas always voted in favor of the outcome supported by environmentalists. Justice Breyer’s EP score of 66.6 was also higher than those of Justices Brennan (58.3), Marshall (61.3), Blackmun (58.3), Stevens (50.6), Souter (57.1), and Ginsburg (63.6), but not to any significant extent, especially because Justices Breyer and Ginsburg had both served for far fewer years than Justices Brennan, Marshall, and Stevens, and their scores therefore reflected very different sets of cases. Justices Brennan, Marshall, and Stevens were accordingly being measured based on cases in which it might have been harder on the merits to vote for the side favored by environmentalists.

Two decades later, the number of environmental cases in which Justice Breyer has participated has naturally risen, and, interestingly, his new EP score (62.3) is essentially the same as before and comparable to those of the two other Justices with high scores, Justices Sotomayor (64) and Kagan (68). Yet Justice Breyer’s score is sufficiently higher than those of former Justices Scalia (23.4) and Kennedy (36.0) and of current members such as Chief Justice Roberts (20), Justice Thomas (20.6), and Justice Alito (10.5), for their overlapping cases since 1994, to suggest significant differences in the application of law to environmental protection. Justice Alito’s score of 10.5 is astoundingly low. The only cases in which Justice Alito voted on the side supported by environmentalists were four cases that the Court decided unanimously in their favor.

On the other end of the spectrum, although the 2000 scores of former Justices Stevens, Souter, and Ginsburg were a tad lower than Justice Breyer’s at that time, all of their scores ended up higher—Stevens (78.4), Souter (80.6), and Ginsburg (71.9)—than Justice Breyer’s for the cases on which they overlapped while serving on the Court. The gap between Justice Breyer and Justices Stevens and Souter is not especially significant, but it is arguably enough to suggest a potential difference in their respective willingness to consider how the environmental protection dimension of a case might be relevant to the resolution of the legal issue before the Court, and, for Justices Stevens and Souter, to do so in favor of a more protective outcome.

Finally, not all environmental cases are, of course, equally important; some are far more significant in terms of their import for environmental protection, and individual votes matter more in some cases than in others. Whether EPA has authority to regulate greenhouse gas emissions under the Clean Air Act is clearly more important than whether a certain river in Alaska is “public land” for the purposes of the Alaska National Interest Lands Conservation Act. And, because in some of those more important cases the vote was also closely divided, the vote of each Justice in the majority was outcome-determinative. In that distinct respect, the individual vote of any single Justice in a five-Justice majority is more significant.

Based on these criteria, Justice Breyer’s votes in several cases were especially significant, including Alaska Department of Environmental Conservation v. EPA, upholding the EPA’s authority to override Alaska’s issuance of a permit under the Clean Air Act; Kelo v. City of New London, sustaining a local government’s exercise of its eminent domain power to condemn residential property to promote commercial development; Massachusetts v. EPA, both upholding environmental-plaintiff standing and rejecting the EPA’s claim that it lacked authority to regulate greenhouse gas emissions under the Clean Air Act; and Murr v. Wisconsin, rejecting a regulatory takings claim against a local environmental restriction on residential development. Those are all, moreover, cases environmentalists won.

By contrast, among the sixty-one environmental law cases in which Justice Breyer participated, there was only one case environmentalists lost in which he provided the critical vote against their position. Justice Breyer voted against the legal outcome favored by environmentalists on twenty-three occasions. In eleven of those cases, the Court ruled unanimously, and in three others the vote was eight to one against the environmentalist position. Justice Breyer supplied a sixth or seventh vote for the majority in six cases and dissented in two others. That leaves a single case, the only one during the past twenty-eight years in which Justice Breyer’s vote was outcome-determinative in a case environmentalists lost: the Court’s ruling in June 2021 that the condemnation authority provided by the federal Natural Gas Act to recipients of a Federal Energy Regulatory Commission certificate of public convenience and necessity extended to the right to acquire state-owned property. Interestingly, the five-Justice majority was an unusual one, consisting of Justice Breyer, Chief Justice Roberts, who authored the Court’s opinion, and Justices Alito, Sotomayor, and Kavanaugh.

B.  Justice Breyer’s Opinions in Environmental Cases

Justice Breyer has written opinions in nineteen environmental cases, a disproportionately large share of the sixty-one environmental cases in which he has participated. He has written three majority opinions for the Court, six concurring opinions, and ten opinions dissenting either in full or in part. Although Justice Breyer’s majority opinions are clearly the most significant because they alone announce binding legal precedent, the concurring and dissenting opinions may well be the most personally revealing because they largely resulted from the Justice’s own decision to write an opinion expressing his views rather than, as with majority opinions, an assignment from the senior Justice in the majority to write the official opinion of the Court. A majority opinion, however, can still very much reflect the priorities and values of its author, especially in whether the Justice chooses to write the opinion narrowly and tries to attract as many votes as possible or instead drafts the opinion in as sweeping a way as possible consistent with maintaining the bare minimum of five votes required for a majority.

1.  Justice Breyer’s Majority Opinions

Justice Breyer wrote the majority opinions in Ohio Forestry Association v. Sierra Club, Public Lands Council v. Babbitt, and County of Maui v. Hawaii Wildlife Fund. None is a headliner. Nor is that at all surprising, given that Justice Breyer remained the most junior Justice for his first twelve years on the Court, which does not lend itself to especially high-profile opinion assignments from his more senior colleagues. That status is also likely why it was not until 2019 that Chief Justice Roberts assigned Justice Breyer a moderately more important environmental case, County of Maui, though still far short of a blockbuster.

All three Court opinions by Justice Breyer evidence his essential pragmatism, a catchword that the White House promoted when he was nominated and that was accordingly captured in the first New York Times headline announcing his nomination. His pragmatism was similarly the theme of favorable testimony provided before Congress by one of his leading academic supporters.

Ohio Forestry is a classic opinion assigned to a junior Justice. Indeed, it might well be classified as one of the “dogs” of the docket that Term, a term of art the Justices use informally in referring to the kind of case no Justice has any particular interest in writing. At issue was whether the Sierra Club’s challenge to the Forest Service’s plan for managing the Wayne National Forest in Ohio was justiciable. The Court ruled unanimously that the lawsuit was not ripe for review on the ground that the plan did not itself create any adverse effects of a “strictly legal kind” because it did not purport to authorize any particular action within the forest. It would be far more sensible, Justice Breyer’s opinion for the Court reasoned, to wait until “the Plan is implemented” which would allow the reviewing court to benefit from “further factual development” of the issues.

Justice Breyer’s opinion for the Court evidences significant sensitivity to the administrative preferences of the federal agency and to the resources of the federal judiciary. The ruling that the case was not ripe emphasizes how allowing the Sierra Club’s lawsuit to proceed would have “require[d] time-consuming judicial consideration of the details of an elaborate, technically based plan . . . without benefit of the focus that a particular logging proposal could provide.” There is obvious force to the Court’s concern. But the opinion evidences no comparable consideration of the litigation resource challenges that a public interest organization like Sierra Club faces in trying to oversee a series of site-specific logging proposals over time. What the Court posits as the better approach may well be better in theory, but the burden of such constant site-specific oversight may, as a practical matter, preclude any meaningful review of future logging decisions in a national forest.

To that same end, the Court declined to consider a series of other ways that the Forest Plan could immediately harm the Sierra Club and its members. The Court reasoned that Sierra Club’s argument “suffer[ed] from the legally fatal problem that it ma[de] its first appearance [before the] Court in the briefs on the merits.” That is a fair point, and the Court is not at all out of bounds in strictly applying administrative law exhaustion principles in denying consideration of Sierra Club’s argument. Yet, here too, the ruling ignores the practical limits on a resource-strapped public interest organization’s ability to maintain lawsuits in an effort to ensure other unrepresented interests are given voice. An organization like the Sierra Club is hard-pressed to monitor all the site-specific decisions that Forest Service personnel are making on a daily basis throughout a national forest. For this reason, the Club’s only practical recourse may be to persuade a court, as it tried unsuccessfully to do in the Ohio Forestry case, to establish some guidelines for the exercise of Forest Service personnel discretion in the future. And the Court’s lack of sensitivity to that practical limitation contrasts unfavorably with the many ways that the Justices, in my experience both litigating for and against the United States, routinely allow the federal government to raise new arguments and bring to the Court’s attention new facts not considered below, because of the Justices’ awareness of the practical limits on the government’s ability to oversee all of its lower court litigation.

The Court’s providing such practical flexibility to the United States makes great sense. Otherwise, the Court would be making significant pronouncements of law affecting the country based on incomplete arguments and flawed factual assumptions. And, given the thousands of cases the federal government handles in the lower courts, it is exceedingly limited in its ability to ensure that all the best arguments are made in the timeliest manner. The Court, however, could demonstrate some sensitivity to the practical needs of environmental citizen suit litigants too. Justice Breyer’s opinion for the unanimous Court in Ohio Forestry evidences no such awareness of the problem.

Public Lands Council v. Babbitt was a logical sequel to Ohio Forestry. Again, Chief Justice William Rehnquist assigned the junior Justice Breyer the task of writing an opinion for a unanimous Court in another public lands administrative law case that was likely of little, if any, interest to the Justices. The major difference was that, rather than environmental plaintiffs challenging the Forest Service’s management of a national forest, it was commercial livestock interests challenging the Bureau of Land Management’s administration of grazing permits on public lands.

The basic result was the same. The Court concluded that the federal agency’s regulations governing the issuance of permits were valid under the relevant statutory language and that the commercial plaintiffs’ concerns that they might be harmed in how those regulations were applied in the future were largely premature. The plaintiffs should instead wait, not unlike the environmental plaintiffs in Ohio Forestry, until the federal agency actually applied the regulations in a specific factual context that harmed them. While the reasoning is similar in tone, there is still a significant practical difference between the two cases: a commercial party subject to a grazing regulation will naturally know as soon as such harm occurs, which is not true of an environmental organization striving to learn of every possible site-specific decision to allow logging or other potentially harmful activity within a very large area of land such as a national forest.

It took twenty more years for Justice Breyer to write a third opinion for the Court in an environmental case, County of Maui v. Hawaii Wildlife Fund. And, reflecting his more senior status by that time, the case is far more significant than either Ohio Forestry or Public Lands Council. It is not a mere unanimous toss-off. County of Maui instead presents a rather thorny and important question of statutory interpretation under the Clean Water Act—the type of question that nicely lends itself to Justice Breyer’s proclivity to pragmatic solutions.

The precise legal issue raised in County of Maui concerns how direct or indirect an addition of pollutants must be to constitute a “discharge” of a pollutant into navigable waters requiring a pollution permit under the Clean Water Act. In County of Maui itself, the municipal sewage treatment facility seeking to avoid the permit requirement injected contaminated water into a well located about a half mile from the Pacific Ocean—but the discharge naturally reached the Pacific within a few months through the groundwater. The facility contended that no permit was required unless the pollutants were directly introduced into a navigable water body like the Pacific, meaning that the pollution was exempted from the Clean Water Act permit requirement if it travelled even just a few inches through groundwater or over the surface land before reaching the ocean. The EPA agreed that any travel through groundwater placed the addition of pollutants outside the Clean Water Act, but took the position that whether travel over surface land did so would depend on a more contextual analysis of directness.

In rejecting both those limits, Justice Breyer’s opinion for a six-Justice majority held that a Clean Water Act permit was required “when there is a discharge from a point source directly into navigable waters or when there is the functional equivalent of a direct discharge.” The Court’s ruling displays Justice Breyer’s willingness to embrace a nuanced, and ultimately vague, legal test—such as “functional equivalence”—when he believes clearer legal rules fail to account for all the factors that should be relevant in solving a problem. The Court’s “functional equivalence” test rejects any hard-and-fast lines for when an addition of a pollutant is too indirect in favor of a multi-factor inquiry. The opinion candidly acknowledges that there are “too many potentially relevant factors applicable to factually different cases for this Court to use more specific language,” while both highlighting seven relevant factors and underscoring that “time and distance will be the most important factors in most cases, but not necessarily every case.” Interestingly, when Justice Breyer raised the possibility of such a test at oral argument, the Chief Justice expressed confusion about what a “functional equivalence” test would mean, yet he nonetheless subsequently chose Justice Breyer to write the Court’s opinion.

The County of Maui ruling is significant for environmental law. Although the Court nominally vacated the lower court’s judgment favorable to the environmental plaintiffs and remanded the case to that court for reconsideration in light of its ruling, the functional equivalent test amounted to a clear win for the plaintiffs. They will do well under that test, as will environmental plaintiffs in a host of cases across the country who have brought Clean Water Act citizen suits against sources that discharge to navigable waters in a proximate but still indirect way, through groundwater and over land. For instance, in the immediate aftermath of the County of Maui ruling, environmentalists targeted leakage of coal ash into navigable water bodies from power plants. Although County of Maui does not rise to the front page headline status of a case like Friends of the Earth v. Laidlaw, expanding Article III standing for environmental citizen suit plaintiffs, or Massachusetts v. EPA, establishing the EPA’s authority to regulate greenhouse gases under the Clean Air Act, the case will make a big difference in application to a wide range of factual circumstances and represents an increasingly rare environmentalist victory as the Court’s own bench becomes more conservative.

2.  Justice Breyer’s Concurring Opinions

As described above, Justice Breyer’s separate opinions are even more revealing because, unlike majority opinions that are assigned by the most senior Justice in the majority, one can be more confident that the Justice writing separately is expressing their own views. Justice Breyer wrote six concurring opinions, four of which both address significant legal issues and relate directly to the concerns raised by environmentalists when Justice Breyer was nominated. In each, Justice Breyer expressed views that promoted the very regulatory reform themes antithetical to many environmentalists. His doing so each time in a concurring opinion makes clear that these themes remained very important to him, just as environmentalists had feared at the time of his nomination.

First, in General Electric v. Joiner, decided in 1997, Justice Breyer wrote separately while also joining the majority ruling that upheld the trial court’s decision to exclude from jury consideration expert testimony proffered to demonstrate a link between the plaintiffs’ exposure to polychlorinated biphenyls (“PCBs”) and small-cell lung cancer. In his separate opinion, Justice Breyer stressed that “modern life, including good health and economic well-being, depends upon the use of artificial or manufactured substances,” presumably alluding to a chemical like PCBs, and the need for judges to use their gatekeeping authority to ensure that tort liability did not effectively “destroy” the “wrong” chemicals. Such a concern with the potential for excessive tort liability to harm businesses was in the late 1990s a major talking point for business leaders seeking to curb large tort liability awards.

In Whitman v. American Trucking Associations, decided in 2001, Justice Breyer’s concurring opinion was the one blight on an otherwise glorious day for environmentalists. In Whitman, Justice Scalia authored a unanimous opinion for the Court that repudiated what had been a major attack on the constitutionality of a central part of the Clean Air Act. The Court rejected the D.C. Circuit’s remarkable ruling that the Act violated the nondelegation doctrine by requiring the EPA to promulgate national ambient air quality standards requisite to protect public health without basing its determination of those standards on an intelligible principle such as cost-benefit analysis.

Scalia’s opinion not only rejected the notion that cost-benefit analysis was required to satisfy the nondelegation doctrine’s requirement of an intelligible principle, but it further ruled that the relevant provisions of the Clean Air Act barred the EPA from considering economic costs at all in promulgating the national standards. It was a sweeping win for both environmentalists and the EPA. But what made their victory even sweeter still was that it was unanimous and written by Justice Scalia, the Court’s leading conservative.

Justice Breyer’s separate concurring opinion fell far short of dampening the victors’ spirits that day, but his words were nonetheless chillingly expressive of some of the worst fears environmentalists had voiced upon his confirmation. He disputed Scalia’s powerful statement that the EPA could consider compliance costs only if Congress’s textual commitment to such consideration was “clear.” According to Justice Breyer, “other things being equal, we should read silences or ambiguities in the language of regulatory statutes as permitting [rather than] forbidding” regulatory agencies’ adoption of “rational regulation” that considered a proposed regulation’s adverse economic effects.

Even more telling, Justice Breyer conflated economic costs with public health, just as industry had long been arguing should be done. According to Justice Breyer, because an overly protective environmental protection requirement that returned society to the “Stone Age” would clearly not be “requisite to protect the public health,” “the EPA, in setting standards that ‘protect the public health’ with ‘an adequate margin of safety’” should be deemed to be able to weigh compliance costs against environmental benefits at least to guard against disproportionately high costs for only trivial benefits.

For environmentalists, Justice Breyer’s language sounded unsettlingly similar to the business community’s claims that a healthy society was a wealthy society and that environmental protection laws that reduced business profitability accordingly undermined rather than promoted public health. Had Justice Breyer expressed those views in any context other than a concurring opinion with no legal import, on a day when environmentalists were not otherwise enjoying an enormous victory, they might have garnered far more attention and concern. But, on a day of widespread relief and celebration, few paid attention to Justice Breyer’s concurrence.

Justice Breyer’s concurrence in Bates v. Dow Agrosciences LLC, decided in 2005, sounds a similar concern about the adverse impacts of excessive government regulation. In Bates, however, the issue arose in the context of a federal preemption case, in which a pesticide manufacturer was arguing that federal pesticide regulation preempted state common law tort liability. The majority opinion, which Justice Breyer joined, rejected the manufacturer’s more sweeping preemption theories, concluded that some state tort law liability might not be preempted, and remanded the case to the lower courts for further proceedings. Justice Breyer wrote separately to emphasize that the EPA, the federal agency charged with administering the federal pesticide statute at issue, enjoyed authority to promulgate regulations that effectively preempted state tort liability to avoid “a counter-productive ‘crazy-quilt of anti-misbranding requirements.’ ”

Finally, Justice Breyer’s separate concurrence in Coeur Alaska, Inc. v. Southeast Alaska Conservation Council in 2009 reflected a similar sensitivity to what he perceived as excessive environmental regulation; in it, the Justice provided the more conservative wing of the Court with its sixth vote in a major loss for environmentalists. At issue in Coeur Alaska was in effect whether a gold mine that was discharging toxic slurry into a lake three miles away could avoid having to comply with section 402 of the Clean Water Act, which would likely have barred the activity, by characterizing its toxic slurry as “fill,” thereby triggering section 404 of the Act, which separately and less restrictively regulates the addition of fill into navigable waters. The gold mine had placed enormous volumes of toxic slurry into the lake, which was 51 feet deep, 800 feet wide, and 2,000 feet long. And the EPA freely acknowledged that the slurry would kill all of the lake’s fish and nearly all of its aquatic life.

To the environmental plaintiffs and the Ninth Circuit in its lower court ruling, the gold mine’s claim that its slurry was “fill” rather than a pollutant seemed like a blatant end run around the Water Act’s section 402 limitations on the addition of pollutants into navigable waters. But, relying on the EPA’s agreement that section 404 rather than section 402 applied, the Court ruled in industry’s favor. Justice Breyer agreed, declining to join Justice Ginsburg’s dissent, which both Justices Stevens and Souter joined. Exhibiting the same preference for deferring to expert technical agencies promoted by his 1993 book, Justice Breyer explained why he joined the majority: “I cannot say whether the EPA’s compromise represents the best overall environmental result; but I do believe it amounts to the kind of detailed decision that the statutes delegate authority to the EPA, not the courts, to make (subject to the bounds of reasonableness).”

The contrast between Justice Breyer’s willingness to defer to the EPA, notwithstanding the extreme results, and Justice Ginsburg’s dissent for herself and Justices Stevens and Souter was stark:

The Court’s reading . . . strains credulity. A discharge of a pollutant, otherwise prohibited by firm statutory command, becomes lawful if it contains sufficient solid material to raise the bottom of a water body, transformed into a waste disposal facility. Whole categories of regulated industries can thereby gain immunity from a variety of pollution-control standards.

Justice Ginsburg, unlike Justice Breyer, was not willing to assume that the EPA would ensure this loophole was not abused in future applications, especially given the dissent’s view that it had been abused in the facts of the case then before the Court.

3.  Justice Breyer’s Dissenting Opinions in Part or in Full

Justice Breyer wrote separate opinions that dissented either in part or in full on ten occasions. In some, he concurred in part or in full with conservative majorities, and in others he dissented in part from liberal majorities. And on a few occasions, he dissented in full. The latter category tended to be those instances when Justice Breyer expressed views wholly favorable to the legal arguments of environmentalists. 

Two of the cases involved tort liability. In Norfolk & Western Railway v. Ayers, decided in 2003, Justice Breyer dissented from that part of the Court’s ruling that allowed tort plaintiffs who had been exposed to asbestos fibers and were suffering from asbestosis to recover damages for their reasonable fear of developing cancer. Justice Breyer acknowledged that the legal issue was “a close and difficult one.” But he dissented in part from Justice Ginsburg’s majority opinion because he was worried both about the “impossibility of knowing an appropriate compensation” for such fear and about the possibility that compensating victims for their fear might leave too little money remaining later on for victims who ultimately suffered from cancer. On the other hand, in Exxon Shipping Co. v. Baker, decided in 2008, Justice Breyer dissented from a majority ruling limiting punitive damages from the Exxon Valdez Alaska oil spill based on his view that the punitive damages awarded the oil spill victims in that case need not be reduced.

Unlike in Norfolk and Exxon Shipping, in Winter v. Natural Resources Defense Council, Inc., Entergy v. Riverkeeper, and Utility Air Regulatory Group v. EPA, it was Justice Breyer’s partial concurrences with conservative majorities, rather than his dissents from liberal majorities, that were telling. In each, Justice Breyer’s separate opinion reflected his pragmatism and general desire to provide federal expert agencies with flexibility, free of excessive judicial second-guessing.

In Winter, Justice Breyer declined to join Ginsburg’s dissent from the majority ruling overturning a lower court injunction of U.S. Navy exercises that the environmental plaintiffs alleged were harming marine mammals. Justice Breyer concurred in part with the conservative Justices who made up a majority and concluded that the plaintiffs’ evidentiary support was too “weak or uncertain” to justify the “seriousness of the harm” that the Navy maintained the injunction would do “to the Navy’s ability to maintain an adequate national defense.”

In Entergy, Justice Breyer returned most explicitly to his argument, reflected in his 1993 book and earlier concurring opinion in American Trucking, that cost-benefit analysis is essential in setting rational environmental protection standards. Justice Stevens, joined by Justices Souter and Ginsburg, dissented from the majority ruling that the Clean Water Act permitted the EPA to engage in cost-benefit analysis in deciding the extent to which a power plant’s cooling water intake structure must minimize its adverse environmental impact. Justice Breyer, however, agreed that such analysis was permissible, arguing that “an absolute prohibition would bring about irrational results.” While suggesting some limits on how demanding such a cost-benefit analysis could be, he also cautioned that “in an age of limited resources available to deal with grave environmental problems, . . . too much wasteful expenditures devoted to one problem may well mean considerably fewer resources available to deal effectively with other (perhaps more serious) problems.” 

In Utility Air Regulatory Group, Justice Breyer again concurred in part with a conservative majority but this time to a very different policy end. The majority opinion, authored by Justice Scalia, ruled that the term “any pollutant” under the Clean Air Act did not extend to greenhouse gas pollutants as applied to one significant part of the Act. Justice Breyer proffered a different approach that, like the majority, avoided the EPA’s being compelled to regulate sources that the EPA agreed would be administratively impractical, but by interpreting instead the term “any source” in a manner that would ultimately provide the EPA more discretionary authority to choose how best to regulate greenhouse gas emissions. As described by Justice Breyer, “[t]he Court’s decision to read greenhouse gases out of the [Prevention of Significant Deterioration] program drain[ed] the Act of its flexibility.” And Justice Breyer’s preferred approach “le[ft] the EPA with the sort of discretion as to interstitial matters that Congress likely intended it to retain.”

On the other hand, the Justice’s practical approach prompted him to dissent in full from the Court’s ruling in U.S. Fish & Wildlife Service v. Sierra Club, decided in March 2021, in favor of a federal agency’s decision not to release a document to environmental plaintiffs under the Freedom of Information Act. That Act generally requires agencies to release agency decision-making documents to the public but shields documents that are predecisional and deliberative in nature, reflecting Congress’s tempering of its desire for public disclosure with a competing desire not to unduly chill those candid internal exchanges of ideas that might not occur if the participants knew all their thinking would later be made part of the public record.

Pursuant to the Endangered Species Act, the Interior Department’s Fish & Wildlife Service and the Commerce Department’s National Marine Fisheries Service provide formal “biological opinions” to any federal agency whose proposed action may “adversely affect” an endangered or threatened species or its critical habitat. In U.S. Fish & Wildlife Service, the two Services were preparing biological opinions on a proposed rule by the EPA under the Clean Water Act to regulate power plant cooling water intake structures because of the potentially adverse impact of those structures on aquatic wildlife.

Had the Services provided the EPA with a final biological opinion, the parties would not have disputed that such a final opinion would have been subject to public disclosure. In this case, however, the two Services never submitted a final biological opinion because the EPA ultimately rescinded its initially proposed rule after receiving an early draft of the Services’ biological opinion that had not yet been formally approved by either Service for submission. In challenging the EPA’s final cooling water intake structure rule, Sierra Club sought a copy of the informal draft biological opinions on the original proposed rule, which it hoped to use to attack the final rule.

The majority easily concluded, based on FOIA’s text, that such draft biological opinions—especially ones never approved by the relevant officials of the two Services—need not be disclosed because they lacked any formal legal status within the Endangered Species Act: “The deliberative process privilege protects the draft biological opinions at issue here because they reflect a preliminary view—not a final decision—about the likely effect of the EPA’s proposed rule on endangered species.” Such preliminary assessments were, the Court concluded, “both predecisional and deliberative.”

In dissent, Justice Breyer naturally took a more practical approach, looking not at the formal name of the relevant document but at its function in the agency decision-making process. Because, Justice Breyer reasoned, “[t]he function of a Draft Biological Opinion finding jeopardy [of an endangered species] . . . is much the same as that of a Final Biological Opinion” and triggers the same process within the EPA, the same reasons that justify public release of a final biological opinion apply with equal force to the draft. However, because the biological opinions at issue in the record had never been approved by all relevant officials in the two Services, it was not clear whether they were “drafts” or merely “Drafts of Draft Biological Opinion,” and Justice Breyer therefore contended that the case should be remanded to the court of appeals to decide their status.

In short, in some instances Justice Breyer’s lack of commitment to formalism supports policy ends favored by environmentalists, as in U.S. Fish & Wildlife Service. But in other instances, as in Coeur Alaska, he frustrates environmentalists by deferring to the government’s approval of what seems, on the face of the relevant statutory language, to be a clear transgression of congressional purpose.

However, Justice Breyer’s support for the constitutionality of environmental restrictions has been unqualified in Fifth Amendment takings cases. He has participated in ten Fifth Amendment takings cases while on the Court. And he voted against the regulatory takings claim in nine of those cases and against a per se physical takings claim in the tenth case. In one of those regulatory cases, Palazzolo v. Rhode Island, he filed a separate dissent to underscore his agreement both with Justice Ginsburg that the case was not ripe and with Justice O’Connor that it was relevant to regulatory takings analysis whether the landowner was challenging a land use restriction that existed at the time of their purchase of the property. And in Cedar Point Nursery v. Hassid, Justice Breyer filed a dissenting opinion on the ground that the majority erred by analyzing the state regulation of land use as a per se physical taking rather than as a possible regulatory taking.

Finally, Justice Breyer also earned high marks from environmentalists for his support of environmental plaintiff Article III standing. Like regulatory takings, Article III standing has been a persistent issue in environmental law since the 1970s. Justice Breyer voted in favor of environmental plaintiffs in the two most important standing cases during his tenure on the Court, Friends of the Earth v. Laidlaw and Massachusetts v. EPA, and he authored the opinion for himself and three other Justices dissenting from the Court’s ruling against the plaintiffs’ standing in Summers v. Earth Island Institute in 2009. Justice Breyer took direct issue with the majority’s ruling that the environmental plaintiffs had failed to demonstrate a concrete injury in their lawsuit challenging the U.S. Forest Service’s salvage sale of timber from a national forest.

 III.  ASSESSING JUSTICE BREYER’S ENVIRONMENTAL LAW LEGACY: FRIEND OR FOE OF ENVIRONMENTAL PROTECTION?

Justice Breyer is likely the only Justice ever chosen because of his perceived views on environmental law. But, ironically, he was not chosen because he was viewed as an ardent environmentalist. Just the opposite. He was thought to be a Justice who would instead be more sensitive to business concerns than to environmental protection concerns.

That is both why Breyer failed to secure the nomination to the Court in 1993 and why environmentalists opposed his selection in 1994—strongly favoring Secretary of the Interior Bruce Babbitt. And it is also why Republican Senate Leadership, including Senate Minority Leader Bob Dole and the Judiciary Committee’s Ranking Minority Member Orrin Hatch, informed the White House that they would fight Babbitt’s nomination and promised a smooth confirmation process for Judge Breyer. As described at this Article’s outset, President Clinton chose not to fight, notwithstanding environmentalists’ warnings that Judge Breyer would be bad for environmental law and even “hazardous to our health.”

So, which has Justice Breyer turned out to be—friend or foe? The answer seems relatively clear: friend, if still shy of an unqualified one. As reflected in a rough sense in his EP score, especially compared to those of his colleagues on the bench, Justice Breyer has voted in favor of results supported by environmentalists far more often than most of the other Justices on the Court. And, in almost all of the most important environmental cases of the past twenty-eight years, he was a reliable vote joining the majority in the big cases environmentalists won—often providing the critical fifth vote—and no less reliable a vote in dissent with the liberal Justices sounding the alarm in the big cases environmentalists lost—as he did in West Virginia v. EPA, the very last environmental case decided by the Court while Justice Breyer was on the bench. These include major rulings under framework environmental laws like the Clean Air and Clean Water Acts, as well as cases involving major issues of constitutional law, such as Article III standing, congressional Commerce Clause authority, and regulatory takings. Justice Breyer has been a reliable, forceful vote for environmental protection in the biggest cases that mattered most.

Environmentalist concerns about Justice Breyer’s support for regulatory reform proved overblown in application, but not because they were wrong about his views on the central role that cost-benefit analysis should play in setting environmental standards or about his receptivity to claims that such standards can be unduly protective. They weren’t wrong. Especially in his concurring opinions and in a scattering of his votes, Justice Breyer made clear, just as they feared, his belief in the essential role of cost-benefit analysis as well as his receptivity to concerns that environmental protection requirements may be so exceedingly expensive as to undermine, rather than promote, public health.

However, in no case did Justice Breyer’s distinct views make a difference. He never once provided the critical “fifth vote” in any case in which he expressed those policy preferences for cost-benefit analysis. And his concurring voice was of no legal effect at all. Of course, had the makeup of the Court when those cases were decided been tilted slightly more to the left, Justice Breyer’s vote might well have made a critical difference, just as environmentalists had worried it would. But that concern was never realized in almost three decades.

Justice Breyer has also proved far less dogmatic in his views than assumed by his detractors at the time of his nomination. While supporting EPA’s authority to use cost-benefit analysis in his separate concurring opinion in Entergy, Justice Breyer agreed with the environmental respondents that Congress had intentionally curbed EPA’s ability to rely on cost-benefit analysis in the Clean Water Act. Where Justice Breyer departed from the environmental respondents was in his view that EPA nonetheless was permitted to take such analysis into account so long as the agency did so in a very limited way: to guard against costs wholly disproportionate to environmental benefits—a far more modest invocation of cost-benefit analysis than that sought by industry. Justice Breyer also later fully joined Justice Kagan’s forceful dissent in Michigan v. EPA, which criticized the majority for concluding that EPA was required to consider potential compliance costs in determining whether regulation of toxic mercury emissions from power plants was “appropriate.” Justice Breyer agreed with the other Michigan dissenters that Congress had instead instructed EPA to base its threshold determination of the appropriateness of emissions controls only on the extent of environmental harm posed by such emissions. None of the Justices, including the four dissenters, disputed that the Clean Air Act required EPA to consider control costs in subsequently determining the extent of emissions reductions required of power plants.

Finally, critics of Justice Breyer’s nomination to the Court failed to appreciate how his pragmatism and openness to consideration of regulatory costs and cost-benefit analysis might prompt the Justice to favor upholding EPA regulations those critics supported. In two very significant Clean Air Act cases, EPA v. EME Homer City Generation in 2014 and the recently decided West Virginia v. EPA, EPA’s legal arguments in favor of the regulations at issue—the Cross-State Air Pollution Rule in EME Homer and the Clean Power Plan in West Virginia—were weakened by the absence of clear support in the relevant statutory text. But both of those ambitious regulations were strengthened by the EPA’s justification of its broad reading of that text in terms of the extent to which the reading permitted the agency to take costs and benefits into account. In short, the kinds of economic analysis Justice Breyer favored allowed EPA to adopt more, not less, demanding environmental protection requirements.

To be sure, Justice Breyer has been no Justice Douglas. He has not voted in favor of the position favored by environmentalists in all cases. But nor is it clear that the nation, including environmentalists, should necessarily want such a Justice on the Court. Such a Justice might be a very good environmentalist, but not an especially good judge.

As described above, in eleven of the twenty-three environmental cases in which Justice Breyer voted against the position favored by environmentalists, all the Justices voted the same way. None dissented: not Justice Souter, Stevens, Kagan, or Sotomayor. All the other most progressive Justices on the Court agreed that there was no, or at least too little, merit to the legal position favored by environmentalists. In three more of those twenty-three cases, environmentalists lost by a vote of eight to one. Perhaps a Justice who dissented in those cases could be credited with perceiving actual strength in legal arguments that the others were missing. But it is also quite possible that such a Justice would be engaging in the very kind of ideologically driven judicial decision-making that environmentalists correctly condemn in those Justices with very low EP scores like Justices Alito and Scalia. Even those of us who care deeply about environmental protection, and worry no less deeply about the failings of our elected branches of government, should not see a judiciary that decides cases strictly on personal ideology rather than fair consideration of the actual strengths of the competing legal arguments as the proper solution to those failings.

Finally, contrary to the predictions of those advising President Clinton in June 1993, Justice Breyer has not remotely proven to be a dispassionate, “bloodless,” “cold fish” Justice lacking any “innate sense of justice.” To be sure, Justice Breyer is no Justice Sonia Sotomayor—a Justice whose writings evince a compassion for victims of injustice without ready modern parallel. He is a committed pragmatist. But his striking pragmatism should not be mistaken for a lack of passion. He has proven himself deeply committed to social justice and the fundamental role of the judiciary in its pursuit. He has made that philosophy clear in both his judicial opinions and in his writings outside of the Court, especially his 2005 book, Active Liberty, in which the Justice contends that judges should not merely attend to the need to ensure that individuals are free from governmental coercion but also ensure they enjoy freedom to participate fully in government itself, including the right to vote.

CONCLUSION

Justice Breyer was certainly not environmentalists’ dream pick in 1994. And they had good reason to be concerned. But he has proved in actual practice to be an outstanding jurist for the nation and an excellent Justice for environmental protection law.

More fundamentally, Justice Breyer’s record on the Court suggests the wisdom of rethinking what it means to be a “dream” justice. Should it mean having a Justice who shares one’s ideological preferences on certain issues like environmental protection and will vote accordingly? Or should it mean having a Justice whose votes are rooted in a broader understanding of the proper role of the courts in interpreting law and deciding cases, including the central role our Constitution assigns to the judiciary to safeguard certain individual and collective rights? While the former Justice may reliably receive an EP score of 100, the latter is the better Justice, even if that means they sometimes will, as they should, rule in ways that disappoint.


APPENDIX A

ENVIRONMENTAL LAW CASES OCTOBER TERM 1994–OCTOBER TERM 2021


Case Name | Citation | EP Designation
Dolan v. City of Tigard | 512 U.S. 374 (1994) | Dissent
Babbitt v. Sweet Home Chapter of Communities for a Great Oregon | 515 U.S. 687 (1995) | Majority
Meghrig v. KFC Western | 516 U.S. 479 (1996) | Dissent
General Electric v. Joiner | 522 U.S. 136 (1997) | Dissent
Steel Company v. Citizens for a Better Environment | 523 U.S. 83 (1998) | Dissent/Concur
Ohio Forestry Association v. Sierra Club | 523 U.S. 726 (1998) | Dissent
United States v. Bestfoods | 524 U.S. 51 (1998) | Majority
City of Monterey v. Del Monte Dunes at Monterey, Ltd. | 526 U.S. 687 (1999) | Concur
Friends of the Earth v. Laidlaw Environmental Services, Inc. | 528 U.S. 167 (2000) | Majority
Public Lands Council v. Babbitt | 529 U.S. 728 (2000) | Majority
Solid Waste Agency of Northern Cook County v. United States Army Corps of Engineers | 531 U.S. 159 (2001) | Dissent
Whitman v. American Trucking Associations | 531 U.S. 457 (2001) | Majority
Palazzolo v. Rhode Island | 533 U.S. 606 (2001) | Dissent
Tahoe-Sierra Preservation Council v. Tahoe Regional Planning Agency | 535 U.S. 302 (2002) | Majority
Norfolk & Western Railway v. Ayers | 538 U.S. 135 (2003) | Majority
Alaska Department of Environmental Conservation v. EPA | 540 U.S. 461 (2004) | Majority
South Florida Water Management Dist. v. Miccosukee Tribe | 541 U.S. 95 (2004) | Majority
Engine Manufacturers Association v. South Coast Air Quality Management District | 541 U.S. 246 (2004) | Dissent
Department of Transportation v. Public Citizen | 541 U.S. 752 (2004) | Dissent
Norton v. Southern Utah Wilderness Alliance | 542 U.S. 55 (2004) | Dissent
Bates v. Dow Agrosciences | 544 U.S. 431 (2005) | Majority
Lingle v. Chevron U.S.A., Inc. | 544 U.S. 528 (2005) | Majority
Kelo v. City of New London | 545 U.S. 469 (2005) | Majority
S.D. Warren v. Maine Board of Environmental Protection | 547 U.S. 370 (2006) | Majority
Rapanos v. United States | 547 U.S. 715 (2006) | Dissent
Massachusetts v. EPA | 549 U.S. 497 (2007) | Majority
Environmental Defense Fund v. Duke Energy Corporation | 549 U.S. 561 (2007) | Majority
United Haulers Association v. Oneida-Herkimer Solid Waste Management Authority | 550 U.S. 330 (2007) | Plurality
United States v. Atlantic Research Corporation | 551 U.S. 128 (2007) | Majority
National Association of Home Builders v. Defenders of Wildlife | 551 U.S. 644 (2007) | Dissent
Exxon Shipping Company v. Baker | 554 U.S. 471 (2008) | Dissent in part
Winter v. Natural Resources Defense Council, Inc. | 555 U.S. 7 (2008) | Dissent
Summers v. Earth Island Institute | 555 U.S. 488 (2009) | Dissent
Entergy Corporation v. Riverkeeper, Inc. | 556 U.S. 208 (2009) | Dissent
Burlington Northern & Santa Fe Railway Company v. United States | 556 U.S. 599 (2009) | Dissent
Coeur Alaska, Inc. v. Southeast Alaska Conservation Council | 557 U.S. 261 (2009) | Dissent
Stop the Beach Renourishment, Inc. v. Florida Department of Environmental Protection | 560 U.S. 702 (2010) | Concurrence in part
Monsanto Company v. Geertson Seed Farm | 561 U.S. 139 (2010) | Dissent
American Electric Power Company v. Connecticut | 564 U.S. 410 (2011) | Dissent
Sackett v. EPA | 566 U.S. 120 (2012) | Dissent
Arkansas Game and Fish Commission v. United States | 568 U.S. 23 (2012) | Dissent
Decker v. Northwest Environmental Defense Center | 568 U.S. 597 (2013) | Dissent
Koontz v. St. Johns River Water Management District | 570 U.S. 595 (2013) | Dissent
EPA v. EME Homer City Generation L.P. | 572 U.S. 489 (2014) | Majority
Utility Air Regulatory Group v. EPA | 573 U.S. 302 (2014) | Concur/Dissent
Michigan v. EPA | 576 U.S. 743 (2015) | Dissent
Federal Energy Regulatory Commission v. Electric Power Supply Association | 577 U.S. 260 (2016) | Majority
Sturgeon v. Frost | 577 U.S. 424 (2016) | Dissent
United States Army Corps of Engineers v. Hawkes Company | 578 U.S. 590 (2016) | Dissent
Murr v. Wisconsin | 137 S. Ct. 1933 (2017) | Majority
Weyerhaeuser Company v. United States Fish and Wildlife Service | 139 S. Ct. 361 (2018) | Dissent
Sturgeon v. Frost II | 139 S. Ct. 1066 (2019) | Dissent
Virginia Uranium, Inc. v. Warren | 139 S. Ct. 1894 (2019) | Majority
Knick v. Township of Scott | 139 S. Ct. 2162 (2019) | Dissent
Atlantic Richfield Co. v. Christian | 140 S. Ct. 1335 (2020) | All But Alito
County of Maui v. Hawaii Wildlife Fund | 140 S. Ct. 1462 (2020) | Majority
United States Forest Service v. Cowpasture River Preservation Association | 140 S. Ct. 1837 (2020) | Dissent
United States Fish and Wildlife Service v. Sierra Club | 141 S. Ct. 777 (2021) | Dissent
Guam v. United States | 141 S. Ct. 1608 (2021) | Majority
Cedar Point Nursery v. Hassid | 141 S. Ct. 2063 (2021) | Dissent
HollyFrontier Cheyenne Refining, LLC v. Renewable Fuels Association | 141 S. Ct. 2172 (2021) | Dissent
PennEast Pipeline Company v. New Jersey | 141 S. Ct. 2244 (2021) | Dissent
West Virginia v. EPA | 142 S. Ct. 2587 (2022) | Dissent

 

APPENDIX B

ENVIRONMENTAL PROTECTION (“EP”) SCORES OF SELECTED INDIVIDUAL JUSTICES

OCTOBER TERM 1994–OCTOBER TERM 2021

 

Justice | Number of Cases | EP Points | EP Score
Breyer | 61 | 38 | 62.3
Scalia | 47 | 11 | 23.4
Stevens | 37 | 29 | 78.4
Kennedy | 50 | 18 | 36
Thomas | 63 | 13 | 20.6
Souter | 36 | 29 | 80.6
Ginsburg | 57 | 41 | 71.9
CJ Roberts | 40 | 8 | 20
Alito | 38 | 4 | 10.5
Sotomayor | 25 | 16 | 64
Kagan | 25 | 17 | 68
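The EP Score column appears to follow directly from the two preceding columns, expressing each Justice’s EP Points as a percentage of the environmental cases in which that Justice participated. A worked example using Justice Breyer’s figures from the table above (this reading is inferred from the reported numbers; the Article’s earlier description of the scoring methodology controls if it differs):

$$\text{EP Score} = \frac{\text{EP Points}}{\text{Number of Cases}} \times 100, \qquad \text{e.g., Breyer: } \frac{38}{61} \times 100 \approx 62.3$$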

95 S. Cal. L. Rev. 1395

Download

Howard J. & Katherine W. Professor of Law, Harvard Law School. I would like to thank Professors Hope Babcock, Jonathan Cannon, Bill Funk, Steph Tai, and Susannah Weaver for their terrific comments on a preliminary draft of this Article. Kathryn C. Reed, Harvard Law School Class of 2022, provided outstanding research and editorial assistance. Although it has no bearing on my analysis, for the sake of disclosure I served as counsel for either one of the parties or an amicus in the following Supreme Court cases that fall within this Article’s scope of review: Dolan v. City of Tigard, 512 U.S. 374 (1994); Palazzolo v. Rhode Island, 533 U.S. 606 (2001); Tahoe-Sierra Pres. Council v. Tahoe Reg’l Plan. Agency, 535 U.S. 302 (2002); Norfolk & Western Ry. v. Ayers, 538 U.S. 135 (2003); S. Fla. Water Mgmt. Dist. v. Miccosukee Tribe, 541 U.S. 95 (2004); S.D. Warren Co. v. Me. Bd. of Env’t Prot., 547 U.S. 370 (2006); Entergy Corp. v. Riverkeeper, Inc., 556 U.S. 208 (2009); Monsanto Co. v. Geertson Seed Farm, 561 U.S. 139 (2010); and Murr v. Wisconsin, 137 S. Ct. 1933 (2017).

Should Humanity Have Standing? Securing Environmental Rights in the United States

While courts around the world are increasingly recognizing rights of nature or the rights of individuals or communities to a safe and healthy environment, American courts have been much more skeptical of environmental rights claims. This Article examines this growing divergence and identifies trends in American law that might account for it, including explanations deeply rooted in U.S. constitutional history as well as recent doctrinal developments such as the major questions doctrine. More importantly, this Article offers a way forward for American law in the face of critical environmental challenges, most notably climate change. Specifically, it presents constitutional interpretive methods that might advance the interests of nature and individuals facing pollution impacts. It also explores the potential role for states and state constitutions in protecting the environment. But given the obstacles—both jurisprudential and political—to a more robust environmental rights framework in American courts, this Article concludes that the best path forward might be a theory of negative environmental rights—namely, the right to be free from harmful pollution. Such a narrowly constructed commitment to an end to uninternalized environmental externalities might offer a practical and politically feasible way to move past the pervasive concerns of American courts about positive rights, separation of powers, the political question doctrine, and appropriate modes of judicial relief.

INTRODUCTION

In his landmark Should Trees Have Standing? article, Professor Christopher Stone posed the question of whether Nature could, and indeed should, have legally enforceable rights. Today, a handful of countries have granted rights—sometimes in the form of “legal personhood”—to Nature generally or to discrete geographic features, such as mountains or rivers. Many more countries recognize their citizens’ right to a healthy environment in one form or another. And a growing number of litigants across the globe—spurred on perhaps by the youth movement for climate change action—have used these rights to force governments or businesses to reduce greenhouse gas emissions. Courts in many nations around the world have been increasingly sympathetic to these claims, establishing the judiciary in many nations as an important point of leverage for moving society toward a more sustainable future in general and a more robust response to climate change in particular.

In the United States, however, such efforts have met with little success. Recognition of environmental rights remains limited and largely in the background of the American legal system. Such rights are rarely constitutionally defined (and only at the state level) and have gone almost entirely unrecognized by courts. Although Stone’s argument altered the environmental rights conversation in the United States (particularly after being cited in Justice Douglas’s dissent in Sierra Club v. Morton), it has not translated into strengthened environmental rights in American courts. Indeed, both federal and state courts across the country have expressly declined to entertain climate change litigation, rejecting a range of legal theories and assertions of environmental rights advanced by a diverse set of plaintiffs. The judges in these cases consistently suggest that the remedies sought by the plaintiffs go beyond what the judiciary can order.

This reality leads to the central puzzle of this Article: Why has the conception of environmental rights remained so crimped in the United States in contrast with other nations?

In seeking to answer this query, I also address another critical question: Are there legal avenues available to better secure environmental rights in the United States?

In Part I, I set the stage for these inquiries with a brief survey of environmental rights scholarship over the past fifty years, chronicling how such rights emerged from the human rights discourse and have now become constitutionalized in the vast majority of nations across the globe. I go on to document how environmental rights have become increasingly widely recognized in international law through treaties, resolutions, and declarations—including a 2022 United Nations (“U.N.”) General Assembly Resolution declaring access to a clean, healthy, and sustainable environment to be a human right. Given this universality of commitment to environmental protection, I argue that environmental rights should be recognized as natural rights that need not be granted by a constitution or a statute but rather understood to be inherent in what it means to be human. In this regard, the failure of U.S. courts to recognize environmental rights seems out of step with modern mores and legal thinking across the globe—setting up the puzzles noted above.

In Part II, I undertake a comparative review of national case law around the world, noting how courts in many nations have strengthened environmental rights in recent years—particularly in the context of the need to shift our economic activities onto more sustainable underpinnings and to address the rising risks of climate change. In analyzing the global march of environmental rights, I note that while the trend is toward broader protection of peoples across the world from pollution, each nation’s framing of environmental rights reflects the particular values, circumstances, and legal traditions of that society—with the United States in a relatively unique and lagging position.

I extend this analysis in Part III with a more detailed look at the reasons why environmental rights remain limited in the United States. I focus particular attention on the decisions in a number of recent climate change cases where courts have concluded that the judicial branch of government is not positioned to provide the relief that the plaintiffs sought. I go on to suggest that the narrow American view of environmental rights derives not only from the lack of a clear constitutional provision, but also from the U.S. judiciary’s tradition of restraint in the face of cases that present political or “major” questions that might be seen as within the purview of the legislative and executive branches of government. I also note that, unlike civil rights, which have relatively clear lines, environmental protection inescapably entails tradeoffs and multidimensional policy choices. This reality makes climate change and other environmental policy issues polycentric problems, which present competing claims and no clear framework for balancing the contesting interests. In the face of such difficulties, many judges and scholars have concluded (following the conceptual framing of Professor Lon Fuller) that such issues are inappropriate for courts to adjudicate and must be left to political processes. Finally, I note that the approach to evaluating competing interests embedded in the U.S. framework of pollution control law and regulation—centered on benefit-cost analysis with particular reliance on the Kaldor-Hicks model of net social benefits—effectively privileges economic activity and often treats individual environmental rights as inconsequential.

In Part IV, I argue that the sustainability imperative and the risks posed by climate change demand that U.S. courts revisit their hesitancy to vindicate environmental rights and respond to the need to address climate change and establish a more sustainable foundation for the American economy. I advance several legal theories and accompanying political strategies for expanding environmental rights in America—consistent with emerging norms across the country and around the world and the increasingly clear epidemiological and ecological evidence that deteriorating environmental conditions threaten the capacity of humanity to flourish in the years ahead. Ultimately, I argue that the key to progress might well not be found in the expansion of individual environmental rights per se, but rather in the emerging norm against uninternalized environmental externalities—the acceptance of which makes pollution spillovers unacceptable. Thus, the most promising pathway to expanded environmental rights in America might be through the assertion of the environmental rights of the people in a negative construct—that is, the right of individuals not to be harmed by pollution.

I conclude the Article with a reflection on the ongoing relevance of Christopher Stone’s 1972 vision of humanity’s moral development over time leading to the gradual extension of rights to those who (and that which) had previously been left out of legal personhood and thus the law’s protection. But rather than emphasize the value of extending legal rights to natural objects, I urge that human rights be understood to encompass a natural law right to a habitable environment—accomplishing through a different route Stone’s call for a “new conception of man’s relationship to the rest of nature . . . [as] a step towards solving the material planetary problems.” Rather than give trees standing, I propose a narrower path forward based on declaring an end to uninternalized externalities and asserting the right of each person to physical integrity and freedom from pollution. In doing so, we can give American citizens standing to challenge harmful emissions and stop the damage to the Earth systems that threatens to make the planet uninhabitable for humanity.

I.  ENVIRONMENTAL RIGHTS

Environmental rights have increasingly been recognized as foundational to human rights. In recent decades, many nations have enshrined rights to a healthy environment in one form or another in their constitutions. The near-universal acceptance of environmental rights provides a starting point for the argument that the right to a healthy environment should be recognized as an element of natural law.

A.  Environmental Rights as Human Rights

Over the past fifty years, much of the world has come to recognize environmental rights as fundamental to human existence—and therefore to be understood as natural rights that need not be expressly or specifically established by statute or governmental edict. While the 1948 U.N. Universal Declaration of Human Rights does not mention the environment specifically, it does recognize rights to life and to security. Moreover, just two decades later as environmental consciousness was rising around the world, the U.N. General Assembly adopted a resolution highlighting the relationship between environmental quality and human rights. In reflecting on this resolution and the momentum building for greater focus on environmental protection at the time of the 1972 U.N. Conference on the Human Environment, Janusz Symonides, a prominent Polish jurist and academic, observed that the right to a clean environment must be understood as a universal human right because the ability to enjoy other fundamental rights, including the right to life, depends on it.

The 1972 Stockholm Declaration on the Human Environment, which emerged from the U.N. Conference, strengthened this conclusion with a further observation: “Both aspects of man’s environment, the natural and the man-made, are essential to his well-being and to the enjoyment of basic human rights—even the right to life itself.” But the governments in Stockholm declined to specify what might be encompassed by this right, encouraging scholars and rights activists to develop their own definitions and conceptions. Most notably, Alexandre Kiss, a French diplomat and scholar, offered a series of publications that explored different dimensions of environmental rights centered on the theory that environmental protection is essential to what it means to be human. Kiss argued that Principle 1 of the 1972 Stockholm Declaration—asserting a “fundamental right to freedom, equality, and adequate conditions of life, in an environment of a quality that permits a life of dignity and well-being”—had become so well established as to be added to the category of fundamental rights, the enjoyment of which is guaranteed to all individuals. He further explained that these environmental rights create obligations (not only for states but also for individuals), duties to future generations, and “remedies in the event of environmental harm.”

As the world community prepared to gather in 1992 for the second Earth Summit in Rio de Janeiro, Professor Dinah Shelton further developed the argument for recognizing environmental rights as fundamental to human rights. She critiqued many of the theories of environmental rights that were common at the time, noting that none of them was “fully articulated.” Shelton ultimately concluded that an approach that viewed “human rights and environmental protection as each representing different, but overlapping, societal values” showed the most promise—and that a “clearly and narrowly defined international human right to a safe and healthy environment” could achieve objectives in human rights law and environmental law.

The breadth of support for the recognition of environmental rights has strengthened in recent years. In 2008, the U.N. Human Rights Council adopted Council Resolution 7/23, which affirmed the council’s view that “climate change poses an immediate and far-reaching threat to people and communities around the world and has implications for the full enjoyment of human rights.” The 2022 U.N. General Assembly Resolution—adopted with 161 votes in favor (including the United States) and just eight abstentions—declaring access to a clean, healthy, and sustainable environment to be a universal human right represents the latest manifestation of this growing consensus. Momentum continues to build, as the fifteenth Conference of the Parties to the Convention on Biological Diversity adopted a biodiversity conservation framework in 2022 designed to accommodate the “rights of nature and rights of Mother Earth.”

B.  Constitutional Recognition of Environmental Rights

The importance of environmental conditions to human flourishing is now so widely recognized and highly valued that 150 nations highlight the environment in their constitutions. More than 100 nations now have constitutions that expressly recognize a right to a healthy environment in some form. As David Boyd explains in his seminal study, The Environmental Rights Revolution, three concurrent waves of rights conceptualization in the second half of the twentieth century contributed to the firm footing of environmental rights in constitutions today: (1) growth in democratic governance; (2) a global “rights revolution”; and (3) public awareness of severe environmental degradation. Against the backdrop of expanding rights discourse and ecological consciousness, over half of the world’s constitutions were written or rewritten, and many drafters seized the opportunity to establish a legal right to a healthy environment. Indeed, over this period environmental rights have been the fastest-growing provision in constitutional revisions. While it may be difficult to establish a precise causal link, Boyd demonstrates a consistent correlation between a formal right to a healthy environment and strengthened environmental governance and results.

The extent to which nations have constitutionalized environmental rights depends on country-specific context. For example, some resource-rich countries explicitly connect environmental rights to public access to the benefits of the country’s natural resources. Many island nations, perhaps recognizing their vulnerability to ecosystem damage, instead highlight the government’s duty to preserve an “ecologically balanced environment.” And a number of former Soviet republics include specific protections for the public’s access to “information about the environment,” likely a reaction to the Soviet Union’s tradition of state-sponsored disinformation and its failure to share critical facts about the Chernobyl nuclear disaster.

C.  Environmental Rights as Natural Law

The increasingly universal recognition of environmental rights suggests that every person should have access to basic environmental amenities—including clean air to breathe, safe water to drink, freedom from exposure to toxic chemicals, and functioning Earth systems (including a stable climate) that provide a “safe operating space for humanity.” So fundamental is this right to human existence that it must be understood to have independent and intrinsic value—and not simply instrumental importance as a pathway to the fulfillment of other fundamental rights such as the right to life or health. Many legal commentators have thus concluded that the right to a healthy environment should be seen as an element of the universal moral principles that must be regarded as sacrosanct in all societies at all times.

Along with my colleague Don Elliott, I have argued that the U.S. Congress highlighted the existence of environmental rights at the time of the 1970 adoption of the National Environmental Policy Act (“NEPA”) when it declared: “The Congress recognizes that each person should enjoy a healthful environment and that each person has a responsibility to contribute to the preservation and enhancement of the environment.” Note that the Congress did not establish this right, but rather recognized it. In doing so, the Congress suggested that NEPA was intended to provide mechanisms to vindicate a pre-existing natural law right to a “healthful environment” and to clarify the obligation of every American to protect the environment. But the central thrust of this Article is not to make the case for environmental rights as an element of natural law but rather to map the environmental rights terrain in search of an explanation as to why U.S. courts have been hesitant to accept such rights. Part II takes up this quest.

II.  DEVELOPMENT OF AN ENVIRONMENTAL RIGHTS JURISPRUDENCE

In the face of the existential threat posed by climate change and a growing recognition that a commitment to sustainability must be a foundational feature of twenty-first-century life, courts around the world have advanced environmental rights in recent years, issuing decisions that require both governments and corporations to address a diverse set of ecological and public health harms—including the risk of climate change from a build-up of greenhouse gases (“GHGs”) in the atmosphere. In their totality, these decisions by trial courts, appeals courts, and constitutional courts across the world make clear that environmental protection is now seen as a fundamental right in many societies. I begin in Section II.A with a discussion of the international and transnational legal framework that has underpinned many of these decisions—and explore the decisions of the European Court of Human Rights (“ECtHR”) in this respect. In Section II.B, I extract some common themes from the environmental decisions of national courts around the world.

A.  Transnational Courts and Environmental Rights

In recent years, the belief that access to a healthy environment is essential to the fulfillment of other human rights has increasingly been upheld in international legal proceedings. For example, Justice Weeramantry of the International Court of Justice observed in his Gabcikovo-Nagymaros opinion that protection of the environment is “a vital part of contemporary human rights doctrine, for it is sine qua non for numerous human rights such as the right to health and the right to life itself.”

This jurisprudence is underpinned in large part by language in a suite of treaties, which provide further international undergirding for environmental rights. Most notably, four regional agreements establish a right to a healthy environment: the African Charter on Human and Peoples’ Rights, the Aarhus Convention, the San Salvador Protocol to the American Convention on Human Rights, and the 2004 Revised Arab Charter on Human Rights. Together, these treaties have 126 independent signatories, comprising a healthy majority of all sovereign states. While these treaties do not cover every nation—with the United States being one of the notable non-signatories—they bolster a global sentiment that humans have a fundamental right to a healthy environment.

Nowhere is the trend toward recognition of environmental rights more visible than at the ECtHR. Indeed, while the European Convention on Human Rights (“ECHR”) has no explicit reference to the environment, the ECtHR has developed “an elaborate and extensive body of case law which all but in name provides for a right to a healthy environment.” In fact, the ECtHR Registry (which acts as the administrative support structure for the Court) has produced an extensive “Guide to the Case-Law of the European Court of Human Rights” that explains ECtHR case law on environmental issues.

The ECtHR’s environmental case law dates back to noise pollution cases in the 1980s. In 1994, the court explicitly recognized that environmental harms may “affect individuals’ well-being and prevent them from enjoying their homes in such a way as to affect their private and family life adversely” as protected by Article 8 of the ECHR. The ECtHR has found ECHR Article 8 violations related to air pollution and other environmental risks, such as the proximity of a dangerous chemical plant, mines, or a waste treatment facility, as well as potential water contamination by a cemetery close to a home. The ECtHR has also invoked the right to life in Article 2 of the ECHR in a few cases when environmental harms posed a direct risk to someone’s life. But these cases remain rare and not central to the court’s environmental rights jurisprudence.

The court’s decisions suggest that governments retain a degree of flexibility in addressing environmental harms and weighing them against other interests, such as the country’s economic well-being. But the ECtHR has been clear that states have a positive obligation to take preventive measures to address environmental harms.

B.  Environmental Rights Jurisprudence Around the World

In an ever-growing list of countries around the world, courts are acting decisively in the face of environmental disputes. These courts have demonstrated a much greater willingness to make findings and order governmental action with regard to climate change and other sustainability threats than have American courts, as will be explored in Part III in greater detail. Many foreign courts have explicitly recognized substantive environmental rights and a governmental duty of care towards citizens with regard to environmental quality or climate stability, even if they have left the specific path forward to be defined by other branches of government.

A number of courts have questioned the adequacy of their governments’ efforts to mitigate or adapt to climate change—especially in light of the nationally determined contributions to which they committed as part of the 2015 Paris Agreement. Courts around the world have held that their respective governments ran afoul of their constitutional, statutory, or common-law obligations (in some cases, violating the plaintiffs’ individual rights to a healthy environment) because of the insufficiency of their plans to reduce GHG emissions. Many courts—including those of France, Germany, Pakistan, and the Netherlands—responded to these violations by ordering the governments to develop new frameworks for reducing GHG emissions—with Germany’s Constitutional Court, for example, directing the federal parliament to adopt a new climate change mitigation plan.

Some courts have been willing to entertain claims raised against private companies—or against the government vis-à-vis its failure to adequately regulate a private actor. One of the first such cases was the 2005 Nigerian case, Gbemre v. Shell Petroleum. In that case, several plaintiffs filed suit against the Nigerian government for failing to stop Shell Petroleum from gas flaring, which they argued had devastating environmental effects in their local community. The Federal High Court—akin to a federal district court in the United States—ultimately found an environmental right in the Nigerian constitution and the African Charter on Human and Peoples’ Rights, and it declared that Shell’s gas flaring violated these rights as well as other human rights to life and dignity, and that provisions of Nigerian law that allowed the gas flaring to take place violated the country’s constitution. In the Netherlands, the same district court that ordered the government to reduce its greenhouse gas emissions concluded in a separate case that Royal Dutch Shell had violated the environmental plaintiffs’ rights under the Dutch Civil Code and ordered Shell to reduce its GHG emissions by 45% by 2030 from the oil company’s 2019 baseline.

And while explicit rights for Nature remain rare in national constitutions or in global jurisprudence, developments in Colombia and Ecuador have bucked this broader trend. In Ecuador, the Pachamama (or “Mother Earth” to indigenous Ecuadorians) has been protected under the country’s constitution since the late 2000s. In 2021, the Constitutional Court gave this protection real force in holding that mining undertaken by Enami, the state mining company, in an ecologically sensitive part of the rainforest was unconstitutional. It grounded its decision not only in the rights of nature but also in the connection between those rights and human rights to the environment. Judge Agustín Grijalva Jiménez wrote:

The concept of nature that the Constitution develops in Article 71 includes human beings as an inextricable part of nature, and of the life it reproduces and realizes in its breast . . . . In order to highlight this relationship, the Constitution in its preamble states that Mother Nature is vital for our existence. Here the Constitution perceives (or pays close attention to) the fact that humanity’s own existence is inevitably tied to that of nature, for he conceives it as part of himself. The rights of nature necessarily span to the rights of humanity as a species of nature.

In a similar case, Colombia’s Supreme Court accepted the claims of a group of youth plaintiffs who argued that their rights to a healthy environment were being debased by the government’s failure to end deforestation in the Colombian Amazon region. In its 2018 Future Generations decision, the Court declared that “fundamental rights of life, health, the minimum substance, freedom, and human dignity are substantially linked and determined by the environment” and ordered development of a plan to address the deforestation concerns brought forward by the plaintiffs. The Court also granted the Amazon Basin something akin to legal personhood, finding that it was a “subject of rights” and was therefore “entitled to protection, conservation, maintenance, and restoration.”

These decisions reflect the influence of international legal regimes in many countries around the world. Many of the decisions favorable to environmental rights have linked together generic rights in national constitutions—such as a general right to life—with more specific protections in transnational treaties or agreements to craft a right to a healthy environment. The remedies developed in these countries have not always been especially specific. The orders of several European courts that governments reduce greenhouse gas emissions left the governments space to design their own plans—though in some cases, the government remained under the supervision of the court as it developed a plan. Indeed, in Pakistan, the court-created Climate Change Commission was designed to facilitate cooperation among government officials—not to function as a judicially imposed policymaking force.

C.  Conclusions from Survey of Global Environmental Rights Cases

Three broad conclusions can be drawn from this survey of the judicial response to environmental claims. First, environmental rights are being recognized ever more broadly across the world. Second, the frame and scope of these rights and the underlying legal theories advanced vary across the world—reflecting the individual circumstances, judicial traditions, values, and political dynamics of each society. Finally, the United States stands apart from the rest of the world with regard to the broad trend toward court recognition of environmental rights, clearly suggesting a distinct legal framework and tradition, which is the subject of Part III.

III.  UNDERDEVELOPMENT OF ENVIRONMENTAL RIGHTS IN THE UNITED STATES

The pattern that emerges from Part II of expanding judicial recognition of environmental rights—except in the United States—requires us to delve into the issue of American judicial exceptionalism in the environmental context. Specifically, why are U.S. courts an outlier with regard to recognizing environmental rights? A number of explanatory factors are explored below in pursuit of a better understanding of the unique elements of America’s legal structure and traditions that translate into a more constricted view of environmental rights than exists in other countries, particularly other economically advanced democracies.

A.  The State of Climate Litigation in the United States

Litigants in the United States seeking to enforce their environmental rights or advance the U.S. response to climate change have faced a skeptical judiciary. Of particular note in this regard, in the 2022 case of West Virginia v. EPA, the Supreme Court found that the EPA lacked authority under the Clean Air Act’s statutory framework to regulate greenhouse gas emissions from existing coal and gas-fired power plants via “generation-shifting”—a regulatory mechanism akin to a cap-and-trade system for greenhouse gases. In coming to this conclusion, the Court relied on a separation of powers argument and a refinement of its political question jurisprudence—now articulated as the major questions doctrine.

In Juliana v. United States, arguably the most high-profile climate case in the federal courts to date, twenty-one youth plaintiffs, organized by an Oregon-based environmental group called Our Children’s Trust, asserted that their substantive due process rights to a life-sustaining climate system had been violated and, further, that the federal government had failed to uphold its public trust doctrine obligation to protect shared natural resources. The Oregon District Court initially ruled that the case could go forward based on the theory that “a climate system capable of sustaining human life” was a fundamental right under the Due Process Clause of the Fifth Amendment. But the Ninth Circuit, while conceding that the plaintiffs had demonstrated the risks of climate change and the federal government’s contribution to the build-up of GHGs in the atmosphere, declared that the plaintiffs lacked standing. The panel majority based this conclusion on a legal finding that the injury plaintiffs sought to have addressed was not “redressable” by the courts. More specifically, the majority opinion of the Ninth Circuit panel leans on separation of powers arguments and the limits of authority of Article III judges to suggest that, in providing equitable relief, courts are always constrained and can only act where they can identify “limited and precise” legal standards to follow.

Climate change litigants in state courts have faced similar hurdles. In 2020, the Oregon Supreme Court rejected public trust doctrine claims from a group of similar youth climate plaintiffs—including the lead plaintiff in Juliana—in Chernaik v. Brown. The court majority concluded that the doctrine applies only to the management of navigable waters and underlying lands, that it should not be extended to include the atmosphere, and that it does not require state action to address climate change as a potential source of damage to these resources.

In 2021, the Washington Court of Appeals rejected a similar claim in Aji P. ex rel. Piper v. State, in which youth climate plaintiffs asserted fundamental rights to a stable climate system. While declaring that the “right to a stable environment should be fundamental,” the Court of Appeals leaned heavily on the logic of separation of powers and the political question doctrine as spelled out in the Supreme Court’s Baker v. Carr decision and ultimately held that the claims presented non-justiciable political questions. The Supreme Court of Washington denied review of the appellate court’s decision, with two justices dissenting.

The Alaska Supreme Court reached a similar decision in the 2022 Sagoonick v. State case. There, the plaintiffs advanced arguments similar to those of the plaintiffs in Juliana, Chernaik, and Aji P. But the court rejected their claims based on the conclusion that the plaintiffs raised non-justiciable political questions. Though the court acknowledged that, under the Alaska Constitution, it did have a role to play in supervising the state’s management of natural resources, those same constitutional provisions also “expressly delegated to the legislature the duty to balance competing priorities for the collective benefit of all Alaskans.” The court declined to intervene in the face of these political questions, but it noted that the plaintiffs had several alternative avenues for recourse, including the challenging of “discrete actions implementing State resource development and environmental policies,” pursuing a ballot initiative to codify their preferred policies, and lobbying state policymakers.

In finding that the relief requested by plaintiffs (a court order for more aggressive government policies to address climate change) has no judicially manageable standards and risks usurping the authority of the legislative and executive branches, the courts here followed a venerable tradition of judicial restraint within U.S. courts, but one to which the judiciary does not always adhere—as I explore further below.

B.  No Environmental Provision in the U.S. Constitution

Perhaps the most obvious place to start the search for an explanation for the resistance of U.S. courts to assertions of environmental rights lies in the absence of any explicit environmental or public health provision in the U.S. Constitution. Indeed, in almost all of the international environmental rights cases reviewed in Part II above, courts make reference to provisions in the country’s constitution or other foundational legal documents (including reliance on the ECHR). But only in a minority of cases was the constitutional provision one that specifically mentions the environment. Much more often, courts read environmental rights into provisions for life or health. Of course, the U.S. Constitution does not make mention of these terms either.

But this explanation is not fully satisfactory. Other countries with constitutions that make no mention of the environment or related terms have seen the judiciary expand environmental rights and even extend legal protection to elements of Nature. In fact, my research suggests that thirty-seven other nations find themselves in a similar posture (see Appendix). But courts in many of these countries have advanced a broader view of environmental rights than is found in the United States. Indeed, some of these nations have been trailblazers in judicial recognition of fundamental rights in support of environmental protection claims. In New Zealand, for example, where there is no constitutional provision for environmental rights, the Whanganui River has been given legal personhood, with the Maori people who claim ancestral rights to the waterway acting in a trusteeship role to ensure the resource is protected. Likewise, although Canada lacks explicit environmental language in its constitution, its courts have repeatedly affirmed the authority of federal policies that regulate for the purpose of environmental protection. And multiple Canadian courts have assigned environmental rights to Aboriginal titleholders as well as Aboriginal title lands.

In other countries, the executive branches have asserted environmental rights in advancing pollution control and sustainability initiatives. In Kiribati, for example, despite an absence of constitutionally enshrined environmental rights, the government has developed an extensive rights-based national policy focused on a healthy environment, and the nation’s political leaders have spoken at length on the global stage about the need to advance this right worldwide. Similarly, Japan’s legislature introduced a mandamus action within its Administrative Case Litigation Act in 2004, which the Japanese Supreme Court has interpreted as a rights-based obligation on the government to minimize damage to health from environmental pollution.

While the lack of explicit or implicit environmental provisions in the U.S. Constitution starts to explain the narrow view of environmental rights emerging from American courts, it cannot be seen as a full explanation given the divergent outcomes across the world in climate change cases and other legal challenges based on environmental rights.

C.  Non-Justiciability and Judicial Deference to the Political Branches

In the recent U.S. court decisions dismissing environmental rights claims, the standing of plaintiffs to bring a case has almost always been rejected based on the legal theory that courts cannot provide the remedy being sought—notably, a court order mandating more vigorous climate change policies. This conclusion builds on the separation of powers, political question, and the recently articulated major questions doctrines, as well as the longstanding Baker v. Carr framework, which suggests that courts should only take up cases where there are “appropriate modes of effective judicial relief.” But as noted in Part II above, other courts around the world have not hesitated to declare government policies inadequate and order the remedies requested in similar circumstances. So why does the United States stand apart?

Perhaps the real explanation lies in how seriously U.S. courts struggle with whether the injuries for which plaintiffs seek redress are within the power of the judiciary to address. In the Juliana case, the Ninth Circuit agreed with the district court that the plaintiffs had alleged particularized claims of injury from GHG emissions that could be linked to federal government actions (including leases and subsidies) in support of fossil fuel producers. But the court concluded that the plaintiffs had failed to meet the redressability requirement for standing. The majority opinion rejects the notion that courts could “order, design, supervise, or implement” the sort of climate change action plan that plaintiffs sought. The two-judge majority goes on to declare that the plaintiffs must take their concerns to the “political branches” of the government.

This line of reasoning fits into a long American tradition of courts steering clear of political questions that are deemed to be better resolved by the political branches of the government—including in a series of prior cases involving environmental claims. The Juliana majority notes that the transition to renewable energy requires “a host of complex policy decisions entrusted, for better or worse, to the wisdom and discretion of the executive and legislative branches.”

But is this outcome really mandated? Couldn’t the court have declared the government’s current climate change posture inadequate and ordered a ramped-up response to the build-up of GHGs in the atmosphere—while leaving the details of how to do so to the other branches? As Aharon Barak, former Justice of the Israeli Supreme Court, observes in The Judge in a Democracy, which analyzes the role of the judiciary, “[T]he separation of powers is not pure and . . . each branch performs some functions that belong to the other branches[] so long as they are intimately related to the branch’s primary function.” Barak goes on to argue that the principle of checks and balances stands alongside the separation of powers as a foundational element of a functioning democracy—thus requiring the judiciary to act if the other branches fail to uphold the law or otherwise perform their duties. He concludes:

The more non-justiciability is expounded, the less opportunity judges have for bridging the gap between law and society and for protecting the constitution and democracy. . . . [T]he court should not abdicate its role in a democracy merely because it is uncomfortable or fears tension with the other branches of the state.

As Part II demonstrates, courts around the world seem to follow this principle in their willingness to step into environmental controversies, including cases that require them to declare the policies of the government inadequate—and to order more robust responses, including but not limited to climate change policies, to the claims of a diverse set of plaintiffs.

But the U.S. judiciary has traditionally taken a much narrower view of its proper role—and concomitantly has been much more likely than courts elsewhere to declare a matter non-justiciable when confronted with cases that seem to present political questions—following the Supreme Court’s guidance and multi-factor test established in Baker v. Carr. In Juliana, for example, the Ninth Circuit concluded that the plaintiffs’ request for relief would have entailed handling an issue committed to other branches of government (Baker v. Carr factor 1), forcing them into establishing a remedy where there were no judicially manageable standards (factor 2), and requiring the court to make policy determinations (factor 3). In finding the matter non-justiciable, the majority declared that they were “bound ‘to exercise a discretion informed by tradition, methodized by analogy, and disciplined by system’ ” and that “the plaintiffs’ case must be made to the political branches,” noting further and somewhat curiously “[t]hat the other branches may have abdicated their responsibility to remediate the problem does not confer on Article III courts, no matter how well-intentioned, the ability to step into their shoes.”

But this restraint is not mandated, as the dissent in Juliana makes clear. The dissenting Ninth Circuit judge signals that she would not have found the requested relief non-justiciable and believes that a court order that the federal government take more vigorous action to address climate change would be efficacious, even if such a command were not likely to fully solve the problem. She rejects the majority’s “deference-to-a-fault” approach and highlights a “countervailing constitutional mandate to intervene where the other branches run afoul of our foundational principles.”

Similar separation of powers and political question arguments dominate Aji P., in which the Washington Court of Appeals found plaintiffs’ claims nonjusticiable based on four of the Baker v. Carr factors. Notably, the majority concluded that: (1) the issues on which the plaintiffs sought judicial relief were “constitutionally committed” to the legislative and executive branches of government; (2) there exists no “judicially manageable standard” for providing relief; (3) the legislature and executive agencies have established climate change policies (albeit ones that plaintiffs believe are inadequate); and (4) judicial intervention in this case would “disrespect[] the coordinate branches” of government. While the Court of Appeals decision was upheld by the Washington Supreme Court, the Chief Justice dissented and indicated that he would have allowed the plaintiffs’ case to go forward, observing that “the Court of Appeals decision unnecessarily expanded the political question doctrine” and that “considerable statutory authority” supports the plaintiffs’ claim of fundamental environmental rights.

Similarly, Sagoonick was decided by just one vote, over a vigorous dissent by Justice Peter Maassen, who noted that the state’s public trust doctrine incorporated a “constitutional right to a livable climate.” Justice Maassen criticized the majority for failing to issue a declaratory judgment “recogniz[ing] a constitutional right to a livable climate—arguably the bare minimum when it comes to the inherent human rights to which the Alaska Constitution is dedicated.” Justice Maassen further noted that the court had been repeatedly presented with the same question and that declining to answer it on justiciability grounds “will not eliminate it but will only postpone our answer, in the meantime putting the burden of redundantly litigating it on plaintiffs, the State, and the trial courts.” He added that recognition of the right does not require the court to develop a remedy itself, to immediately and fully “answer every subsequent question” about how the right might be invoked, or to convert any policy that harms the climate in the slightest into a rights violation—but that the court had a duty to answer the question.

Although the outcome in the Chernaik case in Oregon turned on the court’s unwillingness to expand the reach of the public trust doctrine, dissenting Chief Justice Martha Walters expressly rejected the suggestion that a judicial declaration regarding climate change would be inappropriate and that such matters should be left to the legislative and executive branches. In fact, she concluded that “the judicial branch also has a role to play.” The dissenting opinion turns aside the separation of powers argument for judicial restraint, citing Marbury v. Madison and declaring that one of the core functions of the judicial branch “is to determine the legal authority and obligations of the other two branches of government.” In addition, the dissent takes apart the suggestion that the relief plaintiffs seek lacks judicially manageable standards (citing Baker v. Carr) and would require the court to make “particular policy decisions.” In rejecting the need for the court to show deference to the other branches in the face of a “political question,” Chief Justice Walters makes it clear that a court invalidating a policy decision of another branch is not the same thing as the court itself making a policy decision—and explains that she would have stepped up to the “obligation to determine what the law requires” and ordered a more robust state response to the threat climate change poses for public trust resources.

Signs that the tide may be turning in U.S. climate litigation have begun to emerge. Most notably, a trial court in Montana recently allowed the legal challenge brought by a group of youth plaintiffs to proceed to trial. In this case, the plaintiffs argued that several state statutory provisions violated the Montana Constitution’s environmental rights provision, the public trust doctrine, and their right to a stable climate system. The state argued that the plaintiffs’ claims presented non-justiciable political questions. In 2021, the trial court granted the state’s motion to dismiss as to all requests for relief except the plaintiffs’ request for a declaratory judgment that the state had violated their rights. Accordingly, a trial will take place in 2023 on this question.

Judges and justices in other countries have not felt constrained by separation of powers or political question concerns—nor about the risk that ordering action on climate change involves decisions that have no judicially manageable standards. Scholars have similarly raised questions about the logic and advisability of declaring cases to be non-justiciable. Justice Barak, for instance, condemns the concept of normative non-justiciability (of the sort Justice Brennan develops in Baker v. Carr in concluding that there will be no judicially manageable standards for addressing some issues). He argues that every dispute has “criteria for its resolution . . . . There is no sphere containing no law and no legal criteria . . . . The mere fact that an issue is ‘political’—that is, holding political ramifications and predominant political elements—does not mean that it cannot be resolved by a court.” Justice Barak likewise rejects the notion of institutional non-justiciability. He declares Justice Brennan’s Baker v. Carr argument—that for a court to take up an issue that has been committed to another branch risks disrespecting a coordinate branch of the government or creating chaos with “multifarious pronouncements by various departments on one question”—to be “unconvincing.” As Justice Barak observes, “all of the issues that are considered in constitutional or administrative law” have been entrusted to political authorities.

The jurisprudence of non-justiciability—and the tradition of judicial restraint in the face of cases that raise separation of powers issues or political questions—clearly represents a distinct element of the American legal tradition. U.S. judicial norms in this regard stand apart from the legal frameworks in place in other nations—as the next section explores in more detail.

D.  Negative Rights

While the opinions in U.S. environmental rights cases and the related academic literature focus on various elements of the separation of powers and political question doctrines, what more notably underlies the U.S. legal framework and sets the nation apart from the practice of judges elsewhere in the world (and particularly in Europe) is the American constitutional emphasis on securing negative rights and wariness about assertions of positive rights—at least at the federal level. So while the commentary centers on non-justiciability and the various Baker v. Carr factors, what undergirds the American exceptionalism is a distinct approach to rights—building, of course, on a federal Constitution that emerged at a moment when the critical issue was protecting the citizen from an abusive state and that therefore emphasizes negative rights.

1.  Positive Rights in State Constitutions

In fact, the U.S. structure of rights is somewhat more complicated than just suggested. Notably, some U.S. state constitutions explicitly secure positive rights. Indeed, the Massachusetts Constitution (of 1780) establishes a right to education. Other states have likewise written positive rights into their constitutions, including rights to education, labor protections, and protections for arrestees and prisoners. Many other states have adopted expansive legal interpretations of their state-level equivalents to the Bill of Rights in the context of abortion, death penalty, and criminal justice litigation. And seven states have expressly defined environmental rights in one form or another—with New York amending its constitution in 2021 to add an environmental rights amendment.

2.  Federal Positive Constitutional Rights

I note further that, while the federal Constitution largely takes the form of establishing rights against government intrusion on the liberties of the people, there are some exceptions where positive rights have been established. For example, American courts have come to recognize the right of an accused person to testify in court in their own defense. As my Yale colleague Akhil Amar notes, this reversal of the prior legal tradition came to be accepted because the old rules raised problems of “legal coherence.” Likewise, the advance of civil rights in the 1960s and gay rights in the 2000s might also be seen as the recognition of positive rights under the federal Constitution. The U.S. Supreme Court’s jurisprudence paints an inconsistent picture, however, of the legal logic the Court perceives itself to be advancing. A number of the Court’s civil rights opinions raise doubts about whether these rulings should be understood as advancing positive rights. Indeed, constitutional law scholars have criticized the landmark decisions in Lawrence v. Texas and Obergefell v. Hodges as rather imprecise in specifying the rights being extended.

3.  European Tradition of Positive Rights with Horizontal Effect

European courts (and a number of other judicial systems around the world) have taken another tack. Not only have they been more willing to specify positive rights, they have increasingly moved toward giving human rights horizontal effect—meaning that the courts have been willing to define rights (including environmental rights) that create obligations not only for governments but also for other citizens and corporate entities. This tradition has resulted in a framework of environmental rights that are not just broader than in the United States, but also deeper in that they have direct effect on private parties—creating affirmative duties to which companies (and others) must adhere.

Most notably, the ECtHR requires the parties to the ECHR to “secure to everyone within their jurisdiction the rights and freedoms defined in . . . [the] Convention.” The ECtHR has declared that states have a positive obligation to protect the rights under the ECHR, including the adoption of an adequate regulatory framework and a duty to prevent indirect or horizontal effects caused by other citizens or entities. Note that while individuals cannot make a claim of human rights violations against other private individuals, they can call upon the state to enforce their human rights vis-à-vis private parties. In Pla and Puncernau v. Andorra, a case involving the interpretation of a will, the ECtHR famously stated:

Admittedly, the Court is not in theory required to settle disputes of a purely private nature. That being said, in exercising the European supervision incumbent on it, it cannot remain passive where a national court’s interpretation of a legal act, be it a testamentary disposition, a private contract, a public document, a statutory provision or an administrative practice appears unreasonable, arbitrary or, as in the present case, blatantly inconsistent with the prohibition of discrimination established by Article 14 and more broadly with the principles underlying the Convention.

E.  Polycentric Problems and Judicial Overreach

The American judiciary’s hesitance to take up climate change cases reflects a further distinct legal tradition: a concern that polycentric problems—ones that involve balancing of interests and apportioning of costs—are particularly unsuitable for adjudication by the courts. This theory is often associated with Professor Lon Fuller, who analogized polycentric problems to a spider web, where a pull on one strand puts stress across the many other strands and leads to instability. Fuller thus argued that complex policy problems must be left to political processes and not resolved by the judiciary. His theorizing has had a broad impact within the Anglo-American legal tradition.

1.  Environmental Issues as Polycentric Problems

Environmental problems generally, and climate change in particular, present just the sort of polycentric policy challenge that Fuller warned was inappropriate for courts to adjudicate. Not only does climate change policy involve many elements and choices—involving production processes, pollution control possibilities, transportation systems, power generation and energy strategies, clean technology development, and many other aspects of life in modern society—but it also involves multiple trade-offs in which environmental gains for some almost always imply environmental costs for others.

Thus, unlike assertions of civil rights, which will often present bright-line choices with clear underlying moral imperatives, environmental rights seem much less clear—and indeed, potentially quite intricate and hard to specify with precision. When the moral and constitutional claim is asserted that Black citizens have the right to vote, there is no balancing to be done and no legitimate countervailing argument. No one can claim a right to prevent Black citizens from voting. Likewise, when gay rights are asserted, those who might wish to prevent gay citizens from living their lives as they wish have no firm foundation on which to build. These rights are relatively absolute.

In contrast, assertions of environmental rights might seem to be relatively unbounded. Do my environmental rights extend to a pristine environment? To a habitable environment? How much money should society (or polluters) be forced to spend to vindicate my right? Do I have a right to experience Nature as it is? Does the fact that Nature is not static but rather in a constant state of flux change the scope of the rights? Simply put, if I have a right to a healthy environment, who owes me what duties and to what extent—and at what cost?

Moreover, the resolution of these questions is likely to impose externalities on other citizens and private actors in society. A judicial order that a government adopt an emissions-reduction plan, while ostensibly requiring government action, will inevitably require private action to comply with the government regulations that follow. Courts may be more reluctant to order remedies that have this sort of economic impact—as the Alaska Supreme Court suggested in Sagoonick. There, in rejecting the plaintiffs’ assertion of a right to a healthy environment and denying their requested relief, the court noted that the Alaska Constitution “directs the legislature (and not the judiciary) to manage and develop the State’s natural resources for the maximum common use and benefit of all Alaskans.” As the court made clear, the legislature is responsible for striking “the proper balance between development and environmental concerns,” and the court “cannot, and should not, substitute [its] judgment for that of the political branches.”

While legal cases that require analysis of policy choices present challenges for the judiciary, a number of scholars, including Professor Owen Fiss, have pushed back on Lon Fuller’s arguments. Fiss suggests that the judiciary must not shy away from upholding fundamental rights even in the face of polycentric problems. He argues that “courts should not be viewed in isolation but as a coordinate source of governmental power, as an integral part of the larger political system.” In the American legal system, “the legitimacy of the courts and the power judges exercise in structural reform . . . are founded on the unique competence of the judiciary to . . . give concrete meaning and application to the public values embodied in an authoritative text such as the Constitution.” Cass Sunstein raises a parallel argument: the task of judges in adjudicating disputes—even those seemingly governed by “some preexisting rule”—is intricate and necessarily requires value judgments. Moreover, it is a fiction that courts are not already making the sort of decisions that Fuller argued they should not, and do not, make. And as Abram Chayes argued, the scope and breadth of injunctive relief—including that which is widely accepted in the American legal system—involves the precise sort of value judgments that courts theoretically ought to shy away from.

In the context of environmental rights, it is inapposite to suggest that the questions involve too many imprecise calculations or debatable values. If the judiciary is able to weigh the competing concerns of other rights, both those presently enshrined in the Constitution and those recognized in common law, which implicate nearly identical concerns, it is capable of doing so here. As the dissent in Juliana noted, if courts are skeptical of granting the kind of relief sought by plaintiffs—that is, a broad order to do something that requires a coordinated effort at all levels of government, likely needing to be overseen by individual judges or special masters—there is a readily available example in Brown v. Board of Education. There, “the Supreme Court was explicitly unconcerned with the fact that crafting relief would require individualized review of thousands of state and local policies that facilitated segregation.”

2.  Politicization of the Judiciary

A related argument suggests that courts are obliged to steer clear of cases that require making policy choices for fear of politicizing the judiciary. Under this line of thinking, courts in the United States are more concerned about politicization or the political nature of climate policy questions than are courts of other nations, perhaps reflecting the deep partisan divides in American politics over environmental issues and climate change—rifts that do not exist to the same extent in most other nations. Concerns over judicial policymaking are arguably reflected in the Supreme Court’s recent invocation of the major questions doctrine to invalidate proposed climate regulations, a move that, in leaving significant policy choices to Congress, could be read as the Court declining to resolve questions with partisan overtones.

But the Court’s reliance on the major questions doctrine to steer clear of hot political questions—and the argument that the Court is hyper-protective of its legitimacy as an apolitical arbiter—hardly seem convincing in a post-Dobbs v. Jackson Women’s Health Organization world. Justice Alito opens the majority opinion of Dobbs, in fact, with an acknowledgement that the issue of abortion is “a profound moral issue on which Americans hold sharply conflicting views.” The majority opinion recognizes the criticisms that a “decision overruling Roe would be perceived as having been made ‘under fire’ and as a ‘surrender to political pressure,’ ” but it insists that political or politicized responses to the decision are immaterial in the Court’s eyes: “We do not pretend to know how our political system or society will respond to today’s decision. And even if we could foresee what will happen, we would have no authority to let that knowledge influence our decision.” In other words, fear of politicization was expressly waved away by the Supreme Court in Dobbs. Climate change also represents a profound moral and political issue, and a judicial decision demanding governmental action on climate change would no doubt provoke partisan backlash; yet it is hard to see what distinguishes that prospect from the politically charged question the Court was so eager to take up in Dobbs.

Besides, concerns over politicized climate-related decision-making raise the question of whether adjudicating political matters relating to climate change would threaten the legitimacy of the judiciary to the extent that critics claim. The presumption of grave risk seems overstated. In “invalidating actions by other branches of the state . . . the court does not criticize the internal logic or practical efficiency of such political considerations,” instead solely focusing on the legality of the action taken. In this respect, it is entirely possible for a court to assert a right, and hold that a legislature’s action violates that right, without actually infringing on the legislature’s policymaking discretion. In Robinson Township v. Commonwealth, for example, the Pennsylvania Supreme Court invalidated portions of an oil and gas regulation passed by the legislature on the basis that it violated the plaintiffs’ state constitutional right to a healthy environment. In response to arguments that the plaintiffs’ claims presented non-justiciable political questions, the court had a forceful response. It noted that a court’s review of policy choices made by the legislature “does not challenge [its] power” to set policy; “it challenges whether, in the exercise of the power, the legislation produced by the policy runs afoul of constitutional command.” “[T]he political question doctrine,” it added, “is a shield and not a sword to deflect judicial review.” Thus, the idea that judicial restraint is to be applauded when courts face a case with political overtones should be questioned if not rejected outright, especially in the face of fundamental threats to society, such as those posed by climate change.

While it is understandable that the deep political divides that have riven America have pushed U.S. judges to be extra cautious about taking up political questions, there are good arguments that this posture is not just inappropriate but constitutionally incorrect. Indeed, it may be that with regard to the most political issues—where the legislative branch is too divided to act—courts have a special obligation to step into the breach. In fact, in explaining when the Ninth and Fourteenth Amendments provide a foundation for unenumerated rights that should be acknowledged by courts, Akhil Amar notes that such rights are most easily recognized when there exists clear national support and particularly when Congress has recognized such rights. But he goes on to say that courts may need to secure fundamental rights even without these signals of broader support because the judiciary has a “role as a critical backstop in the event that Congress ever fails to act with proper vigor.”

F.  America’s Benefit-Cost Approach to Environmental Regulation

One further explanation for the U.S. judiciary’s exceptionalism on environmental rights might be found in the distinctive structure of American environmental law and regulations. In particular, American regulatory practice has developed around a law and economics approach to environmental protection that permits power plants, mines, factories, and other entities to pollute (literally issuing these facilities permits) so long as the benefits to society of the economic activity exceed the emissions harms created by the enterprise. As Don Elliott and I explain, this net social benefits approach to pollution control—which builds on a Kaldor-Hicks economic efficiency logic (rather than a Pareto standard, which would require compensation of those suffering the pollution impacts)—results in significant unabated emissions in many instances. In privileging economic activities over environmental rights, this environmental policy framework could be read as a signal that America’s political branches have established a mechanism for balancing the competing interests discussed above and concluded that environmental rights should give way to economic growth and jobs as the priority. Such an observation might well lead U.S. judges to conclude that they should not take up cases that tread on this policy domain—particularly to the extent that the environmental arena is one of contested rights and divergent values.
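The contrast between these two efficiency standards can be stated in stylized form (a simplified sketch offered for illustration, not drawn from the cited sources). Under a Kaldor-Hicks criterion, a polluting activity that generates aggregate benefits $B$ and aggregate harms $H$ (including the damages borne by those exposed to the emissions) passes muster whenever
$$B - H > 0,$$
with no requirement that the winners actually compensate the losers. A Pareto-based standard would further require transfers $t_i$, funded out of the gains, such that each affected person $i$, with individual benefits $b_i$ and harms $h_i$, satisfies
$$b_i - h_i + t_i \geq 0,$$
so that no one is left worse off by the permitted activity. The gap between the two conditions consists precisely of the uncompensated pollution spillovers that this Article treats as uninternalized externalities.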

IV.  THE SUSTAINABILITY IMPERATIVE AND PATHWAYS TO SECURING U.S. ENVIRONMENTAL RIGHTS

Perhaps the most curious aspect of the recent environmental-rights-based climate change decisions across several federal and state courts is the broad recognition that the problem plaintiffs seek to address is both real and significant. The Juliana majority says, in particular: “There is much to recommend the adoption of a comprehensive scheme to decrease fossil fuel emissions and combat climate change.” It further suggests that a more vigorous climate change policy response is “a matter of national survival.” Yet the court declines to act.

This paradox raises several important questions: Does the political dispute and ongoing contestation over environmental policy really justify the U.S. judiciary’s dodging of questions involving fundamental rights to a habitable environment? Is the judiciary’s restraint still justified if responding to climate change is seen as a matter of national survival? Or, to turn these questions around, what is the path forward that might allow environmental rights to be secured in the current U.S. political context? How might courts be positioned to respond to the threat posed by climate change and the need to put American society onto a more sustainable footing?

In addressing these questions, the starting point must be the fact that climate science has established beyond any credible doubt the threat posed to humanity by the build-up of GHGs in the atmosphere. More generally, society has begun to recognize a sustainability imperative that derives from the ever-more-clear findings of ecosystems science, which suggest the need to restructure our economic activities to ensure that they do not create environmental impacts that transgress critical planetary boundaries in a manner that might destabilize the Earth systems on which all life depends.

Policy emphasis on sustainability is not new. A commitment to sustainable development that “meets the needs of the present without compromising the ability of future generations to meet their own needs” (the Brundtland Commission’s definition in the 1987 report, Our Common Future) has been a core commitment of the world community for decades and understood to require limits on pollution and natural resource depletion. The foundational importance of sustainability as a core principle for life in the twenty-first century has recently been reiterated with the adoption of the U.N. Sustainable Development Goals and the 2015 Paris Agreement on Climate Change, as well as the 2021 Glasgow Climate Pact, under which 197 nations (including the United States) committed to net-zero GHG emissions by mid-century.

I have argued elsewhere that sustainability (by its very definition) requires a changed foundation for business and our economic life centered on bringing an end to uninternalized environmental externalities. In this light, Part IV explores how environmental rights to a sustainable future might be established in the American political context.

A.  Reading Positive Environmental Rights into the U.S. Constitution

While the U.S. Constitution does not explicitly recognize environmental rights nor even rights to life or health, the Supreme Court has built upon the Due Process Clauses of the Fifth and Fourteenth Amendments a set of protections for fundamental interests of American citizens including the right to marry, maintain a family, and choose one’s own occupation. These fundamental rights have been judicially defined and as such “may not be submitted to vote; they depend on the outcome of no elections.” In citing these cases and the pathway by which these rights were recognized, the dissenting judge in the Juliana case observed that the judiciary need not stand by in the face of climate change and allow a “calamity.” Rather, courts could secure a fundamental right to a habitable environment in a similar fashion.

Finding constitutional space for new rights has been done in a variety of ways in other circumstances. For example, unenumerated rights can be found in the “penumbras” and “emanations” of the Bill of Rights, as Justice Douglas observed in his Griswold v. Connecticut opinion that established a constitutional right to privacy. Although the Supreme Court is increasingly strict in gatekeeping the Due Process Clause through the “deeply rooted” test on which it has relied in recent cases, there may be significant foundations upon which to build in securing environmental rights. Notably, the Preamble to the Constitution declares the purpose of the document to be promotion of “the general welfare.” For a court ready to take up the challenge of combating climate change, this phrase offers a foundation for an assertion of rights to a habitable environment, especially in the face of the threat to humanity posed by climate change. The Constitution should also be read in light of the natural rights beliefs that undergirded the American Revolution and the intentions of the Founders as expressed in the Declaration of Independence with regard to inalienable rights to “Life, Liberty, and pursuit of Happiness” and the insistence that the “new Government” should be designed to “effect” the “Safety and Happiness” of the people—all of which could be read as requiring courts to secure the environmental rights needed to avoid catastrophic climate change, or as protecting rights “implicit in the concept of ordered liberty.”

As Akhil Amar suggests, unenumerated rights can alternatively be discovered in the “lived practices and beliefs of the American people.” As an example, he cites Justice Harlan’s concurring opinion in Griswold as offering a better basis for inferring a right to privacy than the majority opinion provides. Amar adds that such unenumerated rights are most easily advanced when they align with other “canonical sources” such as the Declaration of Independence or state constitutions—factors that might well now argue for recognition of environmental rights. Moreover, the facts that (1) polling suggests that a very substantial majority of Americans now support a more comprehensive response to climate change; and (2) businesses of all sizes and across virtually all industries in America have adopted net-zero GHG emissions targets perhaps open the way for U.S. courts to define the right to a habitable environment as constitutionally protected—as made clear by the expectations and values of the people as expressed in both their daily and professional lives.

While there are solid constitutional foundations for securing environmental rights in America—especially for a judiciary that understands its obligation to act in the face of an overarching threat and inaction on the part of the political branches—political reality means a high hurdle must be overcome to get U.S. courts to recognize positive environmental rights.

B.  Rights for Nature

Another path forward would be to heed Chris Stone’s call to give Nature legal personhood. But in the United States, the argument for extending legal rights to natural objects is seen by many as “radical” and has thus not gotten much traction—with one exception. In 2019, the citizens of Toledo, Ohio, voted to grant Lake Erie a “Bill of Rights,” which included “the right to exist, flourish, and naturally evolve.” But this initiative was quickly struck down in federal court, with the judge ruling that the proposed legal rights for the lake were unconstitutionally vague. The court asked, “What conduct infringes the right of Lake Erie and its watershed to ‘exist, flourish, and naturally evolve’?” It went on to say, “The line between clean and unclean, and between healthy and unhealthy, depends on who you ask.”

While the Toledo court was perhaps too quick to dismiss the idea of rights for Lake Erie, the revered environmental law professor Joseph Sax decades ago offered a logic for not trying to advance “rights for objects” as Stone proposed. In a review of the standing issues addressed in the Supreme Court’s Sierra Club v. Morton decision, Sax noted: “If Stone is saying only that we should take account of diffuse citizen interests not routinely represented,” then ascribing the rights to Nature is “verbal overkill.” What is really required, observed Sax, is “a more spacious view of the right of citizens” to ensure that courts take seriously the “risks of long-term, large scale practically irreversible disruptions to ecosystems”—thus specifying five decades ago the path forward that this Article seeks to advance.

C.  State Constitutions

As discussed in Part III, seven U.S. states have provisions that establish environmental rights in one form or another. Although these rights have not yet yielded promising outcomes for litigation to implement broad-based climate change policies, it may well be that vindication of these rights in state courts will provide a basis for mandating greater government action in the years ahead. State courts might also require a more vigorous climate change response by corporate entities, which could have implications more broadly across the national marketplace. If more states were to follow New York’s recent example and adopt environmental rights constitutional amendments (a process that is much easier at the state level than the national one), this trend might be a further signal of the changing values of the American people—therefore justifying the recognition of positive environmental rights by federal judges.

State experiences with rights to public education provide a theoretical, if incomplete, model of how an expansion of environmental rights might unfold, driven by state leadership. In 1973, the Supreme Court held that education was not a fundamental right—and rejected claims brought regarding unequal funding of schools in Texas. But because many state constitutions provide a right to public education, state-court litigants have been successful in vindicating these rights—either by themselves or by linking the right to education with state-level equal protection analogs. As Professor Rob Klee has noted, it is possible that this strategy could prove viable with environmental rights as well.

Success at the state level could be critical to nudging federal courts to take similar action. Robinson Woodward-Burns observes that state constitutional change is “a steady, constant, quiet background process in American politics, the heretofore unnoticed channel for most American constitutional development.” He argues that “[n]ational outcomes attributed to the federal courts may instead be caused by state constitutional reform,” pointing out that prior to the Supreme Court’s decision in Harper v. Virginia Board of Elections, all but four states had already abolished poll taxes through constitutional amendments. Indeed, in the long tradition of ideas getting tested out in the laboratory of the states, it might well be that the state-level experience will demonstrate that environmental rights can be judicially managed—thus stripping away one of the core concerns federal judges have advanced for declining to take up cases where positive environmental rights are being asserted.

D.  Establishing Negative Environmental Rights: End Uncompensated Pollution Spillovers

Rather than seeking to establish a broad-based right to a healthy or habitable environment, it might be easier within the U.S. constitutional framework to secure negative environmental rights—specifically a right not to be harmed by pollution. Requiring an end to pollution spillovers or full compensation for all harms from residual emissions (mandating, as economists would say, an end to uninternalized environmental externalities) would simply align America’s environmental law and policy framework with long-standing principles of the common law. Indeed, the right not to be harmed by pollution goes back at least four centuries in the Anglo-American legal tradition to the 1610 decision in Aldred’s Case, which established an English plaintiff’s cause of action against the stench from his neighbor’s pigs. And the government’s obligation to protect shared natural resources has an even longer history insofar as the origins of the public trust doctrine can be traced not only to old English law but also ultimately to Roman law before that.

1.  Securing a Right to Be Free from Harmful Pollution

Establishing a right to be free from harmful pollution—to each of us as individuals and to the resources on which we depend for life—might be seen by American judges as more consistent with the negative rights tradition of the U.S. Constitution. This kind of negative right would be consistent with the widely accepted principle that people have affirmative duties not to harm others—a concept key to modern tort and property law. A duty not to harm others has been justified and explained by many scholars, including John Stuart Mill in On Liberty—in which he outlined the argument for a “harm principle”—and more recently by William David Ross.

Not only is such a conceptualization consistent with developments in the Anglo-American legal tradition, but the narrow frame of a right to be protected from damaging pollution impacts might also be seen as more judicially manageable and thus less of a worry with regard to the separation of powers and political question doctrines. As I have explained in some detail elsewhere, establishing such a right would not necessarily mean an end to all pollution. But an environmental rights framework that forbids uninternalized environmental externalities might require emissions reductions to the extent feasible—and full compensation to be paid for any residual harms.

A degree of scientific knowledge and expert analysis would still be required to determine which pollutants cause damage and at what scale—and thus what the harm charge for unabated pollution should be. While such calculations might require a redeployment of resources within the U.S. Environmental Protection Agency, the enormous base of epidemiological and ecological information that has been developed in recent decades along with advances in valuation methodologies makes the task manageable—especially if one excludes from the calculus de minimis levels of pollution that produce no real harm.

2.  Horizontal Effect but Narrow Framing Consistent with Emerging American Norms

While a right to be free from harmful pollution would have a horizontal effect—establishing duties for private parties as well as the government—it would do so in the most constitutionally protected domain: the right of individuals to the sanctity of their person, their home, and the necessities of life. Framed as a right against harmful pollution intrusions, these negative environmental rights would be seen as offering a bright line that keeps courts clear of the polycentric problem of trying to engage in setting policy goals, allocating costs, or making tradeoffs.

Even more usefully, the idea that pollution spillovers should not be countenanced has already gained widespread support—and would be seen as consistent with emerging public expectations and business ethics. Evidence of this new reality can be seen, for instance, in the widespread adoption of net-zero GHG emissions targets. Not only have governments around the world—including the U.S. government—committed to net-zero emissions by 2050, but this target and timetable have cascaded to the business community, where thousands of companies have made net-zero GHG pledges.

Growing public expectations of corporate transparency and reporting on sustainability performance more broadly have helped to reinforce the sustainability imperative framework. These new expectations around emissions disclosure reinforce corporate commitments to reduce pollution and end environmental externalities. Emissions disclosure, in turn, also provides the data needed to identify pollution spillovers that might be subject to legal action under a right to be free from harmful pollution. The finance world has added momentum to this trend, with a growing number of investment advisors demanding expanded ESG (environmental, social, and governance) disclosures from the companies in their portfolios. Likewise, a sweeping array of Wall Street leaders and finance experts from around the world have declared their support for net-zero GHG emissions as a corporate target across all industries and for commitments to internalize externalities more generally.

In a similar vein, the Business Roundtable, a collection of 200 CEOs of America’s largest companies, has announced its support for full GHG pricing, which, if implemented, would effectively bring an end to uninternalized externalities in the climate change context. The Roundtable has also declared an end to the era of shareholder primacy (sometimes framed as the Friedman doctrine, which suggested that corporate leaders should seek to maximize the profits of their enterprises in any manner they could within the bounds of the law). Instead, these CEOs have committed their companies to a mission of stakeholder responsibility in which companies have obligations not only to their owners but also to their customers, suppliers, employees, the communities in which they operate, and society as a whole (which would almost certainly include a duty not to inflict environmental harms on people or the planet). Simply put, private gain at public expense is increasingly seen as an inappropriate and unacceptable business model. Again, the emergence of what might be seen as a transformed base of business ethics makes a right to be free of uninternalized environmental externalities more of an incremental step than it might otherwise appear to be.

The momentum behind net-zero GHG emissions reflects a broader movement away from a world in which corporate pollution was seen as unavoidable and toward a new reality in which any company whose profitability depends on externalizing environmental costs faces ever greater scrutiny. Viewed cumulatively, these trends make clear the breadth of support for the new norm against uninternalized environmental externalities—making it ever easier for courts to adopt as a legal obligation what is already a pervasive business practice.

To draw the obvious conclusion: a right not to be polluted is not the same as a right to a healthy environment. But the implication of a prohibition on harmful pollution spillovers is that individuals have environmental rights—albeit more narrowly defined. This backdoor into securing environmental rights in the United States might not be the full victory that some environmental advocates would have hoped for, but it is the most expedient path forward given America’s legal traditions and political realities.

CONCLUSION

Fifty years ago, Christopher Stone launched a debate about environmental rights—and opened a conversation that has not yet come to an end, at least in the United States. This Article does not purport to bring the discussion to a close, but it offers a direction that might be taken up to ensure that U.S. courts are positioned to play an appropriate judicial role in addressing the threat of climate change and putting America on a trajectory toward a sustainable future.

I believe that there is ample basis for concluding that environmental rights should be understood as an element of natural law—meaning, as Dinah Shelton proposes, that a narrowly crafted right to a safe and healthy environment should be recognized as an element of human rights and respected in all nations at all times. But to advance this agenda in the United States, the most promising path forward appears to me to be a focus in the federal context on securing negative environmental rights—defined concretely as a right not to be harmed by pollution. In advancing a right centered on enforcing an end to uninternalized environmental externalities, U.S. judges would be able to respond to climate change litigation and other sustainability-related cases in a thoughtful, serious, and tightly focused manner that steers clear of concerns about the separation of powers, the political question doctrine, and appropriate modes of effective judicial relief. Simply put, a narrowly constructed right to be free from harmful emissions would give pollution victims in America standing, which might just be enough to save the planet.

APPENDIX: ENVIRONMENTAL RIGHTS PROVISIONS BY COUNTRY

 

Country | National Constitution | International Treaty
Afghanistan | N | N
Albania | N | Y
Algeria | Y | Y
Andorra | N | N
Angola | Y | Y
Antigua and Barbuda | N | N
Argentina | Y | Y
Armenia | N | Y
Australia | N | N
Austria | N | Y
Azerbaijan | Y | Y
Bahamas | N | N
Bahrain | N | Y
Bangladesh | Yi | N
Barbados | N | N
Belarus | Y | Y
Belgium | Y | Y
Belize | N | N
Benin | Y | Y
Bhutan | N | N
Bolivia (Plurinational State of) | Y | Y
Bosnia and Herzegovina | N | Y
Botswana | N | Y
Brazil | Y | Y
Brunei Darussalam | N | N
Bulgaria | Y | Y
Burkina Faso | Y | Y
Burundi | Y | Y
Cambodia | N | N
Cabo Verde | Y | Y
Cameroon | Y | Y
Canada | N | N
Central African Republic | Y | Y
Chad | Y | Y
Chile | Y | N
China | N | N
Colombia | Y | Y
Comoros | Y | Y
Congo | Y | Y
Costa Rica | Y | Y
Cote d’Ivoire | Y | Y
Croatia | Y | Y
Cuba | Y | N
Cyprus | Yi | Y
Czechia | Y | Y
Democratic People’s Republic of Korea | N | N
Democratic Republic of the Congo | Y | Y
Denmark | N | Y
Djibouti | N | Y
Dominica | N | N
Dominican Republic | Y | N
Ecuador | Y | Y
Egypt | Y | Y
El Salvador | Yi | Y
Equatorial Guinea | N | Y
Eritrea | N | Y
Estonia | Yi | Y
Eswatini | N | Y
Ethiopia | Y | Y
Fiji | Y | N
Finland | Y | Y
France | Y | Y
Gabon | Y | Y
Gambia | N | Y
Georgia | Y | Y
Germany | Yi | Y
Ghana | Yi | Y
Greece | Y | Y
Grenada | N | N
Guatemala | Yi | Y
Guinea | Y | Y
Guinea-Bissau | N | Y
Guyana | Y | Y
Haiti | N | N
Honduras | Y | Y
Hungary | Y | Y
Iceland | N | Y
India | Yi | N
Indonesia | Y | N
Iran | Y | N
Iraq | Y | Y
Ireland | Yi | Y
Israel | N | N
Italy | Yi | Y
Jamaica | Y | N
Japan | N | N
Jordan | N | Y
Kazakhstan | N | Y
Kenya | Y | Y
Kiribati | N | N
Kuwait | N | Y
Kyrgyzstan | Y | Y
Lao People’s Democratic Republic | N | N
Latvia | Y | Y
Lebanon | N | Y
Lesotho | N | Y
Liberia | Yi | Y
Libya | N | Y
Liechtenstein | N | N
Lithuania | Yi | Y
Luxembourg | N | Y
Madagascar | N | Y
Malawi | Y | Y
Malaysia | Yi | N
Maldives | Y | N
Mali | Y | Y
Malta | N | Y
Marshall Islands | N | N
Mauritania | Y | Y
Mauritius | N | Y
Mexico | Y | Y
Micronesia (Federated States of) | N | N
Monaco | N | N
Mongolia | Y | N
Montenegro | Y | Y
Morocco | Y | N
Mozambique | Y | Y
Myanmar | N | N
Namibia | Yi | Y
Nauru | N | N
Nepal | Y | N
Netherlands | N | Y
New Zealand | N | N
Nicaragua | Y | Y
Niger | Y | Y
Nigeria | Yi | Y
North Macedonia | Y | Y
Norway | Y | Y
Oman | N | N
Pakistan | Yi | N
Palau | N | N
Panama | Yi | Y
Papua New Guinea | N | N
Paraguay | Y | Y
Peru | Y | Y
Philippines | Y | N
Poland | N | Y
Portugal | Y | Y
Qatar | N | Y
Republic of Korea | Y | N
Republic of Moldova | Y | Y
Romania | Y | Y
Russian Federation | Y | N
Rwanda | Y | Y
Saint Kitts and Nevis | N | Y
Saint Lucia | N | N
Saint Vincent and the Grenadines | N | Y
Samoa | N | N
San Marino | N | N
Sao Tome and Principe | Y | Y
Saudi Arabia | N | Y
Senegal | Y | Y
Serbia | Y | Y
Seychelles | Y | Y
Sierra Leone | N | Y
Singapore | N | N
Slovakia | Y | Y
Slovenia | Y | Y
Solomon Islands | N | N
Somalia | Y | Y
South Africa | Y | Y
South Sudan | Y | N
Spain | Y | Y
Sri Lanka | Yi | N
Sudan | Y | Y
Suriname | N | Y
Sweden | N | Y
Switzerland | N | Y
Syrian Arab Republic | N | Y
Tajikistan | N | Y
Thailand | Y | N
Timor-Leste | Y | N
Togo | Y | Y
Tonga | N | N
Trinidad and Tobago | N | N
Tunisia | Y | Y
Turkey | Y | N
Turkmenistan | Y | Y
Tuvalu | N | N
Uganda | Y | Y
Ukraine | Y | Y
United Arab Emirates | N | Y
United Kingdom of Great Britain and Northern Ireland | N | N
United Republic of Tanzania | Yi | Y
United States of America | N | N
Uruguay | N | Y
Uzbekistan | N | N
Vanuatu | N | N
Venezuela (Bolivarian Republic of) | Y | N
Vietnam | Y | N
Yemen | N | Y
Zambia | N | Y
Zimbabwe | Y | Y
TOTAL | 110 | 126

* Yi indicates implicit constitutional language. Adapted from Boyd et al., supra note 43, at 50–55.

95 S. Cal. L. Rev. 1345


Hillhouse Professor of Environmental Law and Policy, Yale Law School and Yale School of the Environment. The author thanks Andrew Follett, Isabella Soparkar, Kirsten Williams, Zack Steigerwald Schnall, Jan-Baptist Lemaire, and Sara Gomez for their research assistance—and Professors Don Elliott and Quinn Yeargain for conversations that helped to shape the argument presented.