For nearly six decades, States have entered into approximately 3,000 bilateral investment promotion and protection treaties (“BITs”), as well as a number of multilateral treaties (“MITs”) sharing the same dual purposes, such as the North American Free Trade Agreement (“NAFTA”) and the Energy Charter Treaty (“ECT”). These treaties have been signed, ratified, and brought into force for mutual benefit: investment in the States party to the BIT or MIT is mutually encouraged, in good part by each State party guaranteeing the other State party’s investors an acceptable level of legal protection. That protection usually consists of “fair and equitable treatment” (“FET”), “full protection and security” (“FPS”), specific rules governing compensation for expropriation, and, via a “most-favored-nation” (“MFN”) clause, the same overall level of legal protection as is accorded to nationals of other States with which the respondent State party to the BIT or MIT has similar treaties in force.
Key to the nationals of each State party who invest in the other State is the mechanism for enforcing those protections, known as investor-State arbitration, or investor-State dispute settlement (“ISDS”). Because most treaty parties do not wish their nationals investing abroad to have disputes over whether the treaty has been breached decided by a national court of the host State, the parties agree in the BIT or the MIT that any dispute between a national of one party investing in the other party and the host State will be decided by, typically, a three-person arbitral tribunal, to which each party to the dispute—the investor and the host State—appoints one arbitrator. The third person, who chairs the arbitration, is appointed by the other two arbitrators, by the parties to the dispute, or—failing agreement within a stated period of time—by an agreed “appointing authority.” All three members of the arbitral tribunal are required, and pledge, to be independent of and impartial toward the arbitrating parties.
In 1897, a half-dozen great powers claimed sovereignty over nearly half the world’s land and souls, and these empires were expanding. The British Empire alone had grown by fifty million souls and two million square miles since 1891. The eminent naval strategist Alfred T. Mahan feared that the United States was dangerously secluded, in comparison, and sidelined in the global land rush underway. He also worried that the Atlantic Ocean no longer adequately protected the U.S. against European powers in an age of steamships. Like his fellow Republicans Theodore Roosevelt and Massachusetts Senator Henry Cabot Lodge, Mahan influentially advocated U.S. expansionism. He envisioned the United States ruling acquired lands as colonies. Their residents were as politically unfit for rule as children, criminals, women, and African Americans, he believed. But the Constitution presented a problem. Nearly three decades had passed since the last U.S. annexation. As Mahan complained, “any project of extending the sphere of the United States, by annexation or otherwise, is met by the constitutional lion in the path.”
During the 2016 Presidential campaign, the average adult saw at least one “fake news” item on social media. The people distributing the articles had a variety of aims and operated from a variety of locations. Among the locations we know about, some were in Los Angeles, others in Macedonia, and, yes, others were in Russia. The Angelenos aimed to make money and sow chaos. The Macedonians wanted to get rich. And the Russians aimed to weaken Hillary Clinton’s candidacy for president, foster division around fraught social issues, and make a spectacle out of the U.S. election. To these ends, the Russians mobilized trolls, bots, and so-called “useful idiots,” along with sophisticated ad-tracking and micro-targeting techniques to strategically distribute and amplify propaganda. The attacks are ongoing.
Cheap distribution and easy user targeting on social media enable the rapid spread of disinformation. Disinformative content, like other online political advertising, is “micro-targeted” at narrow segments of the electorate, based on their political views or biases. The targeting aims to polarize and fragment the electorate. Tracing the money behind this kind of messaging is next to impossible under current regulations and advertising platforms’ current policies. Voters’ inability to “follow the money” has implications for our democracy, even in the absence of disinformation. And of course, an untraceable flood of disinformation prior to an election stands to undermine voters’ ability to choose the candidate that best aligns with their preferences.
Courts and scholars point to the sharing economy as proof that our labor and employment infrastructure is obsolete because it rests on a narrow and outmoded idea that only workers subjected to direct, personalized control by their employers need work-related protections and benefits. Since they diagnose the problem as being our system’s emphasis on control, these critics have long called for reducing or eliminating the primacy of the “control test” in classifying workers as either protected employees or unprotected independent contractors. Despite these persistent criticisms, however, the concept of control has been remarkably sticky in scholarly and judicial circles.
This Article argues that critics have misdiagnosed the reason why the control test is an unsatisfying method of classifying workers and dispensing work-related safeguards. Control-based analysis is faulty because it only captures one of the two conflicting ways in which workers, scholars, and decisionmakers think about freedom at work. One of these ways, freedom-as-non-interference, is adequately captured by the control test. The other, freedom-as-non-domination, is not. The tension between these two conceptions of freedom, both deeply entrenched in American culture, explains why the concept of control has been both “faulty” and “sticky” when it comes to worker classification.
This is the first academic work to show the need for, or to offer, a regulatory framework for exchange-traded funds (“ETFs”). The economic significance of this financial innovation is enormous. U.S.-listed ETFs now hold more than $3.6 trillion in assets and comprise seven of the country’s ten most actively traded securities. ETFs also possess an array of unique characteristics raising distinctive concerns. They offer what we here conceptualize as a nearly frictionless portal to a bewildering, continually expanding universe of plain vanilla and arcane asset classes, passive and active investment strategies, and long, short, and leveraged exposures. And we argue that ETFs are defined by a novel, model-driven device that we refer to as the “arbitrage mechanism,” a device that has sometimes failed catastrophically. These new products and the underlying innovation process create special risks for investors and the financial system.
Lawmakers are looking for Affordable Care Act savings in the wrong place. Removing sick people from risk pools or reducing health plan benefits—the focus of lawmakers’ attention—would harm vulnerable populations. Instead, reform should target the $210 billion worth of unnecessary care prescribed by doctors, consented to by patients, and paid for by insurers.
This Article unravels the mystery of why the insurance market has failed to excise this waste on its own. A toxic combination of mismatched legal incentives, market failures, and industry norms means that the insurance market cannot solve the problem absent intervention.
One of the most enduring debates in corporate law centers on why Delaware has become the dominant state in the market for corporate charters. Traditionally, two perspectives dominated the debate: the “race-to-the-top” perspective, which sees competition among states as driving legal rules toward efficiency, and the “race-to-the-bottom” perspective, which sees competition among states as driving legal rules toward the interests of corporate managers. These two perspectives have struggled to explain why approximately half of large companies incorporate in Delaware, while the other half incorporate in their home states. Whether the choices are attributable to the quality of state law, the characteristics of the companies themselves, or both has given rise to a large but inconclusive empirical literature.
This Article argues that there was an important causal link, to date unrecognized, between the widespread dissatisfaction with the jury among many elite lawyers and judges in the United States during the Gilded Age and Progressive Era and the choices made by U.S. policymakers and jurists about colonial governance in Puerto Rico and the Philippines. The story starts with the Insular Cases—landmark Supreme Court decisions from the early twentieth century holding that jury rights and some other constitutional guarantees did not apply in Puerto Rico and the Philippines until and unless Congress had taken decisive action to “incorporate” the territories into the union, which it never did. The conventional wisdom among scholars is that the Supreme Court in these decisions shamefully ratified the U.S. government’s discrimination against and domination over the peoples of newly acquired colonies. Racism and cultural chauvinism are blamed as primary causal factors.
The Article shows that Congress, the executive, the courts, and local legislatures in the Philippines and Puerto Rico granted almost every right contained in the Constitution to the territorial inhabitants, with the exception of the jury. While racism was present and causally important, it is also true that U.S. governance in the territories was not a project of wholesale discrimination. Motivations, goals, and outcomes were complex. Protection of the rights of local inhabitants was a key concern of U.S. policymakers. But the jury was considered a unique case, different from other rights.
Human beings should live in places where they are most productive, and megacities, where information, innovation, and opportunities congregate, would be the optimal choice. Yet megacities in both China and the United States are excluding people by limiting the housing supply. Why, despite their many differences, is the same type of exclusion happening in both Chinese and U.S. megacities? Urban law and policy scholars argue that Not-In-My-Back-Yard (“NIMBY”) homeowners are taking over megacities in the U.S. and hindering housing development. They pin their hopes on an efficient growth machine that makes sure “above all, nothing gets in the way of building.” Yet the growth-dominated megacities of China demonstrate that relying on business and political elites to provide affordable housing is a false hope. Our comparative study of the homeowner-dominated megacities of the U.S. and growth-dominated megacities of China demonstrates that the origin of exclusionary megacities is not a choice between growth elites and homeowners, but the exclusionary nature of property rights. Our study reveals that megacities in the two countries share a property-centered approach, which prioritizes the maximization of existing property interests and neglects the interests of the ultimate consumers of housing, resulting in housing that is unaffordable. Giving housing consumers a voice in land use control and urban governance becomes the last resort to counteract this result. This comparative study shows that the conventional triangular framework of land use—comprising government, developers, and homeowners—is incomplete, and argues for a citizenship-based approach to urban governance.
In one of his columns, the economist Paul Krugman observed that “liberals don’t need to claim that their policies will produce spectacular growth. All they need to claim is feasibility: that we can do things like, say, guaranteeing health insurance to everyone without killing the economy.” Krugman’s belief that providing everyone with health insurance is desirable unless doing so would “kill the economy” expresses a familiar, if debatable, position. Many of us believe that some goods should be provided to everyone, and they should be provided even if their provision comes at a cost in economic efficiency. The underlying belief is that some goods are essential to leading decent, independent lives, and their provision therefore has a special priority. As a society, we owe it to each other to secure the basic conditions necessary for people to lead decent and independent lives.
Like health, physical safety is a strong candidate for inclusion on a list of the essential conditions of a decent and independent life. Illness usually takes the form of physical harm, and accidental injury can impair basic powers of agency as much as ill health can. Assertions that safety has priority over garden-variety “needs and interests” are commonplace in popular discourse. You might, therefore, expect to find a debate in the legal literature on risk and precaution over whether safety, too, should be prioritized over efficiency and secured to the extent that it is feasible to do so. Prominent federal statutes take this very position. Indeed, they echo Krugman’s exact word choice in requiring that the risks of certain activities be reduced as far as it is “feasible” to do so, and they mean the same thing that he does in choosing this word. “Feasible risk reduction” requires that the risks in question be reduced as far as possible without killing the activity in question. A chorus of contemporary commentators, however, insists that feasible risk reduction is not just normatively mistaken; it is indefensible. Jonathan Masur and Eric Posner, for example, argue that statutes prescribing feasible risk reduction have no defensible normative underpinning. Feasibility analysis, they write, “does not reflect deontological thinking . . . [and] does not reflect welfarism in any straightforward sense,” and “[n]o attempt to reverse-engineer a theory of well-being that justifies feasibility analysis has been successful.” According to this line of thought, efficiency is the only plausible standard of precaution, and its handmaiden, cost-benefit analysis, is the only plausible test.