Businesses and organizations expect their managers to use data science to improve and even optimize decisionmaking. Yet when it comes to some criminal justice institutions, such as prosecutors’ offices, there is an aversion to applying cognitive computing to high-stakes decisions. This aversion reflects extra-institutional forces, as activists and scholars are militating against the use of predictive analytics in criminal justice. The aversion also reflects prosecutors’ unease with the practice, as many prefer that decisional weight be placed on attorneys’ experience and intuition, even though experience and intuition have contributed to more than a century of criminal justice disparities.

Instead of viewing historical data and data-hungry academic researchers as liabilities, prosecutors and scholars should treat them as assets in the struggle to achieve outcome fairness. Cutting-edge research on fairness in machine learning is being conducted by computer scientists, applied mathematicians, and social scientists, and this research forms a foundation for the most promising path towards racial equality in criminal justice: suggestive modeling that creates baselines to guide prosecutorial decisionmaking.
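
The details of such baseline-building are beyond this excerpt, but a rough, purely illustrative sketch may help fix ideas. In the toy Python example below, the case records, the profile categories, and the felony-charging rate used as the guiding measure are all assumptions introduced for illustration, not anything drawn from the underlying research:

```python
from collections import defaultdict

# Hypothetical historical dispositions: (offense, prior record, charged as felony?).
# A real office would draw these records from its case-management system.
history = [
    ("drug possession", "none", False),
    ("drug possession", "none", False),
    ("drug possession", "prior", True),
    ("burglary", "none", True),
    ("burglary", "prior", True),
]

# Baseline: for each (offense, prior-record) profile, the office's own
# historical felony-charging rate.
counts = defaultdict(lambda: [0, 0])  # profile -> [felony charges, total cases]
for offense, priors, felony in history:
    counts[(offense, priors)][0] += int(felony)
    counts[(offense, priors)][1] += 1

def baseline_rate(offense: str, priors: str) -> float:
    """Historical felony-charging rate for cases matching this profile."""
    felonies, total = counts[(offense, priors)]
    return felonies / total if total else float("nan")

# A prosecutor weighing a felony charge in a new possession case with no priors
# can compare that choice against the office's historical practice.
print(f"Baseline felony-charging rate: {baseline_rate('drug possession', 'none'):.0%}")
```

On this view, the baseline does not dictate the charge; it simply shows a prosecutor how similarly situated cases have historically been handled, so that a sharp departure can be noticed and justified.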

As with every other legal issue that comes before the Court, reconciling a state’s discretion with the Supreme Court’s role in judicial review requires a judicially manageable standard that allows the Court to determine when a legislature has overstepped its bounds. Without a judicially discoverable and manageable standard, the Court cannot develop clear and coherent principles to form its judgments, and challenges to partisan gerrymandering would thus be non-justiciable.

In the partisan gerrymandering context, such a standard must distinguish between garden-variety and excessive uses of partisanship. The Court has stated that partisanship may be used in redistricting, but it may not be used “excessively.” In Vieth v. Jubelirer, Justice Scalia clarified, “Justice Stevens says we ‘er[r] in assuming that politics is ‘an ordinary and lawful motive’ in districting,’ but all he brings forward to contest that is the argument that an excessive injection of politics is unlawful. So it is, and so does our opinion assume.” Justice Souter, in a dissent joined by Justice Ginsburg, expressed a similar idea: courts must intervene, he wrote, when “partisan competition has reached an extremity of unfairness.”

At oral argument in Rucho, attorney Emmet Bondurant argued that “[t]his case involves the most extreme partisan gerrymander to rig congressional elections that has been presented to this Court since the one-person/one-vote case.” Justice Kavanaugh replied, “when you use the word ‘extreme,’ that implies a baseline. Extreme compared to what?”

Herein lies the issue that the Court has been grappling with in partisan gerrymandering claims. What is the proper baseline against which to judge whether partisanship has been used excessively? And how can this baseline be incorporated into a judicially manageable standard?
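
To make the baseline question concrete, consider one candidate measure from the social-science literature that the Justices have debated in recent redistricting cases: the “efficiency gap,” which compares the two parties’ “wasted” votes under a districting plan. The Python sketch below uses invented district totals and is offered only as an illustration of what a quantitative baseline might look like, not as a metric the excerpt endorses:

```python
def efficiency_gap(districts):
    """(Party A's wasted votes - party B's wasted votes) / total votes cast.

    districts: list of (votes_a, votes_b) per district. Wasted votes are all
    votes cast for a district's loser plus the winner's votes beyond the bare
    majority needed to carry the district.
    """
    wasted_a = wasted_b = total = 0.0
    for votes_a, votes_b in districts:
        district_total = votes_a + votes_b
        total += district_total
        threshold = district_total / 2  # bare majority needed to win
        if votes_a > votes_b:
            wasted_a += votes_a - threshold
            wasted_b += votes_b
        else:
            wasted_b += votes_b - threshold
            wasted_a += votes_a
    return (wasted_a - wasted_b) / total

# Hypothetical four-district state: party A wins three districts narrowly while
# party B's voters are packed into one district it wins overwhelmingly.
plan = [(55, 45), (55, 45), (55, 45), (20, 80)]
print(f"Efficiency gap: {efficiency_gap(plan):+.1%}")  # negative here: plan favors party A
```

In the hypothetical plan, far more of party B’s votes are wasted than party A’s, the signature of packing and cracking; whether a numerical threshold built on such a measure is judicially manageable is precisely the question the Court has struggled to answer.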

In the courtroom, oral presentations are increasingly being supplemented, and sometimes replaced, by digital technologies that give legal practitioners powerful demonstrative capabilities. Advances in virtual reality (“VR”) are facilitating the creation of immersive environments in which a user’s senses and perceptions of the physical world can be completely replaced with virtual renderings. As courts, lawyers, and experts continue to grapple with the evidentiary questions of admissibility posed by evolving technologies in the field of computer-generated evidence (“CGE”), the issues posed by the introduction of immersive virtual environments (“IVEs”) into the courtroom have, until recently, remained largely theoretical.

Though IVEs have not yet seen widespread use at trial, research into the practical applications of these VR technologies in the courtroom is ongoing, and several studies have successfully integrated IVEs into mock trial scenarios. For example, in 2002, the Courtroom 21 Project (run by William & Mary Law School and the National Center for State Courts) hosted a lab trial in which a witness used an IVE. The issue in the case was whether a patient’s death resulted from the design of a cholesterol-removing stent or from a surgeon’s error in implanting it upside down.

We are now some twenty years into the story of the Internet’s bold challenge to law and the legal system. In the early 2000s, Jack Goldsmith and I wrote Who Controls the Internet, a book that might be understood as a chronicle of some of the early and more outlandish stages of the story. Professors Pollman and Barry’s excellent article, Regulatory Entrepreneurship, adds to and updates that story with subsequent chapters and a sophisticated analysis of the strategies more recently employed to avoid the law by using the Internet in some way. While Pollman and Barry’s article stands on its own, I write this Article to connect these two periods. I also wish to offer a slightly different normative assessment of the legal avoidance efforts described here, along with my opinion as to how law enforcement should conduct itself in these situations.

Behind regulatory entrepreneurship lies a history, albeit a short one, and one that has much to teach us about the very nature of law and the legal system as it interacts with new technologies. Viewed in context, Pollman and Barry’s “regulatory entrepreneurs” can be understood as, in fact, a second generation of entrepreneurs who learned lessons from an earlier generation that was active in the late 1990s and early 2000s. What both generations have in common is the idea that the Internet might provide profitable opportunities at the edges of the legal system. What has changed is the abandonment of so-called “evasion” strategies—ones that relied on concealment or geography (described below)—and a migration to strategies depending on “avoidance,” that is, avoiding the law’s direct application. In particular, the most successful entrepreneurs have relied on what might be called a mimicry strategy: they shape potentially illegal or regulated conduct to make it look like legal or unregulated conduct, thereby hopefully avoiding the weight of laws and regulatory regimes.

In January 2003, the Slammer worm hit the Internet. Five of the Internet’s thirteen root-name servers shut down. Three hundred thousand cable modems in Portugal went offline, all of South Korea’s cell phone and Internet services went down, and Continental Airlines cancelled flights from its Newark hub because it could not process tickets. The worm’s author needed only six months after the disclosure of a security flaw to write the 376-byte worm. Once unleashed, it infected ninety percent of vulnerable systems within ten minutes.
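
The ten-minute figure reflects the self-compounding arithmetic of random scanning: every newly infected host immediately begins probing addresses at random, so the probe rate grows with the infection itself. The Python sketch below models that dynamic deterministically; the vulnerable-population size and the effective per-host scan rate are assumptions chosen for illustration, not the measured Slammer data:

```python
# Deterministic sketch of a random-scanning worm's spread. Parameters are
# illustrative assumptions, not measured Slammer figures.
VULNERABLE = 75_000.0     # assumed vulnerable hosts reachable on the Internet
ADDRESS_SPACE = 2.0**32   # IPv4 addresses, probed uniformly at random
SCAN_RATE = 1_200.0       # assumed effective probes per second per infected host

def seconds_until(fraction: float) -> int:
    """Seconds until the given fraction of vulnerable hosts is infected."""
    infected, t = 1.0, 0
    while infected < fraction * VULNERABLE:
        # Each probe hits a vulnerable, still-clean host with probability
        # (VULNERABLE - infected) / ADDRESS_SPACE; every hit is a new infection.
        infected += infected * SCAN_RATE * (VULNERABLE - infected) / ADDRESS_SPACE
        t += 1
    return t

print(f"90% of vulnerable hosts infected after ~{seconds_until(0.9) / 60:.0f} minutes")
```

Because the entire 376-byte worm fit in a single UDP packet, each probe was itself an infection attempt, with no connection handshake to slow the spread.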

The flaw was a buffer overflow in the Microsoft SQL Server 2000 software. Because the vulnerable code was embedded in other Microsoft products, not all users were even aware that their systems were running a version of SQL Server. Unfortunately, this was a well-known, preventable security flaw. Moreover, Microsoft had released a patch for the flaw exploited by Slammer six months before the attack. Despite the widespread effects, no flood of lawsuits ensued.