Fool Me Once: Regulating “Fake News” and other Online Advertising
Abby K. Wood[*] and Ann M. Ravel[†]
A lack of transparency for online political advertising has long been a problem in American political campaigns. Disinformation attacks that American voters have experienced since the 2016 campaign have made the need for regulatory action more pressing.
Internet platforms prefer self-regulation and have only recently come around to supporting proposed transparency legislation. While government must not regulate the content of political speech, it can, and should, force transparency into the process. We propose several interventions aimed at transparency. First, and most importantly, campaign finance regulators should require platforms to store and make available (1) ads run on their platforms, and (2) the audience at whom the ad was targeted. Audience availability can be structured to avoid privacy concerns, and it serves an important speech value in the “marketplace of ideas” theory of the First Amendment—that of enabling counter-speech. Our proposed regulations would capture any political advertising, including disinformation, that is promoted via paid distribution on social media, as well as all other online political advertising. Second, existing loopholes in transparency regulations related to online advertising should be closed. Congress has a role here, as it has prevented regulatory agencies from acting to require disclosure from so-called dark money groups. Finally, government should require that platforms offer an opt-in system for social media users to view narrowly-targeted ads or disputed content.
During the 2016 Presidential campaign, the average adult saw at least one “fake news” item on social media. The people distributing the articles had a variety of aims and operated from a variety of locations. Among the locations we know about, some were in Los Angeles, others in Macedonia, and, yes, others were in Russia. The Angelenos aimed to make money and sow chaos. The Macedonians wanted to get rich. And the Russians aimed to weaken Hillary Clinton’s candidacy for president, foster division around fraught social issues, and make a spectacle out of the U.S. election. To these ends, the Russians mobilized trolls, bots, and so-called “useful idiots,” along with sophisticated ad-tracking and micro-targeting techniques to strategically distribute and amplify propaganda. The attacks are ongoing.
Cheap distribution and easy user targeting on social media enable the rapid spread of disinformation. Disinformative content, like other online political advertising, is “micro-targeted” at narrow segments of the electorate, based on their narrow political views or biases. The targeting aims to polarize and fragment the electorate. Tracing the money behind this kind of messaging is next to impossible under current regulations and advertising platforms’ current policies. Voters’ inability to “follow the money” has implications for our democracy, even in the absence of disinformation. And of course, an untraceable flood of disinformation prior to an election stands to undermine voters’ ability to choose the candidate that best aligns with their preferences.
Untraceable online political advertising undermines key democratic values, and the problem is exacerbated by disinformation. Scholars and analysts are writing about fake news and the failures of platforms to contain it. Some have focused on evaluating the impact of fake news on voter behavior and beliefs or on political agenda setting. Others focus on legal fixes, such as direct platform regulation by restoring (or modifying) a statute that exempts platforms from liability arising from others’ speech on their platforms. Still others offer media-based solutions or emphasize that platforms are the only entities that can, or should, correct the problem while staying within the existing First Amendment framework. A few are ready to re-interpret the First Amendment in light of the new imbalance between speakers and listeners. Yet other scholars have suggested that platforms should be regulated in a way that fits a pre-existing regulatory framework, such as the way we regulate media organizations or public utilities.
We add to this conversation that fake news and other online political advertising should be addressed with existing regulatory tools developed for older kinds of political advertising. Our argument begins with the simple observation that fake news is not “news.” It is political advertising. Like other kinds of political advertising, fake news seeks to persuade, mobilize, or suppress voters and votes. And like other kinds of political advertising, it involves costs for production and distribution. Fake news is an especially confusing type of political advertising for two reasons. It is native, meaning that it poses as editorial or reporting content, and it is disinformative. Fake news is not the only format in which disinformation advertising occurs. Disinformation advertising is also distributed in the form of memes, videos, and images. The common themes among disinformation advertising are that it is false, it aims to affect people’s political opinions and the probability that they will turn out to vote, and the advertiser pays to produce or distribute it.
The First Amendment provides clear limits on the government’s ability to regulate politically-related messaging. However, the Constitution allows for more regulation than currently exists for political speech on social media. Courts have repeatedly upheld campaign finance disclaimers and disclosure of the funding behind political spending. At a minimum, the sources of disinformation advertising should be transparent.
Our campaign finance laws are riddled with gaps and loopholes, which exclude a large portion of online advertising from disclosure and disclaimer requirements. The lack of transparency for online ads facilitates violations of the ban on foreign spending in U.S. elections, and even where the source of the political communication is domestic, the public’s inability to “follow the money” may impact voters’ ability to make the right choice for them. Adding disinformation to the mix further damages voters’ ability to make the choice that best aligns with their preferences. While regulations responding to this problem have been proposed, the agency tasked with regulating is unlikely to enact anything in the near term.
The government should not rely upon the platforms to regulate themselves. While each platform is making proposals to increase transparency for online political advertising, the lack of transparency originated with the platforms, and for at least a decade, it appeared to serve their profit interests. Nevertheless, constitutional limits mean that only the platforms are able to implement some potential fixes. If platforms are unable or unwilling to act in those areas, government cannot step in.
In this Article, we propose three regulations to increase transparency of political advertising and begin to address the problem of disinformation advertising. Our proposed regulations are all modest extensions of the way the federal government already regulates political advertising, and they will help make visible the sources of political messaging online. Part I of this Article explains disinformation advertising as it existed in 2016—unregulated, from unknown sources, and aimed at fragmenting our politics—and how it creates a problem for our democracy. In Part II, we explain the constitutional framework in which additional regulation would occur. We also explain the tradeoffs between regulation by government and regulation by platforms. In Part III, we discuss the loopholes in our existing regulatory system for online political advertising. The loopholes have enabled disinformation advertising to be distributed without regulation even when paid for by a foreign government. Part IV proposes several regulatory solutions that could reduce disinformation advertising and, short of reducing it, would make enforcement and following the money much easier. We also suggest guidelines for platform self-regulation to attack the problem. A brief review of regulations in several foreign jurisdictions, which concludes Part IV, demonstrates that social media platforms are already willing and able to comply with stricter regulations in other countries. Finally, in Part V, we consider task assignment within the federal bureaucracy, as well as actions taken at other levels of government. Federal inaction on the threat posed by Russian disinformation is not the whole story; rather, disinformation campaigns have the potential to impact city and state elections too, leading local governments to begin regulating platforms for their own elections.
I. Documenting and Framing the Problem
“Fake news,” or fabricated news articles or blog posts that are intentionally false or misleading, has received a lot of attention since the 2016 U.S. presidential election. Fake news articles are distributed via social media to drive web traffic to websites.
We argue that the problem of “fake news” is better framed as a problem of native political advertising and that the phenomenon benefits from lack of campaign finance transparency online. In this Section, we describe the fake news phenomenon, tie fake news to campaign advertising in ways that allow for regulatory traction, and explain how disinformation presents challenges to democracy.
A. Fake News is Political Advertising
Fake news stories inundated social media networks during the 2016 election, sometimes generating millions of comments and reactions from users. Sophisticated disinformation is persuasive because it looks like credible journalism. But fake news is not “news.” It is native advertising and should be regulated as such. In the same way that commercial advertisers seek to persuade by projecting a particular image of a product, purveyors of political disinformation ads use fabricated information to persuade voters that a candidate is untrustworthy or unfit for office, or to sow division among Americans. During the 2016 presidential election, many disinformation ads were strategically targeted at select groups to either encourage or suppress votes. Persuasion and targeting are the cornerstones of advertising. We therefore reject the label “fake news” and adopt “disinformation advertising.”
Plenty of disinformation advertising was produced in the United States. Indeed, a company called “DisInfoMedia,” which was the source of several fake news articles during the election, lists its address in suburban Los Angeles. But the public’s attention has been captured by fake news placed by foreign actors, especially Russians aiming to intervene in U.S. elections. Russia’s attack occurred (and continues) on social media platforms. Expert estimates of the number of shares of Russian-sourced “fake news” on Facebook vary widely, from over 100 million to “into the billions.” These estimates include content ranging from fake news articles to generic ideological statements from foreign sources with no disinformative content. The fact is that the lack of disclosure of online political spending means that no one captured the entire universe of political ads. The best evidence we have so far, from a user-generated collection of 5 million ads gathered by 10,000 Facebook users, suggests that 86% of the groups running paid ads on Facebook in the last six weeks before the election were suspicious groups (53%), astroturf movement groups (17.1%), and questionable news outlets (15.8%).
For a small fee, anyone can distribute content and generate impressions on social media. Using Facebook as an example, political ads, including disinformation ads, could be promoted, or boosted, for a fee, just like any other ad. Boosted ads appear higher on users’ newsfeeds. When boosting an ad, the creator selects which audience to target using filters like location, age, gender, or even interests. Some disinformation advertisers used Facebook’s “Custom Audiences” feature, which allows for much more sophisticated targeting than other methods, because it allows advertisers to place cookies on the browsers of those who click on their ads and then re-target people who clicked through. Russian meddlers used Custom Audiences to create websites and Facebook Pages with political-sounding names that focused on socially divisive issues such as undocumented immigrants or African-American activism. The operatives later re-targeted people who had visited their sites with further political messaging. The Trump campaign itself also used Custom Audiences’ “diabolical little brother,” Lookalike Audiences, to target people who “look like” their custom audiences, based on their online habits. If these tools remain available to advertisers, disinformation advertisers will likely use them in future elections as well.
Russia also deploys tens of thousands of “sock puppets,” trolls, cyborgs, and bots to amplify and distribute its messages. Mass posting causes hashtags to trend, amplifying the bots’ messages. Social media users can easily build a large following using cheap third-party services to promote their Twitter or Facebook accounts. Helping distribute the propaganda are so-called “useful idiots,” American social media users who unwittingly support the Russian disinformation campaign by reacting to, commenting on, and sharing the sensational stories with their social media networks.
There is spending at many steps of this process, including salaries and production costs to make the content in the first place. Some of this spending triggers the existing rules. Once aggregate expenditures reach the threshold to trigger registration, the advertiser is subject to regulations like any other group regulated by the Federal Election Commission (“FEC”). While communications distributed on the Internet for free are generally exempt from FEC regulations, many political ads—including many disinformation ads—are placed into our newsfeeds for a fee and, therefore, are subject to regulation under existing rules. We also know that some of the ads violated the ban on foreign expenditures in connection with a U.S. election because they were paid for by foreign sources, providing another example of existing rules applying to disinformation ads. Disaggregated ads and audiences, disappearing ads, and other difficulties would complicate enforcement efforts, even for a motivated agency. The problem is data availability to establish the fact of the violation and facilitate enforcement. Therefore, at a minimum, effective enforcement of existing rules requires retaining data and advertising content. And in order to allow groups to counter disinformation against them or their preferred candidates, we must also retain the audience targeting information, which we discuss in Part IV.
* * *
Media organizations are exempt from campaign finance regulations. Even if we are correct that “fake news” is better thought of as advertising, is it also “news” that should be exempted from the rules? The FEC lacks a coherent regulatory approach to implementing the Federal Election Campaign Act’s press (or “media”) exemption from campaign finance regulation. The exemption allows legitimate media sources to avoid registration with the FEC and compliance with campaign finance regulations. The Commission walks a tightrope in interpreting the exemption. If it defines “press” too broadly, the exemption will swallow the statute and allow all advertisers to claim exemptions as “press entities.” With an overly narrow definition, however, the FEC would run afoul of the First Amendment by burdening the speech of legitimate news media.
In determining whether an item should be subject to the press exemption, the FEC asks whether the entity is “a press entity,” and “whether [it] is acting in its ‘legitimate press function.’” To determine whether a publication or organization is a press entity, the FEC asks “whether the entity in question produces on a regular basis a program that disseminates news stories, commentary, and/or editorials.” When analyzing whether a press entity is acting “in its legitimate press function,” the FEC looks at “(1) whether the press entity’s materials are available to the general public, and (2) whether the materials are comparable in form to those ordinarily issued by the press entity.” The Commission does not analyze whether the materials are produced by trained journalists, whether the organization employs a fact checker or conducts fact checking functions, or any other typical indicia of a legitimate media organization. As such, the test may be too lax: because it does not consider indicia of traditional journalism when granting the exemption, the Russian government propaganda outlet, Russia Today, was deemed a “legitimate press entity” by the FEC.
Even under this minimalist test, the FEC would not consider much of the disinformation on social media to be the product of a “press entity.” Take the Denver Guardian as an example. It existed only briefly before running a story about a murder-suicide committed by “an FBI agent believed to be responsible for the latest [DNC] email leaks.” Its registered address is actually a parking lot. The site had ads, Denver’s weather, and no more than two news stories during its entire existence. Similarly, Facebook Pages that disseminated content and memes, like the “Blacktivist” page, would not be considered press entities. They were created in the months before the election and claimed to be activists, not journalists.
B. How Disinformation Can Weaken Democracy
Lack of transparency for online political advertising pre-dates the 2016 election, but the disinformation attacks have given the problem new urgency. Disinformation attacks threaten democracy, because:
[F]actual knowledge about politics is a critical component of citizenship, one that is essential if citizens are to discern their real interests and take effective advantage of the civic opportunities afforded them. . . . [K]nowledge is a keystone to other civic requisites. In the absence of adequate information neither passion nor reason is likely to lead to decisions that reflect the real interests of the public.
Disinformation advertising works like other kinds of propaganda, by sowing doubt about institutions. Here, the propaganda uses a fake media source to undermine trust in the media. The flood of false, hyperbolic, repetitive, and divisive information is difficult for its viewers to resist over time and can distort the information environment. Voters are left trying to select the candidate that is right for them, or to form opinions about policy, in the face of a “media fire hose which has diluted trusted sources of information . . . .” As Tim Wu explains, “[w]hen listeners have highly limited bandwidth to devote to any given issue, they will rarely dig deeply, and they are less likely to hear dissenting opinions. In such an environment, [information] flooding can be just as effective as more traditional forms of censorship.”
Scholars have argued that an informed electorate is a constitutional value and that we should recognize a canon of “effective accountability” which relies upon an informed electorate. Many voters are poorly informed about the candidates and issues on the ballot. Most also lack a basic understanding of government structure and policies. Indeed, the “limited effects” of disinformation in the 2016 election found by Allcott and Gentzkow may be floor effects that result from the already low level of information among the electorate. Of course, uninformed voters are not unteachable: some studies show that providing voters with information increases voter competence, or their ability to vote in line with their preferences. More generally, voters have informational workarounds. They use heuristics, or informational shortcuts, to help them reach a decision. Uninformed voters can also take cues from elites they trust. If the cues from elites, or the information they provide, are disinformative, voters are left worse off than if they had not paid attention in the first place. Corrections to disinformation do not help much, either. It is hard to “un-ring the bell” of misinformation—the effects of misinformation remain even after corrections are issued and even when they are issued right away. Moreover, corrections can be misremembered and serve to further entrench the faulty information.
Disinformation campaigns share a targeting strategy with more run-of-the-mill political advertising on social media: microtargeting. Microtargeting small groups of voters with content that appeals to their pre-existing biases can deepen the democratic problem by subdividing the electorate, creating an endless number of potential cleavages among voters. As Elmendorf and Wood warn:
[I]t seems reasonable to fear that as broad, public appeals to the common good and national identity are supplanted by microtargeted appeals to the idiosyncratic beliefs, preferences, and prejudices of individual voters, voters will come to think of politics as less a common project than an occasion for expressing and affirming their narrow identities and interests. . . . Voters with out-of-the-mainstream and even abhorrent beliefs (such as overt racism) may find their beliefs legitimated and reinforced by micro-targeted messaging.
Microtargeting stands to fragment the electorate into countless groups. When disinformation is microtargeted, each group has its own set of unreliable “facts” about our civic life. Moreover, because more extreme voters are more easily targeted for turnout or suppression, a vast, moderate center is left out of the discussion of issues surrounding the election, undermining a key First Amendment value that campaigning enhances the “marketplace of ideas.”
Online “echo chambers” are asymmetric and more common among conservatives than liberals. Cass Sunstein proposes that a diversity of information and views is necessary to fix the problem of group polarization. But diversifying one’s information is harder than it seems, even if voters want to do so. Platform algorithms are designed to give users more of what they have liked in the past, creating so-called “filter bubbles.” The more frequently a social media user clicks on disinformation advertising or visits a hyperpartisan website, the more frequently similar content will be promoted on their Facebook newsfeeds or Internet search autocompletions.
In sum, disinformation hurts our democracy by undermining our faith in our institutions, weakening voter competence, and splintering the electorate. The nature of social media, with its affinity groups and algorithms, makes it likely that disinformation will echo among one’s social media networks and that countervailing information will not reach the user. The lack of transparency in online political advertising has long been a problem, and the recent disinformation attacks have made shedding light on online political advertising more urgent.
II. First Amendment, Political Speech, and Choice of Regulator
Political opinions and information posted online are indisputably political speech and thus protected by the First Amendment. Activities that are less obviously “speech” have also been constitutionalized by courts deregulating in the name of the First Amendment. This includes political expenditures. The “constitutionalization” of campaign finance has implications for regulation of online political advertising, including disinformation advertising. Government regulation of online political advertising, including disinformation advertising, is on firmest constitutional ground when it requires disclosure of who is speaking to whom, when, and about what. A lot of the remaining responsibility for reducing disinformation on social media falls to social media platforms. This is because doing so involves banning or restricting speakers or their speech—actions that would be unconstitutional for the government to require. Yet here’s the rub: however much the platforms claim they want to self-regulate, their short-term profit motives suggest platforms will be, at best, unreliable and inconsistent self-regulators.
Here, we explain the current state of play in First Amendment jurisprudence and discuss the merits of platform self-regulation and government regulation.
A. Constitutional Framework for Campaign Advertising Regulation
First Amendment protections for political speech are strong in the United States, enhanced by conservative-libertarian rhetoric among First Amendment scholars. Campaign finance cases analyze regulations differently depending on whether they ban speech or merely burden it in some way. Courts apply strict scrutiny to content regulation of political speech. Several legislative attempts to regulate the content, amount, or source of political speech have met their demise under this standard. In order to survive strict scrutiny, the government must show that a regulation is necessary to serve a compelling state interest and is narrowly drawn to achieve that end.
The Court has granted “compelling interest” status to a limited set of campaign and election-related interests that governments try to protect via regulation. Preserving fair and honest elections and preventing foreign influence in our elections are compelling government interests. Courts have acknowledged that the government “indisputably” has a compelling interest in protecting election integrity and have upheld narrowly-tailored government regulations of some kinds of speech around elections. For example, the Court has upheld restrictions on political speech in physical proximity to polling places, such as requiring a physical setback for political activities and banning campaign signs and clothing that advocate for a candidate or initiative near people who are voting. And in Bluman v. FEC, the Supreme Court summarily affirmed a lower court’s strongly worded holding that the government has a compelling interest in limiting direct campaign contributions by foreign nationals, though the language is somewhat uncertain about other involvement of foreign nationals.
When it comes to the government’s interest in preventing fraud on the electorate, the Court has stopped short of calling the interest “compelling,” saying that it “carries special weight during election campaigns when false statements, if credited, may have serious adverse consequences for the public at large.” Nevertheless, given the existing case law permitting restrictions in space, if not yet time (campaign season), the possibility remains open (though admittedly quite distant) that a narrowly-tailored prohibition on fraudulent online political speech could survive constitutional scrutiny where prior prohibitions on fraudulent speech have failed. In the meantime, the Court has said that the answer to false speech is not a blanket rule either allowing or prohibiting censorship. Rather, the answer to false speech is counter-speech.
Where government regulation of political speech falls short of a ban or a limit, as is the case with campaign finance disclosure and disclaimer regulations, it is subject to exacting scrutiny. To survive exacting scrutiny, the government must identify an overriding or sufficiently important government interest, and the regulation must be substantially related (or, under some formulations, narrowly tailored) to that interest. The primary government interest supported by the disclosure regulations the Court upheld in Citizens United, McConnell, and Buckley, is the “informational benefit,” which is about improving voter competence by “[e]nabling the electorate to make informed decisions and give proper weight to different speakers and messages.”
The Buckley Court fleshed out the assumption, saying, “[d]isclosure provides the electorate with information as to where political campaign money comes from and how it is spent by the candidate in order to aid the voters in evaluating those who seek federal office.” It allows voters to place each candidate in the political spectrum more precisely than is often possible solely on the basis of party labels and campaign speeches. The sources of a candidate’s financial support also alert the voter to the interests to which a candidate is most likely to be responsive and thus facilitate predictions of future performance in office.
Social science findings support the Buckley Court’s hypothesis that disclosure informs voters. In a series of experiments, Dowling and Wichowsky have shown that campaign finance transparency affects voter opinion. Adam Bonica has shown, using campaign finance data from decades of elections and legislator voting records at the state and federal levels, that campaign finance contributions are as strong a predictor of legislative behavior—as informative, in other words—as incumbent legislators’ prior votes. We also have evidence that voters demand disclosure and learn from a group or candidate’s decision not to disclose.
While there is little judicial guidance on the constitutionality of government actions we propose in Part IV, the courts should uphold government efforts to educate voters and social media users about disinformation and fact-checking. Similarly, the courts would likely uphold a regulation requiring platforms to provide an opt-in or opt-out system allowing social media users to control whether they view content previously flagged as false.
* * *
Under the existing jurisprudential framework, government’s main involvement to combat disinformation advertising will be related to transparency. But it may be time to re-visit the foundations of our First Amendment jurisprudence. The cases fleshing out First Amendment protection of political speech are a relatively late addition to our constitutional jurisprudence, and like all law, they were created in a specific historical context. The jurisprudence developed at a time when listeners were plentiful and speech less so. Recent Supreme Court majorities have interpreted the First Amendment to protect speakers, not listeners. Our transparency proposals fit this existing framework comfortably. But should government be able to do more to protect listeners from the “flood” of disinformation advertising before elections?
The Internet platforms themselves lack a coherent theory of the First Amendment. Platforms are not merely a venue for debates in the “marketplace of ideas,” in which truth can eventually win out. The truth stood little chance against the volume of disinformation advertising and other false political messaging that flooded the “marketplace” in the weeks leading up to the 2016 election. Nor are the platforms exclusively supportive of speakers’ personal autonomy to say whatever they want—another theory of the First Amendment. Terms of service for even the most libertarian platforms forbid behavior that is offensive but not illegal. Platform users are speakers and listeners, and platforms should want to balance their interests. Unfortunately, only speakers pay platforms for their services, leading platforms to cater their terms of service to speakers rather than listeners. The platforms also have not taken a collectivist, or deliberation-enhancing, approach to speech on their platforms, under a theory that the First Amendment should promote political engagement and public discourse. At best, they have adopted an inconsistent amalgam of these ideas.
As Volokh explains, with the advent of “cheap speech” online, intermediaries are weakened. Speakers, freed from editorial gatekeeping, have become less trustworthy. Listeners are better able to select speakers that affirm, rather than challenge, their ideologies; and political advertisers are better able to target them “to make arguments to small groups that they would rather not make to the public at large.” Tim Wu argues that First Amendment jurisprudence should adapt to our current conditions, in which speakers are plentiful and listeners receive so much messaging that it is harder for speakers to “break through” than ever. In the age of cheap speech, the flood of disinformation advertising distributed by bots works to dilute human political speech, biasing the playing field in favor of machine-generated echoes of highly amplified, reckless, or even malevolent, speakers. Policing non-human speakers would help to promote a “robust speech environment surrounding matters of public concern.” This collectivist-oriented shift would allow for government to at least backstop the platforms in their efforts to root out disinformation advertising.
Like Hasen, we see that a headwind may be building against government regulation requiring transparency of online political advertisements, even where the regulation would stop disinformation. Nevertheless, prior libertarian efforts to build a case for a “substantial overbreadth” doctrine would be less likely to succeed in the wake of the 2016 election campaign. Regulators can now show demonstrable damage, intent by meddlers (both foreign and domestic) to mislead and to affect elections, and involvement by two entities with little First Amendment protection: foreigners and non-humans.
B. Choice of Regulator
Negative market externalities justify regulation. Market externalities are often conceived of as negative effects from market activity on our environment or public health—say, from air pollution. Here, the market activities are platforms chasing profits without exercising gatekeeping or transparency responsibilities, and the externalities are costs borne by social media users in their roles as voters and participants in civic life. The platforms have so far not internalized the cost that their ad placement systems impose.
Here we discuss the relative merits of industry self-regulation and government regulation, each within their own constitutionally permissible spheres of action.
1. Industry Self-Regulation and Co-Regulation
It is not a foregone conclusion that government must be the main regulator to address the disinformation advertising problem. Platforms have long resisted government regulation. Nate Persily has argued that “the principal regulator of political communication will not be a government agency but rather the internet portals themselves.” The platforms are well situated, technologically, to minimize the amount of disinformation advertising that reaches their users, and they have already experienced some success in that regard.
Facebook and Twitter were the locations of most of the “attacks” in the 2016 election, so this Article focuses on them. After dragging their heels, both companies have taken steps to prevent future attacks, actions that are also aimed at heading off government regulation. The platforms have also continued to experience disinformation attacks.
The problem with leaning on platforms to self-regulate is that their conflicts of interest and political vulnerabilities push them away from strong action to combat the problem. Platforms make money from advertising, including disinformation advertising. The more ads they sell, the more promoted content crowds their platforms, the scarcer ad space becomes, and the more they can charge per ad. The more users click through on any pay-per-click ad, the higher the platforms’ ad revenues. Disinformation advertising headlines are refined to attract the most clicks, accruing money for the platforms in the process. The presence of bots and other non-human accounts inflates the number of users on the platforms, increasing the amount they can charge all ad buyers, not just foreign interlopers in our democracy. While bots and disinformation advertising can degrade the user experience and damage the platforms’ long-term revenues, their short-term bottom line increases because of advertising and inflated user counts.
Platforms are also politically vulnerable. Within a week after Mark Zuckerberg initially announced the self-regulatory measures Facebook planned to implement, he had softened his stance and begun to “both-sides” the issue, saying “[b]oth sides are upset about ideas and content they don’t like.” Professor Zeynep Tufekci, who researches online disinformation and authoritarianism, was quick to point out that his reaction reflected a common fear of social media companies: that they will be depicted as “anti-conservative.” In other words, the social media companies feel pressured to overcorrect in conservatives’ favor: even though the disinformation advertising that currently circulates online is overwhelmingly anti-liberal or pro-conservative, the platforms’ political vulnerability means that they will under-address the problem. That vulnerability makes them an unreliable self-regulator.
To the extent that the platforms do self-regulate, their current efforts are still far from the typical model of industry self-regulation or co-regulation. Industry self-regulation requires an industry-level organization that regulates its members by setting rules and standards about how they should conduct their business. Industry self-regulation is almost never “pure” self-regulation, but involves a nexus to a government co-regulator. Government agencies provide legal backstops to the self-regulation negotiated by industry participants, along with imposition of civil or criminal penalties on violators. Co-regulation stands the best chance of success when certain conditions exist. Most importantly, industry actors must be committed to the purpose of the regulation. The government must also be able to extract information from industry—here, the platforms—as to how the self-regulation efforts are succeeding. The state requires both “expertise and capacity to assess the performance of nongovernmental regulators; and those nongovernmental regulators must face a credible threat that their public overseers will assume regulatory jurisdiction if they do not meet their obligations.”
An analogy to co-regulation by an industry group closely related to the issue at hand illustrates industry self-regulation with government backstops. The Digital Advertising Alliance runs an opt-out program from online advertisements based on cookie-tracking. The industry enforcement process consists of confidential review of complaints by a committee, followed by board-level censure, membership suspension or expulsion, referral to the Federal Trade Commission or law enforcement, and publicity for non-compliance. By comparison, the platforms’ initial offerings to address disinformation advertising are paltry. It took Facebook over a year to even suggest it would reach out to other companies to “share information on bad actors and make sure they stay off all platforms.” We are a long way from effective and comprehensive industry self-regulation or co-regulation. Therefore, we must consider ways the government can constitutionally, and effectively, regulate in this area.
2. Government Regulation
Government regulation is coordination-facilitating and symbolically important. It facilitates coordination between industry members in mundane, but important, ways. For example, government can require platforms to collect information and provide it to the government or directly to the public in a uniform format. Standardized reporting allows the public, watchdog groups, journalists, and scholars to compare across platforms and over time in their data analysis. Moreover, shared information across platforms would be useful for platforms wanting to ban identifiable bad-actors who use the same accounts to buy, place, and promote ads. Government regulations also facilitate coordination through disclosure and audits to ensure compliance.
Government action in the realm of online political advertising is also symbolically important. In areas of national security and elections, signaling matters. The fact that our policymakers have been so quiet in the face of disinformation advertising and multiple strong statements by national security experts sends important signals to the attackers and the public. The attackers learn that they may continue with impunity. The public may perceive that government does not take the attacks seriously.
Government regulation also matters because law has expressive value. Law itself has special gravity, and adopting a policy into law signals the importance of the policy to the government. Codifying a policy can affect citizen expectations and behavior. It also signals that all members of a regulated industry must play by the same rules, an important rule-of-law value. In deciding on a regulatory approach, policymakers should keep in mind that
[p]olicy choices do not just bring about certain immediate material consequences; they also will be understood, at times, to be important for what they reflect about various value commitments—about which values take priority over others, or how various values are best understood. Both the material consequences and the expressive consequences of policy choices are appropriate concerns for policymakers.
Therefore, even in areas of regulation where the industry could self-regulate (or co-regulate with government), sometimes the government should still act to signal its seriousness in protecting important values.
Government is constitutionally prohibited from anything resembling censorship, and moreover, the platforms are in a better position to experiment with interventions that address the disinformation problem head-on. Nevertheless, where, as here, the platforms’ incentives and the public’s social welfare are misaligned in a way that would prevent the platforms from self-regulating (or prevent them from credibly committing to a self-regulation scheme), government should do what it can within constitutional limits, to help re-align actors’ incentives.
All of this political disinformation flooded into social media at a time when the FEC lacked an effective framework for regulating any political advertising online, regardless of content. When political advertising occurs on television, cable, satellite, and radio, government disclosure requirements are comprehensive, and compliance is high. Due to gaps in the regulatory regime and clever lawyering by political attorneys, the same advertisement that would be subject to disclaimer and other transparency requirements on television can go without them if it instead appears online. We explain these gaps in Part III.
III. Our Current, Insufficient, Regulatory Framework for Online Political Advertising
In the years leading up to the 2016 election, voters learned about the inadequacy of the federal campaign finance regulatory framework to handle the coming flood of money and advertising, both online and off. Insiders, such as former FEC lawyers quoted in the media, called campaign finance in the United States “the Wild West” and reported that “[c]andidates and political groups are increasingly willing to push the limits . . . and the F.E.C.’s inaction means that there’s very little threat of getting caught.” All of the regulatory and institutional weaknesses that drove this kind of reporting are even more extreme in the narrow regulatory regime we consider here—that of online advertising. Online political advertising differs from older forms of political advertising in important ways and deserves a regulatory framework that accounts for the differences. First, it is more likely to be disguised as informational content, or “native.” Second, it is more likely to contain disinformation. Third, it is more likely to be untraceable by the public or candidates hoping to speak to the same audience. And fourth, it is much cheaper. All of these features matter to shaping a regulatory framework that helps the public trace the source of the (dis)information they view online and the government keep foreign influence out of our elections. In this Part, we describe the current regulatory framework and its gaps.
“Public Communications.” Most FEC transparency requirements attach to “public communications.” Public communications include messages displayed on broadcast television, in print, on billboards, and the like. They also include all committee websites and emails whenever a committee sends more than 500 “substantially similar” messages. Importantly, the current definition excludes Internet ads “except for communications placed for a fee on another person’s or entity’s website.”
Disclaimers. The law requires disclaimers for many kinds of political advertisements. They say “Paid for by the XYZ State Party Committee and authorized by the Sheridan for Congress Committee,” or “Paid for by the QRS Committee (www.QRScommittee.org) and not authorized by any candidate or candidate’s committee.” For broadcast, cable, and satellite political messages, the FEC requires disclaimers on all public communications (1) made by a political committee, (2) expressly advocating for the election or defeat of a “clearly identified” candidate, or (3) soliciting contributions. Disclaimers are also required on (4) electioneering communications, which are publicly distributed communications that refer to a “clearly identified candidate for Federal office” and are distributed sixty days or fewer before a general election or thirty days or fewer before a primary. When we apply these four disclaimer triggers to Internet communications, regulatory coverage and disclaimer requirements decrease substantially. The first three triggers, for communications from political committees, containing express advocacy, or soliciting contributions, apply only where the communication is “placed for a fee.” The fourth, electioneering communications, is completely inapplicable, because electioneering communications are defined to exclude political messaging on the Internet.
As noted in Part I, in the weeks leading up to the election, well within the electioneering communications window, disinformation ads explicitly naming presidential candidates generated more attention than news articles from leading national newspapers. Among the disinformation ads that did not expressly advocate for the election or defeat of a candidate, many still mentioned candidates by name or showed their images. Were they on broadcast, satellite, or cable, our regulations would have required disclaimers as electioneering communications. Because they were placed online, we do not know who paid for them.
When we combine the current definition of public communications with the current disclaimer requirements, we end up with the following: A paid ad distributed via social media (on the Internet) must carry disclaimers like any other public communication if it advocates for the election or defeat of a clearly identified candidate. However, anything posted for free, like a blog post, a Tweet, or even disinformation generated from one’s personal profile or page, requires no disclaimer, even if it mentions a candidate by name right before the election, and even if it is amplified by a paid “bot army” or purchased “shares” on Facebook.
Many communications placed online for a fee—which would otherwise require disclaimers—have not had them. Presumably, the advertiser is willing to disregard the regulatory requirements, is spending below the threshold requiring regulatory compliance, or would claim an exemption under the “small items” or “impracticable” exceptions to disclaimer requirements. The small items exception applies to communications on physical items, such as bumper stickers, buttons, and pens, which were considered too small to bear a disclaimer. The impracticable exception applies to communications in skywriting, on water towers, and on clothing, where it would be too difficult to include a disclaimer. However, applying these exceptions to online political advertising would be disingenuous. Because of landing pages on click-through political advertisements, it has never truly been impracticable for an advertiser to provide a disclaimer: the advertiser could always provide one at the landing page. That fact did not stop platforms from asking the FEC whether the exceptions apply to character-limited ads on their platforms. In 2011, the FEC could not decide whether Facebook ads with fewer than 200 characters of text could qualify under either exception; it deadlocked 3-3, a result long interpreted as an exemption. The FEC has since clarified that a disclaimer is required, but the commissioners could not agree on the rationale. The FEC has also recently failed to decide whether nonconnected political committees may use Twitter without placing a disclaimer on their Twitter profiles. This non-decision gives the green light to groups that want to hide behind Twitter handles without revealing even the group’s website or physical address.
Disclosure. In addition to gaps in our disclaimer requirements, our disclosure rules are also fraught with holes and exceptions that have led to untraceable money pumping through our elections. Campaigns, party committees, and PACs must all submit regular reports to the FEC, disclosing their contributions and expenditures. However, since Citizens United, over half a billion dollars has flowed through 501(c) tax-exempt non-profits, which are typically organized as 501(c)(4) or 501(c)(6) “social welfare” organizations, to either make independent expenditures or to support groups that do. These “dark money” groups are not required to publicly disclose their donors. Funds can be donated to 501(c)s by individuals, corporations (including LLCs), unions, and anyone seeking anonymity—including foreign sources. (Foreign spending “in connection with an election” is illegal, but would be easy to do via these avenues, as we discuss below.)
The groups do disclose their contributions to the IRS. But with an audit rate of 1% for tax-exempt non-profits, the IRS is unlikely to investigate the sources behind donations to so-called “dark money” organizations, even where they use their resources to spread disinformation. Congress has prohibited the Securities and Exchange Commission from using appropriated funds to draft or implement rules requiring the corporations it regulates to disclose political spending.
Transaction-level disclosures are important. In order to aid enforcement on broadcast, cable, satellite, and radio ads, the Federal Communications Commission (“FCC”) requires reporting of the financial details of a transaction purchasing an ad, as well as the station, time, and programming during which the ad ran. The ads themselves, while not required to be retained by broadcasters, are captured by the public in all the ways the public records live programming. There is currently no requirement at the federal level that online political ads or the data around their placement be retained, making enforcement virtually impossible.
Foreign influence. Some political disinformation ads may also violate the FEC’s ban on foreign nationals’ spending “in connection with any federal, state, or local election in the United States” and on their making any disbursement for an electioneering communication. The restriction was upheld in Bluman v. Federal Election Commission. At least some disinformation ads violate the ban on foreign spending for independent expenditures, which advocate in express terms for the election or defeat of a “clearly identified candidate.”
Of course, some disinformation ads are merely “issue ads.” They seek to influence voters by shifting public perception, but do not advocate for the election or defeat of any particular candidate, or even mention a candidate. Under our current regulatory framework, a hostile foreign government can disseminate divisive messaging about fraught social issues or spread disinformation about a candidate without violating American campaign finance law, even if the ads are placed right before the election.
In sum, because of outdated loopholes, we face the reality that disinformation advertisements, which often mention or display candidate names and images and would be considered electioneering communications if placed elsewhere, are distributed online with no disclaimers, little disclosure, and, sometimes, with foreign money. Online advertising has become exponentially more important for political campaigns since the FEC adopted its outdated regulations in 2006, and it will become the most important way for politicians to communicate with voters in the very near future. Excluding a large portion of online advertising from disclosure and disclaimer regulations is problematic, particularly in light of the studies reviewed in Part II suggesting that disclaimers and disclosures provide information that affects voter decisions, and the court’s longstanding belief that using disclosure to inform voters is a compelling government interest.
IV. Constitutionally-Permissible Regulations to Address Disinformation Advertising
We now turn to our proposals. We focus on transparency, education, and “nudges” that government can constitutionally implement. The reforms we propose would reach any political advertising that is placed, promoted, or produced for a fee. Viral disinformation without paid shares or re-tweets, memes made by individuals at home for free and posted to personal social media sites, and similar low-cost and low-volume activity, would not be subject to the regulations we propose.
We recognize that defining which advertisements deserve regulation is a persistent and sticky problem in campaign finance regulation. Our definition has two main components: (1) cost and (2) intent to influence peoples’ votes. Political ads cost money to produce, post, or disseminate—including payments for microtargeting, any off-platform payments to “bot farms,” and paid “likes” and “shares” for distribution. Political ads also aim to influence elections. Distinguishing ads that aim to influence the election from ads that merely discuss “issues” is particularly thorny. The current line between the two turns on “express advocacy” or, within a certain window before the election, reference to a clearly identified candidate. This line is hard to police, and the window is meaningless in the online setting, in which an ad can persist over time.
An example may help illustrate the definitional challenge. Suppose that a group called “Liberals Against Forced Motherhood” has spent more than the minimum threshold on political advertising and is registered with the FEC. Consider three scenarios.
1. Suppose the group posts a meme online and pays Facebook to promote it in the newsfeeds of its followers. The text of the meme says, “Hands off our birth control!” With no other words or imagery, this would be considered an issue ad under the current federal rules, no matter when it runs, and would not require a disclaimer.
2. Now suppose the group posts the meme and pays Facebook to promote it in the newsfeeds of its followers, and the text of the meme overlays a photograph of a Republican presidential candidate. Because the photograph shows a “clearly identified candidate,” under the current federal rules the advertisement would be subject to disclaimer requirements only if it ran right before the election, during the “electioneering communications” window. Of course, given the nature of social media, it can be posted well before the “electioneering communications” window opens, and members of the group can continue sharing and circulating it, disclaimer-free, right up to the election.
3. Finally, suppose the group posts a meme online and pays Facebook to promote it in the newsfeeds of its followers, and the text of the meme says “Hands off our birth control! Vote against Candidate X!” Under the current federal rules, this meme requires a disclaimer no matter when it is posted because it contains “express advocacy.”
Now change the facts. What if the meme is posted “for free” on the group’s Facebook Page, and fake Facebook users have been paid, off-platform, to share it? The group does not pay Facebook for promotion, but the ad circulates nevertheless. The current federal rules have been interpreted in a way that would not require disclaimers on any of these memes. But we believe this interpretation, made in the days before bots and fake “shares,” should be updated to account for our new reality.
Finally, consider one more distributional change. Suppose now that, instead of paying Facebook to promote only to page subscribers, the group pays Facebook to promote the ad to anyone who “looks like” its subscribers and any women who are between the ages of 18–45, who have a college education, who are White, who “like” Planned Parenthood, and who live in swing states. Does this kind of micro-targeting turn the issue ad in the first scenario into a political ad? We think it does—particularly the “swing state” targeting. Even if disclaimers should not be required, the ad itself should be retained so that targeted users can know who is attempting to persuade them.
Before social media, most ads appeared on television, radio, or in print. They were fewer in number, limited in time, and targeted large groups of the electorate. In that context, it was easier to police the line between electioneering advertising and issue speech. In light of the realities and challenges of political advertising online, issue speech has become so politicized and so microtargeted that we need to have a national conversation on where to draw the line.
Our proposal follows. It is modest, it is constitutional, and it will not solve the problem of online disinformation. It is, however, a necessary and important step in the right direction. After discussing our proposal, we briefly provide self-regulatory considerations for platforms wanting to take real steps to reduce the quantity of disinformation advertising on their platforms.
A. Improve Transparency
As more political advertising moves online, and absent regulatory changes, the likelihood that voters will see untraceable ads increases. Without transparency, we cannot “follow the money” behind political advertising we see online. Most relevant to the world of disinformation advertising, we cannot know how much of the messaging we see online is foreign-funded or foreign-distributed. It took almost a year for Facebook to make public some of the foreign-funded ads it displayed to its users. If online advertising, including disinformation advertising, were subject to transparency regulations, we would have seen these funding sources in real time.
In order to subject online political advertising to disclaimer and disclosure requirements, the groups producing large amounts of it should be required to register with election administrators, just as they do when making political expenditures offline. A regulation adopting disclosure and disclaimer rules for online advertisements would be a step in the right direction. We also propose a repository to facilitate real-time transparency of all online political ads as well as ex post enforcement of campaign finance rules. In this Section, we discuss three transparency-related regulatory changes for online political advertising.
1. Require Platforms to Keep and Disclose all Political Communications and Audiences
Government should require political advertisers on large social media platforms to save and post every version of every political communication placed online, whether video, print, or image, and whether placed “for a fee” or not. The communications should be placed on a dedicated and easy-to-locate page on the campaign’s or group’s website or user page on the platform, as well as on a dedicated page created by the platform. The communications should be stored in their entirety, and they should be posted along with a uniform set of data stored in a uniform format for easy analysis and comparison across campaigns, across platforms, and over time. The FEC should also retain this data for longer-term storage and to ensure that it exists even when platforms change or cease to operate.
In addition to the communication itself, the online political advertising repository should contain the following data: when the communications ran; how much they cost to place and promote; candidates to which the communications refer; contested seat/issues mentioned; targeting criteria used; number of people targeted; and a platform-provided Audience identifier (“Audience ID”). For example, if a communication was aimed at women Facebook has identified as Democrats (from their profile pages), who “like” the show “Blackish” and also “like” Black Lives Matter, that information should be disclosed with the communication. Similarly, if the advertiser used outside consultants or internal data to generate a list of names, including through Custom or Lookalike Audiences on Facebook or similar services on other platforms, the advertiser must provide an Audience ID that will enable groups to engage in “counter speech” to the same audience. The Audience ID will be linked within the platform to a list of user names, but the platform should not disclose the audience names to anyone but the FEC.
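To make the proposal concrete, the data fields listed above could be stored as a uniform, machine-readable record. The following sketch is purely illustrative—the field names, types, and JSON format are our own assumptions, not a schema prescribed by the FEC or any platform:

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical record for one communication in the proposed repository.
# All field names are illustrative, not a regulatory specification.
@dataclass
class AdRecord:
    ad_id: str                    # platform-assigned identifier for the creative
    run_dates: tuple              # (start, end) dates the communication ran
    cost_usd: float               # amount paid to place and promote the ad
    candidates_mentioned: list    # candidates to which the communication refers
    contests_mentioned: list     # contested seats or ballot issues mentioned
    targeting_criteria: list      # targeting criteria used by the advertiser
    audience_size: int            # number of people targeted
    audience_id: str              # platform-provided Audience ID; the user list
                                  # behind it stays with the platform and the FEC

    def to_json(self) -> str:
        """Serialize with sorted keys so records compare uniformly
        across campaigns, platforms, and time."""
        return json.dumps(asdict(self), sort_keys=True)

record = AdRecord(
    ad_id="fb-000123",
    run_dates=("2016-10-01", "2016-10-20"),
    cost_usd=1500.00,
    candidates_mentioned=["Candidate X"],
    contests_mentioned=["U.S. Senate, Missouri"],
    targeting_criteria=["women", "likes: Blackish", "likes: Black Lives Matter"],
    audience_size=48000,
    audience_id="AUD-7F3A",
)
```

Because the Audience ID appears in the public record while the underlying user list does not, counter-speakers could purchase delivery to `AUD-7F3A` without ever learning the names behind it.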
The repository we propose is simply an improved version of the Political File for television commercials, with which political advertisers already comply, and which already reveals their targets. The Political File’s design is outdated, and our political advertising repository would better serve our current technological abilities and democratic needs. Of course, political advertisers will protest that this disclosure burdens their speech by requiring that they disclose their microtargeting strategies. The objection is weak. The Political File already contains targeting information, because the broadcaster, time of day, and programming are all disclosed, and a media company’s audience at a certain time of day for a certain program is a particular set of people the advertiser is targeting.
Consider an example to illustrate how television advertising already embeds audience information. When a campaign runs a television ad during an 8:30 p.m. airing of “Blackish” on the ABC affiliate in the St. Louis market—all of which is information that is disclosed in the Political File—the campaign’s targeting strategy is revealed. Online targeting can be “narrower,” in that the communications can be targeted to a smaller group of people, but just because online targeting strategies are more precise does not grant the speakers more First Amendment protection. The size of the audience is irrelevant to the constitutional question of whether or not targeting criteria should be disclosed. If anything, communications targeting a narrower audience may be more damaging to civic values because they are aimed at suppressing or mobilizing voters, rather than making broad persuasive appeals. Narrow targeting may therefore deserve less, rather than more, constitutional protection. Finally, posting targeting criteria and Audience IDs for online ads facilitates counter speech in the same way that disclosure of the date, time, station, and program in which a television ad runs facilitates counter speech to the same audience.
The repository is particularly helpful when it comes to enforcement. Advertisers peddling disinformation—particularly those located abroad—have little incentive to make truthful and timely disclosures and disclaimers. Penalties, after all, come long after the election. The current enforcement mechanism is triggered by a complaint to the FEC. It is a purely reactive system, and it relies on a complainant actually seeing the offending content. The advertising repository we propose facilitates decentralized enforcement by allowing groups to flag disclaimer violations after they occur. It is therefore crucial that the repository hold communications for a reasonable length of time. Television stations and cable and satellite companies are required to maintain the Political File for two years. The Honest Ads Act, a Senate bill introduced in 2017 that calls for a repository, would require platforms to retain communications for four years. Facebook’s current advertising archive holds ads for seven years. Maintaining the repository for the duration of the campaign, plus a reasonable amount of time post-campaign, is important.
It is also important that reporting be coordinated across all online platforms: all advertisers and distributors should report their activity in a uniform format. Gone are the days of handwritten and scanned forms like those we see in the Political File. Platforms can offer repository reporting and storage as a service to ad buyers and distributors, and reporting can happen as soon as the ad begins to appear in users’ feeds. Regulators, researchers, civil society watchdogs, and data journalists can analyze the data, act on it, and report to the public on the current state of online political advertising. And yes, opposing campaigns can run counter-messaging based on it, just as they can with disclosures to the Political File for television.
These transparency requirements should also have the effect of reducing the incentives to produce disinformation advertising and any other divisive advertising microtargeted at small subsets of the population. Microtargeting is not, in itself, bad. But modern day campaigns are best able to target extreme voters. Microtargeting skews the demographics of the voting population away from the district itself and contributes to elite political ignorance about the political preferences of constituencies. As individual microtargeting possibilities increase, campaigns and groups will want to give slightly different messages to different people. Indeed, one particular ad buy containing disinformation advertising (and paid for by Russians) was aimed in exactly this way, targeting people who had expressed interest in “LGBT community, black social issues, the Second Amendment, and Immigration.” If advertisers are required to post every version of every ad on the same site, along with targeting information, voters could detect when a group is trying to “divide and conquer” parts of the electorate. The message will reach voters via informational intermediaries. Opposition researchers can use their opponents’ divisive strategies against them. Smart data analysts can create tools that voters can use to see what their newsfeed would look like with a different configuration of “likes” and information. A user who sees ads in favor of guns, against abortion, and in support of Republican candidates could use the tool to see how her feed would look if she lived in a different zip code, “liked” Planned Parenthood and Everytown, or identified herself as a Democrat on her profile. Knowing the kind of advertising (and disinformation) our fellow voters receive can aid deliberation in democracy.
i. Triggering Conditions
Which online messages should be subject to transparency rules? Three non-exclusive options are possible: (1) the traditional bright line rule of candidate or ballot initiative mentions; (2) a more-easily automated rule of identifying political content by targeting; and (3) classifying the advertisers as political or not, gating their access to the platforms for advertising buys, and requiring repository storage of everything they run. We think all three can be deployed together, where any ad that fits any of the three rules would be included in the repository. Inclusion in the repository does not mean that disclaimers and disclosure are required. That is a separate determination to be made based on a loophole-free version of our existing regulations and described more fully in Section IV.C.
a. References to Candidates or Ballot Propositions
The cleanest regulatory line tracks the current regulatory requirements for disclaimers in other contexts: ask whether the ad advocates for the election or defeat of a clearly identified candidate or ballot initiative; or whether the ad mentions or shows a candidate or proposition and airs within a certain specified time before the election. We believe an ad belongs in the repository if it mentions or shows a candidate or issue any time after a candidate declares her candidacy or the issue is approved for the ballot. Given that disinformation advertising preceded the 2016 election by more than a year, we believe this modest temporal expansion for electioneering communications is wise given the realities of campaigning. We also believe that tying the expansion to declarations of candidacy and ballot qualification—when campaigning heats up—helps its chances against a First Amendment challenge. Admittedly, our proposal is gameable, encouraging groups to place as many ads as they can without repository capture before their preferred candidate declares, in hopes that they will still be circulating as the election approaches. Nevertheless, without more research into the realities of online political messaging over time, our proposal is as far as we think policymakers can confidently go within the bounds of the First Amendment.
Facebook already monitors ad content in order to minimize the amount that violates its terms of service. It prohibits or restricts advertising for tobacco, drugs (illegal or prescription), weapons, adult content, “sensational content” (“[a]ds must not contain shocking, sensational, disrespectful, or excessively violent content”), misleading or false content, and many other categories that the platform already tries to identify and reject before it goes live as an advertisement. The advertising review process—until the post-2016 disinformation advertising political maelstrom—was entirely automated, though Facebook has begun to include humans in advertising review. Our broader point is that reviewing ads for mentions of candidates and political issues is not difficult, particularly with human involvement.
As a back-up method, the platforms should require advertisers to indicate whether the ad mentions a candidate. The platforms can impose penalties (refuse to sell ad space, raise prices, temporarily suspend accounts, report to government regulators) on advertisers who lie about the content of their ads. A system that is based on ad content will require spot checks and a way for advertisers to object to their inclusion in the repository, as well as for viewers to report whether an ad that should contain a disclaimer actually does.
b. Political Targeting Categories
Another triggering criterion would be easy for social media companies to automate. We can require ad disclaimers and inclusion in the repository when an ad is targeted at explicitly political groups or when its targeting criteria include “suspect classes.” Targeting categories might include political parties; “likes” or “follows” of political parties, candidates, issues, or groups that have parties, candidates or issues in the group’s name (like “Texans for Hillary” or “Minnesotans Against Abortion”); a racial category combined with any other listed criteria; and other similar categories. Even if this is the only trigger, the likelihood that a consumer advertisement would be swept up in a repository requirement is probably slim, as consumer data is not very predictive of political persuasion and not very useful for campaigns.
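The targeting-based trigger described above is simple enough to sketch in code. A minimal sketch follows, assuming illustrative category lists; the term sets and the function name are hypothetical, not a proposed legal standard, and a real implementation would use the platform's actual targeting taxonomy.

```python
# Illustrative category lists (assumptions for this sketch).
POLITICAL_TARGETING_TERMS = {"political party", "candidate", "ballot issue"}
SUSPECT_CLASS_TERMS = {"race", "religion", "national origin"}

def requires_repository(targeting_criteria):
    """Return True if an ad's targeting hits an explicitly political
    category, or combines a suspect class with any other criterion."""
    criteria = {c.lower() for c in targeting_criteria}
    # Any explicitly political targeting category triggers capture.
    if criteria & POLITICAL_TARGETING_TERMS:
        return True
    # A suspect class combined with any other criterion also triggers capture.
    if criteria & SUSPECT_CLASS_TERMS and len(criteria) > 1:
        return True
    return False

print(requires_repository(["Candidate", "age 18-35"]))  # political category present
print(requires_repository(["race", "zip 90001"]))       # suspect class plus another criterion
print(requires_repository(["hiking", "age 18-35"]))     # ordinary consumer targeting
```

Because the check runs on targeting inputs the platform already holds at the moment of the ad buy, it can run automatically, before the ad is served.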
c. Identify Political Ad Content by the Speaker (and Know the Speaker)
Facebook has a political advertising sales and operations team—indeed, it has teams “specialized by political party, and charged with convincing deep-pocketed politicians that [Facebook does] have the kind of influence needed to alter the outcome of elections.” There are teams assigned to campaigns for each major party. Antonio García Martínez, a former Facebook product manager who ran the targeted ads program, argues that Facebook is already set up to adopt a “know your customer” type approach, similar to those used in the banking sector to prevent money laundering. Platforms should be required to “log each and every candidate and SuperPAC that advertises on Facebook. No initial vetting means no right to political advertising.” For the platforms, the “know your customer” approach is useful for creating a “gate” that allows platforms to avoid obvious foreign money and to intercept and stop foreign disinformation advertising in our elections. A similar intervention could require a U.S. bank account to purchase ads; this will not stop foreign intervention, but it will make it easier for enforcers to trace the source of advertisements.
Facebook does not currently gate political accounts at creation. But political advertising is targeted in such a way that the platforms could identify Pages that attempt to circumvent the additional check on political content by passing their advertising off as commercial advertising. Subjecting political advertisers to a source check can be done by Facebook with little difficulty. In the interest of national security, government should require that the platforms report when an ad is obviously funded by a foreign source, in real time, or as soon as the platform becomes aware of it.
ii. Limits to a Repository Requirement
The repository requirement cannot solve all challenges of online political advertising. We imagine a challenge to the scope of the repository—perhaps it is underinclusive. What is special about the online context—why not require a repository for offline messaging as well, such as mailers and print ads? Some cities, like Los Angeles, require that all campaign and independent expenditure communications be retained and disclosed, which includes any “message that conveys information or views in a scripted or reproducible format, including but not limited to paper, audio, video, telephone, electronic, Internet, Web logs, and social media.” Requiring retention and disclosure of printed communications is helpful and important, but it is less urgent than creating a repository for online ads, because printed materials do not disappear like online ads currently can. Enforcement of our disclosure, disclaimer, and substantive campaign finance rules for online political advertising is almost impossible without the repository.
Another administrability concern lies in a further gameable aspect of the current regulatory framework, which should be updated for the age of social media and viral ads. Some ads are placed for free, but promoted via bots, sock puppets, and inauthentic social media users (machine or human). Their promotion “services” are designed to appear organic, and payment to secure the ad shares and re-tweets occurs off-platform. Platforms are now able to identify suspicious activity from accounts that have an outsized impact, so some of these faux-organic posts are detectable now. Payments for ad promotion by humans and non-humans alike are important expenditures, and they should trigger reporting requirements once they reach a minimum threshold. In brief, political ads that would otherwise be subject to disclaimers if they were placed for a “fee” under the current regulations, but which are placed for “free” and promoted via paid bots should contain disclaimers. They aren’t “free” content. This is only administratively difficult where the group making the payments is inclined to avoid reporting payments to services providing bots, trolls, and other inauthentic users in order to boost their messages. Nevertheless, a violation of this requirement provides an important enforcement “hook” to reduce disinformation online.
iii. Current Efforts to Aggregate Ads
Facebook is the most advanced of the platforms in its efforts to collect political communications, but its efforts still fall short of what its users deserve. In May 2018, Facebook posted an Archive of Ads with Political Content. The Archive discloses the Page that paid for the ad, all ads run by the Page, and the audience makeup, but not the targeting criteria. While Facebook’s Archive addresses several reforms we have publicly requested in the past eighteen months, its design falls short in several important ways. First, because it does not require information about the true source of the communications, voters still do not know who is speaking to them. Rather, they know who paid to boost an ad into their feeds. Second, the Facebook Archive does not provide the targeting categories or an audience ID for a list of users that were targeted with the political communication. The Archive reveals age and gender distribution of the audience, as well as the state in which they reside, but those are certainly not the only targeting criteria used. For any given ad, the women and men of various ages were not targeted merely because of their age, sex, and location; they were targeted because of other information that Facebook knows about them, such as what issue-oriented groups or other candidates they like or follow on the platform. A candidate who is the subject of a disinformation campaign would not be able to speak to the same audience unless she spoke to the entire population in the geographic areas targeted by the disinformative campaign. This is no remedy for disinformation attacks on social media. Moreover, the First Amendment does not require this level of protection for disinformative political speech. Facebook should make targeting criteria plain, to enable counter speech. Third, the Archive affects only one corner of the vast world of social media, when we know industry-wide coordination is needed.
Looking around the industry, each platform has suggested its own “fixes,” all of which suffer the ills of not providing targeting criteria and not requiring information about the true source of the communication. Moreover, the platforms’ proposals are uncoordinated and will create an overlapping web of platform-specific fixes. Voters want to know who is trying to influence them, and to accomplish this, they need one online “file” for all political communications, which is easily searchable, and which is divided into categories of who was targeted and for what reason.
The Honest Ads Act contains a rough description of a set of transparency requirements that would apply to any person or group spending more than $500 (aggregate) to make electioneering communications online and would require that the platform maintain a public file. The current draft of the bill is vague on whether the system is disaggregated, like the FCC’s Political File, where users must search station-by-station and year-by-year. If the current proposal’s design is also disaggregated, then members of the public wanting to view the ads would be stymied by having to search advertiser-by-advertiser to find the ads they seek. This early design can be improved. First, disclosure should be standardized across platforms. Second, the $500 aggregate spending trigger is probably at the upper limit of what will be effective. It may be politically pragmatic to include a spending trigger, but the Constitution does not require one, and the Political File does not have one. Five hundred dollars is well below the campaign contribution limit and the registration thresholds with the FEC, but it has enormous advertising reach on Facebook. A numerical example illustrates. Imagine a Super PAC called Vermonters for Bernie. Vermont has around 500,000 voting-aged residents. Suppose that 400,000 of them are on Facebook. For less than $4,000 and the current cost-per-impression price of less than a penny, the group could show all voting-age residents of Vermont the ad. Of course, a group would only target voters that it knew it wanted to turn out to vote or that it knew it wanted to suppress—in other words, a much smaller number than the 400,000 or so registered voters on Facebook. For $250, an ad will have 25,000 “impressions,” appearing in the newsfeeds of 25,000 people. Considering the last election came down to fewer than 80,000 voters in three states, we believe the threshold triggering regulation should be fairly low.
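The reach arithmetic in the Vermont example can be sketched directly. This is a back-of-the-envelope calculation using the article's illustrative figures; the penny-per-impression price and the 400,000-user count are assumptions for the example, not current market data.

```python
# Illustrative inputs from the Vermont example (assumptions, not market data).
cost_per_impression = 0.01   # roughly a penny per newsfeed appearance
facebook_voters = 400_000    # assumed voting-age Vermonters on Facebook

# Cost to place the ad in every one of those newsfeeds once:
# about $4,000 at a penny per impression, less at sub-penny prices.
full_reach_cost = facebook_voters * cost_per_impression

# Newsfeeds reached by a $250 buy, well under the bill's $500 trigger.
impressions_for_250 = round(250 / cost_per_impression)

print(full_reach_cost, impressions_for_250)
```

The point of the arithmetic: even a buy far below the $500 trigger reaches tens of thousands of targeted newsfeeds, which is why the threshold should be low.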
The platforms can also advise the advertisers of their obligation to register with and report to the FEC once they hit a certain threshold, to avoid a situation in which unsophisticated actors are swept up in the regulatory regime for very small expenditures.
2. Close the Loophole for Disclaimers in Online Ads
Despite its recent embrace of transparency legislation, Facebook has long opposed transparency in online political advertising. Political advertising placed “for free” is still political advertising, and the public has a right to know who paid for its creation or distribution. To enforce disclaimer requirements, platforms can deputize users to report disclaimer violations, in the same way that the platforms allow users to report violations of the terms of service. They can also perform random spot-checks to help enforce the requirement (and deter attempts to circumvent it), by asking users after the ad is shown whether it contained a disclaimer.
The FEC is again feeling public pressure to close the loophole for disclaimers in online ads. It held a hearing about online advertising disclaimers, but given the political and institutional realities of that body in 2018 (with a bare quorum and inability to agree on many issues), it seems unlikely that the FEC itself will make much progress in the near term.
As for the content of disclaimers, at a minimum, the disclaimers should reveal the same information required when ads are run on television or radio. Since Citizens United, legislators and activists have urged that disclaimers on all ads (online or not) contain the names of the top donors to the entity running the ad. This strikes us as reasonable, and political science research has shown aspects of these more detailed disclosures to be effective.
3. Eliminate Donor Anonymity for LLCs and 501(c) Organizations
Under our current disclosure and disclaimer framework, the public only sees the actual names of donors under certain circumstances, such as when the donors give to a campaign, party, SuperPAC, or other outside group subject to disclosure requirements. Even if the loophole for online advertising disclaimers is closed, the broader problem of LLC and 501(c) disclosure will remain. This loophole matters for disinformation advertising, because even if the disclaimer requirements are extended to online ads run and distributed by LLCs and 501(c) groups, voters cannot “follow the money” without extending disclosure requirements to corporations making independent expenditures.
Why does this matter? For starters, the holdings in Citizens United and SpeechNow combine to imply that limits on independent expenditures are unconstitutional. Mega donors to outside groups can—and do—seek anonymity by making their independent expenditures through either their own anonymous LLCs or through 501(c) groups. Money is passed from group to group in a “daisy chain” of limited transparency.
We do not know what share of online ads is currently run by groups without disclosure requirements. The current legal regime means that there is no limit to the amount of political messaging that could come from anonymous sources. Moreover, corporate anonymity can hide foreign influence in our elections. Saving ads run by corporations in the repository without requiring disclosure of their funders truncates voters’ ability to follow the money to learn about candidates and policies that matter to them.
B. “Nudge” and Educate Sharers and Viewers
We now turn our attention to ways the government can help reduce the spread of disinformation advertising. User education is paramount. Scholars call efforts to preempt disinformation via education “inoculation.” There are various successful forms of inoculation, such as educating users about the “potentially misleading effect of false-balance media coverage,” preemptive warnings to people about tactics used to spread misinformation, and even online games that teach the main strategies of disinformation.
A simple education campaign on platforms can inoculate users, helping them learn how to avoid spreading disinformation. For example, users can be taught how to tighten their security settings and reminded not to interact with disinformation in their newsfeeds, because the algorithms promote content based on interactions with it. Although this requirement might invite a challenge as “compelled speech” under normal circumstances, it seems unlikely that platforms would protest it in this political climate. On firmer constitutional ground, though much more expensively, the government could pay to place inoculating ads on the platforms.
Viewing less disinformation in the first place is important, because we are bad at recognizing and remembering corrections to false information. Disinformation, especially when repeated, persists in our minds. Users can view less disinformation if platforms provide an opt-out or opt-in system for viewing disinformation and content from sources that have regularly spread disinformation. An opt-out system for consumer and service advertising already exists. AdChoices, run by the Digital Advertising Alliance, allows Internet users to opt out of being tracked by advertisers who are members of the alliance, who use “cookies” and tracking to present ads to Internet users based on previous internet activity. Default settings can be sticky. For example, under the AdChoices program, only a small number of people actually opt out. If government required platforms to default users to not view narrowly targeted political or issue ads, and instead platforms offered users the choice to opt in to viewing that content, low uptake would reduce the amount of disinformation that each viewer encounters. An opt-in (or opt-out) system would reduce ad revenues for platforms selling political ads, but political ads are a minuscule part of platforms’ overall advertising revenue. As for the constitutionality of a government-imposed opt-in or opt-out requirement, there is no case directly on point. Government action is not strictly required here, if platforms are willing to sacrifice a bit of profit. They can create an opt-in system voluntarily.
These interventions will not stop everyone who shares political disinformation. Some people are particularly motivated to share it. Partisan perceptual bias and motivated reasoning present additional challenges to efforts to convince people to stop spreading disinformation advertising. Partisan perceptual bias is distortion of “actual-world information” in the direction of “preferred-world states,” which can occur when a fact has positive or negative implications for one’s party. Motivated reasoning, observed here as directionally motivated reasoning, “leads people to seek out information that reinforces their preferences (i.e., confirmation bias), counterargue information that contradicts their preferences (i.e., disconfirmation bias), and view proattitudinal information as more convincing than counterattitudinal information (i.e., prior attitude effect).” Partisan bias and motivated reasoning mean that it may be difficult to affect the utility calculations of people “under the sway” of disinformation that agrees with their preferred policy positions. Some social media users do not care that the items they share on social media have been debunked by third-party fact checkers. Political scientists Brendan Nyhan and Jason Reifler have observed that corrections to factual misperceptions can backfire to the point that “corrections actually increase misperceptions” among the group whose ideology is threatened by the correction, an effect observed (so far) among those who describe themselves as “very conservative.” In sum, our politics may be so group-based that users could happily circulate news with contested content as long as it supports their candidate.
Therefore, platforms may need to be very active to reduce sharing of disinformation. A one-time opt-in (or out) process would be a helpful start, but the amount of disinformation that persists may still be damaging to democracy. That brings us to general approaches that the platforms can use, which probably would not survive a constitutional challenge if the efforts were required by government regulators.
C. Considerations for Platform Efforts to Reduce Disinformation
Disinformation is “sticky.” A series of papers by Nyhan and coauthors suggest that “political myths are extremely difficult to counter.” Reducing the amount of disinformation that voters are subjected to is useful from a human cognition standpoint, and as we have argued, from the standpoint of a thriving democracy. After an early period of minimizing its role, Facebook has begun to address its disinformation problem. It has experimented with using third-party fact checkers to identify and label disinformation, with mixed results. It has also experimented with offering “related” stories that serve as fact correctives, polling users on which news sources they trust most, and suppressing all news in its users’ newsfeeds. Finally, it has begun to move away from including news in newsfeeds. That is a move away from publishers, but not necessarily a move away from disinformation, since so much disinformation seems to have emerged from Pages set up by so-called astroturf groups and amplifying fake media sites.
Three general considerations will help any private regulatory framework to be effective. First, any efforts to label and identify questionable (or trustworthy) stories or sources should be consistent across platforms. All voters should be able to quickly identify untrustworthy content across platforms and trust that all platforms use the same standards to classify it. Second, the platforms should aim at incentives. They can do so in overt ways, such as Facebook’s plan to temporarily ban advertisers who repeatedly share disinformation advertising that has been marked by fact checkers as “false news.” They can also aim at incentives in deeper ways, such as the way Facebook’s algorithm demotes ads that provide “low quality” experiences when users click through. Third, the platforms can turn down the volume of disinformation advertising by enforcing their terms of service, which prohibit bots and “inauthentic likes.”
D. A Note About Feasibility
As much as the social media companies argue that the best answer is self-regulation, a broader look around the world shows that social media companies comply with fairly tight regulations in other countries. Some of these regulations would not pass First Amendment muster or might not be otherwise desirable in the United States. Nevertheless, platform compliance with regulations elsewhere belies the platforms’ claim that U.S. government regulations would be overly burdensome.
Consider several examples from European regulations. First, Germany passed a law that fines media platforms for failure to delete “illegal, racist or slanderous comments and posts within 24 hours of being notified to do so.” Because disinformation ads are often slanderous, many of them will expose the platforms to penalties if not removed. The fines are steep: up to €50 million ($57 million), and estimates are that it will cost the platforms around €530 million ($622 million) a year to increase monitoring to avoid fines. Germany has apparently seen a decline in disinformation on Facebook since the law was implemented in summer 2017.
In the Czech Republic, the government is particularly concerned about Russian efforts to destabilize its democracy. Its interior ministry has launched a Center Against Terrorism and Hybrid Threats “tasked with identifying and countering fake news.” Dozens of jurisdictions worldwide observe “election silence,” or a media blackout, in the time leading up to voting day, or during voting day itself. These blackouts range from not allowing the mention of candidates aside from the fact that the candidate voted (France) to halting advertising except online and billboard advertising placed before the blackout period and not altered during it (Ontario, Canada).
Many of these regulations would be considered government censorship beyond that which is tolerated for political speech in the United States. It is certainly true that autocratic leaders may use “combatting disinformation” as a convenient excuse for a crackdown on speech and expression. However, the broader point, for our purposes, is that social media platforms are subject to regulations worldwide and tolerate a good deal of regulation in order to enjoy the benefits of doing business in other countries. Therefore, they can certainly handle some government-imposed transparency requirements here in the United States.
V. Task Assignment and Action Across Multiple Jurisdictions
Who should implement the government regulations? In this Part, we briefly survey existing federal regulator capabilities, as well as identify cities and states that have started to act in the absence of federal government regulation.
A. Federal Agency Competencies and Task Assignment
Administrative agencies have a wide variety of missions, specializations, and clients. The FEC’s core mission is to “protect the integrity of the federal campaign finance process by providing transparency and fairly enforcing and administering federal campaign finance laws.” Its clients comprise voters (beneficiaries) and the candidates, parties, outside groups who finance messaging, and elected officials (regulated entities). Its position is complex because the regulated entities also control its funding. Perhaps as a result, the FEC’s mission statement is heavy on transparency and tepid on enforcement and administration. Whatever the cause, it moves slowly, is gridlocked by partisan balance, and its skills are no match for sophisticated disinformation agents.
FEC enforcement is slow. By law, the FEC is a bipartisan agency and can have no more than three out of six commissioners from one political party. Partisan gridlock frequently prevents enforcement actions from progressing. The FEC’s enforcement procedures require multiple rounds of voting: to proceed to an investigation; to allow the general counsel to conduct formal discovery and issue subpoenas; to determine whether there is “probable cause” to believe a violation has occurred; and to litigate the matter in court if a settlement cannot be reached. Resolving a matter can take years.
The FEC suffers from partisan gridlock. For a decade, Republican commissioners have resisted updating campaign finance regulations and enforcing the existing ones. Even as Facebook disclosed that Russian-linked trolls had purchased political ads on its platform during the 2016 election, the Republican FEC commissioners expressed worry that changing its policies would hinder “First Amendment rights to participate in the political process.”
The FEC’s jurisdiction and employee skills do not match those needed to combat disinformation. It is charged with enforcing the ban on foreign contributions and expenditures, though its jurisdiction only extends to civil penalties. Tracking down disinformation advertisers will require skills with money tracing. The FEC lawyers who conduct investigations are not expert in tracing money to its source using sophisticated computer-assisted tracing and data investigations. Even if it could escape partisan gridlock, the FEC is probably not the best fit for pursuing enforcement actions against disinformation advertising.
Our election security would be better served by placing investigation and enforcement capabilities in other agencies. One candidate is the U.S. Treasury’s Financial Crimes Enforcement Network, which has a core mission entirely related to financing, national security, and intelligence: “safeguard the financial system from illicit use and combat money laundering and promote national security through the collection, analysis, and dissemination of financial intelligence and strategic use of financial authorities.” Other candidates to aid in investigation and enforcement are the FBI’s Cyber Crimes Division and the FCC. The FCC is ostensibly the regulator of social media companies. It oversees the Political File for television ads but has shown no interest in regulating political advertising on social media.
B. The Role of State and Local Government
Regulation occurs at all levels of government. Individual cities and states control their own elections and can—and do—regulate the financing of those elections. Some states have already regulated disclaimers for online ads, for example, to provide more transparency than the federal regulatory regime requires. These state laws currently target the advertiser and not the platforms, but if the states are comfortable departing from the low bar set by the federal government in this realm, they should also be comfortable doing so to keep disinformation out of their state and local elections. In the same way that the platforms are already accustomed to dealing with multiple regulatory jurisdictions across the world, they can handle a diversity of regulations domestically. If an overarching regulatory framework that protects voters in all elections does not emerge soon, local and state governments will continue to create new frameworks to protect voters in their own elections from disinformation.
As of this writing, the main state-level action has been in New York and Maryland. New York’s Democracy Protection Act requires disclosure of all online ads, advertiser verification and registration with the NY Board of Elections, and an online archive. The State of Maryland has enacted legislation requiring the platforms to retain all ads and audiences. The California legislature is considering a similar bill. Washington State and the city of Seattle are enforcing a longstanding legal requirement that “commercial advertisers” disclose the “exact nature and extent” of ads, the “names and addresses” of ad purchasers, and specific payment details. The Seattle enforcement body is interpreting the ordinance to require copies of the ads in question and information about their intended and actual audiences—in other words, Seattle is requiring a repository very similar to the one we recommend for all jurisdictions. Los Angeles already requires candidates to store all political communications. Along with Chris Elmendorf, we have urged the City of San Francisco to adopt our model.
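To make concrete what a repository of the kind we recommend might store, the following is a minimal, hypothetical sketch in Python of a single archive record combining the elements the Washington and Seattle requirements name (copies of ads, purchaser names and addresses, payment details) with the targeting information we propose. Every field name here is our own illustration, not drawn from any statute, ordinance, or platform API.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AdArchiveRecord:
    """One entry in a hypothetical political-ad repository: a copy of
    the ad plus purchaser, payment, and audience information."""
    ad_id: str
    purchaser_name: str                  # "names and addresses" of ad purchasers
    purchaser_address: str
    amount_paid_usd: float               # specific payment details
    first_run: date
    ad_content_url: str                  # stored copy of the ad as run
    targeted_audience: list[str] = field(default_factory=list)  # intended audience segments
    actual_reach: int = 0                # actual audience size, if known

# An illustrative record with invented values:
record = AdArchiveRecord(
    ad_id="2018-0001",
    purchaser_name="Example PAC",
    purchaser_address="123 Main St, Seattle, WA",
    amount_paid_usd=500.0,
    first_run=date(2018, 10, 1),
    ad_content_url="https://archive.example/ads/2018-0001.png",
    targeted_audience=["age 18-34", "zip 98101"],
    actual_reach=12000,
)
print(record.purchaser_name, len(record.targeted_audience))
```

Publishing audience segments as coarse labels of this kind, rather than lists of individual users, is one way a repository could enable counter-speech while avoiding the privacy concerns noted above.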
Fake news is not news; it is native advertising designed to spread disinformation, and it belongs to the broader category of “disinformation advertising.” We have proposed a menu of ways for government to regulate online political advertising, including disinformation advertising. We believe that signaling matters and that the government must act, rather than standing by while Facebook slowly comes around to partial self-regulation and attempts to drag a couple of its competitors along. The platforms have too many conflicts of interest and are too politically vulnerable to be trusted to carry out comprehensive self-regulation. Within the constraints of the First Amendment, the government must regulate, and while the jurisprudence may need updating in light of the rapid change in our communications environment, our proposed regulations should pass muster under the current state of First Amendment jurisprudence.
Most of what scholars have studied and courts assume about the effects of campaign finance regulations developed with “offline” political advertisements as the motivating example. The underlying behavioral expectations around regulating political advertising online should hold in a broad sense, but the 2016 election drove home four features of online advertising that distinguish it from television advertising. Online political advertising is more likely to be “native” advertising, more likely to contain disinformation, more likely to be untraceable (preventing counter-speech), and much cheaper. Our current regulatory framework is insufficient to fully address disinformation advertising online.
Government must extend and update existing campaign finance transparency regulations for use online. Our proposals will facilitate enforcement, improve voter competence, and facilitate counter-speech. They have the ancillary benefit of reducing the attractiveness of online political microtargeting. It defies logic that political ads that run on television, cable, and radio remain accessible to the public long after they run, while online political advertising suffers such large transparency deficits.
Whether government can constitutionally require platforms to inoculate users or to provide opt-in and opt-out regimes are both open questions under the First Amendment. Of course, nothing (except their financial conflict of interest) is preventing the platforms from instituting these reforms without being required to by government. Direct content regulation should under no circumstances be performed or required by the government. If the platforms are unable or unwilling to reduce disinformation advertising in these ways, government cannot step in to do it for them.
Democracy in the United States is at a crucial point. A foreign regime attempted to destabilize our democracy using disinformation, and their attacks are ongoing. Opportunists, foreign and domestic, are also producing political disinformation to make a quick buck. Transparency for online political advertising will shed light on a dark process and enable enforcement against people attempting to sow conflict and discord.
Since we finalized this Article, the platforms have continued to battle political disinformation. None has provided audience identifiers to enable counter-speech. Nor have they joined together or formed a co-regulatory arrangement with the government. Some are attempting to “nudge” users, but none has provided an opt-in or opt-out for narrowly-targeted political content. As it stands, without co-regulation or comprehensive industry self-regulation, any positive reforms they make may be changed at any time, with no accountability.