The Limitations of Applying the Stored Communications Act to Social Media

The advent of social media has increasingly affected how people live and communicate. Millions of Americans use social media every day, and the numbers continue to grow. The motivation to post on social media is multifactorial and includes a desire to stay connected, find others with shared interests, change opinions, and encourage action, but posting also serves to boost one’s self-esteem and self-worth. However, posting on social media creates a serious risk of self-disclosure, with people revealing more intimate details online than they would in more traditional settings, often without appreciating the privacy issues and potential negative consequences of such disclosures.

As social media use continues to grow, its use as a tool in police investigations has also increased. Both the content and metadata associated with social media posts now routinely aid law enforcement authorities in finding patterns and, importantly, in establishing timelines in criminal investigations. Thus, there is an urgent need to revise the existing laws governing stored communications—to better adapt them to these new, evolving technologies and improve the legal framework governing online privacy rights. This Note argues that various aspects of the Stored Communications Act (“SCA”) are outdated and that thirty-six years after it was enacted, it is time for an update that reflects the evolving technological landscape.

The Note explores how the internet and social media use have evolved over the years and explains why the SCA no longer sufficiently protects consumers from government acquisition of their information. Particular emphasis is placed on the novelty of social media “Stories,” a technology unlike any that Congress could have imagined when it enacted the SCA in 1986. The Note examines the history of the SCA—with a focus on the Fourth Amendment, the Electronic Communications Privacy Act, and Supreme Court cases addressing the applicability of the Fourth Amendment to various forms of communication technology—before analyzing the SCA in detail, and looks at how law enforcement agencies can obtain these communications for use in criminal investigations. The Note concludes by arguing that the SCA needs to be revised to more adequately apply to today’s social media technologies, since their content and non-content data do not easily fit into the currently delineated categories. Revising the SCA would afford greater protection to consumer communication rights: not only would the SCA better apply to modern technology, but it would also be more readily applicable to future emerging media technologies.

INTRODUCTION

The rise of social media has significantly impacted the way people live and communicate, and the trend toward extensive social media use will likely only continue to grow. According to a Pew Research Center study, seven in ten Americans use social media. On average, people spend an estimated two and a half hours on social media platforms over the course of their day, and “[a] majority of Facebook, Snapchat and Instagram users say they visit these platforms on a daily basis.” More specifically, 69% of Americans use Facebook, 40% of Americans use Instagram, and 25% of Americans use Snapchat. These percentages represent a significant number of people—approximately 230 million, 133 million, and 83 million, respectively. Further, social media users make extensive use of the “Stories” feature, with one billion Facebook Stories being posted daily and five hundred million daily active users of Instagram Stories worldwide. The motivation to post on social media is multifactorial and includes a desire to stay connected, find others with shared interests, change opinions, and encourage action, but posting also serves to boost one’s self-esteem and self-worth. These desires create a serious risk of self-disclosure on social media, with people revealing more intimate details online than they would in more traditional settings, often without appreciating the privacy issues and potential negative consequences of such disclosures.
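
As a rough arithmetic check, these counts follow from applying the reported percentages to a total U.S. population of roughly 333 million (an assumption used here only for illustration):

\[
0.69 \times 333\,\text{M} \approx 230\,\text{M}, \qquad 0.40 \times 333\,\text{M} \approx 133\,\text{M}, \qquad 0.25 \times 333\,\text{M} \approx 83\,\text{M}.
\]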

Just as social media has become popular with the American public, it is also becoming increasingly utilized as a tool in police investigations. A 2012 survey showed that four out of five law enforcement agents used social media to gather intelligence during investigations. Not only do authorities look online for public information, but they also request access to private data directly from social media providers—which can help them build their criminal cases. For example, after finding photos and comments “glamorizing alcohol abuse” on a woman’s MySpace page, prosecutors were able to use them as evidence and advocate for a longer sentence for her vehicular manslaughter conviction. Since people are less inhibited when it comes to social media disclosures, they often share more details of their lives and more controversial opinions than they might in other forums. Once these formerly private thoughts are stored electronically, they become more easily accessible to investigators. Not only can the content of social media posts aid criminal investigations, but the related metadata alone “can help law enforcement authorities to find patterns, establish timelines and point to gaps in the data.” Therefore, social media metadata can be just as easily used to gather information on a suspect as the actual content of a post. Because the trend toward extensive social media use will likely endure, there is an urgent need to revise the laws governing stored communications—to better adapt them to these evolving technologies and improve the legal framework governing online privacy rights.

This Note argues that various aspects of the Stored Communications Act (“SCA”) are outdated and that thirty-six years after it was enacted, it is time for an update that reflects the evolving technological landscape. Part I of this Note explores how the internet and social media have evolved throughout the years and explains why the SCA no longer affords sufficient protections against government acquisition of consumer information. It discusses the evolution and expansion of social media platforms. Particular emphasis is placed on the novelty of social media Stories, which are unlike any technology that Congress could have imagined when it enacted the SCA in 1986.

Next, Part II examines the history behind the SCA to explain why the law was initially passed by Congress, with a focus on the Fourth Amendment, the Electronic Communications Privacy Act (“ECPA”), and Supreme Court cases addressing the applicability of the Fourth Amendment to various forms of technology. Part III analyzes the SCA in detail, focusing on the distinctions made between the different types of internet service providers (“ISPs”) and the different aspects of communications (content versus non-content data). It looks at how the content and non-content information—for example, metadata including a user’s identity, location, and other data not part of the main substance of the communication—can be obtained by law enforcement in the course of a criminal investigation.

Part IV argues that the SCA cannot be easily applied to social media today because social media does not fit within the categories delineated in the statute. Most importantly, it highlights how (1) social media content does not easily fit into either of the SCA’s currently defined categories because Congress could not have anticipated the advances in the technologies that exist today; and (2) “non-content” is not fully defined in the statute, and therefore lends itself to being more easily obtained in some situations as opposed to others. Finally, Part V suggests ways in which the SCA can be revised to more adequately apply to social media today and ultimately protect the right to privacy guaranteed by the U.S. Constitution.

I. INTERNET PRIVACY AND EVOLVING TECHNOLOGY

Americans are entitled to a right to privacy, which, on third-party ISPs such as Facebook and MySpace, is protected by the SCA. One problem with the SCA, however, is that it is dated. Although the internet was invented in the 1960s, it was not widely used until 1983, when computers on different networks were finally able to easily communicate with one another. When the SCA was enacted in 1986—just three years later—Congress had only limited experience with internet use and the potential privacy problems it could create, and had certainly not envisioned the extensive modern use of social media. This partially accounts for some of the weaknesses in this legislation and why the SCA is often difficult to apply to social media today.

A. Evolution of Social Media Platforms

Social media is defined as “forms of electronic communication . . . through which users create online communities to share information, ideas, personal messages, and other content.” This definition implies that social media could not exist without the internet, and that it depends on user-generated content. While it can be said that social media began in 1971, when the first email was sent, for many people social media really began in the late 1990s or early 2000s—years after the SCA was enacted—with the advent of messaging services such as AOL Instant Messenger and MSN Messenger. MySpace, arguably the “most popular and influential” of the early social media platforms, was launched in August 2003, and it allowed individuals to interact by commenting on each other’s profiles and sending private messages. It was the largest social media platform until Facebook, created in 2004, overtook it in 2008. Facebook has now grown to be the largest social media platform in the world with almost three billion monthly active users.

The number and types of social media platforms have grown extensively. Today, other prominent social media platforms include Instagram and Snapchat. Instagram was launched in 2010 and is a platform focused on sharing photos and videos. Snapchat was created in 2011 and gained its popularity from users’ ability to send each other pictures or videos (“Snaps”) that disappear shortly after being opened. These platforms allow users to share content with their friends, some of which they believe to be “private,” visible only to those friends they allow to see it. However, the widespread use of these platforms has created new issues with how the government can legally access and use these communications.

B. Emergence of Stories on Social Media Platforms

The continued evolution and development of new information sharing functions on social media platforms have created multiple issues concerning user privacy rights. For example, in 2013, Snapchat began to allow people to share “Stories” that are displayed for twenty-four hours before becoming inaccessible. Stories are a collection of individual Snaps that are played in the order in which they were created and allow users to share their entire day in a narrative manner. Today, Stories are also available on a variety of other social media platforms, including Facebook and Instagram. Part of the reason Stories are so successful is that they are only available temporarily, so people can post small daily updates or silly images that they only want visible for a short period of time. Therefore, users reasonably believe that their content will remain private and then disappear, becoming permanently inaccessible. Another reason for the success of Stories is that “social media [S]tories tend to be more spontaneous” than an individual’s carefully curated feed, making them feel more “casual.” As a result, these Stories can be extremely useful to law enforcement, as they can provide a less filtered view of an individual’s daily life and a timeline for the posted events. Thus, the challenge becomes balancing users’ right to privacy with the government’s need for access to information in order to investigate criminal offenses.

As it exists now, the SCA does not provide an adequate statutory framework for protecting communications on the various aforementioned social media platforms and, importantly, does not specifically address new advances in technology such as transient Snapchat and Instagram Stories. Since the SCA does not adequately protect individuals from unlawful searches of their private social media data, there is a need for Congress to reform the statute to accommodate evolving technology.

II. HISTORY OF THE STORED COMMUNICATIONS ACT

A. The Fourth Amendment

The Fourth Amendment to the Constitution protects “[t]he right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.” While the Amendment does not itself define “search,” the Supreme Court has held that “[a] ‘search’ occurs when an expectation of privacy that society is prepared to consider reasonable is infringed” and that “[i]f the inspection by police does not intrude upon a legitimate expectation of privacy, there is no ‘search.’ ” Thus, when it comes to physical searches, the meaning of the Fourth Amendment is well understood, whereas what constitutes a search in the digital context is more uncertain.

In Olmstead v. United States, the Supreme Court held that wiretapping did not violate the Fourth Amendment because the lack of physical trespass and seizure of anything tangible meant there was no search or seizure. Because the Court refused to expand the Fourth Amendment to protect telephone communications, the government could legally intercept citizens’ communications as long as they did not physically enter their homes. Olmstead was later overruled by Katz v. United States, indicating a change in ideology that afforded citizens protection of their privacy even without a physical search. Because Katz held that a physical intrusion was not necessary to invoke the Fourth Amendment, online searches—which lack physical intrusions—can still violate the Fourth Amendment.

B. The Electronic Communications Privacy Act

In light of these changing viewpoints on the applicability of Fourth Amendment protections, Congress enacted the ECPA in 1986 in an effort to adapt the doctrines of the Fourth Amendment to the various emerging technologies. The SCA, which provides privacy protections to stored electronic and wire communications, is one part of the ECPA. The ECPA was created with the purpose of protecting American citizens from “the unauthorized interception of electronic communications.” Congress recognized a need to “update and clarify Federal privacy protections and standards in light of dramatic changes in new computer and telecommunications technologies.” Rightly, Congress worried that due to these advances, personal communications could be intercepted by individuals who had no right to obtain them, and thus felt it was important to enact the ECPA. However, the scope of the ECPA did not fully anticipate the impact of the growth and extent of social media.

C. Supreme Court Cases Addressing the Fourth Amendment and Technology

More recently, the Supreme Court heard a series of cases that addressed the applicability of the Fourth Amendment to newer technologies. In each of these cases, the Supreme Court Justices grappled with applying the existing legal framework, indicating that it is time for a change. In her concurring opinion in United States v. Jones, Justice Sotomayor emphasized that in the absence of a physical trespass, a Fourth Amendment search occurs “when the government violates a subjective expectation of privacy that society recognizes as reasonable.” She also argued that “it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties” because “[t]his approach is ill suited to the digital age.” Justice Sotomayor’s statements highlight the need to reevaluate the applicability of the current legal framework to new technologies.

Two years later, in Riley v. California, Chief Justice Roberts acknowledged that because technology enables modern cell phones to contain and potentially reveal a wealth of private information, cell phones require greater privacy protections than would be necessary for a traditional search. Four years after Riley, the Court once again addressed warrantless searches in Carpenter v. United States, this time through the collection of cell phone records from a third party. Again, Chief Justice Roberts recognized the need for stronger privacy protections, stating that “a warrant is required in the rare case where the suspect has a legitimate privacy interest in records held by a third party,” such as the cell site records indicating the defendant’s location and movements. The government had acquired this information pursuant to a court order issued under the SCA, which was obtained based on evidence that the information might be relevant to the ongoing investigation. Finding this burden of proof—requiring only that the information might be relevant, which is lower than the probable cause required to obtain a warrant—to be unacceptable, the Court held that a warrant was required to access these cell site records. The differing standards of proof required to obtain warrants and court orders illustrate that the SCA troublingly affords lesser protections to individuals’ private information in some circumstances.

III. THE STORED COMMUNICATIONS ACT

The SCA was enacted to regulate electronic and wire communications that are stored on third-party servers and therefore governs the interaction between government investigators and administrators of third-party service providers. It was meant to expand the privacy protections afforded by the Fourth Amendment to digital content, clarifying its applicability. However, the SCA regulates retrospective communications, meaning it only applies when the government seeks to obtain information already in a provider’s possession. Additionally, the SCA only applies to two types of ISPs: providers of electronic communication service (“ECS”) and providers of remote computing service (“RCS”). An ECS is defined as “any service which provides . . . the ability to send or receive wire or electronic communications;” email and cell phone service providers would therefore be examples of ECS providers. An RCS, on the other hand, is defined as any service that provides to the public “computer storage or processing services by means of an electronic communications system.” Thus, once an email has been received but not deleted or a voicemail has been left in storage for later review, email and cell phone services are treated as RCS providers. Because ECS and RCS providers are afforded different levels of protection, it is important to be able to appropriately categorize modern ISPs to determine how much protection users’ communications will be given.

While transmitting communications and storing communications are different functions, this distinction matters less today, as many modern ISPs provide both services. In 1986, however, Congress was concerned about businesses such as hospitals and banks using remote computing services to store records and process data. Thus, it created the RCS category to address this concern. Generally, the SCA prohibits disclosure of both content and non-content data of customer communications, but it provides exceptions to this rule. These exceptions, which are discussed below, are divided between § 2702, which regulates voluntary disclosure, and § 2703, which regulates required disclosure.

A. Disclosure of the Contents of Social Media Posts

1. Voluntary Disclosure of Customer Communications

Section 2702(b) details the nine circumstances in which a provider may voluntarily disclose the contents of a customer’s communications. These exceptions include allowing the contents to be disclosed “to an addressee or intended recipient of such communication” and “with the lawful consent of the originator or an addressee or intended recipient of such communication.” For the most part, the communications can be disclosed only with the permission of the sender or intended recipient, which protects the user, or without their permission in the case of an emergency, such as a missing child. Therefore, while individuals are generally protected against voluntary disclosures of their private information by ISPs, it does not mean that the government is unable to obtain this information; it can be compelled through required disclosure under § 2703.

2. Required Disclosure of Customer Communications

Should the government decide that obtaining an individual’s communications is essential for building a criminal case against them, the disclosure of those communications is governed by § 2703. This is where the largest privacy threat to social media users lies, as ISPs are then legally required to turn over the contents of customer communications to law enforcement. How the government goes about getting this information under § 2703, however, depends on a variety of factors, beginning with whether the ISP is categorized as an ECS or an RCS.

If the government requires information from an RCS, there are three ways for it to compel disclosure. First, the government can compel disclosure without notifying the customer if “the governmental entity obtains a warrant issued using the procedures described in the Federal Rules of Criminal Procedure (or, in the case of a State court, issued using State warrant procedures . . . ) by a court of competent jurisdiction.” Alternatively, if the government provides notice to the customer, it can compel disclosure by using either (1) “an administrative subpoena authorized by a Federal or State statute or a Federal or State grand jury or trial subpoena;” or (2) “a court order . . . [obtained] under subsection [2703](d).” Warrants place a higher burden on the government in order to obtain the requested information, while subpoenas and court orders are more easily obtainable. Thus, allowing the government to choose the second or third method to avoid having to obtain a warrant shifts the burden to the individual, who then must object to the subpoena or court order to protect their private information.

Required disclosure from an ECS, on the other hand, is even more complicated because it also turns on the age of the communication. If the communication is 180 days old or less, the government may only compel disclosure “pursuant to a warrant issued using the procedures described in the Federal Rules of Criminal Procedure (or, in the case of a State court, issued using State warrant procedures . . . ) by a court of competent jurisdiction.” If the communication is more than 180 days old, however, the government can compel disclosure with either a warrant or, if prior notice is provided, a subpoena or court order. In effect, this makes it easier for investigators to obtain older communications, with no explanation as to why the 180-day mark is significant; thus, in this situation, users are arbitrarily afforded less protection.
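
Although the analysis here is legal rather than technical, the statutory branching just described can be summarized in a short sketch. The following is a minimal model of § 2703’s content-disclosure logic as characterized in this Part, offered for clarity only; the function and labels are invented, and the sketch is an illustrative paraphrase rather than an authoritative restatement of the statute.

```python
from enum import Enum

class Provider(Enum):
    ECS = "electronic communication service"
    RCS = "remote computing service"

def sufficient_process(provider: Provider, age_days: int, prior_notice: bool) -> set:
    """Instruments that suffice to compel disclosure of *content* under
    18 U.S.C. § 2703, as characterized in this Part (illustrative only)."""
    if provider is Provider.ECS and age_days <= 180:
        # Communications held by an ECS for 180 days or less: warrant only.
        return {"warrant"}
    if prior_notice:
        # Older ECS communications and all RCS communications: with prior
        # notice to the customer, lesser process suffices.
        return {"warrant", "subpoena", "court order under § 2703(d)"}
    # Without notice, a warrant is required in every case.
    return {"warrant"}

# A 200-day-old message held by an ECS can, with prior notice, be reached
# with a mere subpoena; at 100 days old, only a warrant would suffice.
assert "subpoena" in sufficient_process(Provider.ECS, 200, prior_notice=True)
assert sufficient_process(Provider.ECS, 100, prior_notice=True) == {"warrant"}
```

The sketch makes the arbitrariness visible: the same message crosses from the strongest protection to the weakest on its 181st day in storage.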

B. Disclosure of the Non-Content Data of Social Media Posts

1. Voluntary Disclosure of Customer Records

Section 2702(a)(3) prohibits ECS and RCS providers from “divulg[ing] a record or other information pertaining to a subscriber to or customer of such service . . . to any governmental entity.” However, § 2702(c) provides an exception to this rule: “A provider . . . may divulge a record or other information pertaining to a subscriber to or customer of such service . . . as otherwise authorized in section 2703.” Therefore, while the SCA prevents ECS and RCS providers from voluntarily disclosing non-content information to governmental entities, as with content, the government can still obtain the information by utilizing § 2703’s required disclosure provision.

2. Required Disclosure of Customer Records

Section 2703(c)(1) states that a governmental entity can require an ECS or RCS provider to disclose a record or other information when the governmental entity “obtains a warrant issued using the procedures described in the Federal Rules of Criminal Procedure (or, in the case of a State court, issued using State warrant procedures . . . ) by a court of competent jurisdiction”; “obtains a court order”; “has the consent of the subscriber or customer”; “submits a formal written request relevant to a law enforcement investigation concerning telemarketing fraud”; or “seeks information” under § 2703(c)(2). Section 2703(c)(2) allows ECS and RCS providers to disclose the name; address; telephone connection records (or records of session times and durations); length of service and types of service utilized; subscriber number; and “means and source of payment” when the governmental entity “uses an administrative subpoena authorized by a Federal or State statute or a Federal or State grand jury or trial subpoena or any means available under [§ 2703(c)(1)].” Again, governmental entities are able to obtain varying amounts of private information about customers from ECS and RCS providers with either a warrant or a court order, sometimes even with only a subpoena. Even more troubling, § 2703(c) does not require the government entity receiving the records or information to provide notice to the customer. Thus, subscribers’ privacy may be infringed without their knowledge, leaving them fewer opportunities to protect themselves.
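
The tiering just described can likewise be sketched compactly. The following is an illustrative paraphrase of § 2703(c) as characterized in this section, not a restatement of the statute; the record labels and function name are invented for clarity.

```python
# Basic subscriber information enumerated in § 2703(c)(2), which a mere
# administrative, grand jury, or trial subpoena suffices to compel.
BASIC_SUBSCRIBER_RECORDS = {
    "name",
    "address",
    "telephone connection records / session times and durations",
    "length of service and types of service utilized",
    "subscriber number or identity",
    "means and source of payment",
}

def minimum_process_for_record(record: str) -> str:
    """Minimum compulsory process for *non-content* records under § 2703(c),
    as characterized above. Notably, no avenue requires customer notice."""
    if record in BASIC_SUBSCRIBER_RECORDS:
        return "subpoena"
    # All other records or information pertaining to a subscriber.
    return ("warrant, § 2703(d) court order, customer consent, or a formal "
            "written request (telemarketing fraud investigations)")
```

The sketch makes plain that basic subscriber information travels on the weakest available process, and that none of the statutory avenues carries a notice requirement.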

IV. SOCIAL MEDIA AND THE STORED COMMUNICATIONS ACT

Prior to 2010, no court had specifically addressed whether social media platforms fell within the scope of the SCA. In order for the SCA to apply to social media platforms, these ISPs must be considered either ECS or RCS providers. The District Court for the Central District of California was the first to examine whether social media platforms were ECS or RCS providers in Crispin v. Christian Audigier, Inc. The district court held that because the three social media platforms in question provided either private messaging or email services, they qualified as ECS providers. This demonstrated that the SCA can apply to social media platforms and can, therefore, be used to control the release of social media communications. While Crispin made it clear that Facebook, Instagram, and Snapchat would be governed by the SCA, it remains unclear whether these platforms qualify as an ECS, an RCS, or both, in the context of specific functions. As a result, which regulations should be applied when the government seeks to obtain users’ content (or non-content) from social media platforms during a criminal investigation remains uncertain.

A. Obtaining Contents of Social Media Posts

1. Obtaining Contents from Private Social Media Accounts

The SCA only applies to communications that are not “readily accessible to the general public.” Thus, it is important to understand how a user’s privacy settings on social media platforms can affect the applicability of the SCA. Facebook, Instagram, and Snapchat each offer features that let users limit who can see the content posted to their accounts, in some instances allowing users to restrict the visibility of individual posts, as well as the ability to block other users from viewing their content. Accordingly, should a user want their social media content to be private, they have the ability to set those limits using the social media platform settings.

In Crispin, the court held that “[u]nquestionably, the case law . . . require[s] that [user content] be restricted in some fashion . . . [to] merit protection under the SCA.” Therefore, if a user sets their content visibility to anything other than public, it qualifies as private. This was confirmed in Ehling v. Monmouth-Ocean Hospital Service Corp., in which the District of New Jersey found that “when users ma[d]e their Facebook wall posts inaccessible to the general public, the wall posts [we]re ‘configured to be private’ for the purposes of the SCA.” Similarly, in Facebook v. Superior Court (Hunter), the Supreme Court of California held that social media posts that were configured to be public fell within § 2702(b)(3)’s lawful consent exception, which allows ISPs to disclose a user’s content with the user’s consent. By this logic, if a user’s content is visible to the public, they are consenting to the RCS provider’s disclosure of their content. The SCA, therefore, does not protect social media content that is posted publicly because consent is an exception to the prohibition of voluntary disclosure under § 2702. The Hunter court also held that “restricted communications sent to numerous recipients cannot be deemed to be public—and do not fall within the lawful consent exception.” In other words, even if social media communications are limited to a large group of people, that does not mean these posts are considered public. According to the Ehling court, “the critical inquiry is whether Facebook users took steps to limit access to the information . . . . Privacy protection provided by the SCA does not depend on the number of Facebook friends that a user has.” By restricting one’s content with privacy settings, a social media user can therefore take advantage of the SCA’s privacy protections and make it more difficult for the government to obtain their content—by requiring them to get a warrant, for example—for use in a criminal case, but not all users are that savvy or careful.

Based on this jurisprudence, it should not matter how broad the user’s privacy settings are—as long as the individual specifically took steps to limit who can view their content, it becomes protected from voluntary disclosure. This is not foolproof, however, because, as discussed earlier, disclosure may still be permitted if authorized by § 2703. This remains problematic because, as Justice Sotomayor stated in Jones, a Fourth Amendment search occurs when the government violates a “subjective expectation of privacy[,]” and one could argue that when an individual invokes privacy settings, they reasonably expect that their content will be kept private. If obtaining individuals’ social media data constitutes a search, then under Chief Justice Roberts’s logic in Carpenter, a warrant should be required because social media content can contain a great deal of information about a person’s day, including their location and movements, like the cell site records in Carpenter. Therefore, it stands to reason that all searches of private social media content should require a warrant, which is not currently the case under the SCA.

2. Social Media: Does Disclosure of Its Content Follow ECS or RCS Regulations?

As previously discussed, the SCA has different standards for an ECS than for an RCS—the government can more easily obtain communications from an RCS, whereas obtaining communications from an ECS depends on how long ago the communications were created, thus emphasizing the importance of properly categorizing each social media platform. In Crispin, the court found that social media platforms can be characterized differently depending on the state of the messages: before the messages have been opened, ISPs operate as ECS providers, but once the messages have been opened and retained, the ISPs operate as RCS providers. This creates significant complexity and results in variability in how the SCA is applied to each social media platform, given the different standards between RCS and ECS providers and the difficulty in determining which standard will apply.

The Crispin court acknowledged that Facebook wall posts and MySpace comments “present a distinct and more difficult question” as to whether the social media platforms are acting as ECS or RCS providers. On one hand, the court stated that Facebook and MySpace were ECS providers with respect to wall posts and comments because they were being held for “backup purposes once read.” Here, the court relied on Snow v. DIRECTV, Inc., in which a district court found that because electronic bulletin board services (“BBS”) did not have temporary, intermediate storage, they were actually storing the information for backup purposes and thus were an ECS. The court analogized Facebook and MySpace wall posts and comments to BBS, concluding that these posts and comments were also being stored for backup purposes since they were not deleted after being read, and thus the social media platforms should be considered ECS providers.

On the other hand, the court also said that Facebook and MySpace could be considered RCS providers with respect to wall posts and comments because they maintained these communications not only for storage, but also for display purposes, as users wanted their friends to be able to see the communications. The court relied on Viacom International Inc. v. YouTube Inc. in this instance, analogizing Facebook wall posts and MySpace comments to private YouTube videos. In Viacom, the court found that YouTube was an RCS provider because it stored videos on behalf of its subscribers. The Crispin court concluded that Facebook wall posts and MySpace comments, like YouTube videos, can be stored for the purpose of allowing other users to view the content, which would make Facebook and MySpace RCS providers as well. Ultimately, the court did not rule on whether Facebook and MySpace were ECS or RCS providers with respect to wall posts and comments, remanding the case for further development. This complexity demonstrates how ill-suited the SCA currently is to protect individuals’ privacy on social media platforms, as there is no clear and consistent way to apply it. Further, the arguments made in Crispin emphasize just how arbitrary the distinction between an RCS and an ECS provider can be when it comes to social media platforms. Because social media platforms do not fit neatly into either category, courts can come to different conclusions as to how these ISPs should be regulated, thus leading to uncertainty regarding the protection of privacy rights of social media users. This arbitrariness can be explained by the fact that the SCA was written in 1986, as articulated in Konop v. Hawaiian Airlines, Inc.:

[T]he ECPA was written prior to the advent of the Internet and the World Wide Web. As a result, the existing statutory framework is ill-suited to address modern forms of communication like [social media platforms]. Courts have struggled to analyze problems involving modern technology within the confines of this statutory framework, often with unsatisfying results.

The Konop court’s words make clear that the SCA has become outdated because Congress was unable to foresee the privacy problems that would arise from communication technologies that did not yet exist. This is further supported by the fact that the Crispin court was unable to make a decision regarding the status of Facebook and MySpace with respect to wall posts and comments, given the limitations in clearly and consistently applying the SCA to communications on the various social media platforms. Courts’ inability to readily place certain features of social media platforms into existing categories highlights the inadequacy of the SCA in affording privacy rights to users of prevalent modern technologies. It also supports the conclusion that now is the time to amend the SCA, clarifying its applicability and affording stronger protections for various types of social media communications by creating more appropriate categories into which these ISPs can be classified.

3. Challenges in Applying SCA Content Disclosure to Stories

Stories are a relatively new feature of social media platforms, having only been in existence since 2013. As with the aforementioned difficulty in generally applying the SCA to social media platforms and user content, Stories, which disappear within twenty-four hours, provide another example that highlights the limited applicability of the SCA’s current statutory framework to modern communication technologies. From a privacy perspective, the good news is that most of these posts are removed from ISPs’ servers as soon as the twenty-four-hour period is up. Since the content is no longer on the social media platform’s server, it is not possible for ISPs to disclose this content—even pursuant to a court order, subpoena, or warrant—because the content would no longer be in storage. However, concerns remain for any content that remains saved on the server, which might still be obtainable for criminal investigations under the current SCA.

In addition, both Facebook and Instagram Stories can be saved in Story Archives, and Snapchat Stories can be saved in Memories. This content, therefore, could feasibly be disclosed to the government under the SCA if the proper exceptions and procedures were met. Because part of the appeal of Stories is that posts are only available for twenty-four hours, users likely do not think about how long their content is maintained in storage. Rather, many incorrectly assume that the content has been permanently deleted when the twenty-four hours expire. The problem here is that if Stories are governed by current ECS rules, once Stories are more than 180 days old, they can be obtained with notice and a subpoena or court order. This goes against the intent underlying Chief Justice Roberts’s opinion in Carpenter because one could similarly argue that individuals have a reasonable expectation of privacy in Stories that are, by that point, visible only to themselves, yet those Stories can, in fact, still be obtained with lesser protections than a warrant. Therefore, even though the SCA was intended to extend the protections of the Fourth Amendment to online communications, it currently does so unsuccessfully, particularly in the case of Stories.

Because Stories are so new, there have not been many cases addressing how the SCA applies to them. In Facebook, Inc. v. Pepe, the District of Columbia Court of Appeals considered an allegedly sent “disappearing Instagram ‘Story’ ” for the first time. The court found that the Instagram Story was content under the SCA, and that because James Pepe was an “addressee or intended recipient” under § 2702(b), Facebook was permitted to disclose any Instagram Stories that were responsive to the subpoena. However, this addressee or intended recipient exception would not apply if the government were seeking disclosure in a criminal case, as the individual who posted the Story would likely not have invited a government official to view their private Facebook, Instagram, or Snapchat Story. Thus, the inquiry then shifts to consider whether social media platforms are acting as RCS or ECS providers when it comes to Stories.

One could analogize Stories to Facebook wall posts and MySpace comments when applying the SCA to social media Stories. Following the Crispin court, this would mean that ISPs offering Stories could be considered either RCS or ECS providers. The first argument is that Facebook, Instagram, and Snapchat act as ECS providers when individuals post Stories because the individual is “sending” the electronic communication to the people who they have allowed to view it. This would follow from analogizing Stories to wall posts or comments that are in “backup” storage. As per Crispin, if the messages are being stored on the servers solely because they were not deleted, then they are in backup storage and, thus, should be governed by ECS rules. Unfortunately, users do not usually think about deleting this type of content because they know that once it disappears, no one else can see it. However, what they often fail to realize is that these communications are then considered to be in backup storage, meaning they can still be disclosed to the government under the SCA.

Alternatively, Facebook, Instagram, and Snapchat could be considered RCS providers because they are simply storing the Stories on the server for others to view. In Crispin, wall posts were compared to YouTube videos that were stored for the purpose of allowing other users to view the content. Arguably, Stories are also stored for the purpose of allowing others to view them, not simply because they have not been deleted. Indeed, even though a Story disappears after twenty-four hours, the user can reshare the content from their Archive, similar to changing a YouTube video’s settings to modify who can view it at any point in time.

On the other hand, Stories could also be analogized to private messages, which further complicates the analysis of SCA protections, particularly when considering the reasoning in Crispin, which stated that when a message is unread, the ISP acts as an ECS, but once the message has been read, the ISP then acts as an RCS. Stories can be viewed by whomever the user allows, depending on their privacy settings, meaning that at any given point in time, the Story might have been viewed by a portion, but not all, of the potential audience. Thus, is the Story considered “unread” until all possible viewers have seen it, or does it switch to being “read” once at least one individual has viewed it? Alternatively, a Story could be “sent” while it is available for viewing by others but then switched to “read” once the twenty-four hours are up.

Whether a Story is considered an ECS or an RCS function directly impacts how law enforcement agencies can obtain its contents, since the content of a Story would only be protected by a warrant requirement if it were governed by ECS rules and 180 days old or less. Otherwise, Stories could be obtained with either a subpoena or a court order, making them easier to acquire for criminal investigations. These types of questions have not yet been adequately addressed by courts, and because Stories have qualities of both RCS and ECS communications, it is not possible to consistently predict whether RCS or ECS rules should govern in individual cases. The difficulty in determining how to appropriately apply the SCA to Stories supports the need for the proposed changes to the SCA.

B. Obtaining Non-Content Data From Social Media Posts

1. Applying SCA Non-Content Disclosure to Social Media Platforms

Disclosure of non-content data stored by social media platforms is different from disclosure of content in that non-content disclosure does not depend on whether the provider is an ECS or an RCS. While content is defined as including “any information concerning the substance, purport, or meaning of that communication,” non-content is not well-defined. The SCA does, however, define some non-content data that can be obtained with only a subpoena, including the user’s name, address, and telephone number. This stems from the third-party doctrine, which states “the Fourth Amendment does not prohibit the [government from] obtaining . . . information revealed to a third party.” This creates an exception to the reasonable expectation of privacy that is protected by the Fourth Amendment: once an individual voluntarily shares information with a third party, they lose any reasonable expectation of privacy in that information. It can be assumed, however, that non-content data is any information that is not the main substance of the communication, including the metadata incorporated in the communication, for example, the user’s identity, location, payment information, and telephone number. This is problematic because under § 2703(c), non-content data can sometimes be easily obtained by the government with a court order. Because the SCA does not explicitly state which types of non-content data can be obtained with a court order and which require a warrant, a lot of discretion is left to police officers and the courts.

“Some non-content information, particularly associational information and location information, is inherently expressive, capable of directly exposing intimate details of an individual’s life.” In the age of social media, people are constantly posting images and videos online; when people take photos, for example, the image files contain metadata that includes the time and date the image was captured, along with the exact location where it was taken. Facebook, Instagram, and Snapchat collect a lot of information about an individual’s daily life, including sensitive location information. Like wireless providers, Facebook, Instagram, and Snapchat are all able to collect individuals’ locations from Bluetooth signals, wireless networks, and cell towers. Additionally, these platforms also store information such as the location, date, and time at which the photograph or file was created. This information could be used in a criminal investigation to pinpoint the time and place where a crime occurred or where a suspect was located at a particular time, making it highly valuable for the government when charging someone with a crime. Thus, it is important to afford this information the highest level of protection.
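
To make the point concrete, the following is a minimal sketch, using the Pillow imaging library, of how the time and location embedded in an ordinary photo file can be read programmatically. The file name and output are hypothetical, and many platforms strip or separately ingest this metadata at upload, so the sketch illustrates what is embedded in the file itself rather than any particular platform’s internal practice.

```python
from PIL import Image              # Pillow, an illustrative library choice
from PIL.ExifTags import GPSTAGS

IFD_EXIF = 0x8769                  # pointer to the EXIF sub-directory
IFD_GPS = 0x8825                   # pointer to the GPS sub-directory
TAG_DATETIME_ORIGINAL = 36867      # when the photograph was taken

def to_decimal_degrees(dms, ref):
    """Convert EXIF degrees/minutes/seconds rationals to signed decimal degrees."""
    d, m, s = (float(x) for x in dms)
    degrees = d + m / 60 + s / 3600
    return -degrees if ref in ("S", "W") else degrees

def photo_metadata(path):
    """Return the capture timestamp and GPS coordinates embedded in an image, if any."""
    exif = Image.open(path).getexif()
    taken = exif.get_ifd(IFD_EXIF).get(TAG_DATETIME_ORIGINAL)
    gps = {GPSTAGS.get(t, t): v for t, v in exif.get_ifd(IFD_GPS).items()}
    coords = None
    if "GPSLatitude" in gps and "GPSLongitude" in gps:
        coords = (
            to_decimal_degrees(gps["GPSLatitude"], gps.get("GPSLatitudeRef", "N")),
            to_decimal_degrees(gps["GPSLongitude"], gps.get("GPSLongitudeRef", "E")),
        )
    return taken, coords

# Hypothetical usage: photo_metadata("story_photo.jpg") might return
# ("2022:05:14 21:03:12", (40.7359, -73.9911)) -- a precise time and place.
```

A few lines of code thus recover exactly the kind of time-and-place detail that makes such metadata so valuable to investigators.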

Because social media is a newer phenomenon, most courts have yet to address the issue of obtaining non-content data, which can include time and location information from a social media platform. In In re Application of the United States of America for an Order Pursuant to 18 U.S.C. § 2703(d), a magistrate judge ordered Twitter to turn over information pertaining to multiple subscribers; this information included “records of user activity . . . including the date [and] time” as well as “non-content information associated with the contents of any communication . . . [including] IP addresses.” The Virginia district court held that because § 2703(d) requires the government to show only “reasonable grounds” that the records sought are relevant and material to an ongoing criminal investigation, and because the third-party doctrine applies to IP address information, the court order was valid. The court differentiated IP addresses from beeper monitoring because IP addresses are shared with all internet routers when a user accesses Twitter, while tracking a beeper allowed the government to monitor inside a private residence, which was not otherwise open for visual surveillance. While this case clarified what one district court believed the SCA means for IP addresses, it does not help to clarify how the SCA applies to exact location information such as the metadata embedded in Facebook, Instagram, and Snapchat posts.

However, courts have addressed the issue of whether obtaining location information from a wireless carrier constitutes a search under the Fourth Amendment. In Carpenter, the Court held that a court order obtained under § 2703(d) was not a permissible means of acquiring a defendant’s historical cell-site location information (“CSLI”) from a wireless carrier. The Court found that individuals have a reasonable expectation of privacy in their physical location, and when the government accessed CSLI from the wireless carriers, it violated the defendant’s reasonable expectation of privacy. As a result, the Court held that the government “must generally obtain a warrant supported by probable cause” before acquiring records containing location information.

Because the SCA was intended to extend Fourth Amendment rights to online communications, it might be reasonable to infer that obtaining location information from social media platforms would also require a warrant supported by probable cause. However, the Carpenter Court articulated that its decision was “narrow” and that it does not “address other business records that might incidentally reveal location information,” meaning that the metadata contained in the photos and videos posted on social media may not require the government to obtain a warrant, which could compromise people’s privacy rights. As Justice Sotomayor pointed out in her concurrence in Jones, “it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy” in the information they disclose online. “This approach is ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks.” Justice Sotomayor is right: in the digital age, individuals post a wealth of information online that they expect—as a result of their privacy settings—to be visible only to those they choose. Thus, it is time to reconsider the notion that, because individuals reveal this information to third-party social media platforms, they have no “reasonable expectation of privacy” and the government should be able to easily obtain their locational information.

2. Challenges in Applying SCA Non-Content Data Disclosure to Stories

Stories provide users with the unique opportunity to create information that can qualify as both content and non-content data at the same time. When an individual posts their Story online, they are able to add “stickers,” which can indicate to those viewing the Story the exact location of the individual and the date and time the Story was posted, among other things. Thus, when a user posts a location in their social media Story, it is rendered as part of a graphic. In this sense, it would seem to be content because it is part of the image. On the other hand, since it is a location, Instagram will likely also collect that information separately from the content. In this situation, then, the location information would be both content and non-content data at the same time; how should a court determine whether a subpoena, court order, or warrant is required to compel the information from Instagram? Unfortunately, this is unclear under the current statutory framework of the SCA.

Michael Morell, a former deputy director of the CIA, admits that “[t]here’s a lot of content in metadata” and that “[t]here’s not a sharp difference between metadata and content . . . It’s more of a continuum.” If even a former senior intelligence official accepts that it is difficult to distinguish between content and non-content data, then the SCA should not differentiate between the two and allow weaker protections for non-content data when, in fact, it may reveal information just as sensitive as content. Because the SCA was created prior to the creation of social media, it does not account for the overlap in the types of information that can be obtained from non-content and content data. This is another reason why the SCA needs to be rewritten: to clarify and remove the ambiguity of how sensitive non-content information can be disclosed.

V. REVISING THE STORED COMMUNICATIONS ACT

A. Requiring Warrants for All Compelled Content Disclosures

While the SCA provides some protections for private communications on ISPs, the statute needs to be updated and better tailored so that it is applicable to all the various nuances of modern technologies. Currently, the strongest protections are afforded to unretrieved emails and other temporarily stored files that are 180 days old or less. All other communications can be more easily obtained with a subpoena combined with prior notice. Under the Federal Rules of Criminal Procedure, a subpoena “may order the witness to produce any books, papers, documents, data, or other objects the subpoena designates.” This is even less protective of an individual’s right to privacy than having to obtain a court order, which requires that the “governmental entity offers specific and articulable facts showing that there are reasonable grounds to believe that the contents of a[n] . . . electronic communication . . . are relevant and material to an ongoing criminal investigation.” To obtain a warrant, on the other hand, there must be “probable cause to search for and seize a person or property.” This places a heavier burden on the government and thus ensures that social media users are not losing their right to privacy without stringent protections, which should be the goal of any such legislation.

Because the line between defining a social media platform as either an ECS provider or an RCS provider is so unclear, applying existing laws can lead to variable results that negatively impact users’ privacy rights. As previously discussed, under the SCA, the same ISP can be treated as an ECS for some functions, but an RCS for others; this leaves users with inconsistencies in the treatment of their personal communications, which can infringe on their privacy. Importantly, whether a social media platform is characterized as an ECS or an RCS has a direct impact on the stringency of the procedures that law enforcement must follow to obtain the content. Further, although the SCA does not specifically differentiate between public and private social media accounts, it was intended to cover only private communications, and its application to social media inadvertently creates counterintuitive privacy protections. For example, in Crispin, the court held that opened private messages on Facebook and MySpace were covered by RCS rules, while ECS rules covered restricted wall posts and comments. Effectively, this meant that wall posts and comments, which can arguably be seen by all of an individual user’s friends, were afforded greater protections than private messages, which are typically only seen by the sender and the intended recipient. This is counterintuitive because it means that less private communications receive greater protection than more private communications.

Consequently, there is a clear need for Congress to reform the SCA now, and as a first step, require warrants for all communications, regardless of whether an ISP is characterized as an RCS or ECS. Warrants provide the strongest protection for social media users, and when it comes to individual liberties, the government has an obligation to preserve these liberties with the broadest legal protections possible. This is especially important considering the case law, which argues that individuals have a right to be protected under the SCA if they took steps to protect their content. By requiring warrants for the disclosure of all social media communications, the SCA would be able to provide the strongest statutory framework to protect users’ privacy and prevent the unjust use of their social media content against them in criminal court.

B. Removing the Differentiation Between RCS and ECS

The previously highlighted variability and instability in characterizing social media platforms as RCS providers in some instances and ECS providers in others have become even more problematic with the recent emergence of social media Stories. If Stories are analogized to emails or private messages—because the user posts the Story with the intention that others will see it and it will be gone shortly after the message is read—they would be governed by ECS rules, similar to the private messages in Crispin. Alternatively, Stories considered analogous to YouTube videos—because they are stored for only a limited number of people to view—would be governed by RCS rules. The courts have yet to address whether Stories should be governed by ECS or RCS rules, but there are arguments for both sides because Stories do not fit neatly into either category.

Because the SCA was not created to accommodate these newer technologies, it would be more effective to revise the SCA categories rather than attempting to fit new technologies into the existing categories. Since social media platforms offer various functions that involve both message transmissions and electronic storage, the language of the SCA needs to be amended to eliminate the distinction between RCS and ECS altogether. Orin Kerr suggested accomplishing this by providing that the SCA applies to “network service providers,” a category that would encapsulate the current definitions of ECS and RCS, and then applying the SCA’s rules to the different types of files held by those providers. This would alleviate the difficulty of determining which rules apply to social media providers in different situations and would further clarify privacy rights for users by establishing when and how their content is protected. Importantly, this would also provide consistency and give users a better understanding of their rights online, which may, in turn, influence what information they choose to post on social media—especially if they know it could later be used against them in a criminal case. Without this clarity, social media users do not know whether their content is protected and what steps they need to take to protect their private communications, which may, consequently, have a “chilling effect” on their conduct.

C. Requiring Warrants for All Compelled Non-Content Data Disclosures

As technology has grown and evolved, the distinction between content and non-content data has continued to blur. This is particularly true when individuals include the date, time, and location of their posts in the actual post or Story. When Facebook, Instagram, and Snapchat collect that information, it becomes non-content data, some of which can be disclosed pursuant to only a subpoena, and some of which requires either a court order or a warrant. One way to address this issue would be to require warrants for all compelled disclosures of non-content data. This is in line with the suggestion to require warrants for all compelled disclosures of content.

By requiring warrants for compelled disclosures of non-content data, criminal investigators would have to show probable cause before obtaining the information, which is the highest standard available. In Carpenter, the Court acknowledged that individuals have a reasonable expectation of privacy regarding their physical location. Unlike cell-site records, which are generated every time a phone pings a cell tower, the location information held by social media platforms is collected only when individuals post. Therefore, it is currently unclear whether location information would always be protected by a warrant under the SCA.

While it is true that some non-content data records reveal more than others, advances in metadata analysis have shown that assembling disparate pieces of metadata can lead to larger discoveries. Thus, although one might argue that it would be better to specify which types of records require a subpoena, which require a court order, and which require a warrant, this practice would be difficult to implement consistently. Rewriting the SCA to guarantee that such non-content metadata receives the highest protection available would ensure that social media users are afforded their Fourth Amendment rights.

D. Removing the Distinction Between Content and Non-Content Data

Perhaps a simpler solution to this problem of differentiating between content and non-content data would be to eliminate the distinction altogether. The distinction comes from Ex parte Jackson, in which the Court held that “a distinction is to be made between different kinds of mail matter,—between what is intended to be kept free from inspection, such as letters . . . and what is open to inspection, such as . . . printed matter, purposely left in a condition to be examined.” The Court held that sealed mail may be opened and examined only under a warrant because such an examination would otherwise constitute an illegal search. Thus, content is what is “intended to be kept free from inspection,” as it is sealed away, and non-content data is what is left in the open.

When the Court first created this distinction in Ex parte Jackson, it made sense to differentiate between the information on the outside of an envelope, which could be openly seen by others, and the content that was stored within the envelope. However, applying that logic to social media no longer makes sense because the distinction between content and non-content data has become so blurred. For example, when a user posts a picture of their dog on their Instagram profile, they can include a geotag indicating where the photograph was taken. Is the location non-content data because it is not the “substance” of the post, or is it content because the user included it to describe where the picture was taken? If the latter, the location would arguably be content.

If the same information can be considered both content and non-content, it does not make sense to allow law enforcement to obtain the same information with lesser protections solely because they can argue that it is non-content data. Eliminating the distinction between non-content and content data would remove the uncertainty and enable social media users to be confident that all aspects of their posts would be protected.

CONCLUSION

The Ninth Circuit had it right when it said, “until Congress brings the laws in line with modern technology, protection of the Internet and websites such as [social media platforms] will remain a confusing and uncertain area of the law.” Social media platforms, as a whole, do not fit nicely into the existing ECS and RCS categories that Congress created when drafting the SCA in 1986. Some functions of social media platforms lead to the platform being treated as an ECS, while other functions lead to the platform being treated as an RCS. In other instances, it is difficult to determine whether a specific function indicates that the social media platform is acting as an ECS or an RCS. As a result, the SCA can be inconsistently applied to disclosures of social media content. Most importantly, certain functions on social media are arbitrarily afforded stricter protections than others, solely because of how they are inconsistently categorized under the current SCA. The rationale for affording greater protections to communications held in ECS for 180 days or less than to communications held in ECS for more than 180 days, or held in RCS, is unclear. As a result of these arbitrary distinctions, law enforcement has an easier time searching an individual’s private social media, which may require only a subpoena or court order, than it would going through someone’s diary, which requires a warrant.

Further complicating the application of the SCA to social media today is the fact that, in the age of social media, it is becoming more difficult to distinguish content from non-content data. When Congress drafted the SCA, it attempted to apply the Fourth Amendment to online communications and therefore made a distinction between content and non-content data; however, the difference between what constitutes content—analogous to what is contained inside an envelope—and non-content—analogous to what is on the outside of an envelope—in the digital context has become difficult to discern. Courts have also considered the third-party doctrine when determining what information could be obtained with a subpoena, reasoning that because the information had been disclosed to a third party, the user had no reasonable expectation of privacy. However, social media users disclose a variety of personal information when signing up for an account, often including, at a minimum, their name, birthdate, and email address, and their posts carry substantial additional metadata. Defining the privacy protections for these data is critical because law enforcement can use them to piece together where an individual was at the time they posted to social media or where an individual was when the content they posted was retrieved. Whether law enforcement should need a warrant or some lesser process to obtain this highly sensitive information is not clearly defined in the current SCA.

The ECPA—which includes the SCA—was enacted to protect citizens from having their electronic communications intercepted without the proper authorization, but these protections need to change in response to evolving communication technologies. This legislation was intended to extend Fourth Amendment protections to new technologies, but because social media technologies have evolved so rapidly since 1986, the SCA no longer truly affords the intended protections. For citizens to be protected against unreasonable searches of their digital media, Congress needs to restructure the existing legislation to properly address how communication technologies have evolved over the past thirty-six years. Not only can one social media platform function as both an ECS and an RCS provider under the current SCA definitions, but it is now also difficult to determine whether a specific social media function, such as Stories, which has properties of both, should be governed by ECS or RCS rules. Further, content and non-content data now overlap, making it difficult to clearly differentiate them and to ensure that all of this personal information is adequately protected under the SCA.

To ensure the protection of constitutional privacy rights and prevent private social media communications from being unfairly used against their creators in court, Congress should require that all compelled disclosures be governed by the same rules as Fourth Amendment searches; that is, it should require a warrant supported by probable cause. If all compelled disclosures required a warrant, equal protections would apply in all situations, as the standard would be consistent across physical and digital searches; this would help ensure that defendants’ due process rights were not violated. Further, because the distinctions between an ECS and RCS, as well as between content and non-content data, are no longer appropriate, it would be advantageous for Congress to revise the SCA to better align with modern technologies by drawing the necessary delineations based on the functions being used, not on the specific type of provider. This way, the SCA would not only better apply to modern technology but would hopefully also better apply to future emerging technologies.

 

96 S. Cal. L. Rev. 707


Executive Senior Editor, Southern California Law Review, Volume 96; J.D. Candidate 2023, University of Southern California Gould School of Law; M.S. Clinical Research Methods 2020, Fordham University; B.A. Psychology 2015, New York University. My thanks to my parents, Marlene and Lee Allen, and Jennifer Guillen for their input and support throughout the note-writing process. I would also like to thank my Note advisor, Professor Eileen Decker, for her guidance, and the members of the Southern California Law Review for their hard work and thoughtful suggestions.


Big Data in Health Care — Predicting Your Future Health
by Kristina Funahashi*

From Vol. 94, No. 2
94 S. Cal. L. Rev. 355 (2021)

Keywords: Health Care & Life Sciences; Data Privacy

Predictive analytics—a branch of data analysis that generates predictions about future outcomes through the power of computers to process large amounts of data using statistical modeling and machine learning—is increasingly applied in health care. While it has the potential to improve patient health and lower health care costs, the ability to peer into people’s future health status has also raised significant concerns about privacy and patient self-determination. Part I of this Note explains predictive analytics and machine learning in health care; it discusses data sources (which may not all be medical records) and examines several predictive analytics models. It concludes by assessing the risks posed by predictive health analytics, including psychological harms to patients and discrimination by health care insurers, health care providers, and employers. Part II summarizes existing federal data privacy and nondiscrimination legislation relevant to health care information in order to assess where the law leaves gaps regarding the regulation of predictive health data. By comparing predictive health analytics with genetic testing—another method of predicting an individual’s risk of disease where laws have been enacted to protect against perceived “misuses” of test results—Part III reaches conclusions about how the law could treat the use of predictive health analytics and makes recommendations about future protections for patients.

* Executive Articles Editor, Southern California Law Review, Volume 94; J.D. Candidate 2021, University of Southern California Gould School of Law; B.A. Organismic and Evolutionary Biology 2014, Harvard University. I would like to thank Professor Alexander M. Capron for his invaluable guidance and insights during the drafting of this Note. I would also like to thank the Southern California Law Review Staff for their incredibly detailed and diligent assistance throughout the editing process. Last but far from least, a heartfelt thank you to my grandfather, Jerry D. Wu, M.D., and my parents, Lenora and Ted Funahashi, for their unwavering encouragement, love, and support.



From Volume 91, Number 1 (November 2017)



Navigating the Atlantic: Understanding EU Data Privacy Compliance Amidst a Sea of Uncertainty

Griffin Drake[*]

TABLE OF CONTENTS

INTRODUCTION

I. BACKGROUND

A. Key Principles of Privacy Regulations

B. Schrems I and the Invalidation of the Safe Harbor

C. The Road to the Privacy Shield

D. Other Available Transfer Mechanisms

II. THE FUNDAMENTAL DIFFERENCES BETWEEN U.S. AND EU DATA PRIVACY POLICIES

A. EU Privacy Policies

B. U.S. Privacy Policies

III. HOW THE GDPR AFFECTS THE CURRENT AND FUTURE DATA PROTECTION LANDSCAPE

A. What’s New in the GDPR?

B. How Does This Affect Data Transfer Mechanisms?

1. BCRs

2. Model Clauses

3. Codes of Conduct and Certification

IV. THE FATAL FLAWS OF THE PRIVACY SHIELD, MODEL CLAUSES, AND BCRS

A. Privacy Shield

B. Model Clauses

C. BCRs

V. SO, WHAT OPTIONS DO COMPANIES HAVE?

A. Consent

B. Prepare for the GDPR

 

INTRODUCTION

United States government surveillance has reached a point where the government “c[an] construct a complete electronic narrative of an individual’s life: their friends, lovers, joys, sorrows.”[1] In June 2013, Edward Snowden released thousands of confidential documents from the National Security Agency (NSA) regarding classified government surveillance programs.[2] The documents brought to light the fact that the NSA was spying on individuals, including foreign citizens, and deliberately misleading Congress about these activities.[3] According to Snowden, the surveillance was so extensive that its measures, including a program known as “PRISM,” involved the improper mass collection of data from citizens worldwide through NSA interactions with telecom giants like Google, Microsoft, and Facebook, and by tapping into global fiber optic cables.[4]

These revelations sent shockwaves around the globe, and the backlash was swift and unforgiving. One thing became clear to Americans and the rest of the world: the NSA and the U.S. government had prioritized the massive collection of private information over the personal privacy rights of the global population.[5] The prospect of civil liberties being cast aside through grossly intrusive surveillance pushed Snowden to step forward and reveal what he had seen all too closely.[6] He no longer wanted to “live in a world where everything that I say, everything that I do, everyone I talk to, every expression of love or friendship is recorded.”[7]

Across the Atlantic, the priorities of European Union member nations stand in stark contrast to those of the United States. The EU takes a much stronger stance on privacy and data protection and restricts how companies transfer data to non-EU nations. In the EU’s Data Protection Directive (the “Directive”), the right to privacy is described as a “fundamental right[] and freedom[].”[8] This sentiment is echoed in other landmark EU documents such as the Convention for the Protection of Human Rights and Fundamental Freedoms.[9]

Despite the very different treatment of the right to privacy in the U.S. and EU, we live in an era of lightning-quick information transfers and an interconnected global economy in which the sharing of private data (including names, IP addresses, health care information, and so forth) across borders is essential to companies conducting business worldwide.[10] The current state of the world necessitates that data flow seamlessly from country to country.[11] This reality led to the EU’s Safe Harbor Decision (“Safe Harbor”), allowing American companies to self-certify their compliance with certain heightened privacy restrictions when handling the private information of EU citizens and thus facilitating the transfer of information from the EU to the U.S.[12] However, the Safe Harbor was invalidated in Schrems v. Data Protection Commissioner (“Schrems I”).[13] This left American companies to rely on other EU-approved data transfer mechanisms, namely Model Clauses,[14] Binding Corporate Rules (BCRs), or specific statutory derogations. In need of a replacement for the Safe Harbor, the EU and the United States agreed on a new deal known as the “Privacy Shield,” despite heavy criticism.[15] An additional layer of complexity exists due to the fact that the Directive, which long governed the handling of private information in the EU, is now being replaced with the significantly stronger General Data Protection Regulation (“GDPR”).

This Note will argue that in light of the pending commencement of the GDPR, American companies relying on the Privacy Shield are exposed to potential risk, as it fails to satisfy the “essentially equivalent protection” standard set forth in Schrems I, and that alternative data protection mechanisms, such as Model Clauses or BCRs, have serious drawbacks and face similar questions regarding their validity.[16] Subsequently, I will discuss some of the potential alternative mechanisms that companies can use to best mitigate exposure to the risks inherent in transatlantic data transfers.

Part I of this Note will describe the background that has led to the current uncertainty in the validity of the various data protection mechanisms. This Part will discuss the key principles behind data privacy protections, the Schrems I case and the subsequent invalidation of the Safe Harbor, the buildup to the Privacy Shield, and the other possible transfer mechanisms. Part II will discuss the fundamental differences between the United States’ and the European Union’s approaches to protecting individuals’ private information. This section will highlight the irreconcilable differences between U.S. surveillance policies and the EU’s view of the fundamental right to privacy. Part III will discuss the pending implementation of the GDPR and the relevant changes this directive will have to the current transatlantic data transfer legal regime. Part IV will outline the shortcomings inherent in the Privacy Shield, Model Clauses, and BCRs individually. Part V will conclude this Note by briefly discussing potential alternatives that companies can use to attempt to weather the shaky data privacy landscape that exists today. The proposed alternatives include obtaining consent, using codes of conduct and certification, and layering transfer mechanisms.

I.  Background

A.  Key Principles of Privacy Regulations

With the ability of companies to transfer swaths of consumers’ personal data globally at the click of a button, the United States and the European Union have been forced to adapt privacy regulations to meet this rapidly changing reality. In doing so, certain fundamental principles have arisen and been used to shape modern data privacy laws. In 1973, the U.S. Department of Health, Education, and Welfare convened a committee to review the use of automated data systems that maintained personal information.[17] This committee laid out five principles for data protection, known as the “Fair Information Practices” (FIPs).[18] These principles were incorporated, though not by name, in the Privacy Act of 1974.[19] The Privacy Act of 1974 also established the Privacy Protection Study Commission, which in 1977 refined the FIPs into eight clear principles.[20] The principles are: Openness, Individual Access, Individual Participation, Collection Limitation, Use Limitation, Disclosure Limitation, Information Management, and Accountability.[21] These principles, however, apply only to the public sector and were not formally referenced by Congress until 2002.[22]

In the EU in the 1970s, many laws were already consistent with the principles described in the FIPs.[23] In 1980, the Organization for Economic Cooperation and Development (OECD) developed a set of privacy guidelines with its own eight principles for data protection.[24] These principles include: Collection Limitation, Data Quality, Purpose Specification, Use Limitation, Security Safeguards, Openness, Individual Participation, and Accountability.[25] These principles clearly bear a strong resemblance to the FIPs with one major difference: they are broadly intended to apply across both the public and private sectors. In 1995, the EU took the principles a step further and adopted the Directive to protect individuals and their private data.[26] These principles were also included in the GDPR, along with a few additional principles.[27] All in all, the principles created in 1973 and revised over time often serve as the foundation for data privacy regulations today.

B.  Schrems I and the Invalidation of the Safe Harbor

While transferring data around the world is a practical necessity for large companies, governments in the EU and the United States recognize that, given how quickly and easily personal data is transferred, this data must be protected. Acknowledging these two conflicting important interests, the EU and the United States struck a deal. In 2000, the European Commission passed a decision known as the Safe Harbor, determining that the United States, in conjunction with the terms of the agreement, provided adequate privacy protection.[28] The Safe Harbor decision allowed U.S. companies to self-certify that they would abide by EU data protection standards when transferring data across the Atlantic.[29] This option was attractive to companies because it was relatively easy to institute and it efficiently lowered transaction costs compared to Model Clauses or BCRs—so much so that over five thousand companies chose to self-certify.[30] Self-certification involved companies (1) outlining specific information about the company and the company’s use of personal data obtained from EU citizens on an online form and (2) paying a processing fee of $200.[31] This option was considered to fall into the category of an “adequacy decision” by the Commission in accordance with Article 25 of the Directive.[32] It is important to note, though, that this decision did not give all U.S. companies free rein to exchange information across the Atlantic. Instead, this method of achieving adequate protections applied only to the companies that self-certified and complied with the requisite standards.

While this solution worked for over a decade, the revelations published by Edward Snowden served as evidence that the Safe Harbor was built on false assurances. The Safe Harbor met its ultimate demise in Schrems I, in which Maximillian Schrems, an Austrian privacy activist, complained to the Data Protection Commissioner that Facebook, a Safe Harbor-certified company incorporated in Ireland, was transferring personal data into the United States where “the law and practice in force in that country did not ensure adequate protection of the personal data held in its territory against the surveillance activities that were engaged in there by the public authorities.”[33] In his original case, Schrems cited Facebook’s voluntary participation in the aforementioned NSA PRISM program, which gave the U.S. government access to substantial amounts of private personal information.[34] The claim was that “there was no meaningful protection in US law or practice regarding data transferred that was subject to US state surveillance.”[35]

The Irish High Court agreed with Schrems, stating that “[t]here is, perhaps, much to be said for the Snowden revelations exposing gaping holes in contemporary US data protection.”[36] Accordingly, the Irish High Court, in line with EU law, referred the matter to the Court of Justice of the European Union (“CJEU”) to adjudicate the validity of the adequacy decision regarding the United States.[37]

The CJEU agreed with the Irish High Court and took a large step by fully invalidating the Safe Harbor.[38] The standard stated by the court vastly elevated the requirements for all future transfer mechanisms: privacy protection measures in non-EU member nations must be “essentially equivalent to that guaranteed in the EU legal order.”[39] Thus, the CJEU found that U.S. privacy law was incompatible with the EU charter.[40]

C.  The Road to the Privacy Shield

With roughly five thousand companies relying on an invalidated measure, uncertainty as to what steps to take was apparent and widespread. But just as economic necessity drove the United States and the EU into the eventually invalidated Safe Harbor, it likewise drove them to craft a new, seemingly more robust agreement.[41] In coming to this agreement, the two parties faced incredible time constraints and deadlines from the Article 29 Working Party, the group designated to represent the EU member nations’ data protection authorities. The agreement that was developed, known as the Privacy Shield, was fully approved and placed into effect in July 2016, despite facing some bumps in the road,[42] and was intended to guarantee that the United States would provide individuals protections “essentially equivalent” to those they would receive under the Directive.[43] The goal was that the Privacy Shield would fix the weaknesses inherent in the Safe Harbor as identified by the CJEU while providing a useful means to maintain the free flow of information.[44]

The dilemma faced by both the EU and the United States was that data necessarily needs to flow between them to maintain everyday business functions, while at the same time there must be protections in place to ensure the proper handling of the data being transferred.[45] The Privacy Shield was agreed upon because of this dilemma, and it has been described by some as a much stronger version of the invalidated Safe Harbor.[46] The Privacy Shield now includes stronger obligations regarding how companies handle data, increases transparency regarding how data is used, safeguards against U.S. government access, and provides new protections and remedies for individuals and a joint review mechanism.[47]

The agreement, though, was created in line with the Directive (and the Schrems I decision, which was made based on the Directive). Come 2018, the Directive will be replaced by the GDPR.[48] The GDPR was developed to modernize the protections given by the EU to individuals while greatly strengthening individuals’ rights.[49] The GDPR is intended to protect personal data in a manner significantly stronger than under the Directive.[50] Further, the new, stronger protections of the GDPR may lead to the invalidation or revision of the Privacy Shield, which was hurriedly designed to comply with the CJEU court decision and the Directive. Even today, there are already complaints that the Privacy Shield cannot adequately protect EU citizens’ data, similar to those raised against the Safe Harbor.[51] These complaints have been exacerbated by an executive order issued by President Trump, excluding non-U.S. citizens from the protections of the Privacy Act of 1974.[52]

D.  Other Available Transfer Mechanisms

So, what options does a U.S. company have for transferring personal data? The Directive outlines acceptable methods for such transfers, including an adequacy decision by the Commission, a Commission-approved transfer mechanism, or a statutory derogation.[53] A brief overview of these transfer mechanisms follows here, but they are discussed in more depth in Parts II, III, and IV.

An “adequacy decision” is a determination by the Commission that a non-EU member country “ensures an adequate level of protection.”[54] The Safe Harbor and the Privacy Shield were considered adequacy decisions in the sense that they developed certain rules and regulations that would strengthen the United States’ privacy protections to an “adequate” level. The Privacy Shield remains approved, meaning that a company can legally rely on it to transfer data. However, this mechanism could place a company in a position where, if the Privacy Shield is invalidated or undergoes substantial revision, the company will need to undertake costly measures to ensure that its data transfers comply with the applicable laws and regulations in order to avoid hefty fines for non-compliance.[55]

A second option is either of the two European Commission-approved transfer mechanisms: Model Clauses or BCRs.[56] BCRs are company-developed rules governing the protection of private data that must undergo a rigorous, multi-step approval process by EU data authorities; they may be used to ensure that all transfers within a single group or company provide adequate protection as described in Article 26(2) of the Directive.[57] It is worth noting, though, that BCRs only legitimize data transfers made within a single overarching group.[58] A major benefit of BCRs is that unlike Model Clauses, there is no need to sign new contracts with each transaction.[59] This allows a company to have a clear internal procedure for handling private data and can lead to particular efficiencies.[60] Any company that is sharing or transferring data outside of its broader corporate entity structure, however, will still need to use a different method to validate those transfers, making this option less attractive to companies that exchange information externally.

This leads some companies to turn to Model Clauses, sets of contract clauses that, as determined by the European Commission, provide adequate safeguards to data privacy.[61] These have become an option oft-recommended by privacy experts and lawyers[62] due to the relative ease of implementation and their long-standing legal validity in the EU.[63] In order to receive the immunity given to companies using Model Clauses, the Clauses must be included in agreements verbatim, leading to the benefit of needing no prior authorization from country-specific data authorities.[64] Model Clauses also have the distinct advantage of covering a wide range of data transfers. Specifically, Model Clauses, like BCRs, can be used for intra-company transfers; they can be used for U.S.-EU transfers, like the Privacy Shield; and they have the additional benefit of being available for transfers between the EU and entities in any other jurisdiction, unlike the other two options.[65] This added flexibility, combined with the lower transaction costs associated with implementing these clauses, can be especially appealing to large, multinational companies that transfer data to different jurisdictions and between different entities. Model Clauses, though, are not without flaws, many of which will be discussed in Part IV.

Lastly, the data transfer itself may qualify for a statutory derogation.[66] Derogations may include a data transfer necessary to protect the vital interests of the data subject or a data transfer after the subject has given unambiguous consent, amongst other options.[67] Due to the highly specific and less common nature of many of the derogations, only consent will be discussed in this Note.

II.  The Fundamental Differences Between U.S. and EU Data Privacy Policies

Data protection as a concept is itself a novel and rapidly changing field, due in large part to the fact that the commercial Internet is only a few decades old.[68] Despite the relative infancy of this field, developments in how data is used and managed electronically evolve rapidly, and legislators fight a constant battle to keep pace with these changes. In light of the practical realities that attach to this field, the EU and the United States have taken substantially different views on what measures should be taken to protect the data filling the technological universe. The EU has widely confirmed the belief that citizens have a “fundamental right[]” to data protection.[69] The United States, however, does not explicitly share the view that data privacy protection is a fundamental right of all persons.[70]

A.  EU Privacy Policies

The notion that “[e]veryone has the right to the protection of personal data concerning him or her” is stated plainly in the Charter of Fundamental Rights of the European Union, a document designed to lay out the basic rights of European citizens and provide guidelines relating to these rights.[71] As mentioned earlier, this EU-recognized right is reiterated in the Directive, whose specifically stated purpose is to ensure that member states “protect the fundamental rights and freedoms of natural persons, and in particular their right to privacy with respect to the processing of personal data.”[72]

One explanation put forward by some commentators regarding the EU stance that data protection is a fundamental right stems from the 1940s.[73] During the Second World War, the Nazis appropriated European census records, using these records to expedite deportations to concentration camps and to strengthen Germany’s hold over Europe.[74] I argue that this experience, in part, prompted the EU to take a stronger stance on privacy protections, whereas the United States, a country that has not experienced such a scarring example of what can happen when private information falls into the wrong hands, is less inclined to push for stronger protections.

Another explanation can be seen in the early adoption of the FIPs by many EU nations and the EU as a whole.[75] By adopting these principles and incorporating them into early data privacy rules and regulations, the EU set a precedential course that influenced all future privacy-related decisions. This created a multi-generational awareness of, and belief in, the importance of protecting individuals’ privacy.

The focal point of the EU privacy regime has historically been the Directive. The Directive is omnibus legislation, protecting personal data comprehensively rather than through a fragmented, country-by-country approach. The Directive has been hailed by commentators as “the most influential national data protection law.”[76] Additionally, the drafters of the Directive took an important step in Article 28, making the Directive applicable in countries outside of the EU.[77] Specifically, transfers of data outside of the EU require contracts or other legal acts explicitly governed by EU or member-nation law.[78]

Internationally, the trend has been to follow the EU in creating legislation that applies to all data processing inside and outside of the country, largely mirroring the strict protections laid out in the Directive.[79] The thought is that if foreign countries cannot process information about EU residents, private interests will lose out on a major global market, and thus, countries will have an overwhelming incentive to come into compliance. However, despite a global trend of compliance, two powerful nations have remained defiant in the face of such measures: China and the United States.[80]

Although at first glance it may appear that the EU has come up with a comprehensive and invaluable solution to the data privacy issue, it remains, like most legislation, imperfect. One flaw is apparent simply from the name of the document: it is a directive. As such, member nations maintain some control in dictating their own privacy laws, which has led to fragmentation in the interpretations of the principles laid out in the Directive.[81] This materially limits one of the major strengths of the Directive: its being a single document utilized by all member nations.

This, however, will change with the commencement of the GDPR.[82] The key again comes in the name of the document: here it is “regulation.” As a regulation, member nations no longer have the ability to interpret the document to create their individual data policies.[83] Regulations, therefore, carry with them an increased level of strength that does not exist in the Directive. All things considered, the general idea is to centralize power regarding data privacy and eliminate the sometimes patchwork effects of the Directive. This will be discussed in more detail in Part III.

B.  U.S. Privacy Policies

In describing the United States’ approach to data privacy policy, it may be useful to imagine a scheme opposite to that of the EU. The United States government does not recognize a fundamental right to privacy.[84] Additionally, the United States “uses a sectoral approach that relies on a mix of legislation, regulation, and self-regulation.”[85] U.S. privacy laws are often responses to particular events and are tailored to particular industries and types of data, similar to a firefighter running around putting out individual fires one at a time.[86] This has led to not only inefficiently overlapping policies but also notable gaps in the U.S. privacy framework.[87] These gaps in protection have been used as an explanation as to why the United States failed to receive an adequacy decision from the EU before the initiation of the Safe Harbor.[88]

As discussed in Part I, the United States produced the FIPs in 1973 as an early step in privacy protection. Here, however, the United States went in a different direction than the EU, which is one possible explanation for the very different positions that each holds today. The United States did not explicitly create broad legislation with the FIPs in mind;[89] instead, it opted for various acts and statutes determined by the needs of certain industries and agencies which interpreted and revised the FIPs in various ways.[90] Further, early laws incorporating the FIPs were applicable only to public sector entities, applying only in specific circumstances to the private sector.[91] I argue that because of the lack of a longstanding and broad commitment to the protection of individuals’ private information, U.S. citizens do not have their EU peers’ deep-rooted, multi-generational awareness of and belief in the importance of protecting individuals’ privacy. This leads to less political pressure on the U.S. government to enact strong privacy policies, perpetuating a cycle of citizens accustomed to weaker protections.

Another explanation for why the United States would take an approach to privacy substantially different from that of the vast majority of developed nations is similar to one rationale behind the EU policy: namely, a massive tragedy. As one commentator described, “[t]he attacks of September 11, 2001, have further weakened Washington’s will to protect data. [In fact, t]hrough new laws and new offices, Washington now has more unfettered access to citizens’ data than ever before.”[92] Another author, in 2002, went so far as to predict that “[c]ommunications technology is necessarily intrusive and, spurred on by international efforts to ferret out terrorism as a result of the September 11, 2001, attacks on the United States, will become even more so.”[93] In summation, the September 11 tragedy planted an unshakable image in the minds of U.S. citizens as a whole, leading to an increase in concern and vigilance regarding terror threats. Whether this sentiment remains as vibrant today is beyond the scope of this Note, but terror threats are ever-present,[94] suggesting this rationale is unlikely to fade. Evidence of an ongoing desire to manage the danger includes the U.S. government’s covert surveillance tactics, as exposed by the documents leaked by Edward Snowden.[95]

An additional rationale for the U.S. stance on privacy regulation results from a desire to maintain a free market economy with limited government regulation. The idea is that the government should limit regulations on businesses and allow the market to police itself. For instance, the Clinton administration advocated for industry-specific self-regulation, as opposed to government regulation.[96] That is not to say that the Clinton administration was opposed to privacy regulations, but this advocacy was a clear endorsement of a fragmented system of dealing with privacy issues. Additionally, one commentator described the Safe Harbor as being a “minimalist solution” in order to avoid a trade war “that was supposed to evolve into something stronger. It transpired, however, that the United States never intended to follow through on commitments to strengthen it.”[97] While these anecdotes are far from dispositive, they do point to the endurance of an American philosophy holding that the government should not over-regulate markets.

This rationale, though, is at least debatable. For instance, President Obama released a report in January 2017 calling for increased privacy regulations and re-emphasizing the right to be protected from governmental intrusion.[98] The Obama administration itself, though, was heavily criticized upon the exposure of the PRISM program undertaken by the NSA.[99] Furthermore, the views expressed in this report may not be shared by the new administration, which removed the report from the White House website the day after President Trump’s inauguration and issued an executive order cutting back privacy protections for non-citizens just days after his inauguration.[100]

It would be remiss to paint a picture of the United States as being completely indifferent to individuals’ privacy rights. For instance, the First, Third, Fourth, Fifth, and Fourteenth Amendments collectively provide the implicit foundation for many of the laws and regulations regarding privacy in the United States.[101] There are also numerous federal laws, including the Health Insurance Portability and Accountability Act of 1996, the Fair Credit Reporting Act, the Gramm-Leach-Bliley Act, and many others, that address the protection of private information.[102] Additionally, the Federal Trade Commission has broad powers to take enforcement actions regarding “unfair or deceptive acts or practices in or affecting commerce.”[103] On top of this, individual states have passed their own regulations, with California’s regarded as amongst the most comprehensive.[104] These different protective measures are likely in place because the U.S. government places at least some value on protecting individuals’ privacy.

The issue, however, is that a system like this is inherently flawed. Using a patchwork structure necessarily leaves gaps.[105] In addition to gaps, individual state and federal laws are often inconsistent with one another.[106] Unfortunately, the United States has consistently rejected both omnibus legislation and the fundamental-rights approach to data protection.[107] There is no clearer depiction of this than the egregious surveillance tactics used by the U.S. government and revealed in the Snowden leak. Just as September 11 dramatically changed the landscape of data privacy protection in the United States, the Snowden documents dramatically altered the state of EU-U.S. privacy relations.

III.  How the GDPR Affects the Current and Future Data Protection Landscape

The Directive has stood as the basis for EU data privacy law since 1995. The Directive provides the structure and legal guidelines with which the Safe Harbor, the Privacy Shield, the Model Clauses, and other transfer mechanisms seek to comply. The Directive, however, is nearing extinction. On April 14, 2016, the European Parliament approved the GDPR; it takes effect on May 25, 2018, at which point companies will need to be in compliance with the new, stronger regulation.[108] This Part of the Note will focus on how the GDPR differs from the Directive and what that means in terms of compliance and the potential transfer mechanisms.

A.  What’s New in the GDPR?

The GDPR sets out to tackle the same goal as the Directive: protecting the fundamental rights and freedoms of the EU citizenry with regard to the handling of personal data.[109] The goal is to do this while also facilitating efficiencies within the European economy and helping to promote economic and social progress.[110] These goals, however, are pursued slightly differently in the GDPR than in the Directive.

First, as mentioned earlier, a relevant distinction between the GDPR and the Directive is identifiable by looking at the titles of the two enactments. The GDPR is a “regulation,” whereas the Directive is a “directive.” This matters because a directive gives only guidance to member nations, allowing each member nation to interpret the directive and achieve its purposes in whatever way it deems appropriate.[111] A regulation, however, is applicable to each member nation and does not have to be enacted into each individual country’s legal framework.[112]

The impact of this should not be understated. A major issue with the current system is that companies must deal with greatly differing regulations in each nation in which they maintain data. This, in large part, will be eliminated. The EU stated in a press release that the estimated savings from creating a “one-stop-shop” will be in the neighborhood of €2.3 billion per year.[113] Nevertheless, while the GDPR will remove a substantial amount of the difficulty that has arisen from potentially having to comply with twenty-eight different member-state data protection laws, companies must be aware that there are still some areas in which member nations have discretion.[114] An example can be seen in Article 6(1)(e), regarding one way in which a company can legally process personal data.[115] This provision allows processing when “processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller.”[116] All in all, though, one of the most consequential differences of the GDPR will be the decrease in administrative costs faced by companies who no longer have to negotiate, communicate, and work with data protection authorities from many different nations.

A second difference between the GDPR and the Directive is the strengthened focus on individuals’ rights vis-à-vis the way the world transfers, accesses, and uses data. In 2017, personal data is being transferred at speeds and in volumes that were unthinkable not long ago, and consumers recognize a need for strong protection. As stated by the EU, “[n]ine out of ten Europeans have expressed concern about mobile apps collecting their data without their consent.”[117]

The specific individual rights highlighted in the GDPR are the right to be informed, the right of access, the right to rectification, the right to erasure, the right to restrict processing, the right to data portability, the right to object, and rights related to automated decision-making and profiling.[118] These rights focus on two overarching goals of the GDPR. First, the GDPR increases the availability and clarity of the information provided to individuals whose data is being processed. Second, it grants citizens more control over the data they provide and also gives the citizens easier access to legal remedies for breaches. While not all of these rights are completely new or different from the rights discussed in the Directive, in general they are written in a way that strengthens the rights of the citizen.[119]

Third, the definition and application of “consent” have been adjusted to further protect individuals. Consent needs to be clear, unambiguous, specific, informed, and freely given.[120] Further, the language in the GDPR seems to have noticeably narrowed the possibility of a type of implied consent arguably possible under the Directive.[121] The GDPR also has another important new feature regarding consent. Individuals are now allowed to withdraw consent at any time, and this withdrawal must be as easy to execute as the original consent.[122] This further emphasizes the strong weight the EU has placed on strengthening the role of the individual in the handling of one’s private information.

Fourth, the enforceability of the GDPR and the accountability of companies have been enhanced by new procedures, which companies must follow in order to ensure that data is appropriately protected and processed. The accountability principle accompanies transparency in an attempt to strengthen citizens’ trust in how their data is handled.[123] One way of accomplishing corporate accountability is by mandating “[d]ata protection by design” and “[d]ata protection by default.”[124] These concepts, in short, mean that projects being designed or undertaken by companies must consider appropriate data protection mechanisms from inception and throughout their duration.[125] This includes safeguards such as minimizing the processing of personal data, anonymizing data as soon as possible, and building services and applications with state-of-the-art data protection.[126] Accountability is also addressed in a few other ways. First, there are stricter regulations governing how companies record what data they are processing and for what purpose.[127] Second, extensive privacy impact assessments are necessary to comply with the requirement that companies maintain effective procedures to protect personal data.[128] These assessments analyze the risks to individuals, determine the necessity and proportionality of the processing in relation to the purpose, and give a description of the processing operations and the legitimate interests pursued by the data controller.[129] Lastly, data protection authorities will be able to fine companies up to 4 percent of their global annual revenue for violations of the rules.[130]

Certainly there are other differences between the two enactments, but I have highlighted those most relevant to the issue at hand. Altogether, the key differences between the GDPR and the Directive are that the GDPR (1) takes a stronger stance on the accountability and enforcement of the principles that underlie the regulation and (2) gives individuals access to more information and a larger role to play in the data processing process. Each of these goals is championed by the EU and appears to have played an important role in the creation of the GDPR.[131] The GDPR balanced pro-economic benefits, achieved through a “one-stop-shop” concept that dramatically reduces transaction costs for companies (especially those operating in more than one EU nation), with pro-individual rights, secured through greater transparency and accountability from companies processing personal data.

B.  How Does This Affect Data Transfer Mechanisms?

As alluded to in the previous section, there are more than a few new and unique challenges that companies will face in trying to transfer data across the Atlantic. The GDPR, however, does quite a bit to clarify the transfer mechanisms available to companies, while also introducing a few new ones. I will focus on BCRs, Model Clauses, and Codes of Conduct and Certification Mechanisms.

1.  BCRs

The GDPR provides a very important upgrade to the BCRs that were developed based on the Directive. In an attempt to increase consistency of the enforcement of the data protection laws, indirectly reducing transaction costs and thus appeasing businesses, the GDPR formally recognizes the use of BCRs and lays out a mechanism for utilizing and monitoring BCRs in Article 47.[132] Prior to this change, companies would need separate approvals from each country in which they handled personal data, and only two-thirds of EU member nations recognized BCRs as appropriate protective measures.[133] These upgrades will certainly help to make BCRs much more efficient for companies with entities in various countries.[134] However, as will be discussed in Part IV, BCRs are still far from a perfect option for the vast majority of companies.

2.  Model Clauses

As stated in Article 46, Model Clauses will remain an appropriate safeguard for transferring data so long as the clauses are approved as described in Article 93(2).[135] As with BCRs, the provisions of the GDPR substantially reduce the administrative burden of Model Clauses. There are a few relevant changes that facilitate this increase in efficiency. First, the EU Commission will create a new set of Model Clauses pursuant to the GDPR, which will not require the prior authorization of the nation from which the data is being processed.[136] While the Model Clauses have long been intended to need little-to-no approval from individual nations under the Directive, nation-specific issues still existed regarding appropriate filings, monitoring, and additional objections.[137] Another relevant change involves ad hoc contractual clauses. These can include independently drafted clauses or some variations to the terms of the Model Clauses. The GDPR makes it so that these clauses will need to be approved only by an appropriate supervisory authority in order to apply to all EU nations.[138] In contrast, the Directive’s clauses required approval by each and every nation’s data protection authority before they could be considered adequate.[139] Here, the important differences are that these clauses are intended to increase efficiency, accomplished by the overarching “one-stop-shop” notion, and to provide flexibility for companies to create adequate provisions that better fit their businesses.

3.  Codes of Conduct and Certification

Two of the unique transfer mechanisms detailed in the GDPR are the Codes of Conduct and Certification. Article 40 of the GDPR explains that a notable goal of EU privacy officials is to encourage the creation of Codes of Conduct.[140] The Codes of Conduct in large part work like a non-member state seeking to acquire an adequacy decision under the Directive or a single entity seeking approval of BCRs, except that the codes apply to associations or representative bodies.[141] This option is targeted at small- and medium-size companies within certain sectors of the economy that frequently do business with one another.[142] The codes, if certified by an appropriate supervisory authority and combined with binding and enforceable commitments of the controller/processor to use adequate safeguards, qualify as an appropriate transfer mechanism for data leaving the EU.[143] The codes, however, must be reviewed by multiple levels of the EU data privacy hierarchy in order to be deemed to have “general validity within the Union,” which places an administrative hurdle on the use of this option.[144]

Certification, as described in Article 42, is a transfer mechanism that remains in its infancy, but it is very similar to the Codes of Conduct.[145] Certification mirrors the Codes of Conduct in the sense that it is intended to benefit small- and medium-size companies, it has a similar registration and approval process, and it legitimizes data transfers when combined with appropriate commitments of the controller/processor.[146] It also bears similarity in that it has the effect of a non-member state’s receiving an adequacy decision, but the key difference between the two is that Certification can be obtained by a single company.

IV.  The Fatal Flaws of the Privacy Shield, Model Clauses, and BCRs

A.  Privacy Shield

It is worth stating at the outset that the Privacy Shield agreement is between the United States and the EU. This is an important starting point, because this transfer mechanism is unique: companies relying on it are relying not just on their own compliance with EU data regulations, but also on the assumption that actions of the U.S. government (such as the illegal surveillance actions that led to Schrems I and the Safe Harbor invalidation) will not jeopardize privacy relations with Europe. This is a risky position for a corporation to place itself in, as the relationship between the EU and the United States is sown with distrust and remains incredibly fragile due to the Snowden revelations. Additionally, the necessity for a better understanding of the shortcomings of the Privacy Shield is underscored by the fact that over 2,400 companies have signed up for it as of late 2017.[147] This Note will now address some of the risks associated with choosing this method.

First, the Privacy Shield is an unsatisfactory solution for companies aware of the GDPR’s imminence. The Privacy Shield was created in line with the no-longer-applicable provisions of the Directive, instead of with the stronger privacy protections contained in the GDPR. Because of this, it will likely fail to meet the heightened requirements of the GDPR, and it will thus have to undergo serious revision.[148] As seen with the struggle to agree on the Privacy Shield in a quick and efficient manner following the invalidation of the Safe Harbor,[149] revisions to the Privacy Shield or the drafting of a new agreement altogether may create substantial delays and unwanted uncertainty.

Second, as laid out in Part II of this Note, the United States and EU have vastly different views on privacy rights. Granted, they each have a strong incentive to bridge the gap, given the undeniable economic benefits of doing so. But this may be especially hard to do in light of President Trump’s strong stance regarding the utilization of surveillance to combat terrorism. Before taking office, Trump had already encouraged a boycott of Apple products due to its refusal to create a “back door” entry into the cell phone of one of the San Bernardino shooters,[150] and said that he believed that the NSA “should be given as much leeway as possible. However . . . . [t]here must be a balance between those Constitutional protections and the role of the government in protecting its citizens.”[151]

Once in the White House, Trump further strained EU-U.S. privacy relations by issuing an executive order excluding non-U.S. citizens from the protections of the Privacy Act of 1974.[152] In reply, Jan Philipp Albrecht, the rapporteur for the EU’s data protection regulation, tweeted that the EU should immediately suspend the Privacy Shield and sanction the United States.[153] The European Commission issued a statement noting that the Privacy Shield “does not rely on the protections under the U.S. Privacy Act.”[154] Nonetheless, this has added to the tension between the EU and United States and further brought the validity of the Privacy Shield into question. While it is unclear how President Trump and Congress will handle impending issues related to privacy protections, like the expiration of Section 702 of the U.S. Foreign Intelligence Surveillance Act,[155] companies should be aware of the potential for the White House and Congress—each with an eye toward increasing government surveillance—to drastically increase U.S.-EU tensions and put the Privacy Shield at risk.

Third, there are fundamental aspects of the Privacy Shield that are inconsistent with the GDPR and are subject to the same criticisms that led to the Safe Harbor's invalidation. First, the EU relies on U.S. assurances that it will limit mass surveillance.[156] Not only did these assurances come from the potentially more privacy-friendly Obama administration, but they also seem weaker than is acceptable under the GDPR standards. For instance, the NSA maintains the ability to utilize "bulk" collection tactics, so long as they are consistent with various opaque limitations subject to a good deal of interpretation.[157] Second, the Privacy Shield's lauded redress mechanisms, which utilize an independent ombudsperson,[158] are vastly overstated, as well as undermined by a clear conflict of interest: the ombudsperson is appointed by, and reports to, the U.S. Secretary of State.[159] Certainly, the Privacy Shield attempts to lay out provisions to ensure the independence of the ombudsperson, but these provisions are speculative at best. Most importantly, it is difficult to imagine their being considered protections "essentially equivalent" to those afforded by EU member nations.

Fourth, the Privacy Shield is already facing legal challenges, largely in line with the above points,[160] and the initial version received harsh criticism from the Article 29 Working Party regarding the precise issues that led to the Safe Harbor invalidation.[161] Are these legal challenges likely to succeed? It is unclear. Was the Privacy Shield revised to try to appease the Article 29 Working Party? Yes.[162] Regardless, it is concerning that the Privacy Shield is facing such hurdles so early on, especially considering the panicked state in which the Safe Harbor invalidation left so many companies, as well as the already tenuous relationship between the United States and the EU.[163]

In summation, the Privacy Shield agreement is a potentially dangerous option for U.S. companies. While it certainly has some benefits in terms of relative ease of implementation and flexibility,[164] it is shrouded in uncertainty. The open questions are the same ones that led to the invalidation of the Safe Harbor, and with a surveillance-friendly administration in the White House, the relationship between the EU and the United States will likely remain uneasy going forward. A potential invalidation would leave thousands of companies scrambling for an alternative method of compliance while risking steep fines. Therefore, the decision to certify under the Privacy Shield is the decision to place faith in a hastily prepared band-aid fix for the bursting dam that followed the invalidation of the Safe Harbor. It requires not only trust in one's own ability to comply with the more complex EU regulations but also trust that U.S.-EU privacy relations will not slip from the shaky ground on which they already reside. That is a scary decision to make, and one that I would not advise.

B.  Model Clauses

While the forecast for the Privacy Shield is decidedly gloomy, the outlook for Model Clauses seems at least somewhat brighter. However, a few significant practical flaws make Model Clauses an insufficient option for long-term GDPR compliance. I will briefly discuss some of the basic practical issues with using Model Clauses, including their rigidity and the cumbersome requirement of including them in every data-transfer-related contract, before focusing on the more concerning, potentially fatal flaws regarding the legal validity of this compliance mechanism.

First, the GDPR has not expressly accepted the current Model Clauses. Instead, as described in Part III above, the GDPR outlines a process through which the EU Commission will create a new set of Model Clauses.[165] Utilizing one of the three current sets of Model Clauses is therefore a temporary solution at best. One additional general criticism of Model Clauses is that companies must include them in every single contract they have in order to validly transfer data. Thus, if the current Model Clauses are not valid under the GDPR, companies will be forced to amend every single contract relating to data transfers. While it is certainly possible that the current Model Clauses may be determined to provide adequate safeguards, the GDPR would likely have mentioned them if that were assured, particularly since BCRs were explicitly included and described.

Second, and to go even further with the point above, the current Model Clauses' validity is hotly contested. One of the strongest examples of pushback came in a position paper from the Independent Center for Privacy Protection in Schleswig-Holstein ("ULD").[166] In this paper, the ULD took a powerful stance, stating that "a data transfer on the basis of Standard Contractual Clauses to the US is no longer permitted."[167] Soon after, a conference of Germany's data protection commissioners largely agreed.[168] Model Clauses also face legal challenges via Maximilian Schrems's class-action lawsuit against Facebook.[169] The case is progressing slowly due to procedural issues, but it highlights the volatility surrounding the Model Clauses.[170] However, the views of those objecting to the validity of the Model Clauses are not unanimously held. For instance, the Article 29 Working Party and the EU Commission have continued to back the Model Clauses in spite of Schrems I.[171] Even so, it is difficult to ignore the uncertainty surrounding these clauses, and the potential expense their invalidation or amendment would incur.

Third, the current challenges described above have legitimacy. As the ULD stated, American companies using Model Clauses are subject to American surveillance laws, the same ones that led to the invalidation of the Safe Harbor and that make it impossible to provide the necessary protections for citizens.[172] The notion is simple: having Model Clauses in a contract will do nothing to stop the United States from conducting the types of surveillance that led to the invalidation of the Safe Harbor. Because of this, U.S. companies will not be able to comply with the section of the clauses stating that U.S. companies are not subject to laws that make it impossible to follow the instructions of the data exporter.[173] This contention has not yet led to the invalidation of the Model Clauses, but it remains a cloud hanging over their legitimacy.

In summation, Model Clauses are a risky option for companies for multiple reasons. First, using the current Model Clauses will force companies to amend every one of their contracts when the GDPR begins to be enforced. This will be both costly and time-consuming. Also, the Model Clauses already face scrutiny from certain nations' data protection authorities and could very well be invalidated even before the GDPR comes into play. Again, this would leave companies scrambling to find a new, legally valid mechanism. All this being said, once the EU Commission approves GDPR-compliant Model Clauses, it may well be smart to utilize them, and they should be analyzed at that time. The issue is that these clauses do not yet exist, and the current Model Clauses are riddled with issues.

C.  BCRs

BCRs are a long-standing mechanism by which U.S. companies may comply with EU privacy laws. Despite their history of providing valid and adequate protection, however, BCRs today are practically useless for most companies. The fatal flaws of BCRs generally stem from the practical impediments to their use as well as their now-questionable legal validity.

First, BCRs apply only to a very specific type of data transfer, making them unavailable to many companies. They apply when data is transferred amongst entities that are part of the same corporate group.[174] Because of this, BCRs are useless for companies that transfer data externally. This excludes a wide variety of industries, including those that transfer human resources data to third parties and those that transfer third-party market research data. Thus, many companies cannot use BCRs based upon this basic limiting factor.

Second, practical impediments to BCR approval eliminate this option for the vast majority of remaining companies. Companies must receive approvals from each separate data protection authority, which can take between eighteen and twenty-four months.[175] To further illustrate the difficulty and limited usefulness of BCRs, in more than ten years of their validity as a transfer mechanism, only around one hundred companies have actually obtained approval.[176] The enormous costs of compiling the BCRs make them viable only for massive multinational corporations like General Electric or Shell.[177] Entities with both the resources to pursue the BCR process and strictly (or mainly) intra-company data transfer requirements comprise a decidedly limited category, and many within it will still choose to pursue less burdensome and more practical mechanisms.

Third, BCRs currently face the same legal challenges as Model Clauses. To summarize, some data protection authorities have stopped considering BCRs as an acceptable transfer mechanism.[178] Currently, BCRs are only recognized by about two-thirds of member nations.[179] Ultimately, companies must recognize that the validity of BCRs, like Model Clauses, is necessarily clouded following Schrems I, and that countries have already begun to show distaste for them.

However, BCRs were significantly strengthened by the GDPR, and their future legal validity seems to stand on much firmer ground than that of the Model Clauses. The GDPR will also allow BCRs to apply to transfers outside the corporate group.[180] These transfers must be accompanied by commitments and agreements of the external parties to provide adequate protections,[181] a requirement that essentially replicates the Model Clauses. Companies will now have to take the time and effort to include contractual protections in every contract they make, thus removing one of the benefits of BCRs—not having the burden of exacting privacy commitments in every contract. Additionally, if a company is going to pursue this option, it is important to guarantee that its BCRs are GDPR-compliant. Companies currently using BCRs may see them invalidated or in need of revision in the future.

Nonetheless, BCRs remain an untenable option for most companies. While the GDPR appears to streamline the process of BCR adoption through the one-stop-shop concept inherent in the regulation,[182] it is still a complex process demanding substantial resources. Further, there is no evidence that approvals will indeed be streamlined under the GDPR. At this point, any increase in efficiency promised by the GDPR's passage is speculative at best.

Ultimately, BCRs may be better suited to overcome legal concerns than the other mechanisms and may serve as a relatively stable transfer mechanism under the GDPR. However, BCRs still face the limitations mentioned in the first two points above: they are only viable for large, multinational corporations that are primarily transferring data amongst their own corporate groups. Because of this, BCRs are a solution in only very limited circumstances.

V.  So, What Options Do Companies Have?

All hope is not lost. Data is still going to flow across the Atlantic. Many of the above mechanisms will continue to be used, and companies will, at least for the time being, be able to get away without updating and adjusting their privacy policies to conform with the upcoming implementation of the GDPR. Indeed, a survey from July 2017 found that 89% of U.S. organizations impacted by the GDPR are unprepared for the upcoming changes.[183] Companies that choose not to address this matter risk facing massive expenses if and when their privacy policies become inadequate.

There are a few potential options that companies can begin to adopt in order to best prepare themselves for privacy regulations going forward. However, there simply is no right answer, no magic solution to insulate companies from all risk. The suggestions below have their flaws, but in my estimation, they provide additional security for companies facing an uncertain privacy landscape. Finally, though it almost goes without saying, companies must strongly consider layering their privacy measures. Having multiple levels of transfer mechanisms enables companies to continue operations if one mechanism faces legal troubles, and it can save companies from the substantial costs of having to rapidly institute new compliance measures. It would be foolish for cautious firms not to diversify their privacy measures, just as it would be foolish for cautious investors not to diversify their investments.

That said, I will discuss how obtaining consent and utilizing the GDPR's Codes of Conduct and Certification are useful privacy protections to layer on top of other transfer mechanisms.

A.  Consent

As discussed above, a major goal of the GDPR is to increase transparency and give individuals more of a role in how their data is handled.[184] Because of this, consent is discussed at great length in the GDPR.[185] The notion of consent necessarily depends on providing information to the individual whose data will be transferred. Thus, obtaining consent is a valuable tool for acting in accordance with the spirit of the GDPR and thus (potentially) appeasing privacy officials. Consent, however, is not a perfect solution. Consent must be free and specific.[186] This standard can be difficult to achieve in some situations and may not be in a company's best interest in others. For instance, consent to the transfer of human resource data is problematic in an employer-employee relationship in which there is a clear bargaining advantage for the side receiving the data.[187] For example, if a job offer is conditioned on consent to data transfers, the consent that is received is unlikely to be considered "free." Also, the GDPR mandates that individuals consent to the specific use of their data.[188] Some companies may use data in ways that their users or customers would find objectionable, which could cause bad publicity. Consent is also limited by the age of the individual whose data is being processed. The GDPR states that the processing of data of individuals younger than sixteen will require parental permission, and it gives member nations the choice to lower this age to thirteen.[189] Because of this, companies—like Facebook—with younger users face real difficulties in obtaining adequate consent.

Nonetheless, this is a very good starting point for many firms. Companies are already required to process data in a manner consistent with a clear purpose.[190] This purpose should be articulable to the individuals whose data is being processed, and so consent should be at least theoretically possible. Finally, the cost and additional burden associated with obtaining consent may be minimal for companies, depending on their specific situations, and proper attempts to obtain that consent will likely be viewed positively by the data protection authorities, who have clearly placed an emphasis on this transfer mechanism.

B.  Prepare for the GDPR

During this notably volatile time for data privacy compliance, a company should utilize multiple transfer mechanisms, and beyond this, organizations would be wise to begin preparing to meet the stricter regulations of the GDPR. Updating transfer mechanisms in line with the GDPR is a time-consuming and expensive venture,[191] but it is the single best way to minimize risk during this volatile time. To do this, companies will want to work with data protection authorities and/or hire a data protection officer to revise their current Model Clauses or BCRs in line with what the GDPR expects. Further, companies should consider pursuing Codes of Conduct and Certification. These options allow for a certain level of flexibility and insulation from regulatory charges in a given country.[192] Additionally, the EU Commission specifically emphasized using these mechanisms.[193] Using these mechanisms may thus show an intention to act in line with the goals of the Commission and engender some goodwill. This is not to say that these options must be pursued, but at minimum, they should be considered and evaluated. Moreover, despite the criticisms of Model Clauses and BCRs, they can be viable options when drafted in compliance with the GDPR. What is most important here is that companies take the time to work with data protection officers or agencies to ensure that the mechanisms they plan to utilize are GDPR compliant.

In conclusion, depending on a company's data processing activities, Model Clauses, BCRs, informed consent, and/or Codes of Conduct/Certification may be utilized as viable transfer mechanisms if managed and developed in line with the stricter language of the GDPR. On the other hand, companies relying solely on the Privacy Shield, despite its questionable validity and the fragile state of EU-U.S. affairs, expose themselves to substantial risk, with potential fines of up to the greater of €20,000,000 or 4% of annual revenue. That being said, determining the best way to insulate any given company from the risks associated with volatile data privacy laws is incredibly difficult. The best thing a company can do to combat this difficulty is to understand what exactly the GDPR will demand and to prepare accordingly. In the meantime, companies can weather the storm, using their understanding of the GDPR to revise current policies to align with the stricter realities of the future. Ultimately, developing an understanding of the variety of options that can be used, employing different transfer mechanisms based on particular data transfer needs and data types, and being proactive will save a company substantial costs and significantly reduce its risk exposure.
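To make the scale of that exposure concrete, the minimal sketch below applies the greater-of formula described in note 55: the higher of €20,000,000 or 4% of total worldwide annual turnover of the preceding financial year. It is illustrative only; the function name and revenue figures are hypothetical, not drawn from any company discussed in this Note.

    def max_gdpr_fine_eur(annual_worldwide_turnover_eur: float) -> float:
        """Upper bound on a GDPR fine under the greater-of formula:
        EUR 20,000,000 or 4% of total worldwide annual turnover,
        whichever is higher. See GDPR art. 83 (cited in note 55)."""
        return max(20_000_000.0, 0.04 * annual_worldwide_turnover_eur)

    # Hypothetical examples:
    print(max_gdpr_fine_eur(100_000_000))    # 4% is EUR 4M, below the floor -> 20,000,000.0
    print(max_gdpr_fine_eur(5_000_000_000))  # 4% is EUR 200M, above the floor -> 200,000,000.0

In other words, for a firm with €5 billion in annual turnover the 4% prong controls, and the ceiling is €200 million; for smaller firms, the €20 million floor controls.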

 


[*] J.D. candidate, University of Southern California Gould School of Law, 2018. I am forever grateful to my best friend and fiancée, Venessa Simpson, for the endless love and support she has provided me throughout college and law school, and to my mom and dad, the most loving, caring, and supportive parents there are; you three are my inspiration and make me want to be a better person each and every day. Many thanks also to Professor Valerie Barreiro for your guidance and feedback during the note-writing process and to Jonathan Frimpong, Emily Arndt, and James Salzmann for your invaluable and much-needed feedback and editing expertise.

 [1]. Luke Harding, How Edward Snowden Went from Loyal NSA Contractor to Whistleblower, Guardian (Feb. 1, 2014, 6:00 A.M.), https://www.theguardian.com/world/2014/feb/01/edward-snowden-intelligence-leak-nsa-contractor-extract.

 [2]. Id.

 [3]. Id.

 [4]. Id.

 [5]. See Schrems v. Data Protection Commissioner, Electronic Privacy Info. Ctr. [hereinafter Schrems], https://epic.org/privacy/intl/schrems (last visited Nov. 15, 2017).

 [6]. See Harding, supra note 1.

 [7]. Id.

 [8]. Directive 95/46, of the European Parliament and of the Council of 24 October 1995 on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data, art. 1, 1995 O.J. (L 281) 31, 38 (EC) [hereinafter Directive 95/46/EC]. The Directive has since been replaced by the General Data Protection Regulation (“GDPR”). See Commission Regulation 2016/679, 2016 O.J. (L 119) 1 [hereinafter General Data Protection Regulation]. The GDPR will be addressed in depth in Part III of this Note.

 [9]. See Convention for the Protection of Human Rights and Fundamental Freedoms, art. 8, Nov. 4, 1950, 213 U.N.T.S. 221, 230.

 [10]. See McKay Cunningham, Complying with International Data Protection Law, 84 U. Cin. L. Rev. 421, 422 (2016).

 [11]. See id.

 [12]. See Commission Decision of 26 July 2000 Pursuant to Directive 95/46/EC of the European Parliament and of the Council on the Adequacy of the Protection Provided by the Safe Harbour Privacy Principles and Related Frequently Asked Questions Issued by the US Department of Commerce, art. 1, 2000 O.J. (L 215) 7, 8 [hereinafter Safe Harbor].

 [13]. Case C-362/14, Schrems v. Data Prot. Comm’r, ECLI:EU:C:2015:650, http://curia.europa.eu/juris/document/document.jsf?docid=169195&doclang=EN.

 [14]. The EU Model Clauses are also referred to as Standard Contractual Clauses. For convenience, the term “Model Clauses” will be used throughout this Note.

 [15]. See Article 29 Data Protection Working Party, Opinion 01/2016 on the EU-U.S. Privacy Shield Draft Adequacy Decision (2016) [hereinafter Opinion 01/2016], http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2016/wp238_en.pdf.

 [16]. Schrems, ECLI:EU:C:2015:650, ¶¶ 73–74, 96.

 [17]. U.S. Dep’t of Health, Educ. & Welfare, No. (OS) 73–94, Records, Computers, and the Rights of Citizens: Report of the Secretary’s Advisory Committee on Automated Personal Data Systems 41 (1973).

 [18]. See id.

 [19]. Privacy Act of 1974, Pub. L. No. 93-579, 88 Stat. 1896 (codified as amended at 5 U.S.C. § 552a (2012)).

 [20]. Robert Gellman, Fair Information Practices: A Basic History 5 (Apr. 10, 2017) (unpublished manuscript) (https://bobgellman.com/rg-docs/rg-FIPshistory.pdf).

 [21]. Gellman, supra note 20, at 5.

 [22]. Id. at 10. See also 6 U.S.C. § 142. For further discussion, see infra Part II.

 [23]. Gellman, supra note 20, at 6.

 [24]. Org. for Econ. Co-operation & Dev., Recommendation of the Council Concerning Guidelines on the Protection of Privacy and Transborder Flows of Personal Data (Sept. 23, 1980), reprinted in OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data 11 (2002).

 [25]. Id. at 14–16. As further proof of the enduring nature of these principles, the OECD reviewed the principles in 2013 in light of the changes over the past thirty years, choosing to maintain the eight principles in their original form. Org. for Econ. Co-operation & Dev., The OECD Privacy Framework 14–15 (2013), http://www.oecd.org/sti/ieconomy/oecd_privacy_framework.pdf.

 [26]. See Directive 95/46/EC, supra note 8, art. 1, at 38 (“In accordance with this Directive, Member States shall protect the fundamental rights and freedoms of natural persons, and in particular their right to privacy with respect to the processing of personal data.”).

 [27]. See General Data Protection Regulation, supra note 8, art. 5, at 35–36.

 [28]. See Safe Harbor, supra note 12, art. 1, at 8 (describing how companies that self-certify can comply with the Safe Harbor requirements).

 [29]. See Kelli Clark, The EU Safe Harbor Agreement Is Dead, Here’s What to Do About It, Forbes (Oct. 27, 2015, 3:30 P.M.), http://www.forbes.com/sites/riskmap/2015/10/27/the-eu-safe-harbor-agreement-is-dead-heres-what-to-do-about-it/#29a319fc7171.

 [30]. See id.

 [31]. See U.S. Dep’t of Commerce, U.S.-EU Safe Harbor Framework: Guide to Self-Certification 4–10 (2013), https://build.export.gov/build/groups/public/@eg_main/@safeharbor/documents/webcontent/eg_main_061613.pdf. See also Safe Harbor Fees, Export.gov, https://2016.export.gov/safeharbor/eg_main_020436.asp (last visited Oct. 15, 2017) (“An organization that is self-certifying its compliance with the U.S.-EU Safe Harbor Framework and/or the U.S.-Swiss Safe Harbor Framework for the first time on or after March 1, 2009 must remit a one-time processing fee of $200.00.”).

 [32]. See Directive 95/46/EC, supra note 8, art. 25, at 45–46.

 [33]. Case C-362/14, Schrems v. Data Prot. Comm’r, ECLI:EU:C:2015:650, ¶ 28, http://curia.europa.eu/juris/document/document.jsf?docid=169195&doclang=EN. See also Schrems, supra note 5.

 [34]. See Schrems v. Data Protection Comm’n [2014] IR 75, ¶ 29 (H. Ct.) (Ir.).

 [35]. Nora Ni Loidean, The End of Safe Harbor: Implications for EU Digital Privacy and Data Protection Law, 19 No. 8 J. Internet L. 1, 1, 9 (2016) (quoting Schrems, IR 75, ¶ 29).

 [36]. Schrems, IR 75, ¶ 69.

 [37]. See id. ¶ 71.

 [38]. See Case C-362/14, Schrems, ¶ 107.

 [39]. Id. ¶ 96.

 [40]. Id. ¶ 86.

 [41]. See Clark, supra note 29.

 [42]. See Opinion 01/2016, supra note 15.

 [43]. See European Commission Press Release IP/16/2461, European Commission Launches EU-U.S. Privacy Shield: Stronger Protection for Transatlantic Data Flows (Jul. 12, 2016), http://europa.eu/rapid/press-release_IP-16-2461_en.htm.

 [44]. Id.

 [45]. Loidean, supra note 35, at 7–12.

 [46]. See European Commission Press Release IP/16/2461, supra note 43.

 [47]. Id.

 [48]. European Commission Statement 16/1403, Joint Statement on the Final Adoption of the New EU Rules for Personal Data Protection (Apr. 14, 2016), http://europa.eu/rapid/press-release_STATEMENT-16-1403_en.htm.

 [49]. European Commission Memorandum 15/6385, Questions and Answers—Data Protection Reform (Dec. 21, 2015), http://europa.eu/rapid/press-release_MEMO-15-6385_en.htm.

 [50]. European Commission Press Release IP/16/2461, supra note 43.

 [51]. See Schrems, supra note 5; Tomaso Falchetta, New ‘Shield’, Old Problems, Privacy Int’l (July 7, 2016), https://www.privacyinternational.org/node/889.

 [52]. See Exec. Order No. 13,768, 82 Fed. Reg. 8799 (Jan. 25, 2017). See also infra Part IV.A.

 [53]. Schrems, supra note 5.             

 [54]. Id.

 [55]. See General Data Protection Regulation, supra note 8, art. 83, at 82–83. Fines can total up to €20,000,000 or up to 4 percent of the total worldwide annual turnover of the preceding financial year, whichever is higher. Id. at 83.

 [56]. Francoise Gilbert, EU General Data Protection Regulation: What Impact for Businesses Established Outside the European Union, 19 No. 11 J. Internet L., May 2016, at 3, 4–6.

 [57]. Overview on Binding Corporate Rules, Directorate General for Just. & Consumers, http://ec.europa.eu/justice/data-protection/international-transfers/binding-corporate-rules/index_en.htm (last visited Nov. 16, 2017).

 [58]. Id.

 [59]. Id.

 [60]. Id.

 [61]. Id.

 [62]. See Melinda L. McLellan & William W. Hellmuth, Safe Harbor is Dead, Long Live Standard Contractual Clauses?, Data Privacy Monitor (Oct. 22, 2015), https://www.dataprivacymonitor.com/enforcement/safe-harbor-is-dead-long-live-standard-contractual-clauses (summarizing best practices for the usage of Model Clauses following the invalidation of the Safe Harbor Framework by the CJEU).

 [63]. See id. See also Model Contracts for the Transfer of Personal Data to Third Countries, Directorate General for Just. & Consumers, http://ec.europa.eu/justice/data-protection/international-transfers/transfer/index_en.htm (last visited Nov. 16, 2017).

 [64]. Data Prot. Unit, Directorate Gen. for Justice and Consumers, Frequently Asked Questions Relating to Transfers of Personal Data from the EU/EEA to Third Countries 26–28 (2009), http://ec.europa.eu/justice/data-protection/international-transfers/files/international_transfers_faq.pdf.

 [65]. McLellan & Hellmuth, supra note 62.

 [66]. Practical Law Intellectual Prop. & Tech., Expert Q&A: EU-US Personal Information Data Transfers (2016), Westlaw W-000-8901.

 [67]. Id.; Data Prot. Unit, supra note 64, at 48.

 [68]. Cunningham, supra note 10, at 422.

 [69]. Directive 95/46/EC, supra note 8, art. 1, at 38.

 [70]. See generally Cunningham, supra note 10, at 422 (“Unlike in Europe, U.S. law does not recognize a fundamental right to privacy.”); Loidean, supra note 35, at 8 (stating that the United States has a framework that has “rejected the fundamental rights approach to information privacy”).

 [71]. Charter of Fundamental Rights of the European Union, art. 8, 2012 O.J. (C 326) 391, 397. Cf. Bradyn Fairclough, Privacy Piracy: The Shortcomings of the United States’ Data Privacy Regime and How to Fix It, 42 J. Corp. L. 461, 466 (2016) (discussing how in the United States this right is never explicitly stated in the Constitution, and it is only implied to be relevant in certain specific areas).

 [72]. Jörg Rehder & Erika C. Collins, The Legal Transfer of Employment-Related Data to Outside the European Union: Is It Even Still Possible?, 39 Int’l Law. 129, 130 (2005) (quoting Directive 95/46/EC, supra note 8, art. 1, at 38).

 [73]. Cunningham, supra note 10, at 426–27.

 [74]. Id.

 [75]. See Gellman, supra note 20, at 6–10.

 [76]. Cunningham, supra note 10, at 427.

 [77]. Directive 95/46/EC, supra note 8, art. 28, at 47–48.

 [78]. See id. art. 25, at 45–46.

 [79]. Cunningham, supra note 10, at 426–27.

 [80]. See id. at 426–27 (“The Directive set the international standard for data privacy and security regulation and facilitated a trend among technologically advanced countries toward adopting nationalized data privacy laws.”).

 [81]. See generally Rehder & Collins, supra note 72, at 132.

 [82]. Manu J. Sebastian, The European Union’s General Data Protection Regulation: How Will It Affect Non-EU Enterprises?, 31 Syracuse J. Sci & Tech. L. 216, 225–26 (2015).

 [83]. See id.

 [84]. See Cunningham, supra note 10, at 422; Fairclough, supra note 71, at 464–66; Loidean, supra note 35, at 8.

 [85].  W. Gregory Voss, The Future of Transatlantic Data Flows: Privacy Shield or Bust?, 19 No. 11 J. Internet L. 1, 1, 9 (2016). See also Julie Brill, Commissioner, Fed. Trade Comm’n, Keynote Address at the Amsterdam Privacy Conference, Transatlantic Privacy After Schrems: Time for an Honest Conversation (Oct. 23, 2015), 2015 WL 9684096.

 [86]. See Cunningham, supra note 10, at 422–26.

 [87]. See id.

 [88]. Martin A. Weiss & Kristin Archick, Cong. Research Serv., R44257, U.S.-EU Data Privacy: From Safe Harbor to Privacy Shield 3, 7 (2016).

 [89]. Gellman, supra note 20, at 10.

 [90]. Fairclough, supra note 71, at 463–66, 476.

 [91]. Gellman, supra note 20, at 19–20.

 [92]. See generally Rehder & Collins, supra note 72, at 131 (quoting David Scheer, Europe’s New High-Tech Role: Playing Privacy Cop to the World, Wall Street J., Oct. 10, 2003, at A1).

 [93]. Marsha Cope Huie et al., The Right to Privacy in Personal Data: The EU Prods the U.S. and Controversy Continues, 9 Tulsa J. Comp. & Int’l L. 391, 392 (2002).

 [94]. See generally Uri Friedman, Is Terrorism Getting Worse?, Atlantic (July 14, 2016), https://www.theatlantic.com/international/archive/2016/07/terrorism-isis-global-america/490352 (explaining the rise of terrorist attacks in the period from Operation Iraqi Freedom to the present).

 [95]. Harding, supra note 1, at 4–6.

 [96]. See Cunningham, supra note 10, at 423.

 [97]. Voss, supra note 85, at 10 (quoting Simon Davies, Privacy Opportunities and Challenges with Europe’s New Data Protection Regime, in Privacy in the Modern Age 55, 57 (Marc Rotenberg et al. eds., 2015)).

 [98]. White House, Privacy in our Digital Lives: Protecting Individuals and Promoting Innovation, 3–9, 12–14 (2017).

 [99]. Kate Kaye, New Privacy Report Already Removed from White House Site, Ad Age (Jan. 20, 2017), http://adage.com/article/privacy-and-regulation/privacy-report-removed-white-house-site/307632.

 [100]. See Exec. Order No. 13,768, 82 Fed. Reg. 8799 (Jan. 25, 2017).

 [101]. Cunningham, supra note 10, at 422.

 [102]. Id. at 423–24. See Gramm-Leach-Bliley Act, Pub. L. No. 106-102, 113 Stat. 1338 (1999) (codified as amended at scattered sections of 12 U.S.C. (2012)); Health Insurance Portability and Accountability Act of 1996, Pub. L. No. 104-191, 110 Stat. 1936 (codified as amended at scattered sections of 18 U.S.C., 26 U.S.C., 29 U.S.C., and 42 U.S.C.); Fair Credit Reporting Act, Pub. L. No. 91-508, 84 Stat. 1114-2 (1970) (codified at 15 U.S.C. § 1681).

 [103]. Brill, supra note 85, at 1 (quoting 15 U.S.C. § 45(a)).

 [104]. Loidean, supra note 35, at 8.

 [105]. Id.

 [106]. See Cunningham, supra note 10, at 423.

 [107]. Loidean, supra note 35, at 8.

 [108]. EU GDPR Portal, http://www.eugdpr.org (last visited Nov. 16, 2017).

 [109]. General Data Protection Regulation, supra note 8, at 1.

 [110]. Id.

 [111]. Gilbert, supra note 56, at 4.

 [112]. Id.

 [113]. European Commission Statement 16/1403, supra note 48.

 [114]. Gilbert, supra note 56, at 4.

 [115]. Lawful Processing, Info. Commissioner’s Off., https://ico.org.uk/for-organisations/data-protection-reform/overview-of-the-gdpr/key-areas-to-consider (last visited Nov. 16, 2017).

 [116]. General Data Protection Regulation, supra note 8, at 9.

 [117]. European Commission Memorandum 15/6385, supra note 49. There is a growing concern over data privacy associated with in-home connected devices and apps, such as Amazon’s Alexa, and health-tracking devices, like Fitbit. For further discussion, see Sarah Kellogg, Every Breath You Take: Data Privacy and Your Wearable Fitness Device, 72 J. Mo. B. 76, 78–81 (2016); Adam R. Pearlman & Erick S. Lee, National Security, Narcissism, Voyeurism, and Kyllo: How Intelligence Programs and Social Norms Are Affecting the Fourth Amendment, 2 Tex. A&M L. Rev. 719, 760–62 (2015).

 [118]. Individuals’ Rights, Info. Commissioner’s Off., https://ico.org.uk/for-organisations/data-protection-reform/overview-of-the-gdpr/individuals-rights (last visited Nov. 16, 2017).

 [119]. European Commission Memorandum 15/6385, supra note 49.

 [120]. General Data Protection Regulation, supra note 8, arts. 4, 7, at 34, 37. Consent is further discussed throughout the GDPR. See id., passim.

 [121]. See Gilbert, supra note 56, at 6–7. But see Cunningham, supra note 10, at 437–38.

 [122]. Sebastian, supra note 82, at 233.

 [123]. European Commission Memorandum 15/6385, supra note 49.

 [124]. Id. See also Ann Cavoukian, Privacy by Design: The 7 Foundational Principles (2011), https://www.iab.org/wp-content/IAB-uploads/2011/03/fred_carter.pdf.

 [125]. Sebastian, supra note 82, at 230.

 [126]. General Data Protection Regulation, supra note 8, art. 25, at 48.

 [127]. Accountability and Governance, Info. Commissioner’s Off., https://ico.org.uk/for-organisations/data-protection-reform/overview-of-the-gdpr/accountability-and-governance (last visited Nov. 16, 2017).

 [128]. Sebastian, supra note 82, at 231.

 [129]. Accountability and Governance, supra note 127.

 [130]. European Commission Memorandum 15/6385, supra note 49. To understand the potentially massive scope of these penalties, the fines that could be levied against Amazon and Google, based on their 2016 reported revenues, would be approximately $5.4 and $3.6 billion, respectively. Richard Stiennon, Unintended Consequences of the European Union’s GDPR, Forbes (Nov. 27, 2017, 6:26 P.M.), https://www.forbes.com/sites/richardstiennon/2017/11/27/unintended-consequences-of-the-european-unions-gdpr/#46aae406243c.

 [131]. Id.

 [132]. General Data Protection Regulation, supra note 8, art. 47, at 62–64.

 [133]. Gilbert, supra note 56, at 5 (stating that fewer than one hundred companies have sought to use BCRs, despite this option having been available for a decade).

 [134]. See Practical Law Intellectual Prop. & Tech, supra note 66.

 [135]. General Data Protection Regulation, supra note 8, arts. 46, 93, at 62, 86.

 [136]. Gilbert, supra note 56, at 4–5.

 [137]. See Directive 95/46/EC, supra note 8, arts. 21, 26, at 44, 46 (outlining the roles of member states in ensuring adequate protection for data transfers and the objections and limits that they may put in place). See also ULD Position Paper on the Judgment of the Court of Justice of the European Union of 6 October 2015, C-362/14 (Oct. 14, 2015), https://www.datenschutzzentrum.de/uploads/internationales/20151014_ULD-PositionPapier-on-CJEU_EN.pdf (arguing that Model Clauses are an inappropriate transfer mechanism for transfers to the United States, due to direct conflicts between U.S. law and the provisions in the Model Clauses).

 [138]. General Data Protection Regulation, supra note 8, arts. 92–93, at 85–86.

 [139]. Cunningham, supra note 10, at 438–40.

 [140]. General Data Protection Regulation, supra note 8, art. 40, at 56.

 [141]. See Directive 95/46/EC, supra note 8, arts. 25–26, 30, at 45–46, 48–49 (providing language regarding adequacy decisions).

 [142]. General Data Protection Regulation, supra note 8, art. 40, at 56.

 [143]. Gilbert, supra note 56, at 5.

 [144]. General Data Protection Regulation, supra note 8, art. 40, at 57.

 [145]. See generally id. art. 42, at 58–59.

 [146]. Compare id. with id. art. 40, at 56.

 [147]. Report from the Commission to the European Parliament and the Council on the First Annual Review of the Functioning of the EU–U.S. Privacy Shield, at 4, SWD (2017) 344 final (Oct. 18, 2017) [hereinafter Report on the First Annual Review]; Grant Gross, Tech Companies Like Privacy Shield but Worry About Legal Challenges, PCWorld (Dec. 21, 2016, 3:00 A.M.), http://www.pcworld.com/article/3152559/security/tech-companies-like-privacy-shield-but-worry-about-legal-challenges.html.

 [148]. Doron S. Goldstein et al., Understanding the EU-US “Privacy Shield” Data Transfer Framework, 20 No. 5 J. Internet L. 1, 1, 21 (2016).

 [149]. Privacy Shield Timeline, PrivacyTrust, https://www.privacytrust.com/privacyshield/privacy-shield-timeline.html (last visited Nov. 16, 2017).

 [150]. Reuters, Trump Election Ignites Fears over U.S. Encryption, Surveillance Policy, Fortune (Nov. 9, 2016), http://fortune.com/2016/11/09/trump-encryption-surveillance-policy.

 [151]. Yoni Heisler, A Comprehensive Look at All of Donald Trump’s Positions on Technology Issues, Boy Genius Rep. (Oct. 19, 2016, 10:53 A.M.), http://bgr.com/2016/10/19/donald-trump-politics-technology-opinions.

 [152]. See Exec. Order No. 13,768, 82 Fed. Reg. 8799 (Jan. 25, 2017).

 [153]. Jan Philipp Albrecht (@JanAlbrecht), Twitter (Jan. 26, 2017, 1:45 A.M.), https://twitter.com/JanAlbrecht/status/824553962678390784.

 [154]. Natasha Lomas, Trump Order Strips Privacy Rights from Non-U.S. Citizens, Could Nix EU-US Data Flows, TechCrunch (Jan. 26, 2017), https://techcrunch.com/2017/01/26/trump-order-strips-privacy-rights-from-non-u-s-citizens-could-nix-eu-us-data-flows.

 [155]. See Report on the First Annual Review, supra note 147, at 4. For additional discussion, see Kaye, supra note 99.

 [156]. European Commission Press Release IP/16/2461, supra note 43.

 [157]. See Commission Implementing Decision 2016/1250, 2016 O.J. (L 207) 1, 13–20 (EU).

 [158]. See id. at 28–29 (explaining that the ombudsperson is supposed to be independent from the U.S. intelligence agencies and is in charge of following up on complaints and enquiries from individuals regarding potential privacy violations).

 [159]. See id. at 27–29, 71.

 [160]. See Loyens & Loeff, Digital Rights Ireland Challenges EU-US “Privacy Shield,” Lexology (Nov. 4, 2016), http://www.lexology.com/library/detail.aspx?g=5055de04-e2d7-4b0b-9bbe-789a4a97b318; Reuters, French Privacy Groups Challenge the EU’s Personal Data Pact with U.S., Fortune (Nov. 2, 2016), http://fortune.com/2016/11/02/privacy-shield-pact-challenge.

 [161]. See Opinion 01/2016, supra note 15, at 9–14.

 [162]. See generally Voss, supra note 85 (discussing how the Privacy Shield came about and what it is meant to do).

 [163]. See Steven C. Bennett, EU Privacy Shield: Practical Implications for U.S. Litigation, 2 Prac. Law., Apr. 2016, at 60, 62–64.

 [164]. Goldstein et al., supra note 148, at 20 (discussing the Privacy Shield requirements and implications for participating organizations).

 [165]. See Cunningham, supra note 10, at 426–28; Gilbert, supra note 56, at 4–5.

 [166]. See ULD Position Paper, supra note 137.

 [167]. Id. at 4.

 [168]. See DSK Position Paper (Oct. 21, 2015), https://www.datenschutz-hamburg.de/fileadmin/user_upload/documents/DSK_position_paper_Safe-Harbor_2015-10-21.pdf.

 [169]. Matt Burgess, Facebook Privacy Case Is Making Its Way to the European Court of Justice, Wired (Sept. 13, 2016), http://www.wired.co.uk/article/facebook-privacy-eu-case-cjeu.

 [170]. Id.

 [171]. Darren Isaacs, Practical Strategies for Maintaining HR Data Flows from Europe to the US and Beyond—After the Schrems Case, ‘Safe Harbor 2.0’ and the Incoming Data Protection Regulation, 1 Emp. & Indus. Rel. L. 33, 33, 35 (2016).

 [172]. ULD Position Paper, supra note 137, at 4. See also Gross, supra note 147.

 [173]. See Commission Decision 2001/497/EC, app. 2, 2001 O.J. (L 181) 19, 22, 30 (EC).

 [174]. Overview on Binding Corporate Rules, supra note 57. See also Cunningham, supra note 10, at 439–40.

 [175]. See Cunningham, supra note 10, at 440; Gilbert, supra note 56, at 5.

 [176]. See Gilbert, supra note 56, at 5.

 [177]. Sebastian, supra note 82, at 242.

 [178]. DSK Position Paper, supra note 168, ¶ 2.

 [179]. Gilbert, supra note 56, at 5.

 [180]. See General Data Protection Regulation, supra note 8, art. 47, at 63.

 [181]. See id.

 [182]. European Commission Statement 16/1403, supra note 48.

 [183]. Alex Hickey, 6 Months to GDPR: What’s Next, CIO Dive (Nov. 28, 2017), https://www.ciodive.com/news/6-months-to-gdpr-whats-next/511761.

 [184]. See European Commission Statement 16/1403, supra note 48.

 [185]. See General Data Protection Regulation, supra note 8, passim.

 [186]. See id., arts. 4, 6–8, at 34, 36–38.

 [187]. See Isaacs, supra note 171, at 35.

 [188]. General Data Protection Regulation, supra note 8, art. 6, at 37.

 [189]. Gilbert, supra note 56, at 4.

 [190]. General Data Protection Regulation, supra note 8, at 6–7.

 [191]. See Pulse Survey: GDPR Budgets Top $10 Million for 40% of Surveyed Companies, PwC, https://www.pwc.com/us/en/increasing-it-effectiveness/publications/general-data-protection-regulation-gdpr-budgets.html (last visited Nov. 29, 2017) (finding that 40% of companies that have completed their GDPR preparations have spent more than $10 million).

 [192]. See General Data Protection Regulation, supra note 8, art. 40, at 56–58. See also Gilbert, supra note 56, at 3–5.

 [193]. See Gilbert, supra note 56, at 3–5.