What if you could transport your jury from a courtroom to the scene of a catastrophic event? . . . Imagine how much more empathy you would feel for the victim of a catastrophic collision if you were to experience the tragedy first-hand.
In the courtroom, oral presentations are increasingly being supplemented, and sometimes replaced, by digital technologies that provide legal practitioners with effective demonstrative capabilities. Improvements in the field of virtual reality (“VR”) are facilitating the creation of immersive environments in which a user’s senses and perceptions of the physical world can be completely replaced with virtual renderings. As courts, lawyers, and experts continue to grapple with evidentiary questions of admissibility posed by evolving technologies in the field of computer-generated evidence (“CGE”), the issues posed by the introduction of immersive virtual environments (“IVEs”) into the courtroom have, until recently, remained a largely theoretical discussion.
Though the widespread use of IVEs at trial has not yet occurred, research into the practical applications of these VR technologies in the courtroom is ongoing, with several studies having successfully integrated IVEs into mock scenarios. For example, in 2002, the Courtroom 21 Project (run by William & Mary Law School and the National Center for State Courts) hosted a lab trial in which a witness used an IVE. The issue in the case was whether a patient’s death was the result of the design of a cholesterol-removing stent or a surgeon’s error in implanting it upside down.
During the mock trial, a key defense witness who was present during the surgery donned a VR headset, which re-created the operating room, and then projected to the jury her view of the operation on a large screen as she reenacted her role in the surgery. The demonstration significantly reduced the credibility of the witness when it revealed that she could not possibly have seen the doctor’s hands or wrists.
In another experiment, Swiss researchers successfully used an Oculus Rift headset and Unity 3D software to render an IVE that made it possible for a viewer to assess how close bullets came to severely injuring a victim during a shooting. Using a laser scan of the crime scene, footage from a security camera overlooking the scene, and the final position of the projectiles, researchers were able to reconstruct the scene of the shooting to enable viewers to review the bullet trajectories, visibility, speed, and distance.
Similarly, the Bavarian State Criminal Office, which currently handles the prosecution of Nazi war criminals tied to the Holocaust, applied laser scanning technology to develop a VR model of the Auschwitz concentration camp. The model was recently adapted into an IVE for future use at trial, allowing jurors to examine the camp from almost any point of view.
As research continues and new applications of IVE technology are investigated, the use of VR technology is becoming increasingly mainstream and cost-effective, making it more practical to use an IVE in the courtroom. As such, early adopters in civil practice have announced plans to use IVEs at trial, while litigation support providers are beginning to advertise VR development services. The rising use among law enforcement departments of laser imaging software and body cameras, whose output can be converted into an IVE format for use at trial, also has significant potential to facilitate the rapid expansion of these technologies in criminal proceedings.
From the standpoint of a legal practitioner, the potential applications of IVEs at trial are numerous. As a form of evidence, IVEs have the potential to redefine the way in which litigators can re-create crime and accident scenes for the jury. Rather than having a jury watch a video rendering or review images after the fact, an IVE could allow jury members to witness an event firsthand—from any specific moment, angle, or viewpoint. As a demonstrative technology, an IVE can be easily adapted to depict eyewitness and expert testimony, explain highly technical concepts, or transport users into an interactive environment in any given scenario.
While some commentators have welcomed the arrival of IVEs in the courtroom as a natural progression and the next step in the technological development of visual media, others have argued that IVEs are fundamentally different from prior forms of evidence and warrant heightened caution due to their potential prejudicial effects on juries. This Note supports the latter position and, drawing on psychological research, ultimately argues for revising the procedures governing the admission of IVEs as demonstrative evidence.
Part I of this Note defines and distinguishes IVEs from other forms of VR and CGE. Part II compares the treatment of substantive and demonstrative evidence under the Federal Rules of Evidence and discusses the relevant evidentiary rules for the use of an IVE as an illustrative aid. Part III outlines applicable psychological and cognitive research and potential prejudicial effects on juries stemming from the employment of IVEs in a trial setting under the current rules. Part IV examines several cases in which computer-generated animations were subjected to lower evidentiary standards and raises further concerns in applying the current rules to an IVE. Part V explains the need for revisions to the procedures for admitting an IVE as demonstrative evidence and concludes by recommending new procedures which should be implemented prior to the proliferation of IVEs in the courtroom.
I. Distinguishing Immersive Virtual Environments
The term “virtual reality” is used in many contexts, and it is important to note the distinctions between VR technologies capable of facilitating IVEs, which are the subject of this Note, and other media for virtual environment (“VE”) interaction and display. Computer-generated VEs can be roughly grouped into three broad categories based on the level of user immersion: non-immersive (desktop), semi-immersive, and immersive virtual environments.
Non-immersive systems, which include Fish Tank and Desktop VR, are monitor-based VR systems in which users engage with the VE through a basic desktop display using stereoscopic lenses or an inherent autostereoscopic feature. These kinds of displays do not necessitate that the user wear a VR headset or glasses and typically do not surround the user visually. Semi-immersive systems employ similar technologies but use large-screen monitors, large-screen projector systems, or multiple television projection systems that increase the user’s field of view, thereby increasing the level of immersion.
Separate from these categories are mixed-reality, or augmented reality (“AR”), technologies that combine physical and virtual objects and align them with the real-world environment. AR environments create a local virtuality, which is mapped onto the physical environment around the user, rather than completely replacing the surrounding environment with a virtual one.
An IVE, by contrast, “perceptually surrounds the user.” This is accomplished with a combination of three-dimensional computer graphics, high-resolution stereoscopic projections, and motion tracking technologies that continually render virtual scenes to match the movements and viewpoint of the user. Through the use of a head-mounted display (“HMD”), sensory information from the physical world is replaced with the perception of a computer-generated, three-dimensional world in which the user is free to move and explore. In the context of an IVE, VR can therefore be understood to mean “a computer-generated display that allows or compels the user (or users) to have a feeling of being present in an environment other than the one they are actually in and to interact with that environment.”
The resulting sense of presence felt by the user is described as a function of an individual’s psychology, representing the degree to which that user experiences a conscious presence in the virtual setting. This effect on a user’s state of consciousness has been attributed to the unique vividness and interactivity of an IVE, which distinguishes IVEs from prior forms of CGE. This sense of consciousness created by an IVE also forms the basis for psychological concerns about potential risks of unfair prejudice in using an IVE at trial. However, prior to further discussion of the unique psychological issues raised by IVEs, it is important to understand how an IVE offered for use at trial would be evaluated under the current rules of evidence.
II. Immersive Virtual Environments and the Federal Rules of Evidence
As previously noted, at trial, an IVE could be applied by courtroom attorneys for presentations to the jury that recreate crime and accident scenes, illustrate highly technical procedures, and demonstrate eyewitness or expert testimony. The most practical method of applying an IVE in the courtroom would be for jurors to don individual HMDs during, or simultaneously with, live testimony.
Though the use of IVEs in the courtroom remains largely unprecedented, the process for addressing the question of an IVE’s use at trial will likely be similar to that used for other forms of visual media. At present, the Federal Rules of Evidence make no specific reference to any form of CGE, and therefore do not address the concept of an IVE. Yet, in the absence of legislative revision, it is fair to assume that the admissibility of IVE evidence will be evaluated under existing basic evidentiary rules, as well as the accompanying general principles that courts have developed for determining the admissibility of other forms of CGE.
As a form of visual media, an IVE would need to be classified as either demonstrative—also called illustrative—or substantive evidence. In the realm of CGE, courts have generally labeled 3-D renderings as either computer animations (typically treated as demonstrative evidence) or computer simulations (typically treated as substantive evidence). This classification is critical in determining the applicable foundational requirements, which vary due to the differing purposes for which the evidence is introduced.
Substantive evidence is offered by the proponent “to help establish a fact in issue.” Thus, a computer-generated simulation created through the application of scientific principles would be considered to have independent evidentiary value and therefore be evaluated as substantive evidence. If treated similarly, an IVE used to reconstruct the moment of a car accident, created through software that was programmed to analyze and draw conclusions from pre-existing data (such as calculations, eyewitness testimony, and so forth) would be considered substantive evidence.
One of the primary hurdles facing an IVE entered as substantive evidence at trial would be laying the foundation for its admission. Because of these foundational challenges, the primary method for introducing an IVE as substantive evidence at trial would likely be in a form accompanying expert testimony. This introduction could be done in several ways: as “part of the basis for expert opinion testimony, an illustrative aid to expert testimony, or a stand-alone exhibit introduced through the testimony of an expert involved in creating the IVE.” As substantive evidence, a testifying expert could draw conclusions about the accident based on the IVE simulation, and it might be admitted as an exhibit that would be made available to the jury for review in deliberations. In that case, however, both the expert who prepared the IVE and the underlying scientific principles and data used in its construction would be subject to validation.
Demonstrative evidence, in contrast, is defined as “physical evidence that one can see and inspect . . . and that, while of probative value and [usually] offered to clarify testimony, does not play a direct part in the incident in question.” In theory, then, demonstrative evidence serves merely to illustrate the verbal testimony of a witness and should not independently hold any probative value in the case. As such, visual aids introduced as demonstrative evidence are not typically allowed into jury deliberations and are not relied on as the basis for expert opinion. Because visual aids offered as demonstrative evidence are not formally admitted as exhibits, courts treat this kind of evidence more leniently than substantive evidence when evaluating its use at trial. An IVE presented as an illustrative aid to expert testimony, rather than as a basis for expert testimony or an independent exhibit, would therefore not be subject to the same level of scrutiny as substantive evidence.
Although these standards are significantly lower, an IVE offered as demonstrative evidence would still need to meet basic evidentiary standards of relevancy, fairness, and authentication. However, it is important to note that the extent to which these requirements would be enforced is a question of judicial discretion and ultimately rests with the presiding trial judge.
The initial inquiry into an IVE, regardless of the purpose for which it is offered, would determine whether it is relevant under Federal Rules of Evidence 401 and 402. Rule 401 would require that the IVE have a “tendency to make a fact more or less probable than it would be without the evidence” and be “of consequence in determining the action.” After a preliminary determination of relevancy, and absent any restrictions under Rule 402, a demonstrative IVE would also need to be authenticated under the guidelines of Rule 901.
Rule 901(a) states that to “satisfy the requirement of authenticating or identifying an item of evidence, the proponent must produce evidence sufficient to support a finding that the item is what the proponent claims it is.” With respect to computer-generated animations used as demonstrative evidence, the animation must “fairly and accurately reflect the underlying oral testimony . . . aid the jury’s understanding” and be authenticated by a witness. Thus, an animation used solely to illustrate witness testimony requires only that the witness testify that it was an accurate representation of the testimony and, in the case of an expert witness, that it would help the jury to understand the expert’s theory or opinion. Using the current method for computer-generated animations, a witness with personal knowledge of the event in question or an expert who had been made aware of the circumstances surrounding the event could simply testify that the IVE was a fair and accurate portrayal of the expert’s testimony.
Importantly, some commentators have posited that, as a newer technology, the foundational requirements imposed on an IVE could be higher than those required for existing forms of illustrative aid. This might necessitate that the proponent of an IVE meet some or all of the more difficult foundational hurdles—briefly mentioned above—regarding the use of scientific evidence. As with other questions of admissibility, however, this determination would be made by the trial judge, and the imposition of additional requirements more akin to those for substantive evidence should not be taken as a certainty. Though the underlying data in an IVE offered as demonstrative evidence would undoubtedly be challenged by an opposing party, similar challenges were made in the context of computer-generated animations and were rejected by the courts even during the earliest stages of that technology’s introduction into the legal system.
Regardless of the method ultimately used for authentication, and even after a finding of relevance under Rules 401 and 402, an IVE could still be excluded by the trial judge under the balancing test of Rule 403. Rule 403 states that “[t]he court may exclude relevant evidence if its probative value is substantially outweighed by a danger of one or more of the following: unfair prejudice, confusing the issues, misleading the jury, undue delay, wasting time, or needlessly presenting cumulative evidence.” These broad standards set out by Rule 403 are a result of the high level of subjectivity required in making an admissibility determination, which essentially dictates a case-by-case analysis. As such, decisions made by the trial judge pursuant to Rule 403 are largely exercises of discretion and are reviewed almost exclusively for abuse of discretion at the appellate level. Although a trial judge might exclude an IVE for any of the reasons listed under Rule 403, the distinct potential for unfair prejudice created by an IVE is the source of concern for much of the remaining discussion in this Note.
The Rule 403 advisory committee notes define unfair prejudice as “an undue tendency to suggest decision on an improper basis, commonly, though not necessarily, an emotional one.” Broadly speaking, decisions to exclude a piece of evidence for unfair prejudice can be broken down into two primary categories: emotionalism and misuse of evidence. Unfair prejudice caused by overreliance on emotion can be understood as evidence deemed to be “overly charged with appeal to this less rational side of human nature.” Though the goal of Rule 403 is not to exclude all forms of evidence that elicit emotional response, the aim of the trial judge is to moderate the extent to which this response occurs. Aside from emotional concerns, unfair prejudice also results when evidence is misused by the jury after being deemed “admissible for one purpose (or against one party) but not another.” The risk of misuse arises when there is a high likelihood “that the jury will mistakenly consider the evidence on a particular issue or against a particular party, even when properly instructed not to do so.”
In either case, it is necessary for the judge to evaluate whether the probative value of the evidence is substantially outweighed by the risk of a juror’s reliance on an improper basis. To do so, the judge must also take into consideration whether the risk can be remedied by issuing a limiting instruction. In making determinations about admissibility, however, it is important for a judge to understand the unique psychological factors implicated by the use of an IVE. Without that understanding, a judge may reach a decision that appears well-founded on the surface but ultimately fails to consider the full extent of the risks posed by the use of an IVE. In the next Part, I will discuss several psychological and cognitive factors that should be weighed when determining the admissibility of an IVE as demonstrative evidence.
III. Potential Prejudicial Impacts of Immersive Virtual Environments on Jury Decisionmaking
A. Designing Emotion in a Virtual Environment
As discussed in Part I, the element of presence in an IVE distinguishes this form of presentation from other forms of CGE. The concept of presence can be understood to manifest itself in a VE in three ways: via social presence, physical presence, and self presence. This Note is primarily concerned with the latter two. Self presence has been defined as “a psychological state in which virtual (para-authentic or artificial) self/selves are experienced as the actual self in either sensory or nonsensory ways.” Similarly, physical presence has been explained as “a psychological state in which virtual (para-authentic or artificial) physical objects are experienced as actual physical objects in either sensory or nonsensory ways.” Reported experiences of both user self and physical presence in IVEs have led researchers to examine the ways in which IVEs influence user emotion, empathy, and embodiment, each of which will be addressed in turn below.
While research into the effects of IVEs on user emotion remains an active area for experimentation and debate, initial studies have shown significant links between user presence in an IVE and stimulated emotion. One particular area of research has focused on the impact of emotional content in VEs and the relationship between user feelings of presence and actual user emotion. The basic premise behind this type of research follows the logic that “if a dark and scary real-life environment elicits anxiety, so will a corresponding VE if the user experiences presence in it.”
Following this theory, studies have been conducted involving mood induction procedures (“MIPs”), in which VEs have been intentionally designed to provoke specific emotional states. For example, one such study presented participants with three different virtual park scenarios using an HMD with head tracking software and an accompanying joystick to facilitate movement. The three park renderings shared the same virtual structure and objects (for example, trees, lamps, and so forth), but the developers manipulated the sound, music, shadows, lights, and textures with the purpose of inducing either anxiety or relaxation in users. The third park served as a neutral control that was not designed to induce any emotion. Participants were assessed for emotional predisposition prior to the study, and they answered questionnaires regarding emotion and presence throughout the study. The results showed significant variability in user happiness and sadness depending on which park the participant experienced. The anxious park, which contained darker imagery and shadows, reduced user happiness and positive affect, while increasing feelings of sadness and anxiety. In contrast, the relaxing park, which contained brighter imagery, increased user quietness and happiness, while reducing anger, sadness, anxiety, and negative affect. The neutral park, however, did not elicit significant measurable changes.
Building on the same research, a more recent study exposed participants to different virtual park scenarios intentionally designed to elicit one of five specific affective states: joy, anger, boredom, anxiety, and sadness. Effects on participants’ emotional reactions were measured through both physiological responses (monitoring electrodermal activity) and self-reporting. Based on these measures, researchers found they were able to induce the intended emotions in almost all cases and that they could elicit different emotional states by applying only slight changes to the lighting conditions or sounds in the VE. These results provide further support for the notion that VEs may be specifically designed to induce intended emotional states through various MIPs and alterations to the design elements of a virtual scenario.
In addition to studies on inducing emotional states, others have examined the effects of IVEs on user empathy. As previously noted, a fundamental difference between traditional CGE and IVEs is the form of presentation. Any time an image is rendered on a screen, there is a possibility that a viewer will interpret the image objectively because it appears without a human operator (who would be viewed as a subjective party). Yet, in a traditional CGE display, the physical surroundings of the courtroom remain within the perspective of the viewer, and the animation or simulation playing on the screen often retains a fixed camera viewpoint. In contrast, through an IVE, the user can effectively take on the role of any specific actor or third-party observer in any given scenario.
A recent study examining the influence of a user’s point of view on his or her assessment of vehicle speed and culpability in a computer animated car crash sequence demonstrates this effect. Participants were presented with three separate animations of a two-car collision from different points of view: overhead (behind and above Car 1), internal (inside Car 1), and facing (looking directly at Car 1). They were then asked to fill out a questionnaire which involved apportioning blame to either Car 1 or Car 2. The study results demonstrated substantial differences in overall culpability assessments depending on the participant’s point of view, with participants apportioning 92% of the blame to Car 1 from the facing position, but only 43% from the overhead view and 34% from the internal view. Though the study acknowledged limitations on ecological validity, the results were in line with Feigenson and Dunn’s hypothesis that small changes and manipulations to an observer’s point of view in a computer-generated animation may “have various legally significant effects.”
In another study, participants were divided into four groups in a 2 x 2 design based on level of immersion and user personality traits. Participants then watched a documentary news series through VR-content-based or flat-screen-based technologies, depending on the immersion group. The study found that presence in the VE positively influenced both empathy and embodiment—meaning that users in a higher immersion setting were more likely to feel a sense of compassion for the subjects of the news story. Importantly, the authors of the study urged that immersion in a VE should be recharacterized “as a cognitive dimension alongside consciousness, awareness, understanding, empathizing, embodying, and contextualizing” rather than as a strong stimulus for facilitating illusion. In other words, instead of viewing IVE technology as an illustrative aid in storytelling, it should be viewed as a factor influencing user cognition in reasoning through a proposed narrative.
Based on current findings in both areas of research, and despite ongoing debate regarding specific limitations and the interplay between these factors in a VE, the potential for an IVE to be purposefully designed to elicit user emotions and empathy appears to exist. While relying on emotion and empathy in our day-to-day decisionmaking can be an ecologically valid tool of assessment, in the courtroom—an intentionally hermetically sealed universe—it poses a distinct risk of unintended prejudicial effects. Murtha v. City of Hartford provides an example of how these potential effects might be implicated in the trial setting. In 2006, Connecticut Police Officer Robert Murtha was acquitted on all charges relating to his shooting of a suspect who was evading police in a stolen car. During the pursuit, the car stalled in snow on the side of the road. As Murtha left his cruiser and approached the car, the suspect attempted to reenter the road and speed off. Murtha fired multiple shots into the driver’s side window that injured the fleeing driver. Dashcam footage from another police cruiser positioned behind Murtha showed him chasing the vehicle and firing into the car as it sped off.
At trial, Murtha argued that his use of deadly force was justified as an act of self-defense because, at the time, he believed that the car was headed towards him. Murtha presented the jury with a hybrid of the dashcam footage and a computer-generated animation to illustrate his point of view. As the driver begins to pull onto the road, the original video freezes and an interspliced animation rotates the field of view from the live-action shot to a recreation of Murtha’s first-person perspective. Comparing the original footage to the animation, there are some clear discrepancies: (1) the car reenters the road at a sharper angle; (2) Murtha is placed partially within the path of the car and his gun is already drawn and extended; (3) as the car begins to drive off, Murtha moves slowly alongside the car while firing instead of running. However, over the prosecutor’s objections as to the inaccuracy of the animation, the judge determined that the video was a fair and accurate depiction of Murtha’s recollection and issued a limiting instruction that the animation was not meant to depict a precise reenactment.
In creating a computer-generated display, a designer’s decision to provide one viewpoint over another “can potentially alter which ‘character’ in an evidence presentation a viewer identifies with, or aligns themselves with.” Through the animation in Murtha, the jury effectively took on the role of the officer in the shooting. Putting any discrepancies in the animation aside, placing the jurors in the shoes of the officer alone created the potential for unfair prejudice resulting from actor-observer bias. If the same animation in Murtha were presented in the form of an IVE, the additional factor of user presence would further complicate this potential. Based on the above studies, an IVE can be intentionally designed to elicit, or even unintentionally cause, a user to feel strong emotions, empathy, and overall self-alignment, which would significantly magnify the risk of unfair prejudice. Though these potential sources of prejudice may not ultimately have been grounds for reversal in Murtha, they should be recognized as important factors when addressing the question of prejudicial effects in an IVE.
B. Body Ownership Illusions
When an IVE user feels strongly about another person’s emotions or circumstances in a VE, this can translate into a cognitive feeling of embodiment. Thus, in addition to increasing user emotion and empathy through presence, the virtual body experienced by the user can begin to feel like a cognitively generated analog of the user’s biological body. As a result, the user-tracking technologies used to facilitate an IVE carry a unique potential to produce body ownership illusions (“BOIs”). BOIs are created when non-bodily objects (like a virtual projection or prosthetic limb) are experienced as part of the body through a perceived association with bodily sensations such as touch or movement. Botvinick and Cohen introduced the concept of BOIs through an experiment with a rubber hand. Participants in the original experiment had their hands concealed, and a rubber hand with a similar posture was placed in front of them. An experimenter then stroked both the real and rubber hands simultaneously, causing the majority of participants to report feeling that the rubber hand was a part of their own body. This phenomenon, termed the rubber hand illusion, was later shown to activate areas of the brain “associated with anxiety and interoceptive awareness” when “the fake limb is under threat and at a similar level as when the real hand is threatened.” Thus, participants in one study reacted with anticipation of pain, empathic pain, and anxiety when experimenters occasionally threatened a rubber hand with a needle while participants were under the effects of a BOI.
Subsequent experiments have also tested the extent to which certain multisensory factors are necessary to induce BOIs. While the original experiment involved a visuotactile cue (where participants experienced a combination of visual stimulation and physical contact), further experiments have induced BOIs solely through visuomotor input. Visuomotor stimulation involves participants performing active or passive movements while simultaneously seeing the artificial body (or body part) perform the same movements. Most significantly, this phenomenon has been shown to occur in VEs.
For example, in one study, experimenters outfitted participants with an HMD and a hand-tracking data glove and asked them to focus on the movement of a virtually projected right arm, which moved synchronously with the actions of their real right arm, hand, and fingers. The participants’ real right arm was located approximately twenty centimeters away from the virtual projection. Participants were then asked to use their left arm, which was not tracked or projected, to point to their right arm. Participants largely tended to point to the virtual hand rather than their real hand, in some cases even after the virtual simulation had ended. The results were consistent with prior studies involving the rubber hand illusion and showed that the illusion of ownership could occur as a result of visuomotor synchrony in movements between the real and virtual hand.
Additional studies of BOIs in VR have led to consistent findings that VEs can produce these effects when corresponding body parts are moved synchronously. These studies have found BOIs resulting from the synchronous movement of virtual legs, upper bodies, and even full bodies.
In an IVE, the occurrence of BOIs as a result of visuomotor stimulation has significant implications as a potential source of unfair prejudice. Beyond the concern that user emotion and empathy in an IVE might cause a juror to sympathize more with a party whose perspective he or she shares, BOIs introduce a separate issue: synchrony between a juror's movements and those of an actor perceived in an IVE could cause the juror to temporarily feel as if he or she were that person. While some psychological studies have highlighted benefits of inducing BOIs through VR in the courtroom, for example the potential for reducing racial biases, the risk of unfair prejudice is also exceptionally high. From the standpoint of emotional prejudice, BOIs created through an IVE can both cause the viewer to feel anxious or threatened in a scenario and ultimately lead him or her to identify with the avatar. For example, if the animation in Murtha were presented through an IVE (with jurors wearing an HMD and data gloves), the jurors could feel as if the car were coming toward their own bodies, eliciting fear or anxiety through an apprehension of contact. Moreover, this vivid and emotional experience could cause a juror to disregard conflicting pallid evidence in the case as to the car's trajectory or the sequence of events and unduly rely on the IVE, despite its being used merely as a representation of the propounding party's or witness's theory of the case.
IV. Problems with the Current Rules for Demonstrative Computer-Generated Evidence
A. Case Studies
When jurors are subjected to an IVE, both presence and the phenomenon of BOIs create a unique potential for unfair prejudice. Yet even though IVEs are uniquely immersive and extremely vivid, when introduced as demonstrative evidence they could remain subject to surprisingly low evidentiary standards. While the rules presented in Part II may at face value appear to impose a significant burden on the proponent of an IVE, as stated previously, the characterization of an IVE as substantive or demonstrative and the broad discretion afforded to trial judges can significantly lower the bar for admitting an IVE at trial. The treatment of CGE in the following cases illustrates the more lenient approach applied in many jurisdictions when dealing with demonstrative evidence.
In Commonwealth v. Serge, a defendant found guilty of first-degree murder for killing his wife appealed the State's use of a computer-generated animation as demonstrative evidence. The animation—introduced to illustrate the expert testimony of a forensic pathologist and a crime scene reconstructionist—purported to show the manner in which the defendant shot his wife. Prior to admitting the animation, the trial court required that it be authenticated as a fair and accurate depiction of the testimony and that any potentially inflammatory material be excluded. The trial court also issued a lengthy jury instruction cautioning that the animation was a demonstrative exhibit offered for the sole purpose of illustrating expert testimony and warning the jury not to "confuse art with reality." The defendant challenged the animation as unfairly prejudicial and improperly authenticated under Pennsylvania Rule of Evidence 901(a), arguing that the depictions were unsupported by the record or the accompanying expert opinions. The Pennsylvania Supreme Court found both that the animation was a proper depiction of the witness testimony and that the limiting instruction and the lack of dramatic elements in the animation were sufficient to eliminate any concerns over prejudice. The court affirmed the admissibility of the animation, holding that it properly satisfied the basic requirements of Pennsylvania Rules of Evidence 401, 402, 403, and 901.
More recently, in a Utah case—State v. Perea—a defendant convicted of two counts of aggravated murder and two counts of attempted murder appealed his sentence, arguing, in part, that computer-generated animations, excluded by the district court, were sufficiently authenticated under Utah Rule of Evidence 901(a). At trial, the defendant attempted to introduce two animations to visually represent the testimony of a crime scene reconstruction expert. The expert testified that “although he did not personally create the animations, they ‘g[a]ve an indication of what [he] believe[d] may have happened,’” making it easier for the jury to understand his testimony. The State objected for lack of foundation and on the grounds that the animations did not accurately represent the facts, because under the State’s theory there was only one shooter. Reversing the ruling of the district court, the Utah Supreme Court held that despite a lack of knowledge about the creation of the animation on the part of the testifying expert, Rule 901 “does not require that the demonstrative evidence be uncontroversial, but only that it accurately represents what its proponent claims.” The district court’s exclusion was an error because the crime scene reconstruction expert confirmed that the animations accurately represented his interpretation of the facts.
In both cases, the computer-generated animations were deemed relevant under Rules 401 and 402, were properly authenticated under Rule 901, and passed the balancing test of Rule 403. However, in neither case were the proponents of the animations obligated to meet foundational requirements beyond an assertion that the animation "fairly and accurately" depicted the testimony of the witnesses—despite the fact that the animations were constructed solely from witness testimony about their memories of the event. Additionally, both courts found that the trial court's issuance of limiting instructions to the jury was sufficient to combat any prejudicial effects. On examination, the courts' analyses contain multiple flaws, which would be further complicated if IVEs were at issue.
B. Issues with the Courts’ Analyses
First, in creating computer-generated representations of a witness’s testimony, “[n]o matter how much evidence exists, there is never enough to fill in every detail necessary. . . . The expert (or the animator) must make assumptions to fill in the blanks.” In Serge, as in Murtha, the animators took significant liberties in creating the animation. By placing a knife next to the victim and dressing the defendant’s character in red plaid, the animators made decisions that were not necessarily supported by the physical evidence but were nonetheless authenticated by the accompanying witness’s memory or an expert’s theory as to what happened.
Like an animation, the creation of an IVE inevitably involves choices by a designer regarding not only what is perceived, but also how it is perceived. Without proper safeguards or consideration, a party at trial could ostensibly introduce an IVE for demonstrative purposes that appeared to the trial judge to be sufficiently limited in emotional content but was designed using mood induction procedures (“MIPs”) to subtly influence jury attitudes toward a given scenario. For example, in arguing a self-defense claim, a party could ask the designers of an IVE to select color palettes and illumination levels more likely to elicit fear and anxiety. As explained in Part III, even subtle or indirect changes to factors such as lighting, point of view, level of interactivity, or synchrony of movement can have significant psychological implications for users of an IVE. Yet none of these factors figure in the current analysis for demonstrative CGE in many jurisdictions.
Second, it seems clear that in combatting highly vivid demonstrative evidence, “the opponent of the animation should be allowed [on cross-examination] to demonstrate to the jury that the . . . animation [is] based, at least partially, on assumptions and conjectures, and not on purely objective, scientific factual determinations.” Yet, under the current standards for demonstrative CGE, many jurisdictions do not require the testifying witness to have personal knowledge regarding the creation of the animation. In Perea, for example, the animation was admitted despite the accompanying witness possessing no information about the creation of the animation. A similar decision by a trial judge to admit an IVE as demonstrative evidence, without an accompanying witness having knowledge about the decisions or assumptions made in creating the IVE, would likewise significantly disadvantage an opponent in combatting its highly vivid qualities through cross-examination.
Third, both courts relied heavily on jury instructions to moderate the potential prejudicial impacts of the animations on the jury. Though the general rule is to assume that juries will abide by limiting instructions, the Supreme Court has previously recognized that “there are some contexts in which the risk that the jury will not, or cannot, follow instructions is so great . . . that the practical and human limitations of the jury system cannot be ignored.” Moreover, research in the field of social psychology has “repeatedly demonstrated that . . . limiting instructions are unsuccessful at controlling jurors’ cognitive processes.” While this does not necessitate the presumption that all jury instructions are ineffective, it does call into question whether a jury subjected to the highly vivid and unique psychological effects of an IVE might have trouble following a judge’s directions as to the permissible and impermissible purposes for its use.
In anticipation of the onset of IVEs in the courtroom, this Note proposes several changes to the current standards for admissibility, as well as judicial guidelines for best practice in moderating the prejudicial impacts of IVEs.
A. Stricter Foundational Requirements
Though it would be impractical to develop a “one-size-fits-all” method for dealing with the numerous potential contexts and purposes for which an IVE might be offered as demonstrative evidence, uniformly increasing the foundational requirements for admitting demonstrative IVEs would help combat some of the potential sources of prejudice.
In State v. Swinton, the Connecticut Supreme Court recognized the need for changes in the rules governing demonstrative evidence with regard to evolving computer technologies. Addressing the binary distinction of the courts between computer animations and computer simulations, the court recognized that there are some kinds of evidence which do “not fall cleanly within either category.” Though Swinton addressed the enhancement of photographs through Adobe Photoshop, the court’s discussion is particularly applicable in relation to an IVE. The court found that “the difference between presenting evidence and creating evidence was blurred” and endorsed a previously established general rule requiring that in all cases involving CGE there be “testimony by a person with some degree of computer expertise, who has sufficient knowledge to be examined and cross-examined about the functioning of the computer.” In addition, the court went one step further in setting out factors with which the expert should be familiar and which could be weighed in determining the reliability of, and adherence to, procedural requirements.
Adopting the court’s logic, this Note recommends that, as a basic requirement, an expert who prepared the IVE be present at trial to testify regarding his or her qualifications and the underlying processes used to create the IVE. This would ensure that the opposing party has the opportunity to cross-examine the expert regarding the underlying data and assumptions used in the IVE’s creation. In continued recognition of the differences between substantive and demonstrative evidence, this would not require that the proponent satisfy all of the requirements for admitting scientific evidence under Rule 702 (and the Daubert or Frye tests). However, it would at least afford the opposing party the opportunity to cross-examine someone with personal knowledge of the IVE technology and its creation.
B. Evaluating and Limiting Prejudicial Effects
While establishing an adequate foundation by requiring the presence of an informed expert works to combat some of the unfairness stemming from unreliability and misuse of evidence under the current demonstrative standards, this alone is insufficient to curb the full range of potential prejudice. In addition to raising the foundational requirements, there are several factors that a judge should consider in conducting the Rule 403 balancing test. In addressing the potential for jurors’ unfair reliance on an IVE, consideration of the factors identified in Part III—chiefly the role of presence and BOIs—should be a necessary predicate to admission. This would require judges to scrutinize not only the design factors in an IVE, but also the level of interactivity it facilitates.
Interestingly, beyond mere consideration of such factors, it may also be possible for judges to take affirmative steps to impose limitations on an IVE that could help mitigate juror overreliance. As this Note has repeatedly stated, the source of many of the potentials for prejudice created by IVEs is their unique vividness and interactivity, which produce feelings of presence and body ownership in the user. Both psychological research on presence and BOI studies indicate that there may be ways to limit, reduce, or remove feelings of presence and ownership in a VE. Such phenomena, termed “breaks in presence” (“BIPs”), occur when the user’s feelings of ownership or consciousness within the VE are disrupted by perceived virtual or real-world interferences.
Under Rule 611(a), judges have broad authority to regulate the admission of demonstrative evidence. As such, judges could potentially use BIPs to mitigate the prejudicial effects of an IVE. Multiple studies have concluded that BOIs occur in VEs only when the movements depicted are relatively synchronous. Because of this, “[w]hen there is asynchrony the illusion does not occur.” With this knowledge, a judge would have the option to instruct the proponent of an IVE to increase the latency (delay) between the movements of the juror and the avatar, thereby reducing the likelihood that a BOI would occur. In another study, researchers found that replacing a perceived limb with a virtual arrow indicator similarly reduced the BOI phenomenon. Thus, an alternative option might be to instruct the proponent to limit the realistic qualities of the avatar by replacing human features with indicators. Naturally, as further studies are completed and the concepts of presence and ownership in VEs become better understood, so too will the options available to judges for imposing limitations.
As the drafters of the Federal Rules of Evidence recognized, it is difficult to define bright-line admissibility rules. Despite these difficulties, the current treatment of demonstrative evidence in many jurisdictions does not properly accommodate IVEs. Though it may seem contrary to logic that an IVE could be treated like a chart or graph in the courtroom, under current standards this might very well become the case in some jurisdictions. This author agrees that “every new development is eligible for a first day in court”; however, we as a legal community should be cognizant of the differences between past and emerging technologies and of the potential prejudicial risks newer technologies may pose. It is inevitable that IVEs will continue to make their way into the courtroom, but they should not proceed unchecked. The proposed increase in authentication requirements, along with the suggested factors for judges to weigh in evaluating and moderating the use of IVEs, is but an initial step in integrating IVEs for courtroom use. Thus, it remains essential that further psychological and cognitive studies be conducted on the use of IVEs in the courtroom.
[*] Senior Submissions Editor, Southern California Law Review, Volume 92; J.D. Candidate 2019, University of Southern California Gould School of Law; B.A. 2015, University of California, Riverside. My sincere gratitude to Professor Dan Simon for his guidance and the editors of the Southern California Law Review for their excellent work. I would also like to thank my parents, Pamela and Robert Bunker, for their unwavering support and encouragement.
. Damian Schofield, The Use of Computer Generated Imagery in Legal Proceedings, 13 Digital Evidence & Electronic Signature L. Rev. 3, 3 (2016). Some commentators have attributed the increase in use of computer-generated evidence (“CGE”) in the courtroom to three primary factors: (1) we have become a more visual society; (2) people retain much more of what they see than what they hear; and (3) technological advancements and decreasing costs are making this form of evidence more affordable for clients. See Mary C. Kelly & Jack N. Bernstein, Comment, Virtual Reality: The Reality of Getting It Admitted, 13 John Marshall J. Computer & Info. L. 145, 148–50 (1994).
. Carrie Leonetti & Jeremy Bailenson, High-Tech View: The Use of Immersive Virtual Environments in Jury Trials, 93 Marq. L. Rev. 1073, 1073 (2010).
. Compare Betsy S. Fielder, Are Your Eyes Deceiving You?: The Evidentiary Crisis Regarding the Admissibility of Computer Generated Evidence, 48 N.Y.L. Sch. L. Rev. 295 (2003) (discussing potential problems posed by the use of CGE), and Gareth Norris, Computer-Generated Exhibits, the Use and Abuse of Animations in Legal Proceedings, 40 Brief 10 (2011) (weighing the pros and cons of computer-generated animations in the courtroom), with Fred Galves, Where the Not-So-Wild Things Are: Computers in the Courtroom, the Federal Rules Of Evidence, and the Need for Institutional Reform and More Judicial Acceptance, 13 Harv. J.L. & Tech. 161 (2000) (arguing that computer-generated animations are akin to earlier forms of demonstrative media and should be introduced into the courtroom under existing standards).
. See, e.g., Juries ‘Could Enter Virtual Crime Scenes’ Following Research, BBC (May 24, 2016), http://www.bbc.com/news/uk-england-stoke-staffordshire-36363172 (reporting on a £140,000 European Commission grant to the Staffordshire University project for research and experiments on technology and techniques to transport jurors to virtual crime scenes).
. Fredric I. Lederer, The Courtroom 21 Project: Creating the Courtroom of the Twenty-First Century, 43 Judges’ J., Winter 2004, at 39, 42.
. Lars C. Ebert et al., The Forensic Holodeck: An Immersive Display for Forensic Crime Scene Reconstructions, 10 Forensic Sci. Med. Pathology 623, 624–26 (2014).
. Id. A similar virtual reality (“VR”) reconstruction was developed in the United States by Emblematic Group in 2012 using audio files of 911 calls, witness testimony, and architectural drawings to re-create the events of the widely publicized Trayvon Martin shooting. Emblematic Group, One Dark Night-Emblematic Group VR, YouTube (May 9, 2015), https://www.youtube.com/watch?v=1hW7WcwdnEg. It is also offered for download in the Google Play and Steam stores. See Mike McPhate, California Today: In Virtual Reality, Investigating the Trayvon Martin Case, N.Y. Times (Feb. 24, 2017), https://nyti.ms/2mflo8f (interviewing one of the creators).
. Although the immersive virtual environment (“IVE”) version has not yet been used at trial, the same 3-D model was previously utilized in the prosecution of wartime SS camp guard Reinhold Hanning to help assert his point of view from his post at a watchtower in the camp. Id.
. Basic VR headsets can be purchased for under $100 (for example, Google Cardboard and Samsung Gear VR), with more high-end headsets costing around $600 (for example, Oculus Rift and HTC Vive). See John Gaudiosi, Over 200 Million VR Headsets to Be Sold by 2020, Fortune (Jan. 21, 2016), http://fortune.com/2016/01/21/200-million-vr-headsets-2020; see also Stevi Rex, Global Virtual Reality Industry to Reach $7.2 Billion in Revenues in 2017, Greenlight Insights (Apr. 11, 2017), https://greenlightinsights.com/virtual-reality-industry-report-7b-2017 (forecasting global VR product sales to reach $7.2 billion by the end of 2017).
. See, e.g., Lamber Goodnow Legal Team Brings Virtual Reality Technology to the Courtroom, PR Newswire (Jan. 27, 2017), https://www.prnewswire.com/news-releases/lamber-goodnow-legal-team-brings-virtual-reality-technology-to-the-courtroom-300397710.html (reporting on an Arizona personal injury firm advertising the use of VR in pending cases) (“In the old days, I’d use demonstrative exhibits, visual aids and witness statements in an attempt to ‘transport a jury to an accident scene.’ With virtual reality, not only can I transport jurors to the accident scene, I can put them in the car at impact.”).
. See, e.g., High Impact Bringing Virtual Reality to the Courtroom, supra note 1.
. See Nsikan Akpan, How Cops Used Virtual Reality to Recreate Tamir Rice, San Bernardino Shootings, PBS News Hour (Jan. 13, 2016, 5:00 PM), https://www.pbs.org/newshour/science/virtual-reality-tamir-rice-3d-laser-scans-shootings-san-bernardino (discussing law enforcement agencies’ use of laser scanners at crime scenes and current projects to convert these kinds of scans for use with VR headsets) (“That’s what I see coming. We’re going to be putting these goggles on juries and say look around and tell me what you see.”). For more on various types of 3-D laser scanning devices employed by law enforcement in the United States, including use with drone technologies, see Robert Galvin, Capture the Crime Scene, Officer (Jul. 19, 2017), https://www.officer.com/investigations/article
. See Jeremy N. Bailenson et al., Courtroom Applications of Virtual Environments, Immersive Virtual Environments, and Collaborative Virtual Environments, 28 Law & Pol’y 249, 255–58 (2006).
. Leonetti & Bailenson, supra note 3, at 1076.
. See Bailenson et al., supra note 17, at 258–60.
. Leonetti & Bailenson, supra note 3, at 1118.
. Caitlin O. Young, Note, Employing Virtual Reality Technology at Trial: New Issues Posed by Rapid Technological Advances and Their Effects on Jurors’ Search for “The Truth,” 93 Tex. L. Rev. 257, 258 (2014).
. For further explanation of the concept of immersion in virtual environments (“VEs”), see Mel Slater & Sylvia Wilbur, A Framework for Immersive Virtual Environments (FIVE): Speculations on the Role of Presence in Virtual Environments, 6 Presence 603, 604–05 (1997) (“Immersion is a description of a technology, and describes the extent to which the computer displays are capable of delivering an inclusive, extensive, surrounding and vivid illusion of reality to the senses of a human participant.” (emphasis in original)).
. See Frank Stenicke et al., Interscopic User Interface Concepts for Fish Tank Virtual Reality Systems, in 2007 IEEE Virtual Reality Conference 27, 27–28 (2007).
. George Robertson et al., Immersion in Desktop Virtual Reality, in Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology 11, 11 (1997); see also Stenicke et al., supra note 24, at 27. Modern-day desktop VR examples can be seen in video games, like the Call of Duty franchise, where users control their in-game avatars through a handheld controller or mouse/keyboard interface. These kinds of video games can be played from both first-person and third-person perspectives and computer-generated animations are rendered on a monitor (primarily via television and computer screens).
. Stenicke et al., supra note 24, at 27.
. See D.W.F. van Krevelen & R. Poelman, A Survey of Augmented Reality Technologies, Applications and Limitations, 9 Int’l J. Virtual Reality, no. 2, 2010, at 1, 1.
. Id. A popular example of this type of technology can be seen in Niantic’s Pokémon Go, which was released for mobile devices in July 2016. The game utilizes a user’s phone or tablet camera (which depicts the surrounding physical environment) and overlays virtual animations of monsters onto the camera image. Users can interact with the monsters through the touch-screen interface, and the user’s real-world movements are tracked using the device’s GPS services. See Pokémon Go, https://support.pokemongo.nianticlabs.com/hc/en-us (last visited Dec. 28, 2018).
. See Bailenson et al., supra note 17, at 251.
. Id. at 250–53, 259.
. An alternative configuration is a Cave Automatic Virtual Environment (“CAVE”) where the user moves in a room surrounded by rear-projection screens. The user, wearing stereoscopic glasses instead of a head-mounted display (“HMD”), is tracked through an electromagnetic device and updated visual images are reflected on the screens. See id. at 253.
. Ralph Schroeder, Social Interaction in Virtual Environments: Key Issues, Common Themes, and a Framework for Research, in The Social Life of Avatars 1, 2 (2002) (citation omitted).
. For a comprehensive overview of studies on user feelings of presence in IVEs, see generally James J. Cummings & Jeremy N. Bailenson, How Immersive Is Enough? A Meta-Analysis of the Effect of Immersive Technology on User Presence, 19 Media Psychol. 272 (2016) (analyzing meta data collected from eighty-three studies on immersive system technology and user experiences of presence).
. Id. at 274. Of the factors relating to the level of user presence, “results show that increased levels of user-tracking, the use of stereoscopic visuals, and wider fields of view of visual displays are significantly more impactful than improvements to most other immersive system features, including quality of visual and auditory content.” Id. at 272.
. Neal Feigenson, Too Real? The Future of Virtual Reality Evidence, 28 Law & Pol’y 271, 273 (2006). Vividness means the extent to which the display forms a “sensorially rich environment,” and interactivity results from the ability of the user to “influence the form or content of the mediated environment.” Id.
. See Young, supra note 21, at 261.
. See infra Part III.
. Leonetti & Bailenson, supra note 3, at 1077.
. See generally Fed. R. Evid.
. Feigenson, supra note 36, at 276.
. See generally Laura Wilkinson Smalley, Establishing Foundation to Admit Computer-Generated Evidence as Demonstrative or Substantive Evidence, 57 Am. Juris. Proof of Facts 3d 455 (Westlaw 2018) (providing an overview of the various legal foundations for CGE’s admission into evidence).
. Karen L. Campbell et al., Avatar in the Courtroom: Is 3D Technology Ready for Primetime?, 63 Fed’n Def. & Corp. Counsel Q. 295, 296 (2013).
. Substantive Evidence, Black’s Law Dictionary (10th ed. 2014).
. Kurtis A. Kemper, Admissibility of Computer-Generated Animation, 111 A.L.R. 5th 529, § 2 (2003).
. Leonetti & Bailenson, supra note 3, at 1098–99 (“The impediments that a proponent of an IVE would face, under Rule 403, the best evidence rule, or Rule 901, are chiefly matters of foundation, i.e., the admissibility of an IVE turns on whether the proponent could establish its accuracy, reliability, and authenticity.”).
. Id. For example, a blood spatter analyst could use a recreation of the crime scene to explain her findings.
. Id. at 1099 (footnotes omitted). For a comprehensive view of potential courtroom and pre-trial IVE applications, see generally Bailenson et al., supra note 17.
. Leonetti & Bailenson, supra note 3, at 1099.
. Campbell et al., supra note 43, at 299. Thus, requiring a sufficient showing of:
(1) the qualifications of the expert who prepared the simulation and (2) the capability and reliability of the computer hardware and software used to create the simulation . . . [that] (3) the calculations and processing of data were done on the basis of principles meeting the standards for scientific evidence under Rule 702; (4) the data used to make the calculations were reliable, relevant, complete, and input properly; and (5) the process produced an accurate result.
. Demonstrative Evidence, Black’s Law Dictionary (10th ed. 2014).
. I. Neel Chatterjee, Admitting Computer Animations: More Caution and New Approach Are Needed, 62 Def. Couns. J. 36, 37 (1995).
. Smalley, supra note 42, § 8.
. Despite the fact that an IVE would utilize computer programming to create the illustrative aid, the separate treatment of an IVE as demonstrative or substantive evidence would not depend on whether VR technology was employed to achieve the rendering. See Galves, supra note 4, at 228 (“Although demonstrative animations use programs in design, the substantive result they create is based on the witness’s testimony rather than numerical calculations and other underlying input data.”).
. Feigenson, supra note 36, at 276. Although demonstrative evidence is not technically “evidence” in the context of the Federal Rules, standards of relevance, fairness, and authentication are still enforced by courts in weighing the admissibility of demonstrative evidence through analogy. Id.
. See Fed. R. Evid. 611(a). “The court should exercise reasonable control over the mode and order of examining witnesses and presenting evidence so as to: (1) make those procedures effective for determining the truth; (2) avoid wasting time; and (3) protect witnesses from harassment or undue embarrassment.” Id.
. See Fed. R. Evid. 402. “Relevant evidence is admissible unless any of the following provides otherwise: the United States Constitution; a federal statute; these rules; or other rules prescribed by the Supreme Court. Irrelevant evidence is not admissible.” Id.
. See Fed. R. Evid. 901(a).
. Chatterjee, supra note 55, at 37.
. Smalley, supra note 42, § 9.
. See, e.g., Gosser v. Commonwealth, 31 S.W.3d 897, 903 (Ky. 2000) (“[B]ecause a computer-generated diagram, like any diagram, is merely illustrative of a witness’s testimony, its admission normally does not depend on testimony as to how the diagram was prepared, e.g., how the data was gathered or inputted into the computer.”), abrogated on other grounds by Elery v. Commonwealth, 368 S.W.3d 78 (Ky. 2012).
. See Fed. R. Evid. 901(b)(1). Significantly, this would include a re-creation of a scene or accident based on the personal knowledge of a sponsoring witness. See Leonetti & Bailenson, supra note 3, at 1098.
. See Feigenson, supra note 36, at 277.
. Campbell et al., supra note 43, at 299.
. Though, as argued in Part V, subjecting all IVE evidence to more substantive standards could have a moderating effect on some of the concerns raised in Part III.
. See, e.g., People v. McHugh, 476 N.Y.S.2d 721, 722–23 (Sup. Ct. 1984) (rejecting a motion for a pre-trial Frye hearing despite no prior instances of computer-generated animations being used at trial) (“While this appears to be the first time such a graphic computer presentation has been offered at a criminal trial, every new development is eligible for a first day in court.”); see also People v. Hood, 62 Cal. Rptr. 2d 137, 140 (Ct. App. 1997) (holding that the Kelly formulation for “new scientific procedures” does not apply to computer-generated animations when introduced as demonstrative evidence).
. See Fed. R. Evid. 403.
. Christopher B. Mueller & Laird C. Kirkpatrick, Federal Evidence § 4:12 (4th ed. 2013) (“Much depends on surrounding facts, circumstances, issues, the conduct of trial, and the evidence adduced already and expected as proceedings move forward.”).
. Fed. R. Evid. 403, advisory committee’s notes to 1972 proposed rules.
. Mueller & Kirkpatrick, supra note 75, § 4:13.
[E]vidence is unfairly prejudicial in the sense of being too emotional if it is best characterized as sensational or shocking; if it provokes anger, inflames passions, or if it arouses overwhelmingly sympathetic reactions; provokes hostility or revulsion; arouses punitive impulses; or appeals to emotion in ways that seem likely to overpower reason.
Id. (footnotes omitted).
. Id.; see, e.g., United States v. Brown, 490 F.2d 758, 764 (D.C. Cir. 1973) (“Despite a limiting instruction to the effect that the evidence is to be considered solely on the issue of the declarant’s state of mind (the proper purpose), there is the ever-present danger that the jury will be unwilling or unable to so confine itself.”).
. Fed. R. Evid. 403, advisory committee’s notes to 1972 proposed rules.
. See Kwan Min Lee, Presence, Explicated, 14 Comm. Theory 27, 42 (2004). Though important to the study of co-presence and other social phenomena experienced in an IVE, social presence falls outside the scope of this Note. Social presence pertains to the way in which virtually rendered social actors are experienced as actual social actors by a user and is an important concept for understanding feelings of co-presence between multiple users in a VE. For more on social presence, see id. at 45.
. Julia Diemer et al., The Impact of Perception and Presence on Emotional Reactions: A Review of Research in Virtual Reality, 6 Frontiers Psychol., Jan. 2015, at 1.
. See R.M. Baños et al., Immersion and Emotion: Their Impact on the Sense of Presence, 7 CyberPsychology & Behav. 734, 735 (2004); see also Rosa M. Baños et al., Presence and Emotions in Virtual Environments: The Influence of Stereoscopy, 11 CyberPsychology & Behav. 1, 2–3 (2008).
. Anna Felnhofer et al., Is Virtual Reality Emotionally Arousing? Investigating Five Emotion Inducing Virtual Park Scenarios, 48 Int’l J. Hum.-Computer Stud. 48, 49 (2015) (citation omitted).
. For a seminal text on psychological laboratory designs for mood induction procedures, see generally Maryanne Martin, On the Induction of Mood, 10 Clinical Psychol. Rev. 669 (1990).
. Giuseppe Riva et al., Affective Interactions Using Virtual Reality: The Link Between Presence and Emotions, 10 CyberPsychology & Behav. 45, 46–47 (2007).
. See Felnhofer et al., supra note 89, at 50.
. Id. at 54. Interestingly, in contrast to these findings, an experiment using a desktop VR system to assess whether simulated illumination levels could affect users’ affective appraisals of a VE failed to yield any measurable results. See Alexander Toet et al., Is a Dark Virtual Environment Scary?, 12 CyberPsychology & Behav. 363, 363 (2009). This suggests that, lacking the interactivity of an immersive environment, these kinds of systems may not pose the same risk as an IVE of strongly influencing user emotion through design. See id.
. See generally, e.g., Donghee Shin, Empathy and Embodied Experience in Virtual Environment: To What Extent Can Virtual Reality Stimulate Empathy and Embodied Experience?, 78 Computers Hum. Behav. 64 (2017).
. Schofield, supra note 2, at 13.
. See Gareth Norris, The Influence of Angle of View on Perceptions of Culpability and Vehicle Speed for a Computer-Generated Animation of a Road Traffic Accident, 20 Psychiatry, Psychol. & L. 248, 252–53 (2013).
. Id. at 252 (citation omitted).
. Id. (“By experiencing a virtual version of the story location as a witness/participant, and by feeling the perspective of a character depicted in the story, users received specialized access to the sights and sounds (and even to the feelings and emotions) associated with the story.”).
. Id. at 71. Interestingly, the study also found that, despite higher levels of immersion, users with a lower empathy trait reported lower levels of embodiment and empathy—suggesting that the disposition of certain users may correlate with their empathy within a virtual world. Id. at 69.
. Id. at 69 (“VR developers propose immersion but users process it.”).
. See State v. Murtha, CR03-0568598T (Conn. Super. Ct., JD Hartford, 2006); see also Neal Feigenson & Christina Spiesel, Law on Display: The Digital Transformation of Legal Persuasion and Judgment 92–103 (2009) (discussing the case in detail).
. Feigenson & Spiesel, supra note 114, at 92.
. Id.; see also NYU Press, Law on Display – Murtha Video, Part One, YouTube (Sept. 23, 2009), https://youtu.be/kWMyBg6Zt-o (showing the original police footage); NYU Press, Law on Display – Murtha Video, Part 2, YouTube (Sept. 23, 2009), https://youtu.be/J0kd-vv9DeM (showing the edited footage with the animation used at trial).
. Feigenson & Spiesel, supra note 114, at 97.
. Schofield, supra note 2, at 13.
. Feigenson & Spiesel, supra note 114, at 251 n.113.
. See Konstantina Kilteni et al., The Sense of Embodiment in Virtual Reality, 21 Presence 373, 381–82 (2012).
. Natalie Salmanowitz, Unconventional Methods for a Traditional Setting: The Use of Virtual Reality to Reduce Implicit Racial Bias in the Courtroom, 15 U.N.H. L. Rev. 117, 141 (2016) (“Instead of simply personifying an animated character in a digital game, immersive virtual environments can induce body ownership illusions, in which individuals temporarily feel as though another person’s body part is in fact their own.”).
. Konstantina Kilteni et al., Over My Fake Body: Body Ownership Illusions for Studying the Multisensory Basis of Own-Body Perception, Frontiers Hum. Neuroscience, Mar. 2015, at 1, 2.
. Matthew Botvinick & Jonathan Cohen, Rubber Hands ‘Feel’ Touch that Eyes See, 391 Nature 756, 756 (1998).
. Kilteni et al., supra note 128, at 4.
. See generally H. Henrik Ehrsson et al., Threatening a Rubber Hand that You Feel Is Yours Elicits a Cortical Anxiety Response, 104 Proc. Nat’l Acad. Sci. U.S. 9828 (2007).
. See, e.g., Kilteni et al., supra note 128, at 3.
. Id. at 5 (“[I]n spite of the fact that they saw the virtual hand move, did not feel their hand move, nor move it, they still blindly pointed towards the virtual hand when asked to point where they felt their hand to be.”).
. See, e.g., Kilteni et al., supra note 128, at 9.
. See Elena Kokkinara & Mel Slater, Measuring the Effects Through Time of the Influence of Visuomotor and Visuotactile Synchronous Stimulation on a Virtual Body Ownership Illusion, 43 Perception 43, 56 (2014) (“The results provide evidence that congruent multisensory and sensorimotor feedback between the unseen real and the seen virtual legs can induce sensations that the seen legs are part of the actual body.”).
. See Konstantina Kilteni et al., Drumming in Immersive Virtual Reality: The Body Shapes the Way We Play, 19 IEEE Transactions on Visualization & Computer Graphics 597, 599, 603 (2013) (“Seeing a virtual body from first person perspective, and receiving spatiotemporally congruent multisensory and sensorimotor feedback with respect to the physical body entails an illusion of ownership over that virtual body.”).
. See Domna Banakou et al., Illusory Ownership of a Virtual Child Body Causes Overestimation of Object Sizes and Implicit Attitude Changes, 110 Proc. Nat’l Acad. Sci. 12846, 12849 (2013) (“[I]t is possible to generate a subjective illusion of ownership with respect to a virtual body that represents a child and a scaled-down adult of the same size when there is real-time synchronous movement between the real and virtual body.”); see also Tabitha C. Peck et al., Putting Yourself in the Skin of a Black Avatar Reduces Implicit Racial Bias, 22 Consciousness & Cognition 779, 786 (2013) (“IVR can be used to generate an illusion of body ownership through first person perspective of a virtual body that substitutes their own body. . . . [M]ultisensory feedback, such as visuomotor synchrony as used in our experiment, may heighten this illusion.”).
. See Peck et al., supra note 144, at 786 (finding that embodiment of light-skinned people in darker-skinned avatars can lead to comparative reductions in implicit racial bias).
. Ehrsson et al., supra note 132, at 9830.
. See Peck et al., supra note 144, at 786.
. While the following cases are taken from the Pennsylvania and Utah Supreme Courts respectively, the applicable rules of evidence are substantially identical to the Federal Rules. See Pa. R. Evid. 403 cmt. (“Pa.R.E. 403 eliminates the word ‘substantially’ to conform the text of the rule more closely to Pennsylvania law.”); see also Pa. R. Evid. 901(a) cmt. (“Pa.R.E. 901(a) is identical to F.R.E. 901(a)”); Utah R. Evid. 901(a), 2011 advisory committee note (noting that the Utah rule is “the federal rule, verbatim.”); Utah R. Evid. 403, 2011 advisory committee note (same). For a general overview and survey of the treatment of computer animations at both the state and federal level, see generally Victoria Webster & Fred E. (Trey) Bourn III, The Use of Computer-Generated Animations and Simulations at Trial, 83 Def. Couns. J. 439 (2016).
. Commonwealth v. Serge, 896 A.2d 1170, 1176 (Pa. 2006).
. Id. at 1187. Notably, the animation was devoid of any “(1) sounds; (2) facial expressions; (3) evocative or even life-like movements; (4) transition between the scenes to suggest a story line or add a subconscious prejudicial effect; or (5) evidence of injury such as blood or other wounds.” Id. at 1183.
. State v. Perea, 322 P.3d 624, 635–36 (Utah 2013).
. Id. at 635 (alterations in original).
. Id. at 635–37. As the court recounted:
[t]he State objected and the district court refused to admit the animations, finding that “there [was] no foundation for the animation[s]” because Mr. Gaskill did not know “who created [them],” “the background of the people who created [them],” “how [they were] created,” or “what [the animators] relied upon in creating [them].”
. David S. Santee, More than Words: Rethinking the Role of Modern Demonstrative Evidence, 52 Santa Clara L. Rev. 105, 135 (2012).
. See id. at 136, 136 n.180, 140.
. In thinking about the effect of lighting, one cannot help but remember the first televised Nixon–Kennedy debate, in which Richard Nixon refused makeup for the studio camera lighting, instead applying a cheap “coat of [drugstore] Lazy Shave to hide his five o’clock shadow.” Dan Gunderman, The Story of the First TV Presidential Debate Between Nixon and Kennedy—‘My God, They’ve Embalmed Him Before He Even Died’, N.Y. Daily News (Sept. 24, 2016, 4:25 AM), http://www.nydailynews.com/news/politics/story-televised-debate-nixon-jfk-article-1.2803277. Interestingly, most viewers who listened on the radio felt Nixon had prevailed, while those watching the televised debate overwhelmingly favored Kennedy, who had subtly applied powder. See id.
. See supra Part III.
. See Webster & Bourn, supra note 148, at 441–42.
. John Selbak, Comment, Digital Litigation: The Prejudicial Effects of Computer-Generated Animation in the Courtroom, 9 High Tech. L.J. 337, 366 (1994).
. See Webster & Bourn, supra note 148, at 441–42.
. See State v. Perea, 322 P.3d 624, 635–36 (Utah 2013).
. As previously mentioned, the Advisory Committee’s Notes to Rule 403 advise federal courts to rely on jury instructions to attempt to limit prejudice. Fed. R. Evid. 403, advisory committee’s notes to 1972 proposed rules. At the federal level, most jurisdictions rely on jury instructions that essentially include the following:
(1) an admonition that the jury is not to give the animation or simulation more weight just because it comes from a computer; (2) a statement clarifying that the exhibit is based on the supporting witness’s evaluation of the evidence; and, (3) in the case of an animation, a statement that the evidence is not meant to be an exact recreation of the event, but is, instead, a representation of the witness’s testimony.
Webster & Bourn, supra note 148, at 442.
. Bruton v. United States, 391 U.S. 123, 135 (1968) (“Unless we proceed on the basis that the jury will follow the court’s instructions where those instructions are clear and the circumstances are such that the jury can reasonably be expected to follow them, the jury system makes little sense.” (citation omitted)). But see Krulewitch v. United States, 336 U.S. 440, 453 (1949) (Jackson, J., concurring) (“The naive assumption that prejudicial effects can be overcome by instructions to the jury . . . all practicing lawyers know to be unmitigated fiction.”).
. Bruton, 391 U.S. at 135.
. Joel D. Lieberman & Jamie Arndt, Understanding the Limits of Limiting Instructions, 6 Psychol., Pub. Pol’y & L. 677, 686 (2000).
. See State v. Swinton, 847 A.2d 921, 945–46 (Conn. 2004).
. Id. at 938 (emphasis omitted).
. Id. at 942 (citation omitted).
. Id. at 942–43. These procedural factors included:
(1) the underlying information itself; (2) entering the information into the computer; (3) the computer hardware; (4) the computer software (the programs or instructions that tell the computer what to do); (5) the execution of the instructions, which transforms the information in some way—for example, by calculating numbers, sorting names, or storing information and retrieving it later; (6) the output (the information as produced by the computer in a useful form, such as a printout of tax return information, a transcript of a recorded conversation, or an animated graphics simulation); (7) the security system that is used to control access to the computer; and (8) user errors, which may arise at any stage.
Id. (citation omitted).
. See Fed. R. Evid. 702.
. This would avoid a situation like that in Perea, where the witness could not speak to the design of the accompanying computer-generated exhibit beyond asserting that it was a fair and accurate depiction of their testimony. See State v. Perea, 322 P.3d 624, 637 (Utah 2013).
. See Cummings & Bailenson, supra note 34, at 273.
. See Mel Slater & Anthony Steed, A Virtual Presence Counter, 9 Presence: Teleoperators & Virtual Environments 413, 426 (2000) (measuring the occurrence of user breaks in presence (“BIPs”) using an HMD); see also Sanchez-Vives et al., supra note 137, at 5. Kokkinara and Slater likewise found that:
[T]he analysis of breaks suggest that asynchronous [visuotactile] may be discounted when synchronous [visuomotor] cues are provided. . . . [W]e can predict a high or low estimated probability of the illusion solely from knowing which [visuomotor] group (synchronous or asynchronous) the person was in . . . asynchronous [visuotactile] stimulation combined with asynchronous [visuomotor] stimulation is shown to be incompatible with the illusion.
Kokkinara & Slater, supra note 142, at 56.
. For a further explanation of BIPs, see generally Maria V. Sanchez-Vives & Mel Slater, From Presence to Consciousness Through Virtual Reality, 6 Nature Reviews Neuroscience 332 (2005).
. Take, for example, when a person is deeply engrossed in watching a movie:
Every so often . . . some real world event, or some event within the movie itself, will occur that will throw you out of this state of absorption and back to the real world of the theatre: someone nearby unwraps a sweet wrapper, someone coughs, some aspect of the storyline becomes especially ridiculous, and so on.
Slater & Steed, supra note 183, at 419.
. See Fed. R. Evid. 611(a), advisory committee’s notes to proposed rule (describing the broad powers of the judge to regulate demonstrative evidence).
. See, e.g., Sanchez-Vives et al., supra note 137, at 2.
. See Ye Yuan & Anthony Steed, Is the Rubber Hand Illusion Induced by Immersive Virtual Reality?, in 2010 IEEE Virtual Reality Conference 95, 101 (2010) (“[T]he IVR arm ownership illusion appears to exist when the virtual arm roughly appears in shape and animation like the participant’s own arm, but not when there is a virtual arrow.”).
. See Mueller & Kirkpatrick, supra note 75.
. People v. McHugh, 476 N.Y.S.2d 721, 722 (Sup. Ct. 1984).