Prosecutors offer Facebook posts to show that a gang leader “green lighted” the hatchet killing of a homeless man for “snitching” on him.1 A plaintiff in an Internet stalking case offers the hundreds of abusive emails she received from anonymous senders after spurning the defendant’s advances.2 The government secures a conviction for illegal firearm possession by offering Facebook photos of the defendant with a .45 caliber pistol—but no physical evidence.3
These cases illustrate how social media evidence has become an important feature of modern trial practice, just as it affects how we shop, work, eat, vote, watch TV, and interact with one another. We can summon and use social media virtually instantly with smartphones—devices the Supreme Court recently called “almost a feature of human anatomy.”4 Given social media’s pervasiveness in our culture, and the frequency with which people use it compared to other forms of communication, social media evidence is a broader and deeper trove of courtroom evidence than has ever been available before. At the same time, however, social media evidence is uniquely vulnerable to alteration or forgery, particularly as advances in technology allow so-called “bot” accounts to create social media content autonomously.5
A NEW FRONTIER BRINGS NEW CHALLENGES
Offering instant messages, tweets, and social media posts of all types at trial is now commonplace. Such evidence can be useful, for example, to prove a party’s mental state or to prove that someone was in a given place at a given time—like on a ski slope days after an alleged injury.6 Even before trial, social media may provide strategic value—for instance, if a plaintiff’s statements on product-review forums contradict the allegations in a consumer class action complaint—that could potentially help a defendant secure pretrial dismissal.
But while social media has improved our ability to tell the jury “what really happened,” it also creates new challenges for how that story can be told. The jury cannot see evidence unless it is authenticated and admitted. Federal Rule of Evidence 901(a) (and numerous state analogs) requires the proponent of evidence to “produce evidence sufficient to support a finding that the item is what the proponent claims it is.” This standard imposes a relatively low bar, requiring “[o]nly a prima facie showing of genuineness . . . ; the task of deciding the evidence’s true authenticity and probative value is left to the jury.”7 Compared to a voicemail, a letter, or even an email, however, authenticating social media evidence can be challenging due to “the ease with which a social media account may be falsified or a legitimate account may be accessed by an imposter.”8 Thus, lawyers must lay a foundation that addresses the “concern that someone other than the alleged author may have accessed the account and posted the message in question.”9
Courts sometimes disagree on what must be shown to satisfy this concern. Some impose a relatively high bar, requiring the proponent to all but eliminate the possibility of phony authorship.10 Others hold that social media evidence is just like any other type of evidence,11 requiring only the introduction of facts from which a reasonable juror could find that the evidence was created by the purported author. We submit that the permissive approach aligns better with the text of Rule 901 and is thus correct.12 Rule 901(a) requires only a preliminary showing that the evidence is what the proponent claims; this “does not require . . . rul[ing] out all possibilities inconsistent with authenticity.”13 Evidence that an imposter created the content might be a basis for admitting the evidence conditionally under Rule 104(b) or for excluding it under Rule 403, but it should not affect whether Rule 901’s threshold for authentication can be met.14 Once the proponent presents enough evidence for a reasonable juror to find that the author was who the proponent asserts, evidence suggesting otherwise may affect the weight the jury gives the evidence but should not impact its admissibility.15
Even so, some courts continue to apply the more stringent approach.16 For example, in United States v. Vayner, the U.S. Court of Appeals for the Second Circuit reversed a district court’s decision to admit screenshots from a social media profile that contained the defendant’s name, photo, and work history.17 Vayner holds that merely presenting evidence that a post came from a particular user’s account is insufficient to authenticate the post as actually coming from that user.18 Regardless of which approach is correct, lawyers cannot take for granted that courts will rule in their favor on evidentiary issues—particularly those involving complex technology and novel evidence in the heat of trial, amid numerous other evidentiary motions and objections.
AUTHENTICATING SOCIAL MEDIA EVIDENCE AT TRIAL
Lawyers offering social media evidence at trial should be prepared to “over-authenticate” their evidence by laying a foundation that, if possible, substantially eliminates the possibility that an imposter created the content. If a witness will admit to authoring a post or owning a social media profile, and can lay a foundation supporting that admission, then the proponent’s work should be done.19 But in criminal cases (and even some civil ones), the Fifth Amendment may make this type of testimony unavailable if the witness believes that providing it could be self-incriminating. Regardless, adverse witnesses often will simply be unwilling to admit that they created a post or even that they remember doing so. Authentication of social media evidence should thus rely on foundational testimony about three topics: (1) circumstantial evidence of authorship or account creation, (2) how the evidence was identified and verified (i.e., “chain of custody”), and (3) how the social media platform itself provides the evidence with indicia of reliability. Below we suggest how a proponent can lay a foundation on each of these three topics.
1. Circumstantial evidence of authenticity
Witnesses can testify from personal knowledge about “contextual clues in the communication tending to reveal the identity of the sender.”20 This is the type of testimony that Rule 901(b) contemplates for circumstantially authenticating any type of communication. Consider the following lines of questioning:
Does the evidence contain information—photos, friends, locations, etc.—that is consistent with a witness’s testimony about the asserted author or about how that person writes, speaks, or behaves? For instance, in Allen v. Zonis, an Internet stalking case in which one of the authors of this article was appellate counsel, the plaintiff testified that the writing style in abusive emails she received from anonymous senders matched that of messages the defendant had sent her previously.21 Also, in Burgess v. State, a Myspace account bearing the name “Oops” was properly authenticated through an officer’s testimony that he had confirmed with the defendant’s sister that the defendant’s nickname was “Oops.”22
Have witnesses previously communicated with the asserted author using this profile? In Allen, the plaintiff’s authenticating testimony included the fact that she received the anonymous, threatening messages at an email account that only the defendant had ever used to communicate with her. This illustrates how linking a previously used communication channel with the purported author can be an effective means of establishing genuine authorship.
Does the post include a username that is consistent with posts on other platforms that are more readily linked to the asserted author? For example, even if a Facebook page contains no photos or uses a false name, witness testimony that the same name appears on other social media platforms containing visual depictions of the purported author can be sufficient to authenticate the Facebook page.23
Have the asserted author’s offline activities ever corresponded to events or experiences described over social media? This can be a particularly persuasive way to authenticate social media evidence. Even a single instance where, for example, the purported author met with someone after arranging the encounter through social media can be enough to authenticate not only the messages arranging the encounter, but all messages coming from the account in question.24
Do timestamps or geolocation data associated with the post help connect it to particular people or events? Social media posts often contain information indicating the date, time, and location of the post’s creation.25 Witness testimony that the purported author was in that location on that date can thus help authenticate the evidence. This type of data is not always accurate, however,26 and attorneys should be prepared to offer testimony explaining any discrepancies.27
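Where such metadata matters, counsel can have an investigator extract and preserve it in a repeatable, explainable way. The following is a minimal sketch in Python using the Pillow imaging library; the file name is hypothetical, and because platforms often strip or rewrite embedded metadata on upload, any output should be treated as a lead to corroborate with testimony rather than as self-proving.

```python
# Minimal sketch: extract embedded timestamp and GPS metadata from an
# image file. Assumes Pillow is installed; "exhibit_photo.jpg" is a
# hypothetical file name. Platforms may strip this data on upload.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

EXIF_IFD = 0x8769  # standard pointer to the Exif sub-IFD (DateTimeOriginal, etc.)
GPS_IFD = 0x8825   # standard pointer to the GPS sub-IFD

def read_capture_metadata(path: str) -> dict:
    """Return human-readable timestamp and GPS tags embedded in an image."""
    exif = Image.open(path).getexif()
    tags = {TAGS.get(k, k): v for k, v in exif.items()}
    tags.update({TAGS.get(k, k): v for k, v in exif.get_ifd(EXIF_IFD).items()})
    tags["GPSInfo"] = {GPSTAGS.get(k, k): v for k, v in exif.get_ifd(GPS_IFD).items()}
    return tags

meta = read_capture_metadata("exhibit_photo.jpg")
print(meta.get("DateTimeOriginal"), meta.get("GPSInfo"))
```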
2. “Chain of custody” evidence
Offering testimony from investigators, electronic discovery specialists, or expert witnesses can help authenticate social media evidence by establishing the evidence’s “chain of custody,” that is, how the proponent’s investigation identified the information, verified it, and led to its inclusion in the exhibit offered at trial. In particular:
How was the evidence identified and then copied, reproduced, or transcribed into the exhibit being offered in court? This testimony should include a description from the witness of how the evidence was accessed and turned into an exhibit. For instance, an investigator could testify to accessing a particular website or app, taking a “screen shot” of the device’s monitor, and printing out the screen shot. A percipient witness can then testify as to whether the printout fairly and accurately reflects the social media evidence that the witness initially saw. (A simple way to document this capture process is sketched following this list.)
Do IP addresses or social media subscriber records link the evidence to a particular person? Social media companies may be compelled to disclose certain records in response to a subpoena, including IP address logs and subscriber information, which may contain phone numbers or email addresses linked to a social media account. Social media companies will generally also provide a certification from an authorized records custodian to establish a self-authenticating business record under Fed. R. Evid. 902(11).28 Note, however, that this certification establishes only “that the depicted communications took place between certain Facebook accounts, on particular dates, or at particular times,” which is not sufficient in isolation to authenticate the content of a social media post in relation to a particular author.29
What steps were taken to rule out other accounts with the same or similar usernames? Commonwealth v. Mangel affirmed the trial court’s denial of the prosecution’s motion in limine to admit Facebook communications where, among other things, a search on Facebook for the defendant’s name yielded five profiles under that name, contradicting a detective’s testimony that only one such account appeared during her search.30 This illustrates the importance of using multiple avenues to authenticate evidence; an investigator’s testimony about chain of custody may be insufficient in isolation if multiple profiles use the same name.
Did the proponent obtain the account’s username and password to verify the source of the evidence? The trial court in Mangel faulted the prosecution for not obtaining the username or password for the Facebook account at issue to confirm its authenticity. To the extent available, obtaining login credentials for a social media account—which, in theory, only the account’s true owner should possess—is a reliable means of authenticating the social media account. However, given the intimacy and breadth of personal information often contained in social media accounts, courts may be wary about compelling parties to produce their login credentials, particularly in civil cases.31
Were social media apps on devices in the asserted author’s possession logged in to accounts associated with the evidence at issue? In United States v. Lewisbey, the court held that incriminating Facebook posts were properly authenticated because (among many other circumstantial links between the defendant and the Facebook account) the Facebook app on a mobile phone confiscated from the defendant was linked to the account from which the incriminating statements were posted.32 Likewise, in an Internet child pornography case tried by one of the authors of this article, a computer in the defendant’s bedroom was logged into AOL Instant Messenger at the time of his arrest under a screenname involved in chat logs discussing child pornography.33 As mentioned above, in theory, only the true owner of a social media account has the means to access that account. Therefore, the fact that an account is accessible on a device in the asserted author’s possession is a particularly strong indicator of genuine authorship.
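As referenced above, one straightforward way to document the capture process is to record a cryptographic fingerprint of each file at the moment it is collected. The sketch below uses only the Python standard library; the file names and log format are illustrative assumptions, not any forensic standard.

```python
# Minimal sketch: log a SHA-256 fingerprint and UTC timestamp for a
# captured exhibit. File and log names are hypothetical.
import datetime
import hashlib
import json
import pathlib

def log_capture(path: str, log_path: str = "custody_log.jsonl") -> str:
    """Append the file's SHA-256 digest and capture time to a custody log."""
    digest = hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()
    entry = {
        "file": path,
        "sha256": digest,
        "recorded_at_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return digest

print(log_capture("screenshot_exhibit.png"))
```

Re-computing the hash at trial and matching it against the log supports an investigator’s testimony that the exhibit is byte-for-byte the same file that was originally captured.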
3. Technological safeguards of authenticity
Background information about a social media platform’s operation can explain how the platform, by design, seeks to guard against phony content. This might require testimony from an expert or from a representative of the social media company. For example:
• Do the platform’s terms of service prohibit using false or invented profiles?
• Does the platform require users to create accounts using unique login credentials?
• Is the post from the account of a public figure whose identity the social media company has “verified”?34
• Must users verify their accounts using email confirmation, two-factor authentication, or other additional layers of security?35
• In the witness’s training or experience, how often has evidence of this type proven to be fraudulent, and what would one expect to see if that were the case?
Eliciting testimony on these issues in isolation likely will not be sufficient to authenticate the substance of a social media communication. However, covering all three of the areas discussed above—circumstantial evidence of authorship, chain of custody, and the operation of the platform—will help ensure that social media evidence is properly authenticated. Authentication is supposed to be a lenient standard. Once the proponent meets the low bar of authentication, arguments to the contrary should go to the weight to be given the evidence rather than to its admissibility, and it should ultimately be up to the trier of fact to accept or reject such evidence.
THE NEXT FRONTIER: EVEN MORE CHALLENGES
Three years ago, researchers used many hours of video from Barack Obama’s weekly addresses to teach an artificial intelligence program to map spoken-word audio onto video of mouth shapes. Researchers then used the program to create a photorealistic video of Mr. Obama appearing to speak the words from an audio clip of the researchers’ choosing.36 These techniques can be used to make convincing videos, known as “deepfakes,” of people appearing to say just about anything.37 Similar technology is being used to create photorealistic images of people who do not exist38 and to paint public figures such as Facebook CEO Mark Zuckerberg or House Speaker Nancy Pelosi in an unflattering light.39 Technology of this sort is becoming widespread, and similar types of digital deception are already prevalent,40 with one study estimating that between 9 percent and 15 percent of all Twitter users were not people but “bots,” software-controlled accounts “algorithmically generating content and establishing interactions.”41 The capacity to create convincing forgeries of social media content likely will continue to increase.
While authentication under the rules of evidence is a lenient standard, it must be scrupulously applied as digital fakery becomes more pervasive. Lawyers must be creative and thorough in authenticating social media evidence, presenting information not only linking the evidence to an asserted author, but also tending to rule out links to potential imposters—a showing that courts are increasingly starting to require.42 Likewise, lawyers opposing the admission of evidence should press the proponent to demonstrate that the evidence is not fabricated. For example:
• Is there any reason to think someone other than the asserted author would have the desire, means, and opportunity to falsely create the evidence?
• Do the social media company’s records indicate that the account in question was affected by a data breach, and if so, has the account’s password been changed since then?
• Does the platform allow users to modify or edit media before posting it?43
• Are there any identifiable instances in which someone other than the asserted author posted to the account in question? Were any necessary remedial steps taken, and were those steps documented?
• Does the platform actively review content in an effort to identify and remove false or misleading posts?44 If so, how often, and are such efforts documented?
• Is there any forensic evidence indicating that the evidence has been tampered with?45
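On the last question, forensic examiners apply a range of screening techniques. One common, if rough, screen for edited JPEGs is error level analysis (ELA), which highlights regions that recompress differently from the rest of the image. The Python sketch below uses the Pillow library; the file names are hypothetical, and ELA results suggest, but do not prove, manipulation; expert interpretation is still required.

```python
# Minimal error-level-analysis (ELA) sketch: regions of a JPEG that
# recompress differently from their surroundings may indicate editing.
# Assumes Pillow is installed; file names are hypothetical.
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Return an image whose brightest regions recompressed unusually."""
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("_resaved.jpg")
    diff = ImageChops.difference(original, resaved)
    # The raw differences are faint; scale them so they are visible.
    max_diff = max(high for _, high in diff.getextrema()) or 1
    scale = 255.0 / max_diff
    return diff.point(lambda value: min(255, int(value * scale)))

error_level_analysis("exhibit_photo.jpg").save("exhibit_ela.png")
```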
Finally, lawyers should consider whether, given the purpose for offering the evidence, authenticating authorship even matters. For example, in United States v. Vazquez-Soto, the First Circuit rejected the argument that Facebook photos were not properly authenticated, explaining that “the [Facebook] account’s ownership is not relevant. . . . [W]hat is at issue is only the authenticity of the photographs, not the Facebook page.”46 Thus, the court held that an agent’s testimony that he recognized the defendant in social media photos was sufficient to authenticate the photos, particularly since jurors could view the photos and rely on their own observations of the defendant in the courtroom.47 Similarly, in Penn v. Detweiler, the court denied a police officer’s motion to exclude Facebook videos allegedly showing the use of excessive force, which the plaintiff planned to present without testimony from the individuals who recorded the videos.48 The court reasoned that, because both parties appeared in the videos, their testimony would be sufficient to authenticate them, essentially acknowledging that the videographer’s identity was irrelevant.49
This reasoning is of no help, however, where the court is concerned that the content itself has been manipulated.50 For example, in Gray v. Perry, the defendants moved to exclude an expert’s reliance on YouTube videos comparing the plaintiff’s song with an allegedly infringing song, arguing, among other things, that “the creators of those videos may have changed the songs to make them sound more similar.”51 The court agreed, holding that without “testimony from the creators of those videos as to the manner by which they altered the sound recordings,” the videos could not be properly authenticated.52
These are not idle concerns. In June 2020, the American Bar Association reported on a British family law proceeding in which a party doctored a recording of her spouse to make it sound like he was threatening her.53 Deepfakes are already here, and trial attorneys must adapt their authentication strategies to meet this new challenge. Presenting testimony from experts who understand digital fakes and are adept at identifying them54 may become an informal requirement.55 These concerns will be particularly important in criminal cases to ensure that the government does not knowingly or unknowingly use adulterated evidence to prove criminal culpability.
1 See People v. Glover, 363 P.3d 736 (Colo. App. 2015).
2 See Allen v. Zonis, No. 76768-2-I, 2018 WL 6787925, at *11 (Wash. Ct. App. Dec. 24, 2018) (unpublished).
3 See United States v. Farrad, 895 F.3d 859 (6th Cir. 2018).
4 Carpenter v. United States, 585 U.S. ___, 138 S. Ct. 2206 (2018).
5 See, e.g., Onur Varol et al., Online Human-Bot Interactions: Detection, Estimation, and Characterization, ARXIV:1703.03107v2 [cs.SI] (Mar. 27, 2017).
6 See generally Hon. Paul W. Grimm et al., Authentication of Social Media Evidence, 36 AM. J. TRIAL ADVOC. 433, 437–38 (2013).
7 United States v. Fluker, 698 F.3d 988, 999 (7th Cir. 2012); see also United States v. Jones, 107 F.3d 1147, 1155 n.1 (6th Cir. 1997) (“The [authentication] rule requires only that the court admit evidence if sufficient proof has been introduced so that a reasonable juror could find in favor of authenticity or identification. The rest is up to the jury.”) (quoting 5 Jack B. Weinstein et al., WEINSTEIN’S EVIDENCE ¶ 901(a), at 901–19 (1996)).
8 United States v. Browne, 834 F.3d 403, 412 (3d Cir. 2016).
9 Griffin v. State, 19 A.3d 415, 423 (Md. 2011).
10 See, e.g., id.
11 See, e.g., Tienda v. State, 358 S.W.3d 633, 634–35 (Tex. Crim. App. 2012).
12 See Grimm, supra note 6, at 455–56 (describing these two approaches and concluding that the latter—which lets the party opposing authentication rebut the proponent’s showing and lets the court admit the evidence conditionally if a reasonable jury could find either way—is superior).
13 United States v. Blanchard, 867 F.3d 1, 6 (1st Cir. 2017) (district court did not abuse its discretion by admitting into evidence prostitution ads from Backpage.com; although there were discrepancies between image metadata and witness testimony regarding date and location at which ads were posted, witness’s testimony about creating and posting the ad, as corroborated by other testimony, was sufficient to authenticate them, and discrepancies went to weight, not admissibility).
14 FED. R. EVID. 104(b) (“Relevance That Depends on a Fact. When the relevance of evidence depends on whether a fact exists, proof must be introduced sufficient to support a finding that the fact does exist. The court may admit the proposed evidence on the condition that the proof be introduced later.”); FED. R. EVID. 403 (“The court may exclude relevant evidence if its probative value is substantially outweighed by a danger of one or more of the following: unfair prejudice, confusing the issues, misleading the jury, undue delay, wasting time, or needlessly presenting cumulative evidence.”).
15 See, e.g., Allen v. Zonis, No. 76768-2-I, 2018 WL 6787925, at *11 (“. . . Zonis’s argument that Allen also wrote e-mails in a similar manner, specifically using all caps, is contrary evidence that goes to weight, but not authentication or admissibility.”).
16 See, e.g., Richardson v. State, 79 N.E.3d 958, 962–64 (Ind. Ct. App. 2017) (affirming exclusion of evidence even though messages came from Facebook Messenger app on password-protected phone recovered from victim’s body; state’s authenticating witness admitted not knowing who wrote the message and that Facebook messages could be sent through another device logged into the same account).
17 United States v. Vayner, 769 F.3d 125, 131 (2d Cir. 2014). But see Burgess v. State, 742 S.E.2d 464, 467 (Ga. 2013) (reaching a different result under similar facts by holding that screenshots from a Myspace profile were properly authenticated where officer testified he had confirmed with defendant’s sister that defendant went by the nickname shown on the profile, and where photos on Myspace were consistent with other photos of defendant).
18 Commonwealth v. Mangel, 181 A.3d 1154, 1162 (Pa. Super. Ct. 2018) (citing Vayner, 769 F.3d at 131).
19 See, e.g., Stout v. Jefferson Cnty. Bd. of Educ., 882 F.3d 988, 1008 (11th Cir. 2018) (Facebook posts were properly authenticated when alleged authors admitted to creating them).
20 Mangel, 181 A.3d at 1162.
21 See Allen v. Zonis, No. 76768-2-I, 2018 WL 6787925, at *10–12.
22 Burgess, 742 S.E.2d at 467.
23 See Cotton v. State, 773 S.E.2d 242, 245 (Ga. 2015) (incriminating Facebook messages were properly authenticated where witness testified, inter alia, that she had seen videos depicting the defendant on YouTube under the same screen name associated with the messages and saw that the defendant’s friends and family were Facebook friends with an account under the same alias).
24 See Commonwealth v. Foster F., 20 N.E.3d 967, 971 (Mass. App. Ct. 2014) (where the juvenile defendant “appeared on January 28 to play a dating game with the victim . . . exactly as the person sending messages from the Juvenile’s Facebook account had proposed,” Facebook messages sent after the January 28 sexual assault, which contained incriminating admissions, were properly authenticated).
25 See United States v. Blanchard, 867 F.3d 1, 6 (1st Cir. 2017).
26 For instance, virtual private network, or “VPN,” services can be used to make it appear as though one’s computer is located somewhere other than its true location. Likewise, sending emails across different time zones can sometimes affect the accuracy of the times listed in the email chain.
27 See supra note 26.
28 See United States v. Farrad, 895 F.3d 859, 865–66 (6th Cir. 2018).
29 Id. (citing United States v. Browne, 834 F.3d 403, 410–11 (3d Cir. 2016)).
30 See Commonwealth v. Mangel, 181 A.3d 1154, 1163 (Pa. Super. Ct. 2018).
31 See John G. Browning, With “Friends” Like These, Who Needs Enemies? Passwords, Privacy, and the Discovery of Social Media Content, 36 AM. J. TRIAL ADVOC. 505 (2013) (discussing courts’ differing approaches in addressing motions to compel social media login credentials).
32 United States v. Lewisbey, 843 F.3d 653, 658 (7th Cir. 2016).
33 See Brief of Appellee at 4–5, United States v. Allen, 605 F.3d 461 (7th Cir. 2010) (No. 09-2539).
34 Naffe v. Frey, 789 F.3d 1030, 1037 n.2 (9th Cir. 2015) (“Twitter ‘verifies’ certain accounts to ‘establish authenticity of identities of key individuals and brands on Twitter.’ FAQs about verified accounts, Twitter.com (last visited May 26, 2015). In other words, verification is Twitter’s method of ensuring at least some of its users are who they say they are. Twitter identifies verified users by displaying a blue check symbol next to the user’s Twitter handle.”).
35 See, e.g., Matt Elliott, Two-factor authentication: How and why to use it, CNET (Mar. 28, 2017, 3:51 PM PDT) (explaining two-factor authentication).
36 Supasorn Suwajanakorn et al., Synthesizing Obama: learning lip sync from audio, ACM TRANS. GRAPH. 36, 4, Article 95 (July 2017).
37 James Vincent, Watch Jordan Peele use AI to make Barack Obama deliver a PSA about fake news, THE VERGE - TL;DR (Apr. 17, 2018, 1:14 PM EDT).
38 James Vincent, ThisPersonDoesNotExist.com uses AI to generate endless fake faces, THE VERGE - TL;DR (Feb. 15, 2019, 7:38 AM EST).
39 Allyson Chiu, Facebook wouldn’t delete an altered video of Nancy Pelosi. What about one of Mark Zuckerberg?, WASH. POST (June 12, 2019, 6:32 AM); see also Hannah Denham, Another fake video of Pelosi goes viral on Facebook, WASH. POST (Aug. 3, 2020, 1:52 PM).
40 The doctored video of Nancy Pelosi was not an AI-created “deepfake,” but merely a slowed, pitch-altered video clip in which Ms. Pelosi appeared to be drunkenly slurring her words, illustrating how misleading evidence can be created easily and distributed quickly even without sophisticated technology. See Ian Bogost, Facebook’s Dystopian Definition of ‘Fake’, THE ATL. (May 28, 2019).
41 Varol et al., supra note 5.
42 United States v. James, No. 17-184 (RJL), 2019 WL 2516413, at *4 (D.D.C. June 18, 2019) (“The Government . . . has not offered sufficient extrinsic corroboration to establish the necessary reliability of its social media evidence. . . . [T]he Government would need to establish that the images and videos that do include James’s image have not been doctored to give the illusion that James possessed firearms he never actually had.”).
43 Id. at *3 (“Snapchat is similarly problematic as some of its ‘key function[s]’ give users the ability to edit images before sharing them.”) (citing Agnieszka McPeak, Disappearing Data, 2018 WIS. L. REV. 17, 34 (“A key function within Snapchat is the use of Filters, which allows users to add multiple overlays to their images. Lenses also allow ‘real-time special effects and sounds’ to be added to images.”)).
44 See, e.g., Christine Fisher, Facebook fact checkers will soon review Instagram posts, ENGADGET (May 6, 2019).
45 See Jonathan Mraunac, The Future Of Authenticating Audio And Video Evidence, LAW360 (July 26, 2018, 12:57 PM EDT) (discussing the idea that video and audio recording devices could encode uneditable encrypted digital signatures on recordings, “similar to the ballistic markings left on a bullet by the barrel of a firearm”); Jennifer Langston, Lip-syncing Obama: New tools turn audio clips into realistic video, UW NEWS (July 11, 2017) (discussing Suwajanakorn et al., supra note 36, and stating that “[b]y reversing the process—feeding video into the network instead of just audio—the team could also potentially develop algorithms that could detect whether a video is real or manufactured”).
46 939 F.3d 365, 373-74 (1st Cir. 2019).
47 Id.; accord Zen Design Grp., Ltd. v. Scholastic, Inc., No. 16-12936, 2019 WL 2996190, at * (E.D. Mich. July 9, 2019) (stating that for purposes of summary judgment motion in patent case, YouTube videos depicting reviews of allegedly infringing product were properly authenticated because videos “clearly display the Top Secret UV Pen, identifiable by its name, packaging, appearance, and operation”).
48 No. 1:18-CV-00912, 2020 WL 1016203, at *6 (M.D. Pa. Jan. 22, 2020).
49 Id.
50 Compare People v. Beckley, 185 Cal. App. 4th 509, 516 (2010) (holding that, absent expert testimony that the Myspace photo had not been doctored and evidence precluding the possibility that the page had been hacked, the photo was not adequately authenticated), with People v. Cruz, 46 Cal. App. 5th 715, 730 (2020) (distinguishing Beckley because “[h]ere, we are not concerned with the authentication of a photograph of a person doing something . . . [r]ather, we are concerned with whether . . . the Facebook messages . . . were sent by defendant”).
51 No. 2:15-cv-05642-CAS (JCx), 2019 WL 2992007, at *17 (C.D. Cal. July 5, 2019).
52 Id.
53 Matt Reynolds, Courts and lawyers struggle with growing prevalence of deepfakes, ABA J. (June 9, 2020).
54 Travis Hartman & Raphael Satter, These Faces Are Not Real, REUTERS GRAPHICS (July 15, 2020) (discussing telltale indicators that an image of a face is digitally generated).
55 The California Court of Appeal has suggested that this is a formal requirement in certain scenarios. See Beckley, 185 Cal. App. 4th at 516 (trial court erred in admitting Myspace photos absent expert testimony to rule out risks of hacking or digital manipulation).