Peer to Peer: ILTA's Quarterly Magazine
Issue link: https://epubs.iltanet.org/i/1544492
Deepfakes are "the latest, most sophisticated form of visual disinformation in an already corroding information ecosystem," as described by author and AI advisor Nina Schick. Encyclopaedia Britannica defines a deepfake as AI-generated or AI-manipulated audio or video designed to look real, even when the depicted events never occurred. For legal teams, the rise of synthetic media has sparked a new question: If video can be manipulated, can it still be trusted in court?

In practice, the opposite is increasingly true. In modern litigation, video is not just supporting evidence. Often, it is the case itself.

Few events illustrate this better than the prosecutions that followed the Jan. 6 United States Capitol attack. Those cases became one of the clearest real-world tests of how courts evaluate video evidence in an era of AI-generated media. Prosecutors did not rely on a single piece of footage. Instead, they assembled synchronized timelines from multiple sources, including Capitol security cameras, police body cameras, journalists' footage, bystander recordings, and social media posts. The result was not just corroboration. It was a multi-angle reconstruction of events. When dozens of independent videos show the same person in the same place at the same moment, claims that footage is a deepfake quickly lose credibility. The strength of the evidence comes not from any single video, but from how the videos reinforce each other.

THE DEEPFAKE DEFENSE

Legal scholars have warned about what some call the "liar's dividend." Law professors Robert Chesney and Danielle Citron describe it as the advantage someone gains by dismissing real evidence as fake simply because deepfakes exist. Researchers Kaylyn Schiff, Daniel Schiff, and Natália Bueno similarly argue that public awareness of synthetic media can be weaponized to erode trust in authentic content. We are now in an era where defendants try to cast doubt on genuine footage simply because "AI" has entered the public consciousness.
It is the modern equivalent of saying, "That's not me in the video; that's a hologram." In other words, once people know deepfakes exist, they may claim any inconvenient video is fabricated.

That strategy has already appeared in court. Legal scholar Rebecca Delfino reported that at least one defendant charged in connection with the Jan. 6 attack attempted to dismiss incriminating helmet-camera footage as a potential deepfake. It was a bold strategy, right up until the prosecution produced a mountain of digital receipts.

LESSONS FROM THE SANDRA BLAND CASE

Not every case offers dozens of corroborating angles; some reveal what happens when the camera's view is partial or delayed. As reported by PBS NewsHour in "Sandra Bland's Own Cellphone Video Surfaces for the First Time, Raising Questions," the public's understanding of Sandra Bland's traffic stop was shaped for years almost entirely by dashcam footage. Then, in 2019, Bland's own cellphone video surfaced. Coverage from PBS NewsHour and USA Today showed the trooper at Bland's window

WHEN VIDEO MAKES THE CASE: Redefining Truth in the Digital Age
FEATURES
BY DONNA MEDREK

