DES MOINES, Wash. — In what's believed to be a first-of-its-kind ruling, Judge Leroy McCullough barred the use of video that was enhanced with artificial intelligence in a triple murder trial he's overseeing.
The case stems from a 2021 shooting outside a Des Moines sports bar that left three people dead and three others injured. The defense argues its client fired in self-defense.
The video has not been released, but it's reported to show the shooting. According to court documents, the defense's expert witness, Brian Racherbaeumer, used "at least one AI enhancement tool to enhance a total of seven videos." The defense planned to use at least one AI-enhanced version of a roughly 10-second video recorded by a witness, who first shared it on Snapchat.
Racherbaeumer admitted he is not, and never claimed to be, a forensic video technician, and he has no forensic training. According to his testimony, he began working with video in 1993 and considers himself a videographer and filmmaker. He testified that he applied AI to the video because it was of "low resolution and contained substantial motion blur." He went on to say the tool added sharpness, definition, and smoother edges to objects in the video, whereas the source video contained fuzzier images with blocky edge patterns.
The state's expert witness, Grant Fredericks, is a certified forensic video analyst who holds national and international forensic video analysis credentials.
According to Fredericks, the AI tool used by Racherbaeumer added approximately 16 times the number of pixels in the original video, using an algorithm and enhancement method unknown to and unreviewed by forensic video experts. He said the AI effectively created a fake video, making forensic analysis of it impossible.
Considering all the testimony, Judge McCullough barred the use of the AI-enhanced video. Both sides will use the original video for their arguments. This decision does not surprise University of Washington Law Professor Ryan Calo, who specializes in law and technology.
"You can't use a process like this and feel comfortable that what's being represented is what actually happened," Calo said. "We need to be concerned about the use of AI in the investigative stage, we need to be concerned about its use in trial as evidence, and we need to be concerned about its use by litigants or judges to make decisions."
Calo is currently part of a task force working to help state judges navigate AI-related issues they might encounter and how to address them. He said AI should be treated like a visual aid rather than like eyewitness testimony.
"Everyone has seen one of those reconstructed scenes where you show how the accident might have happened," Calo said. "That's how it should be used."
Calo thinks it's going to be a while before judges feel comfortable allowing AI-enhanced video to be used in court cases.
"I think it's going to take more comfort on the part of the judiciary with the technology itself. They want to feel like their audience, the jury, is sophisticated enough to understand this is guesswork," he said.