
When is video evidence not reliable?

Jul 8, 2020 | Criminal Defense

Imagine this scenario: A man headed toward divorce is caught on tape threatening to beat his wife and children. The wife plays the tape in court, demanding sole custody.

The man argues he never said those things. But they’re on tape, right? He must have said them.

Except, as it turns out, he may not have. The ABA Journal recently published an article recounting a similar story. A divorce was underway in a British courtroom, and the wife said she had proof her husband was dangerous. But the threats weren't his. It was his voice, but the words had been engineered. The recording was a fake.

The growing threat of deepfakes

As the ABA Journal noted, the audio wasn't even a deepfake. It was a cruder "cheapfake," edited without the help of artificial intelligence, but it called attention to the real-life threats that deepfakes pose.

Deepfakes are computer-generated videos that use artificial intelligence to layer images and audio onto existing footage. The technology allows users to put words in other people's mouths and to place people in situations they were never in. For example, in one viral deepfake, the actor, writer and producer Jordan Peele made it look as though former President Obama was speaking entirely out of character.

Jordan Peele created and then exposed his deepfake to warn people about the dangers of the technology. Others, however, have used it less openly and often to harm. One study found that as many as 96% of deepfakes online are pornographic. And as the divorce case illustrates, deepfakes may soon present real problems in the courtroom.

Consider the following scenarios:

  • Someone gives the police a deepfaked video of domestic abuse that wrongfully sends a partner to jail, costs that partner time with the children and skews a divorce
  • A criminal uses deepfaked video to support a fraudulent insurance claim
  • During a murder or rape trial, the jury is forced to decide whether to believe the defendant’s alibi or the prosecution’s graphic and disturbing deepfake

Given how we humans lean so heavily on our emotions when we make decisions, it’s hard to imagine a jury not responding to the upsetting video. Even if they have reason to doubt the video, the jurors may find it hard to think calmly and rationally.

This leads us to one of the other points the ABA Journal made: Courts need to be ready. Once you let the genie out of the bottle, you can’t put it back in. Deepfakes have become a reality, and though they’re still flawed enough that humans can generally detect them, the best computer programs only catch deepfakes about 65% of the time.

That’s according to the MIT Technology Review, which pointed out that Facebook took the unusual step of releasing 100,000 deepfakes to help others improve their detection techniques. The article also noted it’s likely the time will come when humans cannot detect deepfakes on their own and will need programs to help.

How will the truth come out?

While deepfakes will likely get better and harder to detect, most still share common flaws that people can learn to spot. That means experts can identify deepfakes and help others recognize them. Still, defendants and their attorneys need to recognize when evidence may have been faked and make sure the court sees the forgery for what it is.

In the future, we may see new laws written to limit deepfakes. We may see digital watermarks added to surveillance footage. And we may see other steps taken to limit deepfakes' power to harm innocent lives. But for now, the most important thing is to remain vigilant. Remain skeptical. Realize that your eyes and ears can, indeed, betray you.