Schneier - Detecting Deep Fakes

This story nicely illustrates the arms race between technologies that create fake videos and technologies that detect them:

These fakes, while convincing if you watch a few seconds on a phone screen, aren't perfect (yet). They contain tells, like creepily ever-open eyes, from flaws in their creation process. In looking into DeepFake's guts, Lyu realized that the images that the program learned from didn't include many with closed eyes (after all, you wouldn't keep a selfie where you were blinking, would you?). "This becomes a bias," he says. The neural network doesn't get blinking. Programs also might miss other "physiological signals intrinsic to human beings," says Lyu's paper on the phenomenon, such as breathing at a normal rate, or having a pulse. (Autonomic signs of constant existential distress are not listed.) While this research focused specifically on videos created with this particular software, it is a truth universally acknowledged that even a large set of snapshots might not adequately capture the physical human experience, and so any software trained on those images may be found lacking.
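To make the blinking idea concrete: Lyu's actual detector is a learned model described in his paper, but a simpler, widely used stand-in is the eye aspect ratio (EAR) heuristic from Soukupova and Cech, which measures how open an eye is from six landmark points and counts blinks as dips below a threshold. The sketch below is illustrative only; the function names, the ~0.21 closed-eye threshold, and the two-blinks-per-minute red-flag cutoff are assumptions, not values from Lyu's work, and the landmarks would come from a face-landmark library such as dlib's 68-point predictor (points 36-41 and 42-47 are the eyes).

    import numpy as np

    def eye_aspect_ratio(eye):
        # EAR for one eye, given 6 landmarks in the Soukupova & Cech
        # ordering: corners at index 0 and 3, upper lid at 1 and 2,
        # lower lid at 5 and 4. Open eyes score ~0.3; closed, near 0.
        a = np.linalg.norm(eye[1] - eye[5])  # vertical distance 1
        b = np.linalg.norm(eye[2] - eye[4])  # vertical distance 2
        c = np.linalg.norm(eye[0] - eye[3])  # horizontal distance
        return (a + b) / (2.0 * c)

    def count_blinks(ear_series, closed_thresh=0.21, min_frames=2):
        # A blink is a run of at least min_frames consecutive frames
        # whose EAR falls below the closed-eye threshold (assumed value).
        blinks, run = 0, 0
        for ear in ear_series:
            if ear < closed_thresh:
                run += 1
            else:
                if run >= min_frames:
                    blinks += 1
                run = 0
        if run >= min_frames:
            blinks += 1
        return blinks

    def looks_suspicious(ear_series, fps=30.0, min_blinks_per_min=2.0):
        # Humans blink roughly 15-20 times a minute; a clip whose
        # subject almost never blinks is a (weak) deepfake signal.
        minutes = len(ear_series) / fps / 60.0
        rate = count_blinks(ear_series) / minutes
        return rate < min_blinks_per_min

As the rest of the story shows, this is exactly the kind of tell that generators can train away once it is published, so in practice it would be one weak signal among many rather than a standalone test.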

Lyu's blinking test caught a lot of fakes. But a few weeks after his team put a draft of their paper online, they got anonymous emails with links to deeply faked YouTube videos whose stars opened and closed their eyes more normally. The fake content creators had evolved.

I don't know who will win this arms race, if there will ever be a winner. But the problem with fake videos goes deeper: they affect people even if they are later told that they are fake, and there will always be people who believe they are real despite any evidence to the contrary.



from Schneier on Security https://www.schneier.com/blog/archives/2018/10/detecting_deep_.html
