Just saw an ad for Intel’s deepfake detector called FakeCatcher. They say it’s 96% accurate in real time.
So far, so good …
Will be funny if deepfake detector starts detecting fake news in real humans…
Presumably, neither of you read the article …
Deepfake technology—where someone’s likeness is digitally placed over someone else’s—has some very spooky implications. Intel says that its new deepfake detection tech, called FakeCatcher, is able to clock a deepfake video 96% of the time.
“Deepfake videos are everywhere now. You have probably already seen them; videos of celebrities doing or saying things they never actually did,” says Intel Labs senior staff research scientist Ilke Demir in an Intel press release.
FakeCatcher is hosted on a server but interfaces with videos through a web-based platform. According to Intel, the tech’s approach is the opposite of traditional deep-learning-based detectors, which usually try to find what’s fake about a video; FakeCatcher instead looks for what’s real.
In an interview with VentureBeat, Demir explained that FakeCatcher’s approach is based on photoplethysmography (PPG), a method for measuring changes in blood flow in human tissue. If a real person is on screen, their tissue changes color almost imperceptibly as blood is pumped through their veins. Deepfakes can’t replicate this change in complexion (at least not yet).
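Intel hasn’t published FakeCatcher’s actual pipeline, but the core remote-PPG idea is simple enough to sketch: average a color channel over the face region in each frame, then look for a periodic signal in the plausible heart-rate band. This is an illustrative sketch only (the function name and synthetic frames are invented for the example, assuming numpy), not Intel’s implementation:

```python
import numpy as np

def estimate_pulse_hz(frames, fps):
    """Estimate a pulse frequency from a stack of video frames.

    frames: array of shape (n_frames, height, width, 3), RGB.
    Averages the green channel per frame (green is most sensitive to
    blood-volume changes), removes the mean, and picks the dominant
    frequency via FFT within the human heart-rate band.
    """
    signal = frames[:, :, :, 1].mean(axis=(1, 2))  # mean green intensity per frame
    signal = signal - signal.mean()                # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Restrict to a plausible heart-rate band: 0.7-4 Hz (roughly 42-240 bpm)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic demo: 10 s of frames whose green channel pulses at 1.2 Hz (72 bpm)
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
frames = np.full((len(t), 8, 8, 3), 128.0)
frames[:, :, :, 1] += 2.0 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
print(round(estimate_pulse_hz(frames, fps), 2))  # → 1.2
```

A real detector would of course need face tracking, illumination compensation, and a classifier on top of the recovered signal; the point is only that a genuine face carries a faint periodic color signal that current deepfakes don’t reproduce.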
Thanks. I was confused by the responses. I thought deepfake was a common term. As you say, it’s not an alternate account for someone using text as the medium.
It’s generally a video that’s been altered so it looks like one person is saying something they didn’t say. Deepfakes have become harder to detect because of advances in video technology.
Intel’s technology is a detection device for deepfakes.
Perhaps some readers get confused by the speed of technological advancement …