Washington, DC (The Washington Post) — Top AI researchers race to detect “deepfake” videos: “We are outgunned”.
The researchers have designed automatic systems that can analyze videos for the telltale indicators of a fake, assessing light, shadows and blinking patterns, and, in one potentially groundbreaking method, even how a candidate’s real-world facial movements, such as the angle at which they tilt their head when they smile, relate to one another.
But for all that progress, the researchers say they remain vastly overwhelmed by a technology they fear could herald a damaging new wave of disinformation campaigns, much in the same way fake news stories and deceptive Facebook groups were deployed to influence public opinion during the 2016 election.
Powerful new AI software has effectively democratized the creation of convincing “deepfake” videos, making it easier than ever to fabricate footage of someone appearing to say or do something they never did, with uses ranging from harmless satire and film tweaks to targeted harassment and deepfake porn.
And researchers fear it’s only a matter of time before the videos are deployed for maximum damage — to sow confusion, fuel doubt or undermine an opponent, potentially on the eve of a White House vote.
“We are outgunned,” said Hany Farid, a computer-science professor and digital-forensics expert at the University of California at Berkeley. “The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1.”