Adobe’s prototype AI tool that automatically detects Photoshopped faces is a useless piece of technology.
Where do we even begin? In a nutshell, the tool is designed to detect digital alterations to someone's face. To do that, it needs a baseline: an unaltered photo of the person to compare against. So is Adobe going to keep a catalog of everyone's photographs? Good luck with that.
Makeup and cosmetic surgery are normal these days, and doctored photos (including editorial pieces) and special effects are just as common. FaceTune, have you heard of it? People tweak their selfies before sharing them on social media because they want to look like their best fantasy selves, just like celebrities applying heavy makeup to look great on the red carpet, but for zero dollars.
With the rise of “deepfake” content (thanks to advances in computing), detection is not the way to counter it; relying on legitimate sources and doing due diligence is.
It's become quite common for news media to lift content from social media and use it as a source in articles. So the chances of a slip-up, where a producer unintentionally references fake content, are very real. But that will likely only happen if the person is sloppy and doesn't check their sources, unless, of course, it's done with malice.
Adobe has no plans to make the tool commercially available, because frankly, it serves no purpose other than jumping on the artificial intelligence hype.
New York (The Verge) — Adobe’s prototype AI tool automatically spots Photoshopped faces theverge.com/2019/6/14/1867….
The world is becoming increasingly anxious about the spread of fake videos and pictures, and Adobe — a name synonymous with edited imagery — says it shares those concerns. It’s released new research that uses machine learning to automatically detect when images of faces have been manipulated. — James Vincent / @verge