Learn Before
Study from Neves et al.
Neves et al. performed an in-depth experimental assessment of this type of facial manipulation, considering different state-of-the-art detection systems and experimental conditions, i.e., controlled and in-the-wild scenarios. Four fake databases were considered: i) 150,000 fake faces collected online and based on the StyleGAN architecture, ii) the public 100K-Faces database, iii) 80,000 synthetic faces generated using ProGAN, and iv) the iFakeFaceDB database, an improved version of the previous fake databases in which the GAN-fingerprint information has been removed using the GANprintR approach. In controlled scenarios, they achieved results similar to the best previous studies (EER = 0.02%). However, in more challenging scenarios in which real and fake images come from different sources (dataset mismatch), fake detection performance degrades sharply. Finally, the results achieved on their public iFakeFaceDB database, with an EER of 4.5% for the best fake detectors, highlight how challenging iFakeFaceDB is even for the most advanced manipulation detection methods.
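The Equal Error Rate (EER) reported above is the operating point where the false acceptance rate (fake images accepted as real) equals the false rejection rate (real images rejected as fake). A minimal sketch of how it could be computed from detector scores is below; the function name and the convention that higher scores mean "more likely real" are illustrative assumptions, not taken from the study.

```python
import numpy as np

def compute_eer(scores_real, scores_fake):
    """Illustrative EER computation: sweep score thresholds and return
    the point where FAR (fakes scored >= threshold, i.e. accepted as
    real) and FRR (reals scored < threshold, i.e. rejected) are closest.
    Convention (assumed): higher score = more likely real."""
    thresholds = np.sort(np.concatenate([scores_real, scores_fake]))
    far = np.array([np.mean(scores_fake >= t) for t in thresholds])
    frr = np.array([np.mean(scores_real < t) for t in thresholds])
    i = np.argmin(np.abs(far - frr))  # threshold where the two rates cross
    return (far[i] + frr[i]) / 2

# Well-separated scores give EER = 0; overlapping scores push it up.
eer = compute_eer(np.array([0.9, 0.8, 0.95]), np.array([0.1, 0.2, 0.05]))
```

A very low EER such as the 0.02% in controlled scenarios means the score distributions of real and fake images barely overlap; the jump to 4.5% on iFakeFaceDB reflects how much GANprintR increases that overlap.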
Tags
Data Science