Authors: | » Kherchouche Anouar » Hamidouche Wassim » Deforges Olivier |
Type: | International Conference |
Conference name: | IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP) |
Published on: | 21-09-2020 |
Deep neural networks (DNNs) have been adopted across a wide spectrum of applications. However, they have been shown to be vulnerable to adversarial examples (AEs): carefully crafted perturbations added to a clean input image that cause the DNN to misclassify it. It is therefore imperative to develop methods for detecting AEs in order to defend DNNs. In this paper, we propose to characterize adversarial perturbations through natural scene statistics. We demonstrate that these statistical properties are altered by the presence of adversarial perturbations. Based on this finding, we design a classifier that exploits these scene statistics to determine whether an input is adversarial. The proposed method has been evaluated against four prominent adversarial attacks and on three standard datasets. The experimental results show that the proposed detection …
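A minimal sketch of the idea described in the abstract, under stated assumptions: natural-scene-statistics (NSS) features are computed from mean-subtracted contrast-normalized (MSCN) coefficients, and a binary classifier is trained to separate clean from perturbed inputs. The feature set, window size, perturbation model, and SVM choice below are illustrative assumptions, not the authors' exact pipeline.

```python
# Hypothetical sketch: NSS features from MSCN coefficients + an SVM detector.
# Everything here (feature choice, toy data, hyperparameters) is an assumption
# for illustration, not the paper's actual implementation.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def nss_features(image):
    """Summary statistics of the MSCN map -- simplified stand-ins for the
    distribution parameters an NSS model would fit."""
    mu = gaussian_filter(image, sigma=7 / 6)                       # local mean
    var = np.clip(gaussian_filter(image**2, sigma=7 / 6) - mu**2, 0.0, None)
    sig = np.sqrt(var)                                             # local std
    mscn = (image - mu) / (sig + 1e-6)                             # normalized coefficients
    return np.array([sig.mean(), sig.std(), mscn.var(), np.abs(mscn).mean()])

# Toy stand-in data: smooth "clean" images vs. the same images with an
# additive high-frequency perturbation (a crude proxy for an attack).
rng = np.random.default_rng(0)
clean = [gaussian_filter(rng.random((32, 32)), sigma=2) for _ in range(40)]
adv = [im + 0.05 * rng.standard_normal(im.shape) for im in clean]

X = np.array([nss_features(im) for im in clean + adv])
y = np.array([0] * 40 + [1] * 40)  # 0 = clean, 1 = adversarial

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

The intuition this illustrates is the one stated in the abstract: adversarial perturbations disturb the scene statistics (here, the local-variance and MSCN-coefficient summaries), so a simple classifier on those statistics can flag perturbed inputs without touching the defended DNN itself.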