Authors: Kherchouche Anouar, Hamidouche Wassim, Deforges Olivier
Type: International Conference
Conference name: International Joint Conference on Neural Networks (IJCNN)
Published on: 19-07-2020
Recent studies have demonstrated that deep neural networks (DNNs) are vulnerable to carefully crafted perturbations added to a legitimate input image. Such perturbed images are called adversarial examples (AEs) and can cause DNNs to misclassify. Consequently, it is of paramount importance to develop methods that detect AEs so that they can be rejected. In this paper, we propose to characterize AEs through the use of natural scene statistics (NSS). We demonstrate that these statistical properties are altered by the presence of adversarial perturbations. Based on this finding, we propose three different methods that exploit these scene statistics to determine whether an input is adversarial. The proposed detection methods have been evaluated against four prominent adversarial attacks and on three standard datasets. The experimental results show that the proposed methods achieve a high …
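The abstract describes a pipeline: extract natural scene statistics from an input image, then use them to decide whether the input is adversarial. Below is a minimal sketch of that idea, assuming MSCN (mean-subtracted contrast-normalized) coefficients as the NSS features and an SVM as the binary detector; the paper's exact feature set and classifiers are not given here, so these choices are illustrative assumptions, not the authors' implementation.

```python
# Sketch: NSS-based adversarial-example detection.
# Assumptions (not from the abstract): MSCN coefficients as the scene
# statistics, simple distribution moments as features, and an RBF SVM
# as the detector.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.svm import SVC

def mscn_coefficients(image, sigma=7/6, eps=1e-8):
    """MSCN coefficients: (I - local mean) / (local std + eps)."""
    img = image.astype(np.float64)
    mu = gaussian_filter(img, sigma)
    var = gaussian_filter(img * img, sigma) - mu * mu
    std = np.sqrt(np.abs(var))  # abs() guards against tiny negative values
    return (img - mu) / (std + eps)

def nss_features(image):
    """Summarize the MSCN distribution with a few central moments;
    adversarial perturbations tend to shift these statistics."""
    m = mscn_coefficients(image)
    c = m - m.mean()
    return np.array([m.mean(), m.var(),
                     (c ** 3).mean(),   # unnormalized skewness
                     (c ** 4).mean()])  # unnormalized kurtosis

def train_detector(clean_images, adv_images):
    """Fit a binary classifier: label 0 = clean, 1 = adversarial."""
    X = np.stack([nss_features(im)
                  for im in list(clean_images) + list(adv_images)])
    y = np.concatenate([np.zeros(len(clean_images)),
                        np.ones(len(adv_images))])
    return SVC(kernel="rbf").fit(X, y)

if __name__ == "__main__":
    # Synthetic stand-ins: noise-perturbed copies play the role of AEs
    # produced by an attack such as FGSM on MNIST/CIFAR-10 images.
    rng = np.random.default_rng(0)
    clean = rng.random((50, 32, 32))
    adv = clean + 0.05 * rng.standard_normal(clean.shape)
    det = train_detector(clean, adv)
    x = adv[0]
    print("flagged as adversarial:",
          bool(det.predict(nss_features(x)[None, :])[0]))
```

At test time, an input whose NSS features the classifier maps to label 1 is rejected as adversarial; the abstract's three proposed methods presumably differ in how these statistics are modeled and classified.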