Authors: Bakhti Yassine, Hamidouche Wassim, Deforges Olivier
Type: International Conference
Conference name: 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX)
Location:
Country:
Link:
Published on: 05-06-2019
Deep neural networks (DNNs) have recently achieved state-of-the-art performance and driven significant progress in many machine learning tasks, such as image classification, speech processing, and natural language processing. However, recent studies have shown that DNNs are vulnerable to adversarial attacks. For instance, in the image classification domain, adding small imperceptible perturbations to the input image is sufficient to fool the DNN and cause misclassification. The perturbed image, called an adversarial example, should be visually as close as possible to the original image. However, all the works proposed in the literature for generating adversarial examples have used the Lp norms (L0, L2 and L∞) as distance metrics to quantify the similarity between the original image and the adversarial example. Nonetheless, the Lp norms do not correlate with human judgment, making them not …
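For illustration, below is a minimal sketch of one standard attack from this literature, the Fast Gradient Sign Method (FGSM, Goodfellow et al.), which bounds the perturbation in the L∞ norm; it is not necessarily the attack evaluated in this paper. The PyTorch setup, the epsilon value, and the helper names (fgsm_attack, lp_distances) are assumptions made for the example.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, image, label, epsilon=8 / 255):
    """Craft an adversarial example whose perturbation is bounded
    in the L-infinity norm by epsilon (a hypothetical helper)."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp
    # back to the valid pixel range [0, 1].
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def lp_distances(x, x_adv):
    """The Lp distances the abstract refers to, computed on the
    perturbation delta = x_adv - x."""
    delta = (x_adv - x).flatten()
    return {
        "L0": (delta != 0).sum().item(),   # number of changed pixels
        "L2": delta.norm(p=2).item(),      # Euclidean magnitude
        "Linf": delta.abs().max().item(),  # largest single-pixel change
    }

# Illustrative usage with a tiny stand-in classifier (hypothetical):
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)   # one RGB image with values in [0, 1]
y = torch.tensor([3])          # ground-truth class index
x_adv = fgsm_attack(model, x, y)
print(lp_distances(x, x_adv))
```

Note how the L∞ bound caps only the largest per-pixel change: the perturbation can still touch every pixel, which is precisely why such metrics may diverge from human similarity judgments as the abstract argues.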