Authors
Gilad Cohen, Guillermo Sapiro, Raja Giryes
Publication date
2020
Conference
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Pages
14453-14462
Description
Deep neural networks (DNNs) are notorious for their vulnerability to adversarial attacks: small perturbations added to input images that mislead the network's prediction. Detecting adversarial examples is therefore a fundamental requirement for robust classification frameworks. In this work, we present a method for detecting such adversarial attacks that is suitable for any pre-trained neural network classifier. We use influence functions to measure the impact of every training sample on the validation set data. From the influence scores, we find the most supportive training samples for any given validation example. A k-nearest neighbor (k-NN) model fitted on the DNN's activation layers is employed to search for the ranking of these supporting training samples. We observe that these samples are highly correlated with the nearest neighbors of normal inputs, while this correlation is much weaker for adversarial inputs. We train an adversarial detector using the k-NN ranks and distances and show that it successfully distinguishes adversarial examples, achieving state-of-the-art results on six attack methods across three datasets. Code is available at https://github.com/giladcohen/NNIF_adv_defense.
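The description above outlines the full pipeline: influence scores per training sample, a k-NN search over the DNN's activation space, and a detector trained on the resulting ranks and distances. The sketch below is a rough, hypothetical illustration of how such features could be assembled, assuming influence scores and activations are already computed; it is not the authors' released implementation (see the repository linked above), and names such as nnif_features, m_supportive, and the choice of a logistic-regression detector are assumptions.

# Illustrative sketch only: NNIF-style k-NN rank/distance features,
# assuming a precomputed influence matrix (n_train x n_val) and
# fixed-size activation vectors for training and validation inputs.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import LogisticRegression  # used in the example at the bottom

def nnif_features(train_acts, val_acts, influence, m_supportive=50, k=200):
    """For each validation example, take its m most influence-supportive
    training samples and record their ranks and distances in a k-NN search
    over the activation space."""
    knn = NearestNeighbors(n_neighbors=k).fit(train_acts)
    dists, idx = knn.kneighbors(val_acts)  # both of shape (n_val, k)
    feats = []
    for i in range(val_acts.shape[0]):
        # Training samples with the highest influence on validation example i.
        top = np.argsort(-influence[:, i])[:m_supportive]
        # Default rank/distance if a supportive sample is outside the k-NN list.
        rank = np.full(m_supportive, k, dtype=float)
        dist = np.full(m_supportive, dists[i].max(), dtype=float)
        pos = {j: r for r, j in enumerate(idx[i])}  # training index -> k-NN rank
        for s, t in enumerate(top):
            if t in pos:
                rank[s] = pos[t]
                dist[s] = dists[i, pos[t]]
        feats.append(np.concatenate([rank, dist]))
    return np.stack(feats)

# Example use: build features for clean and attacked inputs, then fit a
# simple detector (logistic regression here is an assumption, not
# necessarily the detector used in the paper):
# X_normal = nnif_features(train_acts, clean_acts, influence_clean)
# X_adv = nnif_features(train_acts, adv_acts, influence_adv)
# detector = LogisticRegression(max_iter=1000).fit(
#     np.vstack([X_normal, X_adv]),
#     np.concatenate([np.zeros(len(X_normal)), np.ones(len(X_adv))]))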
Total citations
Cited by (yearly citation histogram, 2020-2024)
Scholar articles
G Cohen, G Sapiro, R Giryes - Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020