Authors
Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry
Publication date
2019
Journal
Advances in neural information processing systems
Volume
32
Description
Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear. We demonstrate that adversarial examples can be directly attributed to the presence of non-robust features: features (derived from patterns in the data distribution) that are highly predictive, yet brittle and (thus) incomprehensible to humans. After capturing these features within a theoretical framework, we establish their widespread existence in standard datasets. Finally, we present a simple setting where we can rigorously tie the phenomena we observe in practice to a misalignment between the (human-specified) notion of robustness and the inherent geometry of the data.
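For orientation, a minimal sketch of one standard way adversarial examples of the kind the abstract discusses are generated: the fast gradient sign method (FGSM) of Goodfellow et al. This is illustrative rather than the construction used in this paper; the model, the perturbation budget eps, and the [0, 1] pixel range are assumptions.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                eps: float = 8 / 255) -> torch.Tensor:
    """One-step FGSM: nudge x in the direction that increases the loss.

    Illustrative sketch (not the paper's method); `eps` and the
    [0, 1] input range are assumptions for image-like data.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # A small, typically human-imperceptible perturbation that can flip
    # the prediction by exploiting brittle (non-robust) features.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```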
Total citations
2019: 79
2020: 312
2021: 423
2022: 441
2023: 451
2024: 237
Scholar articles
A Ilyas, S Santurkar, D Tsipras, L Engstrom, B Tran… - Advances in neural information processing systems, 2019
A Ilyas, S Santurkar, D Tsipras, L Engstrom, B Tran… - arXiv preprint arXiv:1905.02175, 2019