Authors
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry
Publication date
2018/5/30
Journal
arXiv preprint arXiv:1805.12152
Description
We show that there may exist an inherent tension between the goal of adversarial robustness and that of standard generalization. Specifically, training robust models may not only be more resource-consuming, but also lead to a reduction of standard accuracy. We demonstrate that this trade-off between the standard accuracy of a model and its robustness to adversarial perturbations provably exists in a fairly simple and natural setting. These findings also corroborate a similar phenomenon observed empirically in more complex settings. Further, we argue that this phenomenon is a consequence of robust classifiers learning fundamentally different feature representations than standard classifiers. These differences, in particular, seem to result in unexpected benefits: the representations learned by robust models tend to align better with salient data characteristics and human perception.
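The description alludes to a "fairly simple and natural setting" in which the accuracy–robustness trade-off provably exists. The LaTeX sketch below reconstructs that kind of binary-classification construction as an illustration only; the symbols $p$, $\eta$, $d$, and $\varepsilon$, and the exact constants, are assumptions based on the arXiv paper's toy model and may differ from its precise theorem statements.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Illustrative setting: one moderately correlated feature, many weakly correlated ones.
Sample $(x, y)$ with $y \sim \mathrm{Unif}\{-1, +1\}$ and features
\begin{align*}
  x_1 &= \begin{cases} +y & \text{w.p. } p \\ -y & \text{w.p. } 1-p \end{cases}
  & x_2, \dots, x_{d+1} &\overset{\text{i.i.d.}}{\sim} \mathcal{N}(\eta y, 1),
\end{align*}
where $p$ is moderately above $1/2$ and $\eta$ is small (on the order of $1/\sqrt{d}$).
A classifier that averages the weakly correlated features,
\[
  f_{\mathrm{avg}}(x) = \operatorname{sign}\!\Big(\tfrac{1}{d}\sum_{i=2}^{d+1} x_i\Big),
\]
attains standard accuracy approaching $1$ as $d$ grows. However, an $\ell_\infty$
perturbation of magnitude $\varepsilon \ge 2\eta$ can shift every weak feature toward
$\mathcal{N}(-\eta y, 1)$, so a classifier that remains accurate under such
perturbations must rely essentially on $x_1$ alone, capping its standard accuracy
near $p$. This is the rough mechanism behind the provable trade-off the description mentions.

\end{document}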
Total citations
2018: 26 · 2019: 162 · 2020: 291 · 2021: 379 · 2022: 410 · 2023: 421 · 2024: 205
Scholar articles
D Tsipras, S Santurkar, L Engstrom, A Turner, A Madry - arXiv preprint arXiv:1805.12152, 2018