Authors
Wenbo Guo, Qinglong Wang, Kaixuan Zhang, Alexander G Ororbia, Sui Huang, Xue Liu, C Lee Giles, Lin Lin, Xinyu Xing
Publication date
2018/11/17
Conference
2018 IEEE International Conference on Data Mining (ICDM)
Pages
137-146
Publisher
IEEE
Description
It has been recently shown that deep neural networks (DNNs) are susceptible to a particular type of attack that exploits a fundamental flaw in their design. This attack consists of generating particular synthetic examples referred to as adversarial samples. These samples are constructed by slightly manipulating real data points so as to "fool" the original DNN model, forcing it to misclassify previously correctly classified samples with high confidence. Many believe addressing this flaw is essential for DNNs to be used in critical applications such as cyber security. Previous work has shown that learning algorithms which enhance the robustness of DNN models all rely on the tactic of "security through obscurity": security can be guaranteed only if the learning algorithm is kept hidden from adversaries. Once the learning technique is disclosed, DNNs protected by these defense mechanisms are still …
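As a concrete illustration of the adversarial-sample idea described above, the sketch below perturbs an input in the direction of the loss gradient (FGSM-style, after Goodfellow et al., 2015). This is not the method proposed or analyzed in the paper; the model, labels, and epsilon value are purely illustrative assumptions.

```python
# Minimal FGSM-style sketch: a small perturbation of a real data point
# crafted to change the model's prediction. Illustrative only; not the
# defense or attack studied in the paper above.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

x = torch.rand(1, 784)        # stand-in for a real data point
y = torch.tensor([3])         # its (assumed) true label
epsilon = 0.05                # perturbation budget (illustrative)

x_adv = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(model(x_adv), y)
loss.backward()

# Step in the direction that increases the loss, clipped to the valid input range.
x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

print("original prediction:   ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```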
Total citations
[Citations-per-year chart, 2017-2024; per-year counts not recoverable]
Scholar articles
Q Wang, W Guo, K Zhang, AG Ororbia II, X Xing, X Liu… - arXiv preprint arXiv:1612.01401, 2016
W Guo, Q Wang, K Zhang, AG Ororbia, S Huang, X Liu… - 2018 IEEE International Conference on Data Mining …, 2018
Q Wang, W Guo, AG Ororbia II, X Xing, L Lin, CL Giles… - arXiv preprint arXiv:1610.01934, 2016