Authors
Dian Zhang, Yunwei Dong, Yun Yang
Publication date
2024/8/1
Journal
Applied Soft Computing
Volume
161
Pages
111733
Publisher
Elsevier
Description
With the widespread application of deep learning, the vulnerability of neural networks has attracted considerable attention, raising reliability and security concerns. Research on the robustness of neural networks has therefore become increasingly critical. In this paper, we propose a novel sample-analysis-based robustness evaluation method that overcomes the drawbacks of existing techniques, such as computational difficulty, reliance on a single strategy, and loose robustness radii. Our algorithm comprises two parts: robustness evaluation and adversarial attacks. Specifically, we introduce formal definitions of multiple sample types and a general solution to the problem of adversarial samples. We formulate a disturbance-model-based description of adversarial samples in the adversarial attack algorithm and utilize saliency maps to solve for them. Our experimental results demonstrate that our adversarial attack algorithm not only achieves a high …
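The abstract describes a saliency-map-guided adversarial attack. As a rough illustration only (not the authors' algorithm, whose details are not given here), the following is a minimal JSMA-style sketch in PyTorch: it computes a saliency map as the gradient of the target-class logit with respect to the input and greedily perturbs the single most influential feature. The names `model`, `target`, and `eps` are hypothetical placeholders.

```python
import torch

def saliency_attack_step(model, x, target, eps=0.01):
    """One greedy JSMA-style step: perturb the most salient input feature.

    Assumes `model` returns class logits and `x` is a single input of
    shape (1, ...) with values in [0, 1]. Illustrative sketch only.
    """
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    # Saliency map: gradient of the target-class logit w.r.t. the input.
    logits[0, target].backward()
    saliency = x.grad.abs().flatten()
    # Pick the feature with the largest influence on the target logit.
    idx = saliency.argmax()
    delta = torch.zeros_like(x).flatten()
    delta[idx] = eps * torch.sign(x.grad.flatten()[idx])
    x_adv = (x.detach().flatten() + delta).reshape(x.shape)
    return x_adv.clamp(0.0, 1.0)  # keep the input in its valid range
```

In a full attack this step would be iterated, checking after each step whether the model's prediction has flipped to the target class; the per-step budget `eps` trades off perturbation visibility against the number of iterations needed.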