Authors
Mark Huasong Meng, Guangdong Bai, Sin Gee Teo, Zhe Hou, Yan Xiao, Yun Lin, Jin Song Dong
Publication date
2022/5/30
Journal
IEEE Transactions on Dependable and Secure Computing
Publisher
IEEE
Description
Neural networks have been widely applied in security applications such as spam and phishing detection, intrusion prevention, and malware detection. Such black-box methods, however, often exhibit uncertainty and poor explainability in these applications. Furthermore, neural networks themselves are often vulnerable to adversarial attacks. For those reasons, there is a high demand for trustworthy and rigorous methods to verify the robustness of neural network models. Adversarial robustness, which concerns the reliability of a neural network when dealing with maliciously manipulated inputs, is one of the hottest topics in cybersecurity and machine learning. In this work, we survey existing literature in adversarial robustness verification for neural networks and collect 39 diversified research works across machine learning, security, and software engineering domains. We systematically analyze their approaches, including how …
Total citations
2022: 4, 2023: 16, 2024: 12