Authors
Daizong Ding, Mi Zhang, Fuli Feng, Yuanmin Huang, Erling Jiang, Min Yang
Publication date
2023/6/26
Journal
Proceedings of the AAAI Conference on Artificial Intelligence
Volume
37
Issue
6
Pages
7358-7368
Description
With the increasing use of deep neural networks (DNNs) in time series classification (TSC), recent work reveals the threat of adversarial attacks, where an adversary constructs adversarial examples that cause model mistakes. However, existing research on adversarial attacks against TSC typically adopts an unrealistic white-box setting in which model details are transparent to the adversary. In this work, we study a more rigorous black-box setting with attack detection applied, which restricts gradient access and additionally requires the adversarial example to be stealthy. Theoretical analyses reveal that the key lies in estimating the black-box gradient while resolving the diversity and non-convexity of TSC models, and in restricting the ℓ0 norm of the perturbation when constructing adversarial examples. Towards this end, we propose a new framework named BlackTreeS, which solves the hard optimization problem of adversarial example construction with two simple yet effective modules. In particular, we propose a tree search strategy to find influential positions in a sequence and independently estimate the black-box gradients at these positions. Extensive experiments on three real-world TSC datasets and five DNN-based models validate the effectiveness of BlackTreeS; e.g., it improves the attack success rate from 19.3% to 27.3% and decreases the detection success rate from 90.9% to 6.8% for LSTM on the UWave dataset.
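To make the two ingredients of the abstract concrete, below is a minimal, illustrative sketch of (i) zeroth-order (finite-difference) gradient estimation restricted to a few influential positions of a series and (ii) an ℓ0-sparse perturbation budget. This is not the authors' BlackTreeS implementation: the query-only model `f` (assumed to return class probabilities), the function names `estimate_gradient` and `sparse_attack`, and the greedy probe-based position picker (a stand-in for the paper's tree search) are all hypothetical choices for illustration.

```python
# Illustrative sketch only, NOT the BlackTreeS code from the paper.
# Assumes a black-box model f: 1-D float array -> class-probability array.
import numpy as np

def estimate_gradient(f, x, label, positions, sigma=1e-2, queries=8):
    """Zeroth-order (finite-difference) gradient estimate of the
    target-class score, restricted to the chosen positions of x."""
    grad = np.zeros_like(x)
    for _ in range(queries):
        u = np.zeros_like(x)
        u[positions] = np.random.randn(len(positions))  # perturb only these positions
        # Directional derivative of the label score along u.
        delta = f(x + sigma * u)[label] - f(x - sigma * u)[label]
        grad += (delta / (2.0 * sigma)) * u
    return grad / queries

def sparse_attack(f, x, label, l0_budget=5, step=0.1, iters=50):
    """Greedy l0-constrained attack: perturb at most l0_budget positions."""
    # Hypothetical influence score: response of the label score to a small
    # probe at each position (a simple stand-in for the tree search).
    base = f(x)[label]
    probes = np.array([
        f(x + 0.05 * np.eye(len(x))[i])[label] for i in range(len(x))
    ])
    positions = np.argsort(np.abs(probes - base))[-l0_budget:]
    x_adv = x.copy()
    for _ in range(iters):
        g = estimate_gradient(f, x_adv, label, positions)
        x_adv[positions] -= step * np.sign(g[positions])  # push label score down
        if f(x_adv).argmax() != label:  # misclassified: attack succeeded
            break
    return x_adv
```

Restricting both the gradient estimation and the update to a handful of positions keeps the query count low and the perturbation ℓ0-sparse, which is what makes the resulting example harder for a detector to flag.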
Total citations
13 (per-year citation chart for 2023–2024 omitted)
Scholar articles
D Ding, M Zhang, F Feng, Y Huang, E Jiang, M Yang - Proceedings of the AAAI Conference on Artificial Intelligence, 2023