Authors
Yan Xiao, Ivan Beschastnikh, Yun Lin, Rajdeep Singh Hundal, Xiaofei Xie, David S Rosenblum, Jin Song Dong
Publication date
2022/8/22
Journal
IEEE Transactions on Dependable and Secure Computing
Publisher
IEEE
Description
Deep Neural Networks (DNNs) have been widely adopted, yet DNN models are surprisingly unreliable, which raises significant concerns about their use in critical domains. In this work, we propose that runtime DNN mistakes can be quickly detected and properly dealt with in deployment, especially in settings like self-driving vehicles. Just as the software engineering (SE) community has developed effective mechanisms and techniques to monitor and check programmed components, our previous work, SelfChecker, is designed to monitor and correct DNN predictions given unintended abnormal test data. SelfChecker triggers an alarm if the decisions given by the internal layer features of the model are inconsistent with the final prediction, and it provides advice in the form of an alternative prediction. In this paper, we extend SelfChecker to the security domain. Specifically, we describe SelfChecker++, which we designed …
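The consistency check described above can be illustrated with a minimal sketch. This is not the authors' implementation (SelfChecker derives per-layer decisions via density estimation over internal features); here, the per-layer labels are assumed to be given, the `self_check` function name is hypothetical, and a simple majority vote stands in for the paper's layer-selection machinery:

```python
from collections import Counter

def self_check(layer_predictions, final_prediction):
    """Simplified SelfChecker-style check: compare labels inferred from
    internal layer features against the model's final prediction.
    Raise an alarm on disagreement and offer the majority internal
    label as alternative advice."""
    votes = Counter(layer_predictions)
    majority_label, _ = votes.most_common(1)[0]
    alarm = majority_label != final_prediction
    advice = majority_label if alarm else final_prediction
    return alarm, advice

# Example: most internal layers "see" class 3 while the model outputs 7,
# so the checker alarms and advises class 3.
alarm, advice = self_check([3, 3, 7, 3], final_prediction=7)
```

In the paper's setting, an alarm flags the prediction as untrustworthy at runtime, and the advice serves as a candidate corrected output.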