Authors
Thomas Rojat, Raphaël Puget, David Filliat, Javier Del Ser, Rodolphe Gelin, Natalia Díaz-Rodríguez
Publication date
2021/4/2
Journal
arXiv preprint arXiv:2104.00950
Description
Most state-of-the-art methods applied to time series are deep learning methods that are too complex to interpret. This lack of interpretability is a major drawback, as many real-world applications are critical tasks, such as those in the medical or autonomous driving fields. The explainability of models applied to time series has not gathered as much attention as in computer vision or natural language processing. In this paper, we present an overview of existing explainable AI (XAI) methods applied to time series and illustrate the types of explanations they produce. We also reflect on how these explanation methods can provide confidence and trust in AI systems.
Total citations
Per-year citation chart, 2020–2024
Scholar articles
T Rojat, R Puget, D Filliat, J Del Ser, R Gelin… - arXiv preprint arXiv:2104.00950, 2021