Authors
Katharina Weitz, Teena Hassan, Ute Schmid, Jens-Uwe Garbas
Publication date
2019/7/26
Journal
tm-Technisches Messen
Volume
86
Issue
7-8
Pages
404-412
Publisher
De Gruyter Oldenbourg
Description
Deep neural networks are successfully used for object and face recognition in images and videos. However, current approaches are only suitable to a limited extent for practical deployment, for example as a pain-recognition tool in hospitals. The advantage of deep neural methods is that they can learn complex non-linear relationships between raw data and target classes without being restricted to a set of hand-crafted features provided by humans. The disadvantage, however, is that the complexity of these networks makes it impossible to interpret the knowledge stored inside them: they are black-box learning procedures. Explainable Artificial Intelligence (AI) approaches mitigate this problem by extracting explanations for decisions and representing them in a human-interpretable form. The aim of this paper is to investigate the explainable AI methods Layer-wise Relevance …
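The Layer-wise Relevance Propagation (LRP) method named in the abstract redistributes a network's output score backwards through the layers, so that each input feature receives a relevance share. The following is a minimal sketch of the epsilon-rule on a toy two-layer ReLU network with randomly chosen weights; all names and the network itself are illustrative assumptions, not the architecture used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network (random weights, purely illustrative)
W1 = rng.normal(size=(4, 3)); b1 = np.zeros(3)
W2 = rng.normal(size=(3, 2)); b2 = np.zeros(2)

x = rng.normal(size=4)

# Forward pass, keeping the intermediate activations for the backward pass
a1 = np.maximum(0.0, x @ W1 + b1)
out = a1 @ W2 + b2

def lrp_eps(a, W, b, R, eps=1e-6):
    """Redistribute relevance R from a layer's outputs to its inputs (epsilon-rule):
    R_j = a_j * sum_k W_jk * R_k / (z_k + eps * sign(z_k))."""
    z = a @ W + b                                   # pre-activations z_k
    s = R / (z + eps * np.where(z >= 0, 1.0, -1.0)) # stabilized ratio
    return a * (W @ s)                              # relevance of inputs a_j

# Start from the score of the predicted class only
R_out = np.zeros_like(out)
R_out[np.argmax(out)] = out[np.argmax(out)]

# Propagate relevance layer by layer back to the input features
R1 = lrp_eps(a1, W2, b2, R_out)
R0 = lrp_eps(x, W1, b1, R1)

# With zero biases and a small eps, total relevance is approximately conserved
print(R0.sum(), R_out.sum())
```

The per-feature values in `R0` are what would be rendered as a heatmap over the input (e.g. a face image in the pain-recognition setting), highlighting which regions contributed to the decision.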