Authors
Matthias Jakobs, Helena Kotthaus, Ines Röder, Maximilian Baritz
Publication date
2022
Conference
LWDA
Pages
33-44
Description
Quantitatively evaluating explainability methods is a notoriously hard endeavor. One reason for this is the lack of real-world benchmark datasets that contain local feature importance annotations provided by domain experts. We present SancScreen, a dataset from the domain of financial sanction screening. It allows for both evaluating explainability methods and uncovering errors made during model training. We showcase two possible ways to use the dataset for evaluating and debugging a Random Forest and a Neural Network model. For evaluation, we compare a total of 8 configurations of state-of-the-art explainability methods to the expert annotations. The dataset and code are available at https://github.com/MatthiasJakobs/sancscreen.
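For illustration, a minimal sketch of the kind of comparison the description mentions: local explanations from a Random Forest scored against per-instance expert importance annotations. This is not the repository's actual API; the data, the SHAP configuration, and the rank-correlation agreement metric are placeholder assumptions.

```python
# Hypothetical sketch: comparing local explanations to expert annotations.
# The synthetic data below stands in for the actual SancScreen dataset
# (see https://github.com/MatthiasJakobs/sancscreen for the real data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from scipy.stats import spearmanr
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                # feature matrix (placeholder)
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # labels (placeholder)
expert = np.abs(rng.normal(size=X.shape))     # expert annotations (placeholder)

# Train a Random Forest, one of the two model classes mentioned in the paper.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# One possible explainability configuration: SHAP values from a TreeExplainer.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
if isinstance(sv, list):        # older shap versions: list of per-class arrays
    sv = sv[1]
elif sv.ndim == 3:              # newer shap versions: (samples, features, classes)
    sv = sv[:, :, 1]

# Agreement between explanations and expert annotations, here as the mean
# per-instance Spearman rank correlation over feature importances.
scores = []
for s, e in zip(sv, expert):
    rho, _ = spearmanr(np.abs(s), e)
    scores.append(rho)
print(f"mean rank correlation: {np.mean(scores):.3f}")
```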