Authors
Rajiv Khanna, Been Kim, Joydeep Ghosh, Sanmi Koyejo
Publication date
2019/4/11
Conference
The 22nd International Conference on Artificial Intelligence and Statistics
Pages
3382-3390
Publisher
PMLR
Description
Research in both machine learning and psychology suggests that salient examples can help humans interpret learning models. To this end, we take a novel look at black-box interpretation of test predictions in terms of training examples. Our goal is to ask "which training examples are most responsible for a given set of predictions?" To answer this question, we use Fisher kernels as the defining feature embedding of each data point, combined with Sequential Bayesian Quadrature (SBQ) for efficient selection of examples. In contrast to prior work, our method seamlessly handles any-sized subset of test predictions in a principled way. We theoretically analyze our approach, providing novel convergence bounds for SBQ over discrete candidate atoms. Our approach recovers the application of influence functions for interpretability as a special case, yielding novel insights from this connection. We also present applications of the proposed approach to three use cases: cleaning training data, fixing mislabeled examples, and data summarization.
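The description above names two ingredients: a Fisher kernel embedding (the gradient of the log-likelihood with respect to model parameters, whitened by the Fisher information) and greedy SBQ selection over discrete atoms. A minimal sketch of these two steps, assuming a toy 1-D Gaussian model and an empirical Fisher approximation (the function names `fisher_embedding` and `sbq_select` are hypothetical, not from the paper's code):

```python
import numpy as np

def fisher_embedding(X, mu, sigma2):
    # Fisher score of a 1-D Gaussian N(mu, sigma2): per-example gradient
    # of the log-likelihood w.r.t. the parameters (mu, sigma2).
    d_mu = (X - mu) / sigma2
    d_s2 = ((X - mu) ** 2 - sigma2) / (2 * sigma2 ** 2)
    return np.stack([d_mu, d_s2], axis=1)  # shape (n, 2)

def sbq_select(K, k):
    # Greedy Sequential Bayesian Quadrature over a discrete candidate set:
    # repeatedly pick the atom that most increases the "explained" part
    # z_S^T K_SS^{-1} z_S of the kernel-mean estimate, i.e. most reduces
    # the posterior variance of the quadrature. z[i] = mean_j K[i, j] is
    # the empirical kernel mean evaluated at atom i.
    n = K.shape[0]
    z = K.mean(axis=1)
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            S = selected + [i]
            K_SS = K[np.ix_(S, S)] + 1e-8 * np.eye(len(S))  # jitter for stability
            gain = z[S] @ np.linalg.solve(K_SS, z[S])
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return selected

# Usage sketch: build the Fisher kernel K = Phi F^{-1} Phi^T with the
# empirical Fisher F, then greedily select k representative examples.
X = np.array([-1.5, -0.3, 0.1, 0.4, 2.0, 2.2])
Phi = fisher_embedding(X, mu=0.0, sigma2=1.0)
F = Phi.T @ Phi / len(X) + 1e-6 * np.eye(2)   # empirical Fisher information
K = Phi @ np.linalg.solve(F, Phi.T)           # Fisher kernel matrix
chosen = sbq_select(K, k=3)                    # indices of selected examples
```

This is only an illustration of the selection principle; the paper applies it with the Fisher embedding of a trained black-box model and analyzes the convergence of the greedy SBQ weights.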
Total citations
108 (2019: 7, 2020: 24, 2021: 25, 2022: 20, 2023: 22, 2024: 10)