Authors
Forough Poursabzi-Sangdeh, Daniel G Goldstein, Jake M Hofman, Jennifer Wortman Vaughan, Hanna Wallach
Publication date
2021/5/6
Book
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
Pages
1-52
Description
With machine learning models being increasingly used to aid decision making even in high-stakes domains, there has been a growing interest in developing interpretable models. Although many supposedly interpretable models have been proposed, there have been relatively few experimental studies investigating whether these models achieve their intended effects, such as making people more closely follow a model’s predictions when it is beneficial for them to do so or enabling them to detect when a model has made a mistake. We present a sequence of pre-registered experiments (N = 3,800) in which we showed participants functionally identical models that varied only in two factors commonly thought to make machine learning models more or less interpretable: the number of features and the transparency of the model (i.e., whether the model internals are clear or a black box). Predictably, participants who …
Total citations
Citations per year: 2018: 19, 2019: 55, 2020: 83, 2021: 123, 2022: 165, 2023: 203, 2024: 133
Scholar articles
F Poursabzi-Sangdeh, DG Goldstein, JM Hofman… - Proceedings of the 2021 CHI conference on human …, 2021