Authors
Florian Pfisterer, Christoph Kern, Susanne Dandl, Matthew Sun, Michael P Kim, Bernd Bischl
Publication date
2021/8/6
Journal
Journal of Open Source Software
Volume
6
Issue
64
Pages
3453
Description
Given the increasing usage of automated prediction systems in the context of high-stakes decisions, a growing body of research focuses on methods for detecting and mitigating biases in algorithmic decision-making. One important framework to audit for and mitigate biases in predictions is that of Multi-Calibration, introduced by Hébert-Johnson et al. (2018). The underlying fairness notion, Multi-Calibration, promotes the idea of multi-group fairness and requires calibrated predictions not only for marginal populations, but also for subpopulations that may be defined by complex intersections of many attributes. A simpler variant of Multi-Calibration, referred to as Multi-Accuracy, requires unbiased predictions for large collections of subpopulations. Hébert-Johnson et al. (2018) proposed a boosting-style algorithm for learning multi-calibrated predictors. Kim et al. (2019) demonstrated how to turn this algorithm into a post-processing strategy to achieve multi-accuracy, demonstrating empirical effectiveness across various domains. This package provides a stable implementation of the multi-calibration algorithm, called MCBoost. In contrast to other Fair ML approaches, MCBoost does not harm the overall utility of a prediction model, but rather aims at improving calibration and accuracy for large sets of subpopulations post-training. MCBoost comes with strong theoretical guarantees, which have been explored formally in Hébert-Johnson et al. (2018), Kim et al. (2019), Dwork et al. (2019), Dwork et al. (2020), and Kim et al. (2021). mcboost implements Multi-Calibration Boosting for R. mcboost is model-agnostic and allows the user to post-process any supervised …
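As a sketch of the model-agnostic post-processing workflow the description outlines: the snippet below follows the usage pattern documented in the mcboost README and vignettes (an MCBoost R6 object with multicalibrate() and predict_probs() methods, here seeded with a fitted mlr3 learner via init_predictor). Exact defaults and argument names may differ across package versions.

library(mcboost)
library(mlr3)

# Fit any initial model; here a CART learner on mlr3's built-in sonar task.
tsk = tsk("sonar")
learner = lrn("classif.rpart", predict_type = "prob")
learner$train(tsk)

# Wrap the fitted model as an initial predictor returning P(Class == "M").
init_predictor = function(data) {
  learner$predict_newdata(data)$prob[, "M"]
}

data = tsk$data(cols = tsk$feature_names)
labels = as.integer(tsk$data(cols = tsk$target_names)[[1]] == "M")

# Post-process: audit subpopulations and boost toward multi-calibration.
mc = MCBoost$new(init_predictor = init_predictor)
mc$multicalibrate(data, labels)

# Multi-calibrated probability predictions (here on the training data).
head(mc$predict_probs(data))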
Total citations
[Citations-by-year chart, 2021–2024]
Scholar articles
F Pfisterer, C Kern, S Dandl, M Sun, MP Kim, B Bischl - Journal of Open Source Software, 2021