Authors
Tatiana Dancy
Publication date
2023/9/10
Publisher
Money, Power and AI: From Automated Banks to Automated States, Cambridge University Press
Description
AI and automated decision-making (ADM) tools can help us make predictions in situations of uncertainty, and those predictions inform a range of significant decisions about who should bear a burden for the sake of some broader social good. Humans play a critical role in setting parameters for, designing, and testing these tools. And if the final decision is not purely predictive, a human decision-maker must use the algorithmic output to reach a conclusion. But courts have also assigned humans a corrective role, concluding that, even where there are concerns about the predictive assessment, applying human discretion to the predictive task is both a necessary and a sufficient safeguard against unjust ADM. The focus in academic, judicial, and legislative spheres has accordingly been on making sure that humans are equipped and willing to wield this ultimate decision-making power. I argue that this focus is misplaced. Human supervision can help to ensure that AI and ADM tools are fit for purpose, but it cannot make up for the use of tools that are not. Safeguarding requires gatekeeping: using these tools only when we can show that they take the right considerations into account in the right way. In this chapter, I make concrete recommendations about how to determine whether AI and ADM tools meet this threshold, and what we should do once we know.