Authors
Nicole Krämer, Magdalena Wischnewski, Emmanuel Müller
Publication date
2023/5/7
Publisher
PsyArXiv
Description
Contemporary technology increasingly relies on artificial intelligence and machine learning and is therefore able to act with growing autonomy. This leads to systems that display agency instead of merely serving as tools wielded by a human user who remains in charge. Given that these autonomous systems take on roles that emulate human functions (e.g., decision making, tutoring, counseling), it is increasingly important to scrutinize how humans interact with these systems and to what extent they understand and trust them. While recent research has specifically addressed how to foster users’ understanding of algorithms, we argue that fostering calibrated trust, in the sense of trust that is warranted and calibrated to the actual reliability of the system, might be more fruitful. Building on these considerations, we propose a theoretical research framework on the interrelation of human trust and understanding and discuss how both might be affected by design variables such as explanations, anthropomorphization, and trust cues.