Presentation
Multi-Measure Trust Calibration in Expert Interactions with AI-Enabled Decision Support Systems: A Multiple Cause, Multiple (Behavioral) Indicator Model
Description
Artificial intelligence-enabled decision-support systems (AI-DSSs) supplement human decision-making performance in high-stakes decision domains, including national security. However, calibrating human supervisors' trust to actual AI-DSS performance remains a challenge, despite the use of Explainable AI ("XAI") designs such as confidence scores. In this study, we examined how confidence scores affect trust calibration during a simulated border control face-matching task with 152 Transportation Security Officers. We specified a multiple-indicator-multiple-cause structural equation model in which trust calibration was a latent variable indicated by final decision accuracy and two behavioral indicators: decision switches and adoption of AI recommendations. Results showed that high confidence scores positively influenced calibrated decision-making, although the mere presence of XAI features did not guarantee it. Initial agreement with AI-DSS recommendations and task complexity also influenced trust calibration. These results highlight the context-dependent nature of trust in AI-DSSs and the importance of interaction-level behavioral indicators in understanding trust dynamics.
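To make the model structure concrete, the sketch below shows how a multiple-indicator-multiple-cause (MIMIC) specification of this kind could be expressed with the semopy package in Python. The variable names (final_accuracy, decision_switch, ai_adoption, confidence_score, initial_agreement, task_complexity) are hypothetical stand-ins for the constructs named in the abstract; the authors' actual estimation approach and variable coding may differ.

```python
# Hypothetical MIMIC specification sketch (semopy, lavaan-style syntax).
# Variable names are illustrative, not the authors' actual dataset columns.
import pandas as pd
from semopy import Model

model_spec = """
# Measurement part: trust calibration as a latent variable
# indicated by accuracy and two behavioral indicators.
TrustCalibration =~ final_accuracy + decision_switch + ai_adoption

# Structural part: hypothesized causes of trust calibration.
TrustCalibration ~ confidence_score + initial_agreement + task_complexity
"""

# data would be a trial- or participant-level DataFrame with the columns above.
data = pd.read_csv("trials.csv")  # placeholder path

model = Model(model_spec)
model.fit(data)
print(model.inspect())  # parameter estimates, standard errors, p-values
```

This is a minimal sketch assuming continuous indicators; categorical outcomes such as decision switches would typically call for a different estimator or link function.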
Event Type
Lecture
Time
Tuesday, September 10th, 10:25am - 10:45am MST
Location
FLW Salon C
Cognitive Engineering & Decision Making