
The Pop-Out Effect of Rarer Occurring Stimuli Shapes the Effectiveness of AI Explainability
Description
Explainable artificial intelligence (XAI) is proposed to improve transparency and performance by providing information about AI's limitations. Specifically, XAI could support appropriate behavior in cases where AI errors occur due to less training data. These error-prone cases might be salient (pop-out) because of their naturally rarer occurrence. The current study investigated how this pop-out effect influences explainability's effectiveness on trust and dependence. In an online experiment, participants (N=128) estimated the contamination degree of bacterial stimuli. The lower occurrence of error-prone stimuli was indicated by one of two colors. Participants either knew about the error-prone color (XAI) or not (nonXAI). Contrary to earlier research without salient error-prone trials, explainability did not help participants follow correct recommendations in non-error-prone trials, but it did help them correct AI's errors in error-prone trials. However, explainability still led to over-correction in correct error-prone trials. This poses the challenge of implementing explainability while mitigating its negative effects.
Event Type
Lecture
Time
Wednesday, September 11th, 11:55am - 12:15pm MST
Location
Flagstaff
Tracks
Human AI Robot Teaming (AI)