Empirical hints of cognitive biases despite human-centered AI explanations

Abstract

In explainable artificial intelligence (XAI) research, explainability is widely regarded as crucial for user trust in artificial intelligence (AI). However, empirical investigations of this assumption are still lacking. There are several proposals as to how explainability might be achieved, and the actual effects of explanations on humans remain a matter of ongoing debate. In our work in progress, we explored two post-hoc explanation approaches presented in natural language as a means for explainable AI. We examined the effects of human-centered explanations on trust behavior in a financial decision-making experiment (N = 387), captured by weight of advice (WOA). Results showed that AI explanations led to higher trust behavior if participants were advised to decrease an initial price estimate. However, explanations had no effect if the AI recommended increasing the initial price estimate. We argue that these differences in trust behavior may be caused by cognitive biases and heuristics that people retain in their decision-making processes involving AI. So far, XAI has primarily focused on biased data and prejudice due to incorrect assumptions in the machine learning process. The implications of potential biases and heuristics that humans exhibit when presented with an explanation by AI have received little attention in the current XAI debate. Both researchers and practitioners need to be aware of such human biases and heuristics in order to develop truly human-centered AI.
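For context, weight of advice (WOA) in judge-advisor studies typically measures how far a person shifts from their initial estimate toward the advice; the exact operationalization used in this study is not stated in the abstract. A minimal sketch, assuming the common formulation:

WOA = (final estimate − initial estimate) / (AI recommendation − initial estimate)

Under this reading, a WOA of 0 means the AI's advice was ignored, 1 means it was fully adopted, and 0.5 means the participant averaged their own estimate with the recommendation.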

Publication
CHI 2021 Workshop: Operationalizing Human-Centered Perspectives in Explainable AI