Initial evidence for biased decision-making despite human-centered AI explanations

Abstract

In explainable artificial intelligence (XAI) research, explainability is widely regarded as crucial for user trust in artificial intelligence (AI). However, empirical investigations of this assumption are still lacking. There are several proposals for how explainability might be achieved, and the actual ramifications of explanations for humans remain a matter of ongoing debate. In our work-in-progress we explored two post-hoc explanation approaches presented in natural language as a means of achieving explainable AI. We examined the effects of human-centered explanations on trust behavior in a financial decision-making experiment (N = 387), captured by weight of advice (WOA). Results showed that AI explanations led to higher trust behavior when participants were advised to decrease their initial price estimate. However, explanations had no effect when the AI recommended increasing the initial price estimate. We argue that these differences in trust behavior may be caused by cognitive biases and heuristics that people bring to decision-making processes involving AI. So far, XAI has primarily focused on biased data and on prejudice arising from incorrect assumptions in the machine learning process. The implications of the potential biases and heuristics that humans exhibit when presented with an explanation by an AI have received little attention in the current XAI debate. Both researchers and practitioners need to be aware of such human biases and heuristics in order to develop truly human-centered AI.
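The trust measure mentioned above, weight of advice (WOA), has a standard formulation in the judge-advisor literature. The sketch below illustrates that common formulation; the paper's exact operationalization may differ, and the function and variable names here are purely illustrative.

```python
# Minimal sketch of the standard weight-of-advice (WOA) computation from the
# judge-advisor literature; the paper's exact operationalization may differ.
def weight_of_advice(initial_estimate: float, advice: float, final_estimate: float) -> float:
    """Return WOA: 0 means the advice was ignored, 1 means it was fully adopted."""
    if advice == initial_estimate:
        raise ValueError("WOA is undefined when the advice equals the initial estimate.")
    return (final_estimate - initial_estimate) / (advice - initial_estimate)

# Illustrative example: initial price estimate of 100, AI advises 80, final answer 90.
print(weight_of_advice(100.0, 80.0, 90.0))  # 0.5 -> the participant moved halfway toward the advice
```

Under this formulation, values near 1 indicate that participants largely adopted the AI's recommendation, while values near 0 indicate they stayed with their initial estimate.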

Publication
CHI 2022 TRAIT Workshop on Trust and Reliance in AI-Human Teams
