The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI
Source of Publication: International Journal of Human Computer Studies
© 2020 Artificial intelligence and algorithmic decision-making processes are increasingly criticized for their black-box nature. Explainable AI approaches that trace human-interpretable decision processes from algorithms have been explored. Yet, little is known about algorithmic explainability from a human factors perspective. From the perspective of user interpretability and understandability, this study examines the effect of explainability in AI on user trust and attitudes toward AI. It conceptualizes causability as an antecedent of explainability and as a key cue of an algorithm, and examines both in relation to trust by testing how they affect users' perceived performance of AI-driven services. The results show the dual roles of causability and explainability in terms of their underlying links to trust and subsequent user behaviors. Explanations of why certain news articles are recommended generate users' trust, whereas causability, the extent to which users can understand those explanations, affords users emotional confidence. Causability provides the justification for what should be explained and how, as it determines the relative importance of the properties of explainability. The results have implications for the inclusion of causability and explanatory cues in AI systems, which help to increase trust and help users to assess the quality of explanations. Causable explainable AI will help people understand the decision-making process of AI algorithms by bringing transparency and accountability into AI systems.
Causability, Explainable AI, Explanatory cues, Glass box, Human-AI interaction, Human-centered AI, Interpretability, Trust, Understandability
Shin, Donghee, "The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI" (2021). All Works. 3421.
Indexed in Scopus