The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI

Author First name, Last name, Institution

Donghee Shin, Zayed University

Document Type

Article

Source of Publication

International Journal of Human-Computer Studies

Publication Date

2-1-2021

Abstract

Artificial intelligence and algorithmic decision-making processes are increasingly criticized for their black-box nature. Explainable AI approaches that trace human-interpretable decision processes from algorithms have been explored, yet little is known about algorithmic explainability from a human factors perspective. From the perspective of user interpretability and understandability, this study examines the effect of explainability in AI on user trust and attitudes toward AI. It conceptualizes causability as an antecedent of explainability and as a key cue of an algorithm, and examines both in relation to trust by testing how they affect user-perceived performance of AI-driven services. The results show the dual roles of causability and explainability in terms of their underlying links to trust and subsequent user behaviors. Explanations of why certain news articles are recommended generate user trust, whereas causability, the extent to which users can understand those explanations, affords users emotional confidence. Causability provides the justification for what should be explained and how, as it determines the relative importance of the properties of explainability. The results have implications for the inclusion of causability and explanatory cues in AI systems, which help to increase trust and help users assess the quality of explanations. Causable explainable AI will help people understand the decision-making process of AI algorithms by bringing transparency and accountability into AI systems.

ISSN

1071-5819

Publisher

Academic Press

Volume

146

First Page

102551

Disciplines

Computer Sciences

Keywords

Causability, Explainable AI, Explanatory cues, Glass box, Human-AI interaction, Human-centered AI, Interpretability, Trust, Understandability

Scopus ID

85094928986

Indexed in Scopus

yes

Open Access

no
