Comparative analysis of moral decision-making and trust dynamics: human reasoning vs. ChatGPT-3 narratives

Document Type

Article

Source of Publication

AI and Ethics

Publication Date

10-29-2024

Abstract

As artificial intelligence (AI) becomes increasingly integrated into everyday decision-making, so too does the influence of large language models such as ChatGPT. Although AI systems can generate moral judgments, humans and AI may approach moral dilemmas in distinctly different ways. This study explores these differences through a comparative analysis of moral decision-making narratives produced by human participants and by AI, specifically ChatGPT-3. Key evaluation metrics included causality, explicability, and overall satisfaction. Participants were presented with a complex moral dilemma and asked to justify their decisions; their justifications were then compared with AI-generated responses. Surprisingly, the study found no significant difference in the quality of the explanations produced by ChatGPT-3 and by human respondents. In the second phase, we examined the role of verification methods in fostering trust in these explanations. Participants evaluated explanations that were verified by humans, verified by AI, or left unverified, and assigned trust scores accordingly. The results demonstrate that human verification significantly enhances trust in explanations, whereas AI verification, though beneficial, had a smaller effect. This study underscores the importance of distinguishing between moral and ethical reasoning in AI systems and highlights the role of verification in trust-building.

ISSN

2730-5961

Publisher

Springer Nature

First Page

1

Last Page

14

Disciplines

Computer Sciences

Keywords

Moral decision-making, Explanatory quality, Trust, Verification, ChatGPT-3, Human-AI interaction

Indexed in Scopus

no

Open Access

no
