Exploring the human factors in moral dilemmas of autonomous vehicles

Document Type

Article

Source of Publication

Personal and Ubiquitous Computing

Publication Date

1-1-2022

Abstract

Given the growing adoption of autonomous vehicles (AVs), researchers have been exploring their ethical implications. Empirical experiments are believed to provide insight into how humans characterize ethically sound machine behaviour. Previous research indicates that humans generally endorse utilitarian AVs; this paper, however, explores an alternative account of ethical decision-making in AVs. We refrain from favouring either consequentialist or non-consequentialist ethical theories and argue that human moral decision-making is pragmatic, or in other words, ethically and rationally bounded, especially in the context of intelligent environments. We hold that our moral preferences shift with various externalities and biases. To examine this, we conduct three Amazon Mechanical Turk studies comprising 479 respondents, investigating factors such as the “degree of harm,” “level of affection,” and “fixing the responsibility” that influence people’s moral decision-making. Our findings suggest that human moral judgments cannot be wholly deontological or utilitarian and offer evidence of ethical variation in human decision-making that shifts which moral framework is favoured. The findings also offer valuable insights for policymakers exploring public perception of the ethical implications of AVs as part of user decision-making in intelligent environments.

ISSN

1617-4909

Publisher

Springer Science and Business Media LLC

Disciplines

Computer Sciences

Keywords

Autonomous vehicles, Bounded rationality, Intelligent environment user decision-making, Moral dilemmas, Technology ethics, Trolley problem

Scopus ID

85131826812

Indexed in Scopus

yes

Open Access

no
