Exploring the human factors in moral dilemmas of autonomous vehicles
Source of Publication
Personal and Ubiquitous Computing
Given the growing prominence of autonomous vehicles (AVs), researchers have been exploring their ethical implications. Researchers believe that empirical experiments can provide insights into how humans characterize ethically sound machine behaviour. Previous research indicates that humans generally endorse utilitarian AVs; this paper, however, explores an alternative account of ethical decision-making in AVs. We refrain from favouring either consequentialist or non-consequentialist ethical theories and argue that human moral decision-making is pragmatic, or in other words, ethically and rationally bounded, especially in the context of intelligent environments. We hold the perspective that our moral preferences shift in response to various externalities and biases. To examine this concept, we conducted three Amazon Mechanical Turk studies, comprising 479 respondents, to investigate factors such as the “degree of harm,” “level of affection,” and “fixing the responsibility” that influence people’s moral decision-making. Our experimental findings suggest that human moral judgments are neither wholly deontological nor wholly utilitarian, and they offer evidence of ethical variations in human decision-making that resist any single moral framework. The findings also offer valuable insights for policymakers seeking to understand public perception of the ethical implications of AVs as part of user decision-making in intelligent environments.
Springer Science and Business Media LLC
Autonomous vehicles, Bounded rationality, Intelligent environment user decision-making, Moral dilemmas, Technology ethics, Trolley problem
Shah, Muhammad Umair; Rehman, Umair; Iqbal, Farkhund; and Ilahi, Hassan, "Exploring the human factors in moral dilemmas of autonomous vehicles" (2022). All Works. 5191.
Indexed in Scopus