Document Type

Article

Source of Publication

AI & Society

Publication Date

3-5-2026

Abstract

Research shows that trust in AI is influenced by socio-ethical considerations, technical features of AI systems, and user characteristics. Yet the mediating role of emotional response between perceived risk and trust remains underexplored, particularly across different AI contexts. This cross-sectional vignette experiment explores the relationships among users' perceived risk, emotional response, and trust in AI, and examines how these relationships vary across levels of automation and criticality. An online survey gathered 639 participants: 316 from the UK and 323 from Arab Gulf Cooperation Council (GCC) countries. Participants rated their perceived risk, trust, and emotional response across four scenarios representing combinations of automation (low/high) and criticality (low/high). Correlation results indicate significant negative associations between perceived risk and trust, and between emotional response and perceived risk. These associations were weakest in the low-automation, low-criticality scenario and strengthened as criticality increased. Mediation analyses assessed the role of emotional response in the perceived risk-trust relationship. In the UK sample, emotional response fully mediated this relationship in the low-automation, low-criticality scenario and partially mediated it in the remaining scenarios. In the Arab sample, emotional response partially mediated the relationship in all scenarios. Gender and age showed varying influences in both samples. These findings underscore the critical role of emotional response in shaping users' trust across cultures and AI modalities, influencing trust alongside cognition. Awareness of emotional sources enables users to build trust on both rational and emotional grounds, highlighting the inadequacy of cognition alone for effective trust calibration. Enhancing system performance, addressing perceived risks, and promoting positive affect can foster greater user trust.

ISSN

0951-5666

Publisher

Springer Science and Business Media LLC

Disciplines

Computer Sciences

Keywords

Artificial intelligence, Perceived risk, Trust, Emotional response, Culture

Creative Commons License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Indexed in Scopus

no

Open Access

yes

Open Access Type

Hybrid: This publication is openly available in a subscription-based journal/series