Navigating the dark side of AI in service ecosystems: an ethical leadership framework for risk mitigation

Document Type

Article

Source of Publication

Service Industries Journal

Publication Date

3-16-2026

Abstract

The rapid integration of artificial intelligence (AI) into service ecosystems is transforming value co-creation while generating significant ethical risks that threaten customer trust, organisational legitimacy, and social sustainability. This paper develops the Ethical AI Risk Mitigation (EAIRM) model to examine how different configurations of human-AI collaboration create distinct ethical challenges across the dimensions of fairness, autonomy, transparency, and accountability. Drawing on a structured literature synthesis, we identify four leadership approaches (compliance-oriented, values-based, stakeholder-engaged, and anticipatory) that systematically mitigate ethical risks while enabling service innovation. Through integrative theory building, the model contributes to service research and practice by: (1) revealing how identical ethical risks operate through different causal mechanisms depending on the human-AI resource configuration; (2) specifying multi-actor governance structures for service ecosystems in which no single actor controls ethical outcomes; (3) theorising leadership mechanisms and organisational mediators that convert ethical principles into operational practices; and (4) generating testable propositions with boundary conditions, moderators, and feedback dynamics. This framework advances service ecosystem theory by demonstrating that resource relations carry not merely value creation potential but ethical risk implications that require polycentric governance.

ISSN

0264-2069

Publisher

Informa UK Limited

Disciplines

Business

Keywords

AI governance, Artificial intelligence, ethical leadership, human-AI relations, responsible AI, risk mitigation, service ecosystems

Scopus ID

105033010040

Indexed in Scopus

yes

Open Access

no
