Understanding user sensemaking in fairness and transparency in algorithms: algorithmic sensemaking in over-the-top platform

Source of Publication

AI and Society


A number of artificial intelligence (AI) systems have been proposed to help users identify issues of algorithmic fairness and transparency. These systems employ diverse bias-detection methods from various perspectives, including exploratory cues, interpretable tools, and algorithm disclosure. This study informs the design of such AI systems by probing how users make sense of fairness and transparency, which are hypothetical in nature and lack specific means of evaluation. Focusing on individual perceptions of fairness and transparency, this study examines the roles of these normative values in over-the-top (OTT) platforms by empirically testing their effects on sensemaking processes. A mixed-methods design combining qualitative and quantitative approaches was used to discover user heuristics and to test the effects of these normative values on user acceptance. Collectively, a composite concept of transparent fairness emerged from users' sensemaking processes, along with its formative role in shaping perceived quality and credibility. From a sensemaking perspective, this study discusses the implications of transparent fairness for algorithmic media platforms by clarifying how and what should be done to make algorithmic media more trustworthy and reliable. Based on the findings, a theoretical model is developed that defines transparent fairness as an essential algorithmic attribute in the context of OTT platforms.




Springer Science and Business Media LLC


Social and Behavioral Sciences


Algorithmic credibility, Algorithmic information processing, Algorithmic normative values, Algorithmic sensemaking, OTT platforms, Transparent fairness

Indexed in Scopus


Open Access