Why am I seeing this? Deconstructing algorithm literacy through the lens of users

Source of Publication

Internet Research


Purpose: As algorithms permeate nearly every aspect of digital life, artificial intelligence (AI) systems exert a growing influence on human behavior in the digital milieu. Despite the ubiquity of these systems, little is known about the roles and effects of algorithmic literacy (AL) in user acceptance. The purpose of this study is to contextualize AL in the AI environment by empirically examining how AL shapes users' information processing about algorithms. The authors analyze how users engage with over-the-top (OTT) platforms, how aware they are of algorithmic curation on those platforms, and how that awareness may shape their interaction with these systems.

Design/methodology/approach: The study employed multiple-group equivalence methods to test measurement invariance across two groups and the hypotheses concerning group differences in the effects of AL. The analysis examined how AL helps users envisage, understand and work with algorithms, depending on their understanding of how those algorithms control the flow of information.

Findings: The findings clarify the functions AL plays in the adoption of OTT platforms and how users experience algorithms, particularly where AI is used in OTT platforms to provide personalized recommendations. The results point to the heuristic functions of AL and its ties to trust and the ensuing attitudes and behavior. Heuristic processing grounded in AL strongly affects the perceived credibility of recommendations and how users judge the accuracy and personalization of results. The authors argue that critical assessment of AL must be understood not only in terms of how users evaluate the trustworthiness of a service, but also in terms of how AL performatively shapes algorithmic personalization.

Research limitations/implications: The relation between AL and trust in algorithms offers strategic direction for developing user-centered algorithms in OTT contexts. As the AI industry faces declining credibility, understanding user trust will yield insights into credibility and trust in algorithms. To cultivate a sense of literacy around algorithm consumption, the AI industry could provide examples of what positive engagement with algorithmic platforms looks like.

Originality/value: Users' cognitive processing of AL provides a conceptual framework for algorithm services and practical guidelines for the design of OTT services. Framing the cognitive process of AL in relation to trust contributes to the ongoing debate on algorithms and literacy. While the topic of AL is widely recognized, empirical evidence on its effects is relatively rare, particularly from the user's behavioral perspective, and no formal theoretical model of algorithmic decision-making based on the dual-processing model has previously been examined.






Computer Sciences


Accountability, Algorithmic literacy, Algorithmic platforms, Dual processing, Explainability, Fairness, Over-the-top, Transparency

Indexed in Scopus


Open Access