Talking Tech in Education: A Systematic Review of Chatbot Evaluation, Methods, Communication Modalities, and Impact
Document Type
Article
Source of Publication
International Journal of Human Computer Interaction
Publication Date
8-7-2025
Abstract
This systematic review investigates how chatbots are evaluated and adopted in education, namely the methods used to assess their effectiveness, their communication modalities, and their benefits and challenges. Following the PRISMA reporting guidelines, a comprehensive search was conducted across seven academic databases covering studies published between 2014 and 2024, yielding 124 empirical studies that met the inclusion and exclusion criteria. Thematic analysis focused on evaluation criteria, advantages, and disadvantages, while data collection methods and communication modalities were analyzed descriptively. Eight key themes were identified in the assessment of educational chatbots, including technology acceptance, learning outcomes, and usability. Surveys were the most common data collection method, followed by interviews. Text was the dominant communication modality, while multimodal formats were rarely used. Reported benefits included 24/7 availability, personalized feedback, and improved motivation; however, challenges such as technical issues and academic integrity concerns were also noted. Future research should explore longitudinal impacts and more inclusive assessment frameworks.
DOI Link
ISSN
Publisher
Informa UK Limited
Disciplines
Computer Sciences
Keywords
Chatbot, education, evaluation, human-computer interaction, user experience
Scopus ID
Recommended Citation
Al-Shamaileh, Ons; Mubin, Omar; and Nizamuddin, Nishara, "Talking Tech in Education: A Systematic Review of Chatbot Evaluation, Methods, Communication Modalities, and Impact" (2025). All Works. 7419.
https://zuscholars.zu.ac.ae/works/7419
Indexed in Scopus
yes
Open Access
no