Document Type
Article
Source of Publication
Digital
Publication Date
3-1-2026
Abstract
Text-to-video (T2V) generation has recently emerged as a transformative technology within the field of generative AI, enabling the creation of realistic, temporally coherent videos from natural language descriptions. This paradigm offers significant value across many domains, such as creative media, human-computer interaction, immersive learning, and simulation. Despite its growing importance, systematic discussion of T2V remains limited compared with adjacent modalities such as text-to-image and image-to-video. To address this gap, this paper provides a systematic review of works published from 2024 onward, consolidating fragmented contributions across the field. We survey and categorize the selected literature into three principal areas—namely, T2V methods, datasets, and evaluation practices—and further subdivide each area into subcategories that reflect recurring themes and methodological patterns in the literature. Emphasis is then placed on identifying key research opportunities and open challenges that warrant further investigation.
DOI Link
ISSN
Publisher
MDPI AG
Volume
6
Issue
1
Disciplines
Computer Sciences
Keywords
large language model, literature review, text-to-video generation, video generation
Scopus ID
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.
Recommended Citation
Hayawi, Kadhim and Shahriar, Sakib, "Generative AI for Text-to-Video Generation: Recent Advances and Future Directions" (2026). All Works. 7987.
https://zuscholars.zu.ac.ae/works/7987
Indexed in Scopus
yes
Open Access
yes
Open Access Type
Gold: This publication is openly available in an open access journal/series