Document Type

Article

Source of Publication

Digital

Publication Date

3-1-2026

Abstract

Text-to-video (T2V) generation has recently emerged as a transformative technology within the field of generative AI, enabling the creation of realistic, temporally coherent videos from natural language descriptions. This paradigm offers significant value across many domains, including creative media, human-computer interaction, immersive learning, and simulation. Despite its growing importance, systematic discussion of T2V remains limited compared with adjacent modalities such as text-to-image and image-to-video. To help address this gap, this paper provides a systematic review of works published from 2024 onward, consolidating fragmented contributions across the field. We survey and categorize the selected literature into three principal areas—namely, T2V methods, datasets, and evaluation practices—and further subdivide each area into subcategories that reflect recurring themes and methodological patterns in the literature. Emphasis is then placed on identifying key research opportunities and open challenges that warrant further investigation.

ISSN

2673-6470

Publisher

MDPI AG

Volume

6

Issue

1

Disciplines

Computer Sciences

Keywords

large language model, literature review, text-to-video generation, video generation

Scopus ID

105033657079

Creative Commons License

Creative Commons Attribution 4.0 International License

Indexed in Scopus

yes

Open Access

yes

Open Access Type

Gold: This publication is openly available in an open access journal/series