A Vision for All: A Dataset and CNN Framework for Navigating Educational Spaces for the Visually Impaired
Document Type
Conference Proceeding
Source of Publication
International Conference on Artificial Intelligence, Computer, Data Sciences and Applications (ACDSA 2025)
Publication Date
9-24-2025
Abstract
Over the years, numerous datasets featuring both images and text have been introduced, driving the development of innovative methods that integrate natural language processing and computer vision. Nonetheless, there remains a demand for datasets that present images within their authentic context. This paper presents a large-scale image dataset collected from Zayed University and the British University in Dubai, designed to support the development of AI-powered assistive technologies for visually impaired individuals in academic settings. The dataset comprises 300,000 images of university facilities such as classrooms, labs, and safety features. Images were captured through direct photography and video frame extraction under a range of conditions. Preprocessing techniques ensured the dataset's quality and variability. CNN and ResNet50 classification models were trained on the dataset, achieving 90% and 93% accuracy, respectively. This dataset paves the way for an envisioned intelligent Arabic voice-based navigation system supporting both standard and Gulf-dialect Arabic. The paper details the dataset's creation process, its structure, and its potential to advance research in assistive technologies for individuals with visual impairments.
ISBN
9798331535629
Publisher
IEEE
Disciplines
Computer Sciences
Keywords
Environmental Recognition Systems, Image Dataset for Assistive AI, ResNet50, Visually Impaired
Recommended Citation
Belqasmi, Fatna; Loucif, Samia; and Alkhatib, Manar, "A Vision for All: A Dataset and CNN Framework for Navigating Educational Spaces for the Visually Impaired" (2025). All Works. 7578.
https://zuscholars.zu.ac.ae/works/7578
Indexed in Scopus
yes
Open Access
no