Document Type
Article
Source of Publication
Frontiers in Medicine
Publication Date
7-24-2025
Abstract
The emergence of both task-specific single-modality models and general-purpose multimodal large models presents new opportunities but also introduces challenges, particularly regarding adversarial attacks. In high-stakes domains such as healthcare, these attacks can severely undermine the reliability of models and their applicability in real-world scenarios, highlighting the critical need for research on adversarial robustness. This study investigates the behavior of multimodal models under various adversarial attack scenarios. We conducted experiments involving two modalities: images and text. Our findings indicate that multimodal models exhibit greater resilience against adversarial attacks than their single-modality counterparts, supporting our hypothesis that the integration of multiple modalities contributes positively to the robustness of deep learning systems. The results of this research advance understanding in the fields of multimodality and adversarial robustness and suggest new avenues for future studies on optimizing data flow within multimodal systems.
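For readers unfamiliar with the attacks the abstract refers to, the sketch below illustrates the fast gradient sign method (FGSM), one standard image-space adversarial attack. The record does not specify which attacks the study used, so this is purely illustrative; the model, labels, and epsilon value are assumptions, not details from the paper.

    import torch
    import torch.nn as nn

    def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                    epsilon: float = 0.03) -> torch.Tensor:
        """Perturb input images x to increase the classification loss.

        FGSM is shown here only as a representative attack; the paper's
        actual attack suite is not listed in this record.
        """
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the direction of the sign of the loss gradient, then
        # clamp to keep pixel values in a valid [0, 1] range.
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

A robustness comparison of the kind the abstract describes would then evaluate a single-modality classifier and a multimodal one on such perturbed inputs and compare the resulting accuracy drops.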
DOI Link
ISSN
Publisher
Frontiers Media SA
Volume
12
Disciplines
Computer Sciences
Keywords
adversarial attack, classification, machine learning (ML), multimodal data fusion, X-ray
Scopus ID
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
Recommended Citation
Mozhegova, Ekaterina; Khattak, Asad Masood; Khan, Adil; Garaev, Roman; Rasheed, Bader; and Anwar, Muhammad Shahid, "Assessing the adversarial robustness of multimodal medical AI systems: insights into vulnerabilities and modality interactions" (2025). All Works. 7469.
https://zuscholars.zu.ac.ae/works/7469
Indexed in Scopus
yes
Open Access
yes
Open Access Type
Gold: This publication is openly available in an open access journal/series