MDVA-GAN: multi-domain visual attribution generative adversarial networks
Document Type
Article
Source of Publication
Neural Computing and Applications
Publication Date
1-1-2022
Abstract
Some pixels of an input image carry rich information and provide insights about a particular category during classification decisions. Visualizing these pixels is a well-studied problem in computer vision, called visual attribution (VA), which helps radiologists recognize abnormalities and identify a particular disease in a medical image. In recent years, several classification-based techniques for domain-specific attribute visualization have been proposed, but these techniques can only highlight a small subset of the most discriminative features. Therefore, their generated VA maps are inadequate for visualizing all affected regions in an input image. To address this issue, and owing to recent advances in generative models, generative model-based VA techniques have been introduced that produce more complete VA maps covering all affected regions. In particular, generative adversarial network (GAN)-based VA techniques have recently been proposed, in which researchers leverage advances in domain adaptation to learn a mapping for abnormal-to-normal medical image translation. Because these approaches rely on a two-domain translation model, they require training as many models as there are diseases in a medical dataset, which is a tedious and compute-intensive task. In this work, we introduce a unified multi-domain VA model that generates VA maps for more than one disease at a time. The proposed unified model takes an image from a particular domain together with its domain label as input, generates a VA map, and visualizes all regions affected by that particular disease. Experiments on the CheXpert dataset, a publicly available multi-disease chest radiograph dataset, and on the TBX11K dataset show that the proposed unified model generates results comparable to those of disease-specific models.
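To make the abstract's core idea concrete, the sketch below illustrates, in PyTorch, how a single conditional generator can serve multiple disease domains: the one-hot domain label is tiled into extra channels and concatenated with the input image, the generator produces a "normalized" image, and the VA (change) map is the difference between input and output. This is a minimal, hypothetical illustration under assumed layer sizes and names; it is not the authors' MDVA-GAN architecture or training procedure.

```python
# Minimal sketch of a multi-domain visual-attribution generator (assumed design,
# not the published MDVA-GAN implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConditionalGenerator(nn.Module):
    """Maps an abnormal image plus a one-hot disease label to a 'normal' image."""

    def __init__(self, img_channels: int = 1, num_domains: int = 3, base: int = 32):
        super().__init__()
        in_ch = img_channels + num_domains  # domain label enters as extra channels
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, img_channels, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, x: torch.Tensor, label: torch.Tensor) -> torch.Tensor:
        # Tile the one-hot domain label over the spatial grid and concatenate it
        # with the image, so one generator handles every disease domain.
        b, _, h, w = x.shape
        label_map = label.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([x, label_map], dim=1))


def visual_attribution_map(generator, abnormal, label):
    """VA (change) map: difference between the input and its 'normalized' version."""
    with torch.no_grad():
        normal = generator(abnormal, label)
    return abnormal - normal


if __name__ == "__main__":
    g = ConditionalGenerator(img_channels=1, num_domains=3)
    x = torch.randn(2, 1, 64, 64)  # stand-in batch of chest X-rays
    y = F.one_hot(torch.tensor([0, 2]), num_classes=3).float()  # per-image domains
    va = visual_attribution_map(g, x, y)
    print(va.shape)  # torch.Size([2, 1, 64, 64])
```

Conditioning on the domain label in this way is what removes the need to train one abnormal-to-normal translation model per disease, since the same weights are shared across all domains.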
Publisher
Springer Science and Business Media LLC
Disciplines
Computer Sciences
Keywords
Abnormal-to-normal translation, Change map, Chest X-ray, Generative adversarial network, Tuberculosis, Visual attribution
Recommended Citation
Nawaz, Muhammad; Al-Obeidat, Feras; Tubaishat, Abdallah; Zia, Tehseen; Maqbool, Fahad; and Rocha, Alvaro, "MDVA-GAN: multi-domain visual attribution generative adversarial networks" (2022). All Works. 4858.
https://zuscholars.zu.ac.ae/works/4858
Indexed in Scopus
yes
Open Access
no