Title
Dynamic Dual-Attentive Aggregation Learning for Visible-Infrared Person Re-identification
Source of Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract
© 2020, Springer Nature Switzerland AG. Visible-infrared person re-identification (VI-ReID) is a challenging cross-modality pedestrian retrieval problem. Due to the large intra-class variations and the cross-modality discrepancy with a large amount of sample noise, it is difficult to learn discriminative part features. Existing VI-ReID methods instead tend to learn global representations, which have limited discriminability and weak robustness to noisy images. In this paper, we propose a novel dynamic dual-attentive aggregation (DDAG) learning method by mining both intra-modality part-level and cross-modality graph-level contextual cues for VI-ReID. We propose an intra-modality weighted-part attention module to extract discriminative part-aggregated features by imposing domain knowledge on part-relationship mining. To enhance robustness against noisy samples, we introduce cross-modality graph-structured attention to reinforce the representation with the contextual relations across the two modalities. We also develop a parameter-free dynamic dual aggregation learning strategy to adaptively integrate the two components in a progressive joint training manner. Extensive experiments demonstrate that DDAG outperforms state-of-the-art methods under various settings.
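Illustrative sketch (not the authors' released DDAG code): a minimal part-level attention aggregation in PyTorch, showing the general idea of scoring horizontal part stripes of a backbone feature map and combining them into one descriptor, as the abstract's intra-modality weighted-part attention describes. All layer names, shapes, and the part count are assumptions for illustration only.

# Minimal sketch, assuming a ResNet-style feature map of shape (batch, channels, height, width).
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedPartAggregation(nn.Module):
    """Pools a feature map into P horizontal parts, scores each part with a
    small attention head, and returns the attention-weighted aggregate.
    Hypothetical module for illustration; not the paper's exact architecture."""
    def __init__(self, channels=2048, num_parts=4):
        super().__init__()
        self.num_parts = num_parts
        self.attn = nn.Sequential(
            nn.Linear(channels, channels // 16),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 16, 1),
        )

    def forward(self, feat_map):
        # Split the height axis into horizontal stripes and average-pool each one.
        parts = F.adaptive_avg_pool2d(feat_map, (self.num_parts, 1))  # (b, c, P, 1)
        parts = parts.squeeze(-1).permute(0, 2, 1)                    # (b, P, c)
        # Score each part and normalize the scores across parts.
        weights = torch.softmax(self.attn(parts), dim=1)              # (b, P, 1)
        # Attention-weighted sum of part features -> one descriptor per image.
        return (weights * parts).sum(dim=1)                           # (b, c)

if __name__ == "__main__":
    dummy = torch.randn(8, 2048, 18, 9)               # e.g. a conv5 output
    print(WeightedPartAggregation()(dummy).shape)     # torch.Size([8, 2048])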
Document Type
Conference Proceeding
ISBN
9783030585198
First Page
229
Last Page
247
Publication Date
1-1-2020
DOI
10.1007/978-3-030-58520-4_14
Scopus ID
85097087491
Recommended Citation
Ye, Mang; Shen, Jianbing; Crandall, David J.; Shao, Ling; and Luo, Jiebo, "Dynamic Dual-Attentive Aggregation Learning for Visible-Infrared Person Re-identification" (2020). Scopus Indexed Articles. 2680.
https://zuscholars.zu.ac.ae/scopus-indexed-articles/2680