Dynamic Dual-Attentive Aggregation Learning for Visible-Infrared Person Re-identification

Document Type

Conference Proceeding

Source of Publication

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Publication Date

1-1-2020

Abstract

Visible-infrared person re-identification (VI-ReID) is a challenging cross-modality pedestrian retrieval problem. Because of large intra-class variations and cross-modality discrepancy, together with a large amount of sample noise, it is difficult to learn discriminative part features. Existing VI-ReID methods therefore tend to learn global representations, which have limited discriminability and weak robustness to noisy images. In this paper, we propose a novel dynamic dual-attentive aggregation (DDAG) learning method that mines both intra-modality part-level and cross-modality graph-level contextual cues for VI-ReID. We propose an intra-modality weighted-part attention module to extract discriminative part-aggregated features by imposing domain knowledge on the part-relationship mining. To enhance robustness against noisy samples, we introduce cross-modality graph structured attention to reinforce the representation with contextual relations across the two modalities. We also develop a parameter-free dynamic dual aggregation learning strategy to adaptively integrate the two components in a progressive joint-training manner. Extensive experiments demonstrate that DDAG outperforms state-of-the-art methods under various settings.
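
The abstract names three ingredients: an intra-modality weighted-part attention module, a cross-modality graph structured attention module, and a parameter-free dynamic dual aggregation schedule. The sketch below is only a rough, assumed PyTorch rendering of how such components could be wired up; all module names, tensor shapes, and the scheduling form are illustrative assumptions and not the authors' released implementation.

```python
import torch
import torch.nn as nn


class WeightedPartAttention(nn.Module):
    """Intra-modality attention over P body-part features (hypothetical layout)."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)  # one learned relevance score per part

    def forward(self, part_feats: torch.Tensor) -> torch.Tensor:
        # part_feats: (batch, num_parts, feat_dim)
        weights = torch.softmax(self.score(part_feats), dim=1)  # (B, P, 1)
        return (weights * part_feats).sum(dim=1)                # part-aggregated (B, D)


class CrossModalityGraphAttention(nn.Module):
    """Each sample attends to samples of the other modality to absorb contextual cues."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.proj_q = nn.Linear(feat_dim, feat_dim)
        self.proj_k = nn.Linear(feat_dim, feat_dim)

    def forward(self, feats_vis: torch.Tensor, feats_ir: torch.Tensor) -> torch.Tensor:
        # feats_vis: (Nv, D) visible features, feats_ir: (Ni, D) infrared features
        scale = feats_vis.size(1) ** 0.5
        attn = torch.softmax(self.proj_q(feats_vis) @ self.proj_k(feats_ir).t() / scale, dim=1)
        context = attn @ feats_ir            # cross-modality contextual representation
        return feats_vis + context           # residual refinement of the visible features


def dynamic_dual_weight(epoch: int, total_epochs: int) -> float:
    """Assumed parameter-free schedule: the graph-level term is phased in gradually
    so that part-level learning dominates early training."""
    return epoch / max(total_epochs, 1)
```

In a training loop, one plausible reading of the "progressive joint training" is to compute an identity loss on the part-aggregated features and a second loss on the graph-refined features, weighting the latter by dynamic_dual_weight(epoch, total_epochs); the exact integration used by DDAG is described in the paper itself.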

ISBN

9783030585198

ISSN

0302-9743

Publisher

Springer International Publishing

Volume

12362 LNCS

First Page

229

Last Page

247

Disciplines

Computer Sciences

Keywords

Cross-modality, Graph attention, Person re-identification

Indexed in Scopus

yes

Open Access

yes

Open Access Type

Green: A manuscript of this publication is openly available in a repository
