IACP FEB RAS (Institute of Automation and Control Processes, Far Eastern Branch of the Russian Academy of Sciences)

Discriminative Local Representation Learning for Cross-Modality Visible-Thermal Person Re-Identification


2023

IEEE Transactions on Biometrics, Behavior, and Identity Science, Scopus

Journal articles

Wu Y., He G.D., Wen L.H., Gribova V., Filaretov V.F., Qin X., Yuan C.A., Huang D.S. Discriminative Local Representation Learning for Cross-Modality Visible-Thermal Person Re-Identification // IEEE Transactions on Biometrics, Behavior, and Identity Science. 2023. Vol. 5, Is. 1. Pp. 1-14. DOI: 10.1109/TBIOM.2022.3184525.

Visible-thermal person re-identification (VTReID) is an emerging and challenging cross-modality retrieval task in intelligent video surveillance systems. Most attention architectures fail to extract discriminative person representations for VTReID, especially in the thermal modality. In addition, fine-grained middle-level semantic information has received far less attention in part-based approaches to cross-modality pedestrian retrieval, resulting in limited generalization capability and poor representation robustness. This paper proposes a simple yet powerful discriminative local representation learning (DLRL) model that captures robust local fine-grained feature representations and explores the rich semantic relationships among the learned part features. Specifically, an efficient contextual attention aggregation module (CAAM) is designed to strengthen the discriminative capability of the feature representations and exploit contextual cues in both the visible and thermal modalities. Then, an integrated middle-high feature learning (IMHF) method is introduced to capture part-level salient representations, handling the ambiguous modality discrepancy with both discriminative middle-level and robust high-level information. Moreover, a part-guided graph convolution module (PGCM) is constructed to mine the structural relationships among the part representations within each modality. Quantitative and qualitative experiments on two benchmark datasets demonstrate that the proposed DLRL model significantly outperforms state-of-the-art methods, achieving rank-1/mAP accuracy of 92.77%/82.05% on the RegDB dataset and 63.04%/60.58% on the SYSU-MM01 dataset.
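The abstract only names the modules, so the sketch below gives one plausible reading of the part-guided graph convolution idea: a soft adjacency matrix built from pairwise part-feature similarity, followed by neighbor aggregation with a residual connection. This is a minimal illustration, not the paper's formulation; the class name PartGraphConv, the similarity-based adjacency, and all dimensions are assumptions.

# Minimal sketch of a graph convolution over part-level features,
# loosely inspired by the PGCM described in the abstract. All names,
# shapes, and the adjacency construction are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartGraphConv(nn.Module):
    """One graph-convolution layer over P part features of dimension D."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, parts):  # parts: (B, P, D)
        # Soft adjacency from scaled pairwise feature similarity (assumed).
        sim = torch.matmul(parts, parts.transpose(1, 2))        # (B, P, P)
        adj = F.softmax(sim / parts.size(-1) ** 0.5, dim=-1)
        # Aggregate neighboring part features, transform, add residual.
        out = self.proj(torch.matmul(adj, parts))               # (B, P, D)
        return F.relu(out + parts)

if __name__ == "__main__":
    feats = torch.randn(8, 6, 256)  # batch of 8, 6 body parts, 256-d features
    gcn = PartGraphConv(256)
    print(gcn(feats).shape)         # torch.Size([8, 6, 256])

In this reading, the adjacency is data-dependent rather than fixed, so the relational structure among body parts is learned per sample within each modality; the actual DLRL model may define the graph differently.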

10.1109/TBIOM.2022.3184525

https://ieeexplore.ieee.org/document/9803280