Visible-Infrared Person Re-Identification Based on Feature Decoupling and Refinement
Abstract
The objective of visible-infrared person re-identification is to accurately match pedestrian images captured in different modalities. Because these images are taken from varying viewpoints by different cameras, the cross-modal matching task must address both modality discrepancies and camera variations. Many existing approaches focus primarily on minimizing inter-modality differences to improve retrieval accuracy, often overlooking the impact of camera viewpoint differences. To tackle these challenges, this article introduces a hierarchical feature decoupling network. First, the network decouples and separately extracts camera-related and camera-irrelevant features to mitigate the effects of camera variations. Second, it addresses modality differences by extracting modality-independent features. In addition, an adversarial decoupling loss is employed to further disentangle identity-irrelevant information from identity-relevant features, thereby improving the system's accuracy and robustness. Extensive experiments on the SYSU-MM01 and RegDB datasets validate the effectiveness of the proposed method.
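The abstract does not specify how the adversarial decoupling loss is realized. A common way to implement such a loss is a gradient-reversal layer feeding a nuisance classifier (e.g., over camera or modality labels), so the sketch below is only one plausible reading, not the paper's actual method; the names `GradientReversal`, `AdversarialDecoupler`, and the hyperparameter `lambd` are hypothetical.

```python
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; negates and scales gradients on the way back."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient pushes the feature extractor to *remove* nuisance cues.
        return -ctx.lambd * grad_output, None


class AdversarialDecoupler(nn.Module):
    """Hypothetical adversarial decoupling head: a classifier tries to predict a
    nuisance label (camera ID or modality) from identity features, while the
    reversed gradient trains the backbone to make that prediction fail."""

    def __init__(self, feat_dim, num_nuisance_classes, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 2),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim // 2, num_nuisance_classes),
        )
        self.ce = nn.CrossEntropyLoss()

    def forward(self, id_features, nuisance_labels):
        reversed_feat = GradientReversal.apply(id_features, self.lambd)
        logits = self.classifier(reversed_feat)
        return self.ce(logits, nuisance_labels)


# Example usage (assumed shapes): 2048-dim identity features, 6 cameras.
decoupler = AdversarialDecoupler(feat_dim=2048, num_nuisance_classes=6)
id_feats = torch.randn(8, 2048, requires_grad=True)
camera_ids = torch.randint(0, 6, (8,))
loss_adv = decoupler(id_feats, camera_ids)  # added to the main re-ID objective
loss_adv.backward()
```

Under this reading, minimizing the combined objective makes the identity features uninformative about camera or modality, which matches the abstract's goal of disentangling identity-irrelevant information from identity-relevant features.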
