Pattern Recognition and Artificial Intelligence  2022, Vol. 35 Issue (10): 904-914    DOI: 10.16451/j.cnki.issn1003-6059.202210004
The Applications of Deep Learning in Image and Vision
Unsupervised Cross-Modality Person Re-identification Based on Semantic Pseudo-Label and Dual Feature Memory Banks
SUN Rui1,2, YU Yiheng1,2, ZHANG Lei1,2, ZHANG Xudong1,2
1. School of Computer Science and Information Engineering, Hefei University of Technology, Hefei 230601;
2. Anhui Key Laboratory of Industry Safety and Emergency Technology, Hefei University of Technology, Hefei 230009

Abstract  Existing supervised visible-infrared person re-identification methods require substantial human effort to label data manually, and because they are constrained to the scenes covered by the labeled data, they generalize poorly to real, changeable application scenarios. In this paper, an unsupervised cross-modality person re-identification method based on semantic pseudo-labels and dual feature memory banks is proposed. Firstly, a pre-training method based on a contrastive learning framework is proposed, training on visible images together with auxiliary grayscale images generated from them. This pre-training yields a semantic feature extraction network that is robust to color changes. Then, semantic pseudo-labels are generated with the DBSCAN (density-based spatial clustering of applications with noise) clustering method. Compared with existing pseudo-label generation methods, the proposed method makes full use of the structural information between cross-modality data during label generation, and thus reduces the modality discrepancy caused by color changes across modalities. In addition, an instance-level hard-sample feature memory bank and a centroid-level clustering feature memory bank are constructed, making the model more robust to noisy pseudo-labels through hard-sample features and clustering features. Experimental results on two cross-modality datasets, SYSU-MM01 and RegDB, demonstrate the effectiveness of the proposed method.
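The pipeline sketched in the abstract (DBSCAN pseudo-labels, a centroid-level clustering memory bank, and an instance-level hard-sample memory bank with momentum updates) can be illustrated as follows. This is a minimal sketch, not the authors' implementation: the features are synthetic random embeddings standing in for network outputs, and the `eps`, `min_samples`, and momentum values are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)

# Synthetic L2-normalised "semantic features": 3 identities x 20 samples each,
# standing in for embeddings of visible/infrared person images.
centers = rng.normal(size=(3, 16))
feats = normalize(np.repeat(centers, 20, axis=0) + 0.05 * rng.normal(size=(60, 16)))

# 1) Semantic pseudo-labels via DBSCAN on cosine distance
#    (eps/min_samples are illustrative, not the paper's settings).
labels = DBSCAN(eps=0.3, min_samples=4, metric="cosine").fit_predict(feats)

# 2) Centroid-level clustering memory bank: one normalised centroid
#    per pseudo-class (DBSCAN marks noise samples as -1).
classes = np.unique(labels[labels >= 0])
centroids = normalize(np.stack([feats[labels == c].mean(axis=0) for c in classes]))

# 3) Instance-level hard-sample memory bank: for each pseudo-class, keep the
#    sample with the lowest cosine similarity to its own centroid.
hard_bank = np.stack([
    feats[labels == c][np.argmin(feats[labels == c] @ centroids[i])]
    for i, c in enumerate(classes)
])

# Momentum update of a memory-bank entry with a fresh batch feature
# (momentum m=0.2 is an illustrative value).
def momentum_update(bank, cls_idx, feat, m=0.2):
    bank[cls_idx] = normalize(((1 - m) * bank[cls_idx] + m * feat)[None])[0]
    return bank

centroids = momentum_update(centroids, 0, feats[0])
```

In training, the two banks would supply negatives and positives for a contrastive loss: the centroid bank pulls each sample toward its pseudo-class center, while the hard-sample bank sharpens the decision boundary against the most confusable instances.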
Key words: Unsupervised Cross-Modality Person Re-identification; Semantic Pseudo-Label; Dual Feature Memory Bank; Deep Learning
Received: 05 May 2022     
ZTFLH: TP 391  
Fund: General Project of National Natural Science Foundation of China (No.61876057), Natural Science Foundation of Anhui Province (No.2208085MF158), Key Research and Development Plan of Anhui Province-Special Project for Strengthening the Police in Science and Technology (No.202004D07020012)
Corresponding Author: SUN Rui, Ph.D., professor. His research interests include computer vision and machine learning.
About authors: YU Yiheng, master student. His research interests include image information processing and computer vision. ZHANG Lei, master student. His research interests include image information processing and computer vision. ZHANG Xudong, Ph.D., professor. His research interests include intelligent information processing and machine vision.
Cite this article:   
SUN Rui, YU Yiheng, ZHANG Lei, et al. Unsupervised Cross-Modality Person Re-identification Based on Semantic Pseudo-Label and Dual Feature Memory Banks[J]. Pattern Recognition and Artificial Intelligence, 2022, 35(10): 904-914.
URL:  
http://manu46.magtech.com.cn/Jweb_prai/EN/10.16451/j.cnki.issn1003-6059.202210004      OR     http://manu46.magtech.com.cn/Jweb_prai/EN/Y2022/V35/I10/904
Copyright © 2010 Editorial Office of Pattern Recognition and Artificial Intelligence
Address: No.350 Shushanhu Road, Hefei, Anhui Province, P.R. China  Tel: 0551-65591176  Fax: 0551-65591176  Email: bjb@iim.ac.cn