Abstract
Recently, many methods for person re-identification (Re-ID) have relied on part-based feature representations to learn a discriminative pedestrian descriptor. However, the spatial context between these parts is ignored, because an independent extractor is applied to each separate part. In this paper, we propose to apply a Long Short-Term Memory (LSTM) network in an end-to-end way to model a pedestrian as a sequence of body parts from head to foot. Integrating this contextual information strengthens the discriminative ability of the local representations. We also leverage the complementary information between local and global features. Furthermore, we integrate both the identification task and the ranking task in one network, where a discriminative embedding and a similarity measurement are learned concurrently. This results in a novel three-branch framework named Deep-Person, which learns highly discriminative features for person Re-ID. Experimental results demonstrate that Deep-Person outperforms state-of-the-art methods by a large margin on three challenging datasets: Market-1501, CUHK03, and DukeMTMC-reID. Specifically, combined with a re-ranking approach, we achieve 90.84% mAP on Market-1501 under the single-query setting.
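The core idea above, treating a pedestrian as a head-to-foot sequence of body parts and fusing them with an LSTM, can be sketched as follows. This is a minimal illustration, not the authors' released code: the number of parts, channel dimension, and hidden size are assumed values chosen for the example.

```python
# Hedged sketch of the part-sequence LSTM idea: horizontal stripes of a CNN
# feature map, ordered head to foot, are fed to an LSTM so each part's
# representation carries spatial context. All sizes are illustrative.
import torch
import torch.nn as nn

class PartSequenceLSTM(nn.Module):
    def __init__(self, in_channels=2048, hidden_size=256, num_parts=6):
        super().__init__()
        # Collapse each horizontal stripe of the feature map to one vector.
        self.pool = nn.AdaptiveAvgPool2d((num_parts, 1))
        self.lstm = nn.LSTM(in_channels, hidden_size, batch_first=True)

    def forward(self, feat_map):
        # feat_map: (B, C, H, W) from a CNN backbone (assumed shape)
        parts = self.pool(feat_map)                 # (B, C, num_parts, 1)
        parts = parts.squeeze(-1).permute(0, 2, 1)  # (B, num_parts, C), head→foot
        out, _ = self.lstm(parts)                   # context-aware part features
        return out[:, -1]                           # final state as local descriptor

feat = torch.randn(2, 2048, 24, 8)  # dummy backbone output for two images
desc = PartSequenceLSTM()(feat)
print(desc.shape)                   # torch.Size([2, 256])
```

Because the LSTM reads the parts in order, each part's output depends on the parts above it, which is what distinguishes this from pooling each stripe independently.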
Method


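The abstract also describes combining an identification task and a ranking task in one network. A common way to realize this, sketched below under the assumption of a softmax cross-entropy identification loss and a triplet ranking loss (the margin and embedding size are illustrative, not taken from the paper):

```python
# Hedged sketch of a joint identification + ranking objective:
# cross-entropy over person IDs plus a triplet margin loss on embeddings.
import torch
import torch.nn as nn

id_loss_fn = nn.CrossEntropyLoss()
rank_loss_fn = nn.TripletMarginLoss(margin=0.3)  # margin is an assumed value

logits = torch.randn(4, 751, requires_grad=True)  # 751 IDs, as in Market-1501
labels = torch.randint(0, 751, (4,))
anchor = torch.randn(4, 256)  # embeddings for anchor / positive / negative
positive = torch.randn(4, 256)
negative = torch.randn(4, 256)

# Both objectives are optimized concurrently in one backward pass.
loss = id_loss_fn(logits, labels) + rank_loss_fn(anchor, positive, negative)
loss.backward()
```

Training both heads on a shared backbone lets the network learn a discriminative embedding (from the ID loss) and a similarity measurement (from the ranking loss) at the same time.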
Results
We evaluate our proposed method, Deep-Person, on three widely used large-scale datasets: Market-1501, CUHK03, and DukeMTMC-reID.
BibTeX
@article{bai2017deep,
  title   = {Deep-Person: Learning Discriminative Deep Features for Person Re-Identification},
  author  = {Bai, Xiang and Yang, Mingkun and Huang, Tengteng and Dou, Zhiyong and Yu, Rui and Xu, Yongchao},
  journal = {arXiv preprint arXiv:1711.10658},
  year    = {2017}
}