Posted on 2019-02-08, 11:15. Authored by S Wang, X Zhang, L Chen, H Zhou, J Dong
Person re-identification aims at matching individuals across multiple camera views in surveillance
systems. The major challenges lie in the lack of spatial and temporal cues, which makes it difficult to cope
with large variations in lighting conditions, viewing angles, body poses and occlusions. How to extract
multimodal features, including facial, physical, behavioral and color features, remains a fundamental
problem in person re-identification. In this paper, we propose a novel Convolutional Neural Network,
called Asymmetric Filtering-based Dense Convolutional Neural Network (AFD-CNN), to learn powerful
features: it extracts features at different levels and takes advantage of identity information. Moreover,
instead of using typical metric learning methods, we obtain the ranking lists by combining Joint Bayesian
and re-ranking techniques, which do not require dimensionality reduction. Finally, extensive experiments
show that the proposed architecture performs well on four popular benchmark datasets (CUHK01,
CUHK03, Market-1501 and DukeMTMC-reID).
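One common reading of "asymmetric filtering" is the use of factorized 1xk / kx1 convolutions inside a DenseNet-style block. The snippet below is a minimal sketch under that assumption only; it is not the paper's AFD-CNN architecture, and all layer names, channel sizes and the growth rate are placeholder choices.

```python
# Illustrative sketch: a dense block whose 3x3 convolution is factorized into
# a 1x3 followed by a 3x1 convolution ("asymmetric filtering" under the
# assumption stated above). Not the authors' implementation.
import torch
import torch.nn as nn

class AsymmetricDenseLayer(nn.Module):
    """One BN-ReLU-Conv dense layer using asymmetric (1x3 then 3x1) filters."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth_rate, kernel_size=(1, 3), padding=(0, 1)),
            nn.Conv2d(growth_rate, growth_rate, kernel_size=(3, 1), padding=(1, 0)),
        )

    def forward(self, x):
        # Dense connectivity: concatenate the new feature maps with the input.
        return torch.cat([x, self.body(x)], dim=1)

class AsymmetricDenseBlock(nn.Module):
    """Stack several asymmetric dense layers; channels grow by growth_rate per layer."""
    def __init__(self, in_channels, growth_rate=32, num_layers=4):
        super().__init__()
        layers, channels = [], in_channels
        for _ in range(num_layers):
            layers.append(AsymmetricDenseLayer(channels, growth_rate))
            channels += growth_rate
        self.block = nn.Sequential(*layers)
        self.out_channels = channels

    def forward(self, x):
        return self.block(x)
```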
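The abstract also states that ranking lists are obtained by merging Joint Bayesian with re-ranking, without dimensionality reduction. The sketch below illustrates that general idea: a Joint Bayesian log-likelihood-ratio score computed from estimated within-class and between-class covariances, blended with a simplified neighbourhood-overlap re-ranking term. The covariance estimates, the neighbourhood size k and the blending weight lambda_ are illustrative assumptions, and the re-ranking step is a simplification in the spirit of k-reciprocal re-ranking, not the paper's exact procedure.

```python
# Minimal sketch (not the authors' code) of Joint Bayesian scoring plus a
# simplified re-ranking step on extracted person features.
import numpy as np

def joint_bayesian_scores(probe, gallery, S_mu, S_eps):
    """Log-likelihood ratio r(x1, x2) = log p(same) - log p(different),
    up to an additive constant that does not affect the ranking.
    probe: (d,) feature; gallery: (n, d) features;
    S_mu, S_eps: between-class and within-class covariance estimates."""
    d = probe.shape[0]
    # Joint covariance under the "same identity" and "different identity" hypotheses.
    Sigma_I = np.block([[S_mu + S_eps, S_mu], [S_mu, S_mu + S_eps]])
    Sigma_E = np.block([[S_mu + S_eps, np.zeros((d, d))],
                        [np.zeros((d, d)), S_mu + S_eps]])
    M = np.linalg.inv(Sigma_E) - np.linalg.inv(Sigma_I)
    scores = []
    for g in gallery:
        z = np.concatenate([probe, g])
        scores.append(0.5 * z @ M @ z)  # higher score => more likely same identity
    return np.array(scores)

def rerank(scores, gallery_scores, k=5, lambda_=0.3):
    """Blend the original probe-gallery scores with a Jaccard overlap between
    the probe's top-k neighbours and each gallery image's top-k neighbours
    (computed from the gallery-to-gallery score matrix)."""
    n = len(scores)
    probe_topk = set(np.argsort(-scores)[:k])
    jaccard = np.zeros(n)
    for i in range(n):
        neigh_i = set(np.argsort(-gallery_scores[i])[:k])
        jaccard[i] = len(probe_topk & neigh_i) / len(probe_topk | neigh_i)
    return (1 - lambda_) * scores + lambda_ * jaccard
```

A final ranking list would then be obtained by sorting the gallery by the blended scores in descending order.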
Funding
This work is supported by the National Natural Science Foundation of China (NSFC) under Grants U1706218, 61602229, 41606198, 61501417 and 41706010, and by the Natural Science Foundation of Shandong Province under Grants ZR2016FM13 and ZR2016FB02. H. Zhou was supported in part by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 720325 FoodSmartphone, the UK EPSRC under Grant EP/N011074/1, and the Royal Society-Newton Advanced Fellowship under Grant NA160342.
Citation
Journal of Visual Communication and Image Representation, 2018, 57, pp. 262-271
Author affiliation
College of Science and Engineering, Department of Informatics
Version
AM (Accepted Manuscript)
Published in
Journal of Visual Communication and Image Representation
The file associated with this record is under embargo until 12 months after publication, in accordance with the publisher's self-archiving policy. The full text may be available through the publisher links provided above.