Multi-shot Person Re-Identification Using Part Appearance Mixture
Abstract
Appearance-based person re-identification in real-world video surveillance systems is a challenging problem for many reasons, including the inability of existing low-level features to robustly describe a person's appearance under significant changes in viewpoint, illumination, or camera characteristics. One approach to handling appearance variability is to learn similarity metrics or ranking functions that implicitly model the appearance transformation between cameras for each camera pair, or group, in the system. The alternative, which this paper follows, is the more fundamental approach of improving appearance descriptors, called signatures, to accommodate high appearance variance and occlusions. The novel signature representation for multi-shot person re-identification presented in this paper uses multiple appearance models, each describing appearance as a probability distribution of a low-level feature over a certain portion of an individual's body. Combined with metric learning, rank-1 recognition rates of 92.5% and 79.5% are achieved on the PRID2011 [12] and iLIDS-VID [34] datasets, respectively.
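As a rough illustration of the signature idea only (not the paper's actual pipeline), the sketch below builds a toy multi-shot, part-based signature: per-part color histograms accumulated over the frames of a person track and normalized into probability distributions, then compared part-by-part. The horizontal-strip body partition, histogram features, and Bhattacharyya comparison are all assumptions made for this sketch.

```python
import numpy as np

def part_appearance_mixture(frames, n_parts=4, n_bins=16):
    """Toy multi-shot signature: one color-histogram distribution per body part,
    accumulated over all frames of a person track (hypothetical construction)."""
    hists = np.zeros((n_parts, 3 * n_bins))
    for img in frames:                       # img: HxWx3 uint8 person crop
        h = img.shape[0]
        bounds = np.linspace(0, h, n_parts + 1, dtype=int)
        for p in range(n_parts):
            strip = img[bounds[p]:bounds[p + 1]]
            for c in range(3):               # accumulate a histogram per color channel
                counts, _ = np.histogram(strip[..., c], bins=n_bins, range=(0, 256))
                hists[p, c * n_bins:(c + 1) * n_bins] += counts
    # Normalize each part's accumulated histogram into a probability distribution.
    hists /= hists.sum(axis=1, keepdims=True) + 1e-12
    return hists                             # shape: (n_parts, 3 * n_bins)

def signature_distance(sig_a, sig_b):
    """Compare two signatures part-by-part via the Bhattacharyya distance
    between corresponding distributions, averaged over parts."""
    bc = np.sum(np.sqrt(sig_a * sig_b), axis=1)   # Bhattacharyya coefficient per part
    return float(np.mean(-np.log(bc + 1e-12)))    # smaller = more similar
```

In the paper itself the per-part appearance distributions are combined with a learned metric rather than a fixed histogram distance; the sketch only conveys the structure of a multi-shot signature built from per-part feature distributions.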