Spatiotemporal Saliency Detection Based on Superpixel-level Trajectory
Abstract
In this paper, we propose a novel spatiotemporal saliency model based on superpixel-level trajectories for saliency detection in videos. The input video is first decomposed into a set of temporally consistent superpixels, on which superpixel-level trajectories are directly generated and motion histograms are extracted at both the superpixel and frame levels. Based on the motion vector fields of multiple successive frames, inside–outside maps are estimated to roughly indicate whether pixels lie inside or outside objects whose motion differs from the background. Two descriptors, i.e., the accumulated motion histogram and the trajectory velocity entropy, are then exploited to characterize the short-term and long-term temporal features of superpixel-level trajectories. Based on the trajectory descriptors and the inside–outside maps, the distinctiveness of each superpixel-level trajectory is evaluated, and trajectory classification is performed to obtain trajectory-level temporal saliency. Superpixel-level and pixel-level temporal saliency maps are then generated in turn, by exploiting the motion similarity between each trajectory and its neighboring superpixels and the color-spatial similarity between each pixel and its neighboring superpixels, respectively. Finally, a quality-guided fusion method is proposed to integrate the pixel-level temporal saliency map with the pixel-level spatial saliency map, which is generated from the global contrast and spatial sparsity of superpixels, yielding a pixel-level spatiotemporal saliency map of reasonable quality. Experimental results on two public video datasets demonstrate that the proposed model outperforms state-of-the-art spatiotemporal saliency models in saliency detection performance.
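To illustrate the final fusion step described above, the following is a minimal sketch of one plausible quality-guided fusion, assuming each map's quality is scored by how compactly it concentrates its saliency mass; the abstract does not specify the paper's actual quality measure or fusion rule, and the function names `map_quality` and `fuse_saliency` are hypothetical.

```python
import numpy as np

def map_quality(sal_map):
    """Hypothetical quality score: a map whose saliency mass is compact
    (low entropy when viewed as a distribution) scores higher than a
    diffuse, noisy map."""
    s = sal_map / (sal_map.sum() + 1e-8)      # normalize to a distribution
    entropy = -np.sum(s * np.log(s + 1e-8))   # low entropy -> compact map
    return 1.0 / (1.0 + entropy)

def fuse_saliency(temporal_map, spatial_map):
    """Quality-guided fusion sketch: weight each pixel-level saliency map
    by its quality score, then rescale the fused map to [0, 1]."""
    wt = map_quality(temporal_map)
    ws = map_quality(spatial_map)
    fused = (wt * temporal_map + ws * spatial_map) / (wt + ws)
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)

# Usage example with two synthetic 120x160 pixel-level saliency maps.
rng = np.random.default_rng(0)
temporal = rng.random((120, 160))
spatial = rng.random((120, 160))
spatiotemporal = fuse_saliency(temporal, spatial)
```

The design intent this sketch captures is that the lower-quality map contributes less to the fused result, so a noisy temporal map cannot degrade a clean spatial map (or vice versa); the paper's actual formulation should be consulted for the real quality criterion.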