Motion Textures: Modeling, Classification, and Segmentation Using Mixed-State Markov Random Fields
Abstract
A motion texture is an instantaneous motion map extracted from a dynamic texture. We observe that such motion maps exhibit values of two types: a discrete component at zero (absence of motion) and continuous motion values. We thus develop a mixed-state Markov random field model to represent motion textures. The core of our approach is to show that motion information is powerful enough to classify and segment dynamic textures, provided it is properly modeled with respect to its specific nature and the local interactions involved. A parsimonious set of 11 parameters constitutes the descriptive feature of a motion texture. The proposed formulation is motivated by the analysis of dynamic video content, and we tackle two related problems. First, we present a method for the recognition and classification of motion textures by means of the Kullback-Leibler distance between mixed-state statistical models. Second, we define a two-frame maximum a posteriori (MAP)-based motion texture segmentation method applicable to motion textures with deforming boundaries. We also investigate a third, new issue, space-time dynamic texture segmentation, by combining the spatial segmentation and recognition methods. Numerous experimental results are reported for these three problems, demonstrating the efficiency and accuracy of the proposed two-frame approach.
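For illustration only, a minimal sketch of the mixed-state idea (a generic form, not the paper's exact parameterization) writes the per-site density of a motion value $x$ as a weighted sum of a discrete mass at zero and a continuous part:

$$p(x) \;=\; \rho\,\delta_0(x) \;+\; (1-\rho)\,p_c(x),$$

where $\rho \in [0,1]$ is the probability of the null-motion (zero) state, $\delta_0$ denotes a Dirac mass at zero, and $p_c$ is a continuous density over nonzero motion values; the symbols $\rho$ and $p_c$ are illustrative placeholders rather than the model's actual parameters.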