Multi-view pose estimation with mixtures-of-parts and adaptive viewpoint selection
Abstract
We propose a new method for human pose estimation which leverages information from multiple views to impose a strong prior on articulated pose. The novelty of the method concerns the types of coherence modelled. Consistency is maximised over the different views through different terms modelling classical geometric information (coherence of the resulting poses) as well as appearance information which is modelled as latent variables in the global energy function. Moreover, adequacy of each view is assessed and their contributions are adjusted accordingly. Experiments on the HumanEva and Utrecht multi-person motion datasets show that the proposed method significantly decreases the estimation error compared to single-view results.