Recognition of Group Activities in Videos Based on Single- and Two-Person Descriptors
Abstract
Group activity recognition from videos is a very challenging problem that has so far received little attention. We propose an activity recognition method that exploits group context. In order to encode both single-person descriptions and two-person interactions, we learn mappings from high-dimensional feature spaces to low-dimensional dictionaries. In particular, the proposed two-person descriptor takes into account geometric characteristics of the relative pose and motion between the two persons. Both the single-person and two-person representations are then used to define the unary and pairwise potentials of an energy function, whose optimization yields a structured labeling of the persons involved in the same activity. An interesting feature of the proposed method is that, unlike the vast majority of existing methods, it can recognize multiple distinct group activities occurring simultaneously in a video. The proposed method is evaluated on datasets widely used for group activity recognition and compared with several baseline methods.
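To make the labeling model concrete, a generic CRF-style form of such an energy is sketched below; the notation (y_i, psi_u, psi_p, f_i, g_ij and the pair set E) is illustrative and not taken from the paper, which defines its own potentials.

% Illustrative energy over the activity labels y_1..y_N of the N detected persons.
% f_i  : single-person descriptor of person i (dictionary-encoded)
% g_ij : two-person descriptor of the pair (i, j), e.g. relative pose and motion
% psi_u, psi_p : unary and pairwise potentials (assumed forms, not the paper's exact definitions)
\begin{equation}
  E(y_1,\dots,y_N) \;=\; \sum_{i=1}^{N} \psi_u\!\left(y_i;\,\mathbf{f}_i\right)
  \;+\; \sum_{(i,j)\in\mathcal{E}} \psi_p\!\left(y_i, y_j;\,\mathbf{g}_{ij}\right)
\end{equation}

Under this reading, minimizing E assigns an activity label to every person jointly: the unary terms score each person's label from its own descriptor, while the pairwise terms, driven by the two-person descriptors, encourage consistent labels for interacting persons. Because the labeling is joint rather than a single video-level class, several distinct group activities can coexist in one solution.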