HEADSET: Human Emotion Awareness under Partial Occlusions Multimodal DataSET
Abstract
The volumetric representation of human interactions is a fundamental domain in the development of immersive
media productions and telecommunication applications. Particularly in the context of the rapid advancement of Extended Reality
(XR) applications, such volumetric data has proven essential to the future development of XR. In this work, we present a
new multimodal database to help advance the development of immersive technologies. Our proposed database provides ethically
compliant and diverse volumetric data, in particular 27 participants displaying posed facial expressions and subtle body movements
while speaking, as well as 11 participants wearing head-mounted displays (HMDs). The recording system consists of a volumetric capture
(VoCap) studio, comprising 31 synchronized modules with 62 RGB cameras and 31 depth cameras. In addition to textured meshes, point
clouds, and multi-view RGB-D data, we use a Lytro Illum camera to simultaneously capture light field (LF) data. Finally, we
evaluate the dataset on the tasks of facial expression classification, HMD removal, and point
cloud reconstruction. The dataset can be helpful in the evaluation and performance testing of various XR algorithms, including but not
limited to facial expression recognition and reconstruction, facial reenactment, and volumetric video. HEADSET, all its associated
raw data, and a license agreement will be made publicly available for research purposes.