Exploring, walking, and interacting in virtual reality with simulated low vision: a living contextual dataset
Abstract
We present the CREATTIVE3D dataset of human interaction and navigation at road crossings in virtual reality. The dataset has three main breakthroughs: (1) it is the largest dataset of human motion in fully annotated scenarios (40 hours, 2.6 million poses); (2) it is captured in dynamic 3D scenes with multivariate gaze, physiology, and motion data; and (3) it investigates the impact of simulated low-vision conditions using dynamic eye tracking under both real and simulated walking. Extensive effort has been made to ensure the transparency, usability, and reproducibility of the study and the collected data, even under highly complex study conditions involving 6-degree-of-freedom interactions and multiple sensors. We believe this will allow studies using the same or similar protocols to be compared with existing results, and will enable much more fine-grained analysis of individual nuances of user behavior across datasets or study designs. This is what we call a living contextual dataset.
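To make the multimodal structure concrete, below is a minimal Python sketch of how a single 6-degree-of-freedom pose sample might be represented and aligned with a lower-rate sensor stream by nearest timestamp. All names (`PoseSample`, `nearest_sample`, the field layout) are hypothetical illustrations; the abstract does not specify the dataset's actual schema or synchronization pipeline.

```python
from dataclasses import dataclass

@dataclass
class PoseSample:
    """One motion-capture frame: a 6-DoF rigid-body pose.

    Hypothetical layout: 3 translational DoF (position) plus
    3 rotational DoF (orientation, stored as a unit quaternion).
    """
    timestamp_s: float                               # capture time in seconds
    position: tuple[float, float, float]             # x, y, z in metres
    orientation: tuple[float, float, float, float]   # quaternion w, x, y, z

def nearest_sample(stream: list[PoseSample], t: float) -> PoseSample:
    """Align a pose stream with another sensor stream (e.g. gaze or
    physiology) by picking the pose closest in time to timestamp t.
    This nearest-timestamp matching is a common multimodal-sync
    strategy, not the authors' documented pipeline."""
    return min(stream, key=lambda s: abs(s.timestamp_s - t))
```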
Origin | Files produced by the author(s) |
---|---|
Licence |