Preprint, Working Paper. Year: 2023

Exploring, walking, and interacting in virtual reality with simulated low vision: a living contextual dataset

Abstract

We present the CREATTIVE3D dataset of human interaction and navigation at road crossings in virtual reality. The dataset makes three main contributions: (1) it is the largest dataset of human motion in fully annotated scenarios (40 hours, 2.6 million poses); (2) it is captured in dynamic 3D scenes with multivariate gaze, physiology, and motion data; and (3) it investigates the impact of simulated low-vision conditions using dynamic eye tracking under both real and simulated walking. Extensive effort has been made to ensure the transparency, usability, and reproducibility of the study and the collected data, even under extremely complex study conditions involving 6-degrees-of-freedom interactions and multiple sensors. We believe this will allow studies using the same or similar protocols to be compared with existing results, and will enable much more fine-grained analysis of individual nuances in user behavior across datasets or study designs. This is what we call a living contextual dataset.
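As an illustration of how the multimodal streams described above (motion, gaze, physiology) might be combined for analysis, the sketch below time-aligns separate recordings by nearest timestamp. The file names, column layouts, and tolerance values are assumptions made for this example only and do not reflect the dataset's actual file format; refer to the paper for the real data organization.

```python
import pandas as pd

# Hypothetical file names and columns: the actual CREATTIVE3D layout is
# described in the paper, not in this record.
poses = pd.read_csv("poses.csv")        # e.g. timestamp_s, head_x, head_y, head_z, ...
gaze = pd.read_csv("gaze.csv")          # e.g. timestamp_s, gaze_dir_x, gaze_dir_y, gaze_dir_z
physio = pd.read_csv("physiology.csv")  # e.g. timestamp_s, heart_rate

# merge_asof requires both frames to be sorted on the join key.
poses = poses.sort_values("timestamp_s")
gaze = gaze.sort_values("timestamp_s")
physio = physio.sort_values("timestamp_s")

# Align the other streams to the pose stream with a nearest-timestamp join;
# the tolerance (in seconds) drops samples that are too far apart in time.
aligned = pd.merge_asof(poses, gaze, on="timestamp_s",
                        direction="nearest", tolerance=0.02)
aligned = pd.merge_asof(aligned, physio, on="timestamp_s",
                        direction="nearest", tolerance=0.5)

print(aligned.head())
```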
Main file

2023_CREATTIVE3D_dataset_arxiv_.pdf (3.27 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04429351, version 1 (31-01-2024)

Identifiers

  • HAL Id: hal-04429351, version 1

Cite

Hui-Yin Wu, Florent Alain Sauveur Robert, Franz Franco Gallo, Kateryna Pirkovets, Clément Quere, et al.. Exploring, walking, and interacting in virtual reality with simulated low vision: a living contextual dataset. 2023. ⟨hal-04429351⟩
