Conference paper - Year: 2021

Comparing Performance Models for Bivariate Pointing Through a Crowdsourced Experiment

Shota Yamanaka
  • Role: Author
  • PersonId: 1280654

Abstract

Evaluating the fitness of a novel user-performance model requires comparison with baseline models, yet collecting data from many participants is time-consuming and demands considerable effort from researchers. Crowdsourcing has recently been used to evaluate novel interaction techniques, but its potential for model-comparison studies has not been investigated in detail. In this study, we evaluated four existing Fitts’ law models for rectangular targets, treating one of them as if it were a newly proposed model. We recruited 210 crowd workers, who performed 94,080 clicks in total, and confirmed that the best-fitting model was consistent with previous studies. We also analyzed whether this conclusion would change with sample size: even when we randomly sampled data from five workers over 10,000 iterations, the best-fitting model changed only once (0.01%). We have thus demonstrated a case in which crowdsourcing is beneficial for comparing performance models.
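
As an illustration of the kind of analysis the abstract describes (fitting candidate bivariate Fitts’ law formulations and re-running the comparison on small random subsets of workers), the Python sketch below is a minimal, hypothetical reconstruction. The synthetic data, the three candidate index-of-difficulty formulations (width-only, SMALLER-OF, and a weighted-Euclidean form), and the selection-by-R² procedure are assumptions chosen for illustration; they are not taken from the paper, which compares four models on real crowd-worker data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Placeholder per-trial data: amplitude A, target width W and height H
    # (pixels), movement time MT (seconds), and a worker id. The values are
    # synthetic stand-ins, not the experiment's data (210 workers x 448
    # trials = 94,080 clicks, matching the totals reported in the abstract).
    n_workers, trials_per_worker = 210, 448
    worker = np.repeat(np.arange(n_workers), trials_per_worker)
    A = rng.uniform(200, 800, worker.size)
    W = rng.choice([20.0, 40.0, 80.0], worker.size)
    H = rng.choice([20.0, 40.0, 80.0], worker.size)
    MT = 0.2 + 0.15 * np.log2(A / np.minimum(W, H) + 1) + rng.normal(0, 0.05, worker.size)

    # Candidate index-of-difficulty (ID) formulations (illustrative choices,
    # not necessarily the four models evaluated in the paper):
    def id_width(A, W, H):               # ignores target height
        return np.log2(A / W + 1)

    def id_smaller_of(A, W, H):          # SMALLER-OF: use min(W, H)
        return np.log2(A / np.minimum(W, H) + 1)

    def id_weighted_euclid(A, W, H, eta=1/3):  # weighted-Euclidean form
        return np.log2(np.sqrt((A / W) ** 2 + eta * (A / H) ** 2) + 1)

    MODELS = {"W": id_width, "min(W,H)": id_smaller_of, "weighted": id_weighted_euclid}

    def best_model(mask):
        # Fit MT = a + b * ID by least squares on the selected trials and
        # return the name of the model with the highest R^2.
        scores = {name: stats.linregress(f(A[mask], W[mask], H[mask]), MT[mask]).rvalue ** 2
                  for name, f in MODELS.items()}
        return max(scores, key=scores.get)

    print("all workers:", best_model(np.ones(worker.size, dtype=bool)))

    # Robustness check in the spirit of the paper's resampling analysis:
    # repeatedly draw 5 workers at random and count which model wins.
    wins = {}
    for _ in range(1000):  # the paper used 10,000 iterations
        chosen = rng.choice(n_workers, size=5, replace=False)
        name = best_model(np.isin(worker, chosen))
        wins[name] = wins.get(name, 0) + 1
    print("wins over resamples:", wins)

The loop count is reduced from the paper's 10,000 iterations only to keep the sketch quick to run; the structure of the check (resample a handful of workers, refit all candidate models, record the winner) is the point being illustrated.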
Main file
520516_1_En_6_Chapter.pdf (824.8 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04196868, version 1 (05-09-2023)

Licence

Identifiers

Cite

Shota Yamanaka. Comparing Performance Models for Bivariate Pointing Through a Crowdsourced Experiment. 18th IFIP Conference on Human-Computer Interaction (INTERACT), Aug 2021, Bari, Italy. pp.76-92, ⟨10.1007/978-3-030-85616-8_6⟩. ⟨hal-04196868⟩