Conference paper, 1998

Gestures in Human-Computer Interaction: Does the artificial partner make any difference?

Abstract

The paper presents a common theoretical framework for explaining multimodal communication in person-to-person interaction and Human-Computer Interaction (HCI). Within this context, specific attention is devoted to referring acts, i.e., communication acts that indicate a target in physical space through the joint use of referring nominal phrases and referring gestures. Multimodal HCI is mediated by systems that support the fusion of data coming from different input devices (microphone, touch-screen, pen, keyboard, mouse). Because of this property, such systems can bring a major improvement to the usability of future computers. In particular, they have the potential to favor more flexible, easy-to-learn, and productive interaction. Both users and computers benefit from this new form of interaction. On the human side, multimodal systems allow users to exploit their natural communication abilities; as a consequence, they reduce the psychological distance between users and computers with respect to both input execution and output evaluation. On the computer side, multimodal systems broaden the information bandwidth of the message, so that more cues for meaning extraction become available. Typically, gestures simplify linguistic expression, especially with respect to target identification. Despite these clear advantages, the design of multimodal systems can be complex and counterintuitive. Before future technology can function successfully with real users and in actual field settings, interface techniques are needed to steer user behavior towards system capabilities. Accomplishing this goal requires a deep understanding of how people actually take advantage of multimodal communication when interacting with computers. Neither traditional unimodal HCI models nor face-to-face communication models appear adequate for this purpose. Therefore, empirical studies, especially in the form of early simulations, are needed to understand this new form of communication. Such knowledge is essential to the design of future systems and to the development of interfaces that make the best use of the different modalities. The theoretical part is followed by the presentation of some empirical data collected through the Wizard of Oz simulation technique. The experiment specifically investigates the role of perception in referring acts and the process of cross-modal integration of gesture and speech. As regards perception, we attempt to extend the ecological theory of affordances (Gibson 1979) to explain differences in gesture production. As regards cross-modal integration, we compare the natural integration pattern produced in a face-to-face communication context with that produced in an HCI context. The results clarify: (a) how users distribute their communicative intentions across language and gesture; (b) how they spontaneously integrate gesture and speech when referring to objects displayed on a graphical interface; and (c) the influence of perception on the two communication modalities. Finally, the implications of our findings for multimodal interface design are discussed, together with a model for implementing a multimodal system capable of understanding spontaneous input.
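To make the notion of cross-modal integration of referring acts concrete, here is a minimal sketch of one common fusion strategy: binding a spoken deictic word to the temporally closest pointing gesture. This is an illustrative assumption, not the paper's actual system; all names (SpeechToken, PointingGesture, resolve_referring_act) and the time-window threshold are hypothetical.

```python
# Hypothetical sketch of speech-gesture fusion for referring acts.
# A deictic word ("this", "that", ...) is paired with the pointing
# gesture whose apex time falls closest to the word's utterance time.
from dataclasses import dataclass

@dataclass
class SpeechToken:
    word: str
    t_start: float  # seconds from utterance onset
    t_end: float

@dataclass
class PointingGesture:
    target_id: str  # on-screen object hit by the pointing/touch event
    t: float        # gesture apex time, seconds

DEICTICS = {"this", "that", "here", "there"}

def resolve_referring_act(tokens, gestures, max_lag=1.0):
    """Bind each deictic word to the temporally closest gesture.

    max_lag (an illustrative threshold) bounds how far apart in time
    a word and a gesture may be and still count as one referring act.
    """
    bindings = []
    for tok in tokens:
        if tok.word.lower() not in DEICTICS:
            continue
        mid = (tok.t_start + tok.t_end) / 2
        candidates = [g for g in gestures if abs(g.t - mid) <= max_lag]
        if candidates:
            best = min(candidates, key=lambda g: abs(g.t - mid))
            bindings.append((tok.word, best.target_id))
    return bindings

# Example: the user says "delete this folder" while tapping object
# "folder_42" at t = 0.9 s.
tokens = [SpeechToken("delete", 0.0, 0.4),
          SpeechToken("this", 0.5, 0.7),
          SpeechToken("folder", 0.8, 1.2)]
gestures = [PointingGesture("folder_42", 0.9)]
print(resolve_referring_act(tokens, gestures))  # [('this', 'folder_42')]
```

A purely temporal criterion like this is only a baseline: the empirical question the paper raises is precisely whether users' spontaneous integration patterns in HCI match such expectations derived from face-to-face communication.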
No file deposited

Dates and versions

inria-00098519, version 1 (25-09-2006)

Identifiers

  • HAL Id: inria-00098519, version 1

Cite

Antonella de Angeli, Frédéric Wolff, Nadia Bellalem, Laurent Romary. Gestures in Human-Computer Interaction: Does the artificial partner make any difference?. Orage'98, 1998. ⟨inria-00098519⟩
