Conference Paper, Year: 2009

An integrated system for teaching new visually grounded words to a robot for non-expert users using a mobile device

Abstract

In this paper, we present a system that allows non-expert users to teach new words to their robot. In contrast to most existing work in this area, which focuses on the associated visual perception and machine learning challenges, we choose to focus on the HRI challenges, with the aim of showing that doing so can improve the learning quality. We argue that by using mediator objects, and in particular a handheld device, we can build a human-robot interface that is not only intuitive and entertaining but also "helps" the user provide "good" learning examples to the robot, thus improving the efficiency of the whole learning system. The perceptual and machine learning parts of the system rely on an incremental version of visual bag-of-words. We also propose a system called ASMAT that allows the robot to incrementally build a model of a novel, unknown object by simultaneously modelling and tracking it. We report experiments demonstrating the fast acquisition of robust object models using this approach.
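
For illustration only, the sketch below shows one way an incremental visual bag-of-words learner of this general kind could be organised: local descriptors extracted from each labelled view are quantised against a visual vocabulary that grows online, and a per-word histogram is updated for each taught word. All names here (IncrementalBoW, add_example, classify) and the nearest-prototype quantisation rule are assumptions of this sketch, not the implementation described in the paper.

```python
# Minimal sketch of an incremental visual bag-of-words learner.
# Assumes local descriptors (e.g. SIFT-like vectors) are already extracted;
# class and method names are illustrative, not the authors' implementation.
import numpy as np
from collections import defaultdict


class IncrementalBoW:
    def __init__(self, new_word_threshold=0.5):
        self.vocabulary = []  # list of descriptor prototypes ("visual words")
        self.histograms = defaultdict(lambda: defaultdict(int))  # label -> {word index: count}
        self.threshold = new_word_threshold

    def _quantize(self, descriptor, grow=True):
        """Return the index of the nearest visual word; optionally grow the vocabulary."""
        if self.vocabulary:
            dists = np.linalg.norm(np.asarray(self.vocabulary) - descriptor, axis=1)
            best = int(np.argmin(dists))
            if dists[best] <= self.threshold:
                return best
        if not grow:
            return None  # no sufficiently close word and growing is disabled
        self.vocabulary.append(np.asarray(descriptor, dtype=float))  # new appearance -> new word
        return len(self.vocabulary) - 1

    def add_example(self, descriptors, label):
        """Incrementally update the model of `label` with one labelled view."""
        for d in descriptors:
            self.histograms[label][self._quantize(d)] += 1

    def classify(self, descriptors):
        """Vote for the taught word whose histogram best matches the query descriptors."""
        votes = defaultdict(float)
        for d in descriptors:
            w = self._quantize(d, grow=False)
            if w is None:
                continue
            for label, hist in self.histograms.items():
                votes[label] += hist.get(w, 0) / sum(hist.values())
        return max(votes, key=votes.get) if votes else None
```

In the setting described by the paper, the descriptors fed to such a learner would presumably come from local features computed on the image region that the user selects on the handheld device, which is precisely where the interface can "help" provide good learning examples.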
Main file: RouanetOudeyer-Humanoids09.pdf (903.51 KB)
Origin: Files produced by the author(s)

Dates and versions

inria-00438564, version 1 (04-12-2009)

Identifiers

  • HAL Id: inria-00438564, version 1

Cite

Pierre Rouanet, Pierre-Yves Oudeyer, David Filliat. An integrated system for teaching new visually grounded words to a robot for non-expert users using a mobile device. The 9th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2009), Dec 2009, Paris, France. ⟨inria-00438564⟩
