A Multimodal Human-Robot Interaction Dataset
Abstract
This work presents a multimodal dataset for Human-Robot Interactive Learning. The dataset contains synchronized recordings of several human users, captured with a stereo microphone and three cameras mounted on the robot. The focus of the dataset is incremental object learning, oriented to human-robot assistance and interaction. To learn new object models from interactions with a human user, the robot needs to be able to perform multiple tasks: (a) recognize the type of interaction (pointing, showing, or speaking), (b) segment regions of interest from acquired data (hands and objects), and (c) learn and recognize object models. We illustrate the advantages of multimodal data over camera-only datasets by presenting an approach that recognizes the user interaction by combining simple image and language features.
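As a rough illustration of the kind of approach the abstract describes, the sketch below shows one common way to combine image and language features for interaction-type recognition: early fusion by concatenating per-modality feature vectors, followed by a linear classifier. The feature extractors, dimensions, and classifier here are assumptions on synthetic data, not the authors' actual method.

```python
"""Minimal sketch: classify interaction type (pointing, showing, speaking)
from concatenated image and language features. All features and data are
synthetic placeholders; the paper's real pipeline is not specified here."""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples = 300
labels = rng.integers(0, 3, size=n_samples)  # 0=pointing, 1=showing, 2=speaking

# Hypothetical image features (e.g., hand/object region statistics per frame).
image_feats = rng.normal(size=(n_samples, 16)) + labels[:, None] * 0.5
# Hypothetical language features (e.g., bag-of-words counts from transcribed speech).
lang_feats = rng.poisson(lam=1.0 + labels[:, None], size=(n_samples, 8)).astype(float)

# Early fusion: concatenate the two modalities into one feature vector.
X = np.hstack([image_feats, lang_feats])

X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

In practice, the language stream mainly disambiguates "speaking" from the visually similar gestures, which is the advantage multimodal data offers over camera-only datasets.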
Domains
Artificial Intelligence [cs.AI]
Origin
Files produced by the author(s)