The Grenoble System for the Social Touch Challenge at ICMI 2015
Abstract
New technologies, and especially robotics, are moving towards more natural user interfaces.
Work has been done on several modalities of interaction, such as sight (visual computing) and audio (speech and audio recognition), but other modalities remain less researched.
The touch modality is one of the least studied in human-robot interaction (HRI), yet it could be valuable for naturalistic interaction.
However, touch signals can vary widely in their semantics.
It is therefore necessary to be able to recognize touch gestures in order to make human-robot interaction even more natural.
We propose a method to recognize touch gestures.
This method was developed on the CoST corpus and then applied directly to the HAART dataset as part of our participation in the Social Touch Challenge at ICMI 2015.
Our touch gesture recognition process is detailed in this article to make it reproducible by other research teams.
Besides describing the feature set, we manually filtered the training corpus to produce two datasets.
For the challenge, we submitted six different systems: a Support Vector Machine and a Random Forest classifier for the HAART dataset, and, for the CoST dataset, the same two classifiers tested in two conditions, using either the full or the filtered training dataset.
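As a rough illustration of this kind of setup (not the authors' actual pipeline), the sketch below trains an SVM and a Random Forest on simple per-gesture summary features with scikit-learn. The feature extraction, sensor grid size, class count, and synthetic data are all placeholder assumptions; the paper's real feature set and corpora are described in the article itself.

```python
# Hedged sketch: SVM and Random Forest over per-gesture feature vectors.
# Feature extraction here is a simple placeholder (mean/max/std of pressure
# frames plus duration); it does NOT reproduce the paper's feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(frames):
    """Summarize a (time, rows, cols) pressure sequence into one vector."""
    flat = frames.reshape(len(frames), -1)
    return np.concatenate([
        flat.mean(axis=0),   # average pressure per sensor cell
        flat.max(axis=0),    # peak pressure per sensor cell
        flat.std(axis=0),    # pressure variability per sensor cell
        [len(frames)],       # gesture duration in frames
    ])

# Placeholder data standing in for the CoST/HAART recordings:
# 200 random gestures on an 8x8 pressure grid, 6 gesture classes.
rng = np.random.default_rng(0)
gestures = [rng.random((rng.integers(20, 60), 8, 8)) for _ in range(200)]
labels = rng.integers(0, 6, size=200)

X = np.array([extract_features(g) for g in gestures])

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
forest = RandomForestClassifier(n_estimators=300, random_state=0)

for name, clf in [("SVM", svm), ("Random Forest", forest)]:
    scores = cross_val_score(clf, X, labels, cv=5)
    print(f"{name}: mean cross-validation accuracy {scores.mean():.3f}")
```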
As reported by the organizers, our systems achieved the best correct classification rate in this year's challenge (70.91% on HAART, 61.34% on CoST).
Our performance is slightly better than that of the other participants but remains below previously reported state-of-the-art results.