Learning to Parse Grounded Language using Reservoir Computing
Abstract
Recently, new models for language processing and learning based on Reservoir Computing have become popular. However, these models are typically not grounded in sensorimotor systems or robots. In this paper, we develop a Reservoir Computing model called Reservoir Parser (ResPars) for learning to parse natural language from grounded data coming from humanoid robots. Previous work showed that ResPars can perform syntactic generalization over different sentences (surface structure) with the same meaning (deep structure). We argue that this ability is key to guiding linguistic generalization in a grounded architecture. We show that ResPars generalizes on grounded compositional semantics when combined with Incremental Recruitment Language (IRL). Additionally, we show that ResPars can learn to generalize on the same sentences processed not word by word, but as unsegmented sequences of phonemes. This ability enables the architecture to rely not only on the words recognized by a speech recognizer, but to process the sub-word level directly. We additionally test the model's robustness to word recognition errors.
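The abstract describes the mechanism only at a high level. As a rough illustration of the core idea, the sketch below implements a generic echo state network in plain NumPy: the reservoir reads a sentence word by word, and a ridge-regression readout maps the final reservoir state to thematic-role fillers (a minimal stand-in for the surface-to-deep-structure mapping). This is not the authors' ResPars code; the vocabulary, roles, training pairs, and all parameter values (reservoir size, leak rate, spectral radius, regularization) are illustrative assumptions.

```python
# Minimal sketch of a reservoir-based sentence parser (assumed setup, not ResPars itself).
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "ball", "john", "hits", "is", "hit", "by"]  # hypothetical vocabulary
roles = ["agent", "object"]                                  # deep-structure slots to fill
word2id = {w: i for i, w in enumerate(vocab)}

N = 300      # reservoir size (illustrative)
leak = 0.3   # leak rate of the reservoir units
rho = 0.9    # target spectral radius

# Random input and recurrent weights; these stay fixed, only the readout is trained.
W_in = rng.uniform(-1, 1, (N, len(vocab)))
W = rng.uniform(-0.5, 0.5, (N, N))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(sentence):
    """Drive the reservoir with one-hot word inputs; return the final state."""
    x = np.zeros(N)
    for w in sentence.split():
        u = np.zeros(len(vocab))
        u[word2id[w]] = 1.0
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
    return x

def target(agent, obj):
    """Encode which vocabulary word fills each role, one-hot per role."""
    t = np.zeros((len(roles), len(vocab)))
    t[0, word2id[agent]] = 1.0
    t[1, word2id[obj]] = 1.0
    return t.ravel()

# Hypothetical surface/deep-structure pairs: two surface forms, one meaning.
data = [
    ("john hits the ball", target("john", "ball")),
    ("the ball is hit by john", target("john", "ball")),
]

X = np.array([run_reservoir(s) for s, _ in data])
Y = np.array([y for _, y in data])

# Ridge-regression readout: W_out = Y^T X (X^T X + beta I)^-1
beta = 1e-6
W_out = Y.T @ X @ np.linalg.inv(X.T @ X + beta * np.eye(N))

# Decode role fillers for a sentence.
pred = (W_out @ run_reservoir("the ball is hit by john")).reshape(len(roles), -1)
for role, row in zip(roles, pred):
    print(role, "->", vocab[int(row.argmax())])
```

With word-level one-hot inputs replaced by phoneme-level inputs, the same scheme would operate on unsegmented phoneme sequences, which is the sub-word variant the abstract mentions; the readout training is unchanged.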
Keywords
Language acquisition
Grounding of Knowledge and Development of Representations
Language and semantic reasoning
Semantics
Reservoir Computing
Robot kinematics
Computational modeling
Brain modeling
Computational linguistics
Humanoid robots
Learning (artificial intelligence)
Natural language processing
Neural nets