Conference paper, 2021

Episodic Transformer for Vision-and-Language Navigation

Abstract

Interaction and navigation defined by natural language instructions in dynamic environments pose significant challenges for neural agents. This paper focuses on addressing two challenges: handling long sequences of subtasks, and understanding complex human instructions. We propose Episodic Transformer (E.T.), a multimodal transformer that encodes language inputs and the full episode history of visual observations and actions. To improve training, we leverage synthetic instructions as an intermediate representation that decouples understanding the visual appearance of an environment from the variations of natural language instructions. We demonstrate that encoding the history with a transformer is critical to solving compositional tasks, and that pretraining and joint training with synthetic instructions further improve performance. Our approach sets a new state of the art on the challenging ALFRED benchmark, achieving 38.4% and 8.5% task success rates on seen and unseen test splits, respectively.
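As a concrete illustration of the architecture the abstract describes, below is a minimal PyTorch sketch of the core idea: instruction words, visual observations, and past actions are embedded into a shared space, tagged with learned modality embeddings, and jointly encoded by a transformer whose output predicts the next action. All module names, dimensions, and the readout position here are illustrative assumptions, not the authors' released implementation, which additionally handles positional encoding, pretraining on synthetic instructions, and other details not shown.

import torch
import torch.nn as nn


class EpisodicTransformerSketch(nn.Module):
    """Minimal sketch of an E.T.-style multimodal encoder.

    Layer sizes, names, and the readout choice are illustrative
    assumptions, not the paper's exact implementation.
    """

    def __init__(self, vocab_size=1000, num_actions=12,
                 visual_dim=512, d_model=768, nhead=12, num_layers=2):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        self.action_emb = nn.Embedding(num_actions, d_model)
        # Visual frames are assumed to arrive as pre-extracted CNN features.
        self.visual_proj = nn.Linear(visual_dim, d_model)
        # Learned embeddings marking each token's modality:
        # 0 = language, 1 = vision, 2 = action.
        self.modality_emb = nn.Embedding(3, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.action_head = nn.Linear(d_model, num_actions)

    def forward(self, lang_tokens, frame_feats, past_actions):
        # lang_tokens: (B, L) word ids of the instruction.
        # frame_feats: (B, T, visual_dim) features for all frames so far.
        # past_actions: (B, T) ids of all actions taken so far.
        lang = self.word_emb(lang_tokens) + self.modality_emb.weight[0]
        vis = self.visual_proj(frame_feats) + self.modality_emb.weight[1]
        act = self.action_emb(past_actions) + self.modality_emb.weight[2]
        # Jointly attend over the instruction and the full episode history
        # (positional encodings and attention masking omitted for brevity).
        h = self.encoder(torch.cat([lang, vis, act], dim=1))
        # Read out the next action from the latest visual frame's position.
        latest = lang.size(1) + vis.size(1) - 1
        return self.action_head(h[:, latest])


# Hypothetical usage: predict the next action at timestep T of an episode.
model = EpisodicTransformerSketch()
logits = model(torch.randint(1000, (1, 20)),  # 20 instruction tokens
               torch.randn(1, 5, 512),        # 5 visual frames
               torch.randint(12, (1, 5)))     # 5 past actions
print(logits.shape)  # torch.Size([1, 12])

Feeding the concatenated episode history through self-attention in a single pass is what lets the model relate the current observation to arbitrarily distant earlier subtasks, which the abstract identifies as critical for compositional tasks.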
Main file: 2105.06453v2.pdf (9.6 MB). Origin: files produced by the author(s).

Dates and versions

hal-03371803, version 1 (12-07-2024)

Cite

Alexander Pashevich, Cordelia Schmid, Chen Sun. Episodic Transformer for Vision-and-Language Navigation. ICCV 2021 - International Conference on Computer Vision, Oct 2021, Virtual, United States. pp. 1-18. ⟨10.1109/ICCV48922.2021.01564⟩. ⟨hal-03371803⟩