Conference paper, Year: 2018

A Probabilistic Model for Joint Learning of Word Embeddings from Texts and Images

Abstract

Several recent studies have shown the benefits of combining language and perception to infer word embeddings. These multimodal approaches either simply combine pre-trained textual and visual representations (e.g. features extracted from convolutional neural networks), or use the latter to bias the learning of textual word embeddings. In this work, we propose a novel probabilistic model to formalize how linguistic and perceptual inputs can work in concert to explain the observed word-context pairs in a text corpus. Our approach learns textual and visual representations jointly: latent visual factors couple together a skip-gram model for co-occurrence in linguistic data and a generative latent variable model for visual data. Extensive experimental studies validate the proposed model. Concretely, on the tasks of assessing pairwise word similarity and image/caption retrieval, our approach attains results that are competitive with or stronger than those of other state-of-the-art multimodal models.
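
To make the coupling idea concrete, below is a minimal, hypothetical PyTorch sketch, not the paper's actual formulation. The class and parameter names (JointTextImageSketch, the per-word latent visual factors z_w, the MSE-based coupling terms, and the weight alpha) are assumptions for illustration only; it pairs skip-gram with negative sampling on observed word-context pairs with a simple generative-style reconstruction of CNN image features from latent visual factors that also constrain the word embeddings.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class JointTextImageSketch(nn.Module):
        # Hypothetical sketch: joint text+image objective with shared latent visual factors.
        def __init__(self, vocab_size, embed_dim, visual_dim, latent_dim):
            super().__init__()
            self.in_embed = nn.Embedding(vocab_size, embed_dim)        # target word vectors u_w
            self.out_embed = nn.Embedding(vocab_size, embed_dim)       # context word vectors v_c
            self.visual_factor = nn.Embedding(vocab_size, latent_dim)  # assumed latent visual factors z_w
            self.to_embed = nn.Linear(latent_dim, embed_dim)           # couples z_w to the word embedding
            self.to_image = nn.Linear(latent_dim, visual_dim)          # maps z_w to a CNN feature vector

        def skipgram_loss(self, words, contexts, negatives):
            # Skip-gram with negative sampling over observed (word, context) pairs.
            u = self.in_embed(words)                                    # (B, D)
            pos = F.logsigmoid((u * self.out_embed(contexts)).sum(-1))  # (B,)
            neg_scores = torch.bmm(self.out_embed(negatives), u.unsqueeze(-1)).squeeze(-1)  # (B, K)
            neg = F.logsigmoid(-neg_scores).sum(-1)
            return -(pos + neg).mean()

        def visual_loss(self, words, cnn_features):
            # Stand-in for the generative latent variable model: each z_w must both
            # reconstruct its word's CNN feature vector and stay close to the word
            # embedding, which is what lets visual evidence influence the text side.
            z = self.visual_factor(words)
            return F.mse_loss(self.to_image(z), cnn_features) + \
                   F.mse_loss(self.to_embed(z), self.in_embed(words))

        def forward(self, words, contexts, negatives, img_words, cnn_features, alpha=1.0):
            # Joint objective: text term plus a weighted visual term (alpha is an assumed knob).
            return self.skipgram_loss(words, contexts, negatives) + \
                   alpha * self.visual_loss(img_words, cnn_features)

Minimizing this joint loss by SGD over batches of word-context pairs and word-image pairs would train both modalities together; in the paper's model the visual term is a proper probabilistic latent variable model rather than the MSE surrogate used here.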
Main file
emnlp18.pdf (372.59 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01922985, version 1 (14-11-2018)

Identifiers

  • HAL Id: hal-01922985, version 1

Cite

Melissa Ailem, Bowen Zhang, Aurélien Bellet, Pascal Denis, Fei Sha. A Probabilistic Model for Joint Learning of Word Embeddings from Texts and Images. Conference on Empirical Methods in Natural Language Processing (EMNLP 2018), 2018, Brussels, Belgium. ⟨hal-01922985⟩
