Conference paper, 2018

GPU-Accelerated Clique Tree Propagation for Pouch Latent Tree Models

Leonard Poon
  • Role: Author

Abstract

Pouch latent tree models (PLTMs) are a class of probabilistic graphical models that generalize Gaussian mixture models (GMMs). PLTMs produce multiple clusterings simultaneously and have been shown to outperform GMMs for cluster analysis in previous studies. However, due to the considerably larger number of possible structures, training PLTMs is more time-demanding than training GMMs, which has limited the application of PLTMs to small data sets. In this paper, we consider using GPUs to exploit two parallelism opportunities for PLTMs, namely data parallelism and element-wise parallelism. We focus on clique tree propagation, since this exact inference procedure is computationally demanding and is called repeatedly for each data sample and each model structure during PLTM training. Our experiments with real-world data sets show that the GPU-accelerated implementation achieves up to 52x speedup over the sequential implementation running on CPUs. These results indicate promising potential for further improving the full training of PLTMs with GPUs.
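
The two parallelism opportunities named above can be illustrated with a minimal sketch, not taken from the paper: a single clique-tree message is computed by an element-wise product of a clique potential with an incoming message followed by a marginalization (element-wise parallelism), and the same computation is batched over many data samples (data parallelism). The sketch below uses JAX as a stand-in for the authors' GPU implementation; the function names, table shapes, and batch size are assumptions for illustration only.

    # Hypothetical sketch (not the authors' code): the two parallelism
    # opportunities from the abstract, expressed in JAX.
    import jax
    import jax.numpy as jnp

    def send_message(clique_potential, incoming_message):
        """One propagation step for a discrete clique with axes (parent, child).

        Element-wise parallelism: the product touches every table entry at once.
        """
        combined = clique_potential * incoming_message[None, :]  # broadcast product
        return combined.sum(axis=1)                              # marginalize out child

    # Data parallelism: apply the same message computation to a batch of
    # per-sample incoming messages in one call.
    batched_send = jax.vmap(send_message, in_axes=(None, 0))

    key = jax.random.PRNGKey(0)
    potential = jax.random.uniform(key, (4, 3))    # |parent| x |child| clique table
    messages = jax.random.uniform(key, (1024, 3))  # one incoming message per sample
    out = batched_send(potential, messages)        # shape (1024, 4); runs on GPU if one is available
    print(out.shape)

In the full algorithm this step would be repeated along the clique tree for every model structure evaluated during training, which is why accelerating it dominates the overall speedup reported in the paper.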
Main file: 477597_1_En_8_Chapter.pdf (263.07 KB). Origin: files produced by the author(s).

Dates and versions

hal-02279542, version 1 (05-09-2019)

License

Identifiers

Cite

Leonard Poon. GPU-Accelerated Clique Tree Propagation for Pouch Latent Tree Models. 15th IFIP International Conference on Network and Parallel Computing (NPC), Nov 2018, Muroran, Japan. pp.90-102, ⟨10.1007/978-3-030-05677-3_8⟩. ⟨hal-02279542⟩