Quasi-Symplectic Langevin Variational Autoencoder
Abstract
The variational autoencoder (VAE) is a well-studied generative model that is widely used in current neural learning research. Applying VAEs to practical tasks with high-dimensional data and massive datasets often raises the problem of constructing a low-variance evidence lower bound (ELBO). Markov chain Monte Carlo (MCMC) is an effective approach for tightening the ELBO when approximating the posterior distribution. The Hamiltonian Variational Autoencoder (HVAE) is one such MCMC-inspired approach for constructing a low-variance ELBO that is also amenable to the reparameterization trick. It significantly improves posterior estimation, but a main drawback of HVAE is that the leapfrog integrator must evaluate the posterior gradient twice per step, which degrades inference efficiency and requires fairly large GPU memory. This flaw limits the application of Hamiltonian-based inference frameworks to large-scale networks. To tackle this problem, we propose a quasi-symplectic Langevin variational autoencoder (Langevin-VAE), which significantly improves resource-usage efficiency. We demonstrate, qualitatively and quantitatively, the effectiveness of the Langevin-VAE compared to state-of-the-art gradient-informed inference frameworks.
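To make the efficiency argument concrete, the sketch below contrasts the per-step gradient cost of the two integrators: a leapfrog step (as in HVAE) evaluates the posterior gradient twice, while a Langevin-style step needs only one evaluation. This is a minimal illustration assuming a toy log-posterior `log_post` and a simple damped Langevin update; the function names and the particular damping scheme are assumptions for exposition, not the paper's actual quasi-symplectic integrator.

```python
import jax
import jax.numpy as jnp
from jax import grad

def leapfrog_step(q, p, log_post, eps):
    """One leapfrog step (HVAE-style): TWO gradient evaluations per step."""
    p = p + 0.5 * eps * grad(log_post)(q)   # first gradient evaluation
    q = q + eps * p                          # position update (unit mass)
    p = p + 0.5 * eps * grad(log_post)(q)   # second gradient evaluation
    return q, p

def langevin_step(q, p, log_post, eps, nu, noise):
    """One damped Langevin step (illustrative): a SINGLE gradient
    evaluation, plus friction `nu` and injected Gaussian noise."""
    g = grad(log_post)(q)                    # the only gradient evaluation
    p = (1.0 - nu * eps) * p + eps * g + jnp.sqrt(2.0 * nu * eps) * noise
    q = q + eps * p
    return q, p

if __name__ == "__main__":
    # Toy standard-Gaussian log-posterior, purely for demonstration.
    log_post = lambda q: -0.5 * jnp.sum(q ** 2)
    q, p = jnp.ones(2), jnp.zeros(2)
    q, p = leapfrog_step(q, p, log_post, eps=0.1)
    noise = jax.random.normal(jax.random.PRNGKey(0), q.shape)
    q, p = langevin_step(q, p, log_post, eps=0.1, nu=0.5, noise=noise)
```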