Longitudinal Variational Autoencoders learn a Riemannian progression model for imaging data
Abstract
Interpretable progression models for longitudinal neuroimaging data are crucial to understanding neurodegenerative diseases. Well-validated geometric progression models for biomarkers do not scale to such high-dimensional data. In this work, we analyse a recent approach that combines a Variational Autoencoder with a latent linear mixed-effects model, and demonstrate that imposing a Euclidean prior on the latent space allows the network to learn the geometry of the observation manifold and to model non-linear dynamics.
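The sketch below is a minimal, hypothetical illustration of the kind of model the abstract refers to: a VAE whose latent code is regularised towards a latent linear mixed-effects trajectory over time. It is not the authors' implementation; all layer sizes, parameter names, and the toy data are assumptions made for illustration only.

```python
# Minimal sketch (assumed architecture, not the paper's code): a VAE whose
# approximate posterior is pulled towards a latent linear mixed-effects
# prediction built from visit time and subject identity.
import torch
import torch.nn as nn


class LongitudinalVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=2, n_subjects=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, x_dim))
        # Latent linear mixed-effects model: fixed slope/intercept shared by
        # all subjects, plus per-subject random deviations.
        self.fixed_slope = nn.Parameter(torch.zeros(z_dim))
        self.fixed_intercept = nn.Parameter(torch.zeros(z_dim))
        self.rand_slope = nn.Parameter(torch.zeros(n_subjects, z_dim))
        self.rand_intercept = nn.Parameter(torch.zeros(n_subjects, z_dim))

    def forward(self, x, t, subj):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
        x_hat = self.dec(z)
        # Mixed-effects prediction of the latent code from time and subject id.
        z_lme = (self.fixed_intercept + self.rand_intercept[subj]
                 + (self.fixed_slope + self.rand_slope[subj]) * t.unsqueeze(-1))
        recon = ((x_hat - x) ** 2).sum(-1).mean()
        # KL between q(z|x) = N(mu, sigma^2) and the trajectory prior N(z_lme, I).
        kl = -0.5 * (1 + logvar - (mu - z_lme) ** 2 - logvar.exp()).sum(-1).mean()
        return recon + kl


# Toy usage: 4 subjects with 5 visits each, flattened 28x28 "images".
x = torch.randn(20, 784)
t = torch.linspace(0, 1, 5).repeat(4)
subj = torch.arange(4).repeat_interleave(5)
model = LongitudinalVAE(n_subjects=4)
loss = model(x, t, subj)
loss.backward()
print(float(loss))
```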