Faster Training of Diffusion Models and Improved Density Estimation via Parallel Score Matching
Abstract
In Diffusion Probabilistic Models (DPMs), the task of modeling the score evolution via a single time-dependent neural network necessitates extended training periods and may impede modeling flexibility and capacity. To counteract these challenges, we propose leveraging the independence of learning tasks at different time points inherent to DPMs. More specifically, we partition the learning task by utilizing independent networks, each dedicated to learning the evolution of scores within a specific time sub-interval. Further, inspired by residual flows, we extend this strategy to its logical conclusion by employing separate networks to independently model the score at each individual time point. As empirically demonstrated on synthetic and image datasets, our approach not only significantly accelerates the training process by introducing an additional layer of parallelization atop data parallelization, but it also enhances density estimation performance when compared to the conventional training methodology for DPMs.
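To make the core idea concrete, below is a minimal sketch (not the authors' code) of parallel score matching: the diffusion horizon [0, T] is partitioned into K sub-intervals, and an independent score network is trained on times drawn only from its own sub-interval, so the K training runs can be launched in parallel. All names (ScoreNet, train_subinterval, K, T) and the VP-style perturbation kernel are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

T, K = 1.0, 4  # diffusion horizon and number of sub-intervals (assumed values)

class ScoreNet(nn.Module):
    """Small MLP taking (x, t) and returning an estimate of the score."""
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t[:, None]], dim=-1))

def train_subinterval(k: int, data: torch.Tensor, steps: int = 1000):
    """Train the k-th network on times drawn only from its sub-interval.
    Each call is fully independent, so the K calls can run on separate
    workers/GPUs in addition to any data parallelism within each call."""
    net = ScoreNet(data.shape[1])
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    t_lo, t_hi = k * T / K, (k + 1) * T / K
    for _ in range(steps):
        x0 = data[torch.randint(len(data), (256,))]
        t = torch.rand(256) * (t_hi - t_lo) + t_lo
        # VP-SDE-style perturbation kernel (one common choice, assumed here):
        # x_t = alpha(t) * x0 + sigma(t) * eps, with score target -eps / sigma(t)
        alpha = torch.exp(-0.5 * t)[:, None]
        sigma = torch.sqrt(1 - alpha**2)
        eps = torch.randn_like(x0)
        xt = alpha * x0 + sigma * eps
        # Denoising score matching loss, weighted by sigma(t)^2
        loss = ((net(xt, t) * sigma + eps) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return net

# The K trainings share no parameters; in practice each would be launched
# as its own job (shown sequentially here for simplicity):
data = torch.randn(10_000, 2)  # toy dataset
nets = [train_subinterval(k, data) for k in range(K)]
```

At sampling or likelihood-evaluation time, the network responsible for the current time point handles that portion of the reverse dynamics; taking K up to the number of discretization steps recovers the per-time-point variant the abstract describes.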