Efficient greedy learning of Gaussian mixtures
Abstract
We present a deterministic greedy method to learn a mixture of Gaussians that runs in O(nk^2) time. The key idea is to build the mixture component-wise. By inserting each new component in a near-optimal way into the existing (near-optimal) learned mixture, we aim to reach a near-optimal solution for the enlarged mixture. Since each component is characterized by a fixed number of parameters, instead of directly solving an optimization problem over the parameters of all components, we replace it by a sequence of optimization problems involving only the parameters of the new component. We include experimental results obtained on image segmentation and reconstruction tasks as well as results of extensive tests on artificially generated data sets. In these experiments the learning method compares favorably to standard EM with random initialization as well as to another existing greedy approach to learning Gaussian mixtures.
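To make the component-wise idea concrete, below is a minimal, illustrative sketch (not the authors' exact algorithm) of greedy Gaussian mixture learning: components are added one at a time, each new component is fitted by a "partial EM" that updates only the new component's parameters while the existing mixture is held fixed, and the best of several candidate insertions is kept. Names such as `n_candidates` and `partial_em`, the candidate-initialization heuristic, and the omission of the full-mixture EM refinement steps are assumptions made for brevity.

```python
# Illustrative sketch of greedy, component-wise Gaussian mixture learning.
# New components are fitted with a partial EM that touches only the new
# component's parameters; the old mixture stays fixed during that step.

import numpy as np
from scipy.stats import multivariate_normal


def mixture_density(X, weights, means, covs):
    """Evaluate the mixture density p(x) for every row of X."""
    dens = np.zeros(len(X))
    for w, m, c in zip(weights, means, covs):
        dens += w * multivariate_normal.pdf(X, mean=m, cov=c)
    return dens


def partial_em(X, weights, means, covs, new_mean, new_cov, new_weight, n_iter=10):
    """Update only the new component, keeping the existing mixture fixed."""
    old_density = mixture_density(X, weights, means, covs)  # fixed part
    for _ in range(n_iter):
        new_density = new_weight * multivariate_normal.pdf(X, mean=new_mean, cov=new_cov)
        resp = new_density / ((1 - new_weight) * old_density + new_density)
        r_sum = resp.sum()
        new_weight = r_sum / len(X)
        new_mean = resp @ X / r_sum
        diff = X - new_mean
        new_cov = (resp[:, None] * diff).T @ diff / r_sum + 1e-6 * np.eye(X.shape[1])
    return new_weight, new_mean, new_cov


def greedy_gmm(X, n_components, n_candidates=5, rng=None):
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Start from the one-component maximum-likelihood solution.
    weights = [1.0]
    means = [X.mean(axis=0)]
    covs = [np.cov(X, rowvar=False) + 1e-6 * np.eye(d)]
    for _ in range(1, n_components):
        best = None
        # Try a few candidate locations; keep the partial-EM result with the
        # highest log-likelihood for the enlarged mixture.
        for idx in rng.choice(n, size=n_candidates, replace=False):
            cand = partial_em(
                X, weights, means, covs,
                new_mean=X[idx].copy(),
                new_cov=np.cov(X, rowvar=False) / (len(weights) + 1) ** 2 + 1e-6 * np.eye(d),
                new_weight=1.0 / (len(weights) + 1),
            )
            w_new, m_new, c_new = cand
            old_density = mixture_density(X, weights, means, covs)
            new_density = w_new * multivariate_normal.pdf(X, mean=m_new, cov=c_new)
            ll = np.log((1 - w_new) * old_density + new_density).sum()
            if best is None or ll > best[0]:
                best = (ll, cand)
        w_new, m_new, c_new = best[1]
        # Insert the winning component, rescaling the old mixing weights.
        weights = [w * (1 - w_new) for w in weights] + [w_new]
        means.append(m_new)
        covs.append(c_new)
    return np.array(weights), np.array(means), np.array(covs)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(loc, 0.3, size=(200, 2)) for loc in ([-2, 0], [0, 2], [2, 0])])
    w, m, c = greedy_gmm(X, n_components=3)
    print("weights:", np.round(w, 3))
    print("means:\n", np.round(m, 2))
```

In this sketch each insertion costs O(n) per candidate, which is what keeps the overall greedy procedure cheap; the paper additionally interleaves full EM updates of the whole mixture, omitted here.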