Channels’ Matching Algorithm for Mixture Models
Abstract
To solve Maximum Mutual Information (MMI) and Maximum Likelihood (ML) problems for tests, estimations, and mixture models, a new iterative algorithm can be obtained from the Semantic Mutual Information (SMI) and the R(G) function proposed by Chenguang Lu (1993), where the R(G) function is an extension of the information rate-distortion function R(D), G is the lower limit of the SMI, and R(G) represents the minimum R for a given G. This paper focuses on mixture models. The SMI is defined as the average log normalized likelihood. The likelihood function is produced from the truth function and the prior by semantic Bayesian inference, and a group of truth functions constitutes a semantic channel. By letting the semantic channel and the Shannon channel mutually match and iterate, we can obtain the Shannon channel that maximizes the MMI and the average log likelihood; this iterative algorithm is therefore called the Channels' Matching (CM) algorithm. It is proved that the relative entropy between the sampling distribution and the predicted distribution may equal R − G. Hence, solving the maximum likelihood mixture model only requires minimizing R − G, without needing Jensen's inequality. The convergence can be intuitively explained and proved by the R(G) function. Two iterative examples of mixture models (demonstrated in an Excel file) show that the computation required by the CM algorithm is simple; in most cases, about 5 iterations suffice for convergence (relative entropy < 0.001 bit). The CM algorithm is similar to the EM algorithm, but it has better convergence and more potential applications.
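Since the paper's update equations are not reproduced in this abstract, the following is a minimal Python sketch of a CM-style iteration for a one-dimensional two-component Gaussian mixture, inferred only from the description above: the Shannon channel P(y|x) is rematched to the current components (resembling an E-step), the component parameters are refit by weighted maximum likelihood (resembling an M-step), and iteration stops once the per-step gain in average log likelihood falls below 0.001 bit, used here as a stand-in for the relative-entropy criterion. The function name `cm_mixture`, the quantile initialization, and the stopping proxy are assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import norm

def cm_mixture(x, k=2, tol=1e-3, max_iter=100):
    """Illustrative CM-style iteration for a 1-D Gaussian mixture (sketch)."""
    w = np.full(k, 1.0 / k)                          # mixture weights P(y_j)
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))  # crude init (assumption)
    sd = np.full(k, x.std())
    prev_ll = -np.inf
    for it in range(max_iter):
        # component densities P(x | theta_j), shape (k, n)
        dens = np.stack([norm.pdf(x, mu[j], sd[j]) for j in range(k)])
        q = w @ dens                                 # predicted distribution Q(x)
        # "channel matching": posterior Shannon channel P(y_j | x)
        post = w[:, None] * dens / q
        # weighted-MLE parameter updates
        w = post.mean(axis=1)
        nj = post.sum(axis=1)
        mu = post @ x / nj
        sd = np.sqrt((post * (x[None, :] - mu[:, None]) ** 2).sum(axis=1) / nj)
        # average log likelihood in bits; its per-step gain serves as a
        # proxy for the paper's "relative entropy < 0.001 bit" criterion
        ll = np.log2(q).mean()
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return w, mu, sd, it + 1

# Usage on synthetic data from a known two-component mixture:
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 500)])
w, mu, sd, iters = cm_mixture(x)
print(w, mu, sd, iters)
```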