We address the longstanding problem of learning and model selection in finite mixtures. A common approach is to generate candidate solutions with varying numbers of components (via the Expectation-Maximization (EM) algorithm) and then select the best model according to a cost such as the Bayesian Information Criterion (BIC). A recent alternative uses component-wise EM (CEM) and, further, integrates model selection within CEM. Both approaches are susceptible to poor solutions: the first because of EM's sensitivity to initialization, the second because of the sequential (greedy) nature of CEM. Deterministic annealing for clustering (DA) and for mixture modeling (DAEM) offers a way to avoid local optima, but these methods do not encompass model selection. We propose a new technique with the positive attributes of all these methods: it integrates learning and model selection, performs batch optimization over components, and has the character of DA, with the optimization carried out over a sequence of decreasing temperatures. Unlike standard DA, which reduces the partition entropy as the temperature is lowered, our approach reduces the entropy of binary random variables that express whether each component is active or inactive. At low temperature, the method achieves explicit model-order selection. Experiments demonstrate favorable performance of our method compared with several alternatives. We also give an interesting stochastic generative-model interpretation of our method.
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Published - 2004
Proceedings - IEEE International Conference on Acoustics, Speech, and Signal Processing - Montreal, Que., Canada
Duration: May 17 2004 → May 21 2004
All Science Journal Classification (ASJC) codes
- Signal Processing
- Electrical and Electronic Engineering