A Tutorial on MM Algorithms

David R. Hunter, Kenneth Lange

Research output: Contribution to journal › Article › peer-review

1330 Scopus citations

Abstract

Most problems in frequentist statistics involve optimization of a function such as a likelihood or a sum of squares. EM algorithms are among the most effective algorithms for maximum likelihood estimation because they consistently drive the likelihood uphill by maximizing a simple surrogate function for the log-likelihood. Iterative optimization of a surrogate function as exemplified by an EM algorithm does not necessarily require missing data. Indeed, every EM algorithm is a special case of the more general class of MM optimization algorithms, which typically exploit convexity rather than missing data in majorizing or minorizing an objective function. In our opinion, MM algorithms deserve to be part of the standard toolkit of professional statisticians. This article explains the principle behind MM algorithms, suggests some methods for constructing them, and discusses some of their attractive features. We include numerous examples throughout the article to illustrate the concepts described. In addition to surveying previous work on MM algorithms, this article introduces some new material on constrained optimization and standard error estimation.
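The majorization idea summarized above can be stated in one line: if g(theta | theta_m) >= f(theta) for all theta, with equality at theta = theta_m, then any theta_{m+1} minimizing g satisfies f(theta_{m+1}) <= g(theta_{m+1} | theta_m) <= g(theta_m | theta_m) = f(theta_m), so the objective can never increase (minorization reverses the inequalities for maximization). As a minimal illustration of this principle, the following Python sketch computes a sample median by repeatedly minimizing a quadratic majorizer of the sum of absolute deviations; it is our own sketch, not code from the paper, and all function names are hypothetical.

import numpy as np

def mm_median(x, theta=None, tol=1e-8, max_iter=500, eps=1e-12):
    """Sample median by MM: minimize f(theta) = sum_i |x_i - theta|.

    At the current iterate theta_m, the quadratic
        (x_i - theta)**2 / (2*|x_i - theta_m|) + |x_i - theta_m| / 2
    majorizes |x_i - theta| and touches it at theta = theta_m, so
    minimizing the surrogate (a weighted least-squares problem) can
    only drive f downhill. eps guards the division when theta_m lands
    exactly on a data point; it is a practical safeguard, not part of
    the theory.
    """
    x = np.asarray(x, dtype=float)
    theta = float(np.mean(x)) if theta is None else float(theta)
    for _ in range(max_iter):
        w = 1.0 / np.maximum(np.abs(x - theta), eps)  # majorizer weights
        theta_new = np.sum(w * x) / np.sum(w)         # surrogate minimizer
        if abs(theta_new - theta) < tol:
            break
        theta = theta_new
    return theta

rng = np.random.default_rng(0)
data = rng.standard_normal(101)
print(mm_median(data), np.median(data))  # the two values agree to within tol

Each update is a weighted mean, so every iteration solves an easy least-squares surrogate in place of the nonsmooth original objective, which is exactly the trade the abstract describes.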

Original language: English (US)
Pages (from-to): 30-37
Number of pages: 8
Journal: American Statistician
Volume: 58
Issue number: 1
DOIs
State: Published - Feb 2004

All Science Journal Classification (ASJC) codes

  • Statistics and Probability
  • General Mathematics
  • Statistics, Probability and Uncertainty
