TY - JOUR
T1 - A Tutorial on MM Algorithms
AU - Hunter, David R.
AU - Lange, Kenneth
N1 - Funding Information:
David R. Hunter is Assistant Professor, Department of Statistics, Penn State University, University Park, PA 16802-2111 (E-mail: [email protected]). Kenneth Lange is Professor, Departments of Biomathematics and Human Genetics, David Geffen School of Medicine at UCLA, Los Angeles, CA 90095-1766. Research supported in part by USPHS grants GM53275 and MH59490.
PY - 2004/2
Y1 - 2004/2
N2 - Most problems in frequentist statistics involve optimization of a function such as a likelihood or a sum of squares. EM algorithms are among the most effective algorithms for maximum likelihood estimation because they consistently drive the likelihood uphill by maximizing a simple surrogate function for the log-likelihood. Iterative optimization of a surrogate function as exemplified by an EM algorithm does not necessarily require missing data. Indeed, every EM algorithm is a special case of the more general class of MM optimization algorithms, which typically exploit convexity rather than missing data in majorizing or minorizing an objective function. In our opinion, MM algorithms deserve to be part of the standard toolkit of professional statisticians. This article explains the principle behind MM algorithms, suggests some methods for constructing them, and discusses some of their attractive features. We include numerous examples throughout the article to illustrate the concepts described. In addition to surveying previous work on MM algorithms, this article introduces some new material on constrained optimization and standard error estimation.
AB - Most problems in frequentist statistics involve optimization of a function such as a likelihood or a sum of squares. EM algorithms are among the most effective algorithms for maximum likelihood estimation because they consistently drive the likelihood uphill by maximizing a simple surrogate function for the log-likelihood. Iterative optimization of a surrogate function as exemplified by an EM algorithm does not necessarily require missing data. Indeed, every EM algorithm is a special case of the more general class of MM optimization algorithms, which typically exploit convexity rather than missing data in majorizing or minorizing an objective function. In our opinion, MM algorithms deserve to be part of the standard toolkit of professional statisticians. This article explains the principle behind MM algorithms, suggests some methods for constructing them, and discusses some of their attractive features. We include numerous examples throughout the article to illustrate the concepts described. In addition to surveying previous work on MM algorithms, this article introduces some new material on constrained optimization and standard error estimation.
UR - http://www.scopus.com/inward/record.url?scp=1342332031&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=1342332031&partnerID=8YFLogxK
U2 - 10.1198/0003130042836
DO - 10.1198/0003130042836
M3 - Article
AN - SCOPUS:1342332031
SN - 0003-1305
VL - 58
SP - 30
EP - 37
JO - American Statistician
JF - American Statistician
IS - 1
ER -