Iterative conditional maximization algorithm for nonconcave penalized likelihood

Yiyun Zhang, Runze Li

Research output: Chapter in Book/Report/Conference proceeding › Chapter

1 Scopus citations


Variable selection via penalized likelihood has received considerable attention recently. Penalized likelihood estimators with properly chosen penalty functions possess nice properties. In practice, optimizing the penalized likelihood function is often challenging because the objective function may be nondifferentiable and/or nonconcave. Existing algorithms such as the local quadratic approximation (LQA) algorithm share a drawback of backward selection: once a variable is deleted, it is essentially excluded from the final model. We propose the iterative conditional maximization (ICM) algorithm to address this drawback. It utilizes the characteristics of the nonconcave penalized likelihood and enjoys fast convergence. Three simulation studies, in linear, logistic, and Poisson regression, together with one real data analysis, are conducted to assess the performance of the ICM algorithm.
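The abstract's idea of maximizing a nonconcave penalized likelihood one coordinate at a time can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the authors' exact ICM algorithm: it assumes a linear model with standardized predictors and a SCAD-type penalty (Fan and Li's form, with the conventional a = 3.7), and performs generic coordinate-wise (conditional) maximization, where each coefficient's conditional update has a closed-form thresholding solution. Unlike LQA-style backward deletion, a coefficient set to zero in one sweep can re-enter the model in a later sweep.

```python
import numpy as np

def scad_threshold(z, lam, a=3.7):
    """Closed-form maximizer of the univariate SCAD-penalized problem
    -(z - theta)^2/2 - p_lam(|theta|)  (Fan & Li's thresholding rule)."""
    az = abs(z)
    if az <= 2.0 * lam:                      # soft-thresholding region
        return np.sign(z) * max(az - lam, 0.0)
    elif az <= a * lam:                      # linearly interpolated region
        return ((a - 1.0) * z - np.sign(z) * a * lam) / (a - 2.0)
    else:                                    # no shrinkage for large signals
        return z

def icm_scad(X, y, lam, a=3.7, n_sweeps=100):
    """Coordinate-wise (conditional) maximization for SCAD-penalized
    least squares. Assumes columns of X scaled so X[:, j] @ X[:, j] = n."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y - X @ beta                         # current residual
    for _ in range(n_sweeps):
        for j in range(p):
            r = r + X[:, j] * beta[j]        # partial residual without x_j
            z = X[:, j] @ r / n              # conditional least-squares fit
            beta[j] = scad_threshold(z, lam, a)
            r = r - X[:, j] * beta[j]        # restore residual
    return beta

# Hypothetical demonstration data: sparse truth in a linear model.
rng = np.random.default_rng(0)
n, p = 200, 8
X = rng.standard_normal((n, p))
X = X / np.sqrt((X ** 2).mean(axis=0))       # enforce X[:, j] @ X[:, j] = n
beta_true = np.array([3.0, 1.5, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.standard_normal(n)
beta_hat = icm_scad(X, y, lam=0.3)
```

Because the SCAD rule leaves large coefficients unshrunk (the `|z| > a*lam` branch), the nonzero estimates are nearly unbiased, while noise-level coordinates are thresholded exactly to zero on each sweep.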

Original language: English (US)
Title of host publication: Nonparametric Statistics and Mixture Models
Subtitle of host publication: A Festschrift in Honor of Thomas P Hettmansperger
Publisher: World Scientific Publishing Co.
Number of pages: 16
ISBN (Electronic): 9789814340564
ISBN (Print): 9814340553, 9789814340557
State: Published - Jan 1 2011

All Science Journal Classification (ASJC) codes

  • General Mathematics


