TY - GEN
T1 - Globally Convergent Algorithms for Learning Multivariate Generalized Gaussian Distributions
AU - Wang, Bin
AU - Zhang, Huanyu
AU - Zhao, Ziping
AU - Sun, Ying
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/7/11
Y1 - 2021/7/11
N2 - The multivariate generalized Gaussian distribution has been used intensively in various data analytics fields. Due to its flexibility in modeling different distributions, developing efficient methods to learn the model parameters has attracted considerable attention. Existing algorithms, including the popular fixed-point algorithms, focus on learning the shape parameters and scatter matrices, but convergence is only established when the shape parameters are taken as given. When coupled with the shape parameters, the convergence properties of the existing alternating algorithms remain unknown. In this paper, globally convergent algorithms based on the block majorization minimization method are proposed to jointly learn all the model parameters in the maximum likelihood estimation setting. The negative log-likelihood function w.r.t. the shape parameter is proved to be strictly convex, which, to the best of our knowledge, is the first result of this kind in the literature. The superior performance of the proposed algorithms is validated numerically on synthetic data with comparisons to existing methods.
AB - The multivariate generalized Gaussian distribution has been used intensively in various data analytics fields. Due to its flexibility in modeling different distributions, developing efficient methods to learn the model parameters has attracted considerable attention. Existing algorithms, including the popular fixed-point algorithms, focus on learning the shape parameters and scatter matrices, but convergence is only established when the shape parameters are taken as given. When coupled with the shape parameters, the convergence properties of the existing alternating algorithms remain unknown. In this paper, globally convergent algorithms based on the block majorization minimization method are proposed to jointly learn all the model parameters in the maximum likelihood estimation setting. The negative log-likelihood function w.r.t. the shape parameter is proved to be strictly convex, which, to the best of our knowledge, is the first result of this kind in the literature. The superior performance of the proposed algorithms is validated numerically on synthetic data with comparisons to existing methods.
UR - http://www.scopus.com/inward/record.url?scp=85113559402&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85113559402&partnerID=8YFLogxK
U2 - 10.1109/SSP49050.2021.9513857
DO - 10.1109/SSP49050.2021.9513857
M3 - Conference contribution
AN - SCOPUS:85113559402
T3 - IEEE Workshop on Statistical Signal Processing Proceedings
SP - 336
EP - 340
BT - 2021 IEEE Statistical Signal Processing Workshop, SSP 2021
PB - IEEE Computer Society
T2 - 21st IEEE Statistical Signal Processing Workshop, SSP 2021
Y2 - 11 July 2021 through 14 July 2021
ER -