Posterior Averaging Information Criterion

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

We propose a new model selection method, named the posterior averaging information criterion, for Bayesian model assessment aimed at minimizing the risk of predicting independent future observations. The theoretical foundation is built on the Kullback–Leibler divergence, which quantifies the discrepancy between a proposed candidate model and the underlying true model. From a Bayesian perspective, our method evaluates candidate models over the entire posterior distribution in terms of predicting a future independent observation. Without assuming that the true distribution is contained among the candidate models, the new criterion is developed by correcting the asymptotic bias of the posterior mean of the in-sample log-likelihood as an estimate of the out-of-sample log-likelihood, and it applies even to Bayesian models with degenerate non-informative priors. Simulations in both normal and binomial settings demonstrate superior small-sample performance.
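As a rough illustration of the bias-correction idea described above, the sketch below computes the posterior mean of the in-sample log-likelihood for a normal model with known variance and a flat (non-informative) prior, then subtracts a penalty before multiplying by -2, in the style of deviance-based information criteria. The parameter-count penalty `p` used here is a hypothetical stand-in: the actual PAIC bias correction derived in the paper is not reproduced in this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: normal model with known variance, unknown mean.
n, sigma = 50, 1.0
y = rng.normal(loc=0.5, scale=sigma, size=n)

# Under a flat prior, the posterior for the mean is conjugate:
# mu | y ~ Normal(ybar, sigma^2 / n).
ybar = y.mean()
mu_draws = rng.normal(ybar, sigma / np.sqrt(n), size=5000)

def loglik(mu):
    """In-sample log-likelihood of the normal model at a given mean."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (y - mu) ** 2 / (2 * sigma**2))

# Posterior mean of the in-sample log-likelihood, averaged over the
# entire posterior distribution rather than evaluated at a point estimate.
post_mean_ll = np.mean([loglik(m) for m in mu_draws])

# Hypothetical stand-in penalty: one free parameter. The PAIC penalty
# in the paper corrects the asymptotic bias of post_mean_ll as an
# estimate of the out-of-sample log-likelihood; it is not simply p.
p = 1
criterion = -2 * (post_mean_ll - p)
print(criterion)
```

Averaging the log-likelihood over posterior draws, rather than plugging in a single point estimate, is what distinguishes this family of criteria from plug-in criteria such as AIC; smaller values of the criterion indicate better expected predictive performance.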

Original language: English (US)
Article number: 468
Journal: Entropy
Volume: 25
Issue number: 3
DOIs
State: Published - Mar 2023

All Science Journal Classification (ASJC) codes

  • Information Systems
  • General Physics and Astronomy
  • Electrical and Electronic Engineering
  • Mathematical Physics
  • Physics and Astronomy (miscellaneous)
