Uncertainty quantification metrics with varying statistical information in model calibration and validation

Sifeng Bi, Saurabh Prabhu, Scott Cogan, Sez Atamturktur

Research output: Contribution to journal › Article › peer-review

34 Scopus citations


Test-analysis comparison metrics are mathematical functions that provide a quantitative measure of the agreement (or lack thereof) between numerical predictions and experimental measurements. When calibrating and validating models, the choice of metric can significantly influence the outcome, yet published research on the role of metrics, in particular on the varying levels of statistical information a metric can contain, has been limited. This paper calibrates and validates model predictions using alternative metrics formulated on three types of distance-based criteria: 1) the Euclidean distance (i.e., the absolute geometric distance between two points), 2) the Mahalanobis distance (i.e., the weighted distance that accounts for the correlations of two point clouds), and 3) the Bhattacharyya distance (i.e., the statistical distance between two point clouds that accounts for their probabilistic distributions). A comparative study is presented in the first case study, where the influence of the various metrics, and of the varying levels of statistical information they contain, on the predictions of the calibrated models is evaluated. In the second case study, an integrated application of the distance metrics is demonstrated through a cross-validation process with regard to measurement variability.
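The three criteria in the abstract differ in how much statistical information they use: the Euclidean distance compares only the cloud means, the Mahalanobis distance additionally weights by covariance, and the Bhattacharyya distance compares full (here, Gaussian-approximated) distributions. The following is a minimal illustrative sketch of these standard formulas, not the paper's implementation; the function names and sample data are hypothetical, and the point clouds are assumed approximately Gaussian for the Bhattacharyya term.

```python
import numpy as np

def euclidean_distance(x, y):
    # Absolute geometric distance between the mean points of two clouds
    return float(np.linalg.norm(np.mean(x, axis=0) - np.mean(y, axis=0)))

def mahalanobis_distance(x, y):
    # Distance between cloud means, weighted by the average covariance,
    # so correlated/high-variance directions count for less
    d = np.mean(x, axis=0) - np.mean(y, axis=0)
    cov = 0.5 * (np.cov(x, rowvar=False) + np.cov(y, rowvar=False))
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def bhattacharyya_distance(x, y):
    # Statistical distance between two clouds, assuming each is roughly
    # Gaussian: a mean-separation term plus a covariance-mismatch term
    m1, m2 = np.mean(x, axis=0), np.mean(y, axis=0)
    c1, c2 = np.cov(x, rowvar=False), np.cov(y, rowvar=False)
    c = 0.5 * (c1 + c2)
    d = m1 - m2
    term_mean = 0.125 * d @ np.linalg.inv(c) @ d
    term_cov = 0.5 * np.log(np.linalg.det(c)
                            / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return float(term_mean + term_cov)

# Hypothetical "prediction" and "measurement" point clouds for illustration
rng = np.random.default_rng(0)
sim = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], size=500)
exp = rng.multivariate_normal([0.5, 0.2], [[1.2, 0.4], [0.4, 0.9]], size=500)

print(euclidean_distance(sim, exp))
print(mahalanobis_distance(sim, exp))
print(bhattacharyya_distance(sim, exp))
```

Note that only the Bhattacharyya distance reacts to a pure change in spread: two clouds with identical means but different covariances have zero Euclidean and Mahalanobis mean-distance but a nonzero covariance-mismatch term.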

Original language: English (US)
Pages (from-to): 3570-3583
Number of pages: 14
Journal: AIAA Journal
Issue number: 10
State: Published - 2017

All Science Journal Classification (ASJC) codes

  • Aerospace Engineering


