Empirically improving model adequacy in scientific computing

Sez Atamturktur, Garrison N. Stevens, D. Andrew Brown

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In developing mechanistic models, we establish assumptions regarding aspects of the system behavior that are not fully understood. Such assumptions in turn may lead to a simplified representation or omission of some underlying phenomena. Although necessary for feasibility, such simplifications introduce systematic bias in the model predictions. Oftentimes, model bias is non-uniform across the operational domain of the system of interest. This operational domain is defined by the control parameters, i.e., those that can be controlled by experimentalists during observations of the system behavior. The conventional approach for addressing model bias involves empirically inferring a functional representation of the discrepancy with respect to the control parameters and accordingly bias-correcting model predictions. This conventional process can be considered experimental data fitting informed by theoretical knowledge, providing only a one-way interaction between simulation and observation. The model calibration approach presented herein recognizes that assumptions established during model development may require omission or simplification of interactions among model input parameters. When prediction accuracy relies on the inclusion of these interactions, it becomes necessary to infer the functional relationships between the input parameters from experiments. As such, this study demonstrates a two-way interaction in which theoretical knowledge is in turn informed by experimental data fitting. We propose to empirically learn previously unknown parameter interactions through the training of functions emulating these relationships. Such interactions can take the form of a dependence of model input parameter values on control parameter settings or on other input parameters. If the nature of the interactions is known, appropriate parametric functions may be implemented. Otherwise, nonparametric emulator functions can be leveraged. In our study, we use nonparametric Gaussian Process models in the Bayesian paradigm to infer the interactions among input parameters from the experimental data. The proposed approach will equip model developers with a tool capable of identifying the underlying and mechanistically relevant physical processes absent from engineering models. This approach has the potential not only to significantly reduce the systematic bias between model predictions and experimental observations, but also to further engineers' knowledge of the physics principles governing complex systems.
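
The conventional strategy referenced above is commonly written as y(x) = η(x, θ) + δ(x) + ε, with the discrepancy δ inferred as a function of the control parameters x. Below is a minimal, hypothetical sketch of the complementary idea proposed in the paper: emulating an unknown dependence θ(x) of a model input parameter on a control parameter with a nonparametric Gaussian Process. The synthetic data, parameter names, and kernel choices are illustrative assumptions (using scikit-learn for brevity), not the authors' implementation.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    # Control parameter settings at which experiments were (hypothetically) observed.
    x_obs = np.linspace(0.0, 1.0, 12).reshape(-1, 1)

    # Input-parameter values inferred at each setting (synthetic here); in practice
    # these would come from calibrating the model against the experimental data.
    theta_obs = 2.0 + 0.8 * np.sin(3.0 * x_obs).ravel() + rng.normal(0.0, 0.05, 12)

    # Nonparametric emulator theta(x): a GP with a smooth RBF kernel plus a noise
    # term, giving a posterior mean and uncertainty for the learned interaction.
    gp = GaussianProcessRegressor(
        kernel=1.0 * RBF(length_scale=0.3) + WhiteKernel(noise_level=0.05**2),
        normalize_y=True,
    )
    gp.fit(x_obs, theta_obs)

    # Query the learned relationship across the operational domain.
    x_new = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
    theta_mean, theta_std = gp.predict(x_new, return_std=True)
    print(theta_mean[:3], theta_std[:3])

A fully Bayesian treatment, as described in the abstract, would place priors on the GP hyperparameters and sample their posterior; the maximum-likelihood kernel fit above stands in for that step in this sketch.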

Original language: English (US)
Title of host publication: Model Validation and Uncertainty Quantification - Proceedings of the 35th IMAC, A Conference and Exposition on Structural Dynamics 2017
Editors: Babak Moaveni, Robert Barthorpe, Costas Papadimitriou, Israel Lopez, Roland Platz
Publisher: Springer New York LLC
Pages: 363-369
Number of pages: 7
ISBN (Print): 9783319548579
DOIs
State: Published - 2017
Event: 35th IMAC Conference and Exposition on Structural Dynamics, 2017 - Garden Grove, United States
Duration: Jan 30, 2017 - Feb 2, 2017

Publication series

Name: Conference Proceedings of the Society for Experimental Mechanics Series
Volume: 3 Part F2
ISSN (Print): 2191-5644
ISSN (Electronic): 2191-5652

Other

Other: 35th IMAC Conference and Exposition on Structural Dynamics, 2017
Country/Territory: United States
City: Garden Grove
Period: 1/30/17 - 2/2/17

All Science Journal Classification (ASJC) codes

  • General Engineering
  • Computational Mechanics
  • Mechanical Engineering
