When we use simulation to estimate the performance of a stochastic system, the simulation often contains input models that were estimated from real-world data; therefore, there is both simulation and input uncertainty in the performance estimates. In this paper, we provide a method to measure the overall uncertainty while simultaneously reducing the influence of simulation estimation error due to output variability. To reach this goal, a Bayesian framework is introduced. We use a Bayesian posterior for the input-model parameters, conditional on the real-world data, to quantify the input-parameter uncertainty; we propagate this uncertainty to the output mean using a Gaussian process posterior distribution for the simulation response as a function of the input-model parameters, conditional on a set of simulation experiments. We summarize the overall uncertainty via a credible interval for the mean. Our framework is fully Bayesian, makes more effective use of the simulation budget than other Bayesian approaches in the stochastic simulation literature, and is supported by both theoretical analysis and an empirical study. We also make clear how to interpret our credible interval and why it is distinct from the confidence intervals for input uncertainty obtained in other papers.
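The pipeline the abstract describes can be sketched numerically: draw input-model parameters from their posterior given real-world data, evaluate a Gaussian process posterior for the simulation response at each draw, and summarize the combined uncertainty with a credible interval. The sketch below is illustrative only and not the paper's implementation; the toy simulation model, the conjugate normal posterior, and the fixed GP hyperparameters (`ell`, `tau2`) are all assumptions chosen for a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: posterior for the input-model parameter, conditional on real-world data.
# Assumption: data are Normal(theta, sigma0^2) with sigma0 known and a flat prior
# on theta, so the posterior is Normal(sample mean, sigma0^2 / n).
sigma0, n = 1.0, 50
data = rng.normal(1.5, sigma0, size=n)          # stand-in for real-world observations
post_mean, post_sd = data.mean(), sigma0 / np.sqrt(n)

# Step 2: GP posterior for the simulation response as a function of the parameter,
# conditional on a small set of simulation experiments at design points.
def simulate(theta, reps=20):
    # Toy stochastic simulation whose unknown mean response is theta**2.
    return rng.normal(theta**2, 1.0, size=reps).mean()

design = np.linspace(post_mean - 3 * post_sd, post_mean + 3 * post_sd, 8)
ybar = np.array([simulate(t) for t in design])
noise_var = 1.0**2 / 20                          # variance of an averaged output

ell, tau2 = 0.5, 4.0                             # assumed GP hyperparameters

def kern(a, b):
    # Squared-exponential covariance between parameter values.
    return tau2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

K = kern(design, design) + noise_var * np.eye(len(design))
alpha = np.linalg.solve(K, ybar)

def gp_posterior(theta):
    # GP predictive mean and standard deviation at parameter value(s) theta.
    theta = np.atleast_1d(theta)
    ks = kern(theta, design)
    mean = ks @ alpha
    var = tau2 - np.einsum("ij,ji->i", ks, np.linalg.solve(K, ks.T))
    return mean, np.sqrt(np.maximum(var, 0.0))

# Step 3: propagate input-parameter uncertainty through the GP posterior and
# summarize overall uncertainty with a credible interval for the mean response.
M = 4000
thetas = rng.normal(post_mean, post_sd, size=M)  # posterior draws of the parameter
m, s = gp_posterior(thetas)
draws = rng.normal(m, s)                         # parameter + metamodel uncertainty
ci = np.percentile(draws, [2.5, 97.5])           # 95% credible interval
print(f"95% credible interval for the mean response: [{ci[0]:.3f}, {ci[1]:.3f}]")
```

The interval reflects both sources of uncertainty at once: widening the posterior on the input parameter or thinning out the simulation design (which inflates the GP's predictive variance) both widen the credible interval.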