A computer experiment or simulation experiment is an experiment used to study a computer simulation, also referred to as an in silico system. This area includes computational physics, computational chemistry, computational biology and other similar disciplines.
Computer simulations are constructed to emulate a physical system. Because they are meant to replicate some aspect of a system in detail, they often do not admit an analytic solution; instead, methods such as discrete event simulation or finite element solvers are used. A computer model is used to make inferences about the system it replicates. For example, climate models are often used because experimentation on an Earth-sized object is impossible.
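As a toy illustration of one such method (a hypothetical example, not from the article), a minimal discrete event simulation can be written in a few lines: a single-server queue whose state changes only at event times, popped in time order from a priority queue.

```python
import heapq

def discrete_event_simulation(arrivals, service_time):
    """Single-server queue driven by an event list (illustrative sketch).

    Events are (time, kind) pairs popped in time order from a heap.
    Returns the departure time of each job, in order of departure.
    """
    events = [(t, "arrival") for t in arrivals]
    heapq.heapify(events)
    server_free_at = 0.0
    departures = []
    while events:
        t, kind = heapq.heappop(events)
        if kind == "arrival":
            start = max(t, server_free_at)   # wait if the server is busy
            server_free_at = start + service_time
            heapq.heappush(events, (server_free_at, "departure"))
        else:
            departures.append(t)
    return departures

# Three jobs arriving at t = 0, 1, 5 with service time 2:
assert discrete_event_simulation([0.0, 1.0, 5.0], 2.0) == [2.0, 4.0, 7.0]
```

The key design point is that simulated time jumps from one event to the next rather than advancing in fixed increments, which is what distinguishes discrete event simulation from time-stepped solvers.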
Computer experiments have been employed for many purposes, including uncertainty quantification and the calibration of models to observed data.
Modeling of computer experiments typically uses a Bayesian framework. Bayesian statistics is an interpretation of the field of statistics where all evidence about the true state of the world is explicitly expressed in the form of probabilities. In the realm of computer experiments, the Bayesian interpretation implies that we must form a prior distribution representing our prior belief about the structure of the computer model. The use of this philosophy for computer experiments started in the 1980s and is nicely summarized by Sacks et al. (1989) (https://web.archive.org/web/20170918022130/https://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ss%2F1177012413). While the Bayesian approach is widely used, frequentist approaches have recently been discussed as well (http://www2.isye.gatech.edu/~jeffwu/publications/calibration-may1.pdf).
The basic idea of this framework is to model the computer simulation as an unknown function of a set of inputs. The computer simulation is implemented as a piece of computer code that can be evaluated to produce a collection of outputs. Examples of inputs to these simulations are coefficients in the underlying model, initial conditions and forcing functions. It is natural to see the simulation as a deterministic function that maps these inputs into a collection of outputs. On the basis of seeing our simulator this way, it is common to refer to the collection of inputs as x, the computer simulation itself as f, and the resulting output as f(x). Both x and f(x) are vector quantities, and they can be very large collections of values, often indexed by space, by time, or by both. Although f( ⋅ ) is known in principle, in practice this is not the case: many simulators comprise tens of thousands of lines of high-level computer code, which is not accessible to intuition, so f is treated as an unknown function.
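For concreteness, a hypothetical simulator of this kind might look as follows: a forward-Euler integrator for a damped oscillator, i.e. a deterministic function f mapping inputs x (a model coefficient and initial conditions) to a time-indexed output vector f(x). The model and parameter names are illustrative, not from the article.

```python
def simulator(damping, x0, v0, dt=0.01, steps=1000):
    """Deterministic map f: inputs x -> time-indexed output f(x).

    Forward-Euler integration of x'' = -x - damping * x'.
    The inputs are a model coefficient (damping) and initial
    conditions (x0, v0); the output is a list indexed by time step.
    """
    x, v = x0, v0
    trajectory = []
    for _ in range(steps):
        a = -x - damping * v            # model coefficient enters here
        x, v = x + dt * v, v + dt * a
        trajectory.append(x)
    return trajectory

# The same inputs always yield the same outputs: f is deterministic.
run1 = simulator(0.3, 1.0, 0.0)
run2 = simulator(0.3, 1.0, 0.0)
assert run1 == run2
```

Even for this transparent five-line model, the map from (damping, x0, v0) to the full trajectory has no convenient closed form once dt is finite, which is the sense in which f is treated as an unknown function.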
The typical model for a computer code output is a Gaussian process. For notational simplicity, assume f(x) is a scalar. Owing to the Bayesian framework, we fix our belief that the function f follows a Gaussian process,

f\sim\operatorname{GP}(m( ⋅ ),C( ⋅ , ⋅ )),

where m is the mean function and C is the covariance function. Popular mean functions are low-order polynomials, and a popular covariance function is the Matérn covariance, which includes both the exponential covariance (\nu=1/2) and the Gaussian covariance (in the limit \nu → \infty).
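As an illustrative sketch (function names and default parameters are assumptions, not from the article), the common half-integer Matérn covariances and their Gaussian limit can be written directly:

```python
import math

def matern(d, length=1.0, sigma2=1.0, nu=0.5):
    """Matérn covariance between two points a distance d apart.

    Implemented for the common half-integer orders nu = 1/2, 3/2, 5/2,
    where the covariance has a simple closed form.
    """
    if d == 0.0:
        return sigma2
    r = math.sqrt(2 * nu) * d / length
    if nu == 0.5:                        # exponential covariance
        poly = 1.0
    elif nu == 1.5:
        poly = 1.0 + r
    elif nu == 2.5:
        poly = 1.0 + r + r**2 / 3.0
    else:
        raise ValueError("only nu in {1/2, 3/2, 5/2} implemented")
    return sigma2 * poly * math.exp(-r)

def gaussian(d, length=1.0, sigma2=1.0):
    """Limit of the Matérn family as nu -> infinity (squared exponential)."""
    return sigma2 * math.exp(-d**2 / (2 * length**2))

# nu = 1/2 reduces to the exponential covariance sigma^2 * exp(-d / length):
d = 0.7
assert abs(matern(d, nu=0.5) - math.exp(-d)) < 1e-12
```

The parameter nu controls the smoothness of the process: sample paths are rougher for small nu and become infinitely differentiable in the Gaussian limit.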
The design of computer experiments has considerable differences from design of experiments for parametric models. Since a Gaussian process prior has an infinite dimensional representation, the concepts of A and D criteria (see Optimal design), which focus on reducing the error in the parameters, cannot be used. Replications would also be wasteful in cases where the computer simulation has no error. Criteria used to determine a good experimental design include integrated mean squared prediction error (https://web.archive.org/web/20170918022130/https://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ss%2F1177012413) and distance-based criteria (http://www.sciencedirect.com/science/article/pii/037837589090122B).
Popular strategies for design include Latin hypercube sampling and low discrepancy sequences.
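A minimal Latin hypercube sampler, written here from the standard definition (exactly one sample in each of n equal strata along every axis), might look like:

```python
import random

def latin_hypercube(n, dim, seed=0):
    """Draw n points in [0, 1)^dim, one point per stratum on each axis."""
    rng = random.Random(seed)
    cols = []
    for _ in range(dim):
        strata = list(range(n))
        rng.shuffle(strata)              # random permutation of the n strata
        # Place one point uniformly inside each stratum [s/n, (s+1)/n).
        cols.append([(s + rng.random()) / n for s in strata])
    return [list(point) for point in zip(*cols)]

design = latin_hypercube(n=10, dim=2)
# Each axis hits every stratum [k/10, (k+1)/10) exactly once:
for axis in range(2):
    strata_hit = sorted(int(p[axis] * 10) for p in design)
    assert strata_hit == list(range(10))
```

Compared with plain random sampling, this guarantees that each one-dimensional projection of the design is evenly spread, which matters when only a few of the inputs turn out to be influential.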
Unlike physical experiments, it is common for computer experiments to have thousands of different input combinations. Because the standard inference requires matrix inversion of a square matrix of the size of the number of samples (n), the cost scales as \mathcal{O}(n^{3}).
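A rough sketch of where the cubic cost comes from, using a pure-Python Gauss-Jordan inversion with an operation counter (illustrative only; production solvers use factorizations rather than explicit inversion):

```python
def invert(matrix):
    """Gauss-Jordan inversion; returns (inverse, multiplication count)."""
    n = len(matrix)
    # Augment with the identity matrix.
    aug = [row[:] + [float(i == j) for j in range(n)]
           for i, row in enumerate(matrix)]
    ops = 0
    for col in range(n):
        # Partial pivoting for numerical stability.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]        # 2n entries touched
        ops += 2 * n
        for r in range(n):
            if r != col and aug[r][col] != 0.0:     # ~n rows per column
                factor = aug[r][col]
                aug[r] = [v - factor * w for v, w in zip(aug[r], aug[col])]
                ops += 2 * n
    inverse = [row[n:] for row in aug]
    return inverse, ops

# ~2n^2 work per pivot column, n columns: doubling n multiplies work by ~2^3.
_, ops_small = invert([[float(i == j) + 0.1 for j in range(20)] for i in range(20)])
_, ops_big = invert([[float(i == j) + 0.1 for j in range(40)] for i in range(40)])
assert 6 < ops_big / ops_small < 9
```

This cubic growth is why emulating simulations with many thousands of runs requires approximations (for example, sparse or low-rank covariance structures) rather than direct inversion of the full covariance matrix.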