Gaussian process
In probability theory and statistics, a Gaussian process is a stochastic process (a collection of random variables indexed by time or space), such that every finite collection of those random variables has a multivariate normal distribution. The distribution of a Gaussian process is the joint distribution of all those (infinitely many) random variables, and as such, it is a distribution over functions with a continuous domain, e.g. time or space.
The concept of Gaussian processes is named after Carl Friedrich Gauss because it is based on the notion of the Gaussian distribution (normal distribution). Gaussian processes can be seen as an infinite-dimensional generalization of multivariate normal distributions.
Gaussian processes are useful in statistical modelling, benefiting from properties inherited from the normal distribution. For example, if a random process is modelled as a Gaussian process, the distributions of various derived quantities can be obtained explicitly. Such quantities include the average value of the process over a range of times and the error in estimating the average using sample values at a small set of times. While exact models often scale poorly as the amount of data increases, multiple approximation methods have been developed which often retain good accuracy while drastically reducing computation time.
Definition
A time continuous stochastic process $\left\{X_t ; t\in T\right\}$ is Gaussian if and only if for every finite set of indices $t_1,\ldots,t_k$ in the index set $T$, $\mathbf{X}_{t_1,\ldots,t_k} = (X_{t_1},\ldots,X_{t_k})$ is a multivariate Gaussian random variable.[1] That is the same as saying every linear combination of $(X_{t_1},\ldots,X_{t_k})$ has a univariate normal (or Gaussian) distribution.
Using characteristic functions of random variables with $i$ denoting the imaginary unit such that $i^2 = -1$, the Gaussian property can be formulated as follows: $\left\{X_t ; t\in T\right\}$ is Gaussian if and only if, for every finite set of indices $t_1,\ldots,t_k$, there are real-valued $\sigma_{\ell j}$, $\mu_\ell$ with $\sigma_{jj} > 0$ such that the following equality holds for all $s_1,s_2,\ldots,s_k \in \mathbb{R}$,
$$\operatorname{E}\left[\exp\left(i\sum_{\ell=1}^{k} s_\ell \, X_{t_\ell}\right)\right] = \exp\left(-\tfrac{1}{2}\sum_{\ell,j}\sigma_{\ell j} s_\ell s_j + i\sum_{\ell} \mu_\ell s_\ell\right),$$
or $\operatorname{E}\left[e^{i\,s^{\mathsf{T}}\mathbf{X}}\right] = e^{-\frac{1}{2} s^{\mathsf{T}} \Sigma\, s + i\,\mu^{\mathsf{T}} s}$. The numbers $\sigma_{\ell j}$ and $\mu_\ell$ can be shown to be the covariances and means of the variables in the process.[2]
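As a concrete check of this formulation, the following sketch (illustrative, not from the source) compares the empirical characteristic function of samples from a multivariate Gaussian with the closed form above; the mean vector, covariance matrix, and argument $s$ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary mean vector and covariance matrix for k = 2 indices.
mu = np.array([1.0, -0.5])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

X = rng.multivariate_normal(mu, Sigma, size=200_000)  # samples of (X_{t_1}, X_{t_2})
s = np.array([0.3, -0.7])                             # an arbitrary argument s

empirical = np.mean(np.exp(1j * X @ s))               # E[exp(i s . X)], Monte Carlo
theoretical = np.exp(-0.5 * s @ Sigma @ s + 1j * mu @ s)

print(empirical, theoretical)  # the two values agree up to Monte Carlo error
```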
Variance
The variance of a Gaussian process is finite at any time $t$, formally[3]
$$\operatorname{var}[X(t)] = \operatorname{E}\left[\left|X(t) - \operatorname{E}[X(t)]\right|^2\right] < \infty \quad \text{for all } t \in T.$$
Stationarity
For general stochastic processes strict-sense stationarity implies wide-sense stationarity but not every wide-sense stationary stochastic process is strict-sense stationary. However, for a Gaussian stochastic process the two concepts are equivalent.[3]
A Gaussian stochastic process is strict-sense stationary if and only if it is wide-sense stationary.
Example
There is an explicit representation for stationary Gaussian processes.[4] A simple example of this representation is
$$X_t = \cos(at)\,\xi_1 + \sin(at)\,\xi_2,$$
where $\xi_1$ and $\xi_2$ are independent random variables with the standard normal distribution.
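A short simulation of this representation (a sketch; the frequency $a = 1$ and the time grid are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
a = 1.0                                # arbitrary fixed frequency
t = np.linspace(0, 10, 500)

xi1, xi2 = rng.standard_normal(2)      # independent standard normal draws
X = np.cos(a * t) * xi1 + np.sin(a * t) * xi2

# Stationarity shows up in the covariance: E[X_s X_t] = cos(a (s - t)),
# which depends only on the separation s - t.
```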
Covariance functions
See main article: Covariance function. A key fact of Gaussian processes is that they can be completely defined by their second-order statistics.[5] Thus, if a Gaussian process is assumed to have mean zero, defining the covariance function completely defines the process' behaviour. Importantly, the non-negative definiteness of this function enables its spectral decomposition using the Karhunen–Loève expansion. Basic aspects that can be defined through the covariance function are the process' stationarity, isotropy, smoothness and periodicity.[6] [7]
Stationarity refers to the process' behaviour regarding the separation of any two points $x$ and $x'$. If the process is stationary, the covariance function depends only on $x - x'$. For example, the Ornstein–Uhlenbeck process is stationary.
If the process depends only on $|x - x'|$, the Euclidean distance (not the direction) between $x$ and $x'$, then the process is considered isotropic. A process that is concurrently stationary and isotropic is considered to be homogeneous;[8] in practice these properties reflect the differences (or rather the lack of them) in the behaviour of the process given the location of the observer.
Ultimately Gaussian processes translate as taking priors on functions, and the smoothness of these priors can be induced by the covariance function.[6] If we expect that for "near-by" input points $x$ and $x'$ the corresponding output points $y$ and $y'$ are also "near-by", then the assumption of continuity is present. If we wish to allow for significant displacement then we might choose a rougher covariance function. Extreme examples of this behaviour are the Ornstein–Uhlenbeck covariance function and the squared exponential, where the former is never differentiable and the latter is infinitely differentiable.
Periodicity refers to inducing periodic patterns within the behaviour of the process. Formally, this is achieved by mapping the input $x$ to a two-dimensional vector $u(x) = \left(\cos(x), \sin(x)\right)$.
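The connection between this mapping and the periodic covariance function $K_{\operatorname{P}}$ listed in the next subsection can be verified numerically: since $|u(x) - u(x')|^2 = 4\sin^2(d/2)$ with $d = x - x'$, a squared exponential kernel applied to $u(x)$ reproduces $K_{\operatorname{P}}$. A minimal sketch (the inputs and length-scale are arbitrary):

```python
import numpy as np

ell = 0.8                                        # arbitrary length-scale
x, xp = 1.3, 4.1                                 # two arbitrary inputs
d = x - xp

u = lambda x: np.array([np.cos(x), np.sin(x)])   # the periodic feature map

k_se_on_u = np.exp(-np.sum((u(x) - u(xp))**2) / (2 * ell**2))
k_p = np.exp(-2 * np.sin(d / 2)**2 / ell**2)     # the periodic kernel K_P

print(k_se_on_u, k_p)  # identical: |u(x) - u(x')|^2 = 4 sin^2(d/2)
```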
Usual covariance functions
There are a number of common covariance functions:[7]
- Constant: $K_{\operatorname{C}}(x,x') = C$
- Linear: $K_{\operatorname{L}}(x,x') = x^{\mathsf{T}} x'$
- White Gaussian noise: $K_{\operatorname{GN}}(x,x') = \sigma^2 \delta_{x,x'}$
- Squared exponential: $K_{\operatorname{SE}}(x,x') = \exp\left(-\tfrac{|d|^2}{2\ell^2}\right)$
- Ornstein–Uhlenbeck: $K_{\operatorname{OU}}(x,x') = \exp\left(-\tfrac{|d|}{\ell}\right)$
- Matérn: $K_{\operatorname{Matern}}(x,x') = \tfrac{2^{1-\nu}}{\Gamma(\nu)}\left(\tfrac{\sqrt{2\nu}\,|d|}{\ell}\right)^{\nu} K_\nu\left(\tfrac{\sqrt{2\nu}\,|d|}{\ell}\right)$
- Periodic: $K_{\operatorname{P}}(x,x') = \exp\left(-\tfrac{2}{\ell^2}\sin^2(d/2)\right)$
- Rational quadratic: $K_{\operatorname{RQ}}(x,x') = \left(1 + |d|^2\right)^{-\alpha}, \quad \alpha \geq 0$
Here $d = x - x'$. The parameter $\ell$ is the characteristic length-scale of the process (practically, "how close" two points $x$ and $x'$ have to be to influence each other significantly), $\delta$ is the Kronecker delta and $\sigma$ the standard deviation of the noise fluctuations. Moreover, $K_\nu$ is the modified Bessel function of order $\nu$ and $\Gamma(\nu)$ is the gamma function evaluated at $\nu$. Importantly, a complicated covariance function can be defined as a linear combination of other simpler covariance functions in order to incorporate different insights about the data-set at hand.
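The following sketch implements a few of these covariance functions and a linear combination of them; the helper names and hyperparameter values are illustrative choices, not part of the source:

```python
import numpy as np

def k_se(x, xp, ell=1.0):
    """Squared exponential covariance K_SE."""
    d = x - xp
    return np.exp(-d**2 / (2 * ell**2))

def k_ou(x, xp, ell=1.0):
    """Ornstein-Uhlenbeck covariance K_OU."""
    return np.exp(-np.abs(x - xp) / ell)

def k_rq(x, xp, alpha=2.0):
    """Rational quadratic covariance K_RQ."""
    return (1 + (x - xp)**2) ** (-alpha)

# A linear combination of valid covariance functions is again a valid
# covariance function, which allows mixing structural assumptions:
def k_combined(x, xp):
    return 0.5 * k_se(x, xp, ell=2.0) + 0.5 * k_ou(x, xp, ell=0.5)
```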
The inferential results are dependent on the values of the hyperparameters $\theta$ (e.g. $\ell$ and $\sigma$) defining the model's behaviour. A popular choice for $\theta$ is to provide maximum a posteriori (MAP) estimates of it with some chosen prior. If the prior is very near uniform, this is the same as maximizing the marginal likelihood of the process; the marginalization being done over the observed process values $y$.[7] This approach is also known as maximum likelihood II, evidence maximization, or empirical Bayes.[9]
Continuity
For a Gaussian process, continuity in probability is equivalent to mean-square continuity,[10] and continuity with probability one is equivalent to sample continuity.[11] The latter implies, but is not implied by, continuity in probability. Continuity in probability holds if and only if the mean and autocovariance are continuous functions. In contrast, sample continuity was challenging even for stationary Gaussian processes (as probably noted first by Andrey Kolmogorov), and more challenging for more general processes.[12] [13] [14] As usual, by a sample continuous process one means a process that admits a sample continuous modification.[15] [16]
Stationary case
For a stationary Gaussian process $X = (X_t)_{t\in\mathbb{R}}$, some conditions on its spectrum are sufficient for sample continuity, but fail to be necessary. A necessary and sufficient condition, sometimes called Dudley–Fernique theorem, involves the function $\sigma$ defined by
$$\sigma(h) = \sqrt{\operatorname{E}\left[\left(X_{t+h} - X_t\right)^2\right]}$$
(the right-hand side does not depend on $t$ due to stationarity). Continuity of $X$ in probability is equivalent to continuity of $\sigma$ at $0$. When convergence of $\sigma(h)$ to $\sigma(0) = 0$ (as $h \to 0$) is too slow, sample continuity of $X$ may fail. Convergence of the following integrals matters:
$$I(\sigma) = \int_0^1 \frac{\sigma(h)}{h\sqrt{\log(1/h)}}\,dh = \int_0^\infty 2\,\sigma\left(e^{-x^2}\right)dx,$$
these two integrals being equal according to integration by substitution $h = e^{-x^2}$, $x = \sqrt{\log(1/h)}$. The first integrand need not be bounded as $h \to 0+$, thus the integral may converge ($I(\sigma) < \infty$) or diverge ($I(\sigma) = \infty$). Taking for example $\sigma\left(e^{-x^2}\right) = x^{-a}$ for large $x$, that is, $\sigma(h) = \left(\log\tfrac{1}{h}\right)^{-a/2}$ for small $h$, one obtains $I(\sigma) < \infty$ when $a > 1$ and $I(\sigma) = \infty$ when $0 < a \le 1$. In these two cases the function $\sigma$ is increasing on $[0,\infty)$, but generally it is not. Moreover, the condition

(*) $\sigma$ is monotone on some interval $[0,\varepsilon]$

does not follow from continuity of $\sigma$ and the evident relations $\sigma(h) \ge 0$ (for all $h$) and $\sigma(0) = 0$. If $\sigma$ is continuous and satisfies (*), then the condition $I(\sigma) < \infty$ is necessary and sufficient for sample continuity of $X$.
Some history: sufficiency was announced by Xavier Fernique in 1964, but the first proof was published by Richard M. Dudley in 1967. Necessity was proved by Michael B. Marcus and Lawrence Shepp in 1970.[17]
There exist sample continuous processes $X$ that violate condition (*). An example found by Marcus and Shepp[17] is a random lacunary Fourier series
$$X_t = \sum_{n=1}^\infty c_n \left(\xi_n \cos \lambda_n t + \eta_n \sin \lambda_n t\right),$$
where $\xi_n, \eta_n$ are independent random variables with standard normal distribution; frequencies $0 < \lambda_1 < \lambda_2 < \cdots$ are a fast growing sequence; and coefficients $c_n > 0$ satisfy $\sum_n c_n < \infty$. The latter relation implies $\operatorname{E}\left[\sum_n c_n\left(|\xi_n| + |\eta_n|\right)\right] < \infty$, whence $\sum_n c_n\left(|\xi_n| + |\eta_n|\right) < \infty$ almost surely, which ensures uniform convergence of the Fourier series almost surely, and sample continuity of $X$. Its autocovariance function
$$\operatorname{E}\left[X_t X_{t+h}\right] = \sum_{n=1}^\infty c_n^2 \cos \lambda_n h$$
is nowhere monotone, as well as the corresponding function $\sigma$.
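A numerical sketch of such a series, truncated at finitely many terms; the particular choices $c_n = n^{-2}$ (which is summable) and $\lambda_n = 2^n$ (fast growing) are illustrative, not the specific sequences used by Marcus and Shepp:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8                                    # truncation level of the series
n = np.arange(1, N + 1)
c = n**-2.0                              # summable coefficients, sum c_n < inf
lam = 2.0**n                             # fast growing (lacunary) frequencies

xi = rng.standard_normal(N)
eta = rng.standard_normal(N)

t = np.linspace(0, 1, 2000)              # coarse grid, for illustration only
X = (c[:, None] * (xi[:, None] * np.cos(lam[:, None] * t)
                   + eta[:, None] * np.sin(lam[:, None] * t))).sum(axis=0)

# Since sum c_n (|xi_n| + |eta_n|) < inf almost surely, the full series
# converges uniformly, so the limiting process is sample continuous.
```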
Brownian motion as the integral of Gaussian processes
A Wiener process (also known as Brownian motion) is the integral of a white noise generalized Gaussian process. It is not stationary, but it has stationary increments.
The Ornstein–Uhlenbeck process is a stationary Gaussian process.
The Brownian bridge is (like the Ornstein–Uhlenbeck process) an example of a Gaussian process whose increments are not independent.
The fractional Brownian motion is a Gaussian process whose covariance function is a generalisation of that of the Wiener process.
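A Wiener path can be simulated by cumulatively summing independent Gaussian increments, mirroring its description as integrated white noise (a sketch; the grid resolution is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(3)
n_steps, T = 1000, 1.0
dt = T / n_steps

# Independent N(0, dt) increments: discretized white noise scaled by dt.
increments = rng.normal(0.0, np.sqrt(dt), size=n_steps)
W = np.concatenate([[0.0], np.cumsum(increments)])  # W_0 = 0

# The increments W_{t+h} - W_t are stationary even though W itself is not.
```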
Driscoll's zero-one law
Driscoll's zero-one law is a result characterizing the sample functions generated by a Gaussian process.
Let $\left\{X_t ; t\in T\right\}$ be a mean-zero Gaussian process with non-negative definite covariance function $K$. Let $\mathcal{H}(R)$ be a reproducing kernel Hilbert space with positive definite kernel $R$. Then
$$\lim_{n\to\infty}\operatorname{tr}\left[K_n R_n^{-1}\right] < \infty,$$
where $K_n$ and $R_n$ are the covariance matrices of all possible pairs of $n$ points, implies
$$\Pr\left[X \in \mathcal{H}(R)\right] = 1.$$
Moreover,
$$\lim_{n\to\infty}\operatorname{tr}\left[K_n R_n^{-1}\right] = \infty$$
implies
$$\Pr\left[X \in \mathcal{H}(R)\right] = 0.$$[18]
This has significant implications when $K = R$, as
$$\lim_{n\to\infty}\operatorname{tr}\left[K_n K_n^{-1}\right] = \lim_{n\to\infty} n = \infty.$$
As such, almost all sample paths of a mean-zero Gaussian process with positive definite kernel $K$ will lie outside of the Hilbert space $\mathcal{H}(K)$.
Linearly constrained Gaussian processes
For many applications of interest some pre-existing knowledge about the system at hand is already given. Consider e.g. the case where the output of the Gaussian process corresponds to a magnetic field; here, the real magnetic field is bound by Maxwell's equations and a way to incorporate this constraint into the Gaussian process formalism would be desirable as this would likely improve the accuracy of the algorithm.
A method on how to incorporate linear constraints into Gaussian processes already exists:[19] Consider the (vector-valued) output function $f(x)$ which is known to obey the linear constraint (i.e. $\mathcal{F}_X$ is a linear operator)
$$\mathcal{F}_X\left(f(x)\right) = 0.$$
Then the constraint $\mathcal{F}_X$ can be fulfilled by choosing $f(x) = \mathcal{G}_X\left(g(x)\right)$, where $g(x) \sim \mathcal{GP}\left(\mu_g, K_g\right)$ is modelled as a Gaussian process, and finding $\mathcal{G}_X$ such that
$$\mathcal{F}_X\left(\mathcal{G}_X(g)\right) = 0 \qquad \forall g.$$
Given $\mathcal{G}_X$ and using the fact that Gaussian processes are closed under linear transformations, the Gaussian process for $f$ obeying the constraint $\mathcal{F}_X$ becomes
$$f(x) = \mathcal{G}_X g \sim \mathcal{GP}\left(\mathcal{G}_X \mu_g,\; \mathcal{G}_X K_g \mathcal{G}_{X'}^{\mathsf{T}}\right).$$
Hence, linear constraints can be encoded into the mean and covariance function of a Gaussian process.
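A minimal finite-dimensional illustration of this construction (a sketch, not the operator-valued machinery of the cited reference): take $\mathcal{F}_X$ and $\mathcal{G}_X$ to be constant matrices $F$ and $G$ with $F G = 0$; then $f = G g$ satisfies the constraint exactly and its covariance is $G K_g G^{\mathsf{T}}$.

```python
import numpy as np

# Constraint operator F and a matrix G whose columns span its null space,
# so that F @ G = 0 (toy stand-ins for the operators F_X and G_X):
F = np.array([[1.0, 1.0]])        # constraint: f_1 + f_2 = 0
G = np.array([[1.0], [-1.0]])

assert np.allclose(F @ G, 0.0)

k_g = 1.0                         # latent covariance k_g(x, x') at one input pair
K_f = k_g * (G @ G.T)             # covariance of f = G g: G k_g G^T

# Any draw f ~ N(0, K_f) satisfies the constraint exactly:
rng = np.random.default_rng(4)
f = rng.multivariate_normal(np.zeros(2), K_f)
print(F @ f)                      # ~0 up to floating point
```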
Applications
A Gaussian process can be used as a prior probability distribution over functions in Bayesian inference.[7] [20] Given any set of N points in the desired domain of your functions, take a multivariate Gaussian whose covariance matrix parameter is the Gram matrix of your N points with some desired kernel, and sample from that Gaussian. To solve the multi-output prediction problem, Gaussian process regression for vector-valued functions was developed. In this method, a 'big' covariance is constructed, which describes the correlations between all the input and output variables taken in N points in the desired domain.[21] This approach was elaborated in detail for the matrix-valued Gaussian processes and generalised to processes with 'heavier tails' like Student-t processes.[22]
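The sampling procedure just described can be sketched in a few lines; the squared exponential kernel and the grid are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0, 5, 100)                       # N points in the input domain

# Gram matrix of the N points under a squared exponential kernel.
K = np.exp(-(x[:, None] - x[None, :])**2 / (2 * 0.5**2))
K += 1e-9 * np.eye(len(x))                       # jitter for numerical stability

# Each draw is one function sampled from the GP prior, evaluated on the grid.
samples = rng.multivariate_normal(np.zeros(len(x)), K, size=3)
```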
Inference of continuous values with a Gaussian process prior is known as Gaussian process regression, or kriging; extending Gaussian process regression to multiple target variables is known as cokriging.[23] Gaussian processes are thus useful as a powerful non-linear multivariate interpolation tool.
Gaussian processes are also commonly used to tackle numerical analysis problems such as numerical integration, solving differential equations, or optimisation in the field of probabilistic numerics.
Gaussian processes can also be used in the context of mixture of experts models, for example.[24] [25] The underlying rationale of such a learning framework consists in the assumption that a given mapping cannot be well captured by a single Gaussian process model. Instead, the observation space is divided into subsets, each of which is characterized by a different mapping function; each of these is learned via a different Gaussian process component in the postulated mixture.
In the natural sciences, Gaussian processes have found use as probabilistic models of astronomical time series and as predictors of molecular properties.[26]
Gaussian process prediction, or Kriging
When concerned with a general Gaussian process regression problem (Kriging), it is assumed that for a Gaussian process $f$ observed at coordinates $x$, the vector of values $f(x)$ is just one sample from a multivariate Gaussian distribution of dimension equal to the number of observed coordinates $n$. Therefore, under the assumption of a zero-mean distribution, $f(x) \sim N\left(0, K(\theta, x, x')\right)$, where $K(\theta, x, x')$ is the covariance matrix between all possible pairs $(x, x')$ for a given set of hyperparameters $\theta$.[7] As such, the log marginal likelihood is:
$$\log p\left(f(x) \mid \theta, x\right) = -\frac{1}{2}\left(f(x)^{\mathsf{T}} K(\theta, x, x')^{-1} f(x) + \log\det\left(K(\theta, x, x')\right) + n \log 2\pi\right)$$
and maximizing this marginal likelihood towards $\theta$ provides the complete specification of the Gaussian process $f$. One can briefly note at this point that the first term corresponds to a penalty term for a model's failure to fit observed values and the second term to a penalty term that increases proportionally to a model's complexity. Having specified $\theta$, making predictions about unobserved values $f(x^*)$ at coordinates $x^*$ is then only a matter of drawing samples from the predictive distribution
$$p\left(y^* \mid x^*, f(x), x\right) = N\left(y^* \mid A, B\right),$$
where the posterior mean estimate $A$ is defined as
$$A = K(\theta, x^*, x)\, K(\theta, x, x')^{-1} f(x)$$
and the posterior variance estimate $B$ is defined as:
$$B = K(\theta, x^*, x^*) - K(\theta, x^*, x)\, K(\theta, x, x')^{-1} K(\theta, x, x^*),$$
where $K(\theta, x^*, x)$ is the covariance between the new coordinate of estimation $x^*$ and all other observed coordinates $x$ for a given hyperparameter vector $\theta$, $K(\theta, x, x')$ and $f(x)$ are defined as before, and $K(\theta, x^*, x^*)$ is the variance at point $x^*$ as dictated by $\theta$. It is important to note that practically the posterior mean estimate $A$ (the "point estimate") is just a linear combination of the observations $f(x)$; in a similar manner the variance $B$ is actually independent of the observations $f(x)$. A known bottleneck in Gaussian process prediction is that the computational complexity of inference and likelihood evaluation is cubic in the number of points $|x|$, and as such can become unfeasible for larger data sets.[6]
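The posterior mean $A$, posterior variance $B$, and log marginal likelihood above translate into a short implementation. This is a sketch under a zero-mean, noise-free model with a squared exponential kernel (all illustrative choices, as is the helper name gp_predict), using a Cholesky factorization rather than an explicit matrix inverse for numerical stability:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def sq_exp(a, b, ell=1.0):
    """Squared exponential kernel matrix between coordinate vectors a and b."""
    return np.exp(-(a[:, None] - b[None, :])**2 / (2 * ell**2))

def gp_predict(x, y, x_star, ell=1.0, jitter=1e-9):
    """Posterior mean A, posterior covariance B, and log marginal likelihood."""
    K = sq_exp(x, x, ell) + jitter * np.eye(len(x))   # K(theta, x, x')
    K_star = sq_exp(x_star, x, ell)                   # K(theta, x*, x)
    c = cho_factor(K)
    alpha = cho_solve(c, y)                           # K^{-1} f(x)

    A = K_star @ alpha                                # posterior mean
    V = cho_solve(c, K_star.T)
    B = sq_exp(x_star, x_star, ell) - K_star @ V      # posterior covariance

    # log det K from the Cholesky factor; the two penalty terms of the text.
    log_det = 2 * np.sum(np.log(np.diag(c[0])))
    lml = -0.5 * (y @ alpha + log_det + len(x) * np.log(2 * np.pi))
    return A, B, lml

x = np.array([0.0, 1.0, 2.5, 4.0])                    # observed coordinates
y = np.sin(x)                                         # observed values f(x)
A, B, lml = gp_predict(x, y, np.linspace(0, 4, 50))
```

Maximizing the returned log marginal likelihood over ell would give the ML-II hyperparameter estimate discussed earlier.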
Works on sparse Gaussian processes, which are usually based on the idea of building a representative set for the given process $f$, try to circumvent this issue.[27] [28] The kriging method can be used in the latent level of a nonlinear mixed-effects model for a spatial functional prediction; this technique is called latent kriging.[29]
Often, the covariance has the form $K(\theta, x, x') = \sigma^2 \tilde{K}(x, x')$, where $\sigma^2$ is a scaling parameter. Examples are the Matérn class covariance functions. If this scaling parameter $\sigma^2$ is either known or unknown (i.e. must be marginalized), then the posterior probability, $p(\theta \mid D)$, i.e. the probability for the hyperparameters $\theta$ given a set of data pairs $D$ of observations of $x$ and $y$, admits an analytical expression.[30]
Bayesian neural networks as Gaussian processes
Bayesian neural networks are a particular type of Bayesian network that results from treating deep learning and artificial neural network models probabilistically, and assigning a prior distribution to their parameters. Computation in artificial neural networks is usually organized into sequential layers of artificial neurons. The number of neurons in a layer is called the layer width. As layer width grows large, many Bayesian neural networks reduce to a Gaussian process with a closed form compositional kernel. This Gaussian process is called the Neural Network Gaussian Process (NNGP).[7] [31] [32] It allows predictions from Bayesian neural networks to be more efficiently evaluated, and provides an analytic tool to understand deep learning models.
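For intuition, the NNGP kernel of a deep ReLU network can be computed with a simple layer-by-layer recursion using the closed-form Gaussian expectation of products of ReLUs (the arc-cosine formula of Cho and Saul). This sketch is illustrative; the weight/bias variances, depth, and the helper name relu_nngp_kernel are arbitrary choices, not the general result of the cited works:

```python
import numpy as np

def relu_nngp_kernel(x1, x2, depth=3, sigma_w2=2.0, sigma_b2=0.0):
    """NNGP kernel between two inputs for an infinitely wide ReLU network."""
    # Layer-0 covariances from the inputs themselves.
    k11 = sigma_b2 + sigma_w2 * x1 @ x1 / len(x1)
    k22 = sigma_b2 + sigma_w2 * x2 @ x2 / len(x2)
    k12 = sigma_b2 + sigma_w2 * x1 @ x2 / len(x1)
    for _ in range(depth):
        # Closed-form E[relu(u) relu(v)] for (u, v) jointly Gaussian:
        cos_t = np.clip(k12 / np.sqrt(k11 * k22), -1.0, 1.0)
        theta = np.arccos(cos_t)
        ev = np.sqrt(k11 * k22) / (2 * np.pi) * (np.sin(theta) + (np.pi - theta) * cos_t)
        k12 = sigma_b2 + sigma_w2 * ev
        k11 = sigma_b2 + sigma_w2 * k11 / 2   # E[relu(u)^2] = k11 / 2
        k22 = sigma_b2 + sigma_w2 * k22 / 2
    return k12

x1 = np.array([1.0, 0.0])
x2 = np.array([0.6, 0.8])
print(relu_nngp_kernel(x1, x2))
```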
Computational issues
See also: Gaussian process approximations. In practical applications, Gaussian process models are often evaluated on a grid leading to multivariate normal distributions. Using these models for prediction or parameter estimation using maximum likelihood requires evaluating a multivariate Gaussian density, which involves calculating the determinant and the inverse of the covariance matrix. Both of these operations have cubic computational complexity which means that even for grids of modest sizes, both operations can have a prohibitive computational cost. This drawback led to the development of multiple approximation methods.
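One common approximation strategy replaces the full covariance matrix by a low-rank Nyström approximation built from m << n inducing points, reducing the dominant cost from O(n³) to O(nm²). A minimal sketch of this idea (inducing points chosen by naive subsampling, an illustrative choice rather than a specific published method):

```python
import numpy as np

def sq_exp(a, b, ell=1.0):
    return np.exp(-(a[:, None] - b[None, :])**2 / (2 * ell**2))

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 10, 2000))        # n = 2000 training inputs
z = x[::100]                                 # m = 20 inducing points (subsample)

K_nm = sq_exp(x, z)
K_mm = sq_exp(z, z) + 1e-9 * np.eye(len(z))

# Nystrom approximation: K ~ K_nm K_mm^{-1} K_nm^T. In practice downstream
# solves exploit this low-rank form (e.g. via the Woodbury identity) instead
# of materializing or factorizing the full n x n matrix as done here.
K_approx = K_nm @ np.linalg.solve(K_mm, K_nm.T)
```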
Notes and References
- Book: MacKay, David J.C. Information Theory, Inference, and Learning Algorithms. 2003. p. 540. Cambridge University Press. 9780521642989. "The probability distribution of a function $y(x)$ is a Gaussian process if for any finite selection of points $x^{(1)}, x^{(2)}, \ldots, x^{(N)}$, the density $P(y(x^{(1)}), y(x^{(2)}), \ldots, y(x^{(N)}))$ is a Gaussian."
- Book: Dudley, R.M. . Real Analysis and Probability . 1989 . Wadsworth and Brooks/Cole . 0-534-10050-3 .
- Book: Amos Lapidoth. A Foundation in Digital Communication. 8 February 2017. Cambridge University Press. 978-1-107-17732-1.
- Kac. M.. Siegert. A.J.F. 1947. An Explicit Representation of a Stationary Gaussian Process. The Annals of Mathematical Statistics. 18. 3. 438–442. 10.1214/aoms/1177730391 . free.
- Book: Bishop, C.M. . Pattern Recognition and Machine Learning . 2006 . . 978-0-387-31073-2.
- Book: Barber, David . Bayesian Reasoning and Machine Learning . 2012 . . 978-0-521-51814-7.
- Book: Rasmussen, C.E. . Williams, C.K.I . Gaussian Processes for Machine Learning . 2006 . . 978-0-262-18253-9.
- Book: Grimmett, Geoffrey . David Stirzaker. Probability and Random Processes. 2001 . . 978-0198572220.
- Seeger. Matthias . 2004 . Gaussian Processes for Machine Learning. International Journal of Neural Systems. 14. 2. 69–104 . 10.1142/s0129065704001899 . 15112367 . 10.1.1.71.1079 . 52807317 .
- Book: Dudley, R. M. . https://www.mathunion.org/fileadmin/ICM/Proceedings/ICM1974.2/ICM1974.2.ocr.pdf . Proceedings of the International Congress of Mathematicians . Richard M. Dudley . 1975 . 2 . 143–146 . The Gaussian process and how to approach it.
- Book: Dudley, R. M. . Selected Works of R.M. Dudley . Sample Functions of the Gaussian Process . 2010 . Richard M. Dudley . . 1 . 1 . 66–103 . 10.1007/978-1-4419-5821-1_13 . 978-1-4419-5820-4 . http://projecteuclid.org/euclid.aop/1176997026 .
- Book: Talagrand, Michel . Michel Talagrand . Upper and lower bounds for stochastic processes: modern methods and classical problems . 2014 . Springer, Heidelberg . 978-3-642-54074-5 . Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge / A Series of Modern Surveys in Mathematics .
- Book: Adler, Robert J. An Introduction to Continuity, Extrema, and Related Topics for General Gaussian Processes. Lecture Notes-Monograph Series. 0-940600-17-X. 4355563. Hayward, California. 1088478. Institute of Mathematical Statistics. 12. 1990.
- Review of: Adler 1990 'An introduction to continuity...' . Simeon M. . Berman . 1992 . Mathematical Reviews. 1088478.
- R. M. . Dudley . Richard M. Dudley . The sizes of compact subsets of Hilbert space and continuity of Gaussian processes . Journal of Functional Analysis . 1 . 3. 290–330 . 1967. 10.1016/0022-1236(67)90017-1 . free .
- Book: https://projecteuclid.org/euclid.bsmsp/1200514231 . Proceedings of the sixth Berkeley symposium on mathematical statistics and probability, vol. II: probability theory . M.B. . Marcus . Lawrence A. . Shepp . Lawrence Shepp . 1972 . Univ. California, Berkeley . 423–441 . Sample behavior of Gaussian processes. 6 . 2 .
- Michael B. . Marcus . Lawrence A. . Shepp . Lawrence Shepp . Continuity of Gaussian processes . . 151 . 2. 377–391 . 1970 . 10.1090/s0002-9947-1970-0264749-1 . 1995502 . free .
- Driscoll. Michael F.. The reproducing kernel Hilbert space structure of the sample paths of a Gaussian process. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete . 26. 4. 1973. 309–316. 0044-3719. 10.1007/BF00534894. 123348980. free.
- Jidling. Carl. Wahlström. Niklas. Wills. Adrian. Schön. Thomas B.. 2017-09-19. Linearly constrained Gaussian processes. stat.ML. 1703.00787.
- Book: Liu, W. . Principe, J.C. . Haykin, S. . Kernel Adaptive Filtering: A Comprehensive Introduction . 2010 . . 978-0-470-44753-6 . 2010-03-26 . https://web.archive.org/web/20160304042652/http://www.cnel.ufl.edu/~weifeng/publication.htm . 2016-03-04 . dead .
- Álvarez. Mauricio A.. Rosasco . Lorenzo . Lawrence. Neil D.. 2012 . Kernels for vector-valued functions: A review. Foundations and Trends in Machine Learning. 4. 3. 195–266 . 10.1561/2200000036. 456491.
- Chen. Zexun . Wang. Bo. Gorban. Alexander N.. 2019 . Multivariate Gaussian and Student-t process regression for multi-output prediction. Neural Computing and Applications. 32. 8. 3005–3028 . 10.1007/s00521-019-04687-8. free. 1703.04455 .
- Book: Stein, M.L. . Interpolation of Spatial Data: Some Theory for Kriging . 1999 . Springer.
- 10.1109/TPAMI.2013.183. 26353224. Gaussian Process-Mixture Conditional Heteroscedasticity. IEEE Transactions on Pattern Analysis and Machine Intelligence. 36. 5. 888–900. 2014. Platanios . Emmanouil A.. Chatzis. Sotirios P.. 10424638.
- 10.1016/j.neucom.2013.04.029. A latent variable Gaussian process model with Pitman–Yor process priors for multiclass classification. Neurocomputing. 120. 482–489. 2013. Chatzis. Sotirios P..
- 10.17863/CAM.93643. Applications of Gaussian Processes at Extreme Lengthscales: From Molecules to Black Holes. PhD. University of Cambridge. 2022 . Ryan-Rhys . Griffiths. 2303.14291 .
- Smola. A.J.. Schoellkopf . B. . 2000 . Sparse greedy matrix approximation for machine learning . Proceedings of the Seventeenth International Conference on Machine Learning. 911–918. 10.1.1.43.3153.
- Csato. L.. Opper . M. . 2002 . Sparse on-line Gaussian processes . Neural Computation . 3. 14 . 641–668 . 10.1162/089976602317250933. 11860686. 10.1.1.335.9713. 11375333.
- Lee. Se Yoon . Bani. Mallick. Bayesian Hierarchical Modeling: Application Towards Production Results in the Eagle Ford Shale of South Texas. Sankhya B. 2021. 84 . 1–43 . 10.1007/s13571-020-00245-8. free.
- Ranftl. Sascha. Melito. Gian Marco. Badeli. Vahid. Reinbacher-Köstinger. Alice. Ellermann. Katrin. von der Linden. Wolfgang. 2019-12-31. Bayesian Uncertainty Quantification with Multi-Fidelity Data and Gaussian Processes for Impedance Cardiography of Aortic Dissection. Entropy. 22. 1. 58. 10.3390/e22010058. 1099-4300. 7516489. 33285833. 2019Entrp..22...58R . free.
- Novak . Roman . Xiao . Lechao . Hron . Jiri . Lee . Jaehoon . Alemi . Alexander A. . Sohl-Dickstein . Jascha . Schoenholz . Samuel S. . Neural Tangents: Fast and Easy Infinite Neural Networks in Python . International Conference on Learning Representations . 2020. 1912.02803 .
- Book: Neal, Radford M.. Bayesian Learning for Neural Networks. Springer Science and Business Media. 2012.