Gaussian process
In probability theory and statistics, a Gaussian process is a stochastic process (a collection of random variables indexed by time or space) such that every finite collection of those random variables has a multivariate normal distribution. The distribution of a Gaussian process is the joint distribution of all those (infinitely many) random variables, and as such, it is a distribution over functions with a continuous domain, e.g. time or space.
The concept of Gaussian processes is named after Carl Friedrich Gauss because it is based on the notion of the Gaussian distribution. Gaussian processes can be seen as an infinite-dimensional generalization of multivariate normal distributions.
Gaussian processes are useful in statistical modelling, benefiting from properties inherited from the normal distribution. For example, if a random process is modelled as a Gaussian process, the distributions of various derived quantities can be obtained explicitly. Such quantities include the average value of the process over a range of times and the error in estimating the average using sample values at a small set of times. While exact models often scale poorly as the amount of data increases, multiple approximation methods have been developed which often retain good accuracy while drastically reducing computation time.
Definition
A time continuous stochastic process $\{X_t ; t \in T\}$ is Gaussian if and only if for every finite set of indices $t_1, \ldots, t_k$ in the index set $T$, $\mathbf{X}_{t_1, \ldots, t_k} = (X_{t_1}, \ldots, X_{t_k})$ is a multivariate Gaussian random variable.
As the sum of independent Gaussian distributed random variables is again Gaussian distributed, this is the same as saying that every linear combination of $(X_{t_1}, \ldots, X_{t_k})$ has a univariate Gaussian (normal) distribution.
Using characteristic functions of random variables with $i$ denoting the imaginary unit such that $i^2 = -1$, the Gaussian property can be formulated as follows: $\{X_t ; t \in T\}$ is Gaussian if and only if, for every finite set of indices $t_1, \ldots, t_k$, there are real-valued $\sigma_{\ell j}$, $\mu_\ell$ with $\sigma_{jj} > 0$ such that the following equality holds for all $s_1, s_2, \ldots, s_k \in \mathbb{R}$,
$$\operatorname{E}\left[\exp\left(i \sum_{\ell=1}^{k} s_\ell \, X_{t_\ell}\right)\right] = \exp\left(-\tfrac{1}{2} \sum_{\ell, j} \sigma_{\ell j} s_\ell s_j + i \sum_{\ell} \mu_\ell s_\ell\right),$$
or, in vector notation, $\operatorname{E}\big[e^{i\, \mathbf{s}^{\mathsf T} \mathbf{X}_t}\big] = e^{-\frac{1}{2} \mathbf{s}^{\mathsf T} \boldsymbol{\Sigma} \mathbf{s} + i \boldsymbol{\mu}^{\mathsf T} \mathbf{s}}$.
The numbers $\sigma_{\ell j}$ and $\mu_\ell$ can be shown to be the covariances and means of the variables in the process.
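The finite-dimensional definition above suggests a direct way to draw (approximate) realizations of a Gaussian process on a computer: pick a finite grid of indices, build the corresponding mean vector and covariance matrix, and sample from the resulting multivariate normal. The Python sketch below does this with NumPy; the zero mean and squared-exponential covariance used here are only illustrative choices, not part of the definition.

```python
import numpy as np

def sample_gp(ts, mean_fn, cov_fn, n_draws=3, jitter=1e-8, seed=0):
    """Draw finite-dimensional samples (X_{t_1}, ..., X_{t_k}) of a Gaussian process.

    ts      : 1-D array of indices t_1, ..., t_k
    mean_fn : m(t), the mean function
    cov_fn  : k(s, t), the covariance function
    """
    rng = np.random.default_rng(seed)
    mu = np.array([mean_fn(t) for t in ts])
    K = np.array([[cov_fn(s, t) for t in ts] for s in ts])
    # A small jitter keeps the covariance matrix numerically positive definite.
    return rng.multivariate_normal(mu, K + jitter * np.eye(len(ts)), size=n_draws)

# Illustrative choices (assumptions, not part of the definition):
# zero mean and a squared-exponential covariance with length scale 0.5.
ts = np.linspace(0.0, 5.0, 200)
paths = sample_gp(ts,
                  mean_fn=lambda t: 0.0,
                  cov_fn=lambda s, t: np.exp(-(s - t) ** 2 / (2 * 0.5 ** 2)))
print(paths.shape)  # (3, 200): three finite-dimensional realizations
```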
Variance
The variance of a Gaussian process is finite at any time $t$, formally
$$\operatorname{var}[X(t)] = \operatorname{E}\big[\,|X(t) - \operatorname{E}[X(t)]|^2\,\big] < \infty \qquad \text{for all } t \in T.$$
Stationarity
For general stochastic processes strict-sense stationarity implies wide-sense stationarity, but not every wide-sense stationary stochastic process is strict-sense stationary. However, for a Gaussian stochastic process the two concepts are equivalent: a Gaussian stochastic process is strict-sense stationary if and only if it is wide-sense stationary.
Example
There is an explicit representation for stationary Gaussian processes. A simple example of this representation is
$$X_t = \cos(at)\,\xi_1 + \sin(at)\,\xi_2,$$
where $\xi_1$ and $\xi_2$ are independent random variables with the standard normal distribution.
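A quick way to see the stationarity of this representation is to simulate many independent copies of $(\xi_1, \xi_2)$ and check that the empirical covariance of $X_t$ and $X_s$ depends only on the lag $t - s$ (it equals $\cos(a(t - s))$). The following Python sketch does this; the value of $a$ is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)
a = 2.0                                   # illustrative frequency (assumption)
n_paths = 200_000
xi1, xi2 = rng.standard_normal(n_paths), rng.standard_normal(n_paths)

def X(t):
    # X_t = cos(a t) xi_1 + sin(a t) xi_2, one value per simulated copy of (xi_1, xi_2)
    return np.cos(a * t) * xi1 + np.sin(a * t) * xi2

# The covariance should depend only on the lag t - s, namely cos(a (t - s)).
for (t, s) in [(0.3, 0.1), (1.3, 1.1), (2.0, 0.5)]:
    emp = np.mean(X(t) * X(s))            # empirical covariance (the mean is zero)
    print(f"lag {t - s:.1f}: empirical {emp:+.3f}, theoretical {np.cos(a * (t - s)):+.3f}")
```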
Covariance functions
A key fact of Gaussian processes is that they can be completely defined by their second-order statistics. Thus, if a Gaussian process is assumed to have mean zero, defining the covariance function completely defines the process' behaviour. Importantly, the non-negative definiteness of this function enables its spectral decomposition using the Karhunen–Loève expansion. Basic aspects that can be defined through the covariance function are the process' stationarity, isotropy, smoothness and periodicity.

Stationarity refers to the process' behaviour regarding the separation of any two points $x$ and $x'$. If the process is stationary, the covariance function depends only on $x - x'$. For example, the Ornstein–Uhlenbeck process is stationary.
If the process depends only on $|x - x'|$, the Euclidean distance (not the direction) between $x$ and $x'$, then the process is considered isotropic. A process that is concurrently stationary and isotropic is considered to be homogeneous; in practice these properties reflect the differences in the behaviour of the process given the location of the observer.
Ultimately Gaussian processes translate as taking priors on functions, and the smoothness of these priors can be induced by the covariance function. If we expect that for "near-by" input points $x$ and $x'$ their corresponding output points $y$ and $y'$ are "near-by" also, then the assumption of continuity is present. If we wish to allow for significant displacement then we might choose a rougher covariance function. Extreme examples of this behaviour are the Ornstein–Uhlenbeck covariance function and the squared exponential, where the former is never differentiable and the latter infinitely differentiable.
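The contrast between the two extreme cases is easy to see numerically: sample paths drawn with the Ornstein–Uhlenbeck covariance $\exp(-|d|/\ell)$ are jagged, while paths drawn with the squared exponential $\exp(-d^2/2\ell^2)$ are very smooth. The self-contained Python sketch below draws one path from each; the length scale and the crude "roughness" measure are illustrative assumptions.

```python
import numpy as np

def draw_path(ts, cov_fn, seed=0, jitter=1e-8):
    """One zero-mean Gaussian-process path on the grid ts for a given covariance."""
    rng = np.random.default_rng(seed)
    K = cov_fn(ts[:, None], ts[None, :]) + jitter * np.eye(len(ts))
    return rng.multivariate_normal(np.zeros(len(ts)), K)

ts = np.linspace(0.0, 5.0, 400)
ell = 0.5                                                              # length scale (illustrative)
ou = draw_path(ts, lambda s, t: np.exp(-np.abs(s - t) / ell))          # rough, nowhere differentiable
se = draw_path(ts, lambda s, t: np.exp(-(s - t) ** 2 / (2 * ell**2)))  # very smooth

# A crude roughness measure: mean absolute increment between neighbouring grid points.
print("OU roughness:", np.mean(np.abs(np.diff(ou))))
print("SE roughness:", np.mean(np.abs(np.diff(se))))
```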
Periodicity refers to inducing periodic patterns within the behaviour of the process. Formally, this is achieved by mapping the input $x$ to a two dimensional vector $u(x) = (\cos(x), \sin(x))$.
Usual covariance functions
There are a number of common covariance functions (writing $d = x - x'$ for the separation of the inputs, $\ell$ for a characteristic length-scale, $\sigma^2$ for a noise variance, $\nu > 0$ for a smoothness parameter, $K_\nu$ for the modified Bessel function of the second kind and $\Gamma$ for the gamma function); a short implementation sketch follows the list:
- Constant: $K_\text{C}(x, x') = C$
- Linear: $K_\text{L}(x, x') = x^{\mathsf T} x'$
- White Gaussian noise: $K_\text{GN}(x, x') = \sigma^2 \delta_{x, x'}$
- Squared exponential: $K_\text{SE}(x, x') = \exp\!\left(-\frac{\|d\|^2}{2\ell^2}\right)$
- Ornstein–Uhlenbeck: $K_\text{OU}(x, x') = \exp\!\left(-\frac{\|d\|}{\ell}\right)$
- Matérn: $K_\text{Matern}(x, x') = \frac{2^{1-\nu}}{\Gamma(\nu)} \left(\frac{\sqrt{2\nu}\,\|d\|}{\ell}\right)^{\!\nu} K_\nu\!\left(\frac{\sqrt{2\nu}\,\|d\|}{\ell}\right)$
- Periodic: $K_\text{P}(x, x') = \exp\!\left(-\frac{2\sin^2(d/2)}{\ell^2}\right)$
- Rational quadratic: $K_\text{RQ}(x, x') = \left(1 + \|d\|^2\right)^{-\alpha}, \quad \alpha \ge 0$
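As a rough illustration of how these expressions are used in practice, the sketch below implements a few of the kernels above as plain Python functions of one-dimensional inputs (the Matérn case uses SciPy's modified Bessel function `kv`); the default parameter values are arbitrary, not recommendations.

```python
import numpy as np
from scipy.special import gamma, kv  # gamma function and modified Bessel K_nu

def k_squared_exponential(x, xp, ell=1.0):
    d = x - xp
    return np.exp(-d**2 / (2.0 * ell**2))

def k_ornstein_uhlenbeck(x, xp, ell=1.0):
    return np.exp(-np.abs(x - xp) / ell)

def k_matern(x, xp, ell=1.0, nu=1.5):
    d = np.abs(x - xp)
    d = np.where(d == 0.0, 1e-12, d)  # avoid an indeterminate 0 * inf at zero separation
    z = np.sqrt(2.0 * nu) * d / ell
    return (2.0 ** (1.0 - nu) / gamma(nu)) * z**nu * kv(nu, z)

def k_periodic(x, xp, ell=1.0):
    d = x - xp
    return np.exp(-2.0 * np.sin(d / 2.0) ** 2 / ell**2)

def k_rational_quadratic(x, xp, alpha=1.0):
    return (1.0 + (x - xp) ** 2) ** (-alpha)

# Example: evaluate each kernel at a fixed pair of points.
x, xp = 0.3, 1.1
for name, k in [("SE", k_squared_exponential), ("OU", k_ornstein_uhlenbeck),
                ("Matern", k_matern), ("Periodic", k_periodic),
                ("RQ", k_rational_quadratic)]:
    print(name, float(k(x, xp)))
```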
The inferential results are dependent on the values of the hyperparameters $\theta$ (e.g. $\ell$ and $\sigma$) defining the model's behaviour. A popular choice for $\theta$ is to provide maximum a posteriori (MAP) estimates of it with some chosen prior. If the prior is very near uniform, this is the same as maximizing the marginal likelihood of the process; the marginalization being done over the observed process values $y$. This approach is also known as maximum likelihood II, evidence maximization, or empirical Bayes.
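To make the empirical-Bayes idea concrete, the sketch below fits the length-scale and noise level of a squared-exponential kernel by maximizing the Gaussian-process log marginal likelihood $\log p(y \mid \theta) = -\tfrac12 y^{\mathsf T} K_\theta^{-1} y - \tfrac12 \log\lvert K_\theta \rvert - \tfrac{n}{2} \log 2\pi$ with SciPy's optimizer. The toy data, the kernel choice and the optimizer settings are illustrative assumptions, not the only (or best) way to do this.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 5.0, 30))             # toy 1-D inputs (assumption)
y = np.sin(X) + 0.1 * rng.standard_normal(X.size)  # toy noisy observations

def neg_log_marginal_likelihood(log_params):
    ell, sigma_n = np.exp(log_params)               # optimize in log space to keep values positive
    K = np.exp(-(X[:, None] - X[None, :]) ** 2 / (2 * ell**2)) + sigma_n**2 * np.eye(X.size)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # log p(y) = -1/2 y^T K^{-1} y - 1/2 log|K| - n/2 log(2 pi)
    log_ml = (-0.5 * y @ alpha
              - np.sum(np.log(np.diag(L)))
              - 0.5 * X.size * np.log(2 * np.pi))
    return -log_ml

res = minimize(neg_log_marginal_likelihood, x0=np.log([1.0, 0.5]), method="L-BFGS-B")
ell_hat, sigma_n_hat = np.exp(res.x)
print(f"fitted length-scale {ell_hat:.3f}, noise std {sigma_n_hat:.3f}")
```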
Continuity
For a Gaussian process, continuity in probability is equivalent to mean-square continuity, and continuity with probability one is equivalent to sample continuity.
The latter implies, but is not implied by, continuity in probability.
Continuity in probability holds if and only if the mean and autocovariance are continuous functions. In contrast, sample continuity was challenging even for stationary Gaussian processes, and more challenging for more general processes.
As usual, by a sample continuous process one means a process that admits a sample continuous modification.
Stationary case
For a stationary Gaussian process $X = (X_t)_{t \in \mathbb{R}}$, some conditions on its spectrum are sufficient for sample continuity, but fail to be necessary. A necessary and sufficient condition, sometimes called the Dudley–Fernique theorem, involves the function $\sigma$ defined by
$$\sigma(h) = \sqrt{\operatorname{E}\big[(X(t+h) - X(t))^2\big]}$$
(the right-hand side does not depend on $t$ by stationarity). Continuity of $X$ in probability is equivalent to continuity of $\sigma$ at $0$. When convergence of $\sigma(h)$ to $0$ (as $h \to 0$) is too slow, sample continuity of $X$ may fail. Convergence of the following integrals matters:
$$I(\sigma) = \int_0^1 \frac{\sigma(h)}{h \sqrt{\log(1/h)}}\, dh = \int_0^\infty 2\, \sigma\!\left(e^{-x^2}\right) dx,$$
these two integrals being equal according to integration by substitution $h = e^{-x^2}$, $x = \sqrt{\log(1/h)}$. The first integrand need not be bounded as $h \to 0^+$, thus the integral may converge ($I(\sigma) < \infty$) or diverge ($I(\sigma) = \infty$). Taking for example $\sigma\!\left(e^{-x^2}\right) = x^{-a}$ for large $x$ (that is, $\sigma(h) = \big(\log\tfrac{1}{h}\big)^{-a/2}$ for small $h$), one obtains $I(\sigma) < \infty$ when $a > 1$, and $I(\sigma) = \infty$ when $0 < a \le 1$.
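For reference, the equality of the two integrals above is just the change of variables $h = e^{-x^2}$ (equivalently $x = \sqrt{\log(1/h)}$, $dh = -2x\, e^{-x^2}\, dx$) written out; this is a short verification, not an additional statement of the theorem:
$$\int_0^1 \frac{\sigma(h)}{h\sqrt{\log(1/h)}}\, dh = \int_{\infty}^{0} \frac{\sigma\!\left(e^{-x^2}\right)}{e^{-x^2}\, x}\,\big(-2x\, e^{-x^2}\big)\, dx = \int_0^\infty 2\, \sigma\!\left(e^{-x^2}\right) dx.$$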
In these two example cases ($a > 1$ and $0 < a \le 1$) the function $\sigma$ is increasing on $[0, \infty)$, but generally it is not. Moreover, the condition
$$(*)\qquad \text{there exists } \varepsilon > 0 \text{ such that } \sigma \text{ is monotone on } [0, \varepsilon]$$
does not follow from continuity of $\sigma$ and the evident relations $\sigma(h) \ge 0$ (for all $h$) and $\sigma(0) = 0$.
Some history.
Sufficiency was announced by Xavier Fernique in 1964, but the first proof was published by Richard M. Dudley in 1967.
Necessity was proved by Michael B. Marcus and Lawrence Shepp in 1970.
There exist sample continuous processes $X$ such that they violate condition $(*)$. An example found by Marcus and Shepp is a random lacunary Fourier series
$$X_t = \sum_{n=1}^\infty c_n \big(\xi_n \cos \lambda_n t + \eta_n \sin \lambda_n t\big),$$
where $\xi_n, \eta_n$ are independent random variables with standard normal distribution; frequencies $\lambda_n$ are a fast growing sequence; and coefficients $c_n > 0$ satisfy $\sum_n c_n < \infty$. The latter relation implies
$$\operatorname{E}\Big[\sum_n c_n \big(|\xi_n| + |\eta_n|\big)\Big] = \sum_n c_n \operatorname{E}\big[|\xi_n| + |\eta_n|\big] = \mathrm{const} \cdot \sum_n c_n < \infty,$$
whence $\sum_n c_n \big(|\xi_n| + |\eta_n|\big) < \infty$ almost surely, which ensures uniform convergence of the Fourier series almost surely, and sample continuity of $X$.
Its autocovariance function
$$\operatorname{E}[X_t X_{t+h}] = \sum_{n=1}^\infty c_n^2 \cos(\lambda_n h)$$
is nowhere monotone, as well as the corresponding function $\sigma$,
$$\sigma(h) = \sqrt{2 \sum_{n=1}^\infty c_n^2 \big(1 - \cos(\lambda_n h)\big)}.$$
Brownian motion as the integral of Gaussian processes
A Wiener process (also known as Brownian motion) is the integral of a white noise generalized Gaussian process. It is not stationary, but it has stationary increments. The Ornstein–Uhlenbeck process is a stationary Gaussian process.
The Brownian bridge is an example of a Gaussian process whose increments are not independent.
The fractional Brownian motion is a Gaussian process whose covariance function is a generalisation of that of the Wiener process.
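To illustrate the first statement numerically, the sketch below builds discrete approximations to Wiener-process paths by summing (integrating) white Gaussian noise, then checks two properties mentioned above: $\operatorname{var}[W_t]$ grows like $t$ (so the process is not stationary), while increments over a fixed lag have a variance that does not depend on where the lag starts (stationary increments). The grid size and lag are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, dt = 5_000, 1_000, 1e-3

# Discretized "integral of white noise": W_{k dt} is a sum of independent N(0, dt) increments.
increments = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
W = np.cumsum(increments, axis=1)

print("Var[W_t] vs t:")
for t in (0.25, 0.5, 1.0):
    k = round(t / dt) - 1
    print(f"  t={t}: empirical {W[:, k].var():.4f}, theoretical {t}")

# Stationary increments: Var[W_{s+h} - W_s] = h regardless of the starting time s.
h_steps = 200                                  # lag h = 0.2
for s_steps in (0, 300, 700):
    inc = W[:, s_steps + h_steps - 1] - (W[:, s_steps - 1] if s_steps > 0 else 0.0)
    print(f"  Var of increment starting at s={s_steps * dt:.1f}: {inc.var():.4f} (theory {h_steps * dt:.1f})")
```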
RKHS structure and Gaussian process
Let $X$ be a mean-zero Gaussian process $\{X_t ; t \in T\}$ with a non-negative definite covariance function $K$, and let $R$ be a symmetric and positive semidefinite function. Then, there exists a Gaussian process $Y$ which has the covariance $R$. Moreover, the reproducing kernel Hilbert space (RKHS) associated to $R$ coincides with the Cameron–Martin space associated to $Y$, and these spaces are isometric. From now on, let $\mathcal{H}(R)$ be a reproducing kernel Hilbert space with positive definite kernel $R$.

Driscoll's zero-one law is a result characterizing the sample functions generated by a Gaussian process:
$$\lim_{n \to \infty} \operatorname{tr}\!\left[K_n R_n^{-1}\right] < \infty,$$
where $K_n$ and $R_n$ are the covariance matrices of all possible pairs of $n$ points, implies
$$\Pr[f \in \mathcal{H}(R)] = 1.$$
Moreover,
$$\lim_{n \to \infty} \operatorname{tr}\!\left[K_n R_n^{-1}\right] = \infty$$
implies
$$\Pr[f \in \mathcal{H}(R)] = 0.$$
This has significant implications when $K = R$, as
$$\lim_{n \to \infty} \operatorname{tr}\!\left[K_n K_n^{-1}\right] = \lim_{n \to \infty} n = \infty.$$
As such, almost all sample paths of a mean-zero Gaussian process with positive definite kernel $K$ will lie outside of the Hilbert space $\mathcal{H}(K)$.
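One way to build numerical intuition for this (a heuristic check, not part of Driscoll's result itself) is to draw a sample $f$ from a mean-zero Gaussian process with kernel $K$ on $n$ grid points and evaluate the quadratic form $f^{\mathsf T} K_n^{-1} f$, the squared RKHS norm of the minimum-norm interpolant of those values; its expected value is exactly $n$, so it diverges as the grid is refined, consistent with sample paths lying outside $\mathcal{H}(K)$. The kernel and grid below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
se = lambda s, t: np.exp(-(s - t) ** 2 / 2.0)   # squared-exponential kernel (illustrative)

for n in (10, 50, 200):
    ts = np.linspace(0.0, 1.0, n)
    K = se(ts[:, None], ts[None, :]) + 1e-6 * np.eye(n)   # jitter for numerical stability
    f = rng.multivariate_normal(np.zeros(n), K)
    # Squared RKHS norm of the minimum-norm interpolant of f at the n grid points.
    q = f @ np.linalg.solve(K, f)
    print(f"n={n:4d}: f^T K_n^(-1) f = {q:8.1f} (expected value n = {n})")
```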
Linearly constrained Gaussian processes
For many applications of interest some pre-existing knowledge about the system at hand is already given. Consider e.g. the case where the output of the Gaussian process corresponds to a magnetic field; here, the real magnetic field is bound by Maxwell's equations, and a way to incorporate this constraint into the Gaussian process formalism would be desirable, as this would likely improve the accuracy of the algorithm. A method for incorporating linear constraints into Gaussian processes already exists:
Consider the (vector valued) output function $f(x)$ which is known to obey the linear constraint (i.e. $\mathcal{F}_X$ is a linear operator)
$$\mathcal{F}_X\big(f(x)\big) = 0.$$
Then the constraint can be fulfilled by choosing $f(x) = \mathcal{G}_X\big(g(x)\big)$, where $g(x) \sim \mathcal{GP}(\mu_g, K_g)$ is modelled as a Gaussian process, and finding $\mathcal{G}_X$ such that
$$\mathcal{F}_X\big(\mathcal{G}_X(g)\big) = 0 \qquad \text{for all } g.$$
Given $\mathcal{G}_X$ and using the fact that Gaussian processes are closed under linear transformations, the Gaussian process for $f$ obeying the constraint $\mathcal{F}_X$ becomes
$$f(x) = \mathcal{G}_X g \sim \mathcal{GP}\big(\mathcal{G}_X \mu_g,\; \mathcal{G}_X K_g \mathcal{G}_{X'}^{\mathsf T}\big).$$
Hence, linear constraints can be encoded into the mean and covariance function of a Gaussian process.
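The key ingredient above is closure under linear transformations: if $g \sim \mathcal{GP}(\mu_g, K_g)$ and $A$ is a linear operator, then $Ag \sim \mathcal{GP}(A\mu_g, A K_g A^{\mathsf T})$. The finite-dimensional Python sketch below illustrates this with a toy constraint chosen purely for illustration (the two output channels must sum to zero), enforced by taking $f = G g$ with $G$ built so that the constraint holds for every $g$; the specific matrices are assumptions for the demo, not taken from the original method papers.

```python
import numpy as np

rng = np.random.default_rng(0)
ts = np.linspace(0.0, 1.0, 50)
Kg = np.exp(-(ts[:, None] - ts[None, :]) ** 2 / (2 * 0.2**2))  # latent kernel K_g (illustrative)

# Toy linear constraint F f = f_1 + f_2 = 0 on a two-channel output f = (f_1, f_2).
# Choosing G = [1, -1]^T gives F G = 0, so f = G g satisfies the constraint for any g.
G = np.array([[1.0], [-1.0]])

# Closure under linear maps: the stacked output [f_1(ts); f_2(ts)] = kron(G, I) g
# has mean kron(G, I) mu_g = 0 and covariance kron(G, I) K_g kron(G, I)^T = kron(G G^T, K_g).
Kf = np.kron(G @ G.T, Kg)
f = rng.multivariate_normal(np.zeros(2 * ts.size), Kf + 1e-9 * np.eye(2 * ts.size))
f1, f2 = f[:ts.size], f[ts.size:]
print("max |f1 + f2|:", np.max(np.abs(f1 + f2)))  # tiny, limited only by the numerical jitter
```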