Covariance function
In probability theory and statistics, the covariance function describes how much two random variables change together with varying spatial or temporal separation. For a random field or stochastic process Z on a domain D, a covariance function C gives the covariance of the values of the random field at the two locations x and y:

C(x, y) = cov(Z(x), Z(y)) = E[(Z(x) − E[Z(x)])(Z(y) − E[Z(y)])].
The same C(x, y) is called the autocovariance function in two instances: in time series (to denote exactly the same concept, with x and y referring to locations in time rather than in space), and in multivariate random fields (to refer to the covariance of a variable with itself, as opposed to the cross-covariance between two different variables at different locations, cov(Z(x), Y(y))).
Admissibility
For locations x1, x2, ..., xN ∈ D, the variance of every linear combination

X = ∑i wi Z(xi)

can be computed as

var(X) = ∑i ∑j wi wj C(xi, xj).
A function is a valid covariance function if and only if this variance is non-negative for all possible choices of N and weights w1, ..., wN. A function with this property is called positive semidefinite.
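The admissibility condition can be checked numerically. The sketch below (the exponential covariance function, locations, and weights are illustrative choices) builds the covariance matrix at a few locations and verifies that every linear combination has non-negative variance, which is equivalent to the matrix being positive semidefinite:

```python
import numpy as np

# Sketch: numerical admissibility check for a known-valid covariance
# function. The exponential form, locations, and weights are arbitrary
# illustrative choices, not a canonical test.

def cov_exponential(x, y, V=1.0):
    """Exponential covariance function C(x, y) = exp(-|x - y| / V)."""
    return np.exp(-np.abs(x - y) / V)

locations = np.array([0.0, 0.5, 1.3, 2.0, 4.1])
C = cov_exponential(locations[:, None], locations[None, :])

# var(sum_i w_i Z(x_i)) = sum_i sum_j w_i w_j C(x_i, x_j) = w^T C w
rng = np.random.default_rng(0)
w = rng.normal(size=len(locations))
variance = w @ C @ w
print(variance >= 0.0)  # True

# Equivalently, C is positive semidefinite: all eigenvalues non-negative.
eigenvalues = np.linalg.eigvalsh(C)
print(eigenvalues.min() >= -1e-12)  # True
```

Checking eigenvalues of the matrix at sampled locations can refute a candidate function but never proves validity for all N and all locations; that requires an argument such as Bochner's theorem below.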
Simplifications with stationarity
In the case of a weakly stationary random field, where

cov(Z(x), Z(y)) = cov(Z(x + h), Z(y + h))

for any lag h, the covariance function can be represented by a one-parameter function

Cs(h) = C(x, x + h),

which is called a covariogram and also a covariance function. Implicitly, C(xi, xj) can be computed from Cs by

C(x, y) = Cs(y − x).
The positive definiteness of this single-argument version of the covariance function can be checked by Bochner's theorem.
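Bochner's theorem states that a continuous stationary function Cs is a valid covariance function if and only if its Fourier transform is non-negative. A rough numerical version of this check can be sketched with the FFT; the grid size and truncation length below are illustrative choices, and the approximation can only flag clear violations, not certify validity:

```python
import numpy as np

# Sketch of a Bochner-style check: approximate the Fourier transform of a
# covariogram Cs(h) on a symmetric grid and inspect its sign. Grid
# parameters (h_max, n) are arbitrary illustrative choices.

def approx_spectrum(cs, h_max=40.0, n=4096):
    """Discrete approximation to the Fourier transform of Cs."""
    h = np.linspace(-h_max, h_max, n, endpoint=False)
    dh = h[1] - h[0]
    # ifftshift places h = 0 at index 0, as the FFT convention expects.
    return np.fft.fftshift(np.fft.fft(np.fft.ifftshift(cs(h)))).real * dh

# Both covariograms below are valid, so their spectra are non-negative
# (up to discretization and round-off error).
spec_exp = approx_spectrum(lambda h: np.exp(-np.abs(h)))  # exponential
spec_sq = approx_spectrum(lambda h: np.exp(-h**2))        # squared exponential
print(spec_exp.min() > -1e-6, spec_sq.min() > -1e-6)  # True True
```

For these two examples the transforms are known in closed form (a Cauchy-shaped density and a Gaussian, respectively), both strictly positive, which is consistent with the numerical result.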
Parametric families of covariance functions
For a given variance σ², a simple stationary parametric covariance function is the "exponential covariance function"

C(d) = σ² exp(−d/V),

where V is a scaling parameter and d = d(x, y) is the distance between two points. Sample paths of a Gaussian process with the exponential covariance function are not smooth. The "squared exponential" covariance function

C(d) = σ² exp(−(d/V)²)
is a stationary covariance function with smooth sample paths.
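The contrast in sample-path smoothness can be illustrated by drawing paths from zero-mean Gaussian processes with each covariance function. In this sketch V = 1 and σ² = 1 are arbitrary choices, the same Gaussian draw is reused for both kernels, and a small jitter term is added only to stabilize the Cholesky factorization:

```python
import numpy as np

# Sketch: sample paths under the exponential vs. squared exponential
# covariance functions (unit variance, V = 1; all values illustrative).

x = np.linspace(0.0, 5.0, 200)
d = np.abs(x[:, None] - x[None, :])  # pairwise distances

C_exp = np.exp(-d)      # exponential covariance: rough paths
C_sq = np.exp(-d**2)    # squared exponential covariance: smooth paths

rng = np.random.default_rng(42)
z = rng.normal(size=len(x))
jitter = 1e-8 * np.eye(len(x))  # numerical stabilizer for Cholesky

path_exp = np.linalg.cholesky(C_exp + jitter) @ z
path_sq = np.linalg.cholesky(C_sq + jitter) @ z

# The mean absolute increment is much larger for the exponential kernel,
# reflecting its non-smooth (non-differentiable) sample paths.
print(np.abs(np.diff(path_exp)).mean() > np.abs(np.diff(path_sq)).mean())
```

Plotting `path_exp` and `path_sq` against `x` makes the difference visually obvious: the exponential path is jagged at every scale, while the squared exponential path is infinitely differentiable.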
The Matérn covariance function and rational quadratic covariance function are two parametric families of stationary covariance functions. The Matérn family includes the exponential and squared exponential covariance functions as special cases.
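The Matérn family can be sketched directly in its common parameterization with smoothness ν and length scale ρ (both names are conventional choices here, not fixed by the text). Setting ν = 1/2 recovers the exponential covariance function, which the snippet verifies numerically:

```python
import numpy as np
from scipy.special import gamma, kv  # modified Bessel function of 2nd kind

# Sketch of the Matérn covariance function (unit variance) in the common
# (nu, rho) parameterization; parameter names are conventional choices.

def matern(d, nu=1.5, rho=1.0):
    d = np.asarray(d, dtype=float)
    scaled = np.sqrt(2.0 * nu) * d / rho
    with np.errstate(invalid="ignore"):
        value = (2.0 ** (1.0 - nu) / gamma(nu)) * scaled**nu * kv(nu, scaled)
    # The d -> 0 limit of the Matérn covariance is the variance, here 1.
    return np.where(d == 0.0, 1.0, value)

d = np.linspace(0.01, 5.0, 50)
# Special case nu = 1/2: Matérn reduces to the exponential covariance.
print(np.allclose(matern(d, nu=0.5), np.exp(-d)))  # True
```

In the same spirit, ν → ∞ recovers the squared exponential covariance function, consistent with the statement that the Matérn family contains both as special cases.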