Surrogate data testing
Surrogate data testing is a statistical proof-by-contradiction technique, similar to permutation tests and parametric bootstrapping, used to detect non-linearity in a time series. The technique involves specifying a null hypothesis describing a linear process, generating several surrogate data sets consistent with that hypothesis using Monte Carlo methods, and computing a discriminating statistic for the original time series and for each surrogate set. If the value of the statistic for the original series is significantly different from the values for the surrogate sets, the null hypothesis is rejected and non-linearity is assumed.
The particular surrogate data testing method to be used is determined by the null hypothesis, which is usually similar to the following:
The data are a realization of a stationary linear system, whose output has possibly been measured by a monotonically increasing, possibly nonlinear, static function. Here, linear means that each value depends linearly on its own past values, or on present and past values of some independent identically distributed (usually also Gaussian) process; this is equivalent to saying that the process is of ARMA type. In the case of flows (continuous-time systems), linearity of the system means that it can be expressed by a linear differential equation. In this hypothesis, the static measurement function is one which depends only on the present value of its argument, not on past ones.
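In practice, the procedure amounts to a Monte Carlo rank test. The following is a minimal sketch, assuming NumPy; the time-reversal-asymmetry statistic, the number of surrogates, and the generic make_surrogate callable (one of the generators described under Methods below) are illustrative assumptions rather than part of the method itself.

```python
import numpy as np

def time_reversal_asymmetry(x, lag=1):
    # A simple nonlinearity statistic: its expectation vanishes for
    # time-reversible processes, such as linear Gaussian ones.
    d = x[lag:] - x[:-lag]
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

def surrogate_test(x, make_surrogate, statistic=time_reversal_asymmetry,
                   n_surrogates=99, seed=0):
    # Compute the statistic on the original series and on each surrogate,
    # then report a two-sided rank-based p-value.
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    t0 = statistic(x)
    ts = np.array([statistic(make_surrogate(x, rng))
                   for _ in range(n_surrogates)])
    p = (np.sum(np.abs(ts) >= abs(t0)) + 1) / (n_surrogates + 1)
    return t0, ts, p
```

A small p-value indicates that the statistic of the original series is more extreme than almost all surrogate values, so the linear null hypothesis is rejected.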
Methods
Many algorithms to generate surrogate data have been proposed. They are usually classified in two groups:
- Typical realizations: data series are generated as outputs of a model fitted to the original data (see the sketch after this list).
- Constrained realizations: data series are created directly from the original data, generally by some suitable transformation of it.
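As an illustration of the typical-realization approach, the following minimal sketch (assuming NumPy; the autoregressive model class and the order p are illustrative choices, not prescribed by the method) fits an AR(p) model to the data by least squares and simulates a fresh realization from it.

```python
import numpy as np

def ar_typical_realization(x, p=2, rng=None):
    # Fit an AR(p) model to the mean-removed data by least squares,
    # then generate a new realization driven by fresh Gaussian innovations.
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    mu, n = x.mean(), len(x)
    xc = x - mu
    # Design matrix of lagged values: xc[t] ~ a1*xc[t-1] + ... + ap*xc[t-p]
    X = np.column_stack([xc[p - k:n - k] for k in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(X, xc[p:], rcond=None)
    sigma = (xc[p:] - X @ a).std()
    s = np.zeros(n)
    s[:p] = xc[:p]                       # seed the recursion with observed values
    for t in range(p, n):
        s[t] = s[t - p:t][::-1] @ a + sigma * rng.standard_normal()
    return s + mu
```

For use with the rank test sketched above, it can be wrapped as `lambda x, rng: ar_typical_realization(x, rng=rng)`.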
Among constrained-realization methods, the most widely used are the following (a sketch of all four appears after the list):
- Algorithm 0, or RS (random shuffle): new data are created simply by random permutations of the original series. This concept is also used in permutation tests. The permutations guarantee the same amplitude distribution as the original series, but destroy any temporal correlation that may have been present in the original data. This method is associated with the null hypothesis of the data being uncorrelated i.i.d. noise.
- Algorithm 1, or RP (random phases): in order to preserve the linear correlation (the power spectrum) of the series, surrogate data are created by taking the inverse Fourier transform of the moduli of the Fourier transform of the original data combined with new, randomly chosen phases. If the surrogates must be real-valued, the Fourier phases must be antisymmetric with respect to the central value of the data.
- Algorithm 2, or AAFT (amplitude adjusted Fourier transform): this method has, approximately, the advantages of the two previous ones: it tries to preserve both the linear structure and the amplitude distribution of the original series. It consists of these steps:
  * Rescaling the data to a Gaussian distribution.
  * Performing an RP transformation of the rescaled data.
  * Finally, inverting the rescaling of the first step.
  The drawback of this method is precisely that the last step somewhat alters the linear structure.
- Iterative algorithm 2, or IAAFT (iterative AAFT): this algorithm is an iterative version of AAFT. The steps are repeated until the autocorrelation function is sufficiently similar to that of the original series, or until there is no further change in the amplitudes.
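A minimal sketch of these four algorithms, assuming NumPy only, is given below. Using np.fft.rfft/np.fft.irfft and fixing the zero-frequency (and, for even length, Nyquist) phases keeps the surrogates real-valued, which enforces the antisymmetric-phase condition mentioned for Algorithm 1.

```python
import numpy as np

def rs_surrogate(x, rng):
    # Algorithm 0 (RS): random shuffle; keeps the amplitude distribution,
    # destroys all temporal correlation.
    return rng.permutation(x)

def rp_surrogate(x, rng):
    # Algorithm 1 (RP): keep the Fourier moduli, randomize the phases.
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    phases[0] = 0.0                       # keep the zero-frequency term real
    if len(x) % 2 == 0:
        phases[-1] = 0.0                  # keep the Nyquist term real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def aaft_surrogate(x, rng):
    # Algorithm 2 (AAFT): rescale to Gaussian, apply RP, invert the rescaling.
    x = np.asarray(x, dtype=float)
    ranks = np.argsort(np.argsort(x))
    y = np.sort(rng.standard_normal(len(x)))[ranks]   # step 1: Gaussianize
    y = rp_surrogate(y, rng)                          # step 2: RP transform
    return np.sort(x)[np.argsort(np.argsort(y))]      # step 3: invert step 1

def iaaft_surrogate(x, rng, max_iter=1000):
    # IAAFT: alternately impose the original power spectrum and the original
    # amplitude distribution until the rank ordering stops changing.
    x = np.asarray(x, dtype=float)
    moduli, sorted_x = np.abs(np.fft.rfft(x)), np.sort(x)
    s, prev = rng.permutation(x), None
    for _ in range(max_iter):
        S = np.fft.rfft(s)
        s = np.fft.irfft(moduli * np.exp(1j * np.angle(S)), n=len(x))
        order = np.argsort(np.argsort(s))
        s = sorted_x[order]               # re-impose the amplitude distribution
        if prev is not None and np.array_equal(order, prev):
            break                         # converged: amplitudes no longer change
        prev = order
    return s
```

Any of these generators can be passed as make_surrogate to the rank test sketched earlier; for example, `surrogate_test(x, iaaft_surrogate)` tests the data against the null hypothesis of a monotonically transformed linear Gaussian process.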
The above-mentioned techniques are called linear surrogate methods, because they are based on a linear process and address a linear null hypothesis. Broadly speaking, these methods are useful for data showing irregular fluctuations, and data with such behaviour abound in the real world. However, one often observes data with obvious periodicity, for example annual sunspot numbers or electrocardiogram recordings. Time series exhibiting such strong periodicities are clearly not consistent with the linear null hypotheses, and several algorithms and null hypotheses have been proposed to tackle this case.