Error correction model
An error correction model (ECM) is a type of time series model commonly applied when the underlying variables share a long-run stochastic trend, a property known as cointegration. ECMs provide a theoretically grounded framework for estimating both short-run dynamics and long-run relationships among variables.
The term error correction refers to the idea that deviations from the long-run equilibrium affect short-run adjustments. In this framework, the model directly estimates the speed at which a dependent variable returns to equilibrium following changes in other explanatory variables.
History
Yule (1926) and Granger and Newbold (1974) were the first to draw attention to the problem of spurious correlation and to propose ways of addressing it in time series analysis. Given two completely unrelated but integrated (non-stationary) time series, a regression of one on the other will tend to produce an apparently statistically significant relationship, and a researcher might thus falsely believe to have found evidence of a true relationship between these variables. Ordinary least squares will no longer be consistent and commonly used test statistics will be invalid. In particular, Monte Carlo simulations show that one will get a very high R squared, a very high individual t-statistic and a low Durbin–Watson statistic. Technically speaking, Phillips (1986) proved that parameter estimates will not converge in probability, the intercept will diverge and the slope will have a non-degenerate distribution as the sample size increases. However, there might be a common stochastic trend to both series that a researcher is genuinely interested in because it reflects a long-run relationship between these variables.

Because of the stochastic nature of the trend, it is not possible to break up integrated series into a deterministic (predictable) trend and a stationary series containing deviations from trend. Even in deterministically detrended random walks spurious correlations will eventually emerge. Thus detrending does not solve the estimation problem.
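The spurious regression phenomenon is easy to reproduce. A minimal Monte Carlo sketch in Python (using numpy and statsmodels; the sample size and replication count are arbitrary illustrative choices) regresses one independent random walk on another and records how often the slope appears significant:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, reps = 200, 1000
rejections = 0
for _ in range(reps):
    # two independent random walks: there is no true relationship
    x = np.cumsum(rng.standard_normal(n))
    y = np.cumsum(rng.standard_normal(n))
    res = sm.OLS(y, sm.add_constant(x)).fit()
    rejections += res.pvalues[1] < 0.05
# far above the nominal 5% level, illustrating spurious regression
print(f"share of 'significant' slopes: {rejections / reps:.2f}")
```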
In order to still use the Box–Jenkins approach, one could difference the series and then estimate models such as ARIMA, given that many commonly used time series appear to be stationary in first differences. Forecasts from such a model will still reflect cycles and seasonality that are present in the data. However, any information about long-run adjustments that the data in levels may contain is omitted and longer term forecasts will be unreliable.
This led Sargan (1964) to develop the ECM methodology, which retains the level information.
Estimation
Several methods are known in the literature for estimating a refined dynamic model as described above. Among these are the Engle and Granger 2-step approach, estimating their ECM in one step, and the vector-based VECM using Johansen's method.
Engle and Granger 2-step approach
The first step of this method is to pretest the individual time series used, to confirm that they are non-stationary in the first place. This can be done by standard unit root testing, i.e. the Dickey–Fuller (DF) test and the augmented Dickey–Fuller (ADF) test (the latter to account for serial correlation). Take the case of two different series $x_t$ and $y_t$. If both are I(0), standard regression analysis will be valid. If they are integrated of a different order, e.g. one being I(1) and the other being I(0), the model has to be transformed.
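As a sketch of this pretesting step, assuming two hypothetical simulated I(1) series, the ADF test from statsmodels should fail to reject a unit root in the levels but reject it in the first differences:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# hypothetical I(1) series, cointegrated by construction
rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(500))
y = 0.8 * x + rng.standard_normal(500)

for name, series in [("x", x), ("y", y)]:
    _, p_level, *_ = adfuller(series)          # levels: expect non-rejection
    _, p_diff, *_ = adfuller(np.diff(series))  # differences: expect rejection
    print(f"{name}: p(levels) = {p_level:.3f}, p(differences) = {p_diff:.3f}")
```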
If they are both integrated to the same order (commonly I(1)), we can estimate an ECM model of the form

$$A(L)\,\Delta y_t = \gamma + B(L)\,\Delta x_t + \alpha (y_{t-1} - \beta_0 - \beta_1 x_{t-1}) + \nu_t.$$
If both variables are integrated and this ECM exists, they are cointegrated by the Engle–Granger representation theorem.
The second step is then to estimate the model using ordinary least squares:

$$y_t = \beta_0 + \beta_1 x_t + \varepsilon_t.$$
If the regression is not spurious as determined by the test criteria described above, ordinary least squares will not only be valid, but in fact super-consistent (Stock, 1987).
Then the predicted residuals $\hat{\varepsilon}_t = y_t - \beta_0 - \beta_1 x_t$ from this regression are saved and used in a regression of differenced variables plus a lagged error term:

$$A(L)\,\Delta y_t = \gamma + B(L)\,\Delta x_t + \alpha \hat{\varepsilon}_{t-1} + \nu_t.$$
We can then test for cointegration using a standard t-statistic on $\alpha$.
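A minimal sketch of the two-step procedure, continuing with the hypothetical simulated series from the pretesting example above (variable names are illustrative):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(500))  # I(1) regressor
y = 0.8 * x + rng.standard_normal(500)   # cointegrated with x by construction

# Step 1: long-run levels regression y_t = b0 + b1*x_t + e_t
levels = sm.OLS(y, sm.add_constant(x)).fit()
resid = levels.resid                      # estimated equilibrium errors

# Step 2: regression in differences with the lagged residual as error correction term
dy, dx = np.diff(y), np.diff(x)
exog = sm.add_constant(np.column_stack([dx, resid[:-1]]))
ecm = sm.OLS(dy, exog).fit()
print(f"alpha = {ecm.params[2]:.3f}, t = {ecm.tvalues[2]:.2f}")
# caveat: under the null of no cointegration this t-statistic does not follow
# a standard distribution, so Engle-Granger critical values must be used
```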
While this approach is easy to apply, there are numerous problems:
- The univariate unit root tests used in the first stage have low statistical power
- The choice of dependent variable in the first stage influences test results; that is, we need $x_t$ to be weakly exogenous
- We can potentially have a small sample bias
- The cointegration test on $\alpha$ does not follow a standard distribution
- The validity of the long-run parameters in the first regression stage where we obtain the residuals cannot be verified because the distribution of the OLS estimator of the cointegrating vector is highly complicated and non-normal
- At most one cointegrating relationship can be examined.
VECM
The weaknesses of the Engle–Granger approach can be addressed through Johansen's procedure: no pretesting is required, there can be numerous cointegrating relationships, and all variables are treated as endogenous. The resulting model is known as a vector error correction model (VECM), which adds error correction features to a vector autoregression (VAR). The procedure runs in three steps (a code sketch follows the list):
- Step 1: Estimate an unrestricted VAR involving potentially non-stationary variables
- Step 2: Test for cointegration using the Johansen test
- Step 3: Form and analyse the VECM.
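A minimal sketch of these three steps, assuming statsmodels and two hypothetical simulated cointegrated series (lag order, deterministic terms and rank are illustrative choices):

```python
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

# hypothetical cointegrated I(1) series
rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(500))
data = np.column_stack([0.8 * x + rng.standard_normal(500), x])

# Step 1: unrestricted VAR in levels to choose the lag order
lag_order = VAR(data).fit(maxlags=8, ic="aic").k_ar

# Step 2: Johansen trace test for the cointegration rank
joh = coint_johansen(data, det_order=0, k_ar_diff=max(lag_order - 1, 0))
print("trace statistics:", joh.lr1)  # compare with critical values in joh.cvt

# Step 3: estimate and analyse the VECM at the chosen rank (here assumed 1)
res = VECM(data, k_ar_diff=max(lag_order - 1, 0), coint_rank=1).fit()
print("adjustment coefficients alpha:", res.alpha.ravel())
print("cointegrating vector beta:", res.beta.ravel())
```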
An example of ECM
Suppose consumption $C_t$ and disposable income $Y_t$ are macroeconomic time series that are related in the long run, with a long-run average propensity to consume of 90%, i.e. $C_t = 0.9 Y_t$ in equilibrium. In this setting a change in consumption level, $\Delta C_t = C_t - C_{t-1}$, can be modelled as

$$\Delta C_t = 0.5\,\Delta Y_t - 0.2\,(C_{t-1} - 0.9 Y_{t-1}) + \varepsilon_t.$$

The first term on the RHS describes the short-run impact of a change in $Y_t$ on $C_t$, the second term explains the long-run gravitation towards the equilibrium relationship between the variables, and the third term reflects random shocks that the system receives. To see how the model works, consider two kinds of shocks: permanent and transitory. For simplicity, let $\varepsilon_t$ be zero for all $t$. Suppose in period $t-1$ the system is in equilibrium, i.e. $C_{t-1} = 0.9 Y_{t-1}$. Suppose that in period $t$, disposable income $Y_t$ increases by 10 and then returns to its previous level. Then $C_t$ first increases by 5 (half of 10), but after the second period $C_t$ begins to decrease and converges to its initial level. In contrast, if the shock to $Y_t$ is permanent, then $C_t$ slowly converges to a value that exceeds the initial $C_{t-1}$ by 9.
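The adjustment path can be traced numerically. A short simulation sketch of the consumption equation above (the income level of 100 and starting consumption of 90 are illustrative values consistent with the 0.9 equilibrium):

```python
import numpy as np

# dC_t = 0.5*dY_t - 0.2*(C_{t-1} - 0.9*Y_{t-1}), with the shock term set to zero
def simulate(income, c0=90.0):
    c = [c0]
    for t in range(1, len(income)):
        dc = 0.5 * (income[t] - income[t - 1]) - 0.2 * (c[-1] - 0.9 * income[t - 1])
        c.append(c[-1] + dc)
    return np.array(c)

T = 40
base = np.full(T, 100.0)                        # equilibrium: C = 0.9 * 100 = 90
transitory = base.copy(); transitory[1] = 110.0 # income up by 10 for one period
permanent = base.copy(); permanent[1:] = 110.0  # income up by 10 permanently

print(simulate(transitory)[:5].round(2))  # [90., 95., 90.8, 90.64, ...] decays back to 90
print(simulate(permanent)[-1].round(2))   # approaches 99 = 90 + 9
```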
This structure is common to all ECM models. In practice, econometricians often first estimate the cointegration relationship (the equation in levels), and then insert it into the main model (the equation in differences).