Errors-in-variables model
In statistics, an errors-in-variables model or a measurement error model is a regression model that accounts for measurement errors in the independent variables. In contrast, standard regression models assume that those regressors have been measured exactly, or observed without error; as such, those models account only for errors in the dependent variables, or responses.
In the case when some regressors have been measured with errors, estimation based on the standard assumption leads to inconsistent estimates, meaning that the parameter estimates do not tend to the true values even in very large samples. For simple linear regression the effect is an underestimate of the coefficient, known as the attenuation bias. In non-linear models the direction of the bias is likely to be more complicated.
Motivating example
Consider a simple linear regression model of the form

$$ y_t = \alpha + \beta x_t^* + \varepsilon_t, \qquad t = 1, \ldots, T, $$

where $x_t^*$ denotes the true but unobserved regressor. Instead, we observe this value with an error:

$$ x_t = x_t^* + \eta_t, $$

where the measurement error $\eta_t$ is assumed to be independent of the true value $x_t^*$.
A practical application is the standard school science experiment for Hooke's law, in which one estimates the relationship between the weight added to a spring and the amount by which the spring stretches.
If the $y_t$'s are simply regressed on the $x_t$'s, then the estimator for the slope coefficient is

$$ \hat\beta_x = \frac{\tfrac{1}{T}\sum_{t=1}^{T}(x_t - \bar x)(y_t - \bar y)}{\tfrac{1}{T}\sum_{t=1}^{T}(x_t - \bar x)^2}, $$

which converges as the sample size $T$ increases without bound:

$$ \hat\beta_x \xrightarrow{p} \frac{\operatorname{Cov}[x_t, y_t]}{\operatorname{Var}[x_t]} = \frac{\beta \sigma^2_{x^*}}{\sigma^2_{x^*} + \sigma^2_\eta} = \frac{\beta}{1 + \sigma^2_\eta / \sigma^2_{x^*}}. $$

This is in contrast to the "true" effect of $\beta$, estimated using the $x_t^*$:

$$ \hat\beta \xrightarrow{p} \frac{\operatorname{Cov}[x_t^*, y_t]}{\operatorname{Var}[x_t^*]} = \beta. $$

Variances are non-negative, so that in the limit the estimated $\hat\beta_x$ is smaller than $\beta$, an effect which statisticians call attenuation or regression dilution. Thus the ‘naïve’ least squares estimator $\hat\beta_x$ is an inconsistent estimator for $\beta$. However, $\hat\beta_x$ is a consistent estimator of the parameter required for a best linear predictor of $y$ given the observed $x$: in some applications this may be what is required, rather than an estimate of the 'true' regression coefficient $\beta$, although that would assume that the variance of the errors in the estimation and prediction is identical. This follows directly from the result quoted immediately above, and the fact that the regression coefficient relating the $y_t$'s to the actually observed $x_t$'s, in a simple linear regression, is given by

$$ \beta_x = \frac{\operatorname{Cov}[x_t, y_t]}{\operatorname{Var}[x_t]}. $$

It is this coefficient, rather than $\beta$, that would be required for constructing a predictor of $y$ based on an observed $x$ which is subject to noise.
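The attenuation effect is easy to reproduce numerically. The following minimal Python sketch (the parameter values and error variances are arbitrary choices for illustration) compares the naive OLS slope on the noisy regressor with its theoretical probability limit:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000                       # large sample, so the estimate sits near its limit
alpha, beta = 2.0, 1.5            # true parameters (illustrative values)
sigma_x_star = 1.0                # sd of the latent regressor x*
sigma_eta = np.sqrt(0.5)          # sd of the measurement error eta
sigma_eps = 0.3                   # sd of the equation error epsilon

x_star = rng.normal(0.0, sigma_x_star, T)              # true but unobserved regressor
x = x_star + rng.normal(0.0, sigma_eta, T)             # observed, with classical error
y = alpha + beta * x_star + rng.normal(0.0, sigma_eps, T)

# Naive OLS slope of y on the noisy x
beta_naive = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
# Theoretical probability limit: beta / (1 + sigma_eta^2 / sigma_x*^2)
beta_limit = beta / (1.0 + sigma_eta**2 / sigma_x_star**2)

print(f"naive slope {beta_naive:.3f}  vs  plim {beta_limit:.3f}  vs  true {beta}")
# naive slope ~1.0, which matches the plim 1.0 but understates the true beta = 1.5
```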
It can be argued that almost all existing data sets contain errors of different nature and magnitude, so that attenuation bias is extremely frequent. Jerry Hausman sees this as an iron law of econometrics: "The magnitude of the estimate is usually smaller than expected."
Specification
Usually, measurement error models are described using the latent variables approach. If $y$ is the response variable and $x$ are observed values of the regressors, then it is assumed there exist some latent variables $y^*$ and $x^*$ which follow the model's "true" functional relationship $g(\cdot)$, and such that the observed quantities are their noisy observations:

$$ \begin{cases} y^* = g(x^*\!,\, w \mid \theta), \\ y = y^* + \varepsilon, \\ x = x^* + \eta, \end{cases} $$

where $\theta$ is the model's parameter and $w$ are those regressors which are assumed to be error-free. Depending on the specification these error-free regressors may or may not be treated separately; in the latter case it is simply assumed that the corresponding entries in the variance matrix of the $\eta$'s are zero.
The variables $y$, $x$, $w$ are all observed, meaning that the statistician possesses a data set of $n$ statistical units which follow the data generating process described above; the latent variables $x^*$, $y^*$, $\varepsilon$, and $\eta$ are not observed, however.
This specification does not encompass all the existing errors-in-variables models. For example, in some of them the function $g$ may be non-parametric or semi-parametric. Other approaches model the relationship between $y^*$ and $x^*$ as distributional instead of functional; that is, they assume that $y^*$ conditionally on $x^*$ follows a certain distribution.
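To make the specification concrete, here is a minimal Python sketch of the data generating process above, assuming (purely for illustration) a linear function $g$, a single error-free regressor $w$, and arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000

# Hypothetical "true" relationship y* = g(x*, w | theta) with theta = (b0, b1, b2);
# the linear form of g is an illustrative assumption, not part of the general model.
b0, b1, b2 = 1.0, 2.0, -0.5

x_star = rng.normal(0.0, 1.0, n)               # latent regressor (never observed)
w = rng.binomial(1, 0.4, n).astype(float)      # error-free regressor (observed exactly)
y_star = b0 + b1 * x_star + b2 * w             # latent response

x = x_star + rng.normal(0.0, 0.7, n)           # observed regressor: noisy version of x*
y = y_star + rng.normal(0.0, 0.3, n)           # observed response: noisy version of y*

# The statistician observes only (y, x, w); x*, y*, and both error draws stay latent.
dataset = np.column_stack([y, x, w])
```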
Terminology and assumptions
- The observed variable $x$ may be called the manifest, indicator, or proxy variable.
- The unobserved variable $x^*$ may be called the latent or true variable. It may be regarded either as an unknown constant, or as a random variable.
- The relationship between the measurement error and the latent variable can be modeled in different ways:
- * Classical errors: the errors are independent of the latent variable. This is the most common assumption; it implies that the errors are introduced by the measuring device and their magnitude does not depend on the value being measured.
- * Mean-independence: $\operatorname{E}[\eta \mid x^*] = 0$; the errors are mean-zero for every value of the latent regressor. This is a less restrictive assumption than the classical one, as it allows for the presence of heteroscedasticity or other effects in the measurement errors.
- * Berkson's errors: the errors are independent of the observed regressor $x$. This assumption has very limited applicability. One example is round-off errors: for example, if a person's true age is a continuous random variable, whereas the observed age is truncated to the next smallest integer, then the truncation error is approximately independent of the observed age. Another possibility is with the fixed design experiment: for example, if a scientist decides to make a measurement at a certain predetermined moment of time, say at $x = x_0$, then the real measurement may occur at some other value of $x$, and such measurement error will be generally independent of the "observed" value of the regressor. (The classical and Berkson error structures are contrasted numerically in the sketch after this list.)
- * Misclassification errors: special case used for the dummy regressors. If $x^*$ is an indicator of a certain event or condition, then the measurement error in such a regressor will correspond to incorrect classification, similar to type I and type II errors in statistical testing. In this case the error $\eta$ may take only 3 possible values, and its distribution conditional on $x^*$ is modeled with two parameters: $\alpha = \operatorname{Pr}[\eta = -1 \mid x^* = 1]$ and $\beta = \operatorname{Pr}[\eta = 1 \mid x^* = 0]$. The necessary condition for identification is that $\alpha + \beta < 1$, that is, misclassification should not happen "too often".
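The distinction between classical and Berkson errors can be checked numerically. In the minimal sketch below (the uniform distribution of the true values is an arbitrary choice), the classical error is uncorrelated with the true value, while the round-off error is approximately uncorrelated with the observed value:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Classical error: noise added to the true value, independent of x*
x_star = rng.uniform(20.0, 60.0, n)                  # latent "true" ages
x_classical = x_star + rng.normal(0.0, 2.0, n)
eta_classical = x_classical - x_star

# Berkson-type error from rounding: observed age truncated to the integer below
x_berkson = np.floor(x_star)
eta_berkson = x_berkson - x_star                     # truncation error in [-1, 0)

# Classical error is uncorrelated with the TRUE value but not with the observed one;
# the truncation error is (approximately) uncorrelated with the OBSERVED value.
print(np.corrcoef(eta_classical, x_star)[0, 1])      # ~ 0
print(np.corrcoef(eta_classical, x_classical)[0, 1]) # clearly positive
print(np.corrcoef(eta_berkson, x_berkson)[0, 1])     # ~ 0
print(np.corrcoef(eta_berkson, x_star)[0, 1])        # small but nonzero (negative)
```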
Linear model
Simple linear model
The simple linear errors-in-variables model was already presented in the "motivating example" section:

$$ \begin{cases} y_t = \alpha + \beta x_t^* + \varepsilon_t, \\ x_t = x_t^* + \eta_t, \end{cases} $$

where all variables are scalar. Here $\alpha$ and $\beta$ are the parameters of interest, whereas $\sigma_\varepsilon$ and $\sigma_\eta$, the standard deviations of the error terms, are the nuisance parameters. The "true" regressor $x^*$ is treated as a random variable, independent of the measurement error $\eta$.
This model is identifiable in two cases: either the latent regressor $x^*$ is not normally distributed, or $x^*$ has a normal distribution, but neither $\varepsilon_t$ nor $\eta_t$ is divisible by a normal distribution. That is, the parameters $\alpha$, $\beta$ can be consistently estimated from the data set without any additional information, provided the latent regressor is not Gaussian.
Before this identifiability result was established, statisticians attempted to apply the maximum likelihood technique by assuming that all variables are normal, and then concluded that the model is not identified. The suggested remedy was to assume that some of the parameters of the model are known or can be estimated from an outside source. Such estimation methods include
- Deming regression assumes that the ratio $\delta = \sigma^2_\varepsilon / \sigma^2_\eta$ is known. This could be appropriate for example when errors in $y$ and $x$ are both caused by measurements, and the accuracy of the measuring devices or procedures is known. The case when $\delta = 1$ is also known as the orthogonal regression. (A sketch implementing this and the two estimators below appears after this list.)
- Regression with known reliability ratio $\lambda = \sigma^2_{*} / (\sigma^2_{*} + \sigma^2_\eta)$, where $\sigma^2_{*}$ is the variance of the latent regressor. Such an approach may be applicable for example when repeated measurements of the same unit are available, or when the reliability ratio is known from an independent study. In this case the consistent estimate of the slope is equal to the least-squares estimate divided by $\lambda$.
- Regression with known $\sigma^2_\eta$ may occur when the source of the errors in the $x$'s is known and their variance can be calculated. This could include rounding errors, or errors introduced by the measuring device. When $\sigma^2_\eta$ is known we can compute the reliability ratio as $\lambda = (\sigma^2_x - \sigma^2_\eta) / \sigma^2_x$ and reduce the problem to the previous case.
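As an illustration, the sketch below applies all three corrections to simulated data; the parameter values and variances are arbitrary, each estimator is handed the nuisance quantity it assumes known, and the Deming slope uses the standard closed-form expression:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 100_000
alpha, beta = 1.0, 2.0
var_x_star, var_eta, var_eps = 1.0, 0.25, 0.09   # illustrative values

x_star = rng.normal(0.0, np.sqrt(var_x_star), T)
x = x_star + rng.normal(0.0, np.sqrt(var_eta), T)
y = alpha + beta * x_star + rng.normal(0.0, np.sqrt(var_eps), T)

s_xx = np.var(x)
s_yy = np.var(y)
s_xy = np.cov(x, y, ddof=0)[0, 1]
beta_ols = s_xy / s_xx                           # attenuated naive estimate

# 1. Deming regression with known delta = var_eps / var_eta
delta = var_eps / var_eta
beta_deming = (s_yy - delta * s_xx
               + np.sqrt((s_yy - delta * s_xx)**2 + 4 * delta * s_xy**2)
               ) / (2 * s_xy)

# 2. Known reliability ratio lambda = var_x_star / (var_x_star + var_eta)
lam = var_x_star / (var_x_star + var_eta)
beta_reliability = beta_ols / lam

# 3. Known var_eta: estimate lambda from the data, then proceed as in case 2
lam_hat = (s_xx - var_eta) / s_xx
beta_known_var = beta_ols / lam_hat

print(beta_ols, beta_deming, beta_reliability, beta_known_var)
# ~1.6 (attenuated), then ~2.0 for each corrected estimator
```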
Multivariable linear model
The multivariable model looks exactly like the simple linear model, only this time $\beta$, $\eta_t$, $x_t$ and $x^*_t$ are $k\times 1$ vectors:

$$ \begin{cases} y_t = \alpha + \beta' x_t^* + \varepsilon_t, \\ x_t = x_t^* + \eta_t. \end{cases} $$

In the case when $(\varepsilon_t, \eta_t)$ is jointly normal, the parameter $\beta$ is not identified if and only if there is a non-singular $k\times k$ block matrix $[a\ A]$, where $a$ is a $k\times 1$ vector such that $a'x^*$ is distributed normally and independently of $A'x^*$. In the case when $\varepsilon_t$, $\eta_{t1}, \ldots, \eta_{tk}$ are mutually independent, the parameter $\beta$ is not identified if and only if, in addition to the conditions above, some of the errors can be written as the sum of two independent variables one of which is normal.
Some of the estimation methods for multivariable linear models are