Simple linear regression


In statistics, simple linear regression is a linear regression model with a single explanatory variable. That is, it concerns two-dimensional sample points with one independent variable and one dependent variable and finds a linear function that, as accurately as possible, predicts the dependent variable values as a function of the independent variable.
The adjective simple refers to the fact that the outcome variable is related to a single predictor.
It is common to make the additional stipulation that the ordinary least squares method should be used: the accuracy of each predicted value is measured by its squared residual, and the goal is to make the sum of these squared deviations as small as possible.
In this case, the slope of the fitted line is equal to the correlation between $y$ and $x$ corrected by the ratio of standard deviations of these variables. The intercept of the fitted line is such that the line passes through the center of mass $(\bar x, \bar y)$ of the data points.

Formulation and computation

Consider the model function
$$ y = \alpha + \beta x, $$
which describes a line with slope $\beta$ and $y$-intercept $\alpha$. In general, such a relationship may not hold exactly for the largely unobserved population of values of the independent and dependent variables; we call the unobserved deviations from the above equation the errors. Suppose we observe $n$ data pairs and call them $\{(x_i, y_i),\ i = 1, \ldots, n\}$. We can describe the underlying relationship between $y_i$ and $x_i$ involving this error term $\varepsilon_i$ by
$$ y_i = \alpha + \beta x_i + \varepsilon_i. $$
This relationship between the true underlying parameters $\alpha$ and $\beta$ and the data points is called a linear regression model.
The goal is to find estimated values $\widehat\alpha$ and $\widehat\beta$ for the parameters $\alpha$ and $\beta$ which would provide the "best" fit in some sense for the data points. As mentioned in the introduction, in this article the "best" fit will be understood as in the least-squares approach: a line that minimizes the sum of squared residuals $\widehat\varepsilon_i$, each of which is given by, for any candidate parameter values $\alpha$ and $\beta$,
$$ \widehat\varepsilon_i = y_i - \alpha - \beta x_i. $$
In other words, $\widehat\alpha$ and $\widehat\beta$ solve the following minimization problem:
$$ (\widehat\alpha,\, \widehat\beta) = \operatorname{argmin}_{\alpha,\,\beta}\, Q(\alpha, \beta), $$
where the objective function $Q$ is:
$$ Q(\alpha, \beta) = \sum_{i=1}^n \widehat\varepsilon_i^{\,2} = \sum_{i=1}^n (y_i - \alpha - \beta x_i)^2. $$
By expanding to get a quadratic expression in $\alpha$ and $\beta$, we can derive minimizing values of the function arguments, denoted $\widehat\alpha$ and $\widehat\beta$:
$$ \widehat\alpha = \bar y - \widehat\beta\,\bar x, $$
$$ \widehat\beta = \frac{\sum_{i=1}^n (x_i - \bar x)(y_i - \bar y)}{\sum_{i=1}^n (x_i - \bar x)^2} = \frac{s_{xy}}{s_x^2} = r_{xy}\frac{s_y}{s_x}. $$
Here we have introduced
$\bar x$ and $\bar y$ as the averages of the $x_i$ and $y_i$, respectively;
$s_{xy}$ and $s_x^2$ as the sample covariance of $x$ and $y$ and the sample variance of $x$, respectively;
$s_x$ and $s_y$ as the sample standard deviations of $x$ and $y$; and
$r_{xy}$ as the sample correlation coefficient between $x$ and $y$.
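For concreteness, here is a minimal sketch of these least-squares formulas in Python with NumPy; the data values and the names x, y, beta_hat, alpha_hat are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Illustrative data; any paired samples of equal length will do.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 3.7, 4.2, 5.1])

x_bar, y_bar = x.mean(), y.mean()

# Slope: sample covariance of x and y divided by the sample variance of x.
beta_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)

# Intercept: forces the fitted line through the centroid (x_bar, y_bar).
alpha_hat = y_bar - beta_hat * x_bar

print(beta_hat, alpha_hat)
```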

Expanded formulas

The above equations are efficient to use if the means of the x and y variables are known. If the means are not known at the time of calculation, it may be more efficient to use the expanded version of the equations. These expanded equations may be derived from the more general polynomial regression equations by defining the regression polynomial to be of order 1, as follows:
$$ \begin{bmatrix} n & \sum_{i=1}^n x_i \\ \sum_{i=1}^n x_i & \sum_{i=1}^n x_i^2 \end{bmatrix} \begin{bmatrix} \widehat\alpha \\ \widehat\beta \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^n y_i \\ \sum_{i=1}^n x_i y_i \end{bmatrix}. $$
The above system of linear equations may be solved directly, or stand-alone equations for $\widehat\alpha$ and $\widehat\beta$ may be derived by expanding the matrix equations above. The resultant equations are algebraically equivalent to the ones shown in the prior paragraph, and are shown below without proof:
$$ \widehat\beta = \frac{n\sum_{i=1}^n x_i y_i - \sum_{i=1}^n x_i \sum_{i=1}^n y_i}{n\sum_{i=1}^n x_i^2 - \left(\sum_{i=1}^n x_i\right)^2}, \qquad \widehat\alpha = \frac{\sum_{i=1}^n y_i \sum_{i=1}^n x_i^2 - \sum_{i=1}^n x_i \sum_{i=1}^n x_i y_i}{n\sum_{i=1}^n x_i^2 - \left(\sum_{i=1}^n x_i\right)^2}. $$
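A short sketch of these expanded, sums-only formulas with illustrative data; no means are computed before the raw sums are accumulated.

```python
import numpy as np

# Illustrative data (hypothetical values, not from the article).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 3.7, 4.2, 5.1])
n = len(x)

# Accumulate the raw sums; the means are never formed explicitly.
sum_x, sum_y = x.sum(), y.sum()
sum_xx, sum_xy = np.sum(x * x), np.sum(x * y)

denom = n * sum_xx - sum_x ** 2
beta_hat = (n * sum_xy - sum_x * sum_y) / denom
alpha_hat = (sum_y * sum_xx - sum_x * sum_xy) / denom

print(beta_hat, alpha_hat)
```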

Interpretation

Relationship with the sample covariance matrix

The solution $\widehat\beta$ can be reformulated using elements of the covariance matrix:
$$ \widehat\beta = \frac{s_{x,y}}{s_x^2} = r_{xy}\frac{s_y}{s_x}, $$
where
$r_{xy}$ is the sample correlation coefficient between $x$ and $y$;
$s_x$ and $s_y$ are the uncorrected sample standard deviations of $x$ and $y$; and
$s_x^2$ and $s_{x,y}$ are the sample variance and sample covariance, respectively.
Substituting the above expressions for $\widehat\alpha$ and $\widehat\beta$ into the original solution $f(x) = \widehat\alpha + \widehat\beta x$ yields
$$ \frac{f(x) - \bar y}{s_y} = r_{xy}\,\frac{x - \bar x}{s_x}. $$
This shows that $r_{xy}$ is the slope of the regression line of the standardized data points. Since $-1 \le r_{xy} \le 1$, we get that if $x$ is some measurement and $y$ is a followup measurement from the same item, then we expect that $y$ will be closer to the mean measurement than it was to the original value of $x$. This phenomenon is known as regression toward the mean.
Generalizing the $\bar x$ notation, we can write a horizontal bar over an expression to indicate the average value of that expression over the set of samples. For example:
$$ \overline{xy} = \frac{1}{n}\sum_{i=1}^n x_i y_i. $$
This notation allows us a concise formula for $r_{xy}$:
$$ r_{xy} = \frac{\overline{xy} - \bar x\,\bar y}{\sqrt{\left(\overline{x^2} - \bar x^2\right)\left(\overline{y^2} - \bar y^2\right)}}. $$
The coefficient of determination ("R squared") is equal to $r_{xy}^2$ when the model is linear with a single independent variable. See sample correlation coefficient for additional details.
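A small numerical check of these relationships, using hypothetical data: it verifies that the slope equals $r_{xy}\,s_y/s_x$ and that the coefficient of determination matches $r_{xy}^2$.

```python
import numpy as np

# Illustrative data (hypothetical, not taken from the article).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 3.7, 4.2, 5.1])

r_xy = np.corrcoef(x, y)[0, 1]      # sample correlation coefficient
s_x, s_y = np.std(x), np.std(y)     # uncorrected standard deviations

beta_hat = r_xy * s_y / s_x         # slope via the correlation form
alpha_hat = y.mean() - beta_hat * x.mean()

# The coefficient of determination equals r_xy**2 for this one-predictor model.
y_fit = alpha_hat + beta_hat * x
ss_res = np.sum((y - y_fit) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print(1 - ss_res / ss_tot, r_xy ** 2)   # the two values agree
```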

Interpretation about the slope

By multiplying all members of the summation in the numerator by $\frac{x_i - \bar x}{x_i - \bar x} = 1$ (thereby not changing it):
$$ \widehat\beta = \frac{\sum_{i=1}^n (x_i - \bar x)(y_i - \bar y)}{\sum_{i=1}^n (x_i - \bar x)^2} = \sum_{i=1}^n \frac{(x_i - \bar x)^2}{\sum_{j=1}^n (x_j - \bar x)^2}\cdot\frac{y_i - \bar y}{x_i - \bar x}. $$
We can see that the slope of the regression line is the weighted average of $\frac{y_i - \bar y}{x_i - \bar x}$, that is, the slope of the line that connects the $i$-th point to the average of all points, weighted by $(x_i - \bar x)^2$. The further a point lies from $\bar x$, the more "important" it is, since small errors in its position affect the slope of the line connecting it to the center point less, making that slope a more reliable estimate.
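A quick numerical verification of this weighted-average identity, with hypothetical data chosen so that no $x_i$ equals $\bar x$ (such a point would receive zero weight and an undefined point slope).

```python
import numpy as np

# Illustrative data; x.mean() is 4.4, so no point sits exactly at the mean.
x = np.array([1.0, 2.0, 4.0, 6.0, 9.0])
y = np.array([1.5, 2.3, 4.1, 5.8, 9.2])

x_bar, y_bar = x.mean(), y.mean()

# Standard least-squares slope.
beta_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)

# Slope of the line joining each point to the centroid ...
point_slopes = (y - y_bar) / (x - x_bar)
# ... averaged with weights proportional to (x_i - x_bar)^2.
weights = (x - x_bar) ** 2 / np.sum((x - x_bar) ** 2)

print(beta_hat, np.sum(weights * point_slopes))   # identical values
```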

Interpretation about the intercept

Given $\widehat\beta = \tan(\theta)$, with $\theta$ the angle the line makes with the positive $x$ axis,
we have
$$ \widehat\alpha = \bar y - \widehat\beta\,\bar x = \bar y - \tan(\theta)\,\bar x. $$

Interpretation about the correlation

In the above formulation, notice that each $x_i$ is a constant value, while the $y_i$ are random variables that depend on the linear function of $x_i$ and the random term $\varepsilon_i$. This assumption is used when deriving the standard error of the slope and showing that it is unbiased.
In this framing, when $x_i$ is not actually a random variable, what type of parameter does the empirical correlation $r_{xy}$ estimate? The issue is that for each value $i$ we'll have $E(x_i) = x_i$ and $\operatorname{Var}(x_i) = 0$. A possible interpretation of $r_{xy}$ is to imagine that $x_i$ defines a random variable drawn from the empirical distribution of the $x$ values in our sample. For example, if $x$ had 10 values from the natural numbers $1, 2, 3, \ldots, 10$, then we can imagine $x$ to be a discrete uniform distribution. Under this interpretation all $x_i$ have the same expectation and some positive variance. With this interpretation we can think of $r_{xy}$ as the estimator of the Pearson's correlation between the random variable $y$ and the random variable $x$ (as we have just defined it).

Numerical properties

Statistical properties

Description of the statistical properties of estimators from the simple linear regression estimates requires the use of a statistical model. The following is based on assuming the validity of a model under which the estimates are optimal. It is also possible to evaluate the properties under other assumptions, such as inhomogeneity, but this is discussed elsewhere.

Unbiasedness

The estimators $\widehat\alpha$ and $\widehat\beta$ are unbiased.
To formalize this assertion we must define a framework in which these estimators are random variables. We consider the residuals $\varepsilon_i$ as random variables drawn independently from some distribution with mean zero. In other words, for each value of $x$, the corresponding value of $y$ is generated as a mean response $\alpha + \beta x$ plus an additional random variable $\varepsilon$ called the error term, equal to zero on average. Under such an interpretation, the least-squares estimators $\widehat\alpha$ and $\widehat\beta$ will themselves be random variables whose means will equal the "true values" $\alpha$ and $\beta$. This is the definition of an unbiased estimator.
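A small Monte Carlo sketch of this claim, with assumed true values for $\alpha$ and $\beta$ and normally distributed errors; averaging the estimates over many simulated samples approaches the true parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# True parameters (illustrative choices, not from the article).
alpha, beta = 2.0, 0.5
x = np.linspace(0.0, 10.0, 30)          # fixed design points

estimates = []
for _ in range(10_000):
    # Errors drawn independently with mean zero.
    eps = rng.normal(0.0, 1.0, size=x.size)
    y = alpha + beta * x + eps

    b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    a = y.mean() - b * x.mean()
    estimates.append((a, b))

# The averages of the estimates approach the true (alpha, beta).
print(np.mean(estimates, axis=0))
```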

Variance of the mean response

Since the data in this context is defined to be $(x, y)$ pairs for every observation, the mean response at a given value of $x$, say $x_d$, is an estimate of the mean of the $y$ values in the population at the $x$ value of $x_d$, that is $\widehat{E}(y \mid x_d) \equiv \widehat{y}_d = \widehat\alpha + \widehat\beta x_d$. The variance of the mean response is given by:
$$ \operatorname{Var}\!\left(\widehat\alpha + \widehat\beta x_d\right) = \operatorname{Var}(\widehat\alpha) + \left(\operatorname{Var}(\widehat\beta)\right) x_d^{\,2} + 2 x_d \operatorname{Cov}\!\left(\widehat\alpha, \widehat\beta\right). $$
This expression can be simplified to
$$ \operatorname{Var}\!\left(\widehat\alpha + \widehat\beta x_d\right) = \sigma^2\left(\frac{1}{m} + \frac{(x_d - \bar x)^2}{\sum_{i=1}^m (x_i - \bar x)^2}\right), $$
where $m$ is the number of data points.
To demonstrate this simplification, one can make use of the identity
$$ \sum_{i=1}^m (x_i - \bar x)^2 = \sum_{i=1}^m x_i^2 - \frac{1}{m}\left(\sum_{i=1}^m x_i\right)^2. $$
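As a sketch of that simplification, one can start from the standard variance and covariance expressions for the least-squares estimators (stated here as assumptions, since they are not derived in this section):
$$ \operatorname{Var}(\widehat\beta) = \frac{\sigma^2}{\sum_i (x_i - \bar x)^2}, \qquad \operatorname{Var}(\widehat\alpha) = \frac{\sigma^2 \sum_i x_i^2}{m \sum_i (x_i - \bar x)^2}, \qquad \operatorname{Cov}\!\left(\widehat\alpha, \widehat\beta\right) = \frac{-\sigma^2 \bar x}{\sum_i (x_i - \bar x)^2}, $$
so that
$$ \operatorname{Var}\!\left(\widehat\alpha + \widehat\beta x_d\right) = \frac{\sigma^2}{\sum_i (x_i - \bar x)^2}\left(\frac{\sum_i x_i^2}{m} - 2\bar x x_d + x_d^2\right) = \sigma^2\left(\frac{1}{m} + \frac{(x_d - \bar x)^2}{\sum_i (x_i - \bar x)^2}\right), $$
where the last step uses the identity above in the form $\frac{1}{m}\sum_i x_i^2 = \frac{1}{m}\sum_i (x_i - \bar x)^2 + \bar x^2$.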

Variance of the predicted response

The predicted response distribution is the predicted distribution of the residuals at the given point $x_d$. So the variance is given by
$$ \begin{aligned} \operatorname{Var}\!\left(y_d - \left[\widehat\alpha + \widehat\beta x_d\right]\right) &= \operatorname{Var}(y_d) + \operatorname{Var}\!\left(\widehat\alpha + \widehat\beta x_d\right) - 2\operatorname{Cov}\!\left(y_d,\ \widehat\alpha + \widehat\beta x_d\right) \\ &= \operatorname{Var}(y_d) + \operatorname{Var}\!\left(\widehat\alpha + \widehat\beta x_d\right). \end{aligned} $$
The second line follows from the fact that $\operatorname{Cov}\!\left(y_d,\ \widehat\alpha + \widehat\beta x_d\right)$ is zero because the new prediction point is independent of the data used to fit the model. Additionally, the term $\operatorname{Var}\!\left(\widehat\alpha + \widehat\beta x_d\right)$ was calculated earlier for the mean response.
Since $\operatorname{Var}(y_d) = \sigma^2$, the variance of the predicted response is given by
$$ \operatorname{Var}\!\left(y_d - \left[\widehat\alpha + \widehat\beta x_d\right]\right) = \sigma^2 + \sigma^2\left(\frac{1}{m} + \frac{(x_d - \bar x)^2}{\sum_{i=1}^m (x_i - \bar x)^2}\right) = \sigma^2\left(1 + \frac{1}{m} + \frac{(x_d - \bar x)^2}{\sum_{i=1}^m (x_i - \bar x)^2}\right). $$
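A minimal sketch computing both quantities at a query point $x_d$, with hypothetical data and with $\sigma^2$ estimated from the residuals.

```python
import numpy as np

# Illustrative data and query point (hypothetical values).
x = np.array([1.0, 2.0, 4.0, 6.0, 9.0])
y = np.array([1.5, 2.3, 4.1, 5.8, 9.2])
x_d = 5.0

m = len(x)
x_bar = x.mean()
beta_hat = np.sum((x - x_bar) * (y - y.mean())) / np.sum((x - x_bar) ** 2)
alpha_hat = y.mean() - beta_hat * x_bar

# Estimate sigma^2 from the residuals (m - 2 degrees of freedom).
resid = y - (alpha_hat + beta_hat * x)
sigma2_hat = np.sum(resid ** 2) / (m - 2)

leverage = 1 / m + (x_d - x_bar) ** 2 / np.sum((x - x_bar) ** 2)
var_mean_response = sigma2_hat * leverage         # uncertainty of the fitted line at x_d
var_pred_response = sigma2_hat * (1 + leverage)   # adds the noise of a single new observation

print(var_mean_response, var_pred_response)
```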

Confidence intervals

The formulas given in the previous section allow one to calculate the point estimates of $\alpha$ and $\beta$, that is, the coefficients of the regression line for the given set of data. However, those formulas do not tell us how precise the estimates are, i.e., how much the estimators $\widehat\alpha$ and $\widehat\beta$ vary from sample to sample for the specified sample size. Confidence intervals were devised to give a plausible set of values to the estimates one might have if one repeated the experiment a very large number of times.
The standard method of constructing confidence intervals for linear regression coefficients relies on the normality assumption, which is justified if either:
  1. the errors in the regression are normally distributed, or
  2. the number of observations is sufficiently large, in which case the estimator is approximately normally distributed.
The latter case is justified by the central limit theorem.

Normality assumption

Under the first assumption above, that of the normality of the error terms, the estimator of the slope coefficient will itself be normally distributed with mean $\beta$ and variance $\sigma^2 / \sum_{i=1}^n (x_i - \bar x)^2$, where $\sigma^2$ is the variance of the error terms. At the same time the sum of squared residuals $\sum_{i=1}^n \widehat\varepsilon_i^{\,2}$ is distributed proportionally to $\chi^2$ with $n - 2$ degrees of freedom, and independently from $\widehat\beta$. This allows us to construct a $t$-value
$$ t = \frac{\widehat\beta - \beta}{s_{\widehat\beta}} \sim t_{n-2}, $$
where
$$ s_{\widehat\beta} = \sqrt{\frac{\frac{1}{n-2}\sum_{i=1}^n \widehat\varepsilon_i^{\,2}}{\sum_{i=1}^n (x_i - \bar x)^2}} $$
is the standard error of the estimator $\widehat\beta$.
This $t$-value has a Student's $t$-distribution with $n - 2$ degrees of freedom. Using it we can construct a confidence interval for $\beta$:
$$ \beta \in \left[\widehat\beta - s_{\widehat\beta}\, t^*_{n-2},\ \widehat\beta + s_{\widehat\beta}\, t^*_{n-2}\right], $$
at confidence level $(1 - \gamma)$, where $t^*_{n-2}$ is the $\left(1 - \frac{\gamma}{2}\right)$-th quantile of the $t_{n-2}$ distribution. For example, if $\gamma = 0.05$ then the confidence level is 95%.
Similarly, the confidence interval for the intercept coefficient $\alpha$ is given by
$$ \alpha \in \left[\widehat\alpha - s_{\widehat\alpha}\, t^*_{n-2},\ \widehat\alpha + s_{\widehat\alpha}\, t^*_{n-2}\right], $$
at confidence level $(1 - \gamma)$, where
$$ s_{\widehat\alpha} = s_{\widehat\beta}\sqrt{\frac{1}{n}\sum_{i=1}^n x_i^2} = \sqrt{\frac{1}{n(n-2)}\left(\sum_{i=1}^n \widehat\varepsilon_i^{\,2}\right)\frac{\sum_{i=1}^n x_i^2}{\sum_{i=1}^n (x_i - \bar x)^2}}. $$
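A sketch of these interval computations using SciPy's Student's $t$ quantile function; the data are hypothetical, and $\gamma = 0.05$ gives 95% intervals.

```python
import numpy as np
from scipy import stats

# Illustrative data (hypothetical, not the Okun's law data shown in the figure).
x = np.array([1.0, 2.0, 4.0, 6.0, 9.0])
y = np.array([1.5, 2.3, 4.1, 5.8, 9.2])
n = len(x)

x_bar = x.mean()
beta_hat = np.sum((x - x_bar) * (y - y.mean())) / np.sum((x - x_bar) ** 2)
alpha_hat = y.mean() - beta_hat * x_bar

resid = y - (alpha_hat + beta_hat * x)
s2 = np.sum(resid ** 2) / (n - 2)                  # estimate of sigma^2

se_beta = np.sqrt(s2 / np.sum((x - x_bar) ** 2))   # standard error of the slope
se_alpha = se_beta * np.sqrt(np.mean(x ** 2))      # standard error of the intercept

gamma = 0.05
t_star = stats.t.ppf(1 - gamma / 2, df=n - 2)      # (1 - gamma/2) quantile of t_{n-2}

print("beta: ", beta_hat - t_star * se_beta, beta_hat + t_star * se_beta)
print("alpha:", alpha_hat - t_star * se_alpha, alpha_hat + t_star * se_alpha)
```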
[Figure: The US "changes in unemployment – GDP growth" regression with the 95% confidence bands (Okun's law).]
The confidence intervals for $\alpha$ and $\beta$ give us the general idea where these regression coefficients are most likely to be. For example, in the Okun's law regression shown here the point estimates are
The 95% confidence intervals for these estimates are
In order to represent this information graphically, in the form of the confidence bands around the regression line, one has to proceed carefully and account for the joint distribution of the estimators. It can be shown that at confidence level $(1 - \gamma)$ the confidence band has hyperbolic form given by the equation
$$ (\alpha + \beta\xi) \in \left[\,\widehat\alpha + \widehat\beta\xi \ \pm\ t^*_{n-2}\sqrt{\left(\frac{1}{n-2}\sum_{i=1}^n \widehat\varepsilon_i^{\,2}\right)\left(\frac{1}{n} + \frac{(\xi - \bar x)^2}{\sum_{i=1}^n (x_i - \bar x)^2}\right)}\,\right]. $$
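A sketch of how such a band can be evaluated on a grid of values $\xi$, using the formula above with hypothetical data.

```python
import numpy as np
from scipy import stats

# Illustrative data; xi_grid is the range over which the band is evaluated.
x = np.array([1.0, 2.0, 4.0, 6.0, 9.0])
y = np.array([1.5, 2.3, 4.1, 5.8, 9.2])
n = len(x)

x_bar = x.mean()
beta_hat = np.sum((x - x_bar) * (y - y.mean())) / np.sum((x - x_bar) ** 2)
alpha_hat = y.mean() - beta_hat * x_bar
s2 = np.sum((y - alpha_hat - beta_hat * x) ** 2) / (n - 2)

gamma = 0.05
t_star = stats.t.ppf(1 - gamma / 2, df=n - 2)

xi_grid = np.linspace(x.min(), x.max(), 50)
half_width = t_star * np.sqrt(s2 * (1 / n + (xi_grid - x_bar) ** 2 / np.sum((x - x_bar) ** 2)))

lower = alpha_hat + beta_hat * xi_grid - half_width   # lower edge of the band
upper = alpha_hat + beta_hat * xi_grid + half_width   # upper edge of the band
print(half_width.min(), half_width.max())             # the band is narrowest near x_bar
```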
When the model assumes that the intercept is fixed and equal to 0 ($\alpha = 0$), the standard error of the slope turns into:
$$ s_{\widehat\beta} = \sqrt{\frac{1}{n-1}\cdot\frac{\sum_{i=1}^n \widehat\varepsilon_i^{\,2}}{\sum_{i=1}^n x_i^2}}, $$
with:
$$ \widehat\varepsilon_i = y_i - \widehat{y}_i = y_i - \widehat\beta x_i. $$
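A brief sketch of the zero-intercept case with hypothetical data; note the $n - 1$ degrees of freedom, since only the slope is estimated.

```python
import numpy as np

# Illustrative data (hypothetical); the model is forced through the origin.
x = np.array([1.0, 2.0, 4.0, 6.0, 9.0])
y = np.array([0.9, 2.1, 3.8, 6.2, 8.9])
n = len(x)

# Least-squares slope with the intercept fixed at zero.
beta_hat = np.sum(x * y) / np.sum(x ** 2)

resid = y - beta_hat * x
# Only one parameter is estimated, so n - 1 degrees of freedom remain.
se_beta = np.sqrt(np.sum(resid ** 2) / (n - 1) / np.sum(x ** 2))

print(beta_hat, se_beta)
```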