Lasso (statistics)


In statistics and machine learning, lasso (least absolute shrinkage and selection operator) is a regression analysis method that performs both variable selection and regularization in order to enhance the prediction accuracy and interpretability of the resulting statistical model. The lasso method assumes that the coefficients of the linear model are sparse, meaning that only a few of them are non-zero. It was originally introduced in geophysics, and later rediscovered and popularized by Robert Tibshirani, who coined the term.
Lasso was originally formulated for linear regression models. This simple case reveals a substantial amount about the estimator, including its relationship to ridge regression and best subset selection and the connections between lasso coefficient estimates and so-called soft thresholding. It also reveals that the coefficient estimates need not be unique if covariates are collinear.
Though originally defined for linear regression, lasso regularization is easily extended to other statistical models including generalized linear models, generalized estimating equations, proportional hazards models, and M-estimators. Lasso's ability to perform subset selection relies on the form of the constraint and has a variety of interpretations including in terms of geometry, Bayesian statistics and convex analysis.
Lasso is closely related to basis pursuit denoising.

History

Lasso was introduced in order to improve the prediction accuracy and interpretability of regression models. It selects a reduced set of the known covariates for use in a model.
Lasso was developed independently in the geophysics literature in 1986, based on prior work that used the $\ell^1$ penalty for both fitting and penalization of the coefficients. Statistician Robert Tibshirani independently rediscovered and popularized it in 1996, based on Breiman's nonnegative garrote.
Prior to lasso, the most widely used method for choosing covariates was stepwise selection. That approach only improves prediction accuracy in certain cases, such as when only a few covariates have a strong relationship with the outcome. However, in other cases, it can increase prediction error.
At the time, ridge regression was the most popular technique for improving prediction accuracy. Ridge regression improves prediction error by shrinking the sum of the squares of the regression coefficients to be less than a fixed value in order to reduce overfitting, but it does not perform covariate selection and therefore does not help to make the model more interpretable.
Lasso achieves both of these goals by forcing the sum of the absolute value of the regression coefficients to be less than a fixed value, which forces certain coefficients to zero, excluding them from impacting prediction. This idea is similar to ridge regression, which also shrinks the size of the coefficients; however, ridge regression does not set coefficients to zero.

Basic form

Least squares

Consider a sample consisting of N cases, each of which consists of p covariates and a single outcome. Let $y_i$ be the outcome and $x_i := (x_1, x_2, \ldots, x_p)_i^T$ be the covariate vector for the i-th case. Then the objective of lasso is to solve

$$\min_{\beta_0, \beta} \left\{ \frac{1}{N} \sum_{i=1}^{N} \left( y_i - \beta_0 - x_i^T \beta \right)^2 \right\} \quad \text{subject to} \quad \sum_{j=1}^{p} |\beta_j| \leq t.$$

Here $\beta_0$ is the constant coefficient, $\beta := (\beta_1, \beta_2, \ldots, \beta_p)$ is the coefficient vector, and $t$ is a prespecified free parameter that determines the degree of regularization.

Letting $X$ be the covariate matrix, so that $X_{ij} = (x_i)_j$ and $x_i^T$ is the i-th row of $X$, the expression can be written more compactly as

$$\min_{\beta_0, \beta} \left\{ \frac{1}{N} \left\| y - \beta_0 \mathbf{1}_N - X \beta \right\|_2^2 \right\} \quad \text{subject to} \quad \| \beta \|_1 \leq t,$$

where $\| u \|_p = \left( \sum_{i=1}^{N} |u_i|^p \right)^{1/p}$ is the standard $\ell^p$ norm.
Denoting the mean of the data points $x_i$ by $\bar{x}$ and the mean of the response variables $y_i$ by $\bar{y}$, the resulting estimate for $\beta_0$ is $\hat{\beta}_0 = \bar{y} - \bar{x}^T \beta$, so that

$$y_i - \hat{\beta}_0 - x_i^T \beta = y_i - (\bar{y} - \bar{x}^T \beta) - x_i^T \beta = (y_i - \bar{y}) - (x_i - \bar{x})^T \beta,$$

and therefore it is standard to work with variables that have been made zero-mean. Additionally, the covariates are typically standardized so that the solution does not depend on the measurement scale.
It can be helpful to rewrite

$$\min_{\beta} \left\{ \frac{1}{N} \left\| y - X \beta \right\|_2^2 \right\} \quad \text{subject to} \quad \| \beta \|_1 \leq t$$

in the so-called Lagrangian form

$$\min_{\beta} \left\{ \frac{1}{N} \left\| y - X \beta \right\|_2^2 + \lambda \| \beta \|_1 \right\},$$

where the exact relationship between $t$ and $\lambda$ is data dependent.
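For concreteness, the Lagrangian form can be solved with standard software. The following sketch uses scikit-learn's Lasso on simulated data; because scikit-learn scales the squared-error term by 1/(2N), its alpha parameter corresponds to $\lambda/2$ in the form above, and the data and parameter values here are purely illustrative.

```python
# A minimal sketch of the Lagrangian-form lasso using scikit-learn.
# Note: scikit-learn's Lasso minimizes (1/(2N))||y - Xb||^2 + alpha*||b||_1,
# so `alpha` plays the role of lambda/2 in the form written above.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
N, p = 100, 10
X = rng.normal(size=(N, p))
beta_true = np.array([3.0, -2.0, 0, 0, 1.5, 0, 0, 0, 0, 0])
y = X @ beta_true + rng.normal(scale=0.5, size=N)

# Center y and standardize X, as described above.
X_std = StandardScaler().fit_transform(X)
y_centered = y - y.mean()

model = Lasso(alpha=0.1, fit_intercept=False)
model.fit(X_std, y_centered)
print(model.coef_)  # typically, many of the ten coefficients are exactly zero
```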

Orthonormal covariates

Some basic properties of the lasso estimator can now be considered.
Assuming first that the covariates are orthonormal, so that $x_{(i)}^T x_{(j)} = \delta_{ij}$ where $\delta_{ij}$ is the Kronecker delta, or, equivalently, $X^T X = I$, then using subgradient methods it can be shown that

$$\hat{\beta}_j = S_{N\lambda/2}\!\left( \hat{\beta}^{\text{OLS}}_j \right) = \hat{\beta}^{\text{OLS}}_j \max\!\left( 0, 1 - \frac{N \lambda}{2 \left| \hat{\beta}^{\text{OLS}}_j \right|} \right), \quad \text{where } \hat{\beta}^{\text{OLS}} = (X^T X)^{-1} X^T y = X^T y.$$

$S_\alpha$ is referred to as the soft thresholding operator, since it translates values towards zero (setting the smallest ones exactly to zero) instead of setting smaller values to zero and leaving larger ones untouched, as the hard thresholding operator, often denoted $H_\alpha$, would.
In ridge regression the objective is to minimize

$$\min_{\beta} \left\{ \frac{1}{N} \left\| y - X \beta \right\|_2^2 + \lambda \| \beta \|_2^2 \right\}.$$

Using $X^T X = I$ and the ridge regression formula $\hat{\beta} = (X^T X + N \lambda I)^{-1} X^T y$ yields

$$\hat{\beta}_j = (1 + N\lambda)^{-1} \hat{\beta}^{\text{OLS}}_j.$$

Ridge regression shrinks all coefficients by a uniform factor of $(1 + N\lambda)^{-1}$ and does not set any coefficients to zero.
It can also be compared to regression with best subset selection, in which the goal is to minimize

$$\min_{\beta} \left\{ \frac{1}{N} \left\| y - X \beta \right\|_2^2 + \lambda \| \beta \|_0 \right\},$$

where $\| \cdot \|_0$ is the "$\ell^0$ norm", defined as $\| z \|_0 = m$ if exactly $m$ components of $z$ are nonzero. Again assuming orthonormal covariates, it can be shown that in this special case

$$\hat{\beta}_j = H_{\sqrt{N\lambda}}\!\left( \hat{\beta}^{\text{OLS}}_j \right) = \hat{\beta}^{\text{OLS}}_j \, \mathrm{I}\!\left( \left| \hat{\beta}^{\text{OLS}}_j \right| \geq \sqrt{N\lambda} \right),$$

where $H_\alpha$ is again the hard thresholding operator and $\mathrm{I}$ is an indicator function (equal to 1 if its argument is true and 0 otherwise).
Therefore, the lasso estimates share features of both ridge and best subset selection regression: like ridge regression, lasso shrinks the magnitude of all the coefficients, and like best subset selection, it sets some of them to zero. Additionally, while ridge regression scales all of the coefficients by a constant factor, lasso instead translates the coefficients towards zero by a constant value and sets them to zero if they reach it.
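The closed forms above can be compared directly. The sketch below applies soft thresholding, uniform ridge shrinkage, and hard thresholding to the same vector of OLS coefficients; the values of N and λ are illustrative.

```python
# A small numerical sketch (values are illustrative) of how the three estimators
# act on the OLS coefficients when the covariates are orthonormal, following the
# closed forms above: lasso soft-thresholds, ridge shrinks uniformly, and best
# subset selection hard-thresholds.
import numpy as np

def soft_threshold(b, thresh):
    # S_thresh: translate towards zero, setting small values exactly to zero.
    return np.sign(b) * np.maximum(np.abs(b) - thresh, 0.0)

def hard_threshold(b, thresh):
    # H_thresh: keep coefficients whose magnitude reaches the threshold, zero the rest.
    return b * (np.abs(b) >= thresh)

beta_ols = np.array([3.0, 1.2, 0.4, -0.1, -2.5])
N, lam = 1, 0.5  # illustrative values

print("lasso (soft threshold):", soft_threshold(beta_ols, N * lam / 2))
print("ridge (uniform shrink):", beta_ols / (1 + N * lam))
print("best subset (hard):    ", hard_threshold(beta_ols, np.sqrt(N * lam)))
```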

Correlated covariates

Consider the special case in which two covariates, say j and k, are identical for each observation, so that the corresponding columns of the covariate matrix are equal, $x_{(j)} = x_{(k)}$. Then the values of $\beta_j$ and $\beta_k$ that minimize the lasso objective function are not uniquely determined. In fact, if there is some solution $\hat{\beta}$ in which $\hat{\beta}_j \hat{\beta}_k \geq 0$, then for any $s \in [0, 1]$, replacing $\hat{\beta}_j$ by $s(\hat{\beta}_j + \hat{\beta}_k)$ and $\hat{\beta}_k$ by $(1 - s)(\hat{\beta}_j + \hat{\beta}_k)$, while keeping all the other $\hat{\beta}_i$ fixed, gives a new solution, so the lasso objective function has a continuum of valid minimizers. Several variants of the lasso, including the elastic net, have been designed to address this shortcoming.
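The non-uniqueness can be seen numerically. The sketch below uses made-up data with two identical columns and shows that the lasso objective value depends only on the sum of the two coefficients attached to those columns, not on how that sum is split.

```python
# A small numerical check (a sketch, not a proof) that identical covariates make
# the lasso minimizer non-unique: any same-sign split of the combined weight
# between the two duplicated columns gives the same objective value.
import numpy as np

rng = np.random.default_rng(1)
N = 50
x = rng.normal(size=N)
X = np.column_stack([x, x, rng.normal(size=N)])  # columns 0 and 1 are identical
y = 2.0 * x + rng.normal(scale=0.1, size=N)
lam = 0.1

def lasso_objective(beta):
    return np.mean((y - X @ beta) ** 2) + lam * np.sum(np.abs(beta))

total = 1.5  # combined weight assigned to the duplicated pair (illustrative)
for s in [0.0, 0.25, 0.5, 1.0]:
    beta = np.array([s * total, (1 - s) * total, 0.3])
    print(s, lasso_objective(beta))  # identical objective value for every split
```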

General form

Lasso regularization can be extended to other objective functions such as those for generalized linear models, generalized estimating equations, proportional hazards models, and M-estimators. Given the objective function

$$\frac{1}{N} \sum_{i=1}^{N} f(x_i, y_i, \alpha, \beta),$$

the lasso regularized version of the estimator is the solution to

$$\min_{\alpha, \beta} \frac{1}{N} \sum_{i=1}^{N} f(x_i, y_i, \alpha, \beta) \quad \text{subject to} \quad \| \beta \|_1 \leq t,$$

where only $\beta$ is penalized while $\alpha$ is free to take any allowed value, just as $\beta_0$ was not penalized in the basic case.
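As an illustration of this general form, the following sketch fits an $\ell^1$-penalized logistic regression (a generalized linear model) with scikit-learn; the data and parameter values are made up for the example, and the unpenalized intercept plays the role of $\alpha$ above.

```python
# A minimal sketch of lasso regularization applied to a generalized linear model:
# L1-penalized logistic regression via scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
N, p = 200, 20
X = rng.normal(size=(N, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]   # only three covariates matter
y = (rng.random(N) < 1 / (1 + np.exp(-(X @ beta_true)))).astype(int)

# penalty="l1" requires a solver that supports it, such as "saga";
# C is the inverse of the regularization strength.
clf = LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000)
clf.fit(X, y)
print("nonzero coefficients:", np.count_nonzero(clf.coef_))  # typically far fewer than p
```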

Interpretations

Geometric interpretation

Lasso can set coefficients to zero, while the superficially similar ridge regression cannot. This is due to the difference in the shape of their constraint boundaries. Both lasso and ridge regression can be interpreted as minimizing the same objective function

$$\min_{\beta_0, \beta} \left\{ \frac{1}{N} \left\| y - \beta_0 \mathbf{1}_N - X \beta \right\|_2^2 \right\}$$

but with respect to different constraints: $\| \beta \|_1 \leq t$ for lasso and $\| \beta \|_2^2 \leq t$ for ridge. The figure shows that the constraint region defined by the $\ell^1$ norm is a square rotated so that its corners lie on the axes (in general a cross-polytope), while the region defined by the $\ell^2$ norm is a circle (in general an n-sphere), which is rotationally invariant and therefore has no corners. As seen in the figure, a convex object that lies tangent to the boundary, such as the line shown, is likely to touch the $\ell^1$ ball at a corner or an edge, for which some components of $\beta$ are identically zero. In the case of an n-sphere, by contrast, the points on the boundary for which some of the components of $\beta$ are zero are not distinguished from the others, and the convex object is no more likely to contact a point at which some components of $\beta$ are zero than one for which none of them are.
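The practical consequence of this geometry can be checked empirically. The following sketch, with simulated data and illustrative penalty strengths, fits lasso and ridge on the same design and counts coefficients that are exactly zero.

```python
# A quick empirical illustration of the geometric argument (a sketch, not part
# of the derivation above): on the same data, lasso produces exact zeros while
# ridge merely shrinks coefficients.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(3)
N, p = 100, 30
X = rng.normal(size=(N, p))
beta_true = np.concatenate([np.array([2.0, -3.0, 1.5]), np.zeros(p - 3)])
y = X @ beta_true + rng.normal(size=N)

lasso = Lasso(alpha=0.2).fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)
print("lasso zeros:", np.sum(lasso.coef_ == 0.0))  # typically many exact zeros
print("ridge zeros:", np.sum(ridge.coef_ == 0.0))  # typically none
```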

Making λ easier to interpret with an accuracy-simplicity tradeoff

The lasso can be rescaled so that it becomes easy to anticipate and influence the degree of shrinkage associated with a given value of $\lambda$. It is assumed that $X$ is standardized with z-scores and that $y$ is centered. Let $\beta_0$ represent the hypothesized regression coefficients and let $b_{\text{OLS}}$ refer to the data-optimized ordinary least squares solutions. We can then define the Lagrangian as a tradeoff between the in-sample accuracy of the data-optimized solutions and the simplicity of sticking to the hypothesized values. This results in

$$\min_{\beta} \left\{ \frac{(y - X\beta)'(y - X\beta)}{(y - X\beta_0)'(y - X\beta_0)} + 2\lambda \sum_{j=1}^{p} \frac{|\beta_j - \beta_{0,j}|}{q_j} \right\},$$

where $q_j$ is specified below and the "prime" symbol stands for transpose. The first fraction represents relative accuracy, the second fraction relative simplicity, and $\lambda$ balances between the two.
Given a single regressor, relative simplicity can be defined by specifying $q_j$ as $|b_{\text{OLS}} - \beta_0|$, which is the maximum amount of deviation from $\beta_0$ when $\lambda = 0$. Assuming that $\beta_0 = 0$, the solution path can then be defined in terms of the coefficient of determination $R^2$:

$$b_{\ell_1} = \begin{cases} (1 - \lambda / R^2)\, b_{\text{OLS}} & \text{if } \lambda \leq R^2, \\ 0 & \text{if } \lambda > R^2. \end{cases}$$

If $\lambda = 0$, the ordinary least squares solution is used. The hypothesized value $\beta_0 = 0$ is selected if $\lambda$ is bigger than $R^2$. Furthermore, if $\lambda = R^2 / 2$, the estimate is shrunk halfway from the OLS solution towards the hypothesized value. In other words, $\lambda / R^2$ measures in percentage terms the minimal amount of influence of the hypothesized value relative to the data-optimized OLS solution.
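As a concrete illustration, the sketch below evaluates the single-regressor solution path as reconstructed above; the values of $b_{\text{OLS}}$ and $R^2$ are made up for the example.

```python
# A sketch of the single-regressor solution path in terms of R^2, under the
# hypothesis beta_0 = 0: OLS shrunk by (1 - lambda/R^2), set to zero once
# lambda exceeds R^2. All values are illustrative.
def rescaled_lasso_path(b_ols, r_squared, lam):
    return max(0.0, 1.0 - lam / r_squared) * b_ols

b_ols, r_squared = 2.0, 0.6   # illustrative values
for lam in [0.0, 0.3, 0.6, 0.9]:
    print(lam, rescaled_lasso_path(b_ols, r_squared, lam))
# lambda = 0 reproduces OLS; any lambda >= R^2 selects the hypothesized value 0.
```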
If an $\ell^2$-norm is used instead to penalize deviations from zero given a single regressor, the solution path is given by

$$b_{\ell_2} = \left( 1 + \frac{\lambda}{R^2} \right)^{-1} b_{\text{OLS}}.$$

Like $b_{\ell_1}$, $b_{\ell_2}$ moves in the direction of the point $(\lambda = R^2, b = 0)$ when $\lambda$ is close to zero; but unlike $b_{\ell_1}$, the influence of $R^2$ diminishes in $b_{\ell_2}$ if $\lambda$ increases.
Given multiple regressors, the moment that a parameter is activated (i.e. allowed to deviate from its hypothesized value) is also determined by the regressor's contribution to $R^2$ accuracy. First,

$$R^2 = 1 - \frac{(y - X b_{\text{OLS}})'(y - X b_{\text{OLS}})}{(y - X \beta_0)'(y - X \beta_0)}.$$

An $R^2$ of 75% means that in-sample accuracy improves by 75% if the unrestricted OLS solutions are used instead of the hypothesized $\beta_0$ values. The individual contribution of deviating from each hypothesis can be computed with the $p \times p$ matrix

$$R^{\otimes} = (X' \tilde{y})(X' \tilde{y})' (X'X)^{-1} (\tilde{y}' \tilde{y})^{-1},$$

where $\tilde{y} = y - X \beta_0$. If $R^{\otimes}$ is computed at $b = b_{\text{OLS}}$, then its diagonal elements sum to $R^2$. The diagonal values of $R^{\otimes}$ may be smaller than 0 or, less often, larger than 1. If regressors are uncorrelated, then the $j$-th diagonal element of $R^{\otimes}$ simply corresponds to the $r^2$ value between $x_j$ and $y$.
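A small numerical check of this decomposition, using the expression for $R^{\otimes}$ as given above and simulated data, is sketched below; the trace of $R^{\otimes}$ matches $R^2$ (all names and values are illustrative).

```python
# A sketch verifying that the diagonal elements of R-otimes sum to R^2, under
# the hypothesized coefficients beta_hyp = 0. Data are simulated for the example.
import numpy as np

rng = np.random.default_rng(4)
N, p = 500, 4
X = rng.normal(size=(N, p))
beta_true = np.array([1.0, -2.0, 0.0, 0.5])
y = X @ beta_true + rng.normal(size=N)

beta_hyp = np.zeros(p)            # hypothesized coefficients
y_tilde = y - X @ beta_hyp

XtX_inv = np.linalg.inv(X.T @ X)
Xty = X.T @ y_tilde
R_otimes = np.outer(Xty, Xty) @ XtX_inv / (y_tilde @ y_tilde)

b_ols = XtX_inv @ (X.T @ y)
R2 = 1 - ((y - X @ b_ols) @ (y - X @ b_ols)) / (y_tilde @ y_tilde)
print(np.trace(R_otimes), R2)     # the two values agree (up to rounding)
```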
A rescaled version of the adaptive lasso can be obtained by setting $q_{\text{adaptive lasso},j} = |b_{\text{OLS},j} - \beta_{0,j}|$. If regressors are uncorrelated, the moment that the $j$-th parameter is activated is given by the $j$-th diagonal element of $R^{\otimes}$. Assuming for convenience that $\beta_0$ is a vector of zeros,

$$b_j = \begin{cases} (1 - \lambda / R^{\otimes}_{jj})\, b_{\text{OLS},j} & \text{if } \lambda \leq R^{\otimes}_{jj}, \\ 0 & \text{if } \lambda > R^{\otimes}_{jj}. \end{cases}$$

That is, if regressors are uncorrelated, $\lambda$ again specifies the minimal influence of $\beta_0$. Even when regressors are correlated, the first time that a regression parameter is activated occurs when $\lambda$ is equal to the highest diagonal element of $R^{\otimes}$.
These results can be compared to a rescaled version of the lasso by defining $q_{\text{lasso},j} = \frac{1}{p} \sum_{l} |b_{\text{OLS},l} - \beta_{0,l}|$, which is the average absolute deviation of $b_{\text{OLS}}$ from $\beta_0$. Assuming that regressors are uncorrelated, the moment of activation of the $j$-th regressor is then given by

$$\lambda_j^{\text{lasso}} = \frac{1}{p} \sqrt{R^{\otimes}_{jj}} \sum_{l=1}^{p} \sqrt{R^{\otimes}_{ll}}.$$

For $p = 1$, the moment of activation is again given by $\lambda = R^2$. If $\beta_0$ is a vector of zeros and a subset of $p_B$ relevant parameters are equally responsible for a perfect fit of $R^2 = 1$, then the adaptive lasso activates this subset at a $\lambda$ value of $\frac{1}{p_B}$, whereas the moment of activation of a relevant regressor under this rescaled lasso equals $\frac{1}{p}$. In other words, the inclusion of irrelevant regressors delays the moment that relevant regressors are activated by this rescaled lasso. The adaptive lasso and the lasso are special cases of a '1ASTc' estimator, which only groups parameters together if the absolute correlation among regressors is larger than a user-specified value.