Stochastic gradient descent
Stochastic gradient descent is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient (calculated from the entire data set) by an estimate thereof (calculated from a randomly selected subset of the data). Especially in high-dimensional optimization problems this reduces the very high computational burden, achieving faster iterations in exchange for a lower convergence rate.
The basic idea behind stochastic approximation can be traced back to the Robbins–Monro algorithm of the 1950s. Today, stochastic gradient descent has become an important optimization method in machine learning.
Background
Both statistical estimation and machine learning consider the problem of minimizing an objective function that has the form of a sum

$$Q(w) = \frac{1}{n}\sum_{i=1}^n Q_i(w),$$

where the parameter $w$ that minimizes $Q(w)$ is to be estimated. Each summand function $Q_i$ is typically associated with the $i$-th observation in the data set.
In classical statistics, sum-minimization problems arise in least squares and in maximum-likelihood estimation. The general class of estimators that arise as minimizers of sums are called M-estimators. However, in statistics, it has long been recognized that requiring even local minimization is too restrictive for some problems of maximum-likelihood estimation. Therefore, contemporary statistical theorists often consider stationary points of the likelihood function.
The sum-minimization problem also arises for empirical risk minimization. There, $Q_i(w)$ is the value of the loss function at the $i$-th example, and $Q(w)$ is the empirical risk.
When used to minimize the above function, a standard (batch) gradient descent method would perform the following iterations:

$$w := w - \eta\,\nabla Q(w) = w - \frac{\eta}{n}\sum_{i=1}^n \nabla Q_i(w).$$

The step size is denoted by $\eta$ (sometimes called the learning rate in machine learning) and here "$:=$" denotes the update of a variable in the algorithm.
In many cases, the summand functions have a simple form that enables inexpensive evaluations of the sum-function and the sum gradient. For example, in statistics, one-parameter exponential families allow economical function-evaluations and gradient-evaluations.
However, in other cases, evaluating the sum-gradient may require expensive evaluations of the gradients from all summand functions. When the training set is enormous and no simple formulas exist, evaluating the sums of gradients becomes very expensive, because evaluating the gradient requires evaluating all the summand functions' gradients. To economize on the computational cost at every iteration, stochastic gradient descent samples a subset of summand functions at every step. This is very effective in the case of large-scale machine learning problems.
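To make this trade-off concrete, the following sketch (with a hypothetical least-squares summand $Q_i(w) = (x_i^\top w - y_i)^2$ and synthetic data; all names and values are illustrative) contrasts the full sum-gradient, which touches every observation, with a single-sample stochastic estimate:

```python
import numpy as np

# Synthetic data; the per-example objective Q_i(w) = (x_i . w - y_i)^2 is only illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                     # n = 1000 observations, 5 features
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=1000)
w = np.zeros(5)

def grad_Qi(w, i):
    """Gradient of the i-th summand Q_i(w) = (x_i . w - y_i)^2."""
    return 2.0 * (X[i] @ w - y[i]) * X[i]

# Full (batch) gradient: one pass over all n summands per update.
full_grad = np.mean([grad_Qi(w, i) for i in range(len(y))], axis=0)

# Stochastic estimate: a single randomly sampled summand (or a small mini-batch).
i = rng.integers(len(y))
stochastic_grad = grad_Qi(w, i)                    # unbiased estimate of full_grad
```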
Iterative method
In stochastic gradient descent, the true gradient of $Q(w)$ is approximated by the gradient at a single sample:

$$w := w - \eta\,\nabla Q_i(w).$$

As the algorithm sweeps through the training set, it performs the above update for each training sample. Several passes can be made over the training set until the algorithm converges. If this is done, the data can be shuffled for each pass to prevent cycles. Typical implementations may use an adaptive learning rate so that the algorithm converges.
In pseudocode, stochastic gradient descent can be presented as follows (a Python sketch of this loop is given after the list):
- Choose an initial vector of parameters $w$ and learning rate $\eta$.
- Repeat until an approximate minimum is obtained:
- * Randomly shuffle samples in the training set.
- * For $i = 1, 2, \ldots, n$, do:
- ** $w := w - \eta\,\nabla Q_i(w)$.
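A minimal Python sketch of the loop above, assuming a user-supplied function grad_Qi(w, i) that returns the gradient of the i-th summand (the function name and default values are illustrative, not a fixed API):

```python
import numpy as np

def sgd(grad_Qi, w0, n, eta=0.01, epochs=10, seed=0):
    """Plain stochastic gradient descent following the pseudocode above.

    grad_Qi(w, i) must return the gradient of the i-th summand Q_i at w.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(epochs):               # repeat until an approximate minimum is reached
        for i in rng.permutation(n):      # randomly shuffle samples in the training set
            w -= eta * grad_Qi(w, i)      # w := w - eta * grad Q_i(w)
    return w
```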
The convergence of stochastic gradient descent has been analyzed using the theories of convex minimization and of stochastic approximation. Briefly, when the learning rates $\eta$ decrease with an appropriate rate, and subject to relatively mild assumptions, stochastic gradient descent converges almost surely to a global minimum when the objective function is convex or pseudoconvex, and otherwise converges almost surely to a local minimum. This is in fact a consequence of the Robbins–Siegmund theorem.
Linear regression
Suppose we want to fit a straight line $\hat{y} = w_1 + w_2 x$ to a training set with observations $(x_1, x_2, \ldots, x_n)$ and corresponding estimated responses $(\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_n)$ using least squares. The objective function to be minimized is

$$Q(w) = \sum_{i=1}^n Q_i(w) = \sum_{i=1}^n \left(\hat{y}_i - y_i\right)^2 = \sum_{i=1}^n \left(w_1 + w_2 x_i - y_i\right)^2.$$

The last line in the above pseudocode for this specific problem will become:

$$\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} := \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} - \eta \begin{bmatrix} 2\left(w_1 + w_2 x_i - y_i\right) \\ 2 x_i \left(w_1 + w_2 x_i - y_i\right) \end{bmatrix}.$$
Note that in each iteration or update step, the gradient is only evaluated at a single point $x_i$. This is the key difference between stochastic gradient descent and batched gradient descent.
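As a concrete illustration of these per-sample updates, here is a short Python sketch for the straight-line fit above, on synthetic data (the learning rate, pass count, and data are illustrative choices):

```python
import numpy as np

# Synthetic data for the straight-line fit y ≈ w1 + w2 * x.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y_obs = 1.5 + 0.8 * x + rng.normal(scale=0.5, size=200)

w1, w2 = 0.0, 0.0
eta = 0.002
for _ in range(50):                        # several passes over the training set
    for i in rng.permutation(len(x)):      # shuffle each pass to prevent cycles
        r = w1 + w2 * x[i] - y_obs[i]      # residual of the single example i
        w1 -= eta * 2 * r                  # gradient of (w1 + w2*x_i - y_i)^2 w.r.t. w1
        w2 -= eta * 2 * r * x[i]           # gradient w.r.t. w2
# w1, w2 now approximate the least-squares intercept and slope
```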
In general, given a linear regression problem with $n$ observations and $d$ features, stochastic gradient descent behaves differently when $n < d$ (the overparameterized case) and when $n \ge d$ (the underparameterized case). In the overparameterized case, stochastic gradient descent converges to a solution that interpolates the training data; that is, SGD converges to the interpolation solution with minimum distance from the starting point $w_0$. This is true even when the learning rate remains constant. In the underparameterized case, SGD does not converge if the learning rate remains constant.
History
In 1951, Herbert Robbins and Sutton Monro introduced the earliest stochastic approximation methods, preceding stochastic gradient descent. Building on this work one year later, Jack Kiefer and Jacob Wolfowitz published an optimization algorithm very close to stochastic gradient descent, using central differences as an approximation of the gradient. Later in the 1950s, Frank Rosenblatt used SGD to optimize his perceptron model, demonstrating the first applicability of stochastic gradient descent to neural networks. Backpropagation was first described in 1986, with stochastic gradient descent being used to efficiently optimize parameters across neural networks with multiple hidden layers. Soon after, another improvement was developed: mini-batch gradient descent, where small batches of data are substituted for single samples. In 1997, the practical performance benefits from vectorization achievable with such small batches were first explored, paving the way for efficient optimization in machine learning. As of 2023, this mini-batch approach remains the norm for training neural networks, balancing the benefits of stochastic gradient descent with those of batch gradient descent.
By the 1980s, momentum had already been introduced, and it was added to SGD optimization techniques in 1986. However, these optimization techniques assumed constant hyperparameters, i.e. a fixed learning rate and momentum parameter. In the 2010s, adaptive approaches to applying SGD with a per-parameter learning rate were introduced with AdaGrad in 2011 and RMSprop in 2012. In 2014, Adam was published, combining the adaptive approach of RMSprop with momentum; many improvements and branches of Adam were then developed, such as AdamW and Adamax.
Within machine learning, approaches to optimization in 2023 are dominated by Adam-derived optimizers; TensorFlow and PyTorch, by far the most popular machine learning libraries, as of 2023 largely include only Adam-derived optimizers, along with predecessors of Adam such as RMSprop and classic SGD. PyTorch also partially supports limited-memory BFGS, a line-search method, but only for single-device setups without parameter groups.
Notable applications
Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including support vector machines, logistic regression and graphical models. When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks. Its use has also been reported in the geophysics community, specifically for applications of full waveform inversion. Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for training linear regression models, originally under the name ADALINE.
Another stochastic gradient descent algorithm is the least mean squares adaptive filter.
Extensions and variants
Many improvements on the basic stochastic gradient descent algorithm have been proposed and used. In particular, in machine learning, the need to set a learning rate has been recognized as problematic. Setting this parameter too high can cause the algorithm to diverge; setting it too low makes it slow to converge. A conceptually simple extension of stochastic gradient descent makes the learning rate a decreasing function $\eta_t$ of the iteration number $t$, giving a learning rate schedule, so that the first iterations cause large changes in the parameters, while the later ones do only fine-tuning. Such schedules have been known since the work of MacQueen on $k$-means clustering. Practical guidance on choosing the step size in several variants of SGD is given by Spall.
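For illustration, here is a minimal sketch of SGD with such a decreasing schedule, using $\eta_t = \eta_0 / (1 + \text{decay} \cdot t)$ (the schedule form and constants are illustrative; many other schedules are used in practice):

```python
import numpy as np

def sgd_with_schedule(grad_Qi, w0, n, epochs=10, eta0=0.5, decay=0.01, seed=0):
    """SGD with a decreasing learning rate eta_t = eta0 / (1 + decay * t).

    Early iterations take large steps; later iterations only fine-tune.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    t = 0                                     # global iteration counter
    for _ in range(epochs):
        for i in rng.permutation(n):
            eta_t = eta0 / (1.0 + decay * t)  # learning rate schedule
            w -= eta_t * grad_Qi(w, i)
            t += 1
    return w
```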
Implicit updates (ISGD)
As mentioned earlier, classical stochastic gradient descent is generally sensitive to the learning rate $\eta$. Fast convergence requires large learning rates but this may induce numerical instability. The problem can be largely solved by considering implicit updates whereby the stochastic gradient is evaluated at the next iterate rather than the current one:

$$w^{\mathrm{new}} := w^{\mathrm{old}} - \eta\,\nabla Q_i(w^{\mathrm{new}}).$$

This equation is implicit since $w^{\mathrm{new}}$ appears on both sides of the equation. It is a stochastic form of the proximal gradient method since the update can also be written as:

$$w^{\mathrm{new}} := \arg\min_w \left\{ Q_i(w) + \frac{1}{2\eta}\left\| w - w^{\mathrm{old}} \right\|^2 \right\}.$$
As an example, consider least squares with features $x_1, \ldots, x_n \in \mathbb{R}^p$ and observations $y_1, \ldots, y_n \in \mathbb{R}$. We wish to solve:

$$\min_w \sum_{j=1}^n \left(y_j - x_j^\top w\right)^2,$$

where $x_j^\top w = x_{j1} w_1 + x_{j2} w_2 + \cdots + x_{jp} w_p$ indicates the inner product. Note that $x$ could have "1" as the first element to include an intercept. Classical stochastic gradient descent proceeds as follows:

$$w^{\mathrm{new}} = w^{\mathrm{old}} + \eta \left(y_i - x_i^\top w^{\mathrm{old}}\right) x_i,$$

where $i$ is uniformly sampled between 1 and $n$. Although theoretical convergence of this procedure happens under relatively mild assumptions, in practice the procedure can be quite unstable. In particular, when $\eta$ is misspecified so that $I - \eta x_i x_i^\top$ has large absolute eigenvalues with high probability, the procedure may diverge numerically within a few iterations. In contrast, implicit stochastic gradient descent (ISGD) can be solved in closed form as:

$$w^{\mathrm{new}} = w^{\mathrm{old}} + \frac{\eta}{1 + \eta \left\| x_i \right\|^2} \left(y_i - x_i^\top w^{\mathrm{old}}\right) x_i.$$
This procedure will remain numerically stable for virtually all $\eta$ as the learning rate is now normalized. Such a comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between the least mean squares filter and the normalized least mean squares filter.
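The two updates can be compared directly in code. The sketch below (synthetic data; the deliberately large learning rate is chosen only to expose the instability) implements one classical SGD step and one implicit, closed-form ISGD step for least squares:

```python
import numpy as np

def sgd_step_classical(w, x_i, y_i, eta):
    """Classical SGD step for least squares; may diverge if eta is too large."""
    return w + eta * (y_i - x_i @ w) * x_i

def sgd_step_implicit(w, x_i, y_i, eta):
    """Implicit SGD (ISGD) step in closed form; normalized by 1 + eta * ||x_i||^2."""
    return w + (eta / (1.0 + eta * (x_i @ x_i))) * (y_i - x_i @ w) * x_i

# Synthetic least-squares problem with a deliberately large learning rate.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = X @ rng.normal(size=10)
eta = 5.0
w_classical = np.zeros(10)
w_implicit = np.zeros(10)
for i in rng.integers(500, size=2000):
    w_classical = sgd_step_classical(w_classical, X[i], y[i], eta)  # typically overflows
    w_implicit = sgd_step_implicit(w_implicit, X[i], y[i], eta)     # remains stable
```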
Even though a closed-form solution for ISGD is only possible in least squares, the procedure can be efficiently implemented in a wide range of models. Specifically, suppose that $Q_i(w)$ depends on $w$ only through a linear combination with features $x_i$, so that we can write $\nabla_w Q_i(w) = -q(x_i^\top w)\, x_i$, where $q(\cdot) \in \mathbb{R}$ may depend on $x_i, y_i$ as well but not on $w$ except through $x_i^\top w$. Least squares obeys this rule, and so do logistic regression and most generalized linear models. For instance, in least squares, $q(x_i^\top w) = y_i - x_i^\top w$, and in logistic regression $q(x_i^\top w) = y_i - S(x_i^\top w)$, where $S(u) = e^u/(1 + e^u)$ is the logistic function. In Poisson regression, $q(x_i^\top w) = y_i - e^{x_i^\top w}$, and so on.
In such settings, ISGD is simply implemented as follows. Let $f(\xi) = \eta\, q\left(x_i^\top w^{\mathrm{old}} + \xi \left\| x_i \right\|^2\right)$, where $\xi$ is a scalar.
Then, ISGD is equivalent to:

$$w^{\mathrm{new}} = w^{\mathrm{old}} + \xi^* x_i, \quad \text{where } \xi^* = f(\xi^*).$$

The scaling factor $\xi^* \in \mathbb{R}$ can be found through the bisection method since, in most regular models such as the aforementioned generalized linear models, the function $q(\cdot)$ is decreasing, and thus the search bounds for $\xi^*$ are $[\min(0, f(0)),\ \max(0, f(0))]$.
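A minimal sketch of this implementation, assuming a user-supplied function q(u, y) (logistic regression is shown as an example); the bisection tolerance and iteration cap are illustrative:

```python
import numpy as np

def isgd_step(w, x_i, y_i, eta, q, tol=1e-10, max_iter=100):
    """One implicit SGD step for a model whose summand gradient is -q(x_i.w) * x_i.

    Finds the scaling factor xi* solving xi = f(xi), where
    f(xi) = eta * q(x_i.w + xi * ||x_i||^2), by bisection on the bounds
    [min(0, f(0)), max(0, f(0))] (valid when q is decreasing).
    """
    norm_sq = float(x_i @ x_i)
    dot = float(x_i @ w)
    f = lambda xi: eta * q(dot + xi * norm_sq, y_i)
    lo, hi = min(0.0, f(0.0)), max(0.0, f(0.0))
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        g = mid - f(mid)           # g is increasing in xi when q is decreasing
        if abs(g) < tol:
            break
        if g > 0:
            hi = mid               # root lies to the left of mid
        else:
            lo = mid               # root lies to the right of mid
    return w + 0.5 * (lo + hi) * x_i

# Example q for logistic regression: q(u) = y - S(u), with S the logistic function.
def q_logistic(u, y):
    return y - 1.0 / (1.0 + np.exp(-u))
```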