Rate of convergence
In mathematical analysis, particularly numerical analysis, the rate of convergence and order of convergence of a sequence that converges to a limit are any of several characterizations of how quickly that sequence approaches its limit. These are broadly divided into rates and orders of convergence that describe how quickly a sequence further approaches its limit once it is already close to it, called asymptotic rates and orders of convergence, and those that describe how quickly sequences approach their limits from starting points that are not necessarily close to their limits, called non-asymptotic rates and orders of convergence.
Asymptotic behavior is particularly useful for deciding when to stop a sequence of numerical computations, for instance once a target precision has been reached with an iterative root-finding algorithm, but pre-asymptotic behavior is often crucial for determining whether to begin a sequence of computations at all, since it may be impossible or impractical to ever reach a target precision with a poorly chosen approach. Asymptotic rates and orders of convergence are the focus of this article.
In practical numerical computations, asymptotic rates and orders of convergence follow two common conventions for two types of sequences: the first for sequences of iterations of an iterative numerical method and the second for sequences of successively more accurate numerical discretizations of a target. In formal mathematics, rates of convergence and orders of convergence are often described comparatively using asymptotic notation commonly called "big O notation," which can be used to encompass both of the prior conventions; this is an application of asymptotic analysis.
For iterative methods, a sequence $(x_k)$ that converges to $L$ is said to have asymptotic order of convergence $q \geq 1$ and asymptotic rate of convergence $\mu$ if

$$\lim_{k \to \infty} \frac{\left|x_{k+1} - L\right|}{\left|x_k - L\right|^{q}} = \mu.$$
Where methodological precision is required, these rates and orders of convergence are known specifically as the rates and orders of Q-convergence, short for quotient-convergence, since the limit in question is a quotient of error terms. The rate of convergence may also be called the asymptotic error constant, and some authors will use rate where this article uses order. Series acceleration methods are techniques for improving the rate of convergence of the sequence of partial sums of a series and possibly its order of convergence, also.
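The rate of Q-convergence can be estimated in practice from the error quotients of the iterates. The following sketch illustrates this for the fixed-point iteration $x_{k+1} = \cos(x_k)$; the choice of iteration, its limit $L \approx 0.739085$, and its rate $\mu = |\sin(L)| \approx 0.674$ are illustrative assumptions, not taken from the text above.

```python
import math

# Illustrative assumption: the fixed-point iteration x_{k+1} = cos(x_k)
# converges Q-linearly (order q = 1) to L ~ 0.739085 with rate
# mu = |sin(L)| ~ 0.674.
L = 0.7390851332151607  # fixed point of cos(x), precomputed

x = 1.0
errors = []
for _ in range(25):
    x = math.cos(x)
    errors.append(abs(x - L))

# For Q-linear convergence, the error quotients |x_{k+1} - L| / |x_k - L|
# approach the rate mu.
quotients = [errors[k + 1] / errors[k] for k in range(len(errors) - 1)]
print(quotients[-1])  # approaches sin(L) ~ 0.674
```

The same quotient diagnostic, with the denominator raised to a trial power $q$, distinguishes orders of convergence higher than 1.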
Similar concepts are used for sequences of discretizations. For instance, ideally the solution of a differential equation discretized via a regular grid will converge to the solution of the continuous equation as the grid spacing goes to zero, and if so the asymptotic rate and order of that convergence are important properties of the gridding method. A sequence of approximate grid solutions $(y_n)$ of some problem that converges to a true solution $S$ with a corresponding sequence of regular grid spacings $(h_n)$ that converge to 0 is said to have asymptotic order of convergence $q$ and asymptotic rate of convergence $\mu$ if

$$\lim_{n \to \infty} \frac{\left|y_n - S\right|}{h_n^{q}} = \mu,$$
where the absolute value symbols stand for a metric for the space of solutions such as the uniform norm. Similar definitions also apply for non-grid discretization schemes such as the polygon meshes of a finite element method or the basis sets in computational chemistry: in general, the appropriate definition of the asymptotic rate will involve the asymptotic limit of the ratio of an approximation error term above to an asymptotic order power of a discretization scale parameter below.
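In the discretization setting, the order $q$ can be estimated by refining the scale parameter and observing how fast the error shrinks. A minimal Python sketch, using finite-difference approximations of a derivative as an assumed example of a discretization scheme:

```python
import math

# Two finite-difference discretizations of f'(x): forward differences are
# first-order accurate (q = 1), central differences second-order (q = 2).
def forward(f, x, h):
    return (f(x + h) - f(x)) / h

def central(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

exact = math.e  # derivative of exp(x) at x = 1
spacings = [2.0 ** -n for n in range(3, 10)]

observed = {}
for scheme in (forward, central):
    errs = [abs(scheme(math.exp, 1.0, h) - exact) for h in spacings]
    # If err(h) ~ mu * h^q, then halving h divides the error by 2^q,
    # so q ~ log2(err(h) / err(h/2)).
    observed[scheme.__name__] = math.log2(errs[-2] / errs[-1])

print(observed)  # roughly {'forward': 1.0, 'central': 2.0}
```

The same halving diagnostic applies to grid solutions of differential equations, with the grid spacing playing the role of $h$.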
In general, comparatively, one sequence $(a_k)$ that converges to a limit $L$ is said to asymptotically converge more quickly than another sequence $(b_k)$ that converges to a limit $M$ if

$$\lim_{k \to \infty} \frac{\left|a_k - L\right|}{\left|b_k - M\right|} = 0,$$
and the two are said to asymptotically converge with the same order of convergence if the limit is any positive finite value. The two are said to be asymptotically equivalent if the limit is equal to one. These comparative definitions of rate and order of asymptotic convergence are fundamental in asymptotic analysis and find wide application in mathematical analysis as a whole, including numerical analysis, real analysis, complex analysis, and functional analysis.
Asymptotic rates of convergence for iterative methods
Definitions
Q-convergence
Suppose that the sequence $(x_k)$ of iterates of an iterative method converges to the limit number $L$ as $k \to \infty$. The sequence is said to converge with order $q$ to $L$ and with a rate of convergence $\mu$ if the limit of quotients of absolute differences of sequential iterates $x_{k+1}, x_k$ from their limit $L$ satisfies

$$\lim_{k \to \infty} \frac{\left|x_{k+1} - L\right|}{\left|x_k - L\right|^{q}} = \mu$$

for some positive constant $\mu > 0$ if $q > 1$, and $\mu \in (0, 1)$ if $q = 1$. Other more technical rate definitions are needed if the sequence converges but $\lim_{k \to \infty} \frac{\left|x_{k+1} - L\right|}{\left|x_k - L\right|} = 1$ or the limit does not exist. This definition is technically called Q-convergence, short for quotient-convergence, and the rates and orders are called rates and orders of Q-convergence when that technical specificity is needed. R-convergence, below, is an appropriate alternative when this limit does not exist.
Sequences with larger orders converge more quickly than those with smaller order, and those with smaller rates converge more quickly than those with larger rates for a given order. This "smaller rates converge more quickly" behavior among sequences of the same order is standard but it can be counterintuitive. Therefore it is also common to define $-\log_{10} \mu$ as the rate; this is the "number of extra decimals of precision per iterate" for sequences that converge with order 1.
Integer values of the order $q$ are common and are given common names. Convergence with order $q = 1$ and $\mu \in (0, 1)$ is called linear convergence and the sequence is said to converge linearly to $L$. Convergence with $q = 2$ and any $\mu$ is called quadratic convergence and the sequence is said to converge quadratically. Convergence with $q = 3$ and any $\mu$ is called cubic convergence. However, it is not necessary that $q$ be an integer. For example, the secant method, when converging to a regular, simple root, has an order of the golden ratio $\varphi \approx 1.618$.
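The golden-ratio order of the secant method can be observed numerically. In this sketch the target function $f(x) = x^2 - 2$ and the starting points are illustrative assumptions; the empirical order estimate $\log(e_{k+1}/e_k) / \log(e_k/e_{k-1})$ tends to $\varphi$ as the iteration proceeds.

```python
import math

def secant(f, x0, x1, steps):
    """Return the iterates of the secant method starting from x0, x1."""
    xs = [x0, x1]
    for _ in range(steps):
        f0, f1 = f(xs[-2]), f(xs[-1])
        xs.append(xs[-1] - f1 * (xs[-1] - xs[-2]) / (f1 - f0))
    return xs

# Illustrative assumption: solve x^2 - 2 = 0, a regular simple root at sqrt(2).
root = math.sqrt(2)
errs = [abs(x - root) for x in secant(lambda x: x * x - 2, 1.0, 2.0, 5)]

# Empirical order estimate from three consecutive errors; it approaches
# the golden ratio (1 + sqrt(5)) / 2 ~ 1.618 as k grows.
q = math.log(errs[-1] / errs[-2]) / math.log(errs[-2] / errs[-3])
print(q)
```

Only a handful of steps are taken because the errors of a superlinearly convergent iteration reach machine precision very quickly, after which the quotients are dominated by rounding noise.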
The common names for integer orders of convergence connect to asymptotic big O notation, where the convergence of the quotient implies $\left|x_{k+1} - L\right| = O\!\left(\left|x_k - L\right|^{q}\right)$. These are linear, quadratic, and cubic polynomial expressions when $q$ is 1, 2, and 3, respectively. More precisely, the limits imply that the leading order error is exactly $\mu \left|x_k - L\right|^{q}$, which can be expressed using asymptotic small o notation as

$$\left|x_{k+1} - L\right| = \mu \left|x_k - L\right|^{q} + o\!\left(\left|x_k - L\right|^{q}\right).$$
In general, when $q > 1$ for a sequence, or for any sequence that satisfies

$$\lim_{k \to \infty} \frac{\left|x_{k+1} - L\right|}{\left|x_k - L\right|} = 0,$$

those sequences are said to converge superlinearly. A sequence is said to converge sublinearly if it converges and

$$\lim_{k \to \infty} \frac{\left|x_{k+1} - L\right|}{\left|x_k - L\right|} = 1.$$

Importantly, it is incorrect to say that these sublinear-order sequences converge linearly with an asymptotic rate of convergence of 1. A sequence $(x_k)$ converges logarithmically to $L$ if the sequence converges sublinearly and also

$$\lim_{k \to \infty} \frac{\left|x_{k+1} - x_k\right|}{\left|x_k - x_{k-1}\right|} = 1.$$
R-convergence
The definitions of Q-convergence rates have the shortcoming that they do not naturally capture the convergence behavior of sequences that do converge, but do not converge with an asymptotically constant rate with every step, so that the Q-convergence limit does not exist. One class of examples is the staggered geometric progressions that get closer to their limits only every other step or every several steps, for instance the example $(b_k) = \left(\tfrac{1}{4^{\lfloor k/2 \rfloor}}\right)$ detailed below. The defining Q-linear convergence limits do not exist for this sequence because one subsequence of error quotients starting from odd steps converges to 1 and another subsequence of quotients starting from even steps converges to 1/4. When two subsequences of a sequence converge to different limits, the sequence does not itself converge to a limit.

In cases like these, a closely related but more technical definition of rate of convergence called R-convergence is more appropriate. The "R-" prefix stands for "root." A sequence $(x_k)$ that converges to $L$ is said to converge at least R-linearly if there exists an error-bounding sequence $(\varepsilon_k)$ such that $\left|x_k - L\right| \leq \varepsilon_k$ for all $k$ and $(\varepsilon_k)$ converges Q-linearly to zero; analogous definitions hold for R-superlinear convergence, R-sublinear convergence, R-quadratic convergence, and so on.
Any error-bounding sequence provides a lower bound on the rate and order of R-convergence, and the greatest lower bound gives the exact rate and order of R-convergence. As for Q-convergence, sequences with larger orders converge more quickly and those with smaller rates converge more quickly for a given order, so these greatest-rate-lower-bound error-upper-bound sequences are those that have the greatest possible $q$ and the smallest possible $\mu$ given that $q$.
For the example $(b_k)$ given above, the tight bounding sequence $\varepsilon_k = \tfrac{2}{2^k}$ converges Q-linearly with rate 1/2, so $(b_k)$ converges R-linearly with rate 1/2. Generally, for any staggered geometric progression $\left(r^{\lfloor k/m \rfloor}\right)$, the sequence will not converge Q-linearly but will converge R-linearly with rate $\sqrt[m]{|r|}$. These examples demonstrate why the "R" in R-linear convergence is short for "root."
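The root characterization can be checked directly; a minimal Python sketch for the staggered progression $b_k = (1/4)^{\lfloor k/2 \rfloor}$ discussed above:

```python
# Staggered geometric progression b_k = (1/4)^floor(k/2) from the text.
b = [0.25 ** (k // 2) for k in range(41)]

# The Q-linear limit fails: successive error quotients b_{k+1} / b_k
# alternate between 1 (no progress) and 1/4 instead of converging.
quotients = [b[k + 1] / b[k] for k in range(40)]

# The R-linear rate appears as a root: the k-th root of the k-th error
# tends to 1/2, hence "R" for "root".
kth_root = b[40] ** (1.0 / 40)

print(sorted(set(quotients)))  # [0.25, 1.0]
print(kth_root)                # approximately 0.5
```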
Examples
The geometric progression $(a_k) = \left(1, \tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{8}, \ldots, \tfrac{1}{2^k}, \ldots\right)$ converges to $L = 0$. Plugging the sequence into the definition of Q-linear convergence shows that

$$\lim_{k \to \infty} \frac{\left|1/2^{k+1} - 0\right|}{\left|1/2^{k} - 0\right|} = \frac{1}{2}.$$

Thus $(a_k)$ converges Q-linearly with a convergence rate of $\mu = \tfrac{1}{2}$; see the first plot of the figure below.
More generally, for any initial value $a$ in the real numbers and a real number common ratio $r$ between -1 and 1, a geometric progression $(a r^k)$ converges linearly with rate $|r|$, and the sequence of partial sums of a geometric series $\left(\sum_{n=0}^{k} a r^n\right)$ also converges linearly with rate $|r|$. The same holds also for geometric progressions and geometric series parameterized by any complex numbers $a \in \mathbb{C}$ and $r \in \mathbb{C}$ with $|r| < 1$.
The staggered geometric progression $(b_k) = \left(1, 1, \tfrac{1}{4}, \tfrac{1}{4}, \tfrac{1}{16}, \tfrac{1}{16}, \ldots, \tfrac{1}{4^{\lfloor k/2 \rfloor}}, \ldots\right)$, using the floor function $\lfloor \cdot \rfloor$ that gives the largest integer that is less than or equal to its argument, converges R-linearly to 0 with rate 1/2, but it does not converge Q-linearly; see the second plot of the figure below. The defining Q-linear convergence limits do not exist for this sequence because one subsequence of error quotients starting from odd steps converges to 1 and another subsequence of quotients starting from even steps converges to 1/4. When two subsequences of a sequence converge to different limits, the sequence does not itself converge to a limit. Generally, for any staggered geometric progression $\left(r^{\lfloor k/m \rfloor}\right)$, the sequence will not converge Q-linearly but will converge R-linearly with rate $\sqrt[m]{|r|}$; these examples demonstrate why the "R" in R-linear convergence is short for "root."
The sequence $(c_k) = \left(\tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{16}, \tfrac{1}{256}, \ldots, \tfrac{1}{2^{2^k}}, \ldots\right)$ converges to zero Q-superlinearly. In fact, it is quadratically convergent with a quadratic convergence rate of 1. It is shown in the third plot of the figure below.
Finally, the sequence $(d_k) = \left(1, \tfrac{1}{2}, \tfrac{1}{3}, \tfrac{1}{4}, \ldots, \tfrac{1}{k+1}, \ldots\right)$ converges to zero Q-sublinearly and logarithmically; its convergence is shown as the fourth plot of the figure below.
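The diagnostic quotients behind the superlinear and sublinear examples can be checked in a few lines; the sequences $c_k = 1/2^{2^k}$ and $d_k = 1/(k+1)$ are taken from the examples above.

```python
# c_k = 1 / 2^(2^k): quadratic convergence to 0 with rate 1, since
# |c_{k+1}| / |c_k|^2 equals exactly 1 at every step.
c = [1.0 / 2 ** (2 ** k) for k in range(5)]
quad_quotients = [c[k + 1] / (c[k] * c[k]) for k in range(4)]

# d_k = 1 / (k + 1): sublinear convergence, since the error quotients
# d_{k+1} / d_k tend to 1, and logarithmic convergence, since the
# quotients of successive differences also tend to 1.
d = [1.0 / (k + 1) for k in range(2000)]
sub_quotient = d[-1] / d[-2]
diffs = [abs(d[k + 1] - d[k]) for k in range(len(d) - 1)]
log_quotient = diffs[-1] / diffs[-2]

print(quad_quotients)              # [1.0, 1.0, 1.0, 1.0]
print(sub_quotient, log_quotient)  # both approach 1
```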
[Figure: Log-linear plots of the example sequences $(a_k)$, $(b_k)$, $(c_k)$, and $(d_k)$, which exemplify linear, linear, superlinear, and sublinear rates of convergence, respectively.]