Divergent series


In mathematics, a divergent series is an infinite series that is not convergent, meaning that the infinite sequence of the partial sums of the series does not have a finite limit.
If a series converges, the individual terms of the series must approach zero. Thus any series in which the individual terms do not approach zero diverges. However, convergence is a stronger condition: not all series whose terms approach zero converge. A counterexample is the harmonic series
\[ 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots = \sum_{n=1}^{\infty} \frac{1}{n}. \]
The divergence of the harmonic series was proven by the medieval mathematician Nicole Oresme.
In specialized mathematical contexts, values can be objectively assigned to certain series whose sequences of partial sums diverge, in order to make meaning of the divergence of the series. A summability method or summation method is a partial function from the set of series to values. For example, Cesàro summation assigns Grandi's divergent series
\[ 1 - 1 + 1 - 1 + \cdots \]
the value \(\tfrac{1}{2}\). Cesàro summation is an averaging method, in that it relies on the arithmetic mean of the sequence of partial sums. Other methods involve analytic continuations of related series. In physics, there are a wide variety of summability methods; these are discussed in greater detail in the article on regularization.
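As a small numerical illustration (a sketch added here, not part of the article's text), the following Python snippet computes the partial sums of Grandi's series and their running averages: the partial sums oscillate between 1 and 0 and never settle, while the Cesàro means approach 1/2.

```python
# Illustrative sketch: partial sums of Grandi's series 1 - 1 + 1 - 1 + ...
# oscillate between 1 and 0, but their running averages (the Cesàro means)
# settle down to 1/2.

partial_sums = []
total = 0
for k in range(1000):
    total += (-1) ** k          # terms of Grandi's series: 1, -1, 1, -1, ...
    partial_sums.append(total)

averages = [sum(partial_sums[:n + 1]) / (n + 1) for n in range(len(partial_sums))]

print(partial_sums[-4:])   # ... 1, 0, 1, 0 — no limit
print(averages[-1])        # 0.5
```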

History

Before the 19th century, divergent series were widely used by Leonhard Euler and others, but often led to confusing and contradictory results. A major problem was Euler's idea that any divergent series should have a natural sum, without first defining what is meant by the sum of a divergent series. Augustin-Louis Cauchy eventually gave a rigorous definition of the sum of a series, and for some time after this, divergent series were mostly excluded from mathematics. They reappeared in 1886 with Henri Poincaré's work on asymptotic series. In 1890, Ernesto Cesàro realized that one could give a rigorous definition of the sum of some divergent series, and defined Cesàro summation. In the years after Cesàro's paper, several other mathematicians gave other definitions of the sum of a divergent series, although these are not always compatible: different definitions can give different answers for the sum of the same divergent series; so, when talking about the sum of a divergent series, it is necessary to specify which summation method one is using.

Theorems on methods for summing divergent series

A summability method M is regular if it agrees with the actual limit on all convergent series. Such a result is called an Abelian theorem for M, from the prototypical Abel's theorem. More subtle are partial converse results, called Tauberian theorems, from a prototype proved by Alfred Tauber. Here partial converse means that if M sums the series Σ, and some side-condition holds, then Σ was convergent in the first place; without any side-condition such a result would say that M only summed convergent series.
The function giving the sum of a convergent series is linear, and it follows from the Hahn–Banach theorem that it may be extended to a summation method summing any series with bounded partial sums. This is called the Banach limit. This fact is not very useful in practice, since there are many such extensions, inconsistent with each other, and also since proving such operators exist requires invoking the axiom of choice or its equivalents, such as Zorn's lemma. They are therefore nonconstructive.
The subject of divergent series, as a domain of mathematical analysis, is primarily concerned with explicit and natural techniques such as Abel summation, Cesàro summation and Borel summation, and their relationships. The advent of Wiener's tauberian theorem marked an epoch in the subject, introducing unexpected connections to Banach algebra methods in Fourier analysis.
Summation of divergent series is also related to extrapolation methods and sequence transformations as numerical techniques. Examples of such techniques are Padé approximants, Levin-type sequence transformations, and order-dependent mappings related to renormalization techniques for large-order perturbation theory in quantum mechanics.

Properties of summation methods

Summation methods usually concentrate on the sequence of partial sums of the series. While this sequence does not converge, we may often find that when we take an average of larger and larger numbers of initial terms of the sequence, the average converges, and we can use this average instead of a limit to evaluate the sum of the series. A summation method can be seen as a function from a set of sequences of partial sums to values. If A is any summation method assigning values to a set of sequences, we may mechanically translate this to a series-summation method AΣ that assigns the same values to the corresponding series. There are certain properties it is desirable for these methods to possess if they are to arrive at values corresponding to limits and sums, respectively.
  • Regularity. A summation method is regular if, whenever the sequence s converges to x, then \(A(s) = x\). Equivalently, the corresponding series-summation method evaluates \(A_\Sigma(a) = x\).
  • Linearity. A is linear if it is a linear functional on the sequences where it is defined, so that \(A(k\,r + s) = k\,A(r) + A(s)\) for sequences r, s and a real or complex scalar k. Since the terms \(a_{n+1} = s_{n+1} - s_n\) of the series a are linear functionals on the sequence s and vice versa, this is equivalent to \(A_\Sigma\) being a linear functional on the terms of the series.
  • Stability. If s is a sequence starting from s0 and s′ is the sequence obtained by omitting the first value and subtracting it from the rest, so that \(s'_n = s_{n+1} - s_0\), then A(s) is defined if and only if A(s′) is defined, and \(A(s) = s_0 + A(s')\). Equivalently, whenever \(a'_n = a_{n+1}\) for all n, then \(A_\Sigma(a) = a_0 + A_\Sigma(a')\). Another way of stating this is that the shift rule must be valid for the series that are summable by this method.
The third condition is less important, and some significant methods, such as Borel summation, do not possess it.
One can also give a weaker alternative to the last condition.
  • Finite re-indexability. If a and a′ are two series such that there exists a bijection \(f : \mathbb{N} \to \mathbb{N}\) such that \(a_i = a'_{f(i)}\) for all i, and if there exists some \(N\) such that \(a_i = a'_i\) for all \(i > N\), then \(A_\Sigma(a) = A_\Sigma(a')\). (In other words, a′ is the same series as a, with only finitely many terms re-indexed.)
A desirable property for two distinct summation methods A and B to share is consistency: A and B are consistent if for every sequence s to which both assign a value, \(A(s) = B(s)\). If two methods are consistent, and one sums more series than the other, the one summing more series is stronger.
There are powerful numerical summation methods that are neither regular nor linear, for instance nonlinear sequence transformations like Levin-type sequence transformations and Padé approximants, as well as the order-dependent mappings of perturbative series based on renormalization techniques.
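As a hedged sketch of this kind of numerical method (Aitken's Δ² process, the simplest Shanks-type nonlinear sequence transformation and a close relative of the Padé approach mentioned above, is used here as a stand-in), the following Python snippet applies the transformation to the partial sums of the divergent geometric series 1 + 2 + 4 + 8 + ⋯ and recovers the value 1/(1 − 2) = −1, the same value that the axiomatic manipulation below assigns.

```python
def aitken(s):
    """Aitken's delta-squared transformation of a sequence of partial sums."""
    out = []
    for n in range(len(s) - 2):
        denom = s[n + 2] - 2 * s[n + 1] + s[n]
        if denom == 0:
            out.append(s[n + 2])
        else:
            out.append(s[n + 2] - (s[n + 2] - s[n + 1]) ** 2 / denom)
    return out

# Partial sums of the divergent geometric series 1 + 2 + 4 + 8 + ...
partial = []
total = 0
for k in range(10):
    total += 2 ** k
    partial.append(total)

print(partial)          # 1, 3, 7, 15, ... (diverges)
print(aitken(partial))  # every entry is -1.0, i.e. 1/(1 - 2)
```

Because the partial sums of a geometric series have the exact form \(s_n = S + c\,r^n\), Aitken's formula eliminates the geometric error term exactly, which is why the transformed values are all equal to −1 rather than merely approaching it.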
Taking regularity, linearity and stability as axioms, it is possible to sum many divergent series by elementary algebraic manipulations. This partly explains why many different summation methods give the same answer for certain series.
For instance, whenever \(r \neq 1\), the geometric series
\[ G(r,c) = \sum_{k=0}^{\infty} c\,r^{k} = c + \sum_{k=0}^{\infty} c\,r^{k+1} = c + r\,G(r,c), \qquad \text{so } G(r,c) = \frac{c}{1-r}, \]
can be evaluated regardless of convergence. More rigorously, any summation method that possesses these properties and which assigns a finite value to the geometric series must assign this value. However, when r is a real number larger than 1, the partial sums increase without bound, and averaging methods assign a limit of infinity.
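As a brief worked instance of these manipulations (the special case \(c = 1\), \(r = -1\) of the geometric series above, not spelled out in the text), write s for the value assigned to Grandi's series. Stability allows the first term to be split off, and linearity lets the sign be pulled out of the remaining series:
\[ s = 1 - 1 + 1 - 1 + \cdots = 1 - (1 - 1 + 1 - \cdots) = 1 - s, \qquad \text{hence } s = \tfrac{1}{2}, \]
in agreement with the Cesàro value quoted in the introduction.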

Classical summation methods

The two classical summation methods for series, ordinary convergence and absolute convergence, define the sum as a limit of certain partial sums. These are included only for completeness; strictly speaking they are not true summation methods for divergent series since, by definition, a series is divergent only if these methods do not work. Most but not all summation methods for divergent series extend these methods to a larger class of sequences.

Sum of a series

Cauchy's classical definition of the sum of a series defines the sum to be the limit of the sequence of partial sums. This is the default definition of convergence of a series.
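For reference, this can be written out symbolically: with \(s_n = a_0 + a_1 + \cdots + a_n\) denoting the n-th partial sum,
\[ \sum_{k=0}^{\infty} a_k = \lim_{n \to \infty} s_n, \]
provided this limit exists and is finite.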

Absolute convergence

Absolute convergence defines the sum of a sequence of numbers to be the limit of the net of all finite partial sums \(a_{k_1} + \cdots + a_{k_n}\), if it exists. It does not depend on the order of the elements of the sequence, and a classical theorem says that this sum exists if and only if the series of absolute values is convergent in the standard sense.

Nørlund means

Suppose pn is a sequence of positive terms, starting from p0. Suppose also that
\[ \frac{p_n}{p_0 + p_1 + \cdots + p_n} \to 0. \]
If now we transform a sequence s by using p to give weighted means, setting
\[ t_n = \frac{p_n s_0 + p_{n-1} s_1 + \cdots + p_0 s_n}{p_0 + p_1 + \cdots + p_n}, \]
then the limit of tn as n goes to infinity is an average called the Nørlund mean \(N_p(s)\).
The Nørlund mean is regular, linear, and stable. Moreover, any two Nørlund means are consistent.
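A minimal computational sketch (assuming the partial sums and the weight sequence are supplied as Python lists, with the weights satisfying the condition above) shows both the definition and the consistency claim in action: two different weight choices, applied to the partial sums of Grandi's series, approach the same value 1/2.

```python
# Illustrative sketch: Nørlund means of a sequence of partial sums s for a
# given weight sequence p of positive terms.

def norlund_means(s, p):
    """t_n = (p_n s_0 + p_{n-1} s_1 + ... + p_0 s_n) / (p_0 + ... + p_n)."""
    means = []
    for n in range(len(s)):
        num = sum(p[n - j] * s[j] for j in range(n + 1))
        den = sum(p[:n + 1])
        means.append(num / den)
    return means

# Consistency in action: two different weight choices applied to the partial
# sums 1, 0, 1, 0, ... of Grandi's series both approach the same value, 1/2.
s = [1, 0] * 100
print(norlund_means(s, [1] * len(s))[-1])                # constant weights, ≈ 0.5
print(norlund_means(s, list(range(1, len(s) + 1)))[-1])  # p_n = n + 1,      ≈ 0.5
```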

Cesàro summation

The most significant of the Nørlund means are the Cesàro sums. Here, if we define the sequence \(p^k\) by
\[ p_n^{k} = \binom{n + k - 1}{k - 1}, \]
then the Cesàro sum \(C_k\) is defined by \(C_k(s) = N_{(p^k)}(s)\). Cesàro sums are Nørlund means if \(k \geq 0\), and hence are regular, linear, stable, and consistent. \(C_0\) is ordinary summation, and \(C_1\) is ordinary Cesàro summation. Cesàro sums have the property that if \(h > k\), then \(C_h\) is stronger than \(C_k\).
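As a hedged sketch (assuming the binomial weights above, partial sums supplied as a Python list, and \(k \geq 1\)), the following snippet checks two standard values: Grandi's series is (C, 1)-summable to 1/2, while 1 − 2 + 3 − 4 + ⋯ requires (C, 2), which assigns it the value 1/4, illustrating that a larger k gives a stronger method.

```python
# Illustrative sketch: the Cesàro sum (C, k) computed as the Nørlund mean
# with weights p_n = C(n + k - 1, k - 1), for k >= 1.

from math import comb

def cesaro_means(s, k):
    """Nørlund means of the partial sums s with weights p_n = C(n+k-1, k-1)."""
    p = [comb(n + k - 1, k - 1) for n in range(len(s))]
    means = []
    for m in range(len(s)):
        num = sum(p[m - j] * s[j] for j in range(m + 1))
        den = sum(p[:m + 1])
        means.append(num / den)
    return means

# Grandi's series 1 - 1 + 1 - ... is (C, 1)-summable to 1/2, while
# 1 - 2 + 3 - 4 + ... needs (C, 2), which assigns it the value 1/4.
grandi_sums = [sum((-1) ** i for i in range(n + 1)) for n in range(200)]
alt_sums = [sum((-1) ** i * (i + 1) for i in range(n + 1)) for n in range(200)]

print(cesaro_means(grandi_sums, 1)[-1])  # ≈ 0.5
print(cesaro_means(alt_sums, 2)[-1])     # ≈ 0.25
```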