Perron–Frobenius theorem


In matrix theory, the Perron–Frobenius theorem, proved by Oskar Perron (1907) and Georg Frobenius (1912), asserts that a real square matrix with positive entries has a unique eigenvalue of largest magnitude, that this eigenvalue is real, and that the corresponding eigenvector can be chosen to have strictly positive components. It also asserts a similar statement for certain classes of nonnegative matrices. This theorem has important applications to probability theory (ergodicity of Markov chains); to the theory of dynamical systems (subshifts of finite type); to economics (Okishio's theorem, Hawkins–Simon condition); to demography (Leslie population age distribution model); to social networks (DeGroot learning process); to Internet search engines (PageRank); and even to the ranking of American football teams. The first to discuss the ordering of players within tournaments using Perron–Frobenius eigenvectors was Edmund Landau.

Statement

Let positive and non-negative respectively describe matrices with exclusively positive real numbers as elements and matrices with exclusively non-negative real numbers as elements. The eigenvalues of a real square matrix A are complex numbers that make up the spectrum of the matrix. The exponential growth rate of the matrix powers Ak as k → ∞ is controlled by the eigenvalue of A with the largest absolute value. The Perron–Frobenius theorem describes the properties of the leading eigenvalue and of the corresponding eigenvectors when A is a non-negative real square matrix. Early results were due to Oskar Perron (1907) and concerned positive matrices. Later, Georg Frobenius (1912) found their extension to certain classes of non-negative matrices.
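
As a concrete illustration of the growth-rate statement above, the following short numpy sketch (the 2 × 2 matrix and the exponents are arbitrary choices made here for the example) computes the leading eigenvalue of a positive matrix and shows that the entries of Ak grow at that rate.

```python
import numpy as np

# Arbitrary positive matrix, chosen only for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Eigenvalue of largest absolute value and its eigenvector.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(np.abs(eigvals))
r = eigvals[k].real              # real and positive when A is positive
v = np.abs(eigvecs[:, k].real)   # can be chosen with strictly positive components

# Entries of A^m grow like r^m: the m-th root of the largest entry tends to r.
for m in (1, 5, 20, 50):
    print(m, np.linalg.matrix_power(A, m).max() ** (1.0 / m))
print("leading eigenvalue:", r, " positive eigenvector:", v / v.sum())
```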

Positive matrices

Let A = (aij) be an n × n positive matrix: aij > 0 for 1 ≤ i, j ≤ n. Then the following statements hold.
  1. There is a positive real number r, called the Perron root or the Perron–Frobenius eigenvalue, such that r is an eigenvalue of A and any other eigenvalue λ (possibly complex) is strictly smaller than r in absolute value, |λ| < r. Thus, the spectral radius ρ(A) is equal to r. If the matrix coefficients are algebraic, this implies that the eigenvalue is a Perron number.
  2. The Perron–Frobenius eigenvalue is simple: r is a simple root of the characteristic polynomial of A. Consequently, the eigenspace associated to r is one-dimensional.
  3. There exists an eigenvector v = (v1, v2, …, vn)T of A with eigenvalue r such that all components of v are positive: Av = rv, vi > 0 for 1 ≤ i ≤ n. It is known in the literature under many variations as the Perron vector, Perron eigenvector, Perron–Frobenius eigenvector, leading eigenvector, principal eigenvector or dominant eigenvector.
  4. There are no other positive eigenvectors except positive multiples of v, i.e., all other eigenvectors must have at least one negative or non-real component.
  5. limk→∞ Ak/rk = vwT, where the left and right eigenvectors for A are normalized so that wTv = 1. Moreover, the matrix vwT is the projection onto the eigenspace corresponding to r. This projection is called the Perron projection.
  6. Collatz–Wielandt formula: for all non-negative non-zero vectors x, let f(x) be the minimum value of [Ax]i / xi taken over all those i such that xi ≠ 0. Then f is a real-valued function whose maximum over all non-negative non-zero vectors x is the Perron–Frobenius eigenvalue (this fact, together with fact 5, is checked numerically in the sketch following this list).
  7. A "Min-max" Collatz–Wielandt formula takes a form similar to the one above: for all strictly positive vectors x, let g be the maximum value of i / xi taken over i. Then g is a real valued function whose minimum over all strictly positive vectors x is the Perron–Frobenius eigenvalue.
  8. Birkhoff–Varga formula: Let x and y be strictly positive vectors. Then r = supx>0 infy>0 (yTAx)/(yTx) = infx>0 supy>0 (yTAx)/(yTx).
  9. Donsker–Varadhan–Friedland formula: Let p be a probability vector and x a strictly positive vector. Then r = supp infx>0 ∑i pi [Ax]i / xi.
  10. Fiedler formula:
  11. The Perron–Frobenius eigenvalue satisfies the inequalities mini ∑j aij ≤ r ≤ maxi ∑j aij.
All of these properties extend beyond strictly positive matrices to primitive matrices. Facts 1–7 can be found in Meyer, claims 8.2.11–15 (page 667) and exercises 8.2.5, 7 and 9 (pages 668–669).
The left and right eigenvectors w and v are sometimes normalized so that the sum of their components is equal to 1; in this case, they are sometimes called stochastic eigenvectors. Often they are normalized so that the right eigenvector v sums to one, while wTv = 1.
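
Facts 5 and 6 lend themselves to a direct numerical check. The sketch below (the particular 2 × 2 positive matrix is an arbitrary choice for the example) computes the Perron root together with the right and left Perron vectors, verifies that Ak/rk approaches the Perron projection vwT, and evaluates the Collatz–Wielandt function.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 1.0]])      # arbitrary positive matrix for this sketch

def perron(M):
    """Eigenvalue of largest modulus and an all-positive eigenvector."""
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(np.abs(vals))
    return vals[k].real, np.abs(vecs[:, k].real)

r, v = perron(A)        # Perron root and right Perron vector
_, w = perron(A.T)      # left Perron vector of A (a right eigenvector of A^T)
w = w / (w @ v)         # normalize so that w^T v = 1

# Fact 5: A^k / r^k converges to the Perron projection v w^T.
print(np.allclose(np.linalg.matrix_power(A, 50) / r**50, np.outer(v, w)))  # True

# Fact 6 (Collatz-Wielandt): f(x) = min_i [Ax]_i / x_i attains its maximum r at x = v.
f = lambda x: np.min(A @ x / x)
print(f(v), r)                            # both equal the Perron root
print(f(np.array([1.0, 2.0])) <= r)       # any other choice of x gives a value <= r
```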

Non-negative matrices

There is an extension to matrices with non-negative entries. Since any non-negative matrix can be obtained as a limit of positive matrices, one obtains the existence of an eigenvector with non-negative components; the corresponding eigenvalue will be non-negative and greater than or equal, in absolute value, to all other eigenvalues. However, for the example A = [[0, 1], [1, 0]], the maximum eigenvalue r = 1 has the same absolute value as the other eigenvalue −1; while for A = [[0, 1], [0, 0]], the maximum eigenvalue is r = 0, which is not a simple root of the characteristic polynomial, and the corresponding eigenvector (1, 0) is not strictly positive.
However, Frobenius found a special subclass of non-negative matrices, the irreducible matrices, for which a non-trivial generalization is possible. For such a matrix, although the eigenvalues attaining the maximal absolute value might not be unique, their structure is under control: they have the form ωr, where r is a real strictly positive eigenvalue, and ω ranges over the complex h-th roots of 1 for some positive integer h called the period of the matrix.
The eigenvector corresponding to r has strictly positive components. Also, all such eigenvalues of maximal absolute value are simple roots of the characteristic polynomial. Further properties are described below.
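
The two matrices mentioned above can be examined directly; a brief numpy check, added here purely as an illustration:

```python
import numpy as np

# Irreducible but periodic: two eigenvalues of maximal modulus, namely +1 and -1.
print(np.linalg.eigvals(np.array([[0.0, 1.0],
                                  [1.0, 0.0]])))

# Reducible (nilpotent) case: r = 0 is a double root of the characteristic
# polynomial and the only eigenvector direction, (1, 0), is not strictly positive.
B = np.array([[0.0, 1.0],
              [0.0, 0.0]])
print(np.linalg.eigvals(B))        # both eigenvalues are 0
print(np.linalg.eig(B)[1])         # columns are the (numerical) eigenvectors
```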

Classification of matrices

Let A be an n × n square matrix over a field F.
The matrix A is irreducible if any of the following equivalent properties holds.
Definition 1: A does not have non-trivial invariant coordinate subspaces.
Here a non-trivial coordinate subspace means a linear subspace spanned by any nonempty proper subset of standard basis vectors of Fn. More explicitly, for any linear subspace spanned by standard basis vectors ei1, …, eik, 0 < k < n, its image under the action of A is not contained in the same subspace.
Definition 2: A cannot be conjugated into block upper triangular form by a permutation matrix P:

PAP−1 ≠ [[E, F], [0, G]],

where E and G are non-trivial square matrices.
Definition 3: One can associate with a matrix A a certain directed graph GA. It has n vertices labeled 1,...,n, and there is an edge from vertex i to vertex j precisely when aij ≠ 0. Then the matrix A is irreducible if and only if its associated graph GA is strongly connected.
If F is the field of real or complex numbers, then we also have the following condition.
Definition 4: The group representation of (R, +) on Rn, or of (C, +) on Cn, given by t ↦ exp(tA) has no non-trivial invariant coordinate subspaces.
A matrix is reducible if it is not irreducible.
A real matrix A is primitive if it is non-negative and its mth power is positive for some natural number m.
Let A be real and non-negative. Fix an index i and define the period of index i to be the greatest common divisor of all natural numbers m such that (Am)ii > 0. When A is irreducible, the period of every index is the same and is called the period of A. In fact, when A is irreducible, the period can be defined as the greatest common divisor of the lengths of the closed directed paths in GA. The period is also called the index of imprimitivity or the order of cyclicity. If the period is 1, A is aperiodic. It can be proved that primitive matrices are the same as irreducible aperiodic non-negative matrices.
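
These definitions translate into small computational tests. The sketch below is an illustration only; it relies on two standard facts: a non-negative n × n matrix A is irreducible exactly when (I + A)n−1 has all entries positive, and a primitive n × n matrix already satisfies Am > 0 for some m ≤ (n − 1)² + 1 (Wielandt's bound), so it suffices to check powers up to that bound.

```python
import numpy as np
from math import gcd
from functools import reduce

def is_irreducible(A):
    """Non-negative A is irreducible iff (I + A)^(n-1) has all entries positive."""
    n = A.shape[0]
    return bool(np.all(np.linalg.matrix_power(np.eye(n) + A, n - 1) > 0))

def period(A, i=0, max_len=None):
    """gcd of all m with (A^m)_{ii} > 0; the same for every i when A is irreducible."""
    n = A.shape[0]
    max_len = max_len or 2 * n * n          # heuristic search horizon for this sketch
    lengths = [m for m in range(1, max_len + 1)
               if np.linalg.matrix_power(A, m)[i, i] > 0]
    return reduce(gcd, lengths) if lengths else 0

def is_primitive(A):
    """Primitive iff A^m > 0 for some m; Wielandt's bound (n-1)^2 + 1 suffices."""
    n = A.shape[0]
    return any(np.all(np.linalg.matrix_power(A, m) > 0)
               for m in range(1, (n - 1) ** 2 + 2))

C = np.array([[0.0, 1.0],
              [1.0, 0.0]])                  # irreducible, period 2, hence not primitive
print(is_irreducible(C), period(C), is_primitive(C))   # True 2 False
```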
All statements of the Perron–Frobenius theorem for positive matrices remain true for primitive matrices. The same statements also hold for a non-negative irreducible matrix, except that it may possess several eigenvalues whose absolute value is equal to its spectral radius, so the statements need to be correspondingly modified. In fact the number of such eigenvalues is equal to the period.
Results for non-negative matrices were first obtained by Frobenius in 1912.

Perron–Frobenius theorem for irreducible non-negative matrices

Let A be an irreducible non-negative n × n matrix with period h and spectral radius ρ(A) = r.
Then the following statements hold.
  • The number r is a positive real number and it is an eigenvalue of the matrix A. It is called the Perron–Frobenius eigenvalue.
  • The Perron–Frobenius eigenvalue r is simple. Both right and left eigenspaces associated with r are one-dimensional.
  • A has both right and left eigenvectors, respectively v and w, with eigenvalue r and whose components are all positive. Moreover, the only eigenvectors whose components are all positive are those associated with the eigenvalue r.
  • The matrix A has exactly h complex eigenvalues with absolute value r. Each of them is a simple root of the characteristic polynomial and is the product of r with an h-th root of unity.
  • Let ω = 2π/h. Then the matrix A is similar to e^{iω}A; consequently the spectrum of A is invariant under multiplication by e^{iω} (i.e. under rotation of the complex plane by the angle ω).
  • If h > 1, then there exists a permutation matrix P such that

PAP−1 =
[[0, A1, 0, …, 0],
 [0, 0, A2, …, 0],
 ⋮
 [0, 0, 0, …, Ah−1],
 [Ah, 0, 0, …, 0]],

where 0 denotes a zero matrix and the blocks along the main diagonal are square matrices.
  • Collatz–Wielandt formula: for all non-negative non-zero vectors x, let f(x) be the minimum value of [Ax]i / xi taken over all those i such that xi ≠ 0. Then f is a real-valued function whose maximum is the Perron–Frobenius eigenvalue.
  • The Perron–Frobenius eigenvalue satisfies the inequalities mini ∑j aij ≤ r ≤ maxi ∑j aij.
The example A = [[0, 0, 1], [0, 0, 1], [1, 1, 0]] shows that the zero matrices along the diagonal may be of different sizes, the blocks Aj need not be square, and h need not divide n.
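
For instance, the eigenvalues of this example are easy to check numerically; the short sketch below (added as an illustration) shows that there are exactly h = 2 eigenvalues of maximal modulus, namely ±√2, and that the spectrum is invariant under multiplication by e^{2πi/h} = −1.

```python
import numpy as np

# The 3 x 3 example above: irreducible with period h = 2 and spectral radius sqrt(2).
A = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

vals = np.linalg.eigvals(A)
print(np.sort_complex(vals))        # approximately -1.414, 0, 1.414
print(max(abs(vals)))               # spectral radius r = sqrt(2)

# The spectrum is invariant under multiplication by exp(2*pi*i/h) = -1.
print(np.allclose(np.sort_complex(vals), np.sort_complex(-vals)))   # True
```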