Eigenvalue perturbation


In mathematics, an eigenvalue perturbation problem is that of finding the eigenvectors and eigenvalues of a system that is perturbed from one with known eigenvectors and eigenvalues. This is useful for studying how sensitive the original system's eigenvectors and eigenvalues are to changes in the system.
This type of analysis was popularized by Lord Rayleigh, in his investigation of harmonic vibrations of a string perturbed by small inhomogeneities.
The derivations in this article are essentially self-contained and can be found in many texts on numerical linear algebra or numerical functional analysis.
This article is focused on the case of the perturbation of a simple eigenvalue, as opposed to an eigenvalue with multiplicity greater than one.

Motivation for generalized eigenvalues

Many scientific fields use eigenvalues to obtain solutions. Generalized eigenvalue problems are less widespread but are key in the study of vibrations. They are useful when the Galerkin or Rayleigh-Ritz method is used to find approximate solutions of partial differential equations modeling vibrations of structures such as strings and plates; the paper of Courant is fundamental. The finite element method is a widespread particular case.
In classical mechanics, generalized eigenvalues may crop up when inspecting vibrations of multiple degrees of freedom systems close to equilibrium. In this case the kinetic energy provides the mass matrix and the potential strain energy provides the rigidity matrix; for further details, see the first section of the article of Weinstein.
With both methods, the following system of differential equations (a matrix differential equation) is derived:
$M \ddot{x} + B \dot{x} + K x = 0,$
with $M$ the mass matrix, $B$ the damping matrix and $K$ the rigidity matrix. If the damping effect is neglected, $B = 0$, and a solution of the form $x = e^{i \omega t} u$ is assumed, then $\omega^2$ and $u$ are obtained as solutions of the generalized eigenvalue problem
$K u = \omega^2 M u.$
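As a small numerical sketch (not part of the original text; the two-degree-of-freedom mass and rigidity matrices below are made-up values), such a generalized eigenvalue problem $K u = \omega^2 M u$ can be solved with NumPy by reducing it to a standard symmetric problem through a Cholesky factorization of the mass matrix:

```python
import numpy as np

# Hypothetical 2-degree-of-freedom system: made-up mass and rigidity matrices.
M = np.diag([2.0, 1.0])                       # mass matrix (positive definite)
K = np.array([[3.0, -1.0], [-1.0, 1.0]])      # rigidity matrix (symmetric)

# Reduce K u = lam M u to a standard symmetric problem using M = L L^T:
# (L^-1 K L^-T) y = lam y, with u = L^-T y.
L = np.linalg.cholesky(M)
A = np.linalg.solve(L, np.linalg.solve(L, K).T)   # = L^-1 K L^-T
lam, Y = np.linalg.eigh(A)                        # lam = omega^2, ascending
U = np.linalg.solve(L.T, Y)                       # generalized eigenvectors

# Each column satisfies K u = lam M u; U is M-orthonormal: U^T M U = I.
residual = K @ U - (M @ U) * lam
```

Because `Y` is orthonormal, the recovered eigenvectors automatically satisfy the normalization $u_j^\top M u_i = \delta_{ij}$ used later in this article.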

Setting of perturbation for a generalized eigenvalue problem

Suppose the solutions to the generalized eigenvalue problem are known to be
$K_0 x_{0i} = \lambda_{0i} M_0 x_{0i}, \qquad i = 1, \dots, N,$
where $K_0$ and $M_0$ are matrices. That is, the eigenvalues $\lambda_{0i}$ and eigenvectors $x_{0i}$ are known for the pair $(K_0, M_0)$. An important note is that the eigenvalues are required to be distinct.
In order to perturb the matrices, one must find the eigenvalues and eigenvectors of
$K x_i = \lambda_i M x_i,$
where
$K = K_0 + \delta K, \qquad M = M_0 + \delta M,$
with the perturbations $\delta K$ and $\delta M$ much smaller than $K_0$ and $M_0$ respectively. Then the new eigenvalues and eigenvectors are expected to be similar to the original ones, plus small perturbations:
$\lambda_i = \lambda_{0i} + \delta \lambda_i, \qquad x_i = x_{0i} + \delta x_i.$

Steps

Assume that the matrices are symmetric and positive definite, and that the eigenvectors are scaled such that
$x_{0j}^\top M_0 x_{0i} = \delta_{ij},$
where $\delta_{ij}$ is the Kronecker delta.
Now the equation to be solved is
$K x_i = \lambda_i M x_i.$
In this article, the study is restricted to first order perturbation.

First order expansion of the equation

Substituting $K = K_0 + \delta K$, $M = M_0 + \delta M$, $\lambda_i = \lambda_{0i} + \delta \lambda_i$ and $x_i = x_{0i} + \delta x_i$ results in
$(K_0 + \delta K)(x_{0i} + \delta x_i) = (\lambda_{0i} + \delta \lambda_i)(M_0 + \delta M)(x_{0i} + \delta x_i),$
which expands to
$K_0 x_{0i} + \delta K\, x_{0i} + K_0\, \delta x_i + \delta K\, \delta x_i = \lambda_{0i} M_0 x_{0i} + \lambda_{0i} M_0\, \delta x_i + \lambda_{0i}\, \delta M\, x_{0i} + \delta \lambda_i\, M_0 x_{0i} + \text{higher-order terms}.$
Canceling $K_0 x_{0i} = \lambda_{0i} M_0 x_{0i}$ and removing the higher-order terms (those containing products of two or more perturbations), this simplifies to
$K_0\, \delta x_i + \delta K\, x_{0i} = \lambda_{0i} M_0\, \delta x_i + \lambda_{0i}\, \delta M\, x_{0i} + \delta \lambda_i\, M_0 x_{0i}.$
As the matrices are symmetric, the unperturbed eigenvectors are $M_0$-orthogonal and so can be used as a basis for the perturbed eigenvectors. This is the same as writing
$\delta x_i = \sum_{j=1}^{N} \varepsilon_{ij}\, x_{0j},$
where the $\varepsilon_{ij}$ are small constants that are to be determined.
In the same way, substituting the perturbed quantities into the normalization condition $x_j^\top M x_i = \delta_{ij}$ and removing higher order terms gives
$x_{0j}^\top \delta M\, x_{0i} + \varepsilon_{ji} + \varepsilon_{ij} = 0.$
The derivation is then split into two paths.

First path: get first eigenvalue perturbation

Eigenvalue perturbation
One left-multiplies the first-order equation by $x_{0i}^\top$ and uses the symmetry of $K_0$ and $M_0$, so that $x_{0i}^\top K_0 = \lambda_{0i}\, x_{0i}^\top M_0$, together with the normalization $x_{0i}^\top M_0 x_{0i} = 1$; this gives
$\delta \lambda_i = x_{0i}^\top \left( \delta K - \lambda_{0i}\, \delta M \right) x_{0i},$
or
$\lambda_i = \lambda_{0i} + x_{0i}^\top \left( \delta K - \lambda_{0i}\, \delta M \right) x_{0i}.$
This is the first order perturbation of the generalized Rayleigh quotient with fixed $x_{0i}$:
$R(K, M; x_{0i}) = \frac{x_{0i}^\top K x_{0i}}{x_{0i}^\top M x_{0i}}.$
Moreover, for $M = I$, the formula should be compared with the Bauer-Fike theorem, which provides a bound for eigenvalue perturbation.
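As a sanity check (a sketch with made-up matrices and arbitrary small symmetric perturbations, not from the original text), the first-order eigenvalue formula can be compared with the exactly recomputed eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)

def gen_eig(K, M):
    # Solve K x = lam M x via Cholesky reduction; eigenvalues ascending,
    # M-orthonormal eigenvectors in the columns of the returned matrix.
    L = np.linalg.cholesky(M)
    A = np.linalg.solve(L, np.linalg.solve(L, K).T)
    lam, Y = np.linalg.eigh(A)
    return lam, np.linalg.solve(L.T, Y)

K0 = np.array([[4.0, 1.0], [1.0, 3.0]])   # made-up symmetric matrices
M0 = np.array([[2.0, 0.5], [0.5, 1.0]])
lam0, X0 = gen_eig(K0, M0)

S = rng.standard_normal((2, 2)); dK = 1e-6 * (S + S.T)   # small symmetric
S = rng.standard_normal((2, 2)); dM = 1e-6 * (S + S.T)   # perturbations

# First-order prediction: lam_i ~ lam0_i + x0i^T (dK - lam0_i dM) x0i.
predicted = lam0 + np.array(
    [X0[:, i] @ (dK - lam0[i] * dM) @ X0[:, i] for i in range(2)])
exact, _ = gen_eig(K0 + dK, M0 + dM)
```

The agreement is to second order in the perturbation size, so the discrepancy here is around $10^{-12}$.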
Eigenvector perturbation
One then left-multiplies the first-order equation by $x_{0j}^\top$ for $j \neq i$ and gets
$x_{0j}^\top K_0\, \delta x_i + x_{0j}^\top \delta K\, x_{0i} = \lambda_{0i}\, x_{0j}^\top M_0\, \delta x_i + \lambda_{0i}\, x_{0j}^\top \delta M\, x_{0i}.$
Recalling that $x_{0j}^\top K_0 = \lambda_{0j}\, x_{0j}^\top M_0$ and that $x_{0j}^\top M_0\, \delta x_i = \varepsilon_{ij}$ for $j \neq i$, one may substitute to obtain
$\lambda_{0j}\, \varepsilon_{ij} + x_{0j}^\top \delta K\, x_{0i} = \lambda_{0i}\, \varepsilon_{ij} + \lambda_{0i}\, x_{0j}^\top \delta M\, x_{0i},$
or
$(\lambda_{0i} - \lambda_{0j})\, \varepsilon_{ij} = x_{0j}^\top \left( \delta K - \lambda_{0i}\, \delta M \right) x_{0i}.$
As the eigenvalues are assumed to be simple, for $j \neq i$
$\varepsilon_{ij} = \frac{x_{0j}^\top \left( \delta K - \lambda_{0i}\, \delta M \right) x_{0i}}{\lambda_{0i} - \lambda_{0j}}.$
Moreover, the first-order normalization condition with $j = i$ yields
$\varepsilon_{ii} = -\tfrac{1}{2}\, x_{0i}^\top \delta M\, x_{0i}.$
All of the components of $\delta x_i$ have now been obtained.
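The eigenvector coefficients can be checked in the same way; the sketch below (made-up matrices, not from the original text) builds $\delta x_i = \sum_j \varepsilon_{ij}\, x_{0j}$ from the formulas above and compares it with the exactly recomputed eigenvector:

```python
import numpy as np

rng = np.random.default_rng(1)

def gen_eig(K, M):
    # Solve K x = lam M x via Cholesky reduction (M-orthonormal eigenvectors).
    L = np.linalg.cholesky(M)
    A = np.linalg.solve(L, np.linalg.solve(L, K).T)
    lam, Y = np.linalg.eigh(A)
    return lam, np.linalg.solve(L.T, Y)

K0 = np.array([[4.0, 1.0], [1.0, 3.0]])   # made-up symmetric matrices
M0 = np.array([[2.0, 0.5], [0.5, 1.0]])
lam0, X0 = gen_eig(K0, M0)

S = rng.standard_normal((2, 2)); dK = 1e-6 * (S + S.T)
S = rng.standard_normal((2, 2)); dM = 1e-6 * (S + S.T)

i = 0
dx = np.zeros(2)
for j in range(2):
    if j == i:
        eps = -0.5 * X0[:, i] @ dM @ X0[:, i]          # eps_ii
    else:
        eps = (X0[:, j] @ (dK - lam0[i] * dM) @ X0[:, i]
               / (lam0[i] - lam0[j]))                  # eps_ij, j != i
    dx += eps * X0[:, j]
predicted = X0[:, i] + dx

_, X = gen_eig(K0 + dK, M0 + dM)
exact = X[:, i] * np.sign(X[:, i] @ M0 @ predicted)    # fix sign ambiguity
```

The sign fix is needed because an eigenvector is only defined up to a scalar; the $M_0$-inner product picks the orientation closest to the prediction.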

Second path: Straightforward manipulations

Substituting $\delta x_i = \sum_{j=1}^{N} \varepsilon_{ij}\, x_{0j}$ into the first-order equation and rearranging gives
$K_0 \sum_{j=1}^{N} \varepsilon_{ij}\, x_{0j} + \delta K\, x_{0i} = \lambda_{0i} M_0 \sum_{j=1}^{N} \varepsilon_{ij}\, x_{0j} + \lambda_{0i}\, \delta M\, x_{0i} + \delta \lambda_i\, M_0 x_{0i}.$
Because the eigenvectors are $M_0$-orthogonal when $M_0$ is positive definite, the summations can be removed by left-multiplying by $x_{0i}^\top$:
$x_{0i}^\top K_0 \sum_{j=1}^{N} \varepsilon_{ij}\, x_{0j} + x_{0i}^\top \delta K\, x_{0i} = \lambda_{0i}\, \varepsilon_{ii} + \lambda_{0i}\, x_{0i}^\top \delta M\, x_{0i} + \delta \lambda_i\, x_{0i}^\top M_0 x_{0i}.$
By use of the unperturbed eigenvalue equation again,
$x_{0i}^\top K_0 \sum_{j=1}^{N} \varepsilon_{ij}\, x_{0j} = \sum_{j=1}^{N} \varepsilon_{ij}\, \lambda_{0j}\, x_{0i}^\top M_0 x_{0j} = \lambda_{0i}\, \varepsilon_{ii}.$
The two terms containing $\varepsilon_{ii}$ are therefore equal, and canceling them leaves
$x_{0i}^\top \delta K\, x_{0i} = \lambda_{0i}\, x_{0i}^\top \delta M\, x_{0i} + \delta \lambda_i\, x_{0i}^\top M_0 x_{0i}.$
Rearranging gives
$\delta \lambda_i = \frac{x_{0i}^\top \left( \delta K - \lambda_{0i}\, \delta M \right) x_{0i}}{x_{0i}^\top M_0 x_{0i}}.$
But by the normalization, this denominator is equal to 1. Thus
$\delta \lambda_i = x_{0i}^\top \left( \delta K - \lambda_{0i}\, \delta M \right) x_{0i}.$
Then, as $\lambda_{0i} \neq \lambda_{0j}$ for $i \neq j$, left-multiplying the same equation by $x_{0j}^\top$ gives
$\varepsilon_{ij} = \frac{x_{0j}^\top \left( \delta K - \lambda_{0i}\, \delta M \right) x_{0i}}{\lambda_{0i} - \lambda_{0j}}, \qquad j \neq i.$
Or, by changing the name of the indices,
$\varepsilon_{ji} = \frac{x_{0i}^\top \left( \delta K - \lambda_{0j}\, \delta M \right) x_{0j}}{\lambda_{0j} - \lambda_{0i}}, \qquad j \neq i.$
To find $\varepsilon_{ii}$, use the fact that
$x_i^\top M x_i = 1$
implies
$\varepsilon_{ii} = -\tfrac{1}{2}\, x_{0i}^\top \delta M\, x_{0i}.$

Summary of the first order perturbation result

In the case where all the matrices are Hermitian positive definite and all the eigenvalues are distinct,
$\lambda_i = \lambda_{0i} + x_{0i}^\top \left( \delta K - \lambda_{0i}\, \delta M \right) x_{0i},$
$x_i = x_{0i} \left( 1 - \tfrac{1}{2}\, x_{0i}^\top \delta M\, x_{0i} \right) + \sum_{j=1,\, j \neq i}^{N} \frac{x_{0j}^\top \left( \delta K - \lambda_{0i}\, \delta M \right) x_{0i}}{\lambda_{0i} - \lambda_{0j}}\, x_{0j},$
for infinitesimal $\delta K$ and $\delta M$.
A proof that the higher order terms may be neglected can be derived using the implicit function theorem.

Theoretical derivation

Perturbation of an implicit function.

In the next paragraph, we shall use the implicit function theorem. Notice that for a continuously differentiable function $f : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^m$ with an invertible partial Jacobian matrix $J_{f,y}(x_0, y_0)$, starting from a point $(x_0, y_0)$ solution of $f(x_0, y_0) = 0$, we get solutions of $f(x, y) = 0$ with $x$ close to $x_0$ in the form $y = g(x)$, where $g$ is a continuously differentiable function; moreover, the Jacobian matrix of $g$ is provided by the linear system
$J_{f,y}\big(x, g(x)\big)\, J_g(x) + J_{f,x}\big(x, g(x)\big) = 0.$
As soon as the hypothesis of the theorem is satisfied, the Jacobian matrix of $g$ may also be computed with a first order expansion of $f\big(x_0 + \delta x,\, y_0 + \delta y\big) = 0$: we get
$J_{f,x}(x_0, y_0)\, \delta x + J_{f,y}(x_0, y_0)\, \delta y = o\big(\|\delta x\| + \|\delta y\|\big);$
as $\delta y = J_g(x_0)\, \delta x + o(\|\delta x\|)$, this is equivalent to the previous equation.

Eigenvalue perturbation: a theoretical basis.

We use the previous paragraph with somewhat different notations suited to eigenvalue perturbation: we introduce the function
$f\big((K, M), (\lambda, x)\big) = \big(K x - \lambda M x,\; x^\top M x - 1\big),$
with $f\big((K_0, M_0), (\lambda_{0i}, x_{0i})\big) = 0$.
In order to use the implicit function theorem, we study the invertibility of the Jacobian of $f$ with respect to $(\lambda, x)$, which acts on $(\delta \lambda, \delta x)$ as
$J_{f, (\lambda, x)}(\delta \lambda, \delta x) = \big({-\delta \lambda}\, M x + (K - \lambda M)\, \delta x,\; 2\, x^\top M\, \delta x\big).$
Indeed, the solution of
$J_{f, (\lambda, x)}(\delta \lambda, \delta x) = (y, z)$
may be derived with computations similar to the derivation of the first-order expansion above.
When $\lambda$ is a simple eigenvalue, as the eigenvectors form an $M_0$-orthonormal basis, for any right-hand side we obtain one solution; therefore, the Jacobian is invertible.
The implicit function theorem then provides a continuously differentiable function $(K, M) \mapsto \big(\lambda_i(K, M),\, x_i(K, M)\big)$, hence the expansion with little-o notation:
$\lambda_i = \lambda_{0i} + \delta \lambda_i + o(\|\delta K\| + \|\delta M\|),$
$x_i = x_{0i} + \delta x_i + o(\|\delta K\| + \|\delta M\|),$
with $\delta \lambda_i$ and $\delta x_i$ given by the first-order formulas. This is the first order expansion of the perturbed eigenvalues and eigenvectors, which proves the result.
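As a small illustration of the invertibility claim (a sketch with a made-up symmetric $K$ and $M = I$, not from the original text), one can assemble the Jacobian of $f$ with respect to $(x, \lambda)$ at an eigenpair and check that it is nonsingular when the eigenvalue is simple:

```python
import numpy as np

K = np.array([[4.0, 1.0], [1.0, 3.0]])   # made-up symmetric matrix
M = np.eye(2)
lam, X = np.linalg.eigh(K)               # distinct (simple) eigenvalues
x, l = X[:, 0], lam[0]

# Jacobian of f((K,M),(lam,x)) = (K x - lam M x, x^T M x - 1)
# with respect to (x, lam), evaluated at the eigenpair (l, x).
J = np.zeros((3, 3))
J[:2, :2] = K - l * M          # derivative of K x - lam M x w.r.t. x
J[:2, 2] = -(M @ x)            # derivative of K x - lam M x w.r.t. lam
J[2, :2] = 2 * (M @ x)         # derivative of x^T M x - 1 w.r.t. x
```

Although $K - \lambda M$ itself is singular at an eigenvalue, the bordered matrix is invertible precisely because $\lambda$ is simple.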

Results of sensitivity analysis with respect to the entries of the matrices

The results

The first-order formula for $\lambda_i$ gives the derivatives of the eigenvalues with respect to the entries of the matrices:
$\frac{\partial \lambda_i}{\partial K_{(k\ell)}} = x_{0i(k)}\, x_{0i(\ell)} \left( 2 - \delta_{k\ell} \right),$
$\frac{\partial \lambda_i}{\partial M_{(k\ell)}} = -\lambda_i\, x_{0i(k)}\, x_{0i(\ell)} \left( 2 - \delta_{k\ell} \right).$
This means it is possible to efficiently do a sensitivity analysis on $\lambda_i$ as a function of changes in the entries of the matrices. (Recall that the matrices are symmetric, and so changing $K_{(k\ell)}$ will also change $K_{(\ell k)}$, hence the $\left( 2 - \delta_{k\ell} \right)$ term.)
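These derivative formulas can be checked against a central finite difference; the sketch below (not from the original text) uses a made-up symmetric $K$ with $M$ the identity, so only $K$ is varied:

```python
import numpy as np

K = np.array([[4.0, 1.0], [1.0, 3.0]])   # made-up symmetric matrix, M = I
lam, X = np.linalg.eigh(K)
i, k, l = 0, 0, 1                        # eigenvalue index and entry (k, l)

# d lam_i / d K_(kl) = x_0i(k) x_0i(l) (2 - delta_kl): the off-diagonal
# entry K_(kl) and its mirror K_(lk) change together to keep K symmetric.
predicted = X[k, i] * X[l, i] * (2 - (k == l))

h = 1e-6
dK = np.zeros_like(K)
dK[k, l] += h
dK[l, k] += h                            # symmetric change (k != l here)
finite_diff = (np.linalg.eigh(K + dK)[0][i]
               - np.linalg.eigh(K - dK)[0][i]) / (2 * h)
```

The finite difference and the analytic sensitivity agree to roughly the truncation error of the central difference.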


Eigenvalue sensitivity, a small example

A simple case is ; however, you can compute eigenvalues and eigenvectors with the help of online tools or using Sage (SageMath). You get the smallest eigenvalue and an explicit computation ; moreover, an associated eigenvector is ; it is not a unit vector, so ; we get and ; hence ; for this example, we have checked that or .

Existence of eigenvectors

Note that in the above example we assumed that both the unperturbed and the perturbed systems involved symmetric matrices, which guaranteed the existence of $N$ linearly independent eigenvectors. An eigenvalue problem involving non-symmetric matrices is not guaranteed to have linearly independent eigenvectors, though a sufficient condition is that $K$ and $M$ be simultaneously diagonalizable.
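For instance (a small illustration, not from the original text), the 2x2 Jordan block is non-symmetric, has a repeated eigenvalue, and carries only one independent eigenvector direction:

```python
import numpy as np

# A defective (non-diagonalizable) matrix: the 2x2 Jordan block.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
w, V = np.linalg.eig(A)

# Both eigenvalues equal 1, but the two computed eigenvectors are
# (numerically) parallel: the eigenvector matrix is essentially singular,
# so no basis of eigenvectors exists.
det_V = np.linalg.det(V)
```
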

The case of repeated eigenvalues

A technical report of Rellich on perturbation of eigenvalue problems provides several examples; the elementary examples are in chapter 2. The report may be downloaded from archive.org. We present an example in which the eigenvectors have a nasty behavior.

Example 1

Consider the matrix
$A(\epsilon) = e^{-1/\epsilon^2} \begin{bmatrix} \cos(2/\epsilon) & \sin(2/\epsilon) \\ \sin(2/\epsilon) & -\cos(2/\epsilon) \end{bmatrix} \quad (\epsilon \neq 0)$
and $A(0) = 0$.
For $\epsilon \neq 0$, the matrix $A(\epsilon)$ has eigenvectors belonging to the eigenvalues $\pm e^{-1/\epsilon^2}$.
Since for $\epsilon \neq 0$, if $u_1(\epsilon)$ and $u_2(\epsilon)$ are any normalized eigenvectors belonging to $+e^{-1/\epsilon^2}$ and $-e^{-1/\epsilon^2}$ respectively, then
$u_1(\epsilon) = \pm \begin{bmatrix} \cos(1/\epsilon) \\ \sin(1/\epsilon) \end{bmatrix}, \qquad u_2(\epsilon) = \pm \begin{bmatrix} \sin(1/\epsilon) \\ -\cos(1/\epsilon) \end{bmatrix},$
where both signs are valid choices for $\epsilon \neq 0$.
It is obviously impossible to define $u_1(\epsilon)$, say, in such a way that it tends to a limit as $\epsilon \to 0$, because $\cos(1/\epsilon)$ has no limit as $\epsilon \to 0$.
Note in this example that $A(\epsilon)$
is not only continuous but also has continuous derivatives of all orders.
Rellich draws the following important consequence.
"Since in general the individual eigenvectors do not depend continuously on the perturbation parameter even though the operator does, it is necessary to work, not with an eigenvector, but rather with the space spanned by all the eigenvectors belonging to the same eigenvalue."

Example 2

This example is less nasty than the previous one. Suppose $A(0)$ is the 2x2 identity matrix; any vector is then an eigenvector, and $u_0 = (1, 1)/\sqrt{2}$ is one possible eigenvector. But if one makes a small perturbation, such as
$A(\epsilon) = \begin{bmatrix} 1 + \epsilon & 0 \\ 0 & 1 - \epsilon \end{bmatrix},$
then the eigenvectors are $v_1 = (1, 0)$ and $v_2 = (0, 1)$; they are constant with respect to $\epsilon$, so that $\|u_0 - v_1\|$ is constant and does not go to zero.
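Numerically (a sketch assuming the perturbed matrix is the diagonal one $\mathrm{diag}(1+\epsilon,\, 1-\epsilon)$ and $u_0 = (1,1)/\sqrt{2}$), the angle between $u_0$ and the perturbed eigenvectors indeed does not shrink as $\epsilon \to 0$:

```python
import numpy as np

u0 = np.array([1.0, 1.0]) / np.sqrt(2)   # an eigenvector of the identity

for eps in [1e-1, 1e-3, 1e-6]:
    A = np.diag([1 + eps, 1 - eps])      # small diagonal perturbation
    _, V = np.linalg.eigh(A)
    # The eigenvectors are the coordinate axes for every eps > 0, so the
    # angle between u0 and either of them stays at 45 degrees.
    overlap = abs(V[:, 0] @ u0)          # = 1/sqrt(2) regardless of eps
```
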

Books

  • Bhatia, R. Perturbation bounds for matrix eigenvalues. SIAM.
