
## 4.7. Numerical computation of eigenvalues

Let $A$ be a diagonalizable matrix, i.e., $A$ has $n$ linearly independent eigenvectors $\mathbf{v}_1, \ldots, \mathbf{v}_n$. Assume $\lambda_1 \in \mathbb{R}$ and

$$|\lambda_i| < |\lambda_1|, \qquad i = 2, \ldots, n. \tag{1}$$

Let $\mathbf{x}_0 \in \mathbb{R}^n$. Since $\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ is a basis for $\mathbb{R}^n$, we have

$$\mathbf{x}_0 = \sum_{i=1}^{n} c_i \mathbf{v}_i.$$

Consider the sequence of vectors $\mathbf{x}_k = A\mathbf{x}_{k-1}$:

$$\mathbf{x}_1 = A\mathbf{x}_0 = \sum_{i=1}^{n} c_i A\mathbf{v}_i = \sum_{i=1}^{n} c_i \lambda_i \mathbf{v}_i, \qquad
\mathbf{x}_2 = \sum_{i=1}^{n} c_i \lambda_i^2 \mathbf{v}_i, \qquad \ldots$$

$$\mathbf{x}_k = \sum_{i=1}^{n} c_i \lambda_i^k \mathbf{v}_i
= \lambda_1^k \Big[\, c_1 \mathbf{v}_1 + \sum_{i=2}^{n} c_i (\lambda_i / \lambda_1)^k \mathbf{v}_i \Big],$$

so that, if $c_1 \neq 0$, $\mathbf{x}_k$ approaches a vector parallel to the eigenvector $\mathbf{v}_1$ as $k \to \infty$.

Note. If $c_1 = 0$ ($\mathbf{x}_0$ has no component parallel to $\mathbf{v}_1$), we must try another initial vector. A common choice is $\mathbf{x}_0 = (1, 1, \ldots, 1)$, unless it happens to be an eigenvector corresponding to another eigenvalue. The method does not apply to complex eigenvalues: if $\lambda_1 = u + iv$, then $\lambda_2 = u - iv$ is also an eigenvalue and $|\lambda_1| = |\lambda_2|$, so condition (1) fails.

For the eigenvalue $\lambda_1$ we have the approximation

$$\lambda_1 \approx \frac{\mathbf{x}_k \cdot \mathbf{x}_{k+1}}{\mathbf{x}_k \cdot \mathbf{x}_k} =: \lambda_1^{(k)},$$

because $\mathbf{x}_k \approx c_1 \lambda_1^k \mathbf{v}_1$, and thus

$$\frac{\mathbf{x}_k \cdot \mathbf{x}_{k+1}}{\mathbf{x}_k \cdot \mathbf{x}_k}
\approx \frac{c_1 \lambda_1^k \mathbf{v}_1 \cdot A\, c_1 \lambda_1^k \mathbf{v}_1}{c_1 \lambda_1^k \mathbf{v}_1 \cdot c_1 \lambda_1^k \mathbf{v}_1}
= \frac{c_1^2 \lambda_1^{2k}\, \mathbf{v}_1 \cdot \lambda_1 \mathbf{v}_1}{c_1^2 \lambda_1^{2k}\, \mathbf{v}_1 \cdot \mathbf{v}_1}
= \lambda_1.$$
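The iteration and the eigenvalue estimate above can be sketched in a few lines of Python. This is an illustrative sketch, not from the text: the matrix `A`, the starting vector, and the helper names `matvec`, `dot`, `power_method` are all assumptions. Rescaling each iterate is added here to avoid overflow; it does not change the direction of $\mathbf{x}_k$, so the quotient is unaffected.

```python
# Power method sketch (pure Python; matrix and helpers are illustrative).

def matvec(A, x):
    """Product A x, with A stored as a list of rows."""
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def power_method(A, x0, iters=50):
    """Estimate the dominant eigenvalue by (x_k . x_{k+1}) / (x_k . x_k).
    Each iterate is rescaled to avoid overflow; this does not change
    the direction of x_k, so the quotient is unaffected."""
    x = x0
    est = 0.0
    for _ in range(iters):
        y = matvec(A, x)                 # x_{k+1} = A x_k
        est = dot(x, y) / dot(x, x)      # eigenvalue estimate
        m = max(abs(c) for c in y)
        x = [c / m for c in y]           # rescaled x_{k+1}
    return est, x

A = [[2.0, 1.0],
     [1.0, 2.0]]                         # eigenvalues 3 and 1
lam, v = power_method(A, [1.0, 0.0])     # lam -> 3, v -> direction (1, 1)
```

The convergence rate is governed by $|\lambda_2 / \lambda_1|$: here $1/3$, so the estimate settles quickly.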

Once $\lambda_1$ has been computed, we perform the so-called deflation: using suitable similarity transformations we find a matrix

$$B = \begin{pmatrix} \lambda_1 & * \\ 0 & C \end{pmatrix}$$

which is similar to $A$; the eigenvalues of the $(n - 1) \times (n - 1)$ matrix $C$ are the remaining eigenvalues $\lambda_2, \ldots, \lambda_n$ of $A$. After this we proceed as above.
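A simpler deflation variant, which works for symmetric matrices, is Hotelling deflation: instead of shrinking the matrix to an $(n-1) \times (n-1)$ block, it subtracts $\lambda_1 \mathbf{v}_1 \mathbf{v}_1^T$ so that the dominant eigenvalue is replaced by $0$ while the others are preserved. The sketch below is illustrative, not the similarity-transform method of the text; the matrix, the eigenpair, and the helper name `deflate` are assumptions.

```python
# Hotelling deflation sketch for a symmetric matrix (illustrative).
import math

def deflate(A, lam1, v1):
    """Return A - lam1 * v1 v1^T; for symmetric A with |v1| = 1 this
    has eigenvalues 0, lam2, ..., lam_n."""
    n = len(A)
    return [[A[i][j] - lam1 * v1[i] * v1[j] for j in range(n)]
            for i in range(n)]

A = [[2.0, 1.0],
     [1.0, 2.0]]                               # eigenvalues 3 and 1
v1 = [1 / math.sqrt(2), 1 / math.sqrt(2)]      # unit eigenvector for 3
B = deflate(A, 3.0, v1)                        # eigenvalues 0 and 1
```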

Question: When can the above procedure be applied to find the eigenvalue of $A$ of smallest absolute value?
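One way to approach the question: if $A$ is invertible, the eigenvalues of $A^{-1}$ are $1/\lambda_i$, so the eigenvalue of $A$ of smallest absolute value becomes the dominant eigenvalue of $A^{-1}$, provided it is real and strictly smallest in absolute value. A sketch under these assumptions (the $2 \times 2$ inverse and the helper names are illustrative):

```python
# Inverse power method sketch (illustrative; assumes A is invertible
# and its smallest-|.| eigenvalue is real and strictly smallest).

def inv2(A):
    """Inverse of a 2x2 matrix, computed directly."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det],
            [-c / det, a / det]]

def matvec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

A = [[2.0, 1.0],
     [1.0, 2.0]]                 # eigenvalues 3 and 1
Ainv = inv2(A)                   # eigenvalues 1/3 and 1; dominant is 1
x = [1.0, 0.0]
for _ in range(50):
    y = matvec(Ainv, x)
    est = dot(x, y) / dot(x, x)  # estimates the dominant eigenvalue of A^{-1}
    m = max(abs(c) for c in y)
    x = [c / m for c in y]
lam_min = 1 / est                # smallest-|.| eigenvalue of A (here 1)
```

In practice one solves $A \mathbf{y} = \mathbf{x}_k$ each step instead of forming $A^{-1}$ explicitly.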

Note. The eigenvalue problem is very sensitive to perturbations. Errors can be reduced by preprocessing the matrix with various methods; see Grossman, Stanley: Elementary Linear Algebra.

Example 2: An error in an eigenvalue problem

Note. It can be shown that the eigenvalues of a real symmetric matrix $A$ are real and that $A$ is diagonalizable. The eigenvalues of a real skew-symmetric matrix are purely imaginary (the real part of every eigenvalue is zero).
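For the $2 \times 2$ symmetric case the first claim can be verified directly: for $A = \begin{pmatrix} a & b \\ b & d \end{pmatrix}$ the characteristic polynomial is $\lambda^2 - (a+d)\lambda + (ad - b^2)$, with discriminant $(a-d)^2 + 4b^2 \ge 0$, so both roots are real. A small sketch (the helper name `sym_eigs` is illustrative):

```python
# Eigenvalues of a 2x2 symmetric matrix [[a, b], [b, d]] via the
# quadratic formula; the discriminant (a - d)^2 + 4 b^2 is never
# negative, so both eigenvalues are real.
import math

def sym_eigs(a, b, d):
    tr = a + d
    disc = (a - d) ** 2 + 4 * b * b   # never negative
    s = math.sqrt(disc)
    return (tr + s) / 2, (tr - s) / 2

hi, lo = sym_eigs(2.0, 1.0, 2.0)      # eigenvalues of [[2, 1], [1, 2]]
```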

Exercises: E49