# An Eigenvalue Sensitivity Example

On May 29-30, I plan to attend a conference, organized by Nick Higham, at the University of Manchester. The title of the conference is

**Celebrating the Centenary of James H. Wilkinson's Birth**

I am giving a talk about one of Wilkinson's favorite topics: how perturbations of a matrix with sensitive eigenvalues can produce a defective matrix with a multiple eigenvalue. I uncovered this example in my original Fortran MATLAB User's Guide, a University of New Mexico technical report dated 1981, four years before MathWorks was founded. The text of this blog post is copied from that guide. I've inserted comments where today's MATLAB scales the eigenvectors differently.

#### Eigenvalue Sensitivity Example

In this example, we construct a matrix whose eigenvalues are moderately sensitive to perturbations and then analyze that sensitivity. We begin with the statement

```
B = [3 0 7; 0 2 0; 0 0 1]
```

```
B =
     3     0     7
     0     2     0
     0     0     1
```

Obviously, the eigenvalues of B are 1, 2 and 3. Moreover, since B is not symmetric, these eigenvalues are slightly sensitive to perturbation. (The value B(1,3) = 7 was chosen so that the elements of the matrix A below are less than 1000.)
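For readers following along outside MATLAB, this first step can be sketched in NumPy; the matrix values are copied from the text. Since B is upper triangular, its eigenvalues are simply its diagonal entries.

```python
import numpy as np

# B from the text; upper triangular, so the eigenvalues are the
# diagonal entries 3, 2, 1.
B = np.array([[3.0, 0.0, 7.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

print(np.sort(np.linalg.eigvals(B).real))  # [1. 2. 3.]
```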

We now generate a similarity transformation to disguise the eigenvalues and make them more sensitive.

```
L = [1 0 0; 2 1 0; -3 4 1]
M = L\L'
```

```
L =
     1     0     0
     2     1     0
    -3     4     1

M =
     1     2    -3
    -2    -3    10
    11    18   -48
```

The matrix M has determinant equal to 1 and is moderately badly conditioned. The similarity transformation is

```
A = M*B/M
```

```
A =
  -64.0000   82.0000   21.0000
  144.0000 -178.0000  -46.0000
 -771.0000  962.0000  248.0000
```

Because det(M) = 1, the elements of A would be exact integers if there were no roundoff. So,

```
A = round(A)
```

```
A =
   -64    82    21
   144  -178   -46
  -771   962   248
```

This, then, is our test matrix. We can now forget how it was generated and analyze its eigenvalues.

```
[X,D] = eig(A)
```

```
X =
    0.0891    0.0735   -0.1089
   -0.1782   -0.1923    0.1634
    0.9800    0.9786   -0.9805

D =
    3.0000         0         0
         0    1.0000         0
         0         0    2.0000
```

```
%{
Today the eigenvectors are scaled to have unit length.
Classic MATLAB did not scale the eigenvectors.  It got

X =
   -.0891    3.4903   41.8091
    .1782   -9.1284  -62.7136
   -.9800   46.4473  376.2818
%}
```

Since A is similar to B, its eigenvalues are also 1, 2 and 3. They happen to be computed in another order by the EISPACK subroutines. The fact that the columns of X, which are the eigenvectors, are so far from being orthonormal is our first indication that the eigenvalues are sensitive. To see this sensitivity, we display more figures of the computed eigenvalues.

```
format long
diag(D)
```

```
ans =
   3.000000000003868
   0.999999999998212
   1.999999999997978
```

We see that, on this computer, the last five {today: four} significant figures are contaminated by roundoff error. A somewhat superficial explanation of this is provided by

```
format short
cond(X)
```

```
ans =
   1.7690e+03
```

```
%{
Classic:

ANS =
   3.2216e+05
%}
```

The condition number of X gives an upper bound for the relative error in the computed eigenvalues. However, this condition number is affected by scaling.

```
X = X/diag(X(3,:))
cond(X)
```

```
X =
    0.0909    0.0751    0.1111
   -0.1818   -0.1965   -0.1667
    1.0000    1.0000    1.0000

ans =
   1.7692e+03
```

Rescaling the eigenvectors so that their last components are all equal to one has two consequences. The condition of X is decreased by over two orders of magnitude. {Not today.} (This is about the minimum condition that can be obtained by such diagonal scaling.) Moreover, it is now apparent that the three eigenvectors are nearly parallel.
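The claim that cond(X) bounds the eigenvalue error is the Bauer-Fike theorem: every eigenvalue of A + E lies within cond(X)*norm(E) of an eigenvalue of A. A quick NumPy sketch checks this on our test matrix; the matrix is copied from the text, while the random perturbation E is my own illustration, not part of the original guide.

```python
import numpy as np

# The integer test matrix A from the text.
A = np.array([[ -64.,   82.,   21.],
              [ 144., -178.,  -46.],
              [-771.,  962.,  248.]])

lam, X = np.linalg.eig(A)
kappa = np.linalg.cond(X)

# Bauer-Fike: every eigenvalue of A + E lies within
# kappa * norm(E, 2) of some eigenvalue of A.
rng = np.random.default_rng(1)
E = 1e-8 * rng.standard_normal((3, 3))
mu = np.linalg.eigvals(A + E)

drift = max(np.min(np.abs(lam - m)) for m in mu)
bound = kappa * np.linalg.norm(E, 2)
print(drift <= bound)  # expect True
```

With unit-length eigenvectors, kappa comes out near the 1.7690e+03 printed above, so even a perturbation of size 1e-8 is only guaranteed to move the eigenvalues by a few units in the fifth decimal place.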

More detailed information on the sensitivity of the individual eigenvalues involves the left eigenvectors.

```
Y = inv(X')
Y'*A*X
```

```
Y =
 -511.5000   259.5000   252.0000
  616.0000  -346.0000  -270.0000
  159.5000   -86.5000   -72.0000

ans =
    3.0000   -0.0000   -0.0000
    0.0000    1.0000    0.0000
         0   -0.0000    2.0000
```

We are now in a position to compute the sensitivities of the individual eigenvalues.

```
for j = 1:3, c(j) = norm(Y(:,j))*norm(X(:,j)); end
c
```

```
c =
  833.1092  450.7228  383.7564
```

These three numbers are the reciprocals of the cosines of the angles between the left and right eigenvectors. It can be shown that perturbation of the elements of A can result in a perturbation of the j-th eigenvalue which is c(j) times as large. In this example, the first eigenvalue has the largest sensitivity.
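The same three sensitivities can be reproduced in NumPy (matrix copied from the text). Because Y is defined so that Y'*X is the identity, each product norm(y_j)*norm(x_j) is independent of how eig happens to scale the eigenvectors, so the modern unit-length scaling gives the same numbers as Classic MATLAB.

```python
import numpy as np

A = np.array([[ -64.,   82.,   21.],
              [ 144., -178.,  -46.],
              [-771.,  962.,  248.]])

lam, X = np.linalg.eig(A)
Y = np.linalg.inv(X).conj().T    # left eigenvectors, scaled so Y'*X = I

# c(j) = ||y_j|| * ||x_j|| = 1/cos(angle between the j-th left and
# right eigenvectors); large values flag sensitive eigenvalues.
c = np.linalg.norm(Y, axis=0) * np.linalg.norm(X, axis=0)
print(np.sort(c))
```

Sorted, these come out close to the 383.7564, 450.7228 and 833.1092 printed above, with 833.1 attached to the eigenvalue 3.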

We now proceed to show that A is close to a matrix with a double eigenvalue. The direction of the required perturbation is given by

```
E = -1.e-6*Y(:,1)*X(:,1)'
```

```
E =
   1.0e-03 *
    0.0465   -0.0930    0.5115
   -0.0560    0.1120   -0.6160
   -0.0145    0.0290   -0.1595
```

With some trial and error which we do not show, we bracket the point where two eigenvalues of a perturbed A coalesce and then become complex.

```
eig(A + .4*E)
eig(A + .5*E)
```

```
ans =
    1.1500
    2.5996
    2.2504

ans =
   2.4067 + 0.1753i
   2.4067 - 0.1753i
   1.1866 + 0.0000i
```

Now, a bisecting search, driven by the imaginary part of one of the eigenvalues, finds the point where two eigenvalues are nearly equal.

```
r = .4;
s = .5;
while s-r > 1.e-14
   t = (r+s)/2;
   d = eig(A+t*E);
   if imag(d(1)) == 0, r = t; else, s = t; end
end
format long
t
```

```
t =
   0.450380734135428
```

Finally, we display the perturbed matrix, which is obviously close to the original, and its pair of nearly equal eigenvalues.

```
A+t*E
eig(A+t*E)
```

```
ans =
   1.0e+02 *
  -0.639999790572959   0.819999581145917   0.210002303697455
   1.439999747786789  -1.779999495573578  -0.460002774345322
  -7.710000065305207   9.620000130610412   2.479999281642729

ans =
   2.415743144226897
   2.415738627741217
   1.168517777651195
```

The first two eigenvectors of A + t*E are almost indistinguishable, indicating that the perturbed matrix is almost defective.

```
[X,D] = eig(A+t*E)
format short
cond(X)
```

```
X =
   0.094108719644215  -0.094108788388666  -0.070056238584537
  -0.174780805492871   0.174780755585658   0.194872753681838
   0.980099596427929  -0.980099598727050  -0.978323429806240

D =
   2.415743144226897                   0                   0
                   0   2.415738627741217                   0
                   0                   0   1.168517777651195

ans =
   3.9853e+08
```
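As a final cross-check, the whole experiment can be replayed in NumPy, reusing the coalescence point t found by the bisection above (matrix values copied from the text). The rank-one direction y_1*x_1' is invariant to eigenvector scaling, so NumPy's unit-length eigenvectors produce the same perturbation E.

```python
import numpy as np

A = np.array([[ -64.,   82.,   21.],
              [ 144., -178.,  -46.],
              [-771.,  962.,  248.]])

lam, X = np.linalg.eig(A)
Y = np.linalg.inv(X).conj().T            # left eigenvectors, Y'*X = I

j = int(np.argmin(np.abs(lam - 3.0)))    # the most sensitive eigenvalue
E = -1e-6 * np.outer(Y[:, j], X[:, j].conj())

t = 0.450380734135428                    # coalescence point from the text
d = np.sort(np.linalg.eigvals(A + t * E))

# Two of the three eigenvalues land on top of each other near
# 2.4157, so A + t*E is within roundoff of a defective matrix.
print(abs(d[2] - d[1]))
```

Depending on the machine's rounding, the near-equal pair may come out as two close real eigenvalues or as a complex conjugate pair with a tiny imaginary part; either way their separation is far below the size of the entries of A.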

**Category:** Eigenvalues, History, Matrices, Numerical Analysis
