# Wilkinson and Reinsch Handbook on Linear Algebra

The ACM Special Interest Group on Programming Languages, SIGPLAN, expects to hold the fourth in a series of conferences on the History of Programming Languages in 2020; see HOPL-IV. The first drafts of papers are to be submitted by August 2018. That long lead time gives me the opportunity to write a detailed history of MATLAB. I plan to write the paper in sections, which I'll post in this blog as they are available.

The mathematical basis for the first version of MATLAB begins with a volume edited by J. H. Wilkinson and C. Reinsch, *Handbook for Automatic Computation, Volume II, Linear Algebra*, published by the eminent German publisher Springer-Verlag, in 1971.

### Contents

#### Eigenvalues

The mathematical term "eigenvalue" is a linguistic hybrid. The German term is "Eigenwert". One Web dictionary says the English translation is "intrinsic value". But after a few decades of using terms like "characteristic value" and "latent root", mathematicians have given up trying to translate the entire word and generally translate only the second half.

The *eigenvalue problem* for a given matrix $A$ is the task of finding scalars $\lambda$ and, possibly, the corresponding vectors $x$ so that

$$Ax = \lambda x$$

It is important to distinguish the case where $A$ is symmetric. After the *linear equation problem*,

$$Ax = b$$

the eigenvalue problem is the most important computational task in numerical linear algebra.
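In a modern language the two problems are each a single library call. Here is a small NumPy illustration (my choice for this post; the Handbook's codes are, of course, in Algol 60):

```python
import numpy as np

# A small symmetric example; symmetry guarantees real eigenvalues
# and orthogonal eigenvectors.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 0.0])

# The linear equation problem: solve Ax = b for x.
x = np.linalg.solve(A, b)

# The eigenvalue problem: find lambda and x with Ax = lambda x.
# eigh exploits symmetry; eig would handle the general case.
lam, V = np.linalg.eigh(A)

print(x)    # solution of the linear system
print(lam)  # eigenvalues: [1. 3.]
```

Each column of `V` satisfies $Av = \lambda v$ for the corresponding entry of `lam`.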

The state of the art in numerical linear algebra 50 years ago, in the 1960s, did not yet provide reliable, efficient methods for solving matrix eigenvalue problems. Software libraries had a hodgepodge of methods, many of them based upon polynomial root finders.

Jim Wilkinson told a story about having two such subroutines, one using something called Bairstow's method and one using something called Laguerre's method. He couldn't decide which one was best, so he put them together in a program that first tried Bairstow's method, then printed a message and switched to Laguerre if Bairstow failed. After several months of frequent use at the National Physical Laboratory, he never saw a case where Bairstow failed. He wrote a paper with his recommendation for Bairstow's method and soon got a letter from a reader who claimed to have an example which Bairstow could not handle. Wilkinson tried the example with his program and found a bug. The program didn't print the message that it was switching to Laguerre.

#### Handbook for Automatic Computation

Over the period 1965-1970, Wilkinson and 18 of his colleagues published a series of papers in the Springer journal *Numerische Mathematik*. The papers offered procedures, written in Algol 60, for solving different versions of the linear equation problem and the eigenvalue problem. These were *research* papers presenting results about numerical stability, subtle details of implementation, and, in some cases, new methods.

In 1971, these papers were collected, sometimes with modifications, in the volume edited by Wilkinson and Reinsch. This book marks the first appearance of an organized library of the algorithms for the eigenvalue problem for dense, stored matrices that we still use in MATLAB today. The importance of using orthogonal transformations wherever possible was demonstrated by Wilkinson and the other authors. The effectiveness of the newly discovered QR algorithm of J. Francis and the related LR algorithm of H. Rutishauser had only recently been appreciated.
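The flavor of the QR algorithm can be conveyed in a few lines. The following NumPy sketch is the basic *unshifted* iteration, for illustration only; the Handbook's practical versions add shifts and first reduce the matrix to Hessenberg or tridiagonal form, which makes each iteration far cheaper and convergence far faster:

```python
import numpy as np

# Factor T = QR, then form RQ = Q'TQ, and repeat.  Each step is an
# orthogonal similarity, so the eigenvalues are preserved.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

T = A.copy()
for _ in range(200):
    Q, R = np.linalg.qr(T)  # orthogonal Q, upper (right) triangular R
    T = R @ Q               # reverse the order: T_new = Q' T Q

# For this symmetric example the iterates approach a diagonal matrix
# whose entries are the eigenvalues of A.
print(np.diag(T))
```

With shifts, as in Francis's algorithm and the Handbook codes, the slow linear convergence of this naive loop becomes cubic for symmetric matrices.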

#### Contents

The Algol 60 procedures are the focus of each chapter. These codes remain a clear, readable reference for the important ideas of modern numerical linear algebra. Part I of the volume is about the linear equation problem; Part II is about the eigenvalue problem. There are 40 procedures in Part I and 43 in Part II.

Here is a list of the procedures in Part II. A suffix 2 in a procedure name indicates that it computes both eigenvalues and eigenvectors. The "bak" procedures transform eigenvectors of the reduced form back into eigenvectors of the original matrix.

Many of the procedures work with a reduced form, which is tridiagonal for symmetric or Hermitian matrices and Hessenberg (upper triangular plus one subdiagonal) for nonsymmetric matrices. Since Algol does not have a complex number data type, complex arrays are represented by pairs of real arrays.
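The reduced forms are easy to see with SciPy's `hessenberg`, a modern stand-in I am using for illustration; the Handbook reaches these forms with its own procedures such as *tred1* and *orthes*. The same orthogonal reduction that produces Hessenberg form for a general matrix produces tridiagonal form when the matrix is symmetric:

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))  # a general (nonsymmetric) matrix
A = B + B.T                      # a symmetric matrix

# B = Q H Q', with H upper triangular plus one subdiagonal.
H, Q = hessenberg(B, calc_q=True)

# Applied to the symmetric A, the reduced form is tridiagonal.
T = hessenberg(A)

print(np.allclose(Q @ H @ Q.T, B))  # True: an orthogonal similarity
```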

#### Symmetric matrices

| Reduction to tridiagonal | |
|---|---|
| tred1, tred2, tred3, trbak1, trbak3 | Orthogonal tridiagonalization. |

| Band | |
|---|---|
| bandrd | Tridiagonalization. |
| symray | Eigenvectors. |
| bqr | One eigenvalue. |

| Tridiagonal | |
|---|---|
| imtql1, imtql2 | All eigenvalues and vectors, implicit QR. |
| tql1, tql2 | All eigenvalues and vectors, explicit QR. |
| ratqr | Few eigenvalues, rational QR. |
| bisect | Few eigenvalues, bisection. |
| tristurm | Few eigenvalues and vectors. |

| Few eigenvalues | |
|---|---|
| ritzit | Simultaneous iteration. |

| Jacobi method | |
|---|---|
| jacobi | Jacobi method. |

| Generalized problem, $Ax = \lambda Bx$ | |
|---|---|
| reduc1, reduc2, rebaka, rebakb | Symmetric A, positive definite B. |

#### Nonsymmetric matrices

| Reduction to Hessenberg | |
|---|---|
| balance, balbak | Balance (scaling). |
| dirhes, dirbak, dirtrans | Elementary, accumulated inner products. |
| elmhes, elmbak, elmtrans | Elementary. |
| orthes, ortbak, ortrans | Orthogonal. |

| Band | |
|---|---|
| unsray | Eigenvectors. |

| Hessenberg | |
|---|---|
| hqr, hqr2 | All eigenvalues and vectors, implicit QR. |
| invit | Few eigenvectors, inverse iteration. |

| Norm reducing | |
|---|---|
| eigen | Eigenvalues. |

#### Complex matrices

| | |
|---|---|
| comeig | Norm reducing Jacobi. |
| comhes, combak | Reduce to Hessenberg form. |
| comlr, comlr2 | Complex LR algorithm. |
| cxinvit | Inverse iteration. |

#### Preferred paths

The preferred path for finding all the eigenvalues of a real, symmetric matrix is *tred1* followed by *imtql1*. The preferred path for finding all the eigenvalues and eigenvectors of a real, symmetric matrix is *tred2* followed by *imtql2*.

The preferred path for finding all the eigenvalues of a real, nonsymmetric matrix is *balance*, *elmhes*, and finally *hqr*. The preferred path for finding all the eigenvalues and eigenvectors of a real, nonsymmetric matrix is *balance*, *elmhes*, *elmtrans*, *hqr2*, and finally *balbak*.
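As a modern analogue (these SciPy calls descend from the Handbook, via EISPACK and LAPACK, but are not the Algol procedures themselves), each preferred path collapses into a single library call: `eigh` plays the role of the *tred2* + *imtql2* path for symmetric matrices, and `eig` bundles balancing, Hessenberg reduction, QR iteration, and back-transformation for the general case:

```python
import numpy as np
from scipy.linalg import eig, eigh

rng = np.random.default_rng(1)
S = rng.standard_normal((4, 4))
A_sym = S + S.T                   # symmetric
A_gen = rng.standard_normal((4, 4))  # nonsymmetric

# Symmetric path: real eigenvalues, orthogonal eigenvectors.
w_sym, V_sym = eigh(A_sym)

# Nonsymmetric path: eigenvalues and vectors may be complex.
w_gen, V_gen = eig(A_gen)

print(w_sym)
print(w_gen)
```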

#### QR vs. QL

"QR" and "QL" are right- and left-handed, or forward and backward, versions of the same algorithm. The algorithm is usually described in terms of factoring a matrix into an orthogonal factor, Q, and an upper or right triangular factor, R. This leads to QR algorithms. But for reasons having to do with graded matrices and terminating a loop at `1` rather than `n-1`, the authors of the Handbook decided to use left triangular and QL algorithms.

#### Historical note

When the papers from *Numerische Mathematik* were collected to form the 1971 *Handbook for Automatic Computation, Volume II, Linear Algebra*, almost all of them were reprinted without change. But, despite what its footnote says, Contribution II/4, *The Implicit QL Algorithm*, never appeared in the journal. The paper is the merger of a half-page paper by Austin Dubrulle and earlier contributions by R. S. Martin and Wilkinson. Dubrulle was able to reduce the operation count in the inner loop.

The authors of Contribution II/4 of the Handbook are listed as A. Dubrulle, R. S. Martin, and J. H. Wilkinson, although the three of them never actually worked together. Proper credit is given, but I'm afraid that an interesting little bit of history has been lost.

Dubrulle's version of the implicit tridiagonal QR algorithm continues to be important today. In future posts I will describe how the Handbook led to EISPACK, LINPACK, and LAPACK. In LAPACK, the *imtql1* and *imtql2* functions are combined into one subroutine named DSTEQR. The code for its inner loop is very similar to Austin's note.

Years after making this contribution, Austin returned to grad school and was one of my Ph.D. students at the University of New Mexico. His thesis was about the distribution of the singular values of matrices derived from photographs.
