{"id":2870,"date":"2017-12-04T18:30:54","date_gmt":"2017-12-04T23:30:54","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=2870"},"modified":"2017-12-05T12:26:16","modified_gmt":"2017-12-05T17:26:16","slug":"wilkinson-and-reinsch-handbook-on-linear-algebra","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2017\/12\/04\/wilkinson-and-reinsch-handbook-on-linear-algebra\/","title":{"rendered":"Wilkinson and Reinsch Handbook on Linear Algebra"},"content":{"rendered":"\r\n\r\n<div class=\"content\"><!--introduction--><p>The ACM Special Interest Group on Programming Languages, SIGPLAN, expects to hold the fourth in a series of conferences on the History of Programming Languages in 2020, see <a href=\"https:\/\/hopl4.sigplan.org\/\">HOPL-IV<\/a>.  The first drafts of papers are to be submitted by August, 2018.  That long lead time gives me the opportunity to write a detailed history of MATLAB. I plan to write the paper in sections, which I'll post in this blog as they are available.<\/p><p>The mathematical basis for the first version of MATLAB begins with a volume edited by J. H. Wilkinson and C. 
Reinsch, <i>Handbook for Automatic Computation, Volume II, Linear Algebra<\/i>, published by the eminent German publisher Springer-Verlag, in 1971.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/WandRcover_small.jpg\" alt=\"\"> <\/p><!--\/introduction--><h3>Contents<\/h3><div><ul><li><a href=\"#c8566df6-68f0-4e13-8001-7fc009d644fc\">Eigenvalues<\/a><\/li><li><a href=\"#7f5acb81-c898-450f-a91a-78bbd07810d8\">Handbook for Automatic Computation<\/a><\/li><li><a href=\"#a3a6a965-46de-4084-b384-1a3bf75dcb07\">Contents<\/a><\/li><li><a href=\"#8d0a9ea9-a664-4eaf-85ce-72780020bd22\">Symmetric matrices<\/a><\/li><li><a href=\"#763fb26e-e167-413b-82f3-62ab923099e3\">Nonsymmetric matrices<\/a><\/li><li><a href=\"#6346753c-9553-4c99-b320-7ba3112d7517\">Complex matrices<\/a><\/li><li><a href=\"#758b6885-080d-49a9-af76-abdd8f077d45\">Preferred paths<\/a><\/li><li><a href=\"#8259f718-3e61-4104-b68f-16d1dc280d75\">QR vs. QL<\/a><\/li><li><a href=\"#e8e55d62-013e-4953-897d-c68ea668551c\">Historical note<\/a><\/li><\/ul><\/div><h4>Eigenvalues<a name=\"c8566df6-68f0-4e13-8001-7fc009d644fc\"><\/a><\/h4><p>The mathematical term \"eigenvalue\" is a linguistic hybrid.  In German the term is \"eigenwert\".  One Web dictionary says the English translation is \"intrinsic value\".  But after a few decades using terms like \"characteristic value\" and \"latent root\", mathematicians have given up trying to translate the entire word and generally translate only the second half.<\/p><p>The <i>eigenvalue problem<\/i> for a given matrix $A$ is the task of finding scalars $\\lambda$ and, possibly, the corresponding vectors $x$ so that<\/p><p>$$Ax = \\lambda x$$<\/p><p>It is important to distinguish the case where $A$ is symmetric. 
After the <i>linear equation problem<\/i>,<\/p><p>$$Ax = b$$<\/p><p>the eigenvalue problem is the most important computational task in numerical linear algebra.<\/p><p>The state of the art in numerical linear algebra 50 years ago, in the 1960's, did not yet provide reliable, efficient methods for solving matrix eigenvalue problems.  Software libraries had a hodgepodge of methods, many of them based upon polynomial root finders.<\/p><p><a href=\"https:\/\/blogs.mathworks.com\/cleve\/2013\/02\/18\/jim-wilkinson\/\">Jim Wilkinson<\/a> told a story about having two such subroutines, one using something called Bairstow's method and one using something called Laguerre's method.  He couldn't decide which one was best, so he put them together in a program that first tried Bairstow's method, then printed a message and switched to Laguerre if Bairstow failed.  After several months of frequent use at the National Physical Laboratory, he never saw a case where Bairstow failed. He wrote a paper with his recommendation for Bairstow's method and soon got a letter from a reader who claimed to have an example which Bairstow could not handle.  Wilkinson tried the example with his program and found a bug.  The program didn't print the message that it was switching to Laguerre.<\/p><h4>Handbook for Automatic Computation<a name=\"7f5acb81-c898-450f-a91a-78bbd07810d8\"><\/a><\/h4><p>Over the period 1965-1970, Wilkinson and 18 of his colleagues published a series of papers in the Springer journal <i>Numerische Mathematik<\/i>. The papers offered procedures, written in Algol 60, for solving different versions of the linear equation problem and the eigenvalue problem. These were <i>research<\/i> papers presenting results about numerical stability, subtle details of implementation, and, in some cases, new methods.<\/p><p>In 1971, these papers were collected, sometimes with modifications, in the volume edited by Wilkinson and Reinsch.  
This book marks the first appearance of an organized library of the algorithms for the eigenvalue problem for dense, stored matrices that we still use in MATLAB today.  The importance of using orthogonal transformations wherever possible was exposed by Wilkinson and the other authors.  The effectiveness of the newly discovered QR algorithms of J. Francis and the related LR algorithm of H. Rutishauser had only recently been appreciated.<\/p><h4>Contents<a name=\"a3a6a965-46de-4084-b384-1a3bf75dcb07\"><\/a><\/h4><p>The Algol 60 procedures are the focus of each chapter.  These codes remain a clear, readable reference for the important ideas of modern numerical linear algebra.  Part I of the volume is about the linear equation problem; part II is about the eigenvalue problem. There are 40 procedures in part I and 43 in part II.<\/p><p>Here is a list of the procedures in Part II.  A suffix 2 in the procedure name indicates that it computes both eigenvalues and eigenvectors. The \"bak\" procedures apply reduction transformations to eigenvectors.<\/p><p>Many of the procedures work with a reduced form, which is tridiagonal for symmetric or Hermitian matrices and Hessenberg (upper triangular plus one subdiagonal) for nonsymmetric matrices. Since Algol does not have a complex number data type, the complex arrays are represented by pairs of real arrays.<\/p><h4>Symmetric matrices<a name=\"8d0a9ea9-a664-4eaf-85ce-72780020bd22\"><\/a><\/h4><p><table border=1 style=\"width:85%\">\r\n<tr><th><b>Reduction to tridiagonal<\/b><\/th><\/tr>\r\n<tr><td><i>tred1, tred2, tred3, trbak1, trbak3<\/i><\/td><td>  Orthogonal tridiagonalization.<\/td><\/tr>\r\n<tr><th><b>Band<\/b><\/th><\/tr>\r\n<tr><td><i>bandrd<\/i><\/td><td>  Tridiagonalization.<\/td><\/tr>\r\n<tr><td><i>symray<\/i><\/td><td>  Eigenvectors.<\/td><\/tr>\r\n<tr><td><i>bqr<\/i><\/td><td>  One eigenvalue.<\/td><\/tr>\r\n<tr><th><b>Tridiagonal<\/b><\/th><\/tr>\r\n<tr><td><i>imtql1, imtql2<\/i><\/td><td>  All eigenvalues and vectors, 
implicit QR.<\/td><\/tr>\r\n<tr><td><i>tql1, tql2<\/i><\/td><td>  All eigenvalues and vectors, explicit QR.<\/td><\/tr>\r\n<tr><td><i>ratqr<\/i><\/td><td>  Few eigenvalues, rational QR.<\/td><\/tr>\r\n<tr><td><i>bisect<\/i><\/td><td>  Few eigenvalues, bisection.<\/td><\/tr>\r\n<tr><td><i>tristurm<\/i><\/td><td>  Few eigenvalues and vectors.<\/td><\/tr>\r\n<tr><th><b>Few eigenvalues<\/b><\/th><\/tr>\r\n<tr><td><i>ritzit<\/i><\/td><td>  Simultaneous iteration.<\/td><\/tr>\r\n<tr><th><b>Jacobi method<\/b><\/th><\/tr>\r\n<tr><td><i>jacobi<\/i><\/td><td>  Jacobi method.<\/td><\/tr>\r\n<tr><th><b>Generalized problem, Ax = &lambda;Bx<\/b><\/th><\/tr>\r\n<tr><td><i>reduc1, reduc2, rebaka, rebakb<\/i><\/td><td>  Symmetric A, positive definite B.<\/td><\/tr>\r\n<\/table><\/p><h4>Nonsymmetric matrices<a name=\"763fb26e-e167-413b-82f3-62ab923099e3\"><\/a><\/h4><p><table border=1 style=\"width:85%\">\r\n<tr><th><b>Reduction to Hessenberg<\/b><\/th><\/tr>\r\n<tr><td><i>balance, balbak<\/i><\/td><td>  Balance (scaling).<\/td><\/tr>\r\n<tr><td><i>dirhes, dirbak, dirtrans<\/i><\/td><td>  Elementary, with accumulated inner products.<\/td><\/tr>\r\n<tr><td><i>elmhes, elmbak, elmtrans<\/i><\/td><td>  Elementary.<\/td><\/tr>\r\n<tr><td><i>orthes, ortbak, ortrans<\/i><\/td><td>  Orthogonal.<\/td><\/tr>\r\n<tr><th><b>Band<\/b><\/th><\/tr>\r\n<tr><td><i>unsray<\/i><\/td><td>  Eigenvectors.<\/td><\/tr>\r\n<tr><th><b>Hessenberg<\/b><\/th><\/tr>\r\n<tr><td><i>hqr, hqr2<\/i><\/td><td>  All eigenvalues and vectors, implicit QR.<\/td><\/tr>\r\n<tr><td><i>invit<\/i><\/td><td>  Few eigenvectors, inverse iteration.<\/td><\/tr>\r\n<tr><th><b>Norm reducing<\/b><\/th><\/tr>\r\n<tr><td><i>eigen<\/i><\/td><td>  Eigenvalues.<\/td><\/tr>\r\n<\/table><\/p><h4>Complex matrices<a name=\"6346753c-9553-4c99-b320-7ba3112d7517\"><\/a><\/h4><p><table border=1 style=\"width:85%\">\r\n<tr><td><i>comeig<\/i><\/td><td>  Norm reducing Jacobi.<\/td><\/tr>\r\n<tr><td><i>comhes, combak<\/i><\/td><td>  Reduce to Hessenberg 
form.<\/td><\/tr>\r\n<tr><td><i>comlr, comlr2<\/i><\/td><td>  Complex LR algorithm.<\/td><\/tr>\r\n<tr><td><i>cxinvit<\/i><\/td><td>  Inverse iteration.<\/td><\/tr>\r\n<\/table><\/p><h4>Preferred paths<a name=\"758b6885-080d-49a9-af76-abdd8f077d45\"><\/a><\/h4><p>The preferred path for finding all the eigenvalues of a real, symmetric matrix is <i>tred1<\/i> followed by <i>imtql1<\/i>. The preferred path for finding all the eigenvalues and eigenvectors of a real, symmetric matrix is <i>tred2<\/i> followed by <i>imtql2<\/i>.<\/p><p>The preferred path for finding all the eigenvalues of a real, nonsymmetric matrix is <i>balance<\/i>, <i>elmhes<\/i>, and finally <i>hqr<\/i>. The preferred path for finding all the eigenvalues and eigenvectors of a real, nonsymmetric matrix is <i>balance<\/i>, <i>elmhes<\/i>, <i>elmtrans<\/i>, <i>hqr2<\/i>, and finally <i>balbak<\/i>.<\/p><h4>QR vs. QL<a name=\"8259f718-3e61-4104-b68f-16d1dc280d75\"><\/a><\/h4><p>\"QR\" and \"QL\" are right- and left-handed, or forward and backward, versions of the same algorithm.  The algorithm is usually described in terms of factoring a matrix into an orthogonal factor, Q, and an upper or right triangular factor, R.  This leads to QR algorithms.  But for reasons having to do with graded matrices and terminating a loop at <tt>1<\/tt> rather than <tt>n-1<\/tt>, the authors of the Handbook decided to use left triangular factors and QL algorithms.<\/p><h4>Historical note<a name=\"e8e55d62-013e-4953-897d-c68ea668551c\"><\/a><\/h4><p>When the papers from <i>Numerische Mathematik<\/i> were collected to form the 1971 <i>Handbook for Automatic Computation, Volume II, Linear Algebra<\/i>, almost all of them were reprinted without change.  But, despite what its footnote says, Contribution II\/4, <i>The Implicit QL Algorithm<\/i>, never appeared in the journal.  
The paper is the merger of a half-page paper by <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2015\/06\/29\/dubrulle-creates-a-faster-tridiagonal-qr-algorithm\/\">Austin Dubrulle<\/a> and earlier contributions by R. S. Martin and Wilkinson. Dubrulle was able to reduce the operation count in the inner loop.<\/p><p>The authors of Contribution II\/4 of the Handbook are listed as A. Dubrulle, R. S. Martin, and J. H. Wilkinson, although the three of them never actually worked together. Proper credit is given, but I'm afraid that an interesting little bit of history has been lost.<\/p><p>Dubrulle's version of the implicit tridiagonal QR algorithm continues to be important today.  In future posts I will describe how the Handbook led to EISPACK, LINPACK, and LAPACK. In LAPACK the <i>imtql1<\/i> and <i>imtql2<\/i> functions are combined into one subroutine named <a href=\"http:\/\/www.netlib.org\/lapack\/explore-html-3.3.1\/d9\/d3f\/dsteqr_8f_source.html\">DSTEQR.F<\/a>. The code for the inner loop is very similar to Austin's note.<\/p><p>Years after making this contribution, Austin returned to grad school and was one of my Ph.D. students at the University of New Mexico. 
His thesis was about the distribution of the singular values of matrices derived from photographs.<\/p><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\">Published with MATLAB&reg; R2017a<\/p><\/div>","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/WandRcover_small.jpg\" onError=\"this.style.display ='none';\" \/><\/div><!--introduction--><p>The ACM Special Interest Group on Programming Languages, SIGPLAN, expects to hold the fourth in a series of conferences on the History of Programming Languages in 2020, see <a href=\"https:\/\/hopl4.sigplan.org\/\">HOPL-IV<\/a>.  The first drafts of papers are to be submitted by August, 2018.  That long lead time gives me the opportunity to write a detailed history of MATLAB. I plan to write the paper in sections, which I'll post in this blog as they are available.... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/cleve\/2017\/12\/04\/wilkinson-and-reinsch-handbook-on-linear-algebra\/\">read more >><\/a><\/p>","protected":false},"author":78,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[4,6,16,8],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/2870"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/comments?post=2870"}],"version-history":[{"count":6,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/2870\/revisions"}],"predecessor-version":[{"id":2900,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/2870\/revisions\/2900"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media?parent=2870"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/categories?post=2870"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/tags?post=2870"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}