{"id":2966,"date":"2018-01-23T19:12:22","date_gmt":"2018-01-24T00:12:22","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=2966"},"modified":"2018-01-31T15:08:47","modified_gmt":"2018-01-31T20:08:47","slug":"linpack-linear-equation-package","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2018\/01\/23\/linpack-linear-equation-package\/","title":{"rendered":"LINPACK, Linear Equation Package"},"content":{"rendered":"<div class=\"content\"><!--introduction--><p>The ACM Special Interest Group on Programming Languages, SIGPLAN, expects to hold the fourth in a series of conferences on the History of Programming Languages in 2020, see <a href=\"https:\/\/hopl4.sigplan.org\/\">HOPL-IV<\/a>.  The first drafts of papers are to be submitted by August, 2018.  That long lead time gives me the opportunity to write a detailed history of MATLAB. I plan to write the paper in sections, which I'll post in this blog as they are available.  This is the third such installment.<\/p><p>LINPACK and EISPACK are Fortran subroutine packages for matrix computation developed in the 1970's.  They are a follow-up to the Algol 60 procedures that I described in my blog post about the <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2017\/12\/04\/wilkinson-and-reinsch-handbook-on-linear-algebra\/\">Wilkinson and Reinsch Handbook<\/a>.  They led to the first version of MATLAB.  This post is about LINPACK.  
My <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2018\/01\/02\/eispack-matrix-eigensystem-routines\/\">previous post<\/a> was about EISPACK.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/linpackb.jpg\" alt=\"\"> <\/p><!--\/introduction--><h3>Contents<\/h3><div><ul><li><a href=\"#8a457115-31f1-4aa2-80ef-1f43a7c95c2a\">Cart Before the Horse<\/a><\/li><li><a href=\"#c8d33e8b-37fb-4d07-b420-d2cc0664493a\">LINPACK Project<\/a><\/li><li><a href=\"#29a02e06-5f5a-484c-aacd-c9ad8d487ee0\">Contents<\/a><\/li><li><a href=\"#e3983130-5817-445a-82df-35f6d8bd24c0\">Programming Style<\/a><\/li><li><a href=\"#cd557e8c-a6f8-47ba-aba9-67db8d227af8\">BLAS<\/a><\/li><li><a href=\"#eca89ac1-c5a9-44b3-a33e-5ba9f50f8198\">Portability<\/a><\/li><li><a href=\"#db99ad5f-bd54-409a-9ec6-96b66123e806\">Publisher<\/a><\/li><li><a href=\"#04a4bd8b-df14-4b93-9ba6-ee15b6b89446\">LINPACK Benchmark<\/a><\/li><li><a href=\"#ca1888c9-06d9-4446-939d-07fbde96a040\">In Retrospect<\/a><\/li><li><a href=\"#a8eee6f1-0c47-4f30-a3b6-ee6fb75cd838\">References<\/a><\/li><\/ul><\/div><h4>Cart Before the Horse<a name=\"8a457115-31f1-4aa2-80ef-1f43a7c95c2a\"><\/a><\/h4><p>Logically a package of subroutines for solving linear equations should come before a package for computing matrix eigenvalues.  The tasks addressed and the algorithms involved are mathematically simpler. But chronologically EISPACK came before LINPACK.  Actually, the first section of the <i>Handbook<\/i> is about linear equations, but we pretty much skipped over that section when we did EISPACK.  The eigenvalue algorithms were new and we felt it was important to make Fortran translations widely available.<\/p><h4>LINPACK Project<a name=\"c8d33e8b-37fb-4d07-b420-d2cc0664493a\"><\/a><\/h4><p>In 1975, as EISPACK was nearing completion, we proposed to the NSF another research project investigating methods for the development of mathematical software.  
A byproduct would be the software itself.<\/p><p>The four principal investigators, and eventual authors of the Users' Guide, were<\/p><div><ul><li>Jack Dongarra, Argonne National Laboratory<\/li><li>Cleve Moler, University of New Mexico<\/li><li>James Bunch, University of California, San Diego<\/li><li>G. W. (Pete) Stewart, University of Maryland<\/li><\/ul><\/div><p>The project was centered at Argonne.  The three of us at universities worked at our home institutions during the academic year, and we all got together at Argonne in the summer.  Dongarra and I had previously worked on EISPACK.<\/p><h4>Contents<a name=\"29a02e06-5f5a-484c-aacd-c9ad8d487ee0\"><\/a><\/h4><p>Fortran subroutine names at the time were limited to six characters. We adopted a naming convention with five-letter names of the form TXXYY.  The first letter, T, indicates the matrix data type. Standard Fortran provides three such types and, although not standard, many systems provide a fourth type, double precision complex, COMPLEX*16.  Unlike EISPACK, which follows Algol and has separate arrays for the real and imaginary parts of a complex matrix, LINPACK takes full advantage of Fortran complex.  The first letter can be<\/p><pre>   S Real\r\n   D Double precision\r\n   C Complex\r\n   Z Complex*16<\/pre><p>The next two letters, XX, indicate the form of the matrix or its decomposition.  
(A packed array is the upper half of a symmetric matrix stored with one linear subscript.)<\/p><pre>   GE General\r\n   GB General band\r\n   PO Positive definite\r\n   PP Positive definite packed\r\n   PB Positive definite band\r\n   SI Symmetric indefinite\r\n   SP Symmetric indefinite packed\r\n   HI Hermitian indefinite\r\n   HP Hermitian indefinite packed\r\n   TR Triangular\r\n   GT General tridiagonal\r\n   PT Positive definite tridiagonal\r\n   CH Cholesky decomposition\r\n   QR Orthogonal triangular decomposition\r\n   SV Singular value decomposition<\/pre><p>The final two letters, YY, indicate the computation to be done.<\/p><pre>   FA Factor\r\n   SL Solve\r\n   CO Factor and estimate condition\r\n   DI Determinant, inverse, inertia\r\n   DC Decompose\r\n   UD Update\r\n   DD Downdate\r\n   EX Exchange<\/pre><p>This chart shows all 44 of the LINPACK subroutines.  An initial S may be replaced by any of D, C, or Z.  An initial C may be replaced by Z.  For example, DGEFA factors a double precision matrix. This is probably the most important routine in the package.<\/p><pre>         CO    FA    SL    DI\r\n   SGE   *     *     *     *\r\n   SGB   *     *     *     *\r\n   SPO   *     *     *     *\r\n   SPP   *     *     *     *\r\n   SPB   *     *     *     *\r\n   SSI   *     *     *     *\r\n   SSP   *     *     *     *\r\n   CHI   *     *     *     *\r\n   CHP   *     *     *     *\r\n   STR   *           *     *\r\n   SGT               *\r\n   SPT               *<\/pre><pre>         DC    SL    UD    DD    EX\r\n   SCH   *           *     *     *\r\n   SQR   *     *\r\n   SSV   *<\/pre><p>Following this naming convention the subroutine for computing the SVD, the singular value decomposition, fortuitously becomes SSVDC.<\/p><h4>Programming Style<a name=\"e3983130-5817-445a-82df-35f6d8bd24c0\"><\/a><\/h4><p>Fortran at the time was notorious for unstructured, unreadable, \"spaghetti\" code.  
We adopted a disciplined programming style and expected people as well as machines to read the codes.  The scope of loops and if-then-else constructions was carefully shown by indentation.  Go-to's and statement numbers were used only to exit blocks and properly handle possibly empty loops.<\/p><p>In fact, we, the authors, did not actually write the final programs; TAMPR did. TAMPR was a powerful software analysis tool developed by Jim Boyle and Ken Dritz at Argonne.  It manipulates and formats Fortran programs to clarify their structure.  It also generates the four data type variants of the programs. So our source code was the Z versions; the S, D, and C versions, as well as \"clean\" Z versions, were produced by TAMPR.<\/p><p>This wasn't just a text processing task.  For example, in ZPOFA, the Cholesky factorization of a positive definite complex Hermitian matrix, the test for a real, positive diagonal is<\/p><pre>          if (s .le. 0.0d0 .or. dimag(a(j,j)) .ne. 0.0d0) go to 40<\/pre><p>In the real version DPOFA there is no imaginary part, so this becomes simply<\/p><pre>          if (s .le. 0.0d0) go to 40<\/pre><h4>BLAS<a name=\"cd557e8c-a6f8-47ba-aba9-67db8d227af8\"><\/a><\/h4><p>All of the inner loops are done by calls to the BLAS, the Basic Linear Algebra Subprograms, developed concurrently by <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2015\/09\/24\/charles-lawson-1931-2015\">Chuck Lawson<\/a> and colleagues.  On systems that did not have optimizing Fortran compilers, the BLAS could be implemented efficiently in machine language.  On vector machines, like the CRAY-1, the loop is a single vector instruction.<\/p><p>The two most important BLAS are the inner product of two vectors, DDOT, and vector plus scalar times vector, DAXPY.  
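<\/p><p>As a sketch of those two kernels, here is a hypothetical Python rendering, for illustration only; the actual package calls the Fortran routines DDOT and DAXPY:<\/p>

```python
# Hypothetical Python sketches of the two level-1 BLAS kernels
# LINPACK leans on; the real routines are Fortran (DDOT, DAXPY).

def ddot(x, y):
    # Inner product: sum over i of x(i)*y(i).
    return sum(xi * yi for xi, yi in zip(x, y))

def daxpy(a, x, y):
    # 'a times x plus y': overwrite y with a*x + y, element by element.
    for i in range(len(y)):
        y[i] += a * x[i]
    return y

x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]
ddot(x, y)        # 32.0
daxpy(2.0, x, y)  # y becomes [6.0, 9.0, 12.0]
```

<p>In a factorization such as DGEFA, updating each remaining column by a multiple of the pivot column is one DAXPY call, so the performance-critical work is concentrated in a handful of small routines.<\/p><p>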
All of the algorithms are column-oriented to conform to Fortran storage of two-dimensional arrays and thereby provide locality of memory access.<\/p><h4>Portability<a name=\"eca89ac1-c5a9-44b3-a33e-5ba9f50f8198\"><\/a><\/h4><p>The programs avoid all machine-dependent quantities.  There is no need for anything like the EISPACK parameter MACHEP.  Only one of the algorithms, the SVD, is iterative and involves a convergence test. Convergence is detected as a special case of a negligible subdiagonal element.  Here is the code for checking whether the subdiagonal element <tt>e(l)<\/tt> is negligible compared to the two nearby diagonal elements <tt>s(l)<\/tt> and <tt>s(l+1)<\/tt>.  I have included enough of the code to show how TAMPR displays structure by indentation and loop exits by comments with an initial <tt>c<\/tt>.<\/p><pre>          do 390 ll = 1, m\r\n             l = m - ll\r\n c        ...exit\r\n             if (l .eq. 0) go to 400\r\n             test = dabs(s(l)) + dabs(s(l+1))\r\n             ztest = test + dabs(e(l))\r\n             if (ztest .ne. test) go to 380\r\n                e(l) = 0.0d0\r\n c        ......exit\r\n                go to 400\r\n   380       continue\r\n   390    continue\r\n   400    continue<\/pre><h4>Publisher<a name=\"db99ad5f-bd54-409a-9ec6-96b66123e806\"><\/a><\/h4><p>No commercial book publisher would publish the <i>LINPACK Users' Guide<\/i>. It did not fit into their established categories; it was neither a textbook nor a research monograph and was neither mathematics nor computer science.<\/p><p>Only one nonprofit publisher, SIAM, was interested.  I vividly remember sitting on a dock on Lake Mendota at the University of Wisconsin-Madison during a SIAM annual meeting, talking to SIAM's Executive Director Ed Block, and deciding that SIAM would take a chance. 
The <i>Guide<\/i> has turned out to be one of SIAM's all-time best sellers.<\/p><p>A few years later, in 1981, Beresford Parlett reviewed the publication in <a href=\"http:\/\/epubs.siam.org\/doi\/pdf\/10.1137\/1023033\">SIAM Review<\/a>.  He wrote:<\/p>\r\n<p style=\"margin-left:3ex;\">\r\nIt may seem strange that SIAM should publish and review a users'\r\nguide for a set of Fortran programs.  Yet history will show that\r\nSIAM is thereby helping to foster a new aspect of technology\r\nwhich currently rejoices in a name mathematical software.\r\nThere is as yet no satisfying characterization of this activity,\r\nbut it certainly concerns the strong effect that a computer\r\nsystem has on the efficiency of a program.\r\n<\/p>\r\n<h4>LINPACK Benchmark<a name=\"04a4bd8b-df14-4b93-9ba6-ee15b6b89446\"><\/a><\/h4><p>Today, LINPACK is better known as a benchmark than as a subroutine library.  I wrote a blog post about the <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2013\/06\/24\/the-linpack-benchmark\/\">LINPACK Benchmark<\/a> in 2013.<\/p><h4>In Retrospect<a name=\"ca1888c9-06d9-4446-939d-07fbde96a040\"><\/a><\/h4><p>In a sense, the LINPACK and EISPACK projects were failures.  We had proposed research projects to the NSF to \"explore the methodology, costs, and resources required to produce, test, and disseminate high-quality mathematical software.\"  We never wrote a report or paper addressing those objectives.  We only produced software.<\/p><h4>References<a name=\"a8eee6f1-0c47-4f30-a3b6-ee6fb75cd838\"><\/a><\/h4><p>J. Dongarra and G. W. Stewart, \"LINPACK, A Package for Solving Linear Systems\", chapter 2 in W. Cowell (ed.), <i>Sources and Development of Mathematical Software<\/i>, Prentice Hall, 1984, 29 pages, <a href=\"http:\/\/www.netlib.org\/utk\/people\/JackDongarra\/PAPERS\/Chapter2-LINPACK.pdf\">PDF available<\/a>.<\/p><p>J. J. Dongarra, J. R. Bunch, C. B. Moler and G. W. 
Stewart, <i>LINPACK Users' Guide<\/i>, SIAM, 1979, 344 pages, <a href=\"http:\/\/epubs.siam.org\/doi\/book\/10.1137\/1.9781611971811\">LINPACK Users' Guide<\/a>.<\/p><p>C. Lawson, R. Hanson, D. Kincaid and F. Krogh, \"Basic Linear Algebra Subprograms for Fortran Usage,\" <i>ACM Trans. Math. Software<\/i> vol. 5, 308-323, 1979.<\/p><p>J. Boyle and K. Dritz, \"An automated programming system to aid the development of quality mathematical software,\" <i>IFIP Proceedings<\/i>, North Holland, 1974.<\/p><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\">Published with MATLAB&reg; R2018a<br><\/p><\/div>","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/linpackb.jpg\" onError=\"this.style.display ='none';\" \/><\/div><!--introduction--><p>The ACM Special Interest Group on Programming Languages, SIGPLAN, expects to hold the fourth in a series of conferences on the History of Programming Languages in 2020, see <a href=\"https:\/\/hopl4.sigplan.org\/\">HOPL-IV<\/a>.  The first drafts of papers are to be submitted by August, 2018.  That long lead time gives me the opportunity to write a detailed history of MATLAB. I plan to write the paper in sections, which I'll post in this blog as they are available.  This is the third such installment.... <a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/cleve\/2018\/01\/23\/linpack-linear-equation-package\/\">read more 
>><\/a><\/p>","protected":false},"author":78,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[4,6,16],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/2966"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/comments?post=2966"}],"version-history":[{"count":2,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/2966\/revisions"}],"predecessor-version":[{"id":2982,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/2966\/revisions\/2982"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media?parent=2966"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/categories?post=2966"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/tags?post=2966"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}