{"id":471,"date":"2013-02-02T14:02:30","date_gmt":"2013-02-02T19:02:30","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=471"},"modified":"2013-05-02T10:07:58","modified_gmt":"2013-05-02T15:07:58","slug":"hilbert-matrices","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2013\/02\/02\/hilbert-matrices\/","title":{"rendered":"Hilbert Matrices"},"content":{"rendered":"<!DOCTYPE html\r\n  PUBLIC \"-\/\/W3C\/\/DTD HTML 4.01 Transitional\/\/EN\">\r\n<style type=\"text\/css\">\r\n\r\nh1 { font-size:18pt; }\r\nh2.titlebg { font-size:13pt; }\r\nh3 { color:#4A4F55; padding:0px; margin:5px 0px 5px; font-family:Arial, Helvetica, sans-serif; font-size:11pt; font-weight:bold; line-height:140%; border-bottom:1px solid #d6d4d4; display:block; }\r\nh4 { color:#4A4F55; padding:0px; margin:0px 0px 5px; font-family:Arial, Helvetica, sans-serif; font-size:10pt; font-weight:bold; line-height:140%; border-bottom:1px solid #d6d4d4; display:block; }\r\n   \r\np { padding:0px; margin:0px 0px 20px; }\r\nimg { padding:0px; margin:0px 0px 20px; border:none; }\r\np img, pre img, tt img, li img { margin-bottom:0px; } \r\n\r\nul { padding:0px; margin:0px 0px 20px 23px; list-style:square; }\r\nul li { padding:0px; margin:0px 0px 7px 0px; background:none; }\r\nul li ul { padding:5px 0px 0px; margin:0px 0px 7px 23px; }\r\nul li ol li { list-style:decimal; }\r\nol { padding:0px; margin:0px 0px 20px 0px; list-style:decimal; }\r\nol li { padding:0px; margin:0px 0px 7px 23px; list-style-type:decimal; }\r\nol li ol { padding:5px 0px 0px; margin:0px 0px 7px 0px; }\r\nol li ol li { list-style-type:lower-alpha; }\r\nol li ul { padding-top:7px; }\r\nol li ul li { list-style:square; }\r\n\r\npre, tt, code { font-size:12px; }\r\npre { margin:0px 0px 20px; }\r\npre.error { color:red; }\r\npre.codeinput { padding:10px; border:1px solid #d3d3d3; background:#f7f7f7; }\r\npre.codeoutput { padding:10px 11px; margin:0px 0px 20px; color:#4c4c4c; }\r\n\r\n@media print { 
pre.codeinput, pre.codeoutput { word-wrap:break-word; width:100%; } }\r\n\r\nspan.keyword { color:#0000FF }\r\nspan.comment { color:#228B22 }\r\nspan.string { color:#A020F0 }\r\nspan.untermstring { color:#B20000 }\r\nspan.syscmd { color:#B28C00 }\r\n\r\n.footer { width:auto; padding:10px 0px; margin:25px 0px 0px; border-top:1px dotted #878787; font-size:0.8em; line-height:140%; font-style:italic; color:#878787; text-align:left; float:none; }\r\n.footer p { margin:0px; }\r\n\r\n  <\/style><div class=\"content\"><!--introduction--><p>The inverse Hilbert matrix, <a href=\"https:\/\/www.mathworks.com\/help\/matlab\/ref\/invhilb.html\"><tt>invhilb<\/tt><\/a>, has recently made surprise appearances in <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/cody\/problems\/4-make-a-checkerboard-matrix\">Cody<\/a>, the programming game on MATLAB Central, and <a href=\"https:\/\/blogs.mathworks.com\/community\/2013\/01\/07\/football-squares-with-matlab\/\">one of Ned's posts<\/a> in the <i>MATLAB Spoken Here<\/i> blog. Inverse Hilbert matrices had nearly been forgotten in MATLAB. Their comeback is due to the sign pattern of their entries. 
But I want to take you back to their original role demonstrating ill conditioning in numerical calculation.<\/p><!--\/introduction--><h3>Contents<\/h3><div><ul><li><a href=\"#556789d6-254f-4960-8268-d3f2489e333a\">Hilbert Matrix<\/a><\/li><li><a href=\"#ca7562bb-dd81-4643-a778-c70fc866bfa3\">Badly Conditioned<\/a><\/li><li><a href=\"#73083b00-1b97-4570-a516-31796a031dc4\">Inverse Hilbert Matrix<\/a><\/li><li><a href=\"#8b62dd04-5ef2-4fc4-ab79-845676f7f9b0\">An Old Program<\/a><\/li><li><a href=\"#cba4e42d-6004-40d4-9198-ae80f01a6e79\">Roundoff Error<\/a><\/li><li><a href=\"#cf4bde9f-eb68-427c-862b-fe2e2ce756a9\">My Project<\/a><\/li><li><a href=\"#a52be7c8-402b-49b7-920e-11443b74a9ca\">Project Reexamined<\/a><\/li><li><a href=\"#558c1d9c-deec-4cce-bc02-e160499a90b8\">Checkerboard<\/a><\/li><li><a href=\"#5df0e699-5bcc-4213-b7a3-2fcfdad67236\">References<\/a><\/li><\/ul><\/div><pre class=\"codeinput\">   T = invhilb(8);\r\n   imagesc(sign(T))\r\n   axis <span class=\"string\">image<\/span>, axis <span class=\"string\">off<\/span>\r\n   colormap(copper)\r\n<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/cleve\/hilbert_blog_01.png\" alt=\"\"> <h4>Hilbert Matrix<a name=\"556789d6-254f-4960-8268-d3f2489e333a\"><\/a><\/h4><p>The Hilbert matrix is the first matrix I ever knew by name.  I met it in my first numerical analysis course, when I was a junior at Caltech in 1959. The matrix comes from least squares approximation by monomials, $x^{j}$, on the unit interval $[0,1]$.  The elements of $H_n$ are inner products<\/p><p>$$ h_{i,j} = \\int_0^1 x^{i-1} x^{j-1} dx = \\frac{1}{i+j-1} $$<\/p><p>Here is the code to generate the 6-by-6 Hilbert matrix in a single array operation.  Similar code is used in the function <tt>hilb<\/tt> that is distributed with MATLAB.  
I am using single precision so we can see roundoff error in matrices that print nicely in the width of one blog page.<\/p><pre class=\"codeinput\">   format <span class=\"string\">compact<\/span>\r\n   format <span class=\"string\">long<\/span>\r\n   n = 6\r\n   J = 1:n;\r\n   J = J(ones(n,1),:);\r\n   I = J';\r\n   E = single(ones(n,n));\r\n   H = E.\/(I+J-1)\r\n<\/pre><pre class=\"codeoutput\">n =\r\n     6\r\nH =\r\n   1.0000000   0.5000000   0.3333333   0.2500000   0.2000000   0.1666667\r\n   0.5000000   0.3333333   0.2500000   0.2000000   0.1666667   0.1428571\r\n   0.3333333   0.2500000   0.2000000   0.1666667   0.1428571   0.1250000\r\n   0.2500000   0.2000000   0.1666667   0.1428571   0.1250000   0.1111111\r\n   0.2000000   0.1666667   0.1428571   0.1250000   0.1111111   0.1000000\r\n   0.1666667   0.1428571   0.1250000   0.1111111   0.1000000   0.0909091\r\n<\/pre><h4>Badly Conditioned<a name=\"ca7562bb-dd81-4643-a778-c70fc866bfa3\"><\/a><\/h4><p>The monomials $x^{j}$ are nearly linearly dependent on $[0,1]$. So the Hilbert matrices are nearly singular. 
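For readers following along outside MATLAB, the same one-shot array construction can be sketched in NumPy (an illustrative translation, not the code shipped with MATLAB; the name `hilb` is reused here for convenience):

```python
import numpy as np

def hilb(n):
    # H[i, j] = 1 / (i + j - 1) for 1-based i, j, built in a single
    # broadcasted array operation, mirroring the MATLAB snippet above.
    k = np.arange(1, n + 1)
    return 1.0 / (k[:, None] + k[None, :] - 1)

H = hilb(6)
```

Broadcasting `k[:, None] + k[None, :]` plays the role of the `J(ones(n,1),:)` index-replication trick in the MATLAB code.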
The condition of $H_6$ is comparable with single precision roundoff error and the condition of $H_{12}$ is comparable with double precision roundoff error.<\/p><pre class=\"codeinput\">   format <span class=\"string\">short<\/span> <span class=\"string\">e<\/span>\r\n   [1\/cond(hilb(6)) eps(single(1))]\r\n   [1\/cond(hilb(12)) eps(double(1))]\r\n<\/pre><pre class=\"codeoutput\">ans =\r\n  6.6885e-08  1.1921e-07\r\nans =\r\n   5.7268e-17   2.2204e-16\r\n<\/pre><p>The $l_2$ condition number, $\\kappa$, of $H_n$ grows exponentially.<\/p><p>$$ \\kappa(H_n) \\approx 0.01133e^{3.49n} $$<\/p><h4>Inverse Hilbert Matrix<a name=\"73083b00-1b97-4570-a516-31796a031dc4\"><\/a><\/h4><p>A Hilbert matrix qualifies as a Cauchy matrix, which is a matrix whose entries are of the form<\/p><p>$$ a_{i,j} = \\frac{1}{x_i - y_j} $$<\/p><p>A classic Knuth homework problem or the Wikipedia entry on Cauchy matrices (see References) shows how it is possible to express the elements of the inverse of a Cauchy matrix in terms of products involving the $x_i$'s and $y_j$'s.  For a Hilbert matrix, those products become binomial coefficients.<\/p><p>Here is a program that generates the inverse Hilbert matrix using doubly nested <tt>for<\/tt> loops and many scalar evaluations of binomial coefficients.<\/p><pre class=\"codeinput\">type <span class=\"string\">invh<\/span>\r\n<\/pre><pre class=\"codeoutput\">\r\nfunction T = invh(n)\r\n   for i = 1:n\r\n      for j = 1:n\r\n         T(i,j) = (-1)^(i+j)*(i+j-1)*nchoosek(n+i-1,n-j)* ...\r\n            nchoosek(n+j-1,n-i)*nchoosek(i+j-2,i-1)^2;\r\n      end\r\n   end\r\nend\r\n\r\n<\/pre><h4>An Old Program<a name=\"8b62dd04-5ef2-4fc4-ab79-845676f7f9b0\"><\/a><\/h4><p>One of the first programs that I ever wrote generated the inverse Hilbert matrix from these binomial coefficients.  It was written in absolute numeric machine language for the Burroughs 205 Datatron at Caltech around 1959.  
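The exponential growth estimate $\kappa(H_n) \approx 0.01133e^{3.49n}$ quoted earlier is easy to spot-check numerically. Here is a rough sketch using SciPy's built-in `scipy.linalg.hilbert`; the sample sizes are chosen arbitrarily for illustration:

```python
import numpy as np
from scipy.linalg import hilbert

# Compare the computed 2-norm condition number of H_n with the
# empirical growth estimate kappa(H_n) ~ 0.01133 * exp(3.49 n).
ratios = []
for n in (4, 6, 8):
    kappa = np.linalg.cond(hilbert(n))
    approx = 0.01133 * np.exp(3.49 * n)
    ratios.append(kappa / approx)
```

For these n the ratio stays close to 1, so the fitted formula tracks the computed condition numbers well.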
I needed to avoid time consuming floating point arithmetic, so I used recursion relationships among the coefficients. Here is the core of the MATLAB version of that old program. For a long time this has been distributed with MATLAB as <tt>invhilb<\/tt>. This is the function that the Cody participants discovered.<\/p><pre class=\"codeinput\">type <span class=\"string\">invhilb_code<\/span>\r\n<\/pre><pre class=\"codeoutput\">\r\n   T = zeros(n,n);\r\n   p = n;\r\n   for i = 1:n\r\n       r = p*p;\r\n       T(i,i) = r\/(2*i-1);\r\n       for j = i+1:n\r\n           r = -((n-j+1)*r*(n+j-1))\/(j-1)^2;\r\n           T(i,j) = r\/(i+j-1);\r\n           T(j,i) = r\/(i+j-1);\r\n       end\r\n       p = ((n-i)*p*(n+i))\/(i^2);\r\n   end\r\n\r\n<\/pre><p>We can use either code to generate the exact inverse of $H_6$.<\/p><pre class=\"codeinput\">T = invhilb(6)\r\n<\/pre><pre class=\"codeoutput\">T =\r\n          36        -630        3360       -7560        7560       -2772\r\n        -630       14700      -88200      211680     -220500       83160\r\n        3360      -88200      564480    -1411200     1512000     -582120\r\n       -7560      211680    -1411200     3628800    -3969000     1552320\r\n        7560     -220500     1512000    -3969000     4410000    -1746360\r\n       -2772       83160     -582120     1552320    -1746360      698544\r\n<\/pre><p>Since the elements of a Hilbert matrix are rational fractions, the elements of its inverse must also be rational fractions. But it turns out that the denominators of all the fractions are equal to one, so, as you can see, the inverse has integer entries. 
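As a cross-check that the entries really are integers, the binomial-coefficient formula from `invh` can be evaluated in exact integer arithmetic. This is an illustrative Python translation using `math.comb`, not the MATLAB source:

```python
from math import comb

def invh(n):
    # Exact inverse Hilbert matrix from binomial coefficients, following
    # the formula in the invh listing: all-integer, no rounding anywhere.
    return [[(-1) ** (i + j) * (i + j - 1)
             * comb(n + i - 1, n - j)
             * comb(n + j - 1, n - i)
             * comb(i + j - 2, i - 1) ** 2
             for j in range(1, n + 1)]
            for i in range(1, n + 1)]

T = invh(6)   # matches the invhilb(6) table: T[0][0] = 36, T[0][1] = -630, ...
```

Because Python integers are arbitrary precision, this version never overflows, whatever the value of `n`.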
The integers are large, hence the large condition number.<\/p><h4>Roundoff Error<a name=\"cba4e42d-6004-40d4-9198-ae80f01a6e79\"><\/a><\/h4><p>Let's invert <tt>T<\/tt> and see how close we get to <tt>H<\/tt>.<\/p><pre class=\"codeinput\">   format <span class=\"string\">long<\/span>\r\n   inv(single(T))\r\n<\/pre><pre class=\"codeoutput\">Warning: Matrix is close to singular or badly scaled. Results may be inaccurate.\r\nRCOND =  3.380340e-08. \r\nans =\r\n   1.0102043   0.5085138   0.3406332   0.2563883   0.2056789   0.1717779\r\n   0.5085138   0.3404373   0.2560914   0.2053309   0.1714057   0.1471225\r\n   0.3406332   0.2560914   0.2052232   0.1712378   0.1469208   0.1286576\r\n   0.2563883   0.2053309   0.1712378   0.1468577   0.1285564   0.1143121\r\n   0.2056789   0.1714057   0.1469208   0.1285564   0.1142728   0.1028457\r\n   0.1717779   0.1471225   0.1286576   0.1143121   0.1028457   0.0934704\r\n<\/pre><p>We can just recognize one or two digits of <tt>H<\/tt>.  Since the condition number of <tt>T<\/tt> is comparable with the single precision roundoff level, we might have expected to lose all the digits.  Roundoff error observed in actual matrix computation is usually not as bad as estimates based on condition numbers predict.<\/p><h4>My Project<a name=\"cf4bde9f-eb68-427c-862b-fe2e2ce756a9\"><\/a><\/h4><p>Over 50 years ago, after my first numerical analysis course, where I was introduced to matrix computation by Prof. John Todd, I did an individual undergrad research project under his direction.  Part of the project involved studying the roundoff error in matrix inversion.  First, I wrote a matrix inversion program.  Then, in order to assess roundoff error, I generated Hilbert matrices, inverted them with my program, and compared the computed inverses with the exact inverses generated by the program that I've just described.  
If I'd had MATLAB at the time, the project would have gone something like this.<\/p><pre class=\"codeinput\">   n = 6;\r\n   H = single(hilb(n));\r\n   X = inv(H);\r\n   T = single(invhilb(n));\r\n   relerr = norm(X-T,inf)\/norm(T)\r\n<\/pre><pre class=\"codeoutput\">Warning: Matrix is close to singular or badly scaled. Results may be inaccurate.\r\nRCOND =  3.570602e-08. \r\nrelerr =\r\n   0.0470157\r\n<\/pre><p>(My old matrix inversion program would not have estimated condition and issued a warning message.)<\/p><p>The Datatron floating point arithmetic had eight decimal digits, so I used values of <tt>n<\/tt> up to 7 and reported the observed <tt>relerr<\/tt>'s as the result of roundoff error from matrix inversion by Gaussian elimination for this particular badly conditioned matrix.  Case closed.  Project grade: A.<\/p><h4>Project Reexamined<a name=\"a52be7c8-402b-49b7-920e-11443b74a9ca\"><\/a><\/h4><p>I went on to grad school at Stanford and met George Forsythe and later Jim Wilkinson.  As I came to appreciate the work that Wilkinson was doing, I realized that in my project at Caltech I had made a subtle but important mistake.  The same mistake is still being made today by others. I had not actually inverted the Hilbert matrix.  I had inverted the floating point approximation to the Hilbert matrix.  It turns out that the initial step -- converting the fractions that are the elements of the Hilbert matrix to floating point numbers -- has a larger effect on the computed inverse than the inversion process itself.  Even if my inversion program could somehow compute the exact inverse of the matrix it is given, it could not match the result generated by my theoretical inverse Hilbert matrix program.<\/p><p>I should have inverted the inverse, because I would not have to make any roundoff errors to generate the input.  Since the elements of the starting matrix are integers, they can be represented exactly in floating point, at least if <tt>n<\/tt> is not too large.  
I should have interchanged the roles of <tt>T<\/tt> and <tt>H<\/tt> in my project.<\/p><pre class=\"codeinput\">   n = 6;\r\n   T = single(invhilb(n));\r\n   X = inv(T);\r\n   H = single(hilb(n));\r\n   relerr = norm(X-H,inf)\/norm(H)\r\n<\/pre><pre class=\"codeoutput\">Warning: Matrix is close to singular or badly scaled. Results may be inaccurate.\r\nRCOND =  3.380340e-08. \r\nrelerr =\r\n   0.0266826\r\n<\/pre><p>This confirms what Wilkinson had been saying, and proving. The roundoff error caused by inverting a matrix using Gaussian elimination with partial pivoting is comparable with -- in this case actually smaller than -- the error caused by rounding the data in the first place.<\/p><h4>Checkerboard<a name=\"558c1d9c-deec-4cce-bc02-e160499a90b8\"><\/a><\/h4><p>You can see the +\/- sign pattern of the elements in the inverse Hilbert matrix by examining <tt>T<\/tt> or the <tt>(-1)^(i+j)<\/tt> multiplying the binomial coefficients in <tt>invh<\/tt>.  That's what caught the attention of the folks playing Cody.  I have no idea how they came across it.<\/p><h4>References<a name=\"5df0e699-5bcc-4213-b7a3-2fcfdad67236\"><\/a><\/h4><p>[1] Donald E. Knuth, <i>The Art of Computer Programming<\/i>, vol. 1, Problems 1.2.3-41 and 1.2.3-45, pp. 
36-37, Addison-Wesley, 1968.<\/p><p>[2] Wikipedia entry on Cauchy matrix, <a href=\"http:\/\/en.wikipedia.org\/wiki\/Cauchy_matrix\">&lt;http:\/\/en.wikipedia.org\/wiki\/Cauchy_matrix<\/a>&gt;.<\/p><script language=\"JavaScript\"> <!-- \r\n    function grabCode_18d7b8e7795c428395ff3ae25374aebb() {\r\n        \/\/ Remember the title so we can use it in the new page\r\n        title = document.title;\r\n\r\n        \/\/ Break up these strings so that their presence\r\n        \/\/ in the Javascript doesn't mess up the search for\r\n        \/\/ the MATLAB code.\r\n        t1='18d7b8e7795c428395ff3ae25374aebb ' + '##### ' + 'SOURCE BEGIN' + ' #####';\r\n        t2='##### ' + 'SOURCE END' + ' #####' + ' 18d7b8e7795c428395ff3ae25374aebb';\r\n    \r\n        b=document.getElementsByTagName('body')[0];\r\n        i1=b.innerHTML.indexOf(t1)+t1.length;\r\n        i2=b.innerHTML.indexOf(t2);\r\n \r\n        code_string = b.innerHTML.substring(i1, i2);\r\n        code_string = code_string.replace(\/REPLACE_WITH_DASH_DASH\/g,'--');\r\n\r\n        \/\/ Use \/x3C\/g instead of the less-than character to avoid errors \r\n        \/\/ in the XML parser.\r\n        \/\/ Use '\\x26#60;' instead of '<' so that the XML parser\r\n        \/\/ doesn't go ahead and substitute the less-than character. 
\r\n        code_string = code_string.replace(\/\\x3C\/g, '\\x26#60;');\r\n\r\n        copyright = 'Copyright 2013 The MathWorks, Inc.';\r\n\r\n        w = window.open();\r\n        d = w.document;\r\n        d.write('<pre>\\n');\r\n        d.write(code_string);\r\n\r\n        \/\/ Add copyright line at the bottom if specified.\r\n        if (copyright.length > 0) {\r\n            d.writeln('');\r\n            d.writeln('%%');\r\n            if (copyright.length > 0) {\r\n                d.writeln('% _' + copyright + '_');\r\n            }\r\n        }\r\n\r\n        d.write('<\/pre>\\n');\r\n\r\n        d.title = title + ' (MATLAB code)';\r\n        d.close();\r\n    }   \r\n     --> <\/script><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\"><br><a href=\"javascript:grabCode_18d7b8e7795c428395ff3ae25374aebb()\"><span style=\"font-size: x-small;        font-style: italic;\">Get \r\n      the MATLAB code <noscript>(requires JavaScript)<\/noscript><\/span><\/a><br><br>\r\n      Published with MATLAB&reg; R2012b<br><\/p><p class=\"footer\"><br>\r\n      Published with MATLAB&reg; R2012b<br><\/p><\/div><!--\r\n18d7b8e7795c428395ff3ae25374aebb ##### SOURCE BEGIN #####\r\n%% Hilbert Matrices\r\n% The inverse Hilbert matrix,\r\n% <https:\/\/www.mathworks.com\/help\/matlab\/ref\/invhilb.html |invhilb|>,\r\n% has recently made surprise appearances in\r\n% <https:\/\/www.mathworks.com\/matlabcentral\/cody\/problems\/4-make-a-checkerboard-matrix Cody>, \r\n% the programming game on MATLAB Central,\r\n% and\r\n% <https:\/\/blogs.mathworks.com\/community\/2013\/01\/07\/football-squares-with-matlab\/  one of Ned's posts>\r\n% in the _MATLAB Spoken Here_ blog.\r\n% Inverse Hilbert matrices had nearly been forgotten in MATLAB.\r\n% Their comeback is due to the sign pattern of their entries.\r\n% But I want to take you back to their original role demonstrating \r\n% ill conditioning in numerical calculation.\r\n\r\n%%\r\n   
T = invhilb(8);\r\n   imagesc(sign(T))\r\n   axis image, axis off\r\n   colormap(copper)  \r\n\r\n%% Hilbert Matrix\r\n% The Hilbert matrix is the first matrix I ever knew by name.  I met it in\r\n% my first numerical analysis course, when I was a junior at Caltech in 1959.\r\n% The matrix comes from least squares approximation by monomials, $x^{j}$,\r\n% on the unit interval $[0,1]$.  The elements of $H_n$ are inner products\r\n%\r\n% $$ h_{i,j} = \\int_0^1 x^{i-1} x^{j-1} dx = \\frac{1}{i+j-1} $$\r\n%\r\n%%\r\n% Here is the code to generate the 6-by-6 Hilbert matrix in a single array\r\n% operation.  Similar code is used in the function |hilb| that is distributed\r\n% with MATLAB.  I am using single precision so we can see roundoff error in\r\n% matrices that print nicely in the width of one blog page.\r\n\r\n   format compact\r\n   format long\r\n   n = 6\r\n   J = 1:n;\r\n   J = J(ones(n,1),:);\r\n   I = J';\r\n   E = single(ones(n,n));\r\n   H = E.\/(I+J-1)\r\n\r\n%% Badly Conditioned\r\n% The monomials $x^{j}$ are nearly linearly dependent on $[0,1]$.\r\n% So the Hilbert matrices are nearly singular.\r\n% The condition of $H_6$ is comparable with single precision roundoff error\r\n% and the condition of $H_{12}$ is comparable with double precision roundoff\r\n% error.\r\n\r\n   format short e\r\n   [1\/cond(hilb(6)) eps(single(1))]\r\n   [1\/cond(hilb(12)) eps(double(1))]\r\n\r\n%%\r\n% The $l_2$ condition number, $\\kappa$, of $H_n$ grows exponentially.\r\n%\r\n% $$ \\kappa(H_n) \\approx 0.01133e^{3.49n} $$\r\n%\r\n\r\n%% Inverse Hilbert Matrix\r\n% A Hilbert matrix qualifies as a Cauchy matrix, which is a matrix whose\r\n% entries are of the form\r\n%\r\n% $$ a_{i,j} = \\frac{1}{x_i - y_j} $$\r\n%\r\n% A classic Knuth homework problem or the Wikipedia entry on Cauchy matrices\r\n% (see References) shows how it is possible to express the elements of the\r\n% inverse of a Cauchy matrix in terms of products involving the $x_i$'s\r\n% and $y_j$'s.  
For a Hilbert matrix, those products become binomial\r\n% coefficients.\r\n\r\n%%\r\n% Here is a program that generates the inverse Hilbert matrix using\r\n% doubly nested |for| loops and many scalar evaluations of binomial\r\n% coefficients.\r\n\r\ntype invh\r\n\r\n%% An Old Program\r\n% One of the first programs that I ever wrote generated the inverse Hilbert\r\n% matrix from these binomial coefficients.  It was written in absolute\r\n% numeric machine language for the Burroughs 205 Datatron at Caltech\r\n% around 1959.  I needed to avoid time consuming floating point arithmetic,\r\n% so I used recursion relationships among the coefficients.\r\n% Here is the core of the MATLAB version of that old program.\r\n% For a long time this has been distributed with MATLAB as |invhilb|.\r\n% This is the function that the Cody participants discovered.\r\n\r\ntype invhilb_code\r\n\r\n%% \r\n% We can use either code to generate the exact inverse of $H_6$.\r\n\r\nT = invhilb(6)\r\n\r\n%%\r\n% Since the elements of a Hilbert matrix are rational fractions,\r\n% the elements of its inverse must also be rational fractions.\r\n% But it turns out that the denominators of all the fractions are equal\r\n% to one, so, as you can see, the inverse has integer entries.\r\n% The integers are large, hence the large condition number.\r\n\r\n%% Roundoff Error\r\n% Let's invert |T| and see how close we get to |H|.\r\n\r\n   format long\r\n   inv(single(T))\r\n\r\n%%\r\n% We can just recognize one or two digits of |H|.  Since the condition\r\n% number of |T|, is comparable with single precision roundoff level,\r\n% we might have expected to lose all the digits.  Roundoff error observed\r\n% in actual matrix computation is usually not as bad as estimates based\r\n% on condition numbers predict.\r\n\r\n%% My Project\r\n% Over 50 years ago,\r\n% after my first numerical analysis course, where I was introduced to\r\n% matrix computation by Prof. 
John Todd, I did an individual undergrad\r\n% research project under his direction.  Part of the project involved\r\n% studying the roundoff error in matrix inversion.  First, I wrote a matrix\r\n% inversion program.  Then, in order to assess roundoff error, I generated\r\n% Hilbert matrices, inverted them with my program, and compared the computed\r\n% inverses with the exact inverses generated by the program that I've just\r\n% described.  If I'd had MATLAB at the time, the project would have gone\r\n% something like this.\r\n\r\n   n = 6;\r\n   H = single(hilb(n));\r\n   X = inv(H);\r\n   T = single(invhilb(n));\r\n   relerr = norm(X-T,inf)\/norm(T)\r\n\r\n%%\r\n% (My old matrix inversion program would not have estimated condition and\r\n% issued a warning message.)\r\n\r\n%%\r\n% The Datatron floating point arithmetic had eight decimal digits, so I used\r\n% values of |n| up to 7 and reported the observed |relerr|'s as the result\r\n% of roundoff error from matrix inversion by Gaussian elimination for\r\n% this particular badly conditioned matrix.  Case closed.  Project grade: A.\r\n\r\n%% Project Reexamined\r\n% I went on to grad school at Stanford and met George Forsythe and later\r\n% Jim Wilkinson.  As I came to appreciate the work that Wilkinson was doing,\r\n% I realized that in my project at Caltech I had made a subtle but\r\n% important mistake.  The same mistake is still being made today by others.\r\n% I had not actually inverted the Hilbert matrix.  I had inverted the floating\r\n% point approximation to the Hilbert matrix.  It turns out that the initial\r\n% step REPLACE_WITH_DASH_DASH converting the fractions that are the elements of the Hilbert\r\n% matrix to floating point numbers REPLACE_WITH_DASH_DASH has a larger effect on the \r\n% computed inverse than the inversion process itself.  
Even if my inversion\r\n% program could somehow compute the exact inverse of the matrix it is given,\r\n% it could not match the result generated by my theoretical inverse Hilbert\r\n% matrix program.\r\n\r\n%%\r\n% I should have inverted the inverse, because I would not have to make any\r\n% roundoff errors to generate the input.  Since the elements of the starting\r\n% matrix are integers, they can be represented exactly in floating point,\r\n% at least if |n| is not too large.  I should have interchanged the roles of\r\n% |T| and |H| in my project.\r\n\r\n   n = 6;\r\n   T = single(invhilb(n));\r\n   X = inv(T);\r\n   H = single(hilb(n));\r\n   relerr = norm(X-H,inf)\/norm(H)\r\n\r\n%%\r\n% This confirms what Wilkinson had been saying, and proving.\r\n% The roundoff error caused by inverting a matrix using Gaussian\r\n% elimination with partial pivoting is comparable with REPLACE_WITH_DASH_DASH in this\r\n% case actually smaller than REPLACE_WITH_DASH_DASH the error caused by rounding the\r\n% data in the first place.\r\n\r\n%% Checkerboard\r\n% You can see the +\/- sign pattern of the elements in the inverse Hilbert\r\n% matrix by examining |T| or the |(-1)^(i+j)| multiplying the\r\n% binomial coefficients in |invh|.  That's what caught the attention of the\r\n% folks playing Cody.  I have no idea how they came across it.\r\n\r\n%% References\r\n% [1] Donald E. Knuth, _The Art of Computer Programming_, vol. 1,\r\n% Problems 1.2.3-41 and 1.2.3-45, pp. 
36-37, Addison-Wesley, 1968.\r\n%%\r\n% [2] Wikipedia entry on Cauchy matrix,\r\n% <http:\/\/en.wikipedia.org\/wiki\/Cauchy_matrix\r\n% http:\/\/en.wikipedia.org\/wiki\/Cauchy_matrix>.\r\n\r\n##### SOURCE END ##### 18d7b8e7795c428395ff3ae25374aebb\r\n-->","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/images\/cleve\/hilbert_blog_01.png\" onError=\"this.style.display ='none';\" \/><\/div><!--introduction--><p>The inverse Hilbert matrix, <a href=\"https:\/\/www.mathworks.com\/help\/matlab\/ref\/invhilb.html\"><tt>invhilb<\/tt><\/a>, has recently made surprise appearances in <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/cody\/problems\/4-make-a-checkerboard-matrix\">Cody<\/a>, the programming game on MATLAB Central, and <a href=\"https:\/\/blogs.mathworks.com\/community\/2013\/01\/07\/football-squares-with-matlab\/\">one of Ned's posts<\/a> in the <i>MATLAB Spoken Here<\/i> blog. Inverse Hilbert matrices had nearly been forgotten in MATLAB. Their comeback is due to the sign pattern of their entries. But I want to take you back to their original role demonstrating ill conditioning in numerical calculation.... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/cleve\/2013\/02\/02\/hilbert-matrices\/\">read more >><\/a><\/p>","protected":false},"author":78,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[6],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/471"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/comments?post=471"}],"version-history":[{"count":11,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/471\/revisions"}],"predecessor-version":[{"id":658,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/471\/revisions\/658"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media?parent=471"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/categories?post=471"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/tags?post=471"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}