{"id":4960,"date":"2019-06-24T20:30:54","date_gmt":"2019-06-25T01:30:54","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=4960"},"modified":"2021-01-31T14:37:07","modified_gmt":"2021-01-31T19:37:07","slug":"bohemian-matrices-in-the-matlab-gallery","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2019\/06\/24\/bohemian-matrices-in-the-matlab-gallery\/","title":{"rendered":"Bohemian Matrices in the MATLAB\u00ae Gallery"},"content":{"rendered":"<div class=\"content\"><!--introduction--><p>We will have a two-part minisymposium on \"Bohemian Matrices\" at <a title=\"Old link was https:\/\/iciam2019.org\">ICIAM2019<\/a>, the International Congress on Industrial and Applied Mathematics in Valencia, Spain, July 15-19. This is an outline of my talk.<\/p><!--\/introduction--><h3>Contents<\/h3><div><ul><li><a href=\"#f9a707bd-da63-4e21-8360-c29255408186\">Why \"Bohemian\"<\/a><\/li><li><a href=\"#ba754f30-8423-473f-8e04-80909e5f8032\">The MATLAB Gallery<\/a><\/li><li><a href=\"#46b0e738-1efb-4b71-a857-3acf8906d0c3\">gallery5<\/a><\/li><li><a href=\"#223c65cc-6466-419a-9329-a87b2127647c\">triw<\/a><\/li><li><a href=\"#814dcc3b-fd02-4c0f-b8cc-50a8ca4de349\">moler<\/a><\/li><li><a href=\"#3ab9b561-0f10-4eb8-ab05-4f30b2791026\">kahan<\/a><\/li><li><a href=\"#7d0df1e5-a1bb-403a-8a83-5abc64e84b6f\">rosser<\/a><\/li><li><a href=\"#b5d742c4-c93f-4f4b-b523-7c4193b5e932\">parter<\/a><\/li><li><a href=\"#22b8f2a3-261f-4c4a-a9f0-72b751f86de1\">pascal<\/a><\/li><li><a href=\"#9ad70807-3a8a-4b46-ba6f-6d7701c1757a\">grcar<\/a><\/li><li><a href=\"#9e424e64-c12b-4ee2-80ff-39cf1b1bd77d\">gallery5<\/a><\/li><\/ul><\/div><h4>Why \"Bohemian\"<a name=\"f9a707bd-da63-4e21-8360-c29255408186\"><\/a><\/h4><p>The colorful name \"Bohemian Matrices\" is the responsibility of Rob Corless and Steven Thornton of the University of Western Ontario. 
Quoting <a href=\"http:\/\/www.bohemianmatrices.com\">the web site<\/a>:<\/p>\r\n<p style=\"margin-left:3ex;\">\r\nWhen this project was in its early stages, our focus was on random\r\ninteger matrices of bounded height. We originally used the phrase\r\n\"bounded height integer matrix eigenvalues\" to refer to our work.\r\nThis led to the acronym BHIME which isn't too far off from \"Bohemian\".\r\n<\/p>\r\n<h4>The MATLAB Gallery<a name=\"ba754f30-8423-473f-8e04-80909e5f8032\"><\/a><\/h4><p>The <tt>gallery<\/tt> is a collection of interesting matrices, maintained in MATLAB by Nick Higham.  Many of them are Bohemian, or Bohemian wannabes.  A catalog for the collection is available with<\/p><pre class=\"codeinput\">   help <span class=\"string\">gallery_<\/span>\r\n<\/pre><pre class=\"codeoutput\"> GALLERY Higham test matrices.\r\n    [out1,out2,...] = GALLERY(matname, param1, param2, ...)\r\n    takes matname, a string that is the name of a matrix family, and\r\n    the family's input parameters. See the listing below for available\r\n    matrix families. 
Most of the functions take an input argument\r\n    that specifies the order of the matrix, and unless otherwise\r\n    stated, return a single output.\r\n \r\n    For additional information, type \"help private\/matname\", where matname\r\n    is the name of the matrix family.\r\n \r\n    binomial    Binomial matrix -- multiple of involutory matrix.\r\n    cauchy      Cauchy matrix.\r\n    chebspec    Chebyshev spectral differentiation matrix.\r\n    chebvand    Vandermonde-like matrix for the Chebyshev polynomials.\r\n    chow        Chow matrix -- a singular Toeplitz lower Hessenberg matrix.\r\n    circul      Circulant matrix.\r\n    clement     Clement matrix -- tridiagonal with zero diagonal entries.\r\n    compar      Comparison matrices.\r\n    condex      Counter-examples to matrix condition number estimators.\r\n    cycol       Matrix whose columns repeat cyclically.\r\n    dorr        Dorr matrix -- diagonally dominant, ill-conditioned, tridiagonal.\r\n                (One or three output arguments, sparse)\r\n    dramadah    Matrix of ones and zeroes whose inverse has large integer entries.\r\n    fiedler     Fiedler matrix -- symmetric.\r\n    forsythe    Forsythe matrix -- a perturbed Jordan block.\r\n    frank       Frank matrix -- ill-conditioned eigenvalues.\r\n    gcdmat      GCD matrix.\r\n    gearmat     Gear matrix.\r\n    grcar       Grcar matrix -- a Toeplitz matrix with sensitive eigenvalues.\r\n    hanowa      Matrix whose eigenvalues lie on a vertical line in the complex\r\n                plane.\r\n    house       Householder matrix. (Three output arguments)\r\n    integerdata Array of arbitrary data from uniform distribution on\r\n                specified range of integers\r\n    invhess     Inverse of an upper Hessenberg matrix.\r\n    invol       Involutory matrix.\r\n    ipjfact     Hankel matrix with factorial elements. 
(Two output arguments)\r\n    jordbloc    Jordan block matrix.\r\n    kahan       Kahan matrix -- upper trapezoidal.\r\n    kms         Kac-Murdock-Szego Toeplitz matrix.\r\n    krylov      Krylov matrix.\r\n    lauchli     Lauchli matrix -- rectangular.\r\n    lehmer      Lehmer matrix -- symmetric positive definite.\r\n    leslie      Leslie matrix.\r\n    lesp        Tridiagonal matrix with real, sensitive eigenvalues.\r\n    lotkin      Lotkin matrix.\r\n    minij       Symmetric positive definite matrix MIN(i,j).\r\n    moler       Moler matrix -- symmetric positive definite.\r\n    neumann     Singular matrix from the discrete Neumann problem (sparse).\r\n    normaldata  Array of arbitrary data from standard normal distribution\r\n    orthog      Orthogonal and nearly orthogonal matrices.\r\n    parter      Parter matrix -- a Toeplitz matrix with singular values near PI.\r\n    pei         Pei matrix.\r\n    poisson     Block tridiagonal matrix from Poisson's equation (sparse).\r\n    prolate     Prolate matrix -- symmetric, ill-conditioned Toeplitz matrix.\r\n    qmult       Pre-multiply matrix by random orthogonal matrix.\r\n    randcolu    Random matrix with normalized cols and specified singular values.\r\n    randcorr    Random correlation matrix with specified eigenvalues.\r\n    randhess    Random, orthogonal upper Hessenberg matrix.\r\n    randjorth   Random J-orthogonal (hyperbolic, pseudo-orthogonal) matrix.\r\n    rando       Random matrix with elements -1, 0 or 1.\r\n    randsvd     Random matrix with pre-assigned singular values and specified\r\n                bandwidth.\r\n    redheff     Matrix of 0s and 1s of Redheffer.\r\n    riemann     Matrix associated with the Riemann hypothesis.\r\n    ris         Ris matrix -- a symmetric Hankel matrix.\r\n    sampling    Nonsymmetric matrix with integer, ill conditioned eigenvalues.\r\n    smoke       Smoke matrix -- complex, with a \"smoke ring\" pseudospectrum.\r\n    toeppd      Symmetric positive 
definite Toeplitz matrix.\r\n    toeppen     Pentadiagonal Toeplitz matrix (sparse).\r\n    tridiag     Tridiagonal matrix (sparse).\r\n    triw        Upper triangular matrix discussed by Wilkinson and others.\r\n    uniformdata Array of arbitrary data from standard uniform distribution\r\n    wathen      Wathen matrix -- a finite element matrix (sparse, random entries).\r\n    wilk        Various specific matrices devised\/discussed by Wilkinson.\r\n                (Two output arguments)\r\n \r\n    GALLERY(3) is a badly conditioned 3-by-3 matrix.\r\n    GALLERY(5) is an interesting eigenvalue problem.  Try to find\r\n    its EXACT eigenvalues and eigenvectors.\r\n \r\n    See also MAGIC, HILB, INVHILB, HADAMARD, PASCAL, ROSSER, VANDER, WILKINSON.\r\n\r\n<\/pre><p>The matrices in the <tt>See also<\/tt> line were available before <tt>gallery<\/tt> was introduced.  I think of them as part of the gallery.<\/p><h4>gallery5<a name=\"46b0e738-1efb-4b71-a857-3acf8906d0c3\"><\/a><\/h4><p>My favorite matrix in the gallery is<\/p><pre class=\"codeinput\">   G = gallery(5)\r\n<\/pre><pre class=\"codeoutput\">\r\nG =\r\n\r\n          -9          11         -21          63        -252\r\n          70         -69         141        -421        1684\r\n        -575         575       -1149        3451      -13801\r\n        3891       -3891        7782      -23345       93365\r\n        1024       -1024        2048       -6144       24572\r\n\r\n<\/pre><p>Let's compute its eigenvalues.<\/p><pre class=\"codeinput\">   format <span class=\"string\">long<\/span> <span class=\"string\">e<\/span>\r\n   eig(G)\r\n<\/pre><pre class=\"codeoutput\">\r\nans =\r\n\r\n     -3.472940132398842e-02 + 2.579009841174434e-02i\r\n     -3.472940132398842e-02 - 2.579009841174434e-02i\r\n      1.377760760018629e-02 + 4.011025813393478e-02i\r\n      1.377760760018629e-02 - 4.011025813393478e-02i\r\n      4.190358744843689e-02 + 0.000000000000000e+00i\r\n\r\n<\/pre><p>How accurate are these?  
What are the exact eigenvalues?<\/p><p>I will give readers of this post, and attendees at my talk, some time to contemplate these questions by delaying the answers until the end.<\/p><h4>triw<a name=\"223c65cc-6466-419a-9329-a87b2127647c\"><\/a><\/h4><p>The <tt>triw<\/tt> matrix demonstrates that Gaussian elimination cannot reliably detect near singularity.<\/p><pre class=\"codeinput\">   dbtype <span class=\"string\">1:2<\/span> <span class=\"string\">private\/triw<\/span>\r\n<\/pre><pre class=\"codeoutput\">\r\n1     function T = triw(n, alpha, k, classname)\r\n2     %TRIW   Upper triangular matrix discussed by Wilkinson and others.\r\n<\/pre><pre class=\"codeinput\">   n = 12;\r\n   T = gallery(<span class=\"string\">'triw'<\/span>,n)\r\n<\/pre><pre class=\"codeoutput\">\r\nT =\r\n\r\n     1    -1    -1    -1    -1    -1    -1    -1    -1    -1    -1    -1\r\n     0     1    -1    -1    -1    -1    -1    -1    -1    -1    -1    -1\r\n     0     0     1    -1    -1    -1    -1    -1    -1    -1    -1    -1\r\n     0     0     0     1    -1    -1    -1    -1    -1    -1    -1    -1\r\n     0     0     0     0     1    -1    -1    -1    -1    -1    -1    -1\r\n     0     0     0     0     0     1    -1    -1    -1    -1    -1    -1\r\n     0     0     0     0     0     0     1    -1    -1    -1    -1    -1\r\n     0     0     0     0     0     0     0     1    -1    -1    -1    -1\r\n     0     0     0     0     0     0     0     0     1    -1    -1    -1\r\n     0     0     0     0     0     0     0     0     0     1    -1    -1\r\n     0     0     0     0     0     0     0     0     0     0     1    -1\r\n     0     0     0     0     0     0     0     0     0     0     0     1\r\n\r\n<\/pre><p><tt>T<\/tt> is already upper triangular.  It is the <tt>U<\/tt> of its own LU-decomposition.  Since there are no small pivots on the diagonal, we might be tempted to conclude that <tt>T<\/tt> is well conditioned.  
However, let<\/p><pre class=\"codeinput\">   x = 2.^(-(1:n-1))';\r\n   format <span class=\"string\">rat<\/span>\r\n   x(n) = x(n-1)\r\n<\/pre><pre class=\"codeoutput\">\r\nx =\r\n\r\n       1\/2     \r\n       1\/4     \r\n       1\/8     \r\n       1\/16    \r\n       1\/32    \r\n       1\/64    \r\n       1\/128   \r\n       1\/256   \r\n       1\/512   \r\n       1\/1024  \r\n       1\/2048  \r\n       1\/2048  \r\n\r\n<\/pre><p>This vector is nearly a null vector for <tt>T<\/tt>.<\/p><pre class=\"codeinput\">   Tx = T*x\r\n<\/pre><pre class=\"codeoutput\">\r\nTx =\r\n\r\n       0       \r\n       0       \r\n       0       \r\n       0       \r\n       0       \r\n       0       \r\n       0       \r\n       0       \r\n       0       \r\n       0       \r\n       0       \r\n       1\/2048  \r\n\r\n<\/pre><p>The 1-norm condition number of <tt>T<\/tt> is at least as large as<\/p><pre class=\"codeinput\">   norm(x,1)\/norm(Tx,1)\r\n<\/pre><pre class=\"codeoutput\">\r\nans =\r\n\r\n    2048       \r\n\r\n<\/pre><p>That is growing like <tt>2^n<\/tt>.<\/p><h4>moler<a name=\"814dcc3b-fd02-4c0f-b8cc-50a8ca4de349\"><\/a><\/h4><p>The <tt>moler<\/tt> matrix demonstrates that Cholesky decomposition of a symmetric, positive definite matrix cannot reliably detect near singularity either.  
The matrix can either be obtained from the gallery,<\/p><pre class=\"codeinput\">   M = gallery(<span class=\"string\">'moler'<\/span>,n);\r\n<\/pre><p>or generated from <tt>T<\/tt>,<\/p><pre class=\"codeinput\">   format <span class=\"string\">short<\/span>\r\n   M = T'*T\r\n<\/pre><pre class=\"codeoutput\">\r\nM =\r\n\r\n     1    -1    -1    -1    -1    -1    -1    -1    -1    -1    -1    -1\r\n    -1     2     0     0     0     0     0     0     0     0     0     0\r\n    -1     0     3     1     1     1     1     1     1     1     1     1\r\n    -1     0     1     4     2     2     2     2     2     2     2     2\r\n    -1     0     1     2     5     3     3     3     3     3     3     3\r\n    -1     0     1     2     3     6     4     4     4     4     4     4\r\n    -1     0     1     2     3     4     7     5     5     5     5     5\r\n    -1     0     1     2     3     4     5     8     6     6     6     6\r\n    -1     0     1     2     3     4     5     6     9     7     7     7\r\n    -1     0     1     2     3     4     5     6     7    10     8     8\r\n    -1     0     1     2     3     4     5     6     7     8    11     9\r\n    -1     0     1     2     3     4     5     6     7     8     9    12\r\n\r\n<\/pre><p>I was surprised when Nick named the matrix after me. The comments in the code indicate he found the matrix in a 30-year old book by a friend of mine, John Nash.<\/p><pre class=\"codeinput\">   dbtype <span class=\"string\">9:14<\/span> <span class=\"string\">private\/moler<\/span>\r\n<\/pre><pre class=\"codeoutput\">\r\n9     %   Nash [1] attributes the ALPHA = -1 matrix to Moler.\r\n10    %\r\n11    %   Reference:\r\n12    %   [1] J. C. 
Nash, Compact Numerical Methods for Computers: Linear\r\n13    %   Algebra and Function Minimisation, second edition, Adam Hilger,\r\n14    %   Bristol, 1990 (Appendix 1).\r\n<\/pre><p>John doesn't remember when he heard about the example from me and I don't remember when or where I first came across it.<\/p><p>We know that <tt>T<\/tt> should be the Cholesky decomposition of <tt>M<\/tt>, but let's make sure.<\/p><pre class=\"codeinput\">   assert(isequal(chol(M),T))\r\n<\/pre><p>We have already seen that, despite the fact that it has no small elements, <tt>T<\/tt> is badly conditioned.  <tt>M<\/tt> must be at least as badly conditioned as <tt>T<\/tt>.  In fact, it is far worse.  A good null vector is provided by<\/p><pre class=\"codeinput\">   [~,v] = condest(M);\r\n<\/pre><p>Let's look at <tt>v<\/tt> alongside our <tt>x<\/tt>.  They are nearly the same.<\/p><pre class=\"codeinput\">   format <span class=\"string\">long<\/span>\r\n   [v, x]\r\n<\/pre><pre class=\"codeoutput\">\r\nans =\r\n\r\n   0.500000178813913   0.500000000000000\r\n   0.250000089406957   0.250000000000000\r\n   0.125000223517391   0.125000000000000\r\n   0.062500469386522   0.062500000000000\r\n   0.031250949948913   0.031250000000000\r\n   0.015626905485761   0.015625000000000\r\n   0.007816313765488   0.007812500000000\r\n   0.003913878927961   0.003906250000000\r\n   0.001968383554413   0.001953125000000\r\n   0.001007079958072   0.000976562500000\r\n   0.000549316340766   0.000488281250000\r\n   0.000366210893844   0.000488281250000\r\n\r\n<\/pre><p>The 1-norm condition number of <tt>M<\/tt> is at least as large as<\/p><pre class=\"codeinput\">   norm(v,1)\/norm(M*v,1)\r\n<\/pre><pre class=\"codeoutput\">\r\nans =\r\n\r\n     2.796202999840670e+06\r\n\r\n<\/pre><p>It turns out that this is growing faster than <tt>4^n<\/tt>.<\/p><h4>kahan<a name=\"3ab9b561-0f10-4eb8-ab05-4f30b2791026\"><\/a><\/h4><p>But wait, you say, use column pivoting.  
Indeed, QR with column pivoting will detect the ill-conditioning of <tt>T<\/tt> or <tt>M<\/tt>. But the <tt>kahan<\/tt> matrix in the Gallery is a version of <tt>T<\/tt>, rescaled so that the columns all have nearly the same norm. Even column pivoting is fooled.<\/p><pre class=\"codeinput\">   format <span class=\"string\">short<\/span>\r\n   K = gallery(<span class=\"string\">'kahan'<\/span>,7)\r\n<\/pre><pre class=\"codeoutput\">\r\nK =\r\n\r\n    1.0000   -0.3624   -0.3624   -0.3624   -0.3624   -0.3624   -0.3624\r\n         0    0.9320   -0.3377   -0.3377   -0.3377   -0.3377   -0.3377\r\n         0         0    0.8687   -0.3148   -0.3148   -0.3148   -0.3148\r\n         0         0         0    0.8097   -0.2934   -0.2934   -0.2934\r\n         0         0         0         0    0.7546   -0.2734   -0.2734\r\n         0         0         0         0         0    0.7033   -0.2549\r\n         0         0         0         0         0         0    0.6555\r\n\r\n<\/pre><h4>rosser<a name=\"7d0df1e5-a1bb-403a-8a83-5abc64e84b6f\"><\/a><\/h4><p>I am very fond of the <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2014\/01\/06\/the-rosser-matrix\/\">Rosser matrix<\/a>.<\/p><pre class=\"codeinput\">   R = rosser\r\n<\/pre><pre class=\"codeoutput\">\r\nR =\r\n\r\n   611   196  -192   407    -8   -52   -49    29\r\n   196   899   113  -192   -71   -43    -8   -44\r\n  -192   113   899   196    61    49     8    52\r\n   407  -192   196   611     8    44    59   -23\r\n    -8   -71    61     8   411  -599   208   208\r\n   -52   -43    49    44  -599   411   208   208\r\n   -49    -8     8    59   208   208    99  -911\r\n    29   -44    52   -23   208   208  -911    99\r\n\r\n<\/pre><p>Before the success of the QR algorithm, the Rosser matrix was a challenge for matrix eigenvalue routines.  
Today we can use it to show off the Symbolic Math toolbox.<\/p><pre class=\"codeinput\">   R = sym(R);\r\n   p = charpoly(R,<span class=\"string\">'x'<\/span>)\r\n   f = factor(p)\r\n   e = solve(p)\r\n<\/pre><pre class=\"codeoutput\"> \r\np =\r\n \r\nx^8 - 4040*x^7 + 5080000*x^6 + 82518000*x^5 - 5327676250000*x^4 + 4287904631000000*x^3 - 1082852512000000000*x^2 + 106131000000000000*x\r\n \r\n \r\nf =\r\n \r\n[ x, x - 1020, x^2 - 1040500, x^2 - 1020*x + 100, x - 1000, x - 1000]\r\n \r\n \r\ne =\r\n \r\n                  0\r\n               1000\r\n               1000\r\n               1020\r\n 510 - 100*26^(1\/2)\r\n 100*26^(1\/2) + 510\r\n    -10*10405^(1\/2)\r\n     10*10405^(1\/2)\r\n \r\n<\/pre><h4>parter<a name=\"b5d742c4-c93f-4f4b-b523-7c4193b5e932\"><\/a><\/h4><p>Several years ago, I was totally surprised when I happened to compute the SVD of a nonsymmetric version of the Hilbert matrix.<\/p><pre class=\"codeinput\">   format <span class=\"string\">long<\/span>\r\n   n = 20;\r\n   [I,J] = ndgrid(1:n);\r\n   P = 1.\/(I - J + 1\/2);\r\n   svd(P)\r\n<\/pre><pre class=\"codeoutput\">\r\nans =\r\n\r\n   3.141592653589794\r\n   3.141592653589794\r\n   3.141592653589794\r\n   3.141592653589793\r\n   3.141592653589793\r\n   3.141592653589792\r\n   3.141592653589791\r\n   3.141592653589776\r\n   3.141592653589108\r\n   3.141592653567518\r\n   3.141592652975738\r\n   3.141592639179606\r\n   3.141592364762737\r\n   3.141587705293030\r\n   3.141520331851230\r\n   3.140696048746875\r\n   3.132281782811693\r\n   3.063054039559126\r\n   2.646259806143500\r\n   1.170504602453951\r\n\r\n<\/pre><p>Where did all those <tt>pi<\/tt> 's come from? <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2014\/02\/03\/surprising-svd-square-waves-and-pi\/\">Seymour Parter<\/a> was able to explain the connection between this matrix and a classic theorem of Szego about the coefficients in the Fourier series of a square wave. 
The gallery has this matrix as <tt>gallery('parter',n)<\/tt>.<\/p><h4>pascal<a name=\"22b8f2a3-261f-4c4a-a9f0-72b751f86de1\"><\/a><\/h4><p>I must include the <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2018\/02\/19\/fun-with-the-pascal-triangle\/\">Pascal triangle<\/a>.<\/p><pre class=\"codeinput\">   help <span class=\"string\">pascal_<\/span>\r\n<\/pre><pre class=\"codeoutput\"> PASCAL Pascal matrix.\r\n    P = PASCAL(N) is the Pascal matrix of order N. P a symmetric positive\r\n    definite matrix with integer entries, made up from Pascal's triangle.\r\n    Its inverse has integer entries. PASCAL(N).^r is symmetric positive\r\n    semidefinite for all nonnegative r.\r\n \r\n    P = PASCAL(N,1) is the lower triangular Cholesky factor (up to signs of\r\n    columns) of the Pascal matrix. P is involutory, that is, it is its own\r\n    inverse.\r\n \r\n    P = PASCAL(N,2) is a rotated version of PASCAL(N,1), with sign flipped\r\n    if N is even. P is a cube root of the identity matrix.\r\n \r\n    Reference:\r\n    N. J. Higham, Accuracy and Stability of Numerical Algorithms,\r\n    Second edition, Society for Industrial and Applied Mathematics,\r\n    Philadelphia, 2002, Sec. 
28.4.\r\n\r\n<\/pre><pre class=\"codeinput\">   P = pascal(12,2)\r\n<\/pre><pre class=\"codeoutput\">\r\nP =\r\n\r\n    -1    -1    -1    -1    -1    -1    -1    -1    -1    -1    -1    -1\r\n    11    10     9     8     7     6     5     4     3     2     1     0\r\n   -55   -45   -36   -28   -21   -15   -10    -6    -3    -1     0     0\r\n   165   120    84    56    35    20    10     4     1     0     0     0\r\n  -330  -210  -126   -70   -35   -15    -5    -1     0     0     0     0\r\n   462   252   126    56    21     6     1     0     0     0     0     0\r\n  -462  -210   -84   -28    -7    -1     0     0     0     0     0     0\r\n   330   120    36     8     1     0     0     0     0     0     0     0\r\n  -165   -45    -9    -1     0     0     0     0     0     0     0     0\r\n    55    10     1     0     0     0     0     0     0     0     0     0\r\n   -11    -1     0     0     0     0     0     0     0     0     0     0\r\n     1     0     0     0     0     0     0     0     0     0     0     0\r\n\r\n<\/pre><p>With this sign pattern and arrangement of binomial coefficients, <tt>P<\/tt> is a cube root of the identity.<\/p><pre class=\"codeinput\">   I = P*P*P\r\n<\/pre><pre class=\"codeoutput\">\r\nI =\r\n\r\n     1     0     0     0     0     0     0     0     0     0     0     0\r\n     0     1     0     0     0     0     0     0     0     0     0     0\r\n     0     0     1     0     0     0     0     0     0     0     0     0\r\n     0     0     0     1     0     0     0     0     0     0     0     0\r\n     0     0     0     0     1     0     0     0     0     0     0     0\r\n     0     0     0     0     0     1     0     0     0     0     0     0\r\n     0     0     0     0     0     0     1     0     0     0     0     0\r\n     0     0     0     0     0     0     0     1     0     0     0     0\r\n     0     0     0     0     0     0     0     0     1     0     0     0\r\n     0     0     0     0     0     0     0     0     0     1     0     
0\r\n     0     0     0     0     0     0     0     0     0     0     1     0\r\n     0     0     0     0     0     0     0     0     0     0     0     1\r\n\r\n<\/pre><h4>grcar<a name=\"9ad70807-3a8a-4b46-ba6f-6d7701c1757a\"><\/a><\/h4><p>Joe Grcar will give us an interesting graphic.<\/p><pre class=\"codeinput\">   dbtype <span class=\"string\">7:10<\/span> <span class=\"string\">private\/grcar<\/span>\r\n<\/pre><pre class=\"codeoutput\">\r\n7     %  References:\r\n8     %    [1] J. F. Grcar, Operator coefficient methods for linear equations,\r\n9     %    Report SAND89-8691, Sandia National Laboratories, Albuquerque,\r\n10    %    New Mexico, 1989 (Appendix 2).\r\n<\/pre><pre class=\"codeinput\">   n = 12;\r\n   Z = gallery(<span class=\"string\">'grcar'<\/span>,n)\r\n<\/pre><pre class=\"codeoutput\">\r\nZ =\r\n\r\n     1     1     1     1     0     0     0     0     0     0     0     0\r\n    -1     1     1     1     1     0     0     0     0     0     0     0\r\n     0    -1     1     1     1     1     0     0     0     0     0     0\r\n     0     0    -1     1     1     1     1     0     0     0     0     0\r\n     0     0     0    -1     1     1     1     1     0     0     0     0\r\n     0     0     0     0    -1     1     1     1     1     0     0     0\r\n     0     0     0     0     0    -1     1     1     1     1     0     0\r\n     0     0     0     0     0     0    -1     1     1     1     1     0\r\n     0     0     0     0     0     0     0    -1     1     1     1     1\r\n     0     0     0     0     0     0     0     0    -1     1     1     1\r\n     0     0     0     0     0     0     0     0     0    -1     1     1\r\n     0     0     0     0     0     0     0     0     0     0    -1     1\r\n\r\n<\/pre><pre class=\"codeinput\">   n = 100;\r\n   Z = gallery(<span class=\"string\">'grcar'<\/span>,n);\r\n   e = eig(Z);\r\n   plot(e,<span class=\"string\">'o'<\/span>)\r\n<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" 
src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/bohemian_blog_01.png\" alt=\"\"> <h4>gallery5<a name=\"9e424e64-c12b-4ee2-80ff-39cf1b1bd77d\"><\/a><\/h4><p>Did you compute the exact eigenvalues of <tt>gallery(5)<\/tt> yet? Remember, the matrix is<\/p><pre class=\"codeinput\">   G\r\n<\/pre><pre class=\"codeoutput\">\r\nG =\r\n\r\n          -9          11         -21          63        -252\r\n          70         -69         141        -421        1684\r\n        -575         575       -1149        3451      -13801\r\n        3891       -3891        7782      -23345       93365\r\n        1024       -1024        2048       -6144       24572\r\n\r\n<\/pre><p>MATLAB says the eigenvalues are<\/p><pre class=\"codeinput\">   e = eig(G)\r\n<\/pre><pre class=\"codeoutput\">\r\ne =\r\n\r\n -0.034729401323988 + 0.025790098411744i\r\n -0.034729401323988 - 0.025790098411744i\r\n  0.013777607600186 + 0.040110258133935i\r\n  0.013777607600186 - 0.040110258133935i\r\n  0.041903587448437 + 0.000000000000000i\r\n\r\n<\/pre><p>But, look at the powers of <tt>G<\/tt>.  Since <tt>G<\/tt> has integer entries (it is Bohemian), its entries can be computed without roundoff. 
The fourth power is<\/p><pre class=\"codeinput\">   G^4\r\n<\/pre><pre class=\"codeoutput\">\r\nans =\r\n\r\n           0           0           0           0           0\r\n         -84         168        -420        1344       -5355\r\n         568       -1136        2840       -9088       36210\r\n       -3892        7784      -19460       62272     -248115\r\n       -1024        2048       -5120       16384      -65280\r\n\r\n<\/pre><p>And the fifth power is<\/p><pre class=\"codeinput\">   G^5\r\n<\/pre><pre class=\"codeoutput\">\r\nans =\r\n\r\n     0     0     0     0     0\r\n     0     0     0     0     0\r\n     0     0     0     0     0\r\n     0     0     0     0     0\r\n     0     0     0     0     0\r\n\r\n<\/pre><p>All zero.<\/p><p>The <a href=\"https:\/\/en.wikipedia.org\/wiki\/Cayley%E2%80%93Hamilton_theorem\">Cayley-Hamilton theorem<\/a> tells us the characteristic polynomial must be<\/p><p><tt>x^5<\/tt><\/p><p>The eigenvalue, the only root of this polynomial, is zero with algebraic multiplicity 5.<\/p><p>MATLAB is effectively solving<\/p><p><tt>x^5<\/tt> = roundoff<\/p><p>Hence the computed eigenvalues come from<\/p><p><tt>x<\/tt> = (roundoff)^(1\/5)<\/p><p>It's hard to recognize them as perturbations of zero.<\/p><script language=\"JavaScript\"> <!-- \r\n    function grabCode_445b14b14b6b42358efc13f8f2c2cf09() {\r\n        \/\/ Remember the title so we can use it in the new page\r\n        title = document.title;\r\n\r\n        \/\/ Break up these strings so that their presence\r\n        \/\/ in the Javascript doesn't mess up the search for\r\n        \/\/ the MATLAB code.\r\n        t1='445b14b14b6b42358efc13f8f2c2cf09 ' + '##### ' + 'SOURCE BEGIN' + ' #####';\r\n        t2='##### ' + 'SOURCE END' + ' #####' + ' 445b14b14b6b42358efc13f8f2c2cf09';\r\n    \r\n        b=document.getElementsByTagName('body')[0];\r\n        i1=b.innerHTML.indexOf(t1)+t1.length;\r\n        i2=b.innerHTML.indexOf(t2);\r\n \r\n        code_string = 
b.innerHTML.substring(i1, i2);\r\n        code_string = code_string.replace(\/REPLACE_WITH_DASH_DASH\/g,'--');\r\n\r\n        \/\/ Use \/x3C\/g instead of the less-than character to avoid errors \r\n        \/\/ in the XML parser.\r\n        \/\/ Use '\\x26#60;' instead of '<' so that the XML parser\r\n        \/\/ doesn't go ahead and substitute the less-than character. \r\n        code_string = code_string.replace(\/\\x3C\/g, '\\x26#60;');\r\n\r\n        copyright = 'Copyright 2019 The MathWorks, Inc.';\r\n\r\n        w = window.open();\r\n        d = w.document;\r\n        d.write('<pre>\\n');\r\n        d.write(code_string);\r\n\r\n        \/\/ Add copyright line at the bottom if specified.\r\n        if (copyright.length > 0) {\r\n            d.writeln('');\r\n            d.writeln('%%');\r\n            if (copyright.length > 0) {\r\n                d.writeln('% _' + copyright + '_');\r\n            }\r\n        }\r\n\r\n        d.write('<\/pre>\\n');\r\n\r\n        d.title = title + ' (MATLAB code)';\r\n        d.close();\r\n    }   \r\n     --> <\/script><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\"><br><a href=\"javascript:grabCode_445b14b14b6b42358efc13f8f2c2cf09()\"><span style=\"font-size: x-small;        font-style: italic;\">Get \r\n      the MATLAB code <noscript>(requires JavaScript)<\/noscript><\/span><\/a><br><br>\r\n      Published with MATLAB&reg; R2018b<br><\/p><\/div><!--\r\n445b14b14b6b42358efc13f8f2c2cf09 ##### SOURCE BEGIN #####\r\n%% Bohemian Matrices in the MATLAB(R) Gallery\r\n% We will have a two-part minisymposium on \"Bohemian Matrices\" at\r\n% <https:\/\/iciam2019.org ICIAM2019>, the International Congress on\r\n% Industrial and Applied Mathematics in Valencia, Spain, July 15-19.\r\n% This is an outline of my talk.\r\n\r\n%% Why \"Bohemian\"\r\n% The colorful name \"Bohemian Matrices\" is the responsibility of Rob\r\n% Corless and Steven Thornton of the University of 
Western Ontario.\r\n% Quoting <http:\/\/www.bohemianmatrices.com the web site>.\r\n%\r\n% <html>\r\n% <p style=\"margin-left:3ex;\">\r\n% When this project was in its early stages, our focus was on random\r\n% integer matrices of bounded height. We originally used the phrase\r\n% \"bounded height integer matrix eigenvalues\" to refer to our work.\r\n% This led to the acronym BHIME which isn't too far off from \"Bohemian\".\r\n% <\/p>\r\n% <\/html>\r\n\r\n%% The MATLAB Gallery\r\n% The |gallery| is a collection of interesting matrices, maintained in\r\n% MATLAB by Nick Higham.  Many of them are Bohemian, or Bohemian\r\n% wannabees.  A catalog for the collection is available with\r\n  \r\n   help gallery_\r\n   \r\n%%\r\n% The matrices in the |See also| line were available before |gallery| \r\n% was introduced.  I think of them as part of the gallery.\r\n   \r\n%% gallery5\r\n% My favorite matrix in the gallery is \r\n\r\n   G = gallery(5)\r\n   \r\n%%\r\n% Let's compute its eigenvalues.\r\n\r\n   format long e\r\n   eig(G)\r\n   \r\n%%\r\n% How accurate are these?  What are the exact eigenvalues?\r\n\r\n%%\r\n% I will give readers of this post, and attendees at my talk,\r\n% some time to contemplate these questions by delaying the answers\r\n% until the end.\r\n\r\n%% triw\r\n% The |triw| matrix demonstrates that Gaussian elimination cannot reliably\r\n% detect near singularity.\r\n\r\n   dbtype 1:2 private\/triw\r\n\r\n%%\r\n\r\n   n = 12;\r\n   T = gallery('triw',n)\r\n   \r\n%%\r\n% |T| is already upper triangular.  It is the |U| of its own\r\n% LU-decomposition.  Since there are no small pivots on the diagonal,\r\n% we might be tempted to conclude that |T| is well conditioned.  
However,\r\n% let\r\n\r\n   x = 2.^(-(1:n-1))';\r\n   format rat\r\n   x(n) = x(n-1)\r\n   \r\n%%\r\n% This vector is nearly a null vector for |T|.\r\n\r\n   Tx = T*x\r\n   \r\n%% \r\n% The 1-norm condition number of |T| is at least as large as\r\n \r\n   norm(x,1)\/norm(Tx,1)\r\n   \r\n%%\r\n% That is growing like |2^n|.\r\n\r\n%% moler\r\n% The |moler| matrix demonstrates that Cholesky decomposition of a\r\n% symmetric, positive definite matrix cannot reliably detect near\r\n% singularity either.  The matrix can either be obtained from the gallery,\r\n\r\n   M = gallery('moler',n);\r\n   \r\n%%\r\n% or generated from |T|,\r\n\r\n   format short\r\n   M = T'*T\r\n   \r\n%%\r\n% I was surprised when Nick named the matrix after me.\r\n% The comments in the code indicate he found the matrix in a 30-year old\r\n% book by a friend of mine, John Nash.\r\n\r\n   dbtype 9:14 private\/moler\r\n\r\n%%\r\n% John doesn't remember when he heard about the example from me and\r\n% I don't remember where or where I first came across it.\r\n\r\n%%\r\n% We know that |T| should be the Cholesky decomposition of |M|,\r\n% but let's make sure.\r\n\r\n   assert(isequal(chol(M),T))\r\n   \r\n%%\r\n% We have already seen that, despite the fact that it has no small\r\n% elements, |T| is badly conditioned.  |M| must be at least as badly\r\n% conditioned as |T|.  In fact, it is far worse.  A good null vector\r\n% is provided by\r\n\r\n   [~,v] = condest(M);\r\n   \r\n%%\r\n% Let's look at |v| alongside our |x|.  They are nearly the same.\r\n\r\n   format long\r\n   [v, x]\r\n   \r\n%%\r\n% The 1-norm condition of |M| is at least as large as \r\n   \r\n   norm(v,1)\/norm(M*v,1)\r\n   \r\n%%\r\n% It turns out that is growing faster than |4^n|.\r\n\r\n%% kahan\r\n% But wait, you say, use column pivoting.  
Indeed, QR with column\r\n% pivoting will detect the ill-conditioning of |T| or |M|.\r\n% But the |kahan| matrix in the gallery is a version of |T|,\r\n% rescaled so that the columns all have nearly the same norm.\r\n% Even column pivoting is fooled.\r\n\r\n   format short\r\n   K = gallery('kahan',7)\r\n\r\n%% rosser\r\n% I am very fond of the\r\n% <https:\/\/blogs.mathworks.com\/cleve\/2014\/01\/06\/the-rosser-matrix\/\r\n% Rosser matrix>.\r\n\r\n   R = rosser\r\n   \r\n%%\r\n% Before the success of the QR algorithm, the Rosser matrix was a challenge\r\n% for matrix eigenvalue routines.  Today we can use it to show off the\r\n% Symbolic Math Toolbox.\r\n\r\n   R = sym(R);\r\n   p = charpoly(R,'x')\r\n   f = factor(p)\r\n   e = solve(p)\r\n   \r\n%% parter\r\n% Several years ago, I was totally surprised when I happened to compute\r\n% the SVD of a nonsymmetric version of the Hilbert matrix.\r\n\r\n   format long\r\n   n = 20;\r\n   [I,J] = ndgrid(1:n);\r\n   P = 1.\/(I - J + 1\/2);\r\n   svd(P)\r\n   \r\n%%\r\n% Where did all those |pi|'s come from?\r\n% <https:\/\/blogs.mathworks.com\/cleve\/2014\/02\/03\/surprising-svd-square-waves-and-pi\/\r\n% Seymour Parter> was able to\r\n% explain the connection between this matrix and a classic theorem of\r\n% Szego about the coefficients in the Fourier series of a square wave.\r\n% The gallery has this matrix as |gallery('parter',n)|.\r\n\r\n%% pascal\r\n% I must include the\r\n% <https:\/\/blogs.mathworks.com\/cleve\/2018\/02\/19\/fun-with-the-pascal-triangle\/\r\n% Pascal triangle>.\r\n\r\n   help pascal_\r\n   \r\n%%\r\n\r\n   P = pascal(12,2)\r\n   \r\n%%\r\n% With this sign pattern and arrangement of binomial coefficients,\r\n% |P| is a cube root of the identity.\r\n\r\n   I = P*P*P\r\n\r\n%% grcar\r\n% Joe Grcar will give us an interesting graphic.\r\n\r\n   dbtype 7:10 private\/grcar\r\n   \r\n%%\r\n\r\n   n = 12;\r\n   Z = gallery('grcar',n)\r\n%%\r\n\r\n   n = 100;\r\n   Z = gallery('grcar',n);\r\n   e = 
eig(Z);\r\n   plot(e,'o')\r\n   \r\n%%  gallery5\r\n% Did you compute the exact eigenvalues of |gallery(5)| yet?\r\n% Remember, the matrix is\r\n\r\n   G\r\n  \r\n%%\r\n% MATLAB says the eigenvalues are\r\n\r\n   e = eig(G)\r\n   \r\n%%\r\n% But, look at the powers of |G|.  Since |G| has integer entries\r\n% (it is Bohemian), its entries can be computed without roundoff.\r\n% The fourth power is\r\n\r\n   G^4\r\n\r\n%%\r\n% And the fifth power is\r\n   \r\n   G^5\r\n\r\n%%\r\n% All zero.\r\n%\r\n% The <https:\/\/en.wikipedia.org\/wiki\/Cayley%E2%80%93Hamilton_theorem\r\n% Cayley-Hamilton theorem> tells us the characteristic polynomial\r\n% must be\r\n%\r\n% |x^5|\r\n%\r\n% The eigenvalue, the only root of this polynomial, is zero with\r\n% algebraic multiplicity 5.\r\n\r\n%%\r\n% MATLAB is effectively solving\r\n%\r\n% |x^5| = roundoff\r\n%\r\n% Hence the computed eigenvalues come from\r\n%\r\n% |x| = (roundoff)^(1\/5)\r\n%\r\n% It's hard to recognize them as perturbations of zero.\r\n##### SOURCE END ##### 445b14b14b6b42358efc13f8f2c2cf09\r\n-->","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/bohemian_blog_01.png\" class=\"img-responsive attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"\" decoding=\"async\" loading=\"lazy\" \/><\/div><!--introduction--><p>We will have a two-part minisymposium on \"Bohemian Matrices\" at <a title=\"Old link was https:\/\/iciam2019.org\">ICIAM2019<\/a>, the International Congress on Industrial and Applied Mathematics in Valencia, Spain, July 15-19. This is an outline of my talk.... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/cleve\/2019\/06\/24\/bohemian-matrices-in-the-matlab-gallery\/\">read more >><\/a><\/p>","protected":false},"author":78,"featured_media":4964,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[13,4,6,16,8,1],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/4960"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/comments?post=4960"}],"version-history":[{"count":7,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/4960\/revisions"}],"predecessor-version":[{"id":6601,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/4960\/revisions\/6601"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media\/4964"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media?parent=4960"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/categories?post=4960"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/tags?post=4960"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}