{"id":4913,"date":"2019-05-28T21:17:07","date_gmt":"2019-05-29T02:17:07","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=4913"},"modified":"2019-05-28T21:17:25","modified_gmt":"2019-05-29T02:17:25","slug":"matrix-eigenvalue-dating-service","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2019\/05\/28\/matrix-eigenvalue-dating-service\/","title":{"rendered":"Matrix Eigenvalue Dating Service"},"content":{"rendered":"<div class=\"content\"><!--introduction--><p>This is a summary of my talk at the conference <a href=\"https:\/\/nla-group.org\/2019\/01\/21\/celebrating-the-centenary-of-james-h-wilkinsons-birth\/\">Celebrating the Centenary of James H. Wilkinson's Birth<\/a> at the University of Manchester, May 29.<\/p><p>Jim Wilkinson knew that a matrix with badly conditioned simple eigenvalues must be close to a matrix with multiple eigenvalues. How can you find the nearest matrix where some of the eigenvalues have found their mates?<\/p><p>Much of this talk is my interpretation of the <a href=\"https:\/\/doi.org\/10.1016\/j.laa.2010.09.022\">2011 paper<\/a> by Rafikul Alam, Shreemayee Bora, Ralph Byers and Michael Overton, \"Characterization and construction of the nearest defective matrix via coalescence of pseudospectral components,\" in <i>Linear Algebra and its Applications<\/i>, vol. 435, pp. 494-513. 
Overton's accompanying code <tt>neardefmat<\/tt> is over 1000 lines of MATLAB&reg; and is available from <a href=\"https:\/\/cs.nyu.edu\/faculty\/overton\/software\/neardefmat\/\">his web page<\/a>.<\/p><!--\/introduction--><h3>Contents<\/h3><div><ul><li><a href=\"#52913f70-5370-44dd-a7c0-850721b7271e\">A 3-by-3 Example<\/a><\/li><li><a href=\"#da296679-fa4a-45ba-a2c5-549e2adbfd6f\">Frank(9)<\/a><\/li><li><a href=\"#78f7a630-8511-4a79-b309-34e3037bf3c4\">Wilkinson(9)<\/a><\/li><\/ul><\/div><h4>A 3-by-3 Example<a name=\"52913f70-5370-44dd-a7c0-850721b7271e\"><\/a><\/h4><p>In last week's blog post, <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2019\/05\/22\/an-eigenvalue-sensitivity-example\/\">\"An Eigenvalue Sensitivity Example\"<\/a>, I resurrected this example from the 1980 User's Guide.  For a moment, let's forget  how it was constructed and start with<\/p><pre class=\"codeinput\">   A = [ -64 82 21; 144 -178 -46; -771 962 248]\r\n<\/pre><pre class=\"codeoutput\">A =\r\n   -64    82    21\r\n   144  -178   -46\r\n  -771   962   248\r\n<\/pre><p>The computed eigenvalues are<\/p><pre class=\"codeinput\">   format <span class=\"string\">long<\/span>\r\n   lambda = eig(A)\r\n<\/pre><pre class=\"codeoutput\">lambda =\r\n   3.000000000003868\r\n   0.999999999998212\r\n   1.999999999997978\r\n<\/pre><p>The exact eigenvalues are 1, 2 and 3. We've lost about four figures.  
This is predicted by the eigenvalue condition numbers,<\/p><pre class=\"codeinput\">   format <span class=\"string\">short<\/span>\r\n   kappa = condeig(A)\r\n<\/pre><pre class=\"codeoutput\">kappa =\r\n  833.1092\r\n  450.7228\r\n  383.7564\r\n<\/pre><p>These condition numbers come from the right eigenvector matrix<\/p><pre class=\"codeinput\">   [X,~] = eig(A)\r\n<\/pre><pre class=\"codeoutput\">X =\r\n    0.0891    0.0735   -0.1089\r\n   -0.1782   -0.1923    0.1634\r\n    0.9800    0.9786   -0.9805\r\n<\/pre><p>And the left eigenvector matrix<\/p><pre class=\"codeinput\">   Y = inv(X)\r\n<\/pre><pre class=\"codeoutput\">Y =\r\n -521.9612  628.5984  162.7621\r\n  265.1820 -353.5760  -88.3940\r\n -257.0058  275.3634   73.4302\r\n<\/pre><p>The rows of <tt>Y<\/tt> are left eigenvectors. They are normalized so that their dot product with the right eigenvectors is one.<\/p><pre class=\"codeinput\">   Y*X\r\n<\/pre><pre class=\"codeoutput\">ans =\r\n    1.0000   -0.0000   -0.0000\r\n   -0.0000    1.0000   -0.0000\r\n   -0.0000    0.0000    1.0000\r\n<\/pre><p>The function <tt>vecnorm<\/tt>, introduced in R2017b, computes the 2-norm of the columns of a matrix and comes in handy at this point.<\/p><pre class=\"codeinput\">   kappa = (vecnorm(Y').*vecnorm(X))'\r\n<\/pre><pre class=\"codeoutput\">kappa =\r\n  833.1092\r\n  450.7228\r\n  383.7564\r\n<\/pre><p><a href=\"http:\/\/www.cs.ox.ac.uk\/pseudospectra\/eigtool\/\">EigTool<\/a> is open-source MATLAB software for analyzing eigenvalues, pseudospectra, and related spectral properties of matrices. It was developed at Oxford from 1999 to 2002 by Thomas Wright under the direction of Nick Trefethen.  It is now maintained by Mark Embree.<\/p><p>Here is the pseudospectrum of our example matrix.<\/p><p><tt>eigtool(A)<\/tt><\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/fig1-1.png\" alt=\"\"> <\/p><p>You can see the three eigenvalues at 1, 2 and 3. 
The contours outline the regions where the eigenvalues can move when the matrix is perturbed by a specified amount.<\/p><p>This animated gif increases the perturbation level until the regions around the eigenvalues at 2 and 3 coalesce at a saddle point near 2.4.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/eigtool_movie_blog.gif\" alt=\"\"> <\/p><p>To see more precisely where they coalesce, we use the function <tt>neardefmat<\/tt> (nearest defective matrix), written by Michael Overton, a professor at NYU. The outputs from <tt>neardefmat<\/tt> are <tt>B<\/tt>, the nearest defective matrix, and <tt>z<\/tt>, the resulting double eigenvalue.  By default <tt>neardefmat<\/tt> prints some useful details about the computation.<\/p><pre class=\"codeinput\">   format <span class=\"string\">long<\/span>\r\n   [B,z] = neardefmat(A)\r\n<\/pre><pre class=\"codeoutput\">neardefmat: lower bound on distance is 0.000299834 and upper bound is 0.00618718\r\nneardefmat: finished, lowest saddle found is 2.41449 + 0 i with\r\n dist = 0.000375171, mu = 0, |u'*v| = 3.67335e-06\r\n singular vector residual norms are 1.8334e-13, 5.17298e-14\r\n this saddle point was found in search # 3\r\ndistance to nearest defective matrix B found is 0.000375171\r\nneardefmat: two closest computed eigenvalues of B differ by 0.00269716\r\n and condition numbers of these eigenvalues are 272230, 272669\r\nB =\r\n   1.0e+02 *\r\n  -0.639999774916363   0.819999582124016   0.210002343527353\r\n   1.439999736641696  -1.779999511065701  -0.460002742035793\r\n  -7.710000068576393   9.620000127314574   2.479999285995845\r\nz =\r\n   2.414492300748820\r\n<\/pre><p><tt>B<\/tt> is constructed from <tt>z<\/tt>, an SVD, and a rank-one perturbation of <tt>A<\/tt>.<\/p><pre class=\"codeinput\">   I = eye(3);\r\n   [U,S,V] = svd(A - z*I);\r\n   format <span class=\"string\">short<\/span>\r\n   sigma = S(3,3)\r\n   u = U(:,3)\r\n   v = V(:,3)\r\n   format <span 
class=\"string\">long<\/span>\r\n   B = A - sigma*u*v'\r\n<\/pre><pre class=\"codeoutput\">sigma =\r\n   3.7517e-04\r\nu =\r\n   -0.6373\r\n    0.7457\r\n    0.1942\r\nv =\r\n    0.0941\r\n   -0.1748\r\n    0.9801\r\nB =\r\n   1.0e+02 *\r\n  -0.639999774916363   0.819999582124016   0.210002343527353\r\n   1.439999736641696  -1.779999511065701  -0.460002742035793\r\n  -7.710000068576393   9.620000127314574   2.479999285995845\r\n<\/pre><p>This is essentially the same perturbation that I used to construct <tt>A<\/tt> in last week's blog post. The size of the perturbation is<\/p><pre class=\"codeinput\">   sigma = S(3,3)\r\n\r\n   sigma = norm(B - A)\r\n<\/pre><pre class=\"codeoutput\">sigma =\r\n     3.751705175569800e-04\r\nsigma =\r\n     3.751705175556183e-04\r\n<\/pre><p>By construction, <tt>B<\/tt> has a double eigenvalue, but the eigenvalue's condition is infinite, so it is not computed accurately.<\/p><pre class=\"codeinput\">   eig(B)\r\n<\/pre><pre class=\"codeoutput\">ans =\r\n   2.417189453582737\r\n   2.414492295643361\r\n   1.168318252152021\r\n<\/pre><h4>Frank(9)<a name=\"da296679-fa4a-45ba-a2c5-549e2adbfd6f\"><\/a><\/h4><p>Wilkinson was interested in the Frank family of matrices.  Of course, we can find them in the <tt>gallery<\/tt>.<\/p><pre class=\"codeinput\">   help <span class=\"string\">private\/frank<\/span>\r\n<\/pre><pre class=\"codeoutput\"> FRANK Frank matrix.\r\n    F = GALLERY('FRANK',N, K) is the Frank matrix of order N.  It is\r\n    upper Hessenberg with determinant 1. If K = 1, the elements are\r\n    reflected about the anti-diagonal (1,N)--(N,1).  The eigenvalues\r\n    of F may be obtained in terms of the zeros of the Hermite\r\n    polynomials.  They are positive and occur in reciprocal pairs;\r\n    thus if N is odd, 1 is an eigenvalue.  
F has FLOOR(N\/2)\r\n    ill-conditioned eigenvalues---the smaller ones.\r\n\r\n<\/pre><p>The 9x9 Frank matrix fits nicely on the page.<\/p><pre class=\"codeinput\">   F = gallery(<span class=\"string\">'frank'<\/span>,9)\r\n<\/pre><pre class=\"codeoutput\">F =\r\n     9     8     7     6     5     4     3     2     1\r\n     8     8     7     6     5     4     3     2     1\r\n     0     7     7     6     5     4     3     2     1\r\n     0     0     6     6     5     4     3     2     1\r\n     0     0     0     5     5     4     3     2     1\r\n     0     0     0     0     4     4     3     2     1\r\n     0     0     0     0     0     3     3     2     1\r\n     0     0     0     0     0     0     2     2     1\r\n     0     0     0     0     0     0     0     1     1\r\n<\/pre><p>The matrix is clearly far from symmetric.  Since it is almost upper triangular you might think the eigenvalues are near the diagonal elements, but they are not.  Here they are, sorted.<\/p><pre class=\"codeinput\">   format <span class=\"string\">long<\/span>\r\n   [lambda,p] = sort(eig(F));\r\n   lambda\r\n<\/pre><pre class=\"codeoutput\">lambda =\r\n   0.044802721845067\r\n   0.082015890201792\r\n   0.162582729486328\r\n   0.374121140912600\r\n   0.999999999999966\r\n   2.672931012564413\r\n   6.150714797051293\r\n  12.192759202150013\r\n  22.320072505788492\r\n<\/pre><p>And here are the corresponding condition numbers.<\/p><pre class=\"codeinput\">   format <span class=\"string\">short<\/span> <span class=\"string\">e<\/span>\r\n   kappa = condeig(F);\r\n   kappa = kappa(p)\r\n<\/pre><pre class=\"codeoutput\">kappa =\r\n   1.4762e+04\r\n   2.5134e+04\r\n   1.1907e+04\r\n   1.5996e+03\r\n   6.8615e+01\r\n   3.6210e+00\r\n   1.5268e+00\r\n   2.3903e+00\r\n   1.9916e+00\r\n<\/pre><p>The three smallest eigenvalues have the largest condition numbers. 
We expect the nearest defective matrix to bring two of them together.<\/p><p>Eigtool of the Frank matrix.<\/p><p><tt>eigtool(F)<\/tt><\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/fig3-1.png\" alt=\"\"> <\/p><p>Focus on the three smallest eigenvalues. The red contour shows the regions containing the two that are coalescing.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/fig4.png\" alt=\"\"> <\/p><p>Find the nearest defective matrix.<\/p><pre class=\"codeinput\">   format <span class=\"string\">long<\/span>\r\n   [B,z] = neardefmat(F);\r\n<\/pre><pre class=\"codeoutput\">neardefmat: lower bound on distance is 3.4791e-07 and upper bound is 8.76938e-05\r\nneardefmat: finished, lowest saddle found is 0.06104 + 0 i with\r\n dist = 5.01214e-07, mu = 0, |u'*v| = 6.09133e-10\r\n singular vector residual norms are 3.6435e-15, 6.12511e-16\r\n this saddle point was found in search # 1\r\ndistance to nearest defective matrix B found is 5.01214e-07\r\nneardefmat: two closest computed eigenvalues of B differ by 1.7808e-06\r\n and condition numbers of these eigenvalues are 3.62946e+08, 3.62937e+08\r\n<\/pre><p>The two smallest eigenvalues have merged, but they are hyper-sensitive.<\/p><pre class=\"codeinput\">   sort(eig(B))\r\n<\/pre><pre class=\"codeoutput\">ans =\r\n   0.061039353359070\r\n   0.061041134158948\r\n   0.168054858422710\r\n   0.373370423277969\r\n   1.000016557229929\r\n   2.672931171283966\r\n   6.150714794349185\r\n  12.192759202129578\r\n  22.320072505788680\r\n<\/pre><p>If we're not too strict on the tolerance, the eigenvector matrix <tt>X<\/tt> is rank deficient, implying the matrix <tt>B<\/tt> is defective.<\/p><pre class=\"codeinput\">   [X,~] = eig(B);\r\n   rank(X,1.e-8)\r\n<\/pre><pre class=\"codeoutput\">ans =\r\n     8\r\n<\/pre><p>We can get a picture of the perturbation with the <tt>imagesc<\/tt> function. 
First, the matrix itself. You can see its upper Hessenberg (nearly upper triangular) structure.<\/p><p><tt>imagesc(F)<\/tt><\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/fig5.png\" alt=\"\"> <\/p><p>Now, the picture of the perturbation shows that it is concentrated in the upper right-hand corner.<\/p><p><tt>imagesc(F-B)<\/tt><\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/fig6.png\" alt=\"\"> <\/p><h4>Wilkinson(9)<a name=\"78f7a630-8511-4a79-b309-34e3037bf3c4\"><\/a><\/h4><p>What perturbation of a symmetric matrix makes its eigenvalues coalesce? A great example was provided by Wilkinson himself.<\/p><pre class=\"codeinput\">   help <span class=\"string\">wilkinson<\/span>\r\n\r\n   W = wilkinson(9)\r\n<\/pre><pre class=\"codeoutput\"> WILKINSON Wilkinson's eigenvalue test matrix.\r\n    W = WILKINSON(n) is J. H. Wilkinson's eigenvalue test matrix, Wn+. It\r\n    is a symmetric, tridiagonal matrix with pairs of nearly, but not\r\n    exactly, equal eigenvalues. 
The most frequently used case is\r\n    WILKINSON(21).\r\n \r\n    W = WILKINSON(n,CLASSNAME) returns a matrix of class CLASSNAME, which\r\n    can be either 'single' or 'double' (the default).\r\n \r\n    Example:\r\n \r\n    WILKINSON(7) is\r\n \r\n           3  1  0  0  0  0  0\r\n           1  2  1  0  0  0  0\r\n           0  1  1  1  0  0  0\r\n           0  0  1  0  1  0  0\r\n           0  0  0  1  1  1  0\r\n           0  0  0  0  1  2  1\r\n           0  0  0  0  0  1  3\r\n\r\n    Reference page in Doc Center\r\n       doc wilkinson\r\n\r\n\r\nW =\r\n     4     1     0     0     0     0     0     0     0\r\n     1     3     1     0     0     0     0     0     0\r\n     0     1     2     1     0     0     0     0     0\r\n     0     0     1     1     1     0     0     0     0\r\n     0     0     0     1     0     1     0     0     0\r\n     0     0     0     0     1     1     1     0     0\r\n     0     0     0     0     0     1     2     1     0\r\n     0     0     0     0     0     0     1     3     1\r\n     0     0     0     0     0     0     0     1     4\r\n<\/pre><p><tt>W<\/tt> is symmetric and tridiagonal with ones on the off-diagonals. In order to have a multiple eigenvalue such a matrix has to have a pair of zeroes somewhere on the sub- and super-diagonal.  But this matrix does not have any small off-diagonal elements.  Nevertheless, the two largest eigenvalues are very close to each other.<\/p><pre class=\"codeinput\">   lambda = eig(W)\r\n<\/pre><pre class=\"codeoutput\">lambda =\r\n  -1.125422415673319\r\n   0.254718759825861\r\n   0.952584219075215\r\n   1.822717080887109\r\n   2.178284739549981\r\n   3.177282919112892\r\n   3.247396472578982\r\n   4.745281240174140\r\n   4.747156984469142\r\n<\/pre><p>Since the matrix is symmetric, the left and right eigenvectors are orthogonal and transposes of each other.  All the eigenvalues are perfectly well conditioned.  
Where should we look for the coalescing eigenvalues?<\/p><pre class=\"codeinput\">   kappa = condeig(W)\r\n<\/pre><pre class=\"codeoutput\">kappa =\r\n   1.000000000000000\r\n   1.000000000000000\r\n   1.000000000000000\r\n   1.000000000000000\r\n   1.000000000000000\r\n   1.000000000000000\r\n   1.000000000000000\r\n   1.000000000000000\r\n   1.000000000000000\r\n<\/pre><p><tt>Eigtool<\/tt> of Wilkinson(9).  There are only seven circles for a 9x9 matrix -- the two circles on the right each contain two eigenvalues.<\/p><p><tt>eigtool(W)<\/tt><\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/fig7.png\" alt=\"\"> <\/p><p>Let's zoom way in on the rightmost circle.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/fig8.png\" alt=\"\"> <\/p><p>The -3.04 contours are just barely kissing. This is a different kind of saddle point -- a nonsmooth one. You will have to study the Alam <i>et al<\/i> paper to see why this is so important.<\/p><p>Let's see what <tt>neardefmat<\/tt> has to say about Wilkinson's matrix.<\/p><pre class=\"codeinput\">   [B,z] = neardefmat(W);\r\n<\/pre><pre class=\"codeoutput\">neardefmat: A is normal within tolerance\r\n distance to nearest defective matrix B is 0.000937872\r\nneardefmat: two closest computed eigenvalues of B differ by 2.01677e-09\r\n and condition numbers of these eigenvalues are 465036, 465036\r\n<\/pre><p>Sure enough, <tt>W<\/tt> is symmetric and hence normal.  
A rather different algorithm is required.<\/p><p>The perturbed eigenvalues have small imaginary parts.<\/p><pre class=\"codeinput\">   eig(B)\r\n<\/pre><pre class=\"codeoutput\">ans =\r\n -1.125422415673318 + 0.000000000000000i\r\n  0.254718759825861 + 0.000000000000000i\r\n  0.952584219075214 + 0.000000000000000i\r\n  1.822717080887109 + 0.000000000000000i\r\n  2.178284739549983 + 0.000000000000000i\r\n  3.177282919112890 + 0.000000000000000i\r\n  3.247396472578982 + 0.000000000000000i\r\n  4.746219112321641 + 0.000000001008386i\r\n  4.746219112321641 - 0.000000001008386i\r\n<\/pre><p>Finally, here are images of the symmetric, tridiagonal matrix W and the very nonsymmetric perturbation <tt>W-B<\/tt> that brings two of its eigenvalues together.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/fig9.png\" alt=\"\"> <\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/fig10.png\" alt=\"\"> <\/p><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\">Published with MATLAB&reg; R2018b<br><\/p><\/div>","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/fig8.png\" class=\"img-responsive attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"\" decoding=\"async\" loading=\"lazy\" \/><\/div><!--introduction--><p>This is a summary of my talk at the conference <a href=\"https:\/\/nla-group.org\/2019\/01\/21\/celebrating-the-centenary-of-james-h-wilkinsons-birth\/\">Celebrating the Centenary of James H. Wilkinson's Birth<\/a> at the University of Manchester, May 29.... <a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/cleve\/2019\/05\/28\/matrix-eigenvalue-dating-service\/\">read more 
>><\/a><\/p>","protected":false},"author":78,"featured_media":4931,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[13,6,16,8,30],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/4913"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/comments?post=4913"}],"version-history":[{"count":2,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/4913\/revisions"}],"predecessor-version":[{"id":4937,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/4913\/revisions\/4937"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media\/4931"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media?parent=4913"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/categories?post=4913"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/tags?post=4913"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}