{"id":4901,"date":"2019-05-22T12:00:30","date_gmt":"2019-05-22T17:00:30","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=4901"},"modified":"2019-05-18T17:15:49","modified_gmt":"2019-05-18T22:15:49","slug":"an-eigenvalue-sensitivity-example","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2019\/05\/22\/an-eigenvalue-sensitivity-example\/","title":{"rendered":"An Eigenvalue Sensitivity Example"},"content":{"rendered":"<div class=\"content\"><!--introduction--><p>On May 29-30, I plan to attend a conference, organized by Nick Higham, at the University of Manchester.  The title of the conference is<\/p><p><b>Celebrating the Centenary of James H. Wilkinson's Birth<\/b><\/p><p>I am giving a talk about one of Wilkinson's favorite topics: how perturbations of a matrix with sensitive eigenvalues can produce a defective matrix with a multiple eigenvalue.  I uncovered this example in my original Fortran MATLAB User's Guide, a University of New Mexico technical report dated 1981, four years before MathWorks. The text of this blog post is copied from that guide.  I've inserted comments where today's MATLAB scales the eigenvectors differently.<\/p><!--\/introduction--><h3>Contents<\/h3><div><ul><li><a href=\"#fab26436-14bf-43a8-8df3-00c9397c18fa\">Eigenvalue Sensitivity Example<\/a><\/li><\/ul><\/div><h4>Eigenvalue Sensitivity Example<a name=\"fab26436-14bf-43a8-8df3-00c9397c18fa\"><\/a><\/h4><p>In this example, we construct a matrix whose eigenvalues are moderately  sensitive  to  perturbations  and  then  analyze that sensitivity. We begin with the statement<\/p><pre class=\"codeinput\">   B = [3 0 7; 0 2 0; 0 0 1]\r\n<\/pre><pre class=\"codeoutput\">\r\nB =\r\n\r\n     3     0     7\r\n     0     2     0\r\n     0     0     1\r\n\r\n<\/pre><p>Obviously, the eigenvalues of B are 1, 2 and 3.   Moreover, since   B  is  not  symmetric,  these  eigenvalues  are  slightly sensitive to perturbation.  
(The value B(1,3) = 7 was  chosen  so that the elements of the matrix A below are less than 1000.)<\/p><p>We now generate a similarity transformation to disguise  the eigenvalues and make them more sensitive.<\/p><pre class=\"codeinput\">   L = [1 0 0; 2 1 0; -3 4 1]\r\n\r\n   M = L\\L'\r\n<\/pre><pre class=\"codeoutput\">\r\nL =\r\n\r\n     1     0     0\r\n     2     1     0\r\n    -3     4     1\r\n\r\n\r\nM =\r\n\r\n     1     2    -3\r\n    -2    -3    10\r\n    11    18   -48\r\n\r\n<\/pre><p>The matrix M has determinant equal to 1 and is  moderately  badly conditioned.  The similarity transformation is<\/p><pre class=\"codeinput\">   A = M*B\/M\r\n<\/pre><pre class=\"codeoutput\">\r\nA =\r\n\r\n  -64.0000   82.0000   21.0000\r\n  144.0000 -178.0000  -46.0000\r\n -771.0000  962.0000  248.0000\r\n\r\n<\/pre><p>Because  det(M) = 1 , the elements of  A  would be exact integers if there were no roundoff.  So,<\/p><pre class=\"codeinput\">   A = round(A)\r\n<\/pre><pre class=\"codeoutput\">\r\nA =\r\n\r\n   -64    82    21\r\n   144  -178   -46\r\n  -771   962   248\r\n\r\n<\/pre><p>This, then, is our test matrix.  We can now  forget  how  it was generated and analyze its eigenvalues.<\/p><pre class=\"codeinput\">   [X,D] = eig(A)\r\n<\/pre><pre class=\"codeoutput\">\r\nX =\r\n\r\n    0.0891    0.0735   -0.1089\r\n   -0.1782   -0.1923    0.1634\r\n    0.9800    0.9786   -0.9805\r\n\r\n\r\nD =\r\n\r\n    3.0000         0         0\r\n         0    1.0000         0\r\n         0         0    2.0000\r\n\r\n<\/pre><pre class=\"codeinput\"><span class=\"comment\">% { Today the eigenvectors are scaled to have unit length.<\/span>\r\n<span class=\"comment\">%   Classic MATLAB did not scale the eigenvectors.  
It got<\/span>\r\n<span class=\"comment\">%<\/span>\r\n<span class=\"comment\">%  X     =<\/span>\r\n<span class=\"comment\">%<\/span>\r\n<span class=\"comment\">%      -.0891    3.4903   41.8091<\/span>\r\n<span class=\"comment\">%       .1782   -9.1284  -62.7136<\/span>\r\n<span class=\"comment\">%      -.9800   46.4473  376.2818<\/span>\r\n<span class=\"comment\">%  }<\/span>\r\n<\/pre><p>Since A is similar to B, its eigenvalues are also  1,  2  and  3. They  happen  to  be  computed  in  another  order by the EISPACK subroutines.  The fact that the  columns  of  X,  which  are  the eigenvectors,  are  so  far  from  being orthonormal is our first indication that  the  eigenvalues  are  sensitive.  To  see  this sensitivity, we display more figures of the computed eigenvalues.<\/p><pre class=\"codeinput\">   format <span class=\"string\">long<\/span>\r\n   diag(D)\r\n<\/pre><pre class=\"codeoutput\">\r\nans =\r\n\r\n   3.000000000003868\r\n   0.999999999998212\r\n   1.999999999997978\r\n\r\n<\/pre><p>We see that, on this computer, the last five {today: four} significant figures are  contaminated  by  roundoff  error.  A  somewhat  superficial explanation of this is provided by<\/p><pre class=\"codeinput\">   format <span class=\"string\">short<\/span>\r\n   cond(X)\r\n<\/pre><pre class=\"codeoutput\">\r\nans =\r\n\r\n   1.7690e+03\r\n\r\n<\/pre><pre class=\"codeinput\"><span class=\"comment\">% { Classic:<\/span>\r\n<span class=\"comment\">%<\/span>\r\n<span class=\"comment\">%  ANS   =<\/span>\r\n<span class=\"comment\">%<\/span>\r\n<span class=\"comment\">%   3.2216e+05<\/span>\r\n<span class=\"comment\">% }<\/span>\r\n<\/pre><p>The condition number of X gives an upper bound for  the  relative error  in  the  computed  eigenvalues.   
However,  this condition number is affected by scaling.<\/p><pre class=\"codeinput\">   X = X\/diag(X(3,:))\r\n   cond(X)\r\n<\/pre><pre class=\"codeoutput\">\r\nX =\r\n\r\n    0.0909    0.0751    0.1111\r\n   -0.1818   -0.1965   -0.1667\r\n    1.0000    1.0000    1.0000\r\n\r\n\r\nans =\r\n\r\n   1.7692e+03\r\n\r\n<\/pre><p>Rescaling the eigenvectors so that their last components are all  equal  to  one  has  two consequences. The condition of X is decreased by over two orders of magnitude.  {Not today.} (This is  about  the minimum condition that can be obtained by such diagonal scaling.) Moreover, it is now apparent  that  the  three  eigenvectors  are nearly parallel.<\/p><p>More  detailed  information  on  the  sensitivity   of   the individual eigenvalues involves the left eigenvectors.<\/p><pre class=\"codeinput\">   Y = inv(X')\r\n   Y'*A*X\r\n<\/pre><pre class=\"codeoutput\">\r\nY =\r\n\r\n -511.5000  259.5000  252.0000\r\n  616.0000 -346.0000 -270.0000\r\n  159.5000  -86.5000  -72.0000\r\n\r\n\r\nans =\r\n\r\n    3.0000   -0.0000   -0.0000\r\n    0.0000    1.0000    0.0000\r\n         0   -0.0000    2.0000\r\n\r\n<\/pre><p>We are now in a position to  compute  the  sensitivities  of  the individual eigenvalues.<\/p><pre class=\"codeinput\">   <span class=\"keyword\">for<\/span> j = 1:3, c(j) = norm(Y(:,j))*norm(X(:,j)); <span class=\"keyword\">end<\/span>\r\n   c\r\n<\/pre><pre class=\"codeoutput\">\r\nc =\r\n\r\n  833.1092  450.7228  383.7564\r\n\r\n<\/pre><p>These three numbers are the reciprocals of  the  cosines  of  the angles  between the left and right eigenvectors.  It can be shown that  perturbation  of  the  elements  of  A  can  result  in   a perturbation of the j-th eigenvalue which is c(j) times as large. In  this  example,  the  first   eigenvalue   has   the   largest sensitivity.<\/p><p>We now proceed to show that A is close to a  matrix  with  a double eigenvalue.  
The direction of the required perturbation is given by<\/p><pre class=\"codeinput\">   E = -1.e-6*Y(:,1)*X(:,1)'\r\n<\/pre><pre class=\"codeoutput\">\r\nE =\r\n\r\n   1.0e-03 *\r\n\r\n    0.0465   -0.0930    0.5115\r\n   -0.0560    0.1120   -0.6160\r\n   -0.0145    0.0290   -0.1595\r\n\r\n<\/pre><p>With some trial and error which we do not show,  we  bracket  the point  where  two  eigenvalues of a perturbed A coalesce and then become complex.<\/p><pre class=\"codeinput\">   eig(A + .4*E)\r\n   eig(A + .5*E)\r\n<\/pre><pre class=\"codeoutput\">\r\nans =\r\n\r\n    1.1500\r\n    2.5996\r\n    2.2504\r\n\r\n\r\nans =\r\n\r\n   2.4067 + 0.1753i\r\n   2.4067 - 0.1753i\r\n   1.1866 + 0.0000i\r\n\r\n<\/pre><p>Now, a bisecting search, driven by the imaginary part of  one  of the eigenvalues, finds the point where two eigenvalues are nearly equal.<\/p><pre class=\"codeinput\">   r = .4;  s = .5;\r\n\r\n   <span class=\"keyword\">while<\/span> s-r &gt; 1.e-14\r\n      t = (r+s)\/2;\r\n      d = eig(A+t*E);\r\n      <span class=\"keyword\">if<\/span> imag(d(1)) == 0, r = t; <span class=\"keyword\">else<\/span>, s = t; <span class=\"keyword\">end<\/span>\r\n   <span class=\"keyword\">end<\/span>\r\n\r\n   format <span class=\"string\">long<\/span>\r\n   t\r\n<\/pre><pre class=\"codeoutput\">\r\nt =\r\n\r\n   0.450380734135428\r\n\r\n<\/pre><p>Finally, we display the perturbed matrix, which is obviously close  to the original, and its pair of nearly equal eigenvalues.<\/p><pre class=\"codeinput\">   A+t*E\r\n\r\n   eig(A+t*E)\r\n<\/pre><pre class=\"codeoutput\">\r\nans =\r\n\r\n   1.0e+02 *\r\n\r\n  -0.639999790572959   0.819999581145917   0.210002303697455\r\n   1.439999747786789  -1.779999495573578  -0.460002774345322\r\n  -7.710000065305207   9.620000130610412   2.479999281642729\r\n\r\n\r\nans =\r\n\r\n   2.415743144226897\r\n   2.415738627741217\r\n   1.168517777651195\r\n\r\n<\/pre><p>The  first  two  eigenvectors  of  A + t*E   are   almost indistinguishable  indicating 
that the perturbed matrix is almost defective.<\/p><pre class=\"codeinput\">   [X,D] = eig(A+t*E)\r\n\r\n   format <span class=\"string\">short<\/span>\r\n   cond(X)\r\n<\/pre><pre class=\"codeoutput\">\r\nX =\r\n\r\n   0.094108719644215  -0.094108788388666  -0.070056238584537\r\n  -0.174780805492871   0.174780755585658   0.194872753681838\r\n   0.980099596427929  -0.980099598727050  -0.978323429806240\r\n\r\n\r\nD =\r\n\r\n   2.415743144226897                   0                   0\r\n                   0   2.415738627741217                   0\r\n                   0                   0   1.168517777651195\r\n\r\n\r\nans =\r\n\r\n   3.9853e+08\r\n\r\n<\/pre><script language=\"JavaScript\"> <!-- \r\n    function grabCode_ad4e2269ccdd4034a444425f409949dd() {\r\n        \/\/ Remember the title so we can use it in the new page\r\n        title = document.title;\r\n\r\n        \/\/ Break up these strings so that their presence\r\n        \/\/ in the Javascript doesn't mess up the search for\r\n        \/\/ the MATLAB code.\r\n        t1='ad4e2269ccdd4034a444425f409949dd ' + '##### ' + 'SOURCE BEGIN' + ' #####';\r\n        t2='##### ' + 'SOURCE END' + ' #####' + ' ad4e2269ccdd4034a444425f409949dd';\r\n    \r\n        b=document.getElementsByTagName('body')[0];\r\n        i1=b.innerHTML.indexOf(t1)+t1.length;\r\n        i2=b.innerHTML.indexOf(t2);\r\n \r\n        code_string = b.innerHTML.substring(i1, i2);\r\n        code_string = code_string.replace(\/REPLACE_WITH_DASH_DASH\/g,'--');\r\n\r\n        \/\/ Use \/x3C\/g instead of the less-than character to avoid errors \r\n        \/\/ in the XML parser.\r\n        \/\/ Use '\\x26#60;' instead of '<' so that the XML parser\r\n        \/\/ doesn't go ahead and substitute the less-than character. 
\r\n        code_string = code_string.replace(\/\\x3C\/g, '\\x26#60;');\r\n\r\n        copyright = 'Copyright 2019 The MathWorks, Inc.';\r\n\r\n        w = window.open();\r\n        d = w.document;\r\n        d.write('<pre>\\n');\r\n        d.write(code_string);\r\n\r\n        \/\/ Add copyright line at the bottom if specified.\r\n        if (copyright.length > 0) {\r\n            d.writeln('');\r\n            d.writeln('%%');\r\n            if (copyright.length > 0) {\r\n                d.writeln('% _' + copyright + '_');\r\n            }\r\n        }\r\n\r\n        d.write('<\/pre>\\n');\r\n\r\n        d.title = title + ' (MATLAB code)';\r\n        d.close();\r\n    }   \r\n     --> <\/script><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\"><br><a href=\"javascript:grabCode_ad4e2269ccdd4034a444425f409949dd()\"><span style=\"font-size: x-small;        font-style: italic;\">Get \r\n      the MATLAB code <noscript>(requires JavaScript)<\/noscript><\/span><\/a><br><br>\r\n      Published with MATLAB&reg; R2018b<br><\/p><\/div><!--\r\nad4e2269ccdd4034a444425f409949dd ##### SOURCE BEGIN #####\r\n%%  An Eigenvalue Sensitivity Example\r\n% On May 29-30, I plan to attend a conference, organized by\r\n% Nick Higham, at the University of Manchester.  The title of the\r\n% conference is\r\n%\r\n% *Celebrating the Centenary of James H. Wilkinson's Birth*\r\n%\r\n% I am giving a talk about one of Wilkinson's favorite topics, how can\r\n% perturbations of a matrix with sensitive eigenvalues produce a\r\n% defective matrix with a multiple eigenvalue.  I uncovered this\r\n% example from my original Fortran MATLAB User's Guide, a University of\r\n% New Mexico technical report, dated 1981, four years before MathWorks. \r\n% The text of this blog post is copied from that guide.  
I've inserted\r\n% comments where today's MATLAB scales the eigenvectors differently.\r\n\r\n%% Eigenvalue Sensitivity Example\r\n% In this example, we construct a matrix whose eigenvalues are\r\n% moderately  sensitive  to  perturbations  and  then  analyze that\r\n% sensitivity. We begin with the statement\r\n% \r\n   B = [3 0 7; 0 2 0; 0 0 1]\r\n   \r\n%%\r\n% Obviously, the eigenvalues of B are 1, 2 and 3 .   Moreover,\r\n% since   B  is  not  symmetric,  these  eigenvalues  are  slightly\r\n% sensitive to perturbation.  (The value B(1,3) = 7 was  chosen  so\r\n% that the elements of the matrix A below are less than 1000.)\r\n% \r\n% We now generate a similarity transformation to disguise  the\r\n% eigenvalues and make them more sensitive.\r\n% \r\n   L = [1 0 0; 2 1 0; -3 4 1]\r\n   \r\n   M = L\\L'\r\n   \r\n%%\r\n% The matrix M has determinant equal to 1 and is  moderately  badly\r\n% conditioned.  The similarity transformation is\r\n\r\n   A = M*B\/M\r\n   \r\n%%\r\n% Because  det(M) = 1 , the elements of  A  would be exact integers\r\n% if there were no roundoff.  So,\r\n\r\n   A = round(A)\r\n \r\n%%\r\n% This, then, is our test matrix.  We can now  forget  how  it\r\n% was generated and analyze its eigenvalues.\r\n \r\n   [X,D] = eig(A)\r\n   \r\n   \r\n%%\r\n\r\n% { Today the eigenvectors are scaled to have unit length.\r\n%   Classic MATLAB did not scale the eigenvectors.  It got\r\n%\r\n%  X     =\r\n%    \r\n%      -.0891    3.4903   41.8091\r\n%       .1782   -9.1284  -62.7136\r\n%      -.9800   46.4473  376.2818\r\n%  }\r\n\r\n%% \r\n% Since A is similar to B, its eigenvalues are also  1,  2  and  3.\r\n% They  happen  to  be  computed  in  another  order by the EISPACK\r\n% subroutines.  The fact that the  columns  of  X,  which  are  the\r\n% eigenvectors,  are  so  far  from  being orthonormal is our first\r\n% indication that  the  eigenvalues  are  sensitive.  
To  see  this\r\n% sensitivity, we display more figures of the computed eigenvalues.\r\n\r\n   format long\r\n   diag(D)\r\n\r\n%% \r\n% We see that, on this computer, the last five {today: four} significant\r\n% figures\r\n% are  contaminated  by  roundoff  error.  A  somewhat  superficial\r\n% explanation of this is provided by\r\n\r\n   format short\r\n   cond(X)\r\n \r\n%%\r\n\r\n% { Classic:\r\n%\r\n%  ANS   =\r\n%    \r\n%   3.2216e+05\r\n% }\r\n\r\n   \r\n%%\r\n% The condition number of X gives an upper bound for  the  relative\r\n% error  in  the  computed  eigenvalues.   However,  this condition\r\n% number is affected by scaling.\r\n\r\n   X = X\/diag(X(3,:))\r\n   cond(X)\r\n   \r\n%%\r\n% Rescaling the eigenvectors so that their last components are\r\n% all  equal  to  one  has  two consequences. The condition of X is\r\n% decreased by over two orders of magnitude.  {Not today.}\r\n% (This is  about  the\r\n% minimum condition that can be obtained by such diagonal scaling.)\r\n% Moreover, it is now apparent  that  the  three  eigenvectors  are\r\n% nearly parallel.\r\n% \r\n% More  detailed  information  on  the  sensitivity   of   the\r\n% individual eigenvalues involves the left eigenvectors.\r\n\r\n   Y = inv(X')\r\n   Y'*A*X\r\n\r\n%% \r\n% We are now in a position to  compute  the  sensitivities  of  the\r\n% individual eigenvalues.\r\n\r\n   for j = 1:3, c(j) = norm(Y(:,j))*norm(X(:,j)); end\r\n   c\r\n\r\n%%\r\n% These three numbers are the reciprocals of  the  cosines  of  the\r\n% angles  between the left and right eigenvectors.  It can be shown\r\n% that  perturbation  of  the  elements  of  A  can  result  in   a\r\n% perturbation of the j-th eigenvalue which is c(j) times as large.\r\n% In  this  example,  the  first   eigenvalue   has   the   largest\r\n% sensitivity.\r\n% \r\n% We now proceed to show that A is close to a  matrix  with  a\r\n% double eigenvalue.  
The direction of the required perturbation is\r\n% given by\r\n \r\n   E = -1.e-6*Y(:,1)*X(:,1)'\r\n\r\n%% \r\n% With some trial and error which we do not show,  we  bracket  the\r\n% point  where  two  eigenvalues of a perturbed A coalesce and then\r\n% become complex.\r\n\r\n   eig(A + .4*E)\r\n   eig(A + .5*E)\r\n\r\n%%\r\n% Now, a bisecting search, driven by the imaginary part of  one  of\r\n% the eigenvalues, finds the point where two eigenvalues are nearly\r\n% equal.\r\n\r\n   r = .4;  s = .5;\r\n \r\n   while s-r > 1.e-14\r\n      t = (r+s)\/2; \r\n      d = eig(A+t*E);\r\n      if imag(d(1)) == 0, r = t; else, s = t; end\r\n   end\r\n \r\n   format long\r\n   t\r\n\r\n%%\r\n% Finally, we display the perturbed matrix, which is obviously\r\n% close  to the original, and its pair of nearly equal eigenvalues.\r\n\r\n   A+t*E\r\n   \r\n   eig(A+t*E)\r\n\r\n%%\r\n% The  first  two  eigenvectors  of  A + t*E   are   almost\r\n% indistinguishable  indicating that the perturbed matrix is almost\r\n% defective.\r\n\r\n   [X,D] = eig(A+t*E)\r\n   \r\n   format short\r\n   cond(X)\r\n\r\n##### SOURCE END ##### ad4e2269ccdd4034a444425f409949dd\r\n-->","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/AA-1.png\" class=\"img-responsive attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"\" decoding=\"async\" loading=\"lazy\" \/><\/div><!--introduction--><p>On May 29-30, I plan to attend a conference, organized by Nick Higham, at the University of Manchester.  The title of the conference is... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/cleve\/2019\/05\/22\/an-eigenvalue-sensitivity-example\/\">read more >><\/a><\/p>","protected":false},"author":78,"featured_media":4899,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[13,4,6,16],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/4901"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/comments?post=4901"}],"version-history":[{"count":2,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/4901\/revisions"}],"predecessor-version":[{"id":4905,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/4901\/revisions\/4905"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media\/4899"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media?parent=4901"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/categories?post=4901"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/tags?post=4901"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}