{"id":5733,"date":"2020-01-30T11:58:58","date_gmt":"2020-01-30T16:58:58","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=5733"},"modified":"2020-05-06T21:50:24","modified_gmt":"2020-05-07T01:50:24","slug":"pejorative-manifolds-of-polynomials-and-matrices-part-2","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2020\/01\/30\/pejorative-manifolds-of-polynomials-and-matrices-part-2\/","title":{"rendered":"Pejorative Manifolds of Polynomials and Matrices, part 2"},"content":{"rendered":"<div class=\"content\"><!--introduction--><p>In an unpublished 1972 technical report \"Conserving confluence curbs ill-condition,\"  Velvel Kahan coined the descriptive term <i>pejorative manifold<\/i>. In case you  don't use it in everyday conversation, <i>pejorative<\/i>  means \"expressing contempt or disapproval.\"<\/p><p>Velvel's report concerns polynomials with multiple roots, which are usually regarded with contempt because they are so badly conditioned. But Velvel's key observation is that although multiple roots are sensitive to arbitrary perturbation, they are insensitive to perturbations that preserve multiplicity.<\/p><p><a href=\"https:\/\/blogs.mathworks.com\/cleve\/2020\/01\/18\/pejorative-manifolds-of-polynomials-and-matrices-part-1\/\">Part 1<\/a> was about polynomials.  
This part is about matrix eigenvalues.<\/p><!--\/introduction--><h3>Contents<\/h3><div><ul><li><a href=\"#a43077df-7993-4e40-b4a3-2ddb32759798\">The manifold<\/a><\/li><li><a href=\"#d647746d-b7b8-4f67-a4f5-0298468b1353\">Two matrices<\/a><\/li><li><a href=\"#b95f231d-04b6-44e1-8eee-97ade2611da6\">Convex combination<\/a><\/li><li><a href=\"#b2b4a4b3-6510-41c2-b90c-85b250a99fcd\">Characteristic polynomial<\/a><\/li><li><a href=\"#73cd686a-dbd9-4cc8-a187-29f6f24fb970\">Plot<\/a><\/li><li><a href=\"#766dd717-ad71-4e1d-ac68-2b0ed400981c\">Similarity transformation<\/a><\/li><li><a href=\"#e1fc19fc-5b95-4eac-a269-93f7f34d68de\">Yes ma'am<\/a><\/li><li><a href=\"#9ae24555-f74c-484c-b6c5-5a9acef9343f\">Eigenvalues<\/a><\/li><li><a href=\"#aca61ba1-8f27-45ca-b942-597a83183125\">Eigenvectors<\/a><\/li><li><a href=\"#1f79b20c-87ff-4e4f-b14d-5405b595a69a\">Condition of eigenvalues<\/a><\/li><li><a href=\"#7fe5be7f-241e-42aa-bd92-99232bca51cd\">Jordan Canonical Form<\/a><\/li><li><a href=\"#4dea4aff-003a-480d-be12-b15efa70876c\">And The Beat Goes On ...<\/a><\/li><li><a href=\"#66d4eb44-0e50-4c6a-9c16-78f6d32c9557\">JCF<\/a><\/li><li><a href=\"#f089d442-fb16-4c31-85a5-3958eb416f03\">Symbolic Eigenvectors<\/a><\/li><li><a href=\"#85ca7882-519a-434c-8929-0476283d2d2a\">Check<\/a><\/li><li><a href=\"#b6d11247-c54c-4a72-b200-8e82e5990993\">TKP Preview<\/a><\/li><\/ul><\/div><h4>The manifold<a name=\"a43077df-7993-4e40-b4a3-2ddb32759798\"><\/a><\/h4><p>The pejorative manifold $\\mathcal{M}$ is now the set of all 6-by-6 matrices which have an eigenvalue of multiplicity three at $\\lambda$ = 3.  These are severe restrictions, of course, and $\\mathcal{M}$ is a tiny subset of the set of all matrices. But if we stay within $\\mathcal{M}$, life is not nearly so harsh.<\/p><h4>Two matrices<a name=\"d647746d-b7b8-4f67-a4f5-0298468b1353\"><\/a><\/h4><p>The Jordan Canonical Form of a matrix is bidiagonal, with eigenvalues on the diagonal and 1's and 0's on the super-diagonal.  
In our situation here, each eigenvalue with multiplicity m has a single m-by-m Jordan block with 1's on the superdiagonal. The Jordan blocks for distinct eigenvalues are separated by a zero on the super-diagonal.<\/p><p>Our first matrix has a 3-by-3 block with $\\lambda$ = 3, then a 1-by-1 block with $\\lambda$ = 2, and finally a 2-by-2 block with $\\lambda$ = 1, so the diagonal is<\/p><pre class=\"codeinput\">   d = [3  3  3  2  1  1];\r\n<\/pre><p>and the superdiagonal is<\/p><pre class=\"codeinput\">   j = [1  1  0  0  1];\r\n<\/pre><p>Here it is<\/p><pre class=\"codeinput\">   J1 = diag(j,1)\r\n   A1 = diag(d) + J1\r\n<\/pre><pre class=\"codeoutput\">J1 =\r\n     0     1     0     0     0     0\r\n     0     0     1     0     0     0\r\n     0     0     0     0     0     0\r\n     0     0     0     0     0     0\r\n     0     0     0     0     0     1\r\n     0     0     0     0     0     0\r\nA1 =\r\n     3     1     0     0     0     0\r\n     0     3     1     0     0     0\r\n     0     0     3     0     0     0\r\n     0     0     0     2     0     0\r\n     0     0     0     0     1     1\r\n     0     0     0     0     0     1\r\n<\/pre><p>Our second matrix moves one super-diagonal element to swap the multiplicities of $\\lambda$ = 2 and $\\lambda$ = 1.<\/p><pre class=\"codeinput\">   J2 = diag([1 1 0 1 0],1)\r\n   A2 = diag([3 3 3 2 2 1]) + J2\r\n<\/pre><pre class=\"codeoutput\">J2 =\r\n     0     1     0     0     0     0\r\n     0     0     1     0     0     0\r\n     0     0     0     0     0     0\r\n     0     0     0     0     1     0\r\n     0     0     0     0     0     0\r\n     0     0     0     0     0     0\r\nA2 =\r\n     3     1     0     0     0     0\r\n     0     3     1     0     0     0\r\n     0     0     3     0     0     0\r\n     0     0     0     2     1     0\r\n     0     0     0     0     2     0\r\n     0     0     0     0     0     1\r\n<\/pre><p>These two matrices were constructed to have the two polynomials from <a 
href=\"https:\/\/blogs.mathworks.com\/cleve\/2020\/01\/18\/pejorative-manifolds-of-polynomials-and-matrices-part-1\/\">my last post<\/a> as characteristic polynomials.  There is no need to compute the zeros, they are on the diagonals.<\/p><pre class=\"codeinput\">   p1 = charpoly(A1,<span class=\"string\">'s'<\/span>);\r\n   p2 = charpoly(A2,<span class=\"string\">'s'<\/span>);\r\n<\/pre><p>$$ p1 = s^6-13\\,s^5+68\\,s^4-182\\,s^3+261\\,s^2-189\\,s+54 $$<\/p><p>$$ p2 = s^6-14\\,s^5+80\\,s^4-238\\,s^3+387\\,s^2-324\\,s+108 $$<\/p><h4>Convex combination<a name=\"b95f231d-04b6-44e1-8eee-97ade2611da6\"><\/a><\/h4><p>The convex linear combination puts the weights on the superdiagonal and the new eigenvalue on the diagonal.<\/p><pre class=\"codeinput\">   format <span class=\"string\">short<\/span>\r\n   A = 1\/3*A1 + 2\/3*A2\r\n<\/pre><pre class=\"codeoutput\">A =\r\n    3.0000    1.0000         0         0         0         0\r\n         0    3.0000    1.0000         0         0         0\r\n         0         0    3.0000         0         0         0\r\n         0         0         0    2.0000    0.6667         0\r\n         0         0         0         0    1.6667    0.3333\r\n         0         0         0         0         0    1.0000\r\n<\/pre><h4>Characteristic polynomial<a name=\"b2b4a4b3-6510-41c2-b90c-85b250a99fcd\"><\/a><\/h4><p>Let's check that the characteristic polynomial is the same as the third polynomial we had last time.<\/p><pre class=\"codeinput\">   p3 = charpoly(A,<span class=\"string\">'s'<\/span>);\r\n<\/pre><p>$$ p3 = s^6-\\frac{41\\,s^5}{3}+76\\,s^4-\\frac{658\\,s^3}{3}+345\\,s^2-279\\,s+90 $$<\/p><h4>Plot<a name=\"73cd686a-dbd9-4cc8-a187-29f6f24fb970\"><\/a><\/h4><p>The plots of the three polynomials show how the triple root is more sensitive than either double root, which are, in turn, more sensitive than any of the simple roots.<\/p><pre class=\"codeinput\">   plot_polys(p1,p2,p3)\r\n<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" 
src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/pejorative_blog2_01.png\" alt=\"\"> <h4>Similarity transformation<a name=\"766dd717-ad71-4e1d-ac68-2b0ed400981c\"><\/a><\/h4><p>A similarity transformation preserves, but disguises, the eigenvalues. Because it's handy, I'll use the HPL-AI matrix from my blog post last December <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2019\/12\/04\/a-matrix-for-the-new-hpl-ai-benchmark\/\">HPL-AI Benchmark<\/a>.<\/p><pre class=\"codeinput\">   format <span class=\"string\">short<\/span>\r\n   M = HPLAI(6,-1)\r\n<\/pre><pre class=\"codeoutput\">M =\r\n    1.1667    0.1429    0.1250    0.1111    0.1000    0.0909\r\n    0.2000    1.1667    0.1429    0.1250    0.1111    0.1000\r\n    0.2500    0.2000    1.1667    0.1429    0.1250    0.1111\r\n    0.3333    0.2500    0.2000    1.1667    0.1429    0.1250\r\n    0.5000    0.3333    0.2500    0.2000    1.1667    0.1429\r\n    1.0000    0.5000    0.3333    0.2500    0.2000    1.1667\r\n<\/pre><h4>Yes ma'am<a name=\"e1fc19fc-5b95-4eac-a269-93f7f34d68de\"><\/a><\/h4><p>Here's how we do a similarity transformation in MATLAB.<\/p><pre class=\"codeinput\">   B = M*A\/M\r\n<\/pre><pre class=\"codeoutput\">B =\r\n    3.0610    1.1351    0.0899   -0.1695   -0.1178   -0.2053\r\n    0.0405    3.1519    1.1018   -0.1919   -0.1296   -0.2244\r\n    0.1360    0.2922    3.2144   -0.1402   -0.0745   -0.1867\r\n    0.1096    0.3808    0.2786    1.8527    0.6013   -0.1919\r\n    0.2839    0.6709    0.4447   -0.1222    1.6349    0.1467\r\n    1.5590    1.5424    0.7469   -0.1300   -0.0449    0.7517\r\n<\/pre><h4>Eigenvalues<a name=\"9ae24555-f74c-484c-b6c5-5a9acef9343f\"><\/a><\/h4><p>What did this do to the eigenvalues?<\/p><pre class=\"codeinput\">   format <span class=\"string\">long<\/span>\r\n   e = eig(B)\r\n<\/pre><pre class=\"codeoutput\">e =\r\n  1.000000000000000 + 0.000000000000000i\r\n  1.666666666666671 + 0.000000000000000i\r\n  1.999999999999999 + 0.000000000000000i\r\n  3.000006294572211 + 
0.000000000000000i\r\n  2.999996852713897 + 0.000005451273553i\r\n  2.999996852713897 - 0.000005451273553i\r\n<\/pre><p>The simple roots have hardly moved.  The root with multiplicity three is perturbed by roughly the cube root of the floating point precision; eps^(1\/3) is about 6.1e-6.<\/p><pre class=\"codeinput\">   format <span class=\"string\">short<\/span>\r\n   e3 = e(4:6) - 3;\r\n   r3 = abs(e3)\r\n   circle(e3)\r\n<\/pre><pre class=\"codeoutput\">r3 =\r\n   1.0e-05 *\r\n    0.6295\r\n    0.6295\r\n    0.6295\r\n<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/pejorative_blog2_02.png\" alt=\"\"> <h4>Eigenvectors<a name=\"aca61ba1-8f27-45ca-b942-597a83183125\"><\/a><\/h4><p>What about the eigenvectors?<\/p><pre class=\"codeinput\">   [V,~] = eig(B);\r\n   imagV = imag(V)\r\n   realV = real(V)\r\n<\/pre><pre class=\"codeoutput\">imagV =\r\n   1.0e-05 *\r\n         0         0         0         0         0         0\r\n         0         0         0         0    0.3705   -0.3705\r\n         0         0         0         0    0.0549   -0.0549\r\n         0         0         0         0    0.0678   -0.0678\r\n         0         0         0         0    0.0883   -0.0883\r\n         0         0         0         0    0.1225   -0.1225\r\nrealV =\r\n   -0.0601   -0.0519   -0.0904   -0.6942    0.6942    0.6942\r\n   -0.0664   -0.0590   -0.1017   -0.1190    0.1190    0.1190\r\n   -0.0742   -0.0683   -0.1162   -0.1487    0.1487    0.1487\r\n   -0.3413   -0.9310   -0.9488   -0.1983    0.1983    0.1983\r\n    0.2883    0.3258   -0.1627   -0.2975    0.2975    0.2975\r\n   -0.8870   -0.1275   -0.2033   -0.5950    0.5950    0.5950\r\n<\/pre><p>The last two vectors have small imaginary components and their real components are nearly the same as the fourth vector. So only the first four columns of <tt>V<\/tt> are good eigenvectors. We see <tt>B<\/tt>, and hence <tt>A<\/tt>, is <i>defective<\/i>.  
It does not have a full set of linearly independent eigenvectors.<\/p><h4>Condition of eigenvalues<a name=\"1f79b20c-87ff-4e4f-b14d-5405b595a69a\"><\/a><\/h4><pre class=\"codeinput\">   help <span class=\"string\">condeig<\/span>\r\n   format <span class=\"string\">short<\/span> <span class=\"string\">e<\/span>\r\n   kappa = condeig(B)\r\n<\/pre><pre class=\"codeoutput\"> CONDEIG Condition number with respect to eigenvalues.\r\n    CONDEIG(A) is a vector of condition numbers for the eigenvalues\r\n    of A. These condition numbers are the reciprocals of the cosines \r\n    of the angles between the left and right eigenvectors.\r\n        [V,D,s] = CONDEIG(A) is equivalent to:\r\n        [V,D] = EIG(A); s = CONDEIG(A);\r\n \r\n    Large condition numbers imply that A is near a matrix with\r\n    multiple eigenvalues.\r\n \r\n    Class support for input A:\r\n       float: double, single\r\n \r\n    See also COND.\r\n\r\n    Documentation for condeig\r\n       doc condeig\r\n\r\n\r\nkappa =\r\n   1.6014e+00\r\n   2.9058e+00\r\n   2.9286e+00\r\n   1.3213e+10\r\n   1.3213e+10\r\n   1.3213e+10\r\n<\/pre><p>Look carefully at those numbers -- the power of 10 on the first three is zero, while on the last three it is 10. 
This confirms that the first three eigenvalues are well conditioned, while the last three, which belong to the triple eigenvalue, are not.<\/p><h4>Jordan Canonical Form<a name=\"7fe5be7f-241e-42aa-bd92-99232bca51cd\"><\/a><\/h4><p><tt>A<\/tt> is almost its own JCF, so this is no surprise.<\/p><pre class=\"codeinput\">   A_jcf = jordan(A)\r\n<\/pre><pre class=\"codeoutput\">A_jcf =\r\n   3.0000e+00   1.0000e+00            0            0            0            0\r\n            0   3.0000e+00   1.0000e+00            0            0            0\r\n            0            0   3.0000e+00            0            0            0\r\n            0            0            0   1.0000e+00            0            0\r\n            0            0            0            0   1.6667e+00            0\r\n            0            0            0            0            0   2.0000e+00\r\n<\/pre><p>But what about <tt>B<\/tt>?  With exact computation, it would have the same JCF.<\/p><pre class=\"codeinput\">   format <span class=\"string\">short<\/span> <span class=\"string\">e<\/span>\r\n   B_jcf = jordan(B);\r\n   real_B_jcf = real(B_jcf)\r\n   imag_B_jcf = imag(B_jcf)\r\n<\/pre><pre class=\"codeoutput\">real_B_jcf =\r\n   1.0000e+00            0            0            0            0            0\r\n            0   1.6667e+00            0            0            0            0\r\n            0            0   2.0000e+00            0            0            0\r\n            0            0            0   3.0000e+00            0            0\r\n            0            0            0            0   3.0000e+00            0\r\n            0            0            0            0            0   3.0000e+00\r\nimag_B_jcf =\r\n            0            0            0            0            0            0\r\n            0            0            0            0            0            0\r\n            0            0            0            0            0            0\r\n            0            0            0            0            0        
    0\r\n            0            0            0            0  -2.7873e-06            0\r\n            0            0            0            0            0   2.7873e-06\r\n<\/pre><p>The computed JCF is diagonal.  This is, yet again, an example of the fact that <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2018\/05\/28\/the-jordan-canonical-form-just-doesnt-compute\/\">The Jordan Canonical Form Just Doesn't Compute<\/a>. Also see <a href=\"https:\/\/epubs.siam.org\/doi\/10.1137\/1018113\">Golub and Wilkinson<\/a>.<\/p><h4>And The Beat Goes On ...<a name=\"4dea4aff-003a-480d-be12-b15efa70876c\"><\/a><\/h4><p>I was just about to make that the end of the post when I tried this. I didn't realize it at the time, but the use of the HPL-AI matrix for the \"random\" similarity transformation has some welcome consequences.  The elements of this matrix are the ratios of small integers.<\/p><pre class=\"codeinput\">   M = sym(M)\r\n<\/pre><pre class=\"codeoutput\">M =\r\n[ 7\/6, 1\/7, 1\/8, 1\/9, 1\/10, 1\/11]\r\n[ 1\/5, 7\/6, 1\/7, 1\/8,  1\/9, 1\/10]\r\n[ 1\/4, 1\/5, 7\/6, 1\/7,  1\/8,  1\/9]\r\n[ 1\/3, 1\/4, 1\/5, 7\/6,  1\/7,  1\/8]\r\n[ 1\/2, 1\/3, 1\/4, 1\/5,  7\/6,  1\/7]\r\n[   1, 1\/2, 1\/3, 1\/4,  1\/5,  7\/6]\r\n<\/pre><p>The elements of <tt>A<\/tt> are also the ratios of small integers.<\/p><pre class=\"codeinput\">   sym(A)\r\n<\/pre><pre class=\"codeoutput\">ans =\r\n[ 3, 1, 0, 0,   0,   0]\r\n[ 0, 3, 1, 0,   0,   0]\r\n[ 0, 0, 3, 0,   0,   0]\r\n[ 0, 0, 0, 2, 2\/3,   0]\r\n[ 0, 0, 0, 0, 5\/3, 1\/3]\r\n[ 0, 0, 0, 0,   0,   1]\r\n<\/pre><p>The elements of <tt>inv(M)<\/tt> are ratios of large integers, but I don't have to show them because we're not going to invert <tt>M<\/tt>, even symbolically. Instead we use <i>forward slash<\/i> to compute the similarity transformation.<\/p><pre class=\"codeinput\">   B = M*A\/M;\r\n<\/pre><p>The elements of <tt>B<\/tt> are also the ratios of large integers.  
Let's just look at the first column; the other columns are similar.<\/p><pre class=\"codeinput\">    B1 = B(:,1)\r\n<\/pre><pre class=\"codeoutput\">B1 =\r\n 5261534240243927141\/1718872313588352715\r\n   69679116377352174\/1718872313588352715\r\n    46738873967260860\/343774462717670543\r\n      2898457606578534\/26444189439820811\r\n    97602262779214116\/343774462717670543\r\n 2679724812276211392\/1718872313588352715\r\n<\/pre><h4>JCF<a name=\"66d4eb44-0e50-4c6a-9c16-78f6d32c9557\"><\/a><\/h4><p>With this exact symbolic computation, the characteristic polynomial of <tt>B<\/tt> is <tt>p3<\/tt>.<\/p><pre class=\"codeinput\">   charpoly(B,<span class=\"string\">'s'<\/span>)\r\n<\/pre><pre class=\"codeoutput\">ans =\r\ns^6 - (41*s^5)\/3 + 76*s^4 - (658*s^3)\/3 + 345*s^2 - 279*s + 90\r\n<\/pre><p>As a result, the JCF of <tt>B<\/tt> is correct.<\/p><pre class=\"codeinput\">   jordan(B)\r\n<\/pre><pre class=\"codeoutput\">ans =\r\n[ 1,   0, 0, 0, 0, 0]\r\n[ 0, 5\/3, 0, 0, 0, 0]\r\n[ 0,   0, 2, 0, 0, 0]\r\n[ 0,   0, 0, 3, 1, 0]\r\n[ 0,   0, 0, 0, 3, 1]\r\n[ 0,   0, 0, 0, 0, 3]\r\n<\/pre><h4>Symbolic Eigenvectors<a name=\"f089d442-fb16-4c31-85a5-3958eb416f03\"><\/a><\/h4><p>The third output argument of the symbolic version of <tt>eig<\/tt> is a vector whose length is the number of linearly independent eigenvectors. This is the sum of the <i>geometric multiplicities<\/i>.  
In this case, it is 4.<\/p><pre class=\"codeinput\">   [V,E,k] = eig(B)\r\n<\/pre><pre class=\"codeoutput\">V =\r\n[  11\/27,  463\/6831,  4\/9, 7\/6]\r\n[  25\/54,    31\/414,  1\/2, 1\/5]\r\n[  15\/28,  485\/5796,  4\/7, 1\/4]\r\n[ 460\/63, 1115\/2898, 14\/3, 1\/3]\r\n[  -23\/9,  -157\/483,  4\/5, 1\/2]\r\n[      1,         1,    1,   1]\r\nE =\r\n[ 5\/3, 0, 0, 0, 0, 0]\r\n[   0, 1, 0, 0, 0, 0]\r\n[   0, 0, 2, 0, 0, 0]\r\n[   0, 0, 0, 3, 0, 0]\r\n[   0, 0, 0, 0, 3, 0]\r\n[   0, 0, 0, 0, 0, 3]\r\nk =\r\n     1     2     3     4\r\n<\/pre><h4>Check<a name=\"85ca7882-519a-434c-8929-0476283d2d2a\"><\/a><\/h4><p>Verify that the eigenvectors work.<\/p><pre class=\"codeinput\">   BV = B*V\r\n\r\n   VE_k = V*E(k,k)\r\n<\/pre><pre class=\"codeoutput\">BV =\r\n[    55\/81,  463\/6831,  8\/9, 7\/2]\r\n[  125\/162,    31\/414,    1, 3\/5]\r\n[    25\/28,  485\/5796,  8\/7, 3\/4]\r\n[ 2300\/189, 1115\/2898, 28\/3,   1]\r\n[  -115\/27,  -157\/483,  8\/5, 3\/2]\r\n[      5\/3,         1,    2,   3]\r\nVE_k =\r\n[    55\/81,  463\/6831,  8\/9, 7\/2]\r\n[  125\/162,    31\/414,    1, 3\/5]\r\n[    25\/28,  485\/5796,  8\/7, 3\/4]\r\n[ 2300\/189, 1115\/2898, 28\/3,   1]\r\n[  -115\/27,  -157\/483,  8\/5, 3\/2]\r\n[      5\/3,         1,    2,   3]\r\n<\/pre><h4>TKP Preview<a name=\"b6d11247-c54c-4a72-b200-8e82e5990993\"><\/a><\/h4><p>While preparing these two posts about \"pejorative manifolds\", I discovered some beautiful patterns of roundoff error generated by Triple Kronecker Products with very high multiplicity.  That is going to be <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2020\/02\/14\/roundoff-patterns-from-triple-kronecker-products\/\">my next post.<\/a> Here is a preview. 
<a href=\"https:\/\/blogs.mathworks.com\/cleve\/files\/tkp_preview.gif\">tkp_preview.gif<\/a><\/p><pre class=\"codeinput\">   close\r\n<\/pre><p class=\"footer\"><br>\r\n      Published with MATLAB&reg; R2019b<br><\/p><\/div>","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/pejorative_blog2_01.png\" class=\"img-responsive attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"\" decoding=\"async\" loading=\"lazy\" \/><\/div><!--introduction--><p>In an unpublished 1972 technical report \"Conserving confluence curbs ill-condition,\"  Velvel Kahan coined the descriptive term <i>pejorative manifold<\/i>. In case you  don't use it in everyday conversation, <i>pejorative<\/i>  means \"expressing contempt or disapproval.\"... <a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/cleve\/2020\/01\/30\/pejorative-manifolds-of-polynomials-and-matrices-part-2\/\">read more 
>><\/a><\/p>","protected":false},"author":78,"featured_media":5755,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[12,13,16,7,20],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/5733"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/comments?post=5733"}],"version-history":[{"count":3,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/5733\/revisions"}],"predecessor-version":[{"id":6229,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/5733\/revisions\/6229"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media\/5755"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media?parent=5733"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/categories?post=5733"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/tags?post=5733"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}