{"id":850,"date":"2013-12-23T12:00:42","date_gmt":"2013-12-23T17:00:42","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=850"},"modified":"2017-06-19T08:50:30","modified_gmt":"2017-06-19T13:50:30","slug":"fiedler-companion-matrix","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2013\/12\/23\/fiedler-companion-matrix\/","title":{"rendered":"Fiedler Companion Matrix"},"content":{"rendered":"<div class=\"content\"><!--introduction--><p>The Fiedler companion matrix distributes the coefficients of a polynomial along the diagonals of an elegant pentadiagonal matrix whose eigenvalues are equal to the zeros of the polynomial.<\/p><!--\/introduction--><h3>Contents<\/h3><div><ul><li><a href=\"#a9d4ee08-6a3e-415e-8230-68eacc2a16e8\">Miroslav Fiedler<\/a><\/li><li><a href=\"#d8432c43-ca9c-4036-a910-175f7b830f0a\">Traditional companion matrix<\/a><\/li><li><a href=\"#f5cd78a1-b5ea-43e7-842e-1bf5792ed56b\">Fiedler companion matrix<\/a><\/li><li><a href=\"#0b0187fc-33d9-4faf-8123-6844ffd80f64\">Roots<\/a><\/li><li><a href=\"#30e2f87e-62ef-4f55-9308-6fac6b3d19a1\">Wilkinson 10<\/a><\/li><li><a href=\"#c497bfbd-4f52-4b66-9cff-8ec0041ab1c2\">Balancing<\/a><\/li><li><a href=\"#95a1616f-f69a-4d94-908a-c4cff6a5044a\">Similarity Exercise<\/a><\/li><li><a href=\"#4e5abf22-3531-4229-8a0b-c9cb692fd067\">Conclusion<\/a><\/li><li><a href=\"#36880c59-da32-4099-a2c4-4e7db628ec75\">References<\/a><\/li><\/ul><\/div><h4>Miroslav Fiedler<a name=\"a9d4ee08-6a3e-415e-8230-68eacc2a16e8\"><\/a><\/h4><p>I spent a very pleasant day in Berkeley several weeks ago with my friend Beresford Parlett. During the visit he told me about an elegant version of the companion matrix that Miroslav Fiedler introduced in 2003. Fiedler is Czech mathematician known for his contributions to linear algebra, graph theory, and the connections between the two.  
He has been a frequent participant in the Householder symposia and, at various times, a member of its organizing committee.<\/p><h4>Traditional companion matrix<a name=\"d8432c43-ca9c-4036-a910-175f7b830f0a\"><\/a><\/h4><p>MATLAB usually represents a polynomial of degree $n$ by a vector with $n+1$ components.  With such a representation, the leading term has its own coefficient, which might be zero.  In this article, I want to have <i>monic<\/i> polynomials, where the leading coefficient is one, and so there are just $n$ components in the corresponding vector.<\/p><p>$$ p(x) = x^n + a_1 x^{n-1} + a_2 x^{n-2} + ... + a_{n-1} x + a_n $$<\/p><p>Associated with every such polynomial is its <i>companion matrix<\/i> with the coefficients strung out along the first row and ones along the first subdiagonal.  This traditional companion matrix is readily generated by a slick bit of MATLAB code.<\/p><pre class=\"codeinput\">  type <span class=\"string\">companion<\/span>\r\n<\/pre><pre class=\"codeoutput\">\r\n  function C = companion(a)\r\n  n = length(a);\r\n  C = [-a; eye(n-1,n)];\r\n<\/pre><p>Here is the traditional companion matrix for the coefficients <tt>1:10<\/tt>.<\/p><pre class=\"codeinput\">  companion(1:10)\r\n<\/pre><pre class=\"codeoutput\">\r\nans =\r\n\r\n    -1    -2    -3    -4    -5    -6    -7    -8    -9   -10\r\n     1     0     0     0     0     0     0     0     0     0\r\n     0     1     0     0     0     0     0     0     0     0\r\n     0     0     1     0     0     0     0     0     0     0\r\n     0     0     0     1     0     0     0     0     0     0\r\n     0     0     0     0     1     0     0     0     0     0\r\n     0     0     0     0     0     1     0     0     0     0\r\n     0     0     0     0     0     0     1     0     0     0\r\n     0     0     0     0     0     0     0     1     0     0\r\n     0     0     0     0     0     0     0     0     1     0\r\n\r\n<\/pre><p>And here is the <tt>spy<\/tt> plot for 30 coefficients.<\/p><pre 
class=\"codeinput\">  spy(companion(1:30))\r\n<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/cleve\/fiedler_blog_01.png\" alt=\"\"> <h4>Fiedler companion matrix<a name=\"f5cd78a1-b5ea-43e7-842e-1bf5792ed56b\"><\/a><\/h4><p>Fiedler showed that we could generate a different companion matrix by arranging the coefficients, alternating with zeros, along the super- and subdiagonal, together with ones and zeros along the supersuper- and subsubdiagonal of a pentadiagonal matrix. I love the MATLAB code that generates the Fiedler companion matrix.<\/p><pre class=\"codeinput\">  type <span class=\"string\">fiedler<\/span>\r\n<\/pre><pre class=\"codeoutput\">\r\n  function F = fiedler(a)\r\n  n = length(a);\r\n  b = ones(n-2,1); b(1:2:n-2) = 0;\r\n  c = -a(2:n); c(1:2:n-1) = 0; c(1) = 1;\r\n  d = -a(2:n); d(2:2:n-1) = 0;\r\n  e = ones(n-2,1); e(2:2:n-2) = 0;\r\n  F = diag(b,-2) + diag(c,-1) + diag(d,1) + diag(e,2); F(1,1) = -a(1);\r\n   \r\n<\/pre><p>Here is the Fiedler companion matrix for the coefficients <tt>1:10<\/tt>.<\/p><pre class=\"codeinput\">  fiedler(1:10)\r\n<\/pre><pre class=\"codeoutput\">\r\nans =\r\n\r\n    -1    -2     1     0     0     0     0     0     0     0\r\n     1     0     0     0     0     0     0     0     0     0\r\n     0    -3     0    -4     1     0     0     0     0     0\r\n     0     1     0     0     0     0     0     0     0     0\r\n     0     0     0    -5     0    -6     1     0     0     0\r\n     0     0     0     1     0     0     0     0     0     0\r\n     0     0     0     0     0    -7     0    -8     1     0\r\n     0     0     0     0     0     1     0     0     0     0\r\n     0     0     0     0     0     0     0    -9     0   -10\r\n     0     0     0     0     0     0     0     1     0     0\r\n\r\n<\/pre><p>And here is the <tt>spy<\/tt> plot for 30 coefficients.<\/p><pre class=\"codeinput\">  spy(fiedler(1:30))\r\n<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" 
src=\"https:\/\/blogs.mathworks.com\/images\/cleve\/fiedler_blog_02.png\" alt=\"\"> <h4>Roots<a name=\"0b0187fc-33d9-4faf-8123-6844ffd80f64\"><\/a><\/h4><p>When I wrote the first Fortran MATLAB in the late 1970s, I included the <tt>roots<\/tt> function.  I will admit now that it was hardly more than a quick hack.  <tt>roots<\/tt> finds the zeros of a polynomial by setting up the (traditional) companion matrix and calling the <tt>eig<\/tt> function to compute its eigenvalues.  I included this function for two reasons.  First, it gave me a polynomial zero finder at very little cost in the way of additional code.  And second, it turned the tables on the linear algebra textbook approach of finding matrix eigenvalues by computing the zeros of the characteristic polynomial.  But I wasn't sure that <tt>roots<\/tt> was always accurate.<\/p><p>I was lucky.  <tt>roots<\/tt> turned out to be a good idea.  Papers published 15 years later by Kim-Chuan Toh and Nick Trefethen, and by Alan Edelman and H. Murakami, established that, as long as <i>balancing<\/i> is used, <tt>roots<\/tt> is numerically reliable.<\/p><p>On the other hand, <tt>roots<\/tt> can be criticized from an efficiency point of view.  A polynomial has $O(n)$ coefficients.  I'm not sure what the computational complexity of the zero finding problem should be, but the complexity of the matrix eigenvalue problem is $O(n^2)$ storage and $O(n^3)$ time, which is certainly too much.<\/p><p>This is where Fiedler comes in.  Fiedler can replace the traditional companion matrix in <tt>roots<\/tt>.  
Now, if we could find a matrix eigenvalue algorithm that works on nonsymmetric pentadiagonal matrices in $O(n)$ storage and $O(n^2)$ time, then we would have the ultimate <tt>roots<\/tt>.<\/p><h4>Wilkinson 10<a name=\"30e2f87e-62ef-4f55-9308-6fac6b3d19a1\"><\/a><\/h4><p>For an example, let's use the <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2013\/03\/04\/wilkinsons-polynomials\">Wilkinson polynomial<\/a> with roots 1 through 10.<\/p><pre class=\"codeinput\">  syms <span class=\"string\">x<\/span>\r\n  p = prod(x-(1:10))\r\n  p = expand(p)\r\n  a = coeffs(p);\r\n  F = fiedler(double(fliplr(a(1:end-1))))\r\n<\/pre><pre class=\"codeoutput\"> \r\np =\r\n \r\n(x - 1)*(x - 2)*(x - 3)*(x - 4)*(x - 5)*(x - 6)*(x - 7)*(x - 8)*(x - 9)*(x - 10)\r\n \r\n \r\np =\r\n \r\nx^10 - 55*x^9 + 1320*x^8 - 18150*x^7 + 157773*x^6 - 902055*x^5 + 3416930*x^4 - 8409500*x^3 + 12753576*x^2 - 10628640*x + 3628800\r\n \r\n\r\nF =\r\n\r\n  Columns 1 through 6\r\n\r\n          55       -1320           1           0           0           0\r\n           1           0           0           0           0           0\r\n           0       18150           0     -157773           1           0\r\n           0           1           0           0           0           0\r\n           0           0           0      902055           0    -3416930\r\n           0           0           0           1           0           0\r\n           0           0           0           0           0     8409500\r\n           0           0           0           0           0           1\r\n           0           0           0           0           0           0\r\n           0           0           0           0           0           0\r\n\r\n  Columns 7 through 10\r\n\r\n           0           0           0           0\r\n           0           0           0           0\r\n           0           0           0           0\r\n           0           0           0           0\r\n           1           0           0      
     0\r\n           0           0           0           0\r\n           0   -12753576           1           0\r\n           0           0           0           0\r\n           0    10628640           0    -3628800\r\n           0           1           0           0\r\n\r\n<\/pre><p>Look at those coefficients. Can you imagine what contortions a vector $v$ must go through in order to satisfy $Fv = \\lambda v$ for some integer $\\lambda$ between 1 and 10?  As a result, the computed eigenvalues of $F$ show a loss of accuracy caused by <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2013\/03\/04\/wilkinsons-polynomials\">the sensitivity<\/a> of the roots of this polynomial.<\/p><pre class=\"codeinput\">  format <span class=\"string\">long<\/span>\r\n  lambda = eig(F)\r\n<\/pre><pre class=\"codeoutput\">\r\nlambda =\r\n\r\n   9.999999999992546\r\n   8.999999999933312\r\n   8.000000000317854\r\n   6.999999999479941\r\n   6.000000000423509\r\n   4.999999999814492\r\n   4.000000000042952\r\n   2.999999999995175\r\n   2.000000000000225\r\n   0.999999999999996\r\n\r\n<\/pre><h4>Balancing<a name=\"c497bfbd-4f52-4b66-9cff-8ec0041ab1c2\"><\/a><\/h4><p>Symmetric matrices have perfectly well conditioned eigenvalues.  Balancing, an idea introduced by Parlett and Reinsch, is the attempt to find a diagonal similarity transformation that makes a nonsymmetric matrix \"more nearly symmetric\", in the sense that the norms of its rows and the norms of its columns are closer to being equal.  Balancing is not always possible. For example, an upper or lower triangular matrix cannot be balanced. Triangular matrices are hopelessly nonsymmetric, but that's OK because their eigenvalues are on the diagonal.<\/p><p>By default, MATLAB, using LAPACK, balances nonsymmetric matrices before computing eigenvalues.  As Toh and Trefethen, and Edelman and Murakami, emphasized, this is essential for the numerical stability of <tt>roots<\/tt>. The same is true for <tt>roots<\/tt> based on a Fiedler matrix.  
When we called <tt>eig<\/tt> above, MATLAB balanced the input matrix.  It helps a lot.<\/p><p>Balancing is done by powers of two, so as not to introduce any roundoff errors.  The large elements are scaled down significantly, while the ones on the outer diagonals are scaled up by only a few powers of two.  Here is the balanced Fiedler matrix.<\/p><pre class=\"codeinput\">  format <span class=\"string\">short<\/span>\r\n  B = balance(F)\r\n<\/pre><pre class=\"codeoutput\">\r\nB =\r\n\r\n  Columns 1 through 7\r\n\r\n   55.0000  -20.6250   32.0000         0         0         0         0\r\n   64.0000         0         0         0         0         0         0\r\n         0    8.8623         0   -4.8148    8.0000         0         0\r\n         0   16.0000         0         0         0         0         0\r\n         0         0         0    3.4411         0   -3.2586    4.0000\r\n         0         0         0    4.0000         0         0         0\r\n         0         0         0         0         0    2.0050         0\r\n         0         0         0         0         0    2.0000         0\r\n         0         0         0         0         0         0         0\r\n         0         0         0         0         0         0         0\r\n\r\n  Columns 8 through 10\r\n\r\n         0         0         0\r\n         0         0         0\r\n         0         0         0\r\n         0         0         0\r\n         0         0         0\r\n         0         0         0\r\n   -1.5203    2.0000         0\r\n         0         0         0\r\n    0.6335         0   -0.4326\r\n    0.5000         0         0\r\n\r\n<\/pre><p>Balancing decreases the norm of the matrix by five orders of magnitude. 
Here is the before and after.<\/p><pre class=\"codeinput\">  format <span class=\"string\">short<\/span> <span class=\"string\">g<\/span>\r\n  disp(<span class=\"string\">'   no balance      balance'<\/span>)\r\n  disp([norm(F) norm(B)])\r\n<\/pre><pre class=\"codeoutput\">   no balance      balance\r\n   1.8092e+07       88.433\r\n\r\n<\/pre><p>Sometimes, but not often, it is desirable to bypass balancing. This is the situation when the matrix contains small elements that are the result of roundoff or other errors that should not be scaled up to become comparable with the other elements.  There are no such elements in this matrix.<\/p><p>Here are the computed eigenvalues, without and with balancing.<\/p><pre class=\"codeinput\">  format <span class=\"string\">long<\/span>\r\n  disp(<span class=\"string\">'       no balance            balance'<\/span>)\r\n  disp([eig(F,<span class=\"string\">'nobalance'<\/span>) eig(F)])\r\n<\/pre><pre class=\"codeoutput\">       no balance            balance\r\n   9.999999330734642   9.999999999992546\r\n   8.999996381627081   8.999999999933312\r\n   8.000028620785038   8.000000000317854\r\n   6.999932559108236   6.999999999479941\r\n   6.000079411601993   6.000000000423509\r\n   4.999948356986658   4.999999999814492\r\n   4.000018256147182   4.000000000042952\r\n   2.999996926380218   2.999999999995175\r\n   2.000000155398907   2.000000000000225\r\n   1.000000001522463   0.999999999999996\r\n\r\n<\/pre><p>You have to look at the trailing digits to see that the eigenvalues computed without balancing are less accurate.  It becomes clearer when we compute the errors.  
Balancing improves the accuracy by roughly the same five orders of magnitude by which it reduces the norm of the input.<\/p><pre class=\"codeinput\">  format <span class=\"string\">short<\/span> <span class=\"string\">e<\/span>\r\n  disp(<span class=\"string\">'     balance    no balance'<\/span>)\r\n  disp([eig(F)-(10:-1:1)' eig(F,<span class=\"string\">'nobalance'<\/span>)-(10:-1:1)'])\r\n<\/pre><pre class=\"codeoutput\">     balance    no balance\r\n  -7.4536e-12  -6.6927e-07\r\n  -6.6688e-11  -3.6184e-06\r\n   3.1785e-10   2.8621e-05\r\n  -5.2006e-10  -6.7441e-05\r\n   4.2351e-10   7.9412e-05\r\n  -1.8551e-10  -5.1643e-05\r\n   4.2952e-11   1.8256e-05\r\n  -4.8255e-12  -3.0736e-06\r\n   2.2515e-13   1.5540e-07\r\n  -3.7748e-15   1.5225e-09\r\n\r\n<\/pre><h4>Similarity Exercise<a name=\"95a1616f-f69a-4d94-908a-c4cff6a5044a\"><\/a><\/h4><p>The traditional companion matrix and the Fiedler companion matrix are similar to each other.  Can you find the similarity transformation? It has a very interesting nonzero structure.<\/p><h4>Conclusion<a name=\"4e5abf22-3531-4229-8a0b-c9cb692fd067\"><\/a><\/h4><p>If we could figure out how to take advantage of the pentadiagonal structure, Miroslav Fiedler's elegant companion matrix would be an all-around winner, but we must not forget to balance.<\/p><h4>References<a name=\"36880c59-da32-4099-a2c4-4e7db628ec75\"><\/a><\/h4><p>Miroslav Fiedler, A note on companion matrices, Linear Algebra and its Applications 372 (2003), 325-331.<\/p><p>Alan Edelman and H. Murakami, Polynomial Roots from Companion Matrix Eigenvalues, Mathematics of Computation 64 (1995), 763-776, <a href=\"http:\/\/www-math.mit.edu\/~edelman\/homepage\/papers\/companion.pdf\">&lt;http:\/\/www-math.mit.edu\/~edelman\/homepage\/papers\/companion.pdf<\/a>&gt;<\/p><p>Kim-Chuan Toh and Lloyd N. Trefethen, Pseudozeros of polynomials and pseudospectra of companion matrices, Numer. Math. 
68 (1994), 403-425, <a href=\"http:\/\/people.maths.ox.ac.uk\/~trefethen\/publication\/PDF\/1994_61.pdf\">&lt;http:\/\/people.maths.ox.ac.uk\/~trefethen\/publication\/PDF\/1994_61.pdf<\/a>&gt;<\/p><p>Cleve Moler, Cleve's Corner: ROOTS -- Of polynomials, that is, <a href=\"https:\/\/www.mathworks.com\/company\/newsletters\/articles\/roots-of-polynomials-that-is.html\">&lt;https:\/\/www.mathworks.com\/company\/newsletters\/articles\/roots-of-polynomials-that-is.html<\/a>&gt;<\/p><p>Beresford Parlett and Christian Reinsch, Balancing a matrix for calculation of eigenvalues and eigenvectors, Numer. Math. 13 (1969), 293-304, <a href=\"http:\/\/link.springer.com\/article\/10.1007\/BF02165404\">&lt;http:\/\/link.springer.com\/article\/10.1007\/BF02165404<\/a>&gt;<\/p><p class=\"footer\"><br>\r\n      Published with MATLAB&reg; R2014a<br><\/p><\/div>","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/images\/cleve\/fiedler_blog_01.png\" onError=\"this.style.display ='none';\" \/><\/div><!--introduction--><p>The Fiedler companion matrix distributes the coefficients of a polynomial along the diagonals of an elegant pentadiagonal matrix whose eigenvalues are equal to the zeros of the polynomial.... <a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/cleve\/2013\/12\/23\/fiedler-companion-matrix\/\">read more >><\/a><\/p>","protected":false},"author":78,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[13,6,16,8],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/850"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/comments?post=850"}],"version-history":[{"count":12,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/850\/revisions"}],"predecessor-version":[{"id":2560,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/850\/revisions\/2560"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media?parent=850"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/categories?post=850"},{"taxonomy":"p
ost_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/tags?post=850"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}