{"id":1783,"date":"2016-06-27T12:00:28","date_gmt":"2016-06-27T17:00:28","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=1783"},"modified":"2017-06-19T08:47:03","modified_gmt":"2017-06-19T13:47:03","slug":"19-dubious-ways-to-compute-the-zeros-of-a-polynomial","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2016\/06\/27\/19-dubious-ways-to-compute-the-zeros-of-a-polynomial\/","title":{"rendered":"19 Dubious Ways to Compute the Zeros of a Polynomial"},"content":{"rendered":"<div class=\"content\"><!--introduction--><p>During the <a href=\"http:\/\/www.siam.org\/meetings\/an16\/\">SIAM Annual Meeting<\/a> this summer in Boston there will be a special <a href=\"http:\/\/meetings.siam.org\/sess\/dsp_programsess.cfm?SESSIONCODE=23492\">minisymposium Wednesday afternoon<\/a>, July 13, honoring Charlie Van Loan, who is retiring at Cornell. (I use \"at\" because he's not leaving Ithaca.)  I will give a talk titled <a href=\"http:\/\/meetings.siam.org\/sess\/dsp_talk.cfm?p=78577\">\"19 Dubious Way to Compute the Zeros of a Polynomial\"<\/a>, following in the footsteps of the paper about the matrix exponential that Charlie and I wrote in 1978 and <a title=\"https:\/\/www.cs.cornell.edu\/cv\/ResearchPDF\/19ways+.pdf (link no longer works)\">updated 25 years later<\/a>. I really don't have 19 ways to compute polynomial zeros, but then I only have a half hour for my talk.  Most of the methods have been described previously in this blog.  
Today's post is mostly about \"roots\".<\/p><!--\/introduction--><h3>Contents<\/h3><div><ul><li><a href=\"#e0971664-1228-4309-98f3-e52816fb2219\">Origin of MATLAB \"Roots\"<\/a><\/li><li><a href=\"#a9d06a5b-a0c8-4a98-9630-ade8cad96b2d\">Is \"roots\" dubious?<\/a><\/li><li><a href=\"#a6688c48-c66c-49d0-a4eb-f25da57a41c5\">Balancing<\/a><\/li><li><a href=\"#5b2b850d-9880-4332-9b51-527a17b90f5d\">Execution time<\/a><\/li><li><a href=\"#1b70d05e-ff64-499b-9615-32054ff1e966\">Memory requirements<\/a><\/li><li><a href=\"#bd7b5d43-9aa4-472c-ba61-e1423114b182\">AMVW<\/a><\/li><li><a href=\"#90e57284-c6b6-405b-92d4-07183d23bb4b\">Fiedler Companion Matrix<\/a><\/li><li><a href=\"#77456146-da5e-43ba-aa42-2a6c651f7596\">Please comment<\/a><\/li><li><a href=\"#e220c001-5c5a-4c70-af39-a17f52d63d89\">References<\/a><\/li><\/ul><\/div><h4>Origin of MATLAB \"Roots\"<a name=\"e0971664-1228-4309-98f3-e52816fb2219\"><\/a><\/h4><p>Almost 40 years ago, in the late 1970s, when I was developing the original Fortran-based MATLAB, I wanted to have a command to find the roots of a polynomial.  At the time MATLAB was just a primitive matrix calculator.  It was not yet a programming environment; I did not yet have M-files or toolboxes.<\/p><p>I had two objectives.  First, I needed to keep the amount of software to a minimum.  Mike Jenkins was a friend of mine who had recently submitted his Ph.D. thesis to Stanford, under Joe Traub's direction, with what is now known as the Jenkins-Traub algorithm for polynomial zero finding.  But that would require way too much memory for my MATLAB. Second, I wanted to turn the tables on the traditional role the characteristic polynomial plays in linear algebra.  First-year students are taught to compute the eigenvalues of a matrix by finding the zeros of its characteristic polynomial.  Wilkinson had shown that in the presence of roundoff error this is a bad idea.  
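<\/p><p>A quick way to see Wilkinson's point (this snippet is mine, not part of the original post):<\/p><pre class=\"language-matlab\">A = diag(1:20);                     % known eigenvalues: 1 through 20\r\np = poly(A);                        % its characteristic polynomial\r\nr = roots(p);                       % the zeros of that polynomial\r\nmax(abs(sort(real(r)) - (1:20)'))   % error is many orders of magnitude above eps\r\n<\/pre><p>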
I wanted to emphasize Wilkinson's point by using powerful matrix eigenvalue techniques on the companion matrix to compute polynomial roots.<\/p><p>I already had the EISPACK code for the Francis double shift QR iteration to find the eigenvalues of a Hessenberg matrix.  It took just a few more lines to find polynomial roots.<\/p><p>For example, MATLAB represents this polynomial $z^5 - 15 z^4 + 85 z^3 - 225 z^2 + 274 z - 120$ by a vector of coefficients.<\/p><pre class=\"codeinput\">   p = [1 -15 85 -225 274 -120]\r\n<\/pre><pre class=\"codeoutput\">p =\r\n     1   -15    85  -225   274  -120\r\n<\/pre><p>The <tt>roots<\/tt> function finds its roots.<\/p><pre class=\"codeinput\">   roots(p)\r\n<\/pre><pre class=\"codeoutput\">ans =\r\n    5.0000\r\n    4.0000\r\n    3.0000\r\n    2.0000\r\n    1.0000\r\n<\/pre><p>MATLAB does this by forming the companion matrix, which is upper Hessenberg, and computing its eigenvalues.<\/p><pre class=\"codeinput\">   A = [15 -85 225 -274 120\r\n            eye(4,5)       ]\r\n   eig(A)\r\n<\/pre><pre class=\"codeoutput\">A =\r\n    15   -85   225  -274   120\r\n     1     0     0     0     0\r\n     0     1     0     0     0\r\n     0     0     1     0     0\r\n     0     0     0     1     0\r\nans =\r\n    5.0000\r\n    4.0000\r\n    3.0000\r\n    2.0000\r\n    1.0000\r\n<\/pre><h4>Is \"roots\" dubious?<a name=\"a9d06a5b-a0c8-4a98-9630-ade8cad96b2d\"><\/a><\/h4><p>In the original documentation for \"roots\" I wrote<\/p><p>\r\n<p style=\"margin-left:3ex;\">\r\nIt uses order n^2 storage and order n^3 time. An algorithm\r\ndesigned specifically for polynomial roots might use order n\r\nstorage and n^2 time. 
And the roundoff errors introduced correspond\r\nto perturbations in the elements of the companion matrix,\r\nnot the polynomial coefficients.\r\n<\/p>\r\n<\/p><p>So, at the time I knew that \"roots\" had impeccable numerical properties from the matrix eigenvalue point of view, but I wasn't sure about the numerical properties from the polynomial point of view.  Were we computing the exact roots of some polynomial near the original one?  Maybe not. It depends crucially upon the scaling.<\/p><p>\"Roots\" is possibly dubious numerically, and certainly dubious from a computational cost point of view.<\/p><h4>Balancing<a name=\"a6688c48-c66c-49d0-a4eb-f25da57a41c5\"><\/a><\/h4><p>Almost twenty years later, in 1994 and 1995, a pair of papers, one by Nick Trefethen and K. C. Toh, and one by Alan Edelman and H. Murakami, finally gave \"roots\" a passing grade numerically, provided the companion matrix is first scaled by balancing.<\/p><p>Balancing was introduced by Parlett and Reinsch in the Algol procedures that spawned EISPACK.  Balancing is a diagonal similarity transformation, with scale factors that are powers of two, that attempts to bring a nonsymmetric matrix closer to symmetric by making its row and column norms equal.  It is available in MATLAB through the function <tt>balance<\/tt> and is used by default by <tt>eig<\/tt>.<\/p><p>To see the importance of balancing, consider the companion matrix of $W_{20}$, the Wilkinson polynomial of degree 20 whose roots are the integers from 1 to 20.  The coefficients of this polynomial are huge.  The constant term is 20!, which is 2432902008176640000. But that's not the largest.  The coefficient of $z^2$ is 13803759753640704000.  Five of the coefficients, those of $z^3$ through $z^7$, are so large that they cannot be exactly represented as double precision floating point numbers.  
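<\/p><p>A quick check of the sizes (my snippet, not from the original post): integers larger than <tt>2^53<\/tt>, about <tt>9.0e15<\/tt>, are not all exactly representable in double precision.<\/p><pre class=\"language-matlab\">c = poly(1:20);   % coefficients of W_20\r\nmax(abs(c))       % about 1.38e19, the coefficient of z^2, well beyond 2^53\r\n<\/pre><p>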
So<\/p><pre class=\"language-matlab\">roots(poly(1:20))\r\n<\/pre><p>starts with a companion matrix that has coefficients of magnitude <tt>10^19<\/tt> along its first row and ones on its subdiagonal.  It is seriously nonsymmetric.<\/p><p>The elements of the balanced matrix are much more reasonable.  They range from just 1\/4 to 256.  This is the matrix involved in the actual eigenvalue calculation.  All the scaling action is captured in the diagonal similarity, whose elements range from $2^{-35}$ to $2^{27}$. This matrix is only involved in scaling the eigenvectors.<\/p><p>So I was lucky on the numerical side.  Balancing turned out to provide \"roots\" satisfactory stability.<\/p><h4>Execution time<a name=\"5b2b850d-9880-4332-9b51-527a17b90f5d\"><\/a><\/h4><p>Let's measure the execution time and memory requirements for \"roots\" with polynomials of very large degree -- tens of thousands.  Here I just wanted to see how bad it is.  Short answer: it takes \"roots\" almost 40 minutes to find the zeros of a polynomial of order sixteen thousand on my Lenovo X250 laptop.<\/p><p>Here is the experiment.  
Polynomials of degree <tt>1000*n<\/tt> for <tt>n = 1:16<\/tt>.<\/p><pre class=\"language-matlab\">T = zeros(16,1);\r\n<span class=\"keyword\">for<\/span> n = 1:16\r\n   n\r\n   r = randn(1,1000*n);\r\n   tic\r\n   roots(r);\r\n   t = toc\r\n   T(n) = t;\r\n   <span class=\"syscmd\">!systeminfo | grep 'In Use'<\/span>\r\n<span class=\"keyword\">end<\/span>\r\n<\/pre><p>As expected, the observed execution times are fit nicely by a cubic in the polynomial degree.<\/p><pre class=\"codeinput\">   time_roots\r\n<\/pre><pre class=\"codeoutput\">c =\r\n         0\r\n    1.8134\r\n    0.4632\r\n<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/nineteen_blog_01.png\" alt=\"\"> <h4>Memory requirements<a name=\"1b70d05e-ff64-499b-9615-32054ff1e966\"><\/a><\/h4><p>On the other hand, the measured virtual memory requirements are all over the map.  A least squares fit by a quadratic in the polynomial degree $n$ shows very little dependence on the $n^2$ term.  It's fair to say that on today's machines memory requirements are not a significant restriction for this algorithm.<\/p><pre class=\"codeinput\">   memory_roots\r\n<\/pre><pre class=\"codeoutput\">c =\r\n   1.0e+03 *\r\n    3.0328\r\n    0.0177\r\n    0.0002\r\n<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/nineteen_blog_02.png\" alt=\"\"> <h4>AMVW<a name=\"bd7b5d43-9aa4-472c-ba61-e1423114b182\"><\/a><\/h4><p>AMVW are the initials of the last names of Jared Aurentz, Thomas Mach, Raf Vandebril, and David Watkins.  It is also the name of their Fortran code for what they claim is a competitor to \"roots\". 
Their recent paper in SIMAX, referenced below, summarizes all the work that has been done on computing the eigenvalues of companion matrices, exploiting their special structure.<\/p><p>They also present their own algorithm, in which the companion matrix is never formed explicitly but is represented as a product of Givens rotations. The QR algorithm is carried out on this representation. As a result, only $O(n)$ storage and $O(n^2)$ operations are required.<\/p><p>But their software is in Fortran.  I was a big Fortran fan for a long time.  I used to be considered one of the Fortran gurus at MathWorks. I now don't even have access to a Fortran compiler.  So I can't try their stuff myself.<\/p><h4>Fiedler Companion Matrix<a name=\"90e57284-c6b6-405b-92d4-07183d23bb4b\"><\/a><\/h4><p>So far I've been talking about the classic companion matrix, where the polynomial coefficients occupy the entire first or last row or column.  In 2003 Miroslav Fiedler introduced another version of a companion matrix: a nonsymmetric pentadiagonal matrix with the polynomial coefficients alternating with zeros on the superdiagonal and subdiagonal, and ones and zeros alternating on the next diagonals out.  I wrote about it in this blog <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2013\/12\/23\/fiedler-companion-matrix\/\">just before last Christmas<\/a>.<\/p><p>Here's an example, $W_8$.  
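<\/p><p>The function <tt>fiedler<\/tt> used in this example is a short helper, not a built-in.  One possible sketch of such a constructor, written to match the even-degree pattern of the matrix printed below (this code is mine, not from the original post; odd degrees need a slightly different last row):<\/p><pre class=\"language-matlab\">function F = fiedler(a)\r\n   % a = c(2:end), the coefficients after the leading 1\r\n   n = length(a);\r\n   F = zeros(n,n);\r\n   F(1,1) = -a(1);\r\n   F(1,2) = -a(2);\r\n   F(1,3) = 1;\r\n   F(2,1) = 1;\r\n   for r = 3:2:n-1        % odd rows carry pairs of coefficients\r\n      F(r,r-1) = -a(r);\r\n      F(r,r+1) = -a(r+1);\r\n      if r+2 &lt;= n\r\n         F(r,r+2) = 1;\r\n      end\r\n   end\r\n   for r = 4:2:n          % even rows carry the ones\r\n      F(r,r-2) = 1;\r\n   end\r\nend\r\n<\/pre><p>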
First, the coefficients.<\/p><pre class=\"codeinput\">   c = poly(1:8)'\r\n<\/pre><pre class=\"codeoutput\">c =\r\n           1\r\n         -36\r\n         546\r\n       -4536\r\n       22449\r\n      -67284\r\n      118124\r\n     -109584\r\n       40320\r\n<\/pre><p>The Fiedler companion matrix.<\/p><pre class=\"codeinput\">   F = fiedler(c(2:end))\r\n<\/pre><pre class=\"codeoutput\">F =\r\n  Columns 1 through 6\r\n          36        -546           1           0           0           0\r\n           1           0           0           0           0           0\r\n           0        4536           0      -22449           1           0\r\n           0           1           0           0           0           0\r\n           0           0           0       67284           0     -118124\r\n           0           0           0           1           0           0\r\n           0           0           0           0           0      109584\r\n           0           0           0           0           0           1\r\n  Columns 7 through 8\r\n           0           0\r\n           0           0\r\n           0           0\r\n           0           0\r\n           1           0\r\n           0           0\r\n           0      -40320\r\n           0           0\r\n<\/pre><p>Balance it.<\/p><pre class=\"codeinput\">   B = balance(F)\r\n<\/pre><pre class=\"codeoutput\">B =\r\n  Columns 1 through 7\r\n   36.0000  -17.0625   16.0000         0         0         0         0\r\n   32.0000         0         0         0         0         0         0\r\n         0    8.8594         0   -5.4807    8.0000         0         0\r\n         0    8.0000         0         0         0         0         0\r\n         0         0         0    2.0533         0   -0.9012    1.0000\r\n         0         0         0    4.0000         0         0         0\r\n         0         0         0         0         0    0.8361         0\r\n         0         0         0         0         0    0.5000       
  0\r\n  Column 8\r\n         0\r\n         0\r\n         0\r\n         0\r\n         0\r\n         0\r\n   -0.6152\r\n         0\r\n<\/pre><p>Check the eigenvalues.<\/p><pre class=\"codeinput\">   e = eig(B)\r\n<\/pre><pre class=\"codeoutput\">e =\r\n    8.0000\r\n    7.0000\r\n    6.0000\r\n    5.0000\r\n    4.0000\r\n    3.0000\r\n    2.0000\r\n    1.0000\r\n<\/pre><p>If we could find a way to compute the eigenvalues and eigenvectors of the Fiedler companion matrix while exploiting its structure, then we would have a gorgeous algorithm.  Beresford Parlett tells me that he's working on it.<\/p><h4>Please comment<a name=\"77456146-da5e-43ba-aa42-2a6c651f7596\"><\/a><\/h4><p>If you have software that I don't know about, please comment.<\/p><h4>References<a name=\"e220c001-5c5a-4c70-af39-a17f52d63d89\"><\/a><\/h4><p>Kim-Chuan Toh and Lloyd N. Trefethen, \"Pseudozeros of polynomials and pseudospectra of companion matrices,\" <i>Numer. Math.<\/i>, 68, 1994. <\/p><p>Alan Edelman and H. Murakami, \"Polynomial roots from companion matrices,\" <i>Mathematics of Computation<\/i>, 64, 1995. <a href=\"http:\/\/www.ams.org\/journals\/mcom\/1995-64-210\/S0025-5718-1995-1262279-2\/\">&lt;http:\/\/www.ams.org\/journals\/mcom\/1995-64-210\/S0025-5718-1995-1262279-2\/<\/a>&gt;<\/p><p>Jared L. Aurentz, Thomas Mach, Raf Vandebril, and David S. Watkins, \"Fast and backward stable computation of roots of polynomials,\" <i>SIAM J. Matrix Anal. Appl.<\/i>, 36, 2015. 
<a href=\"http:\/\/www.math.wsu.edu\/faculty\/watkins\/pdfiles\/AMVW15_SIMAX.pdf\">&lt;http:\/\/www.math.wsu.edu\/faculty\/watkins\/pdfiles\/AMVW15_SIMAX.pdf<\/a>&gt;<\/p><p>Miroslav Fiedler, A note on companion matrices, Linear Algebra and its Applications 372 (2003), 325-331.<\/p><script language=\"JavaScript\"> <!-- \r\n    function grabCode_5087b47e0b7648edb47f43a2210eabeb() {\r\n        \/\/ Remember the title so we can use it in the new page\r\n        title = document.title;\r\n\r\n        \/\/ Break up these strings so that their presence\r\n        \/\/ in the Javascript doesn't mess up the search for\r\n        \/\/ the MATLAB code.\r\n        t1='5087b47e0b7648edb47f43a2210eabeb ' + '##### ' + 'SOURCE BEGIN' + ' #####';\r\n        t2='##### ' + 'SOURCE END' + ' #####' + ' 5087b47e0b7648edb47f43a2210eabeb';\r\n    \r\n        b=document.getElementsByTagName('body')[0];\r\n        i1=b.innerHTML.indexOf(t1)+t1.length;\r\n        i2=b.innerHTML.indexOf(t2);\r\n \r\n        code_string = b.innerHTML.substring(i1, i2);\r\n        code_string = code_string.replace(\/REPLACE_WITH_DASH_DASH\/g,'--');\r\n\r\n        \/\/ Use \/x3C\/g instead of the less-than character to avoid errors \r\n        \/\/ in the XML parser.\r\n        \/\/ Use '\\x26#60;' instead of '<' so that the XML parser\r\n        \/\/ doesn't go ahead and substitute the less-than character. 
\r\n        code_string = code_string.replace(\/\\x3C\/g, '\\x26#60;');\r\n\r\n        copyright = 'Copyright 2016 The MathWorks, Inc.';\r\n\r\n        w = window.open();\r\n        d = w.document;\r\n        d.write('<pre>\\n');\r\n        d.write(code_string);\r\n\r\n        \/\/ Add copyright line at the bottom if specified.\r\n        if (copyright.length > 0) {\r\n            d.writeln('');\r\n            d.writeln('%%');\r\n            if (copyright.length > 0) {\r\n                d.writeln('% _' + copyright + '_');\r\n            }\r\n        }\r\n\r\n        d.write('<\/pre>\\n');\r\n\r\n        d.title = title + ' (MATLAB code)';\r\n        d.close();\r\n    }   \r\n     --> <\/script><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\"><br><a href=\"javascript:grabCode_5087b47e0b7648edb47f43a2210eabeb()\"><span style=\"font-size: x-small;        font-style: italic;\">Get \r\n      the MATLAB code <noscript>(requires JavaScript)<\/noscript><\/span><\/a><br><br>\r\n      Published with MATLAB&reg; R2016a<br><\/p><\/div><!--\r\n5087b47e0b7648edb47f43a2210eabeb ##### SOURCE BEGIN #####\r\n%% 19 Dubious Ways to Compute the Zeros of a Polynomial\r\n% During the <http:\/\/www.siam.org\/meetings\/an16\/ SIAM Annual Meeting>\r\n% this summer in Boston there will be a special \r\n% <http:\/\/meetings.siam.org\/sess\/dsp_programsess.cfm?SESSIONCODE=23492\r\n% minisymposium Wednesday afternoon>, July 13, honoring Charlie\r\n% Van Loan, who is retiring at Cornell. (I use \"at\" because he's not\r\n% leaving Ithaca.)  
I will give a talk titled \r\n% <http:\/\/meetings.siam.org\/sess\/dsp_talk.cfm?p=78577\r\n% \"19 Dubious Way to Compute the Zeros of a Polynomial\">,\r\n% following in the footsteps of the paper about the matrix exponential\r\n% that Charlie and I wrote in 1978 and\r\n% <https:\/\/www.cs.cornell.edu\/cv\/ResearchPDF\/19ways+.pdf\r\n% updated 25 years later>.\r\n% I really don't have 19 ways to compute polynomial zeros, but then I only\r\n% have a half hour for my talk.  Most of the methods have been described\r\n% previously in this blog.  Today's post is mostly about \"roots\".\r\n\r\n%% Origin of MATLAB \"Roots\"\r\n% Almost 40 years ago, in the late 1970s, when I was developing the\r\n% original Fortran-based MATLAB, I wanted to have a command to find\r\n% the roots of a polynomial.  At the time MATLAB was just a primitive\r\n% matrix calculator.  It was not yet a programming environment; I did\r\n% not yet have M-files or toolboxes.\r\n\r\n%%\r\n% I had two objectives.  I needed to keep the amount of software to\r\n% a minimum.  Mike Jenkins was a friend of mine who had recently\r\n% submitted his Ph. D. thesis, under Joe Traub's direction, to Stanford,\r\n% with what is now known as the Jenkins-Traub algorithm for polynomial\r\n% zero finding.  But that would require way too much memory for my MATLAB.\r\n% And I wanted to turn the tables on the traditional role the\r\n% characteristic polynomial plays in linear algebra.  First year students\r\n% are taught to compute eigenvalues of a matrix by finding the zeros of\r\n% the determinant of its companion matrix.  Wilkinson had shown that\r\n% in the presence of roundoff error this is a bad idea.  I wanted to\r\n% emphasize Wilkinson's point by using powerful matrix eigenvalue\r\n% techniques on the companion matrix to compute polynomial roots.\r\n\r\n%%\r\n% I already had the EISPACK code for the Francis double shift QR iteration\r\n% to find the eigenvalues of a Hessenberg matrix.  
It took just a few more\r\n% lines to find polynomial roots.\r\n%\r\n% For example, MATLAB represents this polynomial\r\n% $z^5 - 15 z^4 + 85 z^3 - 225 z^2 + 274 z - 120$\r\n% by a vector of coefficients.\r\n\r\n   p = [1 -15 85 -225 274 -120]\r\n   \r\n%%\r\n% The |roots| function finds its roots.\r\n\r\n   roots(p)\r\n\r\n%%\r\n% MATLAB does this by forming the companion matrix, which is upper\r\n% Hessenberg, and computing its eigenvalues.\r\n\r\n   A = [15 -85 225 -274 120\r\n            eye(4,5)       ]\r\n   eig(A)\r\n   \r\n%% Is \"roots\" dubious?\r\n% In the original documentation for \"roots\" I wrote\r\n%\r\n% <html>\r\n% <p style=\"margin-left:3ex;\">\r\n% It uses order n^2 storage and order n^3 time. An algorithm\r\n% designed specifically for polynomial roots might use order n\r\n% storage and n^2 time. And the roundoff errors introduced correspond\r\n% to perturbations in the elements of the companion matrix,\r\n% not the polynomial coefficients.\r\n% <\/p>\r\n% <\/html>\r\n\r\n%%\r\n% So, at the time I knew that \"roots\" had impeccable numerical properties\r\n% from the matrix eigenvalue point of view, but I wasn't sure about the\r\n% numerical properties from the polynomial of view.  Were we computing\r\n% the exact roots of some polynomial near the original one?  Maybe not.\r\n% It depends crucially upon the scaling.\r\n\r\n%%\r\n% \"Roots\" is possibly dubious numerically, and certainly dubious from a\r\n% computation cost point of view.\r\n\r\n%% Balancing\r\n% Almost twenty years later, in 1994 and 1995, a pair of papers, one by\r\n% Nick Trefethen and K. C. Toh, and one by Alan Edelman and H. Murakami,\r\n% finally gave \"roots\" a passing grade numerically, provided the companion\r\n% matrix is first scaled by balancing.\r\n\r\n%% \r\n% Balancing was introduced by Parlett and Reinsch in the Algol procedures\r\n% that spawned EISPACK.  
Balancing is a diagonal similarity transformation\r\n% by a matrix of powers of two to attempt to bring a nonsymmetric matrix\r\n% closer to symmetric by making its row and column norms equal.  It is\r\n% available in MATLAB through the function |balance| and is used by\r\n% default by |eig|.\r\n\r\n%%\r\n% To see the importance of balancing, consider the companion matrix\r\n% of $W_{20}$, the Wilkinson polynomial of degree 20 whose roots are\r\n% the integers from 1 to 20.  The coefficients of this polynomial are\r\n% huge.  The constant term is 20!, which is 2432902008176640000.\r\n% But that's not the largest.  The coefficient of $z^2$ is\r\n% 13803759753640704000.  Five of the coefficients, those of $z^3$ through\r\n% $z^7$ are so large that they cannot be exactly represented as double\r\n% precision floating pointing numbers.  So\r\n%\r\n%   roots(poly(1:20))\r\n%\r\n% starts with a companion matrix that has coefficients of magnitude |10^19|\r\n% along its first row and ones on its subdiagonal.  It is seriously\r\n% nonsymmetric.\r\n\r\n%%\r\n% The elements of the balanced matrix are much more reasonable.  They\r\n% range from just 1\/4 to 256.  This is the matrix involved in the actual \r\n% eigenvalue calculation.  All the scaling action is captured in the\r\n% diagonal similarity, whose elements range from $2^{-35}$ to $2^{27}$.\r\n% This matrix is only involved in scaling the eigenvectors.\r\n\r\n%%\r\n% So I was lucky on the numerical side.  Balancing turned out to\r\n% provide \"roots\" satisfactory stability.\r\n\r\n%% Execution time\r\n% Let's measure the execution time and memory requirements for \"roots\"\r\n% with polynomials of very large degree REPLACE_WITH_DASH_DASH tens of thousands.  Here I\r\n% just wanted to see how bad it is.  Short answer: it takes \"roots\"\r\n% almost 40 minutes to find the zeros of a polynomial of order\r\n% sixteen thousand on my Lenovo X250 laptop.\r\n\r\n%%\r\n% Here is the experiment.  
Polynomials of degree |1000*n| for |n = 1:16|.\r\n%\r\n%   T = zeros(16,1);\r\n%   for n = 1:16\r\n%      n\r\n%      r = randn(1,1000*n);\r\n%      tic\r\n%      roots(r);\r\n%      t = toc\r\n%      T(n) = t;\r\n%      !systeminfo | grep 'In Use'\r\n%   end\r\n\r\n\r\n%%\r\n% As expected, the observed exection times are fit nicely by\r\n% a cubic in the polynomial degree.\r\n\r\n   time_roots\r\n   \r\n%% Memory requirements\r\n% On the other hand, the measured virtual memory requirements are\r\n% all other the map.  A least squares fit by a quadratic in the polynomial\r\n% degree $n$ shows very little dependence on the $n^2$ term.  It's fair to\r\n% say that on today's machines memory requirements are not a significant\r\n% restriction for this algorithm.\r\n\r\n   memory_roots\r\n   \r\n%% AMVW\r\n% AMVW are the initials of the last names of Jared Aurentz, Thomas Mach,\r\n% Raf Vandebril, and David Watkins.  It is also the name of their Fortran\r\n% code for what they claim is a competitor to \"roots\".\r\n% Their recent paper in SIMAX, referenced below, summarizes all the work\r\n% that has been done on computing the eigenvalues of companion matrices,\r\n% exploiting their special structure.\r\n\r\n%%\r\n% They also present their own algorithm where the companion matrix is never\r\n% formed explicitly, but is represented as a product of Givens rotations.\r\n% The QR algorithm is carried out on this representation.  \r\n% As a result only $O(n)$ storage and $O(n^2)$ operations are required.\r\n\r\n%%\r\n% But their software is in Fortran.  I was a big Fortran fan for a long\r\n% time.  I used to be considered one of the Fortran gurus at MathWorks.\r\n% I now don't even have access to a Fortran compiler.  So I can't\r\n% try their stuff myself.\r\n\r\n%% Fiedler Companion Matrix\r\n% So far I've been talking about the classic companion matrix where\r\n% the polynomial coefficients occupy the entire first or last row or\r\n% column.  
In 2003 Miroslav Fiedler introduced another version of a\r\n% companion matrix, a pentadiagonal matrix with the polynomial coefficients\r\n% alternating with zeros on the super- and subdiagonal and ones and zeros\r\n% alternating on the next diagonals of the nonsymmetric pentadiagonal\r\n% matrix.  I wrote about it my blog\r\n% <https:\/\/blogs.mathworks.com\/cleve\/2013\/12\/23\/fiedler-companion-matrix\/\r\n% just before last Christmas>.\r\n\r\n%%\r\n% Here's an example, W_8.  First, the coefficients.\r\n\r\n   c = poly(1:8)'\r\n   \r\n%%\r\n% The Fiedler companion matrix.\r\n\r\n   F = fiedler(c(2:end))\r\n   \r\n%%\r\n% Balance it.\r\n\r\n   B = balance(F)\r\n\r\n%%\r\n% Check the eigenvalues\r\n\r\n   e = eig(B)\r\n   \r\n%%\r\n% If we could find a way to compute the eigenvalues and eigenvectors\r\n% of the Fiedler companion matrix while exploiting its structure, then\r\n% we would have a gorgeous algorithm.  Beresford Parlett tells me that\r\n% he's working on it.\r\n\r\n%% Please comment\r\n% If you have software that I don't know about, please comment.\r\n\r\n%% References\r\n% Kim-Chuan Toh and Lloyd N. Trefethen, \"Pseudozeros of polynomials and \r\n% pseudospectra of companion matrices,\" _Numer. Math._, 68, 1994.\r\n% <http:\/\/citeseerx.ist.psu.edu\/viewdoc\/download?doi=10.1.1.49.9072&rep=rep1&type=pdf\r\n% http:\/\/citeseerx.ist.psu.edu\/viewdoc\/download?doi=10.1.1.49.9072&rep=rep1&type=pdf>\r\n\r\n%%\r\n% Alan Edelman and H. Murakami, \"Polynomial roots from companion matrices,\"\r\n% _Mathematics of Computation_, 64, 1995.\r\n% <http:\/\/www.ams.org\/journals\/mcom\/1995-64-210\/S0025-5718-1995-1262279-2\/\r\n% http:\/\/www.ams.org\/journals\/mcom\/1995-64-210\/S0025-5718-1995-1262279-2\/>\r\n\r\n%%\r\n% Jared L. Aurentz, Thomas Mach, Raf Vandebril, and David S. Watkins,\r\n% \"Fast and backward stable computation of roots of polynomials,\"\r\n% _SIAM J. Matrix Anal. 
Appl._, 36, 2015.\r\n% <http:\/\/www.math.wsu.edu\/faculty\/watkins\/pdfiles\/AMVW15_SIMAX.pdf\r\n% http:\/\/www.math.wsu.edu\/faculty\/watkins\/pdfiles\/AMVW15_SIMAX.pdf>\r\n\r\n%%\r\n% Miroslav Fiedler, A note on companion matrices,\r\n% Linear Algebra and its Applications 372 (2003), 325-331,\r\n% <http:\/\/citeseerx.ist.psu.edu\/viewdoc\/download?rep=rep1&type=pdf&doi=10.1.1.210.6269\r\n% http:\/\/citeseerx.ist.psu.edu\/viewdoc\/download>\r\n\r\n\r\n\r\n\r\n##### SOURCE END ##### 5087b47e0b7648edb47f43a2210eabeb\r\n-->","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/nineteen_blog_01.png\" onError=\"this.style.display ='none';\" \/><\/div><!--introduction--><p>During the <a href=\"http:\/\/www.siam.org\/meetings\/an16\/\">SIAM Annual Meeting<\/a> this summer in Boston there will be a special <a href=\"http:\/\/meetings.siam.org\/sess\/dsp_programsess.cfm?SESSIONCODE=23492\">minisymposium Wednesday afternoon<\/a>, July 13, honoring Charlie Van Loan, who is retiring at Cornell. (I use \"at\" because he's not leaving Ithaca.)  I will give a talk titled <a href=\"http:\/\/meetings.siam.org\/sess\/dsp_talk.cfm?p=78577\">\"19 Dubious Way to Compute the Zeros of a Polynomial\"<\/a>, following in the footsteps of the paper about the matrix exponential that Charlie and I wrote in 1978 and <a title=\"https:\/\/www.cs.cornell.edu\/cv\/ResearchPDF\/19ways+.pdf (link no longer works)\">updated 25 years later<\/a>. I really don't have 19 ways to compute polynomial zeros, but then I only have a half hour for my talk.  Most of the methods have been described previously in this blog.  Today's post is mostly about \"roots\".... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/cleve\/2016\/06\/27\/19-dubious-ways-to-compute-the-zeros-of-a-polynomial\/\">read more >><\/a><\/p>","protected":false},"author":78,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[11,13,4,6,16,14],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/1783"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/comments?post=1783"}],"version-history":[{"count":3,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/1783\/revisions"}],"predecessor-version":[{"id":2557,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/1783\/revisions\/2557"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media?parent=1783"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/categories?post=1783"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/tags?post=1783"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}