{"id":1303,"date":"2016-01-04T12:00:43","date_gmt":"2016-01-04T17:00:43","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=1303"},"modified":"2016-04-11T08:44:35","modified_gmt":"2016-04-11T13:44:35","slug":"testing-zero-finders","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2016\/01\/04\/testing-zero-finders\/","title":{"rendered":"Testing Zero Finders"},"content":{"rendered":"<div class=\"content\">\r\n\r\n<!--introduction-->Use the historic cubic polynomial $x^3 - 2x - 5$ to test a few zero-finding algorithms.\r\n\r\n<!--\/introduction-->\r\n<h3>Contents<\/h3>\r\n<div>\r\n<ul>\r\n\t<li><a href=\"#07636ff6-6cc5-4796-886a-8b1261de4b46\">The Footnote<\/a><\/li>\r\n\t<li><a href=\"#3e4aef46-7453-44ea-838d-76ee4e780934\">Newton's method<\/a><\/li>\r\n\t<li><a href=\"#afa9423c-2ed5-466f-8eb6-46dbd120af78\">fzero<\/a><\/li>\r\n\t<li><a href=\"#e070bbec-6025-483d-be18-5709beecf91a\">Fixed point iteration<\/a><\/li>\r\n\t<li><a href=\"#2a5091a4-09f6-4821-a0c2-35929c9389b3\">Root squaring<\/a><\/li>\r\n\t<li><a href=\"#8d205b79-eb44-4629-aed6-a196eb1ba0c7\">Reduced Penultimate Remainder<\/a><\/li>\r\n\t<li><a href=\"#8eed4963-6fe8-4357-9327-ea639cf15cb0\">References<\/a><\/li>\r\n<\/ul>\r\n<\/div>\r\n<h4>The Footnote<a name=\"07636ff6-6cc5-4796-886a-8b1261de4b46\"><\/a><\/h4>\r\nMy <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2015\/12\/21\/a-historic-cubic\/\">previous post<\/a> was about a footnote in a classic numerical analysis text by Whittaker and Robinson that was a quip by Augustus de Morgan.\r\n\r\n\"The reason I call $x^3-2x-5=0$ a celebrated equation is because it was the one on which Wallis chanced to exhibit Newton's method when he first published it, in consequence of which every numerical solver has felt bound in duty to make it one of his examples. Invent a numerical method, neglect to show how it works on this equation, and you are a pilgrim who does not come in at the little wicket (<i>vide<\/i> J. 
Bunyan).\"\r\n\r\nSo, let's try a few numerical solvers on this equation.\r\n<h4>Newton's method<a name=\"3e4aef46-7453-44ea-838d-76ee4e780934\"><\/a><\/h4>\r\nI must first try Newton's method.\r\n\r\n$$ x_{n+1} = x_n - \\frac{f(x_n)} {f'(x_n)} $$\r\n\r\nI know the derivative and I have a good starting guess.\r\n<pre class=\"codeinput\">   f = @(x) x.^3-2*x-5;\r\n   fprime = @(x) 3*x.^2-2;\r\n   x = 2;\r\n   disp(x)\r\n   z = 0;\r\n   <span class=\"keyword\">while<\/span> abs(x-z) &gt; eps(x)\r\n      z = x;\r\n      x = x - f(x)\/fprime(x);\r\n      disp(x)\r\n   <span class=\"keyword\">end<\/span>\r\n<\/pre>\r\n<pre class=\"codeoutput\">     2\r\n   2.100000000000000\r\n   2.094568121104185\r\n   2.094551481698199\r\n   2.094551481542327\r\n   2.094551481542327\r\n<\/pre>\r\nThis reaches the solution very quickly. Only five steps, with the fifth confirming that the fourth was already correct to full precision. But Newton's method requires knowledge of the derivative and a really good starting point. These are big drawbacks for general use.\r\n<h4>fzero<a name=\"afa9423c-2ed5-466f-8eb6-46dbd120af78\"><\/a><\/h4>\r\nLet's see how the zero finder available in MATLAB works. 
I wrote a series of postings about it not too long ago; <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2015\/10\/12\/zeroin-part-1-dekkers-algorithm\/\">part1<\/a>, <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2015\/10\/26\/zeroin-part-2-brents-version\/\">part2<\/a>, <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2015\/11\/09\/zeroin-part-3-matlab-zero-finder-fzero\/\">part3<\/a>.\r\n\r\nSet a display parameter asking for intermediate results.\r\n<pre class=\"codeinput\"><span class=\"comment\">%  opt = optimset('display','iter');<\/span>\r\n<\/pre>\r\nPretend we don't know anything about this function and start the search at $x = 0$.\r\n<pre class=\"codeinput\"><span class=\"comment\">%  fzero(f,0,opt)<\/span>\r\n<\/pre>\r\nWe get one line of output each time <tt>a<\/tt> is decreased and <tt>b<\/tt> is increased. All the values of <tt>f(a)<\/tt> are negative and all the values of <tt>f(b)<\/tt> are also negative, until <tt>b = 2.56<\/tt> when the first sign change is reached.\r\n<pre class=\"codeinput\"><span class=\"comment\">%  Search for an interval around 0 containing a sign change:<\/span>\r\n<span class=\"comment\">%   Func-count    a          f(a)             b          f(b)        Procedure<\/span>\r\n<span class=\"comment\">%    1               0            -5             0            -5   initial interval<\/span>\r\n<span class=\"comment\">%    3      -0.0282843      -4.94345     0.0282843      -5.05655   search<\/span>\r\n<span class=\"comment\">%    5           -0.04      -4.92006          0.04      -5.07994   search<\/span>\r\n<span class=\"comment\">%    7      -0.0565685      -4.88704     0.0565685      -5.11296   search<\/span>\r\n<span class=\"comment\">%    9           -0.08      -4.84051          0.08      -5.15949   search<\/span>\r\n<span class=\"comment\">%   11       -0.113137      -4.77517      0.113137      -5.22483   search<\/span>\r\n<span class=\"comment\">%   13           -0.16       -4.6841          0.16       -5.3159   
search<\/span>\r\n<span class=\"comment\">%   15       -0.226274      -4.55904      0.226274      -5.44096   search<\/span>\r\n<span class=\"comment\">%   17           -0.32      -4.39277          0.32      -5.60723   search<\/span>\r\n<span class=\"comment\">%   19       -0.452548      -4.18759      0.452548      -5.81241   search<\/span>\r\n<span class=\"comment\">%   21           -0.64      -3.98214          0.64      -6.01786   search<\/span>\r\n<span class=\"comment\">%   23       -0.905097      -3.93126      0.905097      -6.06874   search<\/span>\r\n<span class=\"comment\">%   25           -1.28      -4.53715          1.28      -5.46285   search<\/span>\r\n<span class=\"comment\">%   27        -1.81019      -7.31125       1.81019      -2.68875   search<\/span>\r\n<span class=\"comment\">%   29           -2.56      -16.6572          2.56       6.65722   search<\/span>\r\n<\/pre>\r\nNow the classic <i>zeroin<\/i> algorithm can quickly find the zero. To see the details, run <tt>fzerogui<\/tt> from <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/37976-numerical-computing-with-matlab\">Numerical Computing with MATLAB<\/a> and click on the <tt>auto<\/tt> button. 
Both the secant method and inverse quadratic interpolation are used to find the zero.\r\n<pre class=\"codeinput\"><span class=\"comment\">%   start      -2.5600000000000001<\/span>\r\n<span class=\"comment\">%   start       2.5600000000000001<\/span>\r\n<span class=\"comment\">%   secant      1.0980323260716793<\/span>\r\n<span class=\"comment\">%   secant      1.7832168816106038<\/span>\r\n<span class=\"comment\">%   iqi         2.2478393639958036<\/span>\r\n<span class=\"comment\">%   secant      2.0660057758331045<\/span>\r\n<span class=\"comment\">%   secant      2.0922079131171945<\/span>\r\n<span class=\"comment\">%   iqi         2.0945566700001779<\/span>\r\n<span class=\"comment\">%   secant      2.0945514746903111<\/span>\r\n<span class=\"comment\">%   secant      2.0945514815423065<\/span>\r\n<span class=\"comment\">%   iqi         2.0945514815423265<\/span>\r\n<span class=\"comment\">%   small       2.0945514815423274<\/span>\r\n<\/pre>\r\nSo, even when it is started at a lousy initial guess, <tt>fzero<\/tt> passes de Morgan's test.\r\n<h4>Fixed point iteration<a name=\"e070bbec-6025-483d-be18-5709beecf91a\"><\/a><\/h4>\r\nI like to call this the \"WS\" algorithm for \"World's Simplest\". Try to find a fixed point of a mapping $G(x)$.\r\n\r\n$$ x = G(x) $$\r\n\r\nThe iteration is\r\n\r\n$$ x_{n+1} = G(x_n) $$\r\n\r\nWhat should I choose for $G$ ? The most obvious choice is the equation\r\n\r\n$$ x = \\frac{1}{2} (x^3-5) $$\r\n\r\nBut near the zero the slope of $G(x)$ is too large and the WS iteration diverges.\r\n\r\nAnother choice is\r\n\r\n$$ x = \\sqrt[3]{2x+5} $$\r\n\r\nThis will work. 
While we're at it, also produce a plot.\r\n<pre class=\"codeinput\">   G = @(x) (2*x+5).^(1\/3)\r\n   ezplot(G,[1 3])\r\n   line([1 3],[1 3],<span class=\"string\">'color'<\/span>,<span class=\"string\">'k'<\/span>)\r\n   dkgreen = [0 2\/3 0];\r\n   x = 1.4;\r\n   z = 0;\r\n   disp(x)\r\n   <span class=\"keyword\">while<\/span> abs(x-z) &gt; eps(x)\r\n      z = x;\r\n      x = G(x);\r\n      line([z z],[z x],<span class=\"string\">'color'<\/span>,dkgreen)\r\n      line([z x],[x x],<span class=\"string\">'color'<\/span>,dkgreen)\r\n      disp(x)\r\n   <span class=\"keyword\">end<\/span>\r\n<\/pre>\r\n<pre class=\"codeoutput\">G = \r\n    @(x)(2*x+5).^(1\/3)\r\n   1.400000000000000\r\n   1.983192482680775\r\n   2.077490885128178\r\n   2.091955753470501\r\n   2.094156962781544\r\n   2.094491529117444\r\n   2.094542371187228\r\n   2.094550097140211\r\n   2.094551271169830\r\n   2.094551449574315\r\n   2.094551476684497\r\n   2.094551480804135\r\n   2.094551481430152\r\n   2.094551481525281\r\n   2.094551481539736\r\n   2.094551481541933\r\n   2.094551481542267\r\n   2.094551481542317\r\n   2.094551481542325\r\n   2.094551481542326\r\n   2.094551481542327\r\n<\/pre>\r\n<img decoding=\"async\" src=\"https:\/\/blogs.mathworks.com\/images\/cleve\/cubic_blog2_01.png\" alt=\"\" hspace=\"5\" vspace=\"5\" \/>\r\n\r\nThis takes a long time. It's just linear convergence. But we squeeze through de Morgan's wicket anyway.\r\n<h4>Root squaring<a name=\"2a5091a4-09f6-4821-a0c2-35929c9389b3\"><\/a><\/h4>\r\nThe \"Graeffe\" root-squaring method was invented independently by Germinal Pierre Dandelin in 1826, Nikolai Lobachevsky in 1834, and Karl Heinrich Graeffe in 1837. An article by Alston Householder referenced below goes into detail about who invented what.\r\n\r\nThe idea is to transform the coefficients of one polynomial into another whose zeros are the squares of the original. 
If the zeros are well separated in magnitude, repeating the process will eventually produce a high degree polynomial whose first two terms provide a good approximation to the dominant zero. The method was popular for hand computation, but there are serious difficulties when it comes to making a reliable automatic implementation. Repeated zeros, complex zeros of equal magnitude, and especially scaling to prevent floating point overflow and underflow are all obstacles. But it works OK on our historic cubic, which has a simple dominant zero.\r\n\r\nSuppose $p(x)$ is a cubic polynomial.\r\n\r\n$$ p(x) = p_1 x^3 + p_2 x^2 + p_3 x + p_4 $$\r\n\r\nRearrange the equation $p(x) = 0$ so that the terms with odd powers of $x$ are on one side of the equals sign and the even powers are on the other.\r\n\r\n$$ p_1 x^3 + p_3 x = - (p_2 x^2 + p_4) $$\r\n\r\nSquare both sides of this equation and collect terms to form a new polynomial, $q(x)$.\r\n\r\n$$ q(x) = (p_1 x^3 + p_3 x)^2 - (p_2 x^2 + p_4)^2 $$\r\n\r\n$$ = p_1^2 x^6 - (p_2^2 - 2 p_1 p_3) x^4 + (p_3^2 - 2 p_2 p_4) x^2 - p_4^2 $$\r\n\r\nThe zeros of $q$ are the squares of the zeros of $p$. Here is a function that does the job for cubics.\r\n<pre class=\"language-matlab\"><span class=\"keyword\">function<\/span> q = graffe(p)\r\n   <span class=\"comment\">% Graeffe root squaring.  q = graffe(p).<\/span>\r\n   <span class=\"comment\">% p &amp; q are 4-vectors with coefficients of cubics.<\/span>\r\n   <span class=\"comment\">% roots(q) = roots(p).^2.<\/span>\r\n   q = zeros(size(p));\r\n   q(1) = p(1)^2;\r\n   q(2) = -(p(2)^2 - 2*p(1)*p(3));\r\n   q(3) = p(3)^2 - 2*p(2)*p(4);\r\n   q(4) = -p(4)^2;\r\n<span class=\"keyword\">end<\/span>\r\n<\/pre>\r\nLet's try it on our historic polynomial $x^3-2x-5$. The floating point exponents of the coefficients soon start doubling at each step. Luckily, we reach my stopping criterion in eight steps. If we had to try more steps without rescaling, we would overflow. 
But by this time, the 256-th power of the zero we're after completely dominates the powers of the other roots, and so the 256-th root of the ratio of the first two coefficients provides a result that is accurate to full precision.\r\n<pre class=\"codeinput\">   p = [1 0 -2 -5];\r\n   fmt = <span class=\"string\">'%5.0f %5.0g %13.5g %13.5g %13.5g %20.15f\\n'<\/span>;\r\n   fprintf(fmt,1,p,0)\r\n   m = 1;\r\n   <span class=\"keyword\">while<\/span> max(abs(p)) &lt; sqrt(realmax)\r\n      m = 2*m;\r\n      p = graffe(p);\r\n      z = (-p(2)\/p(1)).^(1\/m);\r\n      fprintf(fmt,m,p,z);\r\n   <span class=\"keyword\">end<\/span>\r\n<\/pre>\r\n<pre class=\"codeoutput\">    1     1             0            -2            -5    0.000000000000000\r\n    2     1            -4             4           -25    2.000000000000000\r\n    4     1            -8          -184          -625    1.681792830507429\r\n    8     1          -432         23856   -3.9063e+05    2.135184796196703\r\n   16     1   -1.3891e+05    2.3161e+08   -1.5259e+11    2.096144583759898\r\n   32     1   -1.8833e+10     1.125e+16   -2.3283e+22    2.094553557477412\r\n   64     1   -3.5467e+20   -7.5043e+32    -5.421e+44    2.094551481347086\r\n  128     1   -1.2579e+41    1.7861e+65   -2.9387e+89    2.094551481542327\r\n  256     1   -1.5824e+82  -4.2032e+130  -8.6362e+178    2.094551481542327\r\n<\/pre>\r\nThis basic implementation of root-squaring makes it through de Morgan's wicket.\r\n<h4>Reduced Penultimate Remainder<a name=\"8d205b79-eb44-4629-aed6-a196eb1ba0c7\"><\/a><\/h4>\r\nAlmost three years ago, I blogged about the <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2013\/01\/21\/reduced-penultimate-remainder\">Reduced Penultimate Remainder algorithm<\/a>. It was an undergrad individual research project that I now fondly remember as the most obscure, impractical algorithm I have ever seen. 
The algorithm tries to find a lower degree polynomial that divides exactly, without any remainder, into the polynomial whose zeros we seek. The zeros of the resulting divisor are then some of the zeros of the dividend.\r\n\r\nAs an example in that post, I tried to find a linear factor of $x^3-2x-5$. That would have given me the zero that I've been computing in this post. But the algorithm fails to converge, no matter what starting factor is provided.\r\n\r\nHow about computing a quadratic factor instead? If successful, that will yield two zeros. Here is the function that takes one step.\r\n<pre class=\"language-matlab\"><span class=\"keyword\">function<\/span> q = rpr(p,q)\r\n<span class=\"comment\">% RPR Reduced Penultimate Remainder.<\/span>\r\n<span class=\"comment\">% q = rpr(p,q), for polynomials p and q.<\/span>\r\n<span class=\"comment\">% Assume p(1) = 1, q(1) = 1.<\/span>\r\n   <span class=\"comment\">% Polynomial long division.<\/span>\r\n   <span class=\"comment\">% Stop when degree of r = degree of q.<\/span>\r\n   n = length(p);\r\n   m = length(q);\r\n   r = p(1:m);\r\n   <span class=\"keyword\">for<\/span> k = m+1:n\r\n      r = r - r(1)*q;\r\n      r = [r(2:end) p(k)];\r\n   <span class=\"keyword\">end<\/span>\r\n   q = r\/r(1);\r\n<span class=\"keyword\">end<\/span>\r\n<\/pre>\r\nThe starting factor $q(x)$ can be any quadratic with complex (non-real) zeros. 
Let's try $q(x) = x^2+x+1$.\r\n<pre class=\"codeinput\"><span class=\"comment\">%    p = [1 0 -2 -5];<\/span>\r\n<span class=\"comment\">%    q = [1 1 1]<\/span>\r\n<span class=\"comment\">%    k = 1;<\/span>\r\n<span class=\"comment\">%    r = 0*q;<\/span>\r\n<span class=\"comment\">%    while max(abs(q-r)) &gt; 10*eps(max(abs(q)))<\/span>\r\n<span class=\"comment\">%       r = q;<\/span>\r\n<span class=\"comment\">%       q = rpr(p,q)<\/span>\r\n<span class=\"comment\">%       k = k+1;<\/span>\r\n<span class=\"comment\">%    end<\/span>\r\n<span class=\"comment\">%    k<\/span>\r\n<\/pre>\r\nHere are the first ten and the last four of 114 steps. I've edited out 100 steps in between.\r\n<pre>  1.000000000000000   1.000000000000000   1.000000000000000\r\n  1.000000000000000   3.000000000000000   5.000000000000000\r\n  1.000000000000000   2.333333333333334   1.666666666666667\r\n  1.000000000000000   1.571428571428571   2.142857142857143\r\n  1.000000000000000   2.636363636363636   3.181818181818182\r\n  1.000000000000000   1.965517241379310   1.896551724137931\r\n  1.000000000000000   1.982456140350877   2.543859649122807\r\n  1.000000000000000   2.292035398230088   2.522123893805309\r\n  1.000000000000000   1.972972972972974   2.181467181467182\r\n  1.000000000000000   2.119373776908023   2.534246575342465\r\n           .                   .                   .\r\n           .                   .                   .\r\n           .                   .                   .\r\n  1.000000000000000   2.094551481542332   2.387145908831160\r\n  1.000000000000000   2.094551481542323   2.387145908831149\r\n  1.000000000000000   2.094551481542327   2.387145908831159\r\n  1.000000000000000   2.094551481542328   2.387145908831155\r\nk =\r\n  114<\/pre>\r\nI can now use deconvolution to do the polynomial division $p(x)\/q(x)$. 
This produces a linear quotient and the desired zero.\r\n<pre class=\"codeinput\"><span class=\"comment\">%   r = deconv(p,q)<\/span>\r\n<span class=\"comment\">%   x1 = -r(2)<\/span>\r\n<\/pre>\r\n<pre class=\"codeoutput\">r =\r\n   1.000000000000000  -2.094551481542328\r\n\r\nx1 =\r\n   2.094551481542328\r\n<\/pre>\r\n<h4>References<a name=\"8eed4963-6fe8-4357-9327-ea639cf15cb0\"><\/a><\/h4>\r\nAlston S. Householder, \"Dandelin, Lobacevskii, or Graeffe\", <i>The American Mathematical Monthly<\/i>, vol. 66, 1959, pp. 464-466. <a href=\"http:\/\/www.jstor.org\/stable\/2310626\">http:\/\/www.jstor.org\/stable\/2310626<\/a>\r\n\r\nPublished with MATLAB\u00ae R2015a\r\n\r\n<\/div>","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/images\/cleve\/cubic_blog2_01.png\" onError=\"this.style.display ='none';\" \/><\/div><!--introduction-->Use the historic cubic polynomial $x^3 - 2x - 5$ to test a few zero-finding algorithms.\r\n\r\n<!--\/introduction-->... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/cleve\/2016\/01\/04\/testing-zero-finders\/\">read more >><\/a><\/p>","protected":false},"author":78,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[11,4,16],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/1303"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/comments?post=1303"}],"version-history":[{"count":5,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/1303\/revisions"}],"predecessor-version":[{"id":1405,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/1303\/revisions\/1405"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media?parent=1303"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/categories?post=1303"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/tags?post=1303"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}