Testing Zero Finders

Use the historic cubic polynomial $x^3 - 2x - 5$ to test a few zero-finding algorithms.
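(An added aside for reference: MATLAB's roots function, which computes the eigenvalues of the companion matrix, gives the answer that all of the methods below will be chasing.)

   roots([1 0 -2 -5])   % 2.0946 and a complex pair, -1.0473 +- 1.1359i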

The Footnote

My previous post was about a footnote in a classic numerical analysis text by Whittaker and Robinson. The footnote is a quip by Augustus de Morgan: "The reason I call $x^3-2x-5=0$ a celebrated equation is because it was the one on which Wallis chanced to exhibit Newton's method when he first published it, in consequence of which every numerical solver has felt bound in duty to make it one of his examples. Invent a numerical method, neglect to show how it works on this equation, and you are a pilgrim who does not come in at the little wicket (vide J. Bunyan)." So, let's try a few numerical solvers on this equation.

Newton's method

I must first try Newton's method. $$ x_{n+1} = x_n - \frac{f(x_n)} {f'(x_n)} $$ I know the derivative and I have a good starting guess.
   f = @(x) x.^3-2*x-5;
   fprime = @(x) 3*x.^2-2;
   x = 2;
   disp(x)
   z = 0;
   while abs(x-z) > eps(x)
      z = x;
      x = x - f(x)/fprime(x);
      disp(x)
   end
     2
   2.100000000000000
   2.094568121104185
   2.094551481698199
   2.094551481542327
   2.094551481542327
This reaches the solution very quickly. Only five steps, with the fifth confirming that the fourth was already correct to full precision. But Newton's method requires knowledge of the derivative and a really good starting point. These are big drawbacks for general use.
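These drawbacks are easy to demonstrate. Here is a small aside of my own, not one of the post's original examples: start near $x = \sqrt{2/3}$, where $f'(x) = 0$, and the first step is thrown far from the zero.

   x = 0.8;                    % fprime(0.8) = -0.08, nearly zero
   x = x - f(x)/fprime(x)      % the first step jumps to about -75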

fzero

Let's see how fzero, the zero finder available in MATLAB, works. I wrote a series of postings about it not too long ago (part 1, part 2, part 3). Set a display parameter asking for intermediate results.
   opt = optimset('display','iter');
Pretend we don't know anything about this function and start the search at $x = 0$.
   fzero(f,0,opt)
We get one line of output each time a is decreased and b is increased. All the values of f(a) are negative, and all the values of f(b) are also negative, until b reaches 2.56 and f(b) is positive, providing the first sign change.
  Search for an interval around 0 containing a sign change:
   Func-count    a          f(a)             b          f(b)        Procedure
    1               0            -5             0            -5   initial interval
    3      -0.0282843      -4.94345     0.0282843      -5.05655   search
    5           -0.04      -4.92006          0.04      -5.07994   search
    7      -0.0565685      -4.88704     0.0565685      -5.11296   search
    9           -0.08      -4.84051          0.08      -5.15949   search
   11       -0.113137      -4.77517      0.113137      -5.22483   search
   13           -0.16       -4.6841          0.16       -5.3159   search
   15       -0.226274      -4.55904      0.226274      -5.44096   search
   17           -0.32      -4.39277          0.32      -5.60723   search
   19       -0.452548      -4.18759      0.452548      -5.81241   search
   21           -0.64      -3.98214          0.64      -6.01786   search
   23       -0.905097      -3.93126      0.905097      -6.06874   search
   25           -1.28      -4.53715          1.28      -5.46285   search
   27        -1.81019      -7.31125       1.81019      -2.68875   search
   29           -2.56      -16.6572          2.56       6.65722   search
Now the classic zeroin algorithm can rapidly find the zero. To see the details, run fzerogui from Numerical Computing with MATLAB and click on the auto button. Both the secant method and inverse quadratic interpolation are used to find the zero.
   start      -2.5600000000000001
   start       2.5600000000000001
   secant      1.0980323260716793
   secant      1.7832168816106038
   iqi         2.2478393639958036
   secant      2.0660057758331045
   secant      2.0922079131171945
   iqi         2.0945566700001779
   secant      2.0945514746903111
   secant      2.0945514815423065
   iqi         2.0945514815423265
   small       2.0945514815423274
So, even when it is started with a lousy initial guess, fzero passes de Morgan's test.
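If a bracketing interval is already known, fzero can skip the search phase altogether; a two-element starting interval is a documented calling form. Since $f(2) < 0 < f(3)$:

   fzero(f,[2 3])   % converges directly to the zero near 2.0946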

Fixed point iteration

I like to call this the "WS" algorithm, for "World's Simplest". Try to find a fixed point of a mapping $G(x)$. $$ x = G(x) $$ The iteration is $$ x_{n+1} = G(x_n) $$ It converges when $|G'(x)| < 1$ near the fixed point. What should I choose for $G$? The most obvious choice is the equation $$ x = \frac{1}{2} (x^3-5) $$ But near the zero the slope of this $G(x)$ is about 6.6, so the WS iteration diverges. Another choice is $$ x = \sqrt[3]{2x+5} $$ This will work. While we're at it, also produce a plot.
   G = @(x) (2*x+5).^(1/3)
   ezplot(G,[1 3])
   line([1 3],[1 3],'color','k')
   dkgreen = [0 2/3 0];
   x = 1.4;
   z = 0;
   disp(x)
   while abs(x-z) > eps(x)
      z = x;
      x = G(x);
      line([z z],[z x],'color',dkgreen)
      line([z x],[x x],'color',dkgreen)
      disp(x)
   end
G = 
    @(x)(2*x+5).^(1/3)
   1.400000000000000
   1.983192482680775
   2.077490885128178
   2.091955753470501
   2.094156962781544
   2.094491529117444
   2.094542371187228
   2.094550097140211
   2.094551271169830
   2.094551449574315
   2.094551476684497
   2.094551480804135
   2.094551481430152
   2.094551481525281
   2.094551481539736
   2.094551481541933
   2.094551481542267
   2.094551481542317
   2.094551481542325
   2.094551481542326
   2.094551481542327
This takes a long time. It's just linear convergence. But we squeeze through de Morgan's wicket anyway.
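How slowly? The error is multiplied by roughly $|G'(x)|$ at each step. A quick added check (my aside, not part of the original post):

   Gprime = @(x) (2/3)*(2*x+5).^(-2/3);
   Gprime(2.094551481542327)   % about 0.152, so each step gains
                               % roughly -log10(0.152), or 0.8, decimal
                               % digits -- consistent with the twenty
                               % steps above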

Root squaring

The "Graffe" root-squaring method was invented independently by Germinal Pierre Dandelin in 1826, Nikolai Lobachevsky in 1834, and Karl Heinrich Graffe in 1837. An article by Alston Householder referenced below goes into detail about who invented what. The idea is to transform the coefficients of one polynomial into another whose zeros are the squares of the original. If the zeros are well separated in magnitude, repeating the process will eventually produce a high degree polynomial whose first two terms provide a good approximation to the dominant zero. The method was popular for hand computation, but there are serious difficulties when it comes to making a reliable automatic implementation. Repeated zeros, complex zeros of equal magnitude, and especially scaling to prevent floating point overflow and underflow are all obstacles. But it works OK on our historic cubic, which has a simple dominant zero. Suppose $p(x)$ is a cubic polynomial. $$ p(x) = p_1 x^3 + p_2 x^2 + p_3 x + p_4 $$ Rearrange the equation $p(x) = 0$ so that the terms with odd powers of $x$ are on one side of the equals sign and the even powers are on the other. $$ p_1 x^3 + p_3 x = - (p_2 x^2 + p_4) $$ Square both sides of this equation and collect terms to form a new polynomial, $q(x)$. $$ q(x) = (p_1 x^3 + p_3 x)^2 - (p_2 x^2 + p_4)^2 $$ $$ = p_1^2 x^6 - (p_2^2 - 2 p_1 p_3) x^4 + (p_3^2 - 2 p_2 p_4) x^2 - p_4^2 $$ The zeros of $q$ are the squares of the zeros of $p$. Here is a function that does the job for cubics.
function q = graffe(p)
   % Graeffe root squaring.  q = graffe(p).
   % p & q are 4-vectors with coefficients of cubics.
   % roots(q) = roots(p).^2.
   q = zeros(size(p));
   q(1) = p(1)^2;
   q(2) = -(p(2)^2 - 2*p(1)*p(3));
   q(3) = p(3)^2 - 2*p(2)*p(4);
   q(4) = -p(4)^2;
end
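Before running the main event, a quick sanity check that I've added (it is not in the original post): the zeros of q should be the squares of the zeros of p.

   p = [1 0 -2 -5];
   [sort(roots(graffe(p))) sort(roots(p).^2)]   % the two columns agree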
Let's try it on our historic polynomial $x^3-2x-5$. The floating point exponents of the coefficients soon start doubling at each step. Luckily, we reach my stopping criterion in eight steps. If we had to try more steps without rescaling, we would overflow. But by this time, the 256-th power of the zero we're after completely dominates the powers of the other roots, and so the 256-th root of the ratio of the first two coefficients provides a result that is accurate to full precision.
   p = [1 0 -2 -5];
   fmt = '%5.0f %5.0g %13.5g %13.5g %13.5g %20.15f\n';
   fprintf(fmt,1,p,0)
   m = 1;
   while max(abs(p)) < sqrt(realmax)
      m = 2*m;
      p = graffe(p);
      z = (-p(2)/p(1)).^(1/m);
      fprintf(fmt,m,p,z);
   end
    1     1             0            -2            -5    0.000000000000000
    2     1            -4             4           -25    2.000000000000000
    4     1            -8          -184          -625    1.681792830507429
    8     1          -432         23856   -3.9063e+05    2.135184796196703
   16     1   -1.3891e+05    2.3161e+08   -1.5259e+11    2.096144583759898
   32     1   -1.8833e+10     1.125e+16   -2.3283e+22    2.094553557477412
   64     1   -3.5467e+20   -7.5043e+32    -5.421e+44    2.094551481347086
  128     1   -1.2579e+41    1.7861e+65   -2.9387e+89    2.094551481542327
  256     1   -1.5824e+82  -4.2032e+130  -8.6362e+178    2.094551481542327
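One more added aside before we conclude. The remaining two zeros form a complex conjugate pair of equal magnitude, exactly the situation where the single ratio $p_3/p_2$ misbehaves (note the sign flip in the third column above), but the product of the pair still shows up cleanly in $p_4/p_2$.

   % Magnitude of the complex pair, sqrt(5/2.0945...) = 1.5450...
   abs(p(4)/p(2))^(1/(2*m))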
This basic implementation of root-squaring makes it through de Morgan's wicket.

Reduced Penultimate Remainder

Almost two years ago, I blogged about the Reduced Penultimate Remainder algorithm. It was an undergrad individual research project that I now fondly remember as the most obscure, impractical algorithm I have ever seen. The algorithm tries to find a lower degree polynomial that divides exactly, without any remainder, into the polynomial whose zeros we seek. The zeros of the resulting divisor are then some of the zeros of the dividend. As an example in that post, I tried to find a linear factor of $x^3-2x-5$. That would have given me the zero that I've been computing in this post. But the algorithm fails to converge, no matter what starting factor is provided. How about computing a quadratic factor instead? If successful, that will yield two zeros. Here is the function that takes one step.
function q = rpr(p,q)
% RPR Reduced Penultimate Remainder.
% q = rpr(p,q), for polynomials p and q.
% Assume p(1) = 1, q(1) = 1.
   % Polynomial long division.
   % Stop when degree of r = degree of q.
   n = length(p);
   m = length(q);
   r = p(1:m);
   for k = m+1:n
      r = r - r(1)*q;
      r = [r(2:end) p(k)];
   end
   q = r/r(1);
end
The starting factor $q(x)$ can be any quadratic that has complex zeros. Let's try $q(x) = x^2+x+1$.
   p = [1 0 -2 -5];
   q = [1 1 1]
   k = 1;
   r = 0*q;
   while max(abs(q-r)) > 10*eps(max(abs(q)))
      r = q;
      q = rpr(p,q)
      k = k+1;
   end
   k
Here are the first ten and the last four of 114 steps. I've edited out 100 steps in between.
  1.000000000000000   1.000000000000000   1.000000000000000
  1.000000000000000   3.000000000000000   5.000000000000000
  1.000000000000000   2.333333333333334   1.666666666666667
  1.000000000000000   1.571428571428571   2.142857142857143
  1.000000000000000   2.636363636363636   3.181818181818182
  1.000000000000000   1.965517241379310   1.896551724137931
  1.000000000000000   1.982456140350877   2.543859649122807
  1.000000000000000   2.292035398230088   2.522123893805309
  1.000000000000000   1.972972972972974   2.181467181467182
  1.000000000000000   2.119373776908023   2.534246575342465
           .                   .                   .
           .                   .                   .
           .                   .                   .
  1.000000000000000   2.094551481542332   2.387145908831160
  1.000000000000000   2.094551481542323   2.387145908831149
  1.000000000000000   2.094551481542327   2.387145908831159
  1.000000000000000   2.094551481542328   2.387145908831155
k =
  114
I can now use deconvolution to do the polynomial division $p(x)/q(x)$. This produces a linear quotient and the desired zero.
   r = deconv(p,q)
   x1 = -r(2)
r =
   1.000000000000000  -2.094551481542328
x1 =
   2.094551481542328
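As one final added check, the zeros of the computed quadratic factor are the complex conjugate pair of our cubic.

   roots(q)   % about -1.0473 +- 1.1359i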

References

Alston S. Householder, "Dandelin, Lobacevskii, or Graeffe", The American Mathematical Monthly, vol. 66, 1959, pp. 464-466. http://www.jstor.org/stable/2310626

Published with MATLAB® R2015a