# Cleve's Corner

## Pentium Division Bug Affair

In my previous blog post I reprinted the Cleve's Corner article from the 1995 issues of MATLAB News and Notes and SIAM News about the Pentium division bug. In today's post I would like to describe some of the effects that affair had on the emerging Internet, the MathWorks, and the Intel Corporation.

### Contents

#### 1994 Internet

In the fall of 1994 we did not have anything like the Web as we know it today. There was an Internet, but the connections and protocols were all text based. We did have email and file transfer. News groups, including comp.soft-sys.matlab and comp.sys.intel, existed. The Mosaic graphical browser had been available for just a few months, but most people, including me, had not yet started to use it. Internet Explorer was not yet on the scene and it would be four more years before we would hear of Google.

When I began to get email asking for information about the bug, I would start my reply with "If you have access to the Internet ..." and then give instructions on how to use the MathWorks FTP site. Imagine doing that today.

And, as far as I can remember, there was not yet any spam. All of the email, and all of the postings to the newsgroups, were legitimate. Maybe they weren't all worthwhile, but at least there wasn't yet any of the absolute junk we unfortunately see today.

#### Initial Activity

Prof. Thomas Nicely started all the activity by sending a memo about an error in the calculation of the reciprocal of a twin prime to the CompuServe network on October 30, 1994. This soon found its way to Terje Mathisen in Norway who confirmed a bug in floating point division on the new Intel Pentium processor and posted an announcement, along with a test program, to the comp.sys.intel newsgroup on November 3.

A couple of MathWorks customers saw the news and called our tech support, asking how the bug might affect MATLAB. At nearly the same time I heard about the bug from a mailing list covering floating point arithmetic.

On November 14 Tim Coe posted what turned out to be an instance of the worst possible error. The next day I made my first posting, to both comp.soft-sys.matlab and comp.sys.intel, summarizing what was known up to then. I said that as far as I was concerned the worst aspect of the situation was that we didn't know how bad it was and so we had to be concerned about it.

#### Intel's Response

Here is Intel's initial response, in the form of the FAXBACK document produced by their customer support system.

There has been a lot of communication recently on the Internet about a floating point flaw on the Pentium processor. For almost all users, this is not a problem.

Here are the facts. Intel detected a subtle flaw in the precision of the divide operation for the Pentium processor. For rare cases (one in nine billion divides), the precision of the result is reduced. Intel discovered this subtle flaw during ongoing testing after several trillions of floating point operations in our continuing testing of the Pentium processor. Intel immediately tested the most stringent technical applications that use the floating point unit over the course of months and we have been unable to detect any error. In fact, after extensive testing and shipping millions of Pentium processor-based systems there has only been one reported instance of this flaw affecting a user to our knowledge. In this case, a mathematician doing theoretical analysis of prime numbers and reciprocals saw reduced precision at the 9th place to the right of the decimal.

In fact, extensive engineering tests demonstrated that an average spreadsheet user could encounter this subtle flaw of reduced precision once in every 27,000 years of use. Based on these empirical observations and our extensive testing, the user of standard off-the-shelf software will not be impacted. If you have this kind of prime number generation or other complex mathematics, call 1 800 628-8686 (International: 916 356-3551). If you don't, you won't encounter any problems with your Pentium processor-based system. If ever in the life of the computer this becomes a problem, Intel will work with the customer to resolve the issue.

Needless to say, that kind of response did not satisfy activists on the net.

#### Internet Emboldened

Intel had only recently begun to appeal directly to consumers with its "Intel Inside" campaign. Intel's traditional customers were computer manufacturers, and they were not concerned with "subtle flaws" that had already been fixed in the latest stepping of the part.

But now posts criticizing Intel started to appear on the net. A small, but important and vocal portion of the customer base was protesting.

On November 22, two engineers at the Jet Propulsion Laboratory suggested to their purchasing department that the laboratory stop ordering computers with Pentium chips. Steve Young, a reporter with CNN, heard about JPL's decision, found my posting on the newsgroup, and called me. After talking to me for a few minutes, he sent a video crew to MathWorks in Massachusetts and interviewed me over the phone from wherever he was in California. That evening, CNN's Moneyline used Young's news about JPL and his interview with me to make the Pentium Division Bug mainstream news. Two days later -- it happened to be Thanksgiving -- stories appeared in The New York Times, The Boston Globe, The San Jose Mercury News, and elsewhere. Hundreds of articles followed in the next several weeks.

On November 27, Andy Grove, CEO of Intel, tried to post his view of the situation on comp.sys.intel. But I'm afraid he only made matters worse. For one thing, he didn't even have his own logon, so he had a colleague, Richard Wirt, post the message on his behalf. Many netters were suspicious that it was a fake. The general tone was similar to the FAXBACK I reproduced above, aggressively defensive. And the posting was too long. I have included the entire posting in the collection of Pentium bug documents available from MATLAB Central.

In the three weeks from Mathisen's first posting until the end of November the traffic on comp.sys.intel increased from a trickle to hundreds of messages a day. Most of it was rants about Intel and rants about the rants. I had to stop reading it. Some of it was cross posted to comp.soft-sys.matlab, so the traffic was heavy there too for a while.

The net had been heard. It had gone from being a plaything for a few academic geeks to something with economic clout. The Internet was beginning to grow up.

#### Effect on MathWorks

In the fall of 1994 the MathWorks was celebrating its 10th anniversary. We had not yet moved to Apple Hill. We were still located on Prime Parkway, off of Speen Street. We had fewer than 250 employees.

Our product name, MATLAB, was known to our customers, but our company name, MathWorks, was not widely known. On November 23, we announced that we were preparing a version of MATLAB that could detect and correct the division bug. Our public relations firm issued a press release with the headline,

THE MATHWORKS DEVELOPS FIX FOR THE INTEL PENTIUM(tm) FLOATING POINT ERROR

So on the day the story appeared in The New York Times, this message showed up in the fax machines of media outlets all over the country. It turned out to be a very successful press campaign.

MathWorks has been at the forefront of companies using the Internet. We were the 73rd company to register a ".com" domain. So with the Mosaic browser becoming available about this time, we put up one of the first company home pages.

I collected as many of the documents associated with this affair as I could and made them available in a .zip file from an FTP site. We put a notice about that collection on our home page, even though most people learned about it from newsgroups and email. Here is that first MathWorks Home Page.

#### Reaction at IBM

On December 12, IBM issued a report entitled "Pentium Study" that claimed financial spreadsheet calculations did not produce floating point numbers with uniformly distributed random bit patterns, but rather frequently produced numbers with long strings of 1's in their binary representation. These are the numbers that are "at risk" for encountering the FDIV bug. As a result, they predicted division errors would be much more frequent than Intel claimed.

Citing their report, IBM announced they were halting shipment of Pentium-based personal computers. Intel's stock dropped 10 points within a few hours of the IBM press release.

A week after the IBM announcement, on December 20, Intel finally acknowledged the situation. They announced a no-questions-asked return policy. They would replace the Pentium chip for anyone who requested it. They set up a number of replacement centers around the country (and around the world, I assume). They set aside $475 million to pay for it all. Their announcement said:

Our previous policy was to talk with users to determine whether their needs required replacement of the processor. To some people, this policy seemed arrogant and uncaring. We apologize. We were motivated by a belief that replacement is simply unnecessary for most people. We still feel that way, but we are changing our policy because we want there to be no doubt that we stand behind this product.

When I first thought about writing a blog post recalling the Pentium affair, I ordered the recent biography of Andy Grove listed in the References. It reads like a ghost-written autobiography. There is a substantial chapter entitled "The Pentium Launch: Intel Meets the Internet" which provides Grove's view of these events and their consequences. He still doesn't quite get it -- he continues to refer to newsgroup traffic as email. But the chapter does reveal that the affair had a deep effect on him. I think his message is well summarized by this quote on a key chain containing a surplus defective Pentium.

Bad companies are destroyed by crises; good companies survive them; great companies are improved by them. -- Andy Grove, December 1994

Pentium key chain. Image thanks to Thomas Johansson, thomas@cpucollection.se.

#### Public Reaction

The entire affair fascinated the public. In addition to the key chains, there were necklaces, pendants, and other jewelry. There were cartoons in newspapers and popular magazines, and jokes in late-night TV monologues. The interest lasted for a couple of months. Some of the jokes, unfortunately some of the worst ones, are preserved on Web sites today. Just Google "Pentium Jokes".

One of my favorite stories is personal. My daughter came to visit for Christmas. She uses a Macintosh and I accidentally dropped it while unloading the car. I took it into a shop in Natick that I had never been in before to see about getting it repaired. My picture had recently been in the local newspaper and a guy behind the counter recognized me. He said, "Hey, you're Cleve Moler. I've got something to show you." He clicked on an icon on a nearby Mac. The icon said "Pentium Powered Calculator". Sure enough, the results of divisions had errors in the fifth decimal place. This is on a Mac! The splash screen said the program came from an outfit called "Sarcastic Software". Who are those guys? I'd love to meet them.

#### Postmortem

I'm not sure of the actual number, but I understand that relatively few people actually asked to replace their Pentiums after all. Intel did not have to spend very much of the $475 million they had allocated. Most people were satisfied just to be assured that they could get a replacement if they ever felt they really needed it.

Only a few dozen people requested the Pentium-aware version of MATLAB. And none of them reported setting off the floating point division warning message without deliberately trying to trigger it.

So, as far as I know, Thomas Nicely was the only person ever to encounter the FDIV bug without deliberately looking for it. We had other reports of suspected division error sightings, but they were all ultimately traced to arithmetic errors in other places. In fact, the probability of the occurrence of the FDIV error is extremely small. But we didn't know that at the beginning of the affair, which is why we had to be concerned.

#### References

A collection of original documents associated with the Pentium division bug is available from MATLAB Central: Pentium Papers.

Richard S. Tedlow, Andy Grove: The Life and Times of an American Business Icon, Portfolio, Penguin Books, 2006, 568 pp.

Get the MATLAB code

Published with MATLAB® R2012b

## Pentium Division Bug Revisited

The Pentium division bug episode in the fall of 1994 was a defining moment for the MathWorks, for the Internet, for Intel Corporation, and for me personally. In this blog I am reprinting the article that I wrote for the Winter 1995 issue of MATLAB News and Notes and for SIAM News. In my next blog I want to discuss the episode's impact.

### Contents

#### A Tale of Two Numbers (Reprinted from MATLAB News and Notes, Winter 1995)

This is the tale of two numbers, and how they found their way over the Internet to the world's front pages on Thanksgiving day, embarrassing the world's premier computer chip manufacturer.

#### Thomas Nicely

The first number is a twelve digit integer

$$p = 824633702441$$

Thomas Nicely of Lynchburg College in Virginia is interested in this number because both p and p+2 are prime. Two consecutive odd numbers that are both prime are called twin primes. Nicely's research involves the distribution of twin primes and, in particular, the sum of their reciprocals,

$$S = 1/5 + 1/7 + ... + 1/29 + 1/31 + ... + 1/p + 1/(p+2) + ...$$

It is known that this infinite sum has a finite value, but nobody knows what that value is. For over a year, Nicely has been carrying out various computations involving twin, triple and quadruple primes. One of his objectives has been to demonstrate that today's PCs are just as effective as supercomputers for this kind of research. He was using half a dozen different computers in his work until March, when he added a machine based on Intel Corporation's Pentium chip to this collection.

In June, Nicely noticed inconsistencies among several different quantities he was computing. He traced it to the values he was getting for $1/p$ and $1/(p+2)$. Although he was using double precision, they were accurate only to roughly single precision. At first he blamed his own programs, then the compiler and operating system, then the bus and motherboard in his machine. By late October, though, he was convinced that the difficulty was in the chip itself.
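
The computation at the heart of Nicely's discovery is a single division. Here is a quick illustration in MATLAB (my addition, not part of the original account) on a chip with a correct divide unit:

   p = 824633702441;
   format long e
   1/p    % full double precision accuracy here; the flawed Pentium
          % delivered only about single precision, eight or so digits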

#### Alex Wolfe

On October 30, in an e-mail message to several other people who had access to Pentium systems, Nicely said that he thought there was a bug in the processor's floating point arithmetic unit. One of the recipients of Nicely's memo posted it on the CompuServe network. Alex Wolfe, a reporter for the EE Times, spotted the posting and forwarded it to Terje Mathisen.

#### Terje Mathisen

Terje Mathisen, a computer jock at Norsk Hydro in Oslo, Norway, has published articles on programming the Pentium, and posted notes on the Internet about the accuracy of its transcendental functions. Within hours of receiving Wolfe's query, Mathisen confirmed Nicely's example, wrote a short, assembly language test program, and, on November 3, initiated a chain of Internet postings in newsgroup comp.sys.intel about the FDIV bug. (FDIV is the floating point divide instruction on the Pentium.) A day later, Germany's Andreas Kaiser posted a list of two dozen numbers whose reciprocals are computed to only single precision accuracy on the Pentium.

#### Tim Coe

Tim Coe is an engineer at Vitesse Semiconductor in southern California. He has designed floating point arithmetic units himself and saw in Kaiser's list of erroneous reciprocals clues to how other designers had tackled the same task. He surmised, correctly it turns out, that the Pentium's division instruction employs a technique known to chip designers as a radix-4 SRT algorithm. This produces two bits of quotient per machine cycle and thereby makes the Pentium twice as fast at division as previous Intel chips running at the same clock rate.

Coe created a model that explained all the errors Kaiser had reported in reciprocals. He also realized that division operations involving numerators other than one potentially had even larger relative errors. His model led him to a pair of seven digit integers whose ratio

$$c = 4195835 / 3145727$$

appeared to be an instance of the worst case error. Coe did his analysis without actually using a Pentium -- he doesn't own one. To verify his prediction, he had to bundle his year-and-a-half old daughter into his car, drive down to a local computer store, and borrow a demonstration machine.

The true value of Coe's ratio is

$$c = 1.33382044...$$

But computed on a Pentium, it is

$$c = 1.33373906...$$

The computed quotient is accurate to only 14 bits. The relative error is $6.1 \cdot 10^{-5}$. This is worse than single precision roundoff error and over ten orders of magnitude worse than what we expect from double precision computation. The error doesn't occur very often, but when it does, it can be very significant. We are faced with a very small probability of making a very big error.
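
That figure is easy to reproduce. Here is a quick MATLAB check (mine, using the erroneous quotient quoted above):

   z_true = 4195835/3145727;             % 1.33382044... on a correct chip
   z_flawed = 1.33373906;                % value from a flawed Pentium
   relerr = (z_true - z_flawed)/z_true   % about 6.1e-5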

#### Press Coverage

I first learned of the FDIV bug a few days before Coe's revelation from an electronic mailing list maintained by David Hough; at that point I began to follow comp.sys.intel. At first, I was curious but not alarmed. It seemed to be some kind of single- versus double-precision problem which, although annoying, is not all that uncommon. But Coe's discovery, together with a couple of customer calls to The MathWorks tech support, raised my level of interest considerably.

On November 15, I posted a summary of what I knew then to both the Intel and the MATLAB newsgroups, using Nicely's prime and Coe's ratio as examples. I also pointed out that the divisor in both cases is a little less than 3 times a power of two:

$$824633702441 = 3 \cdot 2^{38} - 18391$$

and

$$3145727 = 3 \cdot 2^{20} - 1$$
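
Both identities are easy to verify:

   3*2^38 - 18391     % 824633702441, Nicely's prime
   3*2^20 - 1         % 3145727, the denominator of Coe's ratio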

By this time, the Net had become hyperactive and my posting was redistributed widely. A week later, reporters for major newspapers and news services had Xeroxed copies of faxed copies of printouts of my posting.

The story began to get popular press coverage when Net activists called their local newspapers and TV stations. I was interviewed by CNN on Tuesday, the 22nd, and spent most of the next day on the telephone with other reporters. My picture was in the New York Times and the San Jose Mercury News on Thursday which happened to be Thanksgiving. The Times story included a sidebar, titled "Close, But Not Close Enough", which used Coe's ratio to illustrate the problem. The story was front-page news in the Boston Globe, with the headline "Sorry, Wrong Number." The Globe's sidebar demonstrated the error by evaluating Coe's ratio in a spreadsheet.

In the month since I learned of Nicely's prime and Coe's ratio they have certainly earned their place in the Great Numbers Hall of Fame.

One challenging aspect of this has been explaining to the press how big the error is. The focus of attention has been on what we numerical analysts call the residual,

$$r = 4195835 - (4195835 / 3145727) \cdot 3145727$$

With exact computation, r would be zero, but on the Pentium it is 256. To most people, 256 seems like a pretty big error. But when compared to the input data, of course, it doesn't seem so large. This gets us into relative error and significant digits -- terms that are not encountered in everyday journalistic prose. There was even confusion on the part of some reporters about where to start counting "decimal digits". Not everybody got the details right. The New York Times was the only publication I saw that thought to show the actual value of the erroneous quotient. The British publication, The Economist, had the good sense to describe the error as 0.006%. In retrospect, perhaps the best description of the error is "about 61 parts per million."
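
Here is the residual computation itself. On a chip with a correct divide unit it is zero to within roundoff; on a flawed Pentium it is 256.

   x = 4195835;
   y = 3145727;
   r = x - (x/y)*y    % zero up to roundoff here; 256 on a flawed Pentium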

Since double precision floating point computation is vital to MATLAB, and since Pentium-based machines account for a significant portion of new MATLAB usage, we decided early on to provide a release of our software that works around the bug. My first thought was to modify our source code so that every division operation

   z = x/y

was followed by a residual calculation

   r = x - y*z

When the residual is too large, the division could be redone in software using Newton's method. We abandoned both aspects of this approach -- we don't compute the residual, and we don't do software emulation.
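
A sketch of that abandoned approach might have looked something like this. The tolerance and the refinement steps are my own guesses, not code we actually wrote; the attraction of Newton's method here is that the iteration g <- g*(2 - y*g) for the reciprocal requires no division.

   function z = fdiv_checked(x,y)
      z = x/y;
      r = x - y*z;
      if abs(r) > 4*eps(abs(x))   % tolerance is a guess
         g = z/x;                 % rough 1/y, good to about 6.1e-5
         g = g*(2 - y*g);         % each Newton step squares the error
         g = g*(2 - y*g);
         z = x*g;
      end
   end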

#### Peter Tang

Coe, Mathisen and I are now working with Peter Tang of Argonne Laboratory (on sabbatical in Hong Kong) and a group of hardware and software engineers from Intel, who are in California and Oregon. We have never met face-to-face. Our collaboration is all by e-mail and telephone. (11 am in Massachusetts is 8 am in California and Oregon, 5 pm in Oslo and 1 am in Hong Kong.) We are developing, implementing, testing and proving the correctness of software that will work around the FDIV bug and related bugs in the Pentium's on-chip tangent, arctangent and remainder instructions. We will offer the software to compiler vendors, commercial software developers and individuals via the Net. We hope this software will be used by anyone doing floating point arithmetic on a Pentium who is unable to replace the chip.

#### SRT Algorithm

The Pentium's division woes can be traced to five missing entries in a lookup table that is actually part of the chip's circuitry. The radix-4 SRT algorithm is a variation of the familiar long division technique we all learned in school. Each step of the algorithm takes one machine cycle and appends another two-bit digit to the quotient. Both the quotient and the remainder are represented in a redundant fashion as the difference between two numbers. In this way, the next digit of the quotient can be obtained by table lookup, with several high-order bits from both the divisor and remainder used as indices into the table. The table is not rectangular; a triangular portion of it is eliminated by constraints on the possible indices.

When the algorithm was implemented in silicon, five entries on the boundary of this triangular portion were dropped. These missing entries (each of which should have a value of +2) effectively mean that, for certain combinations of bits developed during the division process, the chips make an error while updating the remainder.

#### Our Workaround

The key to our workaround is the fact that the chip does a perfectly good job with division almost all the time. It is simply a question of avoiding the operands that don't work very well, which our software does with a quick test of the divisor before each attempted FDIV. The absence of dangerous bit patterns indicates that the FDIV can be done safely. The presence of one of the patterns does not guarantee that the error will occur; it is just a signal that it might. In this case, scaling both numerator and denominator by 15/16 takes the divisor out of the unsafe region and ensures that the subsequent division will be fully accurate.

With this approach, it is not necessary to test the magnitude of the residual resulting from a division. It is known a priori that all divisions will produce fully accurate results. If desired, an additional test can compare the result of scaled and unscaled divisions and count the number of errors that would occur on an unmodified Pentium. We will offer this test in MATLAB, but it can be turned off for maximum execution speed.

The regions to be avoided can be characterized by examining the high order bits of the fraction in the IEEE floating point representation. Floating point numbers in any of the three available precisions can be thought of as real numbers of the form

$$\pm (1 + f ) \cdot 2^e$$

where e is an integer in the range determined by under- and overflow and f is a rational number in the half open interval $[0, 1)$. The eight high order bits of f divide this interval into $2^8 = 256$ subintervals. Five of these subintervals contain all the at-risk divisors. The hexadecimal representations of the leading bits of the five subintervals are

  1F, 4F, 7F, AF, and DF

(As I write this article, Coe, Tang and some of the Intel people are still working on a proof of correctness. The ultimate version of the algorithm may well use more than eight bits, but the idea will be the same.)

A picture of the floating point numbers between any two powers-of-two looks something like this

  [____()______()______()______()______()____]
       1F      4F      7F      AF      DF

The length of each of the five subintervals is 1/256 of the overall length. The result of any division involving a divisor outside these subintervals will be accurate. Only a small fraction of divisions involving divisors inside the subintervals produce erroneous results, but it is easiest, and quickest, to avoid the subintervals altogether. Scaling the numerator and denominator by 15/16 shifts the divisor to a safe region. (I had originally proposed scaling by 3/4, but this takes the 7F subinterval into the 1F subinterval.)
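
The arithmetic behind that parenthetical remark is easy to reproduce. Here is a small check (my illustration) using the left endpoint of the 7F subinterval:

   y = 1 + hex2dec('7F')/256;            % significand at the start of 7F
   dec2hex(floor(((3/4)*y - 1)*256))     % '1F', back in an unsafe region
   dec2hex(floor(((15/16)*y - 1)*256))   % '67', outside all five regions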

For example, the denominator in Coe's ratio is

$$3145727 = 1.49999952316284 \cdot 2^{21}$$

The hexadecimal representation is printed as 4147FFFF80000000. So $e = 21$ and $f = 2^{-1} - 2^{-21}$, and the number is close to the right end point of the 7F subinterval. The 17 consecutive high order 1 bits in f conspire with the bits in Coe's numerator to produce an instance of worst-case error.

The Pentium, like most other microprocessors, saves floating point numbers in memory with either a 32-bit single precision format, or a 64-bit double precision format. In the floating point arithmetic unit itself, however, an 80-bit extended-precision format is used for the calculations. Thus, if an at-risk denominator is encountered in either single or double precision, it can be scaled by 15/16 and extended precision can be used for the division. Then, when the resulting quotient is rounded back to the working precision, it will yield exactly the same result, down to the last bit, as would be produced by a chip without the bug.

#### Pentium-Safe Division

The entire algorithm can be summarized in a few lines of MATLAB pseudo-code. This doesn't actually work, because MATLAB doesn't do bit-level operations on floating point numbers. Moreover, we do not make the distinction between the 64- and 80-bit representations that ensure full accuracy when scaling is required. But I hope this explanation has given readers an idea of the thinking that went into the workaround.

The algorithm is:

  function z = fdiv(x,y)
     if at_risk(y)
        x = (15/16)*x;
        y = (15/16)*y;
     end
     z = x/y;
  end

The safety filter is:

  function a = at_risk(y)
     f = and(hex(y),'000FF00000000000');
     a = any(f == ['1F'; '4F'; '7F'; 'AF'; 'DF'])
  end
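
The hex and and functions in this pseudo-code don't exist, but modern MATLAB can carry out essentially the same test with num2hex. In an IEEE double the first three hex digits hold the sign and exponent, so characters four and five are the leading eight bits of the fraction. This is my own reconstruction, not the workaround we shipped:

   y = 3145727;                          % denominator of Coe's ratio
   h = num2hex(y);                       % '4147ffff80000000'
   f = upper(h(4:5))                     % leading fraction byte, '7F'
   risky = any(strcmp(f,{'1F','4F','7F','AF','DF'}))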

Get the MATLAB code

Published with MATLAB® R2012b

## Wilkinson’s Matrices

One of Jim Wilkinson's outstanding skills was his ability to pick great examples. A previous post featured his signature polynomial. This post features his signature matrices.

### Contents

#### Tridiagonal Matrices

When working with eigenvalues, we always have to be concerned about multiplicity and separation. Multiple eigenvalues and clusters of close eigenvalues require careful attention. Symmetric tridiagonal matrices are easier to handle than the general case. If none of the off diagonal elements are zero, there are no multiple eigenvalues. But if none of the off diagonal elements are "small", can we conclude that the eigenvalues are reasonably well separated? One of Wilkinson's matrices answers this question with a resounding "no".

#### Wilkinson's Matrices

Wilkinson introduces the matrices $W_{2n+1}^-$ and $W_{2n+1}^+$ on page 308 of The Algebraic Eigenvalue Problem and then employs them throughout the rest of the book. These matrices are the families of symmetric tridiagonal matrices generated by the following MATLAB function. ($W_n^+$ is also generated by the function wilkinson in toolbox\matlab\elmat that has been part of MATLAB for many years.)

type generate_wilkinson_matrices

function [Wm,Wp] = generate_wilkinson_matrices(n)

D = diag(ones(2*n,1),1);
Wm = diag(-n:n) + D + D';
Wp = abs(diag(-n:n)) + D + D';



Here, for example, is $n = 3, 2n+1 = 7$.

format compact
[Wm,Wp] = generate_wilkinson_matrices(3)

Wm =
    -3     1     0     0     0     0     0
     1    -2     1     0     0     0     0
     0     1    -1     1     0     0     0
     0     0     1     0     1     0     0
     0     0     0     1     1     1     0
     0     0     0     0     1     2     1
     0     0     0     0     0     1     3
Wp =
     3     1     0     0     0     0     0
     1     2     1     0     0     0     0
     0     1     1     1     0     0     0
     0     0     1     0     1     0     0
     0     0     0     1     1     1     0
     0     0     0     0     1     2     1
     0     0     0     0     0     1     3


#### Eigenvalues

Wilkinson usually has $2n+1 = 21$. Here are the eigenvalues of $W_{21}^-$ and $W_{21}^+$.

format long
[Wm,Wp] = generate_wilkinson_matrices(10);
disp('       eig(Wm(21))         eig(Wp(21))')
disp(flipud([eig(Wm) eig(Wp)]))

       eig(Wm(21))         eig(Wp(21))
  10.746194182903357  10.746194182903393
   9.210678647333035  10.746194182903322
   8.038941119306445   9.210678647361332
   7.003952002665359   9.210678647304919
   6.000225680185169   8.038941122829023
   5.000008158672943   8.038941115814275
   4.000000205070442   7.003952209528674
   3.000000003808129   7.003951798616375
   2.000000000054486   6.000234031584166
   1.000000000000619   6.000217522257097
   0.000000000000003   5.000244425001915
  -1.000000000000618   4.999782477742903
  -2.000000000054490   4.004354023440857
  -3.000000003808127   3.996048201383625
  -4.000000205070437   3.043099292578824
  -5.000008158672947   2.961058884185726
  -6.000225680185168   2.130209219362506
  -7.003952002665363   1.789321352695084
  -8.038941119306442   0.947534367529292
  -9.210678647333047   0.253805817096678
 -10.746194182903357  -1.125441522119985


#### Properties

Here are some important properties of these eigenvalues.

• All the eigenvalues of $W_{2n+1}^-$, except the middle one, occur in $\pm$ pairs.
• $W_{2n+1}^-$ is singular. The middle eigenvalue is zero.
• All but one of the eigenvalues of $W_{2n+1}^+$ are positive.
• The positive eigenvalues of $W_{2n+1}^+$ occur roughly in pairs, with a positive eigenvalue of $W_{2n+1}^-$ approximately in the center of each pair.
• The larger pairs of eigenvalues of $W_{2n+1}^+$ are incredibly close together.
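
The first two properties are easy to confirm numerically; here is a quick check reusing the generator above:

[Wm,~] = generate_wilkinson_matrices(10);
e = sort(eig(Wm));
norm(e + flipud(e))   % nearly zero, the eigenvalues occur in +/- pairs
e(11)                 % the middle eigenvalue is essentially zero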

#### Remarkable Closeness

All of the off diagonal elements in $W_{21}^+$ equal one, so none are "small", and yet the first two eigenvalues of $W_{21}^+$ agree to fourteen significant figures. Wilkinson calls this "remarkable" and "pathological". The relative distance is

format short e
e = flipud(eig(Wp));
delta = (e(1)-e(2))/e(1)

delta =
6.6120e-15


#### Eigenvectors

The behavior of the eigenvalues can be understood by looking at the eigenvectors. Here is the vector $u_1$ associated with the first eigenvalue of $W_{21}^-$ and the vectors $v_1$ and $v_2$ associated with the leading pair of eigenvalues of $W_{21}^+$. Each vector has been normalized so that its first component is one.

[U,~] = eig(Wm);
[V,~] = eig(Wp);
u1 = flipud(U(:,end)/U(end,end));
v1 = flipud(V(:,end)/V(end,end));
v2 = flipud(V(:,end-1)/V(end,end-1));

format long
[u1 v1 v2]

ans =
   1.000000000000000   1.000000000000000   1.000000000000000
   0.746194182903358   0.746194182903392   0.746194182903321
   0.302999941502167   0.302999941502254   0.302999941502075
   0.085902493869951   0.085902493870164   0.085902493869724
   0.018807481330335   0.018807481331049   0.018807481329571
   0.003361464615150   0.003361464618322   0.003361464611748
   0.000508147087273   0.000508147104788   0.000508147068490
   0.000066594309069   0.000066594424058   0.000066594185758
   0.000007705362252   0.000007706235465   0.000007704425844
   0.000000798285440   0.000000805807738   0.000000790218740
   0.000000074882661   0.000000147323229  -0.000000002800555
   0.000000006418173   0.000000777356287  -0.000000820314052
   0.000000000506441   0.000007428942095  -0.000007992139484
   0.000000000037026   0.000064197613845  -0.000069080489810
   0.000000000002522   0.000489858240829  -0.000527118748833
   0.000000000000161   0.003240481200881  -0.003486964947267
   0.000000000000010   0.018130575985481  -0.019509658947141
   0.000000000000001   0.082810753074099  -0.089109664858083
   0.000000000000000   0.292094585462557  -0.314312449184672
   0.000000000000000   0.719337698380754  -0.774053354706959
   0.000000000000000   0.964008718993031  -1.037335016061421


And here is a plot of these three eigenvectors. The components of the first half of each of the three vectors decay rapidly and all three agree with each other to many decimal places.

n = 10;
plot_wilkinson_eigenvectors


For a tridiagonal matrix $A$, the eigenvalue/vector equation

$$Ax = \lambda x$$

is a three-term recurrence relation for the components of $x$. For the first half of each of the three vectors, we have the same positive elements on the diagonal of $A$, ones on the off diagonal, and nearly the same positive value of $\lambda$, so the recurrence relation is nearly the same and leads to nearly the same rapidly decreasing components.
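
Here is that recurrence in action (a sketch; the hard-coded value of $\lambda$ is the leading eigenvalue of $W_{21}^+$ computed above). Run forward from the first component, it reproduces the rapidly decaying first half of $v_1$; the forward recurrence is unstable in the second half.

n = 10;
d = abs(-n:n);                   % diagonal of Wp
lambda = 10.746194182903393;     % leading eigenvalue of Wp from above
x = zeros(2*n+1,1);
x(1) = 1;
x(2) = lambda - d(1);            % first row of Wp*x = lambda*x
for k = 2:2*n
    x(k+1) = (lambda - d(k))*x(k) - x(k-1);
end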

Once the recurrence relation gets halfway through the vector and the components get small, this argument breaks down. For $u_1$ the diagonal elements are negative and so the components of the vector remain small. Perron-Frobenius implies that $v_1$ goes positive. Symmetry implies that $v_2$ goes negative to stay orthogonal to $v_1$.

#### Perturbation Theory

A closer examination of the elements of $v_1$ reveals that they decay exponentially until they reach a minimum halfway through the vector at index $n+1$. It is possible to estimate a priori that this minimum value is of order $O(1/n!)$.

format short e
minimum = v1(n+1)
estimate = 1/factorial(n)

minimum =
1.4732e-07
estimate =
2.7557e-07


This information about the shape of the dominant eigenvector makes it possible to prove that the corresponding eigenvalue, $\lambda_1$, is nearly a double eigenvalue, with separation of order the square of the minimum in the eigenvector, which is $O(1/(n!)^2)$.

separation = e(1)-e(2)
estimate = 1/factorial(n)^2

separation =
7.1054e-14
estimate =
7.5941e-14


#### Acknowledgement

Thanks to Pete Stewart, University of Maryland, for some helpful email while I was working on this post.

#### Reference

J. H. Wilkinson, The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965, 662 pp.

Get the MATLAB code

Published with MATLAB® R2012b

## Quantum Matrix Processor

The Quantum Matrix Processor, being announced today, is the world's first viable quantum array processor. Basic matrix operations are done instantaneously, with infinite precision. The programming environment is classic MATLAB.

### Contents

#### Quantum Research Partners

I am pleased to be involved in today's announcement of the QMP, the Quantum Matrix Processor. This new device has been developed by Quantum Research Partners, a startup here in Santa Fe that was formed two years ago by a team from Los Alamos National Lab. I have been a consultant from the beginning of their project because they have based their design on classic MATLAB.

#### QMP

The QMP is similar conceptually to a GPU, or Graphics Processing Unit. It is a coprocessor that interfaces with the motherboard via an expansion slot such as PCI Express or Accelerated Graphics Port. Matrices are transferred between the memories and registers of the CPU, or central processor, and the coprocessor. Once the data is available in the QMP, quantum computing can take place.

#### Qureal

Traditional quantum computing involves "qubits", where a quantum circuit represents a single bit and quantum arithmetic operations are similar to digital arithmetic operations. The primary innovative aspect of the QMP is the "qureal", where a single quantum circuit represents a real number. The five basic arithmetic operations of addition, subtraction, multiplication, division, and square root can be carried out instantaneously, according to the rules of real arithmetic.

Quantities are stored in qureal registers and qureal memory as real numbers. There is no representation error. Arithmetic operations are carried out exactly. There is no roundoff error. Arithmetic operations are done instantaneously. If it could be measured, the execution time of an addition or multiplication operation would be found to be zero.

#### Uncertainty Principle

Heisenberg's uncertainty principle implies that it is not possible to access an infinitely precise real number in finite time. It would take an infinite amount of time, for example, to access the binary or decimal representation of a real result.

#### Programming Environment

The programming environment of the QMP is based on MATLAB. The beta release of the environment is an implementation of what is now called "classic MATLAB", the simple interactive matrix calculator that I developed before we started MathWorks. The available features include basic MATLAB matrix management and command processing. Each of the following functions has been implemented as a single QMP instruction, so it is executed essentially instantaneously on a matrix of qureals.

   ABS   ATAN  BASE  CHAR  CHOL  CHOP  COND  CONJ
   COS   DET   DIAG  DIAR  DISP  EIG   EPS   EXEC
   EXP   EYE   FLOP  HESS  HILB  IMAG  INV   KRON
   LINE  LOAD  LOG   LU    MAGI  NORM  ONES  ORTH
   PINV  PLOT  POLY  PRIN  PROD  QR    RAND  RANK
   RAT   RCON  REAL  ROOT  ROUN  RREF  SAVE  SCHU
   SIN   SIZE  SQRT  SUM   SVD   TRIL  TRIU  USER
   CLEA  ELSE  END   EXIT  FOR   HELP  IF    LONG
   RETU  SEMI  SHOR  WHAT  WHIL  WHO   WHY

#### LINPACK Benchmark

This processor can solve a system of simultaneous linear equations instantaneously. Does that qualify the QMP as the fastest computer in the world? My colleague Jack Dongarra and the other folks running the top 500 list will have two concerns when we submit our benchmark results. First, we can't access the computed solution in zero time. And, second, my program is written in MATLAB, not Fortran.

Get the MATLAB code

Published with MATLAB® R2012b

## Easter

Easter Sunday is March 31 this year. Why? How is the date of Easter determined?

### Contents

#### Easter Day

Easter Day is one of the most important events in the Christian calendar. It is also one of the most mathematically elusive. In fact, regularization of the observance of Easter was one of the primary motivations for calendar reform centuries ago.

Easter is linked to the Jewish Passover. The informal rule is that Easter Day is the first Sunday after the first full moon after the vernal equinox. But the ecclesiastical full moon and equinox involved in this rule are not always the same as the corresponding astronomical events, which, after all, depend upon the location of the observer on the earth. The date varies between March 22 and April 25.

There is an easter program in Experiments with MATLAB. Let's check this year.

datestr(easter(2013))

ans =
31-Mar-2013


datestr(easter(2014))

ans =
20-Apr-2014


#### Don Knuth

The EXM program is based on the algorithm presented in the first volume of the classic series by Donald Knuth, The Art of Computer Programming. Knuth has used it in several publications to illustrate different programming languages. The task has often been the topic of an exercise in computer programming courses.

Knuth says that the algorithm is due to the Neapolitan astronomer Aloysius Lilius and the German Jesuit mathematician Christopher Clavius in the late 16th century and that it is used by most Western churches to determine the date of Easter Sunday for any year after 1582.

#### Metonic cycle

The earth's orbit around the sun and the moon's orbit around the earth are not in sync. It takes the earth about 365.2425 days to orbit the sun. This is known as a tropical year. The moon's orbit around the earth is complicated, but an average orbit takes about 29.53 days. This is known as a synodic month. The fraction

year = 365.2425;
month = 29.53;
format rat
ratio = year/month

ratio =
6444/521


is not the ratio of small integers. However, in the 5th century BC, an astronomer from Athens named Meton observed that the ratio is very close to 235/19.

format short
ratio
meton = 235/19

ratio =
12.3685
meton =
12.3684


In other words, 19 tropical years is close to 235 synodic months. This Metonic cycle was the basis for the Greek calendar and is the key to the algorithm for determining Easter.

#### MATLAB program

Here is the complete MATLAB program.

% addpath ../../exm
type easter

function dn = easter(y)
% EASTER  Date of Easter.
% EASTER(y) is the datenum of Easter in year y.
% Ref: Donald Knuth, The Art of Computer Programming,
%      Fundamental Algorithms, pp. 155-156.

% Golden number in 19-year Metonic cycle.
g = mod(y,19) + 1;

% Century number.
c = floor(y/100) + 1;

% Corrections for leap years and moon's orbit.
x = floor(3*c/4) - 12;
z = floor((8*c+5)/25) - 5;

% Epact.
e = mod(11*g+20+z-x,30);
if (e==25 && g>11 || e==24), e = e + 1; end

% Full moon.
n = 44 - e;
if n < 21, n = n + 30; end

% Find a Sunday.
d = floor(5*y/4) - x - 10;

% Easter is a Sunday in March or April.
d = n + 7 - mod(d+n,7);
dn = datenum(y,3,d);



#### References

[1] Donald E. Knuth, The Art of Computer Programming, Volume 1: Fundamental Algorithms (3rd edition), pp. 155-156, Addison-Wesley, 1997, ISBN 0-201-89683-4.

[2] Wikipedia, Primary article on Easter. <http://en.wikipedia.org/wiki/Easter>

[3] Wikipedia, Computus, details on calculation of Easter. <http://en.wikipedia.org/wiki/Computus>

[4] Wikipedia, Metonic cycle. <http://en.wikipedia.org/wiki/Metonic_cycle>

Get the MATLAB code

Published with MATLAB® R2012b

## Wilkinson’s Polynomials

Wilkinson's polynomials are a family of polynomials with deceptively sensitive roots.

### Contents

#### $P_n$

The Wilkinson polynomial of degree $n$ has as roots the integers from $1$ to $n$.

$$P_n(x) = \prod_{k=1}^{n} (x-k)$$

#### $P_{20}$

The usual value of $n$ is 20.

format compact
n = 20
syms x
P20 = prod(x-(1:n))

n =
20
P20 =
(x - 1)*(x - 2)*(x - 3)*(x - 4)*(x - 5)*(x - 6)*(x - 7)*(x - 8)*(x - 9)*(x - 10)*(x - 11)*(x - 12)*(x - 13)*(x - 14)*(x - 15)*(x - 16)*(x - 17)*(x - 18)*(x - 19)*(x - 20)


Expressed in this form, the polynomial is well behaved.

#### Coefficients

It gets interesting when $P_n(x)$ is expanded in the monomial basis, $\{x^j\}$.

P = expand(P20)

P =
x^20 - 210*x^19 + 20615*x^18 - 1256850*x^17 + 53327946*x^16 - 1672280820*x^15 + 40171771630*x^14 - 756111184500*x^13 + 11310276995381*x^12 - 135585182899530*x^11 + 1307535010540395*x^10 - 10142299865511450*x^9 + 63030812099294896*x^8 - 311333643161390640*x^7 + 1206647803780373360*x^6 - 3599979517947607200*x^5 + 8037811822645051776*x^4 - 12870931245150988800*x^3 + 13803759753640704000*x^2 - 8752948036761600000*x + 2432902008176640000


The coefficients are huge. The constant term is $20!$, and that's not even the largest coefficient.

C = flipud(coeffs(P)')

C =
1
-210
20615
-1256850
53327946
-1672280820
40171771630
-756111184500
11310276995381
-135585182899530
1307535010540395
-10142299865511450
63030812099294896
-311333643161390640
1206647803780373360
-3599979517947607200
8037811822645051776
-12870931245150988800
13803759753640704000
-8752948036761600000
2432902008176640000


#### Symbolic form

Let's have the Symbolic Toolbox generate LaTeX and cut and paste the result into the source for the blog.

L = latex(P);
% edit(L)


$$x^{20} - 210\, x^{19} + 20615\, x^{18} \\ - 1256850\, x^{17} + 53327946\, x^{16} - 1672280820\, x^{15} \\ + 40171771630\, x^{14} - 756111184500\, x^{13} + 11310276995381\, x^{12} \\ - 135585182899530\, x^{11} + 1307535010540395\, x^{10} - 10142299865511450\, x^9 \\ + 63030812099294896\, x^8 - 311333643161390640\, x^7 + 1206647803780373360\, x^6 \\ - 3599979517947607200\, x^5 + 8037811822645051776\, x^4 - 12870931245150988800\, x^3 \\ + 13803759753640704000\, x^2 - 8752948036761600000\, x + 2432902008176640000$$

With the monomial basis the Wilkinson polynomial does not look so innocent. The spacing between the roots is tiny relative to these coefficients. But the toolbox can easily find the roots $1$ to $20$ with no error.

Z = sort(solve(P))'

Z =
[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]


#### Double precision

Convert the symbolic form to double precision floating point.

format long e
p = sym2poly(P)';
c = flipud(coeffs(poly2sym(p))')

c =
1
-210
20615
-1256850
53327946
-1672280820
40171771630
-756111184500
11310276995381
-135585182899530
1307535010540395
-10142299865511450
63030812099294896
-311333643161390656
1206647803780373248
-3599979517947607040
8037811822645051392
-12870931245150988288
13803759753640704000
-8752948036761600000
2432902008176640000


Five of the coefficients cannot be represented in double precision format. They have been perturbed.

delta = C-c

delta =
0
0
0
0
0
0
0
0
0
0
0
0
0
16
112
-160
384
-512
0
0
0


#### Double precision roots

The roots of the polynomial with the double precision coefficients are no longer 1:20. The largest perturbations occur to the roots at 16 and 17.

z = sort(roots(p));
fmt = '%25.16f\n';
fprintf(fmt,z)

       0.9999999999997972
       2.0000000000475513
       2.9999999973842115
       4.0000000600349486
       4.9999992200080809
       6.0000064898423089
       6.9999645183068031
       8.0001214537649652
       8.9997977879058215
       9.9997884273944386
      11.0021078422257670
      11.9944660326764140
      13.0076639325972590
      13.9958171375909190
      14.9955431043450050
      16.0114122070902880
      16.9888815075391190
      18.0060272186900080
      18.9981637365860050
      20.0002393259700360


#### Wilkinson's perturbation

Wilkinson made a different perturbation, simulating a roundoff error on his machine in the second coefficient, the $-210$. He changed this coefficient by $2^{-23}$ and discovered that several of the roots were driven into the complex plane.
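
The experiment is easy to repeat in double precision, keeping in mind that the double precision coefficients already carry the representation errors shown below:

syms x
p = sym2poly(prod(x - (1:20)));
p(2) = p(2) - 2^(-23);     % coefficient of x^19 becomes -210 - 2^-23
z = roots(p);
max(abs(imag(z)))          % several roots move well off the real axis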

I am not sure $^\dagger$ about the sign of Wilkinson's perturbation, so let's do both. Here is a movie, an animated GIF, of the root locus in the complex plane produced by perturbations like his. It shows the trajectories of the roots from $9$ to $20$ of

$$P_{20}(x) - \alpha x^{19}$$

as we vary $\alpha$ over the range

$$\alpha = \pm 2^{-k}, \quad k = 23, \ldots, 36$$

The roots $1$ to $8$ stay real for perturbations in this range. Wilkinson's result is at the end of either the red or the black trajectories.

#### Sensitivity

With a little calculus, we can get an analytic expression for the sensitivity of the roots to Wilkinson's perturbation. Regard each root $x$ as a function of $\alpha$ and differentiate the following equation with respect to $\alpha$.

$$P_{20}(x) - \alpha x^{19} = 0$$

At $\alpha = 0$, we have

$$\frac{dx}{d\alpha} = \frac{x^{19}}{P_{20}'(x)}$$

This is easy to evaluate.

pprime = sym2poly(diff(P20));
xdot = zeros(n,1);
for k = 1:n
xdot(k) = k^19/polyval(pprime,k);
end
format short e
xdot

xdot =
  -8.2206e-18
   8.1890e-11
  -1.6338e-06
   2.1896e-03
  -6.0774e-01
   5.8248e+01
  -2.5424e+03
   5.9698e+04
  -8.3916e+05
   7.5994e+06
  -4.6378e+07
   1.9894e+08
  -6.0434e+08
   1.3336e+09
  -2.1150e+09
   2.4094e+09
  -1.9035e+09
   9.9571e+08
  -3.0901e+08
   4.3100e+07


The sensitivities vary over 27 orders of magnitude, with the largest values again at $16$ and $17$. Here is a log plot.

This is a different perturbation than the one from symbolic to double, but the qualitative effect is the same.

#### Note added March 10, 2013.

$^\dagger$ I am back in the office where I have access to Wilkinson's The Algebraic Eigenvalue Problem. This polynomial is discussed, among other places, on pages 417 and 418. The perturbation he makes is negative, so the coefficient of $x^{19}$ becomes $-210 - 2^{-23}$ and the resulting roots are at the ends of our red trajectories.

Get the MATLAB code

Published with MATLAB® R2012b

## Jim Wilkinson

I have already made several posts about gatlin, the image distributed with MATLAB of the organizing committee for the Gatlinburg III conference. From the MATLAB point of view, the first man on the left in the photo, J. H. Wilkinson, is by far the most important. From the early days of computers in the 1950s until his death in 1986, Wilkinson was the world's authority on matrix computation. His research on eigenvalue algorithms and their implementation in Algol led directly to EISPACK, the mathematical foundation for the first MATLAB.

### Contents

load gatlin
image(X)
colormap(map)
axis image
axis off


#### Early Life

James Hardy Wilkinson was born in 1919 in Strood, England, a small town east of London, on the way towards Canterbury and Dover. The family owned a small dairy.

When Jim was only eleven years old he received a scholarship to Sir Joseph Williamson's Mathematical School in nearby Rochester. When he was barely 17, two years earlier than average, he entered Cambridge on a Trinity Major Scholarship in Mathematics. At Cambridge, his tutors included the famous mathematicians G. H. Hardy, J. E. Littlewood and A. S. Besicovitch. Besicovitch characterized Jim's performance on the Tripos exams, which are the culmination of the undergraduate mathematics program at Cambridge, as "easily at the top of the First Class".

#### World War II

In the early years of World War II Wilkinson joined a team of mathematicians in an office at Cambridge. They computed trajectories of artillery shells using mechanical desk calculators. Later he transferred to a laboratory at Fort Halstead to work on the thermodynamics of explosions.

At Fort Halstead, he met Heather Ware, a mathematician from King's College. They soon married and lived happily together for over 40 years.

At the end of the war, Jim transferred to the National Physical Laboratory, where he would remain for the rest of his career. NPL is located just west of London, not far from Heathrow airport, and is like a British version of the American National Bureau of Standards.

#### Leslie Fox

One of Jim's colleagues at NPL, who became his lifelong friend, was Leslie Fox. Leslie was a year older than Jim, had been educated at Oxford, and returned there as a professor after eleven years at NPL. After Jim's death, Leslie wrote a 40 page biographical memoir for the British Royal Society. I am using that memoir as a reference for this blog.

I bring up Fox at this point because I was so impressed by his remark in the memoir that in their early days at NPL the two of them, together with E. T. Goodwin, solved an 18-by-18 set of simultaneous linear equations using desk computing equipment. It took them two weeks! It was one of the reasons they decided to build an automatic computer.

#### Pilot Ace

The famous mathematician Alan Turing was at NPL at the time and proposed the construction of an elaborate machine known as the Automatic Computing Engine, or ACE. Wilkinson became an assistant to Turing. After some disagreements with the Laboratory management, Turing departed for the University of Manchester and Jim was put in charge of the project. The team built only a portion of Turing's design, the Pilot ACE. Jim actually did some of the soldering. The Pilot ACE proved to be a useful machine on its own and was used for several years. A commercial version was produced by the English Electric Company under the name DEUCE.

Here is Jim at the console of the Pilot Ace, probably at its public debut in November 1950.

#### Ann Arbor

Beginning in the late 1950's, the University of Michigan offered a summer short course in computing. At first it was just one course, but it soon grew to be several courses in different aspects of computing -- programming, compilers, systems, numerical methods.

Bob Bartels was both a professor of mathematics and director of the University of Michigan computing center. Bob invited eminent numerical analysts to give the lectures in the numerical methods short course. Starting around 1958, Alston Householder and Jim Wilkinson were regular lecturers in the course and came to Ann Arbor for two weeks every summer.

In 1965, Bartels decided that running the computing center had become a full time job and that he had to give up his regular teaching duties. So the university recruited someone to replace him and I became an Assistant Professor of Mathematics at the University of Michigan in the fall of 1966. I also took over organizing the summer numerical methods short course. I was pleased that Alston and Jim continued to participate. Jim's notes for the Michigan summer course became the basis for his now classic 1965 book, The Algebraic Eigenvalue Problem.

Wilkinson and Householder knew all the restaurants in Ann Arbor, including the Pretzel Bell and the Old German, where the short course lecturers would usually order beer by the pitcher. Olga Taussky-Todd, who accompanied her husband John Todd to these dinners, had a fairly reserved manner. One day she planned to take a visiting friend to the Pretzel Bell for lunch and she asked Jim, "Is it possible to order beer in more modest quantities?" Jim replied, "Yes, but it has never been one of my ambitions."

#### Stanford

I took my first numerical analysis course in grad school at Stanford from George Forsythe in 1962-63. There were only a handful of students in the class. Forsythe was editor of the Prentice-Hall Series in Automatic Computation and was preparing Wilkinson's book Rounding Errors in Algebraic Processes for publication. He gave us the galley proofs to both read as course notes and check for typographical errors. Forsythe was a fastidious editor, so I suspect the class did not find any typos that he had not already caught.

Sometime after I left Stanford in 1965 Wilkinson was appointed to a part time professorship there. He visited perhaps one quarter a year. I'm not sure if he ever taught a regular course. He was a terrific lecturer for a seminar or a short course and wonderful in one-on-one discussions, but he was never anxious to be a full time academic.

#### Handbook

Much of Wilkinson's research was published in a series of papers in the journal Numerische Mathematik in the late 1960's. The papers described algorithms for matrix computation and included programs in Algol 60, the International Algorithmic Language. A number of other authors contributed papers and programs to the series, which was known as the Handbook Series.

All of the papers in the Handbook Series were eventually collected in an actual handbook, edited by Wilkinson and Christian Reinsch, and published by Springer-Verlag in 1971. Even today this Wilkinson and Reinsch handbook remains an important resource for work in matrix computation. The Algol codes provide a readable description of the techniques that lie at the heart of the matrix library in MATLAB.

#### Argonne

Argonne National Laboratory is located in a forest preserve 25 miles southwest of Chicago. A herd of white deer roam freely on the grounds. The laboratory carries out a program of primarily unclassified research for the U.S. Department of Energy on basic science, energy, transportation, environmental issues, computing and national security.

Very few people could actually run the programs in the Handbook because very few computer centers had Algol compilers. Around 1970, the Mathematics and Computer Science Division, MCS, at Argonne began a project to translate a portion of the Handbook into Fortran, and to study the subsequent testing, distribution and support of the software. The project involved the subset of the Handbook devoted to eigenvalue computation because most computer center libraries already had satisfactory linear equation solvers, but few had the new eigenvalue methods that Wilkinson and colleagues had developed.

The project was called NATS, National Activity to Test Software. It was supported by the National Science Foundation and what was then called the Atomic Energy Commission. The stated objective of the project was research into the software development, testing and distribution process. The software itself, which became known as EISPACK, for Eigensystem Package, was just a byproduct. Simply writing good software wasn't regarded as supportable research. It wasn't in 1970, and still isn't in 2013.

Jim was an important participant in the NATS/EISPACK project. He visited Argonne at least once a year, usually after the Michigan summer course. He didn't write any Fortran code, but he was an active consultant in the process of rewriting what was mostly his Algol. And, the testing process in those days was more complicated because there were so many machines with different arithmetic behavior.

If you were involved in mathematical software back then, Argonne was the place to visit. I moved to the University of New Mexico in 1972 and came to Argonne for most of every summer. (When I walked in, Jim Boyle would say, "Moler is here, it must be June.") There were always lots of international visitors. In addition to Wilkinson, there were many other Brits. MCS had a close connection with NAG, the Numerical Algorithms Group in the UK. At the annual picnic one summer, we got the NAG chaps to play softball. That resulted in a challenge to try cricket and they brought the necessary equipment with them the next summer. Jim was a cricket connoisseur. He gave us a lecture on cricket strategy at the same blackboard he is using in the photo above. A day or two later at the next picnic, he continued the lecture. Jack Dongarra found this faded color photo in one of his shoe boxes.

The EISPACK project was followed by the LINPACK project, also centered at Argonne. These two projects motivated the first MATLAB. Wilkinson wasn't directly involved in MATLAB, but he certainly was the primary source of the algorithms that formed its original mathematical core.

#### References

[1] Leslie Fox, James Hardy Wilkinson, Biographical Memoirs of Fellows of the Royal Society 33 (1987): 669-708. <http://rsbm.royalsocietypublishing.org/content/33/670>

[2] Nick Higham's Gallery of Wilkinson Photos, <http://www.maths.manchester.ac.uk/~higham/photos/wilkinson/index.htm>.

[3] History of Householder Symposia, <http://sites.uclouvain.be/HHXIX/SymposiumHistory>.


## Hilbert Matrices

The inverse Hilbert matrix, invhilb, has recently made surprise appearances in Cody, the programming game on MATLAB Central, and one of Ned's posts in the MATLAB Spoken Here blog. Inverse Hilbert matrices had nearly been forgotten in MATLAB. Their comeback is due to the sign pattern of their entries. But I want to take you back to their original role demonstrating ill conditioning in numerical calculation.

### Contents

T = invhilb(8);
imagesc(sign(T))
axis image, axis off
colormap(copper)


#### Hilbert Matrix

The Hilbert matrix is the first matrix I ever knew by name. I met it in my first numerical analysis course, when I was a junior at Caltech in 1959. The matrix comes from least squares approximation by monomials, $x^{j}$, on the unit interval $[0,1]$. The elements of $H_n$ are inner products

$$h_{i,j} = \int_0^1 x^{i-1} x^{j-1} dx = \frac{1}{i+j-1}$$

Here is the code to generate the 6-by-6 Hilbert matrix in a single array operation. Similar code is used in the function hilb that is distributed with MATLAB. I am using single precision so we can see roundoff error in matrices that print nicely in the width of one blog page.

format compact
format long
n = 6
J = 1:n;
J = J(ones(n,1),:);
I = J';
E = single(ones(n,n));
H = E./(I+J-1)

n =
6
H =
1.0000000   0.5000000   0.3333333   0.2500000   0.2000000   0.1666667
0.5000000   0.3333333   0.2500000   0.2000000   0.1666667   0.1428571
0.3333333   0.2500000   0.2000000   0.1666667   0.1428571   0.1250000
0.2500000   0.2000000   0.1666667   0.1428571   0.1250000   0.1111111
0.2000000   0.1666667   0.1428571   0.1250000   0.1111111   0.1000000
0.1666667   0.1428571   0.1250000   0.1111111   0.1000000   0.0909091


The monomials $x^{j}$ are nearly linearly dependent on $[0,1]$. So the Hilbert matrices are nearly singular. The condition of $H_6$ is comparable with single precision roundoff error and the condition of $H_{12}$ is comparable with double precision roundoff error.

format short e
[1/cond(hilb(6)) eps(single(1))]
[1/cond(hilb(12)) eps(double(1))]

ans =
6.6885e-08  1.1921e-07
ans =
5.7268e-17   2.2204e-16


The $l_2$ condition number, $\kappa$, of $H_n$ grows exponentially.

$$\kappa(H_n) \approx 0.01133e^{3.49n}$$
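
The estimate is easy to check against computed condition numbers. This little loop is my own addition, not part of the original experiment; beyond n = 12 or so the computed cond is itself a casualty of roundoff.

for n = 4:2:12
   fprintf('%4d   %12.4e   %12.4e\n', n, cond(hilb(n)), 0.01133*exp(3.49*n))
end

The agreement is already respectable at n = 4 and improves as n grows.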

#### Inverse Hilbert Matrix

A Hilbert matrix qualifies as a Cauchy matrix, which is a matrix whose entries are of the form

$$a_{i,j} = \frac{1}{x_i - y_j}$$

A classic Knuth homework problem or the Wikipedia entry on Cauchy matrices (see References) shows how it is possible to express the elements of the inverse of a Cauchy matrix in terms of products involving the $x_i$'s and $y_j$'s. For a Hilbert matrix, those products become binomial coefficients.

Here is a program that generates the inverse Hilbert matrix using doubly nested for loops and many scalar evaluations of binomial coefficients.

type invh

function T = invh(n)
% Inverse Hilbert matrix from the binomial coefficient
% formula for the inverse of a Cauchy matrix.
T = zeros(n,n);
for i = 1:n
   for j = 1:n
      T(i,j) = (-1)^(i+j)*(i+j-1)*nchoosek(n+i-1,n-j)* ...
         nchoosek(n+j-1,n-i)*nchoosek(i+j-2,i-1)^2;
   end
end
end



#### An Old Program

One of the first programs that I ever wrote generated the inverse Hilbert matrix from these binomial coefficients. It was written in absolute numeric machine language for the Burroughs 205 Datatron at Caltech around 1959. I needed to avoid time-consuming floating point arithmetic, so I used recursion relationships among the coefficients. Here is the core of the MATLAB version of that old program. For a long time this has been distributed with MATLAB as invhilb. This is the function that the Cody participants discovered.

type invhilb_code

T = zeros(n,n);
p = n;
for i = 1:n
   r = p*p;
   T(i,i) = r/(2*i-1);
   for j = i+1:n
      % Integer recurrence along row i.
      r = -((n-j+1)*r*(n+j-1))/(j-1)^2;
      T(i,j) = r/(i+j-1);
      T(j,i) = r/(i+j-1);
   end
   % Recurrence for the next diagonal term.
   p = ((n-i)*p*(n+i))/(i^2);
end



We can use either code to generate the exact inverse of $H_6$.

T = invhilb(6)

T =
36        -630        3360       -7560        7560       -2772
-630       14700      -88200      211680     -220500       83160
3360      -88200      564480    -1411200     1512000     -582120
-7560      211680    -1411200     3628800    -3969000     1552320
7560     -220500     1512000    -3969000     4410000    -1746360
-2772       83160     -582120     1552320    -1746360      698544


Since the elements of a Hilbert matrix are rational fractions, the elements of its inverse must also be rational fractions. But it turns out that the denominators of all the fractions are equal to one, so, as you can see, the inverse has integer entries. The integers are large, hence the large condition number.
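
Here is a quick sanity check (mine, not from the original post). For modest n the integers are small enough to be represented exactly in double precision, so the test is exact.

T = invhilb(10);
isequal(T, round(T))   % logical 1: every entry is an integer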

#### Roundoff Error

Let's invert T and see how close we get to H.

format long
inv(single(T))

Warning: Matrix is close to singular or badly scaled. Results may be inaccurate.
RCOND =  3.380340e-08.
ans =
1.0102043   0.5085138   0.3406332   0.2563883   0.2056789   0.1717779
0.5085138   0.3404373   0.2560914   0.2053309   0.1714057   0.1471225
0.3406332   0.2560914   0.2052232   0.1712378   0.1469208   0.1286576
0.2563883   0.2053309   0.1712378   0.1468577   0.1285564   0.1143121
0.2056789   0.1714057   0.1469208   0.1285564   0.1142728   0.1028457
0.1717779   0.1471225   0.1286576   0.1143121   0.1028457   0.0934704


We can just recognize one or two digits of H. Since the condition number of T is comparable with the single precision roundoff level, we might have expected to lose all the digits. Roundoff error observed in actual matrix computation is usually not as bad as estimates based on condition numbers predict.

#### My Project

Over 50 years ago, after my first numerical analysis course, where I was introduced to matrix computation by Prof. John Todd, I did an individual undergrad research project under his direction. Part of the project involved studying the roundoff error in matrix inversion. First, I wrote a matrix inversion program. Then, in order to assess roundoff error, I generated Hilbert matrices, inverted them with my program, and compared the computed inverses with the exact inverses generated by the program that I've just described. If I'd had MATLAB at the time, the project would have gone something like this.

n = 6;
H = single(hilb(n));
X = inv(H);
T = single(invhilb(n));
relerr = norm(X-T,inf)/norm(T)

Warning: Matrix is close to singular or badly scaled. Results may be inaccurate.
RCOND =  3.570602e-08.
relerr =
0.0470157


(My old matrix inversion program would not have estimated condition and issued a warning message.)

The Datatron floating point arithmetic had eight decimal digits, so I used values of n up to 7 and reported the observed relerr's as the result of roundoff error from matrix inversion by Gaussian elimination for this particular badly conditioned matrix. Case closed. Project grade: A.

#### Project Reexamined

I went on to grad school at Stanford and met George Forsythe and later Jim Wilkinson. As I came to appreciate the work that Wilkinson was doing, I realized that in my project at Caltech I had made a subtle but important mistake. The same mistake is still being made today by others. I had not actually inverted the Hilbert matrix. I had inverted the floating point approximation to the Hilbert matrix. It turns out that the initial step -- converting the fractions that are the elements of the Hilbert matrix to floating point numbers -- has a larger effect on the computed inverse than the inversion process itself. Even if my inversion program could somehow compute the exact inverse of the matrix it is given, it could not match the result generated by my theoretical inverse Hilbert matrix program.

I should have inverted the inverse, because then no roundoff error would be needed to generate the input. Since the elements of the starting matrix are integers, they can be represented exactly in floating point, at least if n is not too large. I should have interchanged the roles of T and H in my project.

n = 6;
T = single(invhilb(n));
X = inv(T);
H = single(hilb(n));
relerr = norm(X-H,inf)/norm(H)

Warning: Matrix is close to singular or badly scaled. Results may be inaccurate.
RCOND =  3.380340e-08.
relerr =
0.0266826


This confirms what Wilkinson had been saying, and proving. The roundoff error caused by inverting a matrix using Gaussian elimination with partial pivoting is comparable with -- in this case actually smaller than -- the error caused by rounding the data in the first place.

#### Checkerboard

You can see the +/- sign pattern of the elements in the inverse Hilbert matrix by examining T or the (-1)^(i+j) multiplying the binomial coefficients in invh. That's what caught the attention of the folks playing Cody. I have no idea how they came across it.

#### References

[1] Donald E. Knuth, The Art of Computer Programming, vol. 1, Problems 1.2.3-41 and 1.2.3-45, pp. 36-37, Addison-Wesley, 1968.

[2] Wikipedia entry on Cauchy matrix, <http://en.wikipedia.org/wiki/Cauchy_matrix>.


## Reduced Penultimate Remainder

I investigated the reduced penultimate remainder algorithm in an undergraduate research project under Professor John Todd at Caltech in 1961. I remember it today for two reasons. First, I learned what penultimate means. And second, it is the most obscure, impractical algorithm that I know. I suspect none of my readers have ever heard of it.

### Contents

#### Penultimate

Penultimate means "next to last". November is the penultimate month of the year. In the fifty-plus years since I learned its meaning, I've never had a chance to use penultimate in everyday conversation, or in my writing. I think of it as British English. Is it?

#### Reduced Penultimate Remainder

Start with two polynomials, both with leading coefficient equal to one. When you divide one polynomial into the other using long division, you usually stop when the degree of the remainder is one less than the degree of the divisor. But instead, stop one step earlier, with the penultimate remainder. Then scale the leading coefficient of the remainder to be equal to one. This is the reduced penultimate remainder. If the original divisor had actually been an exact factor of the dividend, then the reduced remainder would be equal to the divisor, so exact factors are fixed points of the process.

#### An Example

Our example involves dividing $q(x) = x^2 - 10x + 5$ into $p(x) = x^5 - 15x^4 + 85x^3 - 225x^2 + 274x - 120$. Here is a scan of one step of the long division carried out by hand. (I don't know enough LaTeX to be able to do the typesetting.)
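
Worked out in symbols (my typesetting, not the original scan; it reproduces the result quoted next), the successive subtractions are

$$p(x) - x^3 q(x) = -5x^4 + 80x^3 - 225x^2 + 274x - 120$$

$$p(x) - (x^3 - 5x^2) q(x) = 30x^3 - 200x^2 + 274x - 120$$

$$p(x) - (x^3 - 5x^2 + 30x) q(x) = 100x^2 + 124x - 120$$

Stopping here, one step short of the usual degree-one remainder, gives the penultimate remainder; dividing through by its leading coefficient, 100, reduces it.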

The reduced penultimate remainder is $r(x) = x^2 + 1.24x - 1.20$. Since $r(x)$ is not close to $q(x)$, we can conclude that $q(x)$ is not close to a factor of $p(x)$ and consequently that the roots of $q(x)$ are not close to the roots of $p(x)$. I will return to this example later.

#### Reduced Penultimate Remainder Algorithm

If the divisor happens to be a factor of the dividend, then it is a fixed point of the reduced penultimate remainder process. In 1943, Shi-nge Lin suggested that, starting with an initial approximation that is nearly a factor, repeated application of the RPR process might converge to a factor and hence could be used to find roots of polynomials. I do not know who Shi-nge Lin was and have not actually seen his original paper.

#### A. C. Aitken

A. C. Aitken (1895-1967) wrote papers about Lin's idea in 1952 and 1955. Aitken was born in New Zealand, educated in Scotland, and became a professor of mathematics at the University of Edinburgh. He made many contributions to many areas of mathematics. His prodigious memory made some aspects of his life unpleasant. He served with the New Zealand forces in World War I and his memories of the battle at Gallipoli haunted him throughout his life. In numerical analysis he is best known for the delta-squared sequence acceleration technique.
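
For the record, the delta-squared formula takes a linearly convergent sequence $x_n$ and forms

$$\hat{x}_n = x_n - \frac{(x_{n+1} - x_n)^2}{x_{n+2} - 2x_{n+1} + x_n}$$

which usually approaches the same limit much more quickly.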

John Todd was the professor at Caltech who introduced me to numerical analysis, matrices, and computing. I believe that Todd knew Aitken personally and he was certainly familiar with his work. So he suggested I investigate the method and I read Aitken's papers. I wrote programs in machine language for the Burroughs 205 Datatron. Unfortunately, about all I can remember about the project is the name of the algorithm.

Alston Householder (1904-1993) is one of my heroes. I knew him quite well. He's the fourth guy from the left in our photo of the organizing committee of the 1964 Gatlinburg Conference on Numerical Algebra.

load gatlin
image(X)
colormap(map)
axis off


I just learned that Householder wrote a paper about the Reduced Penultimate Remainder algorithm in 1970, ten years after I did my undergraduate research project. I didn't know about that paper until I came across it while writing this blog.

Householder makes some very candid comments about the earlier papers by Lin and Aitken. He says "Lin's development is complicated and unsuggestive". He says "Aitken's derivation of the conditions for convergence is very sketchy and leaves much to the reader." Householder's paper was published in the SIAM Journal on Applied Mathematics in 1970. I don't think we could make comments like these in SIAM papers today.

Aitken had introduced a device called the catalytic multiplier and Householder tries to explain it. I do not understand his paper at all. Near the end of the paper he says "In notation elsewhere used by the author ..." That is not very helpful. I'm afraid that in this particular paper Alston is no clearer than the authors he is criticizing. I can say this in a blog.

#### MATLAB Code

type rpr

function q = rpr(p,q)
% RPR Reduced Penultimate Remainder.
% q = rpr(p,q), for polynomials p and q.
% Assume p(1) = 1, q(1) = 1.
for c = 1:15
   % Polynomial long division.
   % Stop when degree of r = degree of q.
   n = length(p);
   m = length(q);
   r = p(1:m);
   for k = m+1:n
      r = r - r(1)*q;
      r = [r(2:end) p(k)];
   end
   % Scale so the leading coefficient is one.
   q = r/r(1);
   disp(q)
end



#### Example, continued

The example I used in the hand calculation above is the polynomial whose roots are the integers from one to five.

p = charpoly(diag(1:5))

p =
1   -15    85  -225   274  -120


The starting guess is a deliberately bad approximation to a quadratic factor.

q = [1 -10 5]

q =
1   -10     5


Let's run 15 iterations.

format long
format compact
rpr(p,q)

1.000000000000000   1.240000000000000  -1.200000000000000
1.000000000000000  -1.067114979620489   0.318854992571954
1.000000000000000  -1.723550954295960   0.821587108699062
1.000000000000000  -2.062229096576079   1.106543069892922
1.000000000000000  -2.272884387512085   1.294527998591067
1.000000000000000  -2.417143826065064   1.428233837285976
1.000000000000000  -2.522010435111559   1.527880806063684
1.000000000000000  -2.601434375494343   1.604615444108659
1.000000000000000  -2.663426234246922   1.665180563853953
1.000000000000000  -2.712940049546721   1.713920778923779
1.000000000000000  -2.753213571017107   1.753767759646395
1.000000000000000  -2.786455813893868   1.786771702201661
1.000000000000000  -2.814227004785662   1.814408345663866
1.000000000000000  -2.837661124583663   1.837765841878627
1.000000000000000  -2.857602508367706   1.857663278238952
ans =
1.000000000000000  -2.857602508367706   1.857663278238952


It's converging, slowly, to the factor $x^2 - 3x + 2$ with roots $1$ and $2$. It takes about 300 iterations to reach this factor to double precision accuracy.
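
One way to verify that (a quick sketch of my own, not part of the original project) is to feed each result back in as the new starting guess. With the disp in rpr silenced, twenty passes of fifteen iterations each supply the 300:

q = [1 -10 5];
for pass = 1:20     % 20 passes of 15 iterations each = 300 iterations
   q = rpr(p,q);
end
q                   % essentially [1 -3 2], the factor x^2 - 3x + 2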

#### Another Example

One of my favorite polynomials is $p(x) = x^3 - 2x - 5$. Its only real root is near $2.09$. Let's try to find it.

p = [1 0 -2 -5]
q = [1 -2]
rpr(p,q)

p =
1     0    -2    -5
q =
1    -2
1.000000000000000  -2.500000000000000
1.000000000000000  -1.176470588235294
1.000000000000000   8.117977528089890
1.000000000000000  -0.078245352175702
1.000000000000000   2.507676417722349
1.000000000000000  -1.165924862052266
1.000000000000000   7.804948516596162
1.000000000000000  -0.084864830447043
1.000000000000000   2.509035084827172
1.000000000000000  -1.164074683720088
1.000000000000000   7.752777799980693
1.000000000000000  -0.086050279678108
1.000000000000000   2.509290208665588
1.000000000000000  -1.163727809437372
1.000000000000000   7.743083431952472
ans =
1.000000000000000   7.743083431952472


No luck; it's not going to converge. The real root repels any attempt to find it with this iteration.

#### Conclusions

Finding polynomial roots is not a very important task and this algorithm is not very good at it anyway. Its convergence properties are too unreliable. So for me, this is just a quirky memory. If anybody else has heard of the reduced penultimate remainder algorithm, please contribute a comment.

#### References

[1] A. C. AITKEN, Studies in practical mathematics. VII. On the theory of factorizing polynomials by iterated division, Proc. Roy. Soc. Edinburgh Sect. A, 63 (1952), pp. 326-335.

[2] A. C. AITKEN, Note on the acceleration of Lin's process of iterated penultimate remainder, Quart. J. Mech. Appl. Math., 8 (1955), pp. 251-258.

[3] SHI-NGE LIN, A method for finding roots of algebraic equations, J. Math. Phys. Mass. Inst. Tech., 22 (1943), pp. 60-77.

[4] ALSTON S. HOUSEHOLDER, On the penultimate remainder algorithm and the catalytic multiplier, SIAM J. Appl. Math., 19 (1970), pp. 668-671.


## George Forsythe

Tuesday, January 8, 2013, would have been George Forsythe's 96th birthday. He passed away in 1972 at the age of 55. A pioneer in the establishment of computer science as an intellectual discipline, he was my Ph.D. thesis advisor, colleague, and friend.

### Contents

#### Early Life

George Elmer Forsythe was born in State College, Pennsylvania, in 1917 and moved to Ann Arbor, Michigan, when he was a small boy. He went to Swarthmore as an undergraduate, where he majored in mathematics, and then to Brown for a Ph.D., also in mathematics, which he received in 1941.

Forsythe served in the Air Force in World War II. Working there as a mathematician and meteorologist, he developed some of the first methods of numerical weather prediction. He is the coauthor of an early textbook on numerical meteorology.

#### Institute for Numerical Analysis

After the war, the National Bureau of Standards, with funding from the Office of Naval Research, sponsored the development of two electronic computers. The one on the west coast, at UCLA, was known as the SWAC, for Standards Western Automatic Computer.

The INA, the Institute for Numerical Analysis, was established at UCLA in 1947 to support the SWAC. Forsythe was one of its principal members.

Our photo, taken at UCLA sometime in the early 1950s, shows Forsythe in the center. Olga Taussky-Todd is the only woman in the photo and her husband, John Todd, is in the back row peering out between George and Olga.

The other institute members in the front row are Mark Kac, who would later ask if one could hear the shape of a drum; J. Barkley Rosser, who created the amazing rosser test matrix in MATLAB; Wolfgang Wasow, who would later coauthor a seminal book with Forsythe about the numerical solution of PDEs; and Magnus Hestenes, coinventor of the conjugate gradient method.

#### The Todds and Caltech

The INA was dissolved in 1953 as collateral damage from political scandals at the Bureau of Standards involving McCarthyism and congressional hearings about battery additives. The Todds moved a few miles across Los Angeles to Pasadena and Caltech and Forsythe moved a few hundred miles north to Palo Alto and Stanford.

I enrolled at Caltech in 1957 and a couple of years later was taking courses from John Todd. He introduced me to matrices, numerical analysis and computing and set me in the direction that brought me to where I am today. Olga would occasionally guest lecture in John's classes on her specialty, matrix theory.

One quarter in my junior year I got a B- in advanced calculus and an A+ in numerical analysis. I never looked back. When it was time to go to grad school, Todd recommended Stanford and his friend George Forsythe. I didn't apply anywhere else.

#### Stanford Computer Science

In 1961 Forsythe was a math professor at Stanford. Computing was just a master's degree specialization within the math department. I am not sure if it was even yet called "Computer Science". But Forsythe and others could see the need for a full-fledged academic program, department, and discipline. They wanted to see Stanford appoint faculty and offer courses in new fields such as artificial intelligence and programming languages that certainly were not research mathematics.

So in January, 1965, the Stanford Computer Science Department was born. Forsythe was its founding department chairman and remained head until his death seven years later.

Today Stanford Computer Science is famous as the incubator for Google, Yahoo, and many other not-so-famous Silicon Valley commercial successes. But at their 50th anniversary celebration, Stanford's president John Hennessy, himself a CS professor, reminded his audience that the department also deserves to be known for some of its world-renowned academics who did not start companies, including John McCarthy and Donald Knuth. These professors were recruited by George Forsythe.

#### Computer Science Discipline

Forsythe not only started Computer Science at Stanford, he was one of its advocates as an academic and intellectual discipline nationally and internationally. He served as president of the ACM early in its history. He was an editor of several journals. He was asked to advise other universities. He wrote influential articles and letters.

Shortly after Forsythe's death, Donald Knuth wrote, "It is generally agreed that he [Forsythe], more than any other man, is responsible for the rapid development of computer science in the world's colleges and universities."

#### Gatlinburg and Householder Meetings

load gatlin
image(X)
colormap(map)
axis image
axis off


This photo is one of the very first images distributed with MATLAB. The photo, taken in 1964, shows the organizing committee of the Gatlinburg conferences on numerical algebra. All of these men were to have some influence on the development of MATLAB. From left to right are:

• James Wilkinson, matrix computation
• Wallace Givens, plane rotations
• George Forsythe
• Alston Householder, elementary reflections
• Peter Henrici, complex algorithms
• F. L. Bauer, Algol 60

Householder was head of computing at Oak Ridge National Lab and a professor at the University of Tennessee in Knoxville. He organized the first two conferences on numerical algebra in the early 1960s. The meetings were held in Gatlinburg, Tennessee, which was then a lovely, small resort town in the Smoky Mountains.

These were intensive, week-long research workshops with attendance by invitation only from this committee. When it came to organizing the third Gatlinburg conference in 1964, Forsythe wanted to invite one of his graduate students -- me. But the other committee members objected; students had not been invited to the previous meetings. I understand that Forsythe said if Moler could not come, neither would he. So I was invited. I was the only student there, among 40 or 50 of the most important people in my field. It was one of the most significant events of my professional career. And I was only 25 years old.

An Oak Ridge photographer took 8-by-10 black-and-white glossy photos. I kept the one of the organizing committee. Twenty-five years later we scanned it in to become a MATLAB, and Internet, classic.

After Householder retired, the committee evolved, and the locations changed. The meetings are now called the Householder Conferences and are held every four years. The locations alternate between North America and Europe. Attendance is still by invitation only, but anybody can apply. The next conference will be Householder XIX in June 2014, in Belgium, hosted by Universite catholique de Louvain and the Katholieke Universiteit Leuven.

#### Prentice-Hall Series

Forsythe served as advising editor of the prestigious Prentice-Hall Series in Automatic Computation. He solicited manuscripts from many influential authors. With more than 75 titles published in the early years, this series helped define the Computer Science discipline. The distinctive red and white dust jackets can still be seen on many library shelves.

#### Our Books

Two of the books in the Prentice-Hall series were by Forsythe and Moler, and by Forsythe, Malcolm and Moler. Before I was known for MATLAB, I could meet people at conferences and say, "You know those books by Forsythe and somebody, well I'm that somebody."

The Forsythe and Moler book, "Computer Solution of Linear Algebraic Systems", evolved from notes for the second quarter of a two-quarter basic numerical analysis course that Forsythe taught in the early 1960's and that I taught in 1965. The first quarter used a text by Peter Henrici that covered all of the material in our syllabus except matrix computation. The notes that George had written for the second quarter eventually became the book.

I wrote programs DECOMP and SOLVE that implement column-oriented Gaussian elimination in three languages, Algol 60, Fortran, and PL/1. When I was working on the book, the year was 1966 and Algol, the International Algorithmic Language, was the generally accepted standard for publishing mathematical material. Fortran was already over 10 years old and we didn't expect it to last much longer. PL/1 had been developed recently by IBM and we thought it might represent the future. You can see how accurate those predictions turned out to be.
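
For flavor, here is a rough MATLAB sketch of the column-oriented idea. It is not DECOMP itself (and MATLAB is none of those three languages); the point is simply that the inner loop updates one column at a time, which suits column-major storage.

function [L,U,piv] = columnlu(A)
% Column-oriented Gaussian elimination with partial pivoting.
% A sketch, not the DECOMP of the book.  Produces A(piv,:) = L*U.
n = size(A,1);
piv = (1:n)';
for k = 1:n-1
   % The largest element in column k becomes the pivot.
   [~,m] = max(abs(A(k:n,k)));
   m = m + k - 1;
   % Interchange rows k and m, and record the interchange.
   A([k m],:) = A([m k],:);
   piv([k m]) = piv([m k]);
   % Multipliers overwrite the subdiagonal of column k.
   A(k+1:n,k) = A(k+1:n,k)/A(k,k);
   % Update the trailing submatrix, one column at a time.
   for j = k+1:n
      A(k+1:n,j) = A(k+1:n,j) - A(k,j)*A(k+1:n,k);
   end
end
L = tril(A,-1) + eye(n);
U = triu(A);

Mathematically this is the same elimination as the row-oriented variant; only the order of the arithmetic, and hence the memory access pattern, differs.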

The Forsythe, Malcolm, and Moler book, "Computer Methods for Mathematical Computations", had a longer incubation period. After I left grad school at Stanford, Forsythe started a more elementary course in numerical methods intended for engineers and other students outside of math and computer science. The course centered around a collection of well written mathematical software. Mike Malcolm was his teaching assistant.

Forsythe fell ill before he and Malcolm could refine their course notes into a book and they asked me to help complete the project. The reviewers of our first draft said that it read like it was written by three different authors who never read each other's chapters. They were right. It took Mike and me five years to smooth out the exposition, by which time we had rewritten much of the software. And, oh yes, it was 1977 and the clear choice for programming language was Fortran.

Forsythe was a great believer that published computer programs should be a means of scientific discourse between humans, as well as a means of controlling machines. He would read and edit code as carefully as he would prose, which was very carefully. One of the biggest reasons these two books were as successful as they were was because the programs in them were not only useful and correct, they were short and readable.

#### The Forsythe Tree

One of Forsythe's most important legacies is his students and his students' students. The Forsythe Tree has George at the root and links that are Ph.D. theses. There are 17 first-level nodes, one of which is me. The version on the Web is 10 years old and woefully out of date. Even this version has a depth of five or six and over 200 nodes. There are dozens of university professors and several deans, provosts and presidents. I suspect that a current version might be almost twice as deep and have several hundred nodes. It reaches far into academia and industry throughout the world. We should bring the tree up to date.

#### Postscript

George never got to see MATLAB. He passed away before its birth. He would have loved it.

#### Reference

Donald E. Knuth, "George Forsythe and the Development of Computer Science", Communications of the ACM 15 (8), 1972. <http://www.stanford.edu/dept/ICME/docs/history/forsythe_knuth.pdf>

