
MathWorks Logo, Part Five, Evolution of the Logo

Posted by Cleve Moler

Our plots of the first eigenfunction of the L-shaped membrane have changed several times over the last fifty years.
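Today the surface itself takes only a couple of lines of MATLAB, since the built-in membrane function returns the first eigenfunction of the L-shaped membrane sampled on a square grid. Here is a minimal sketch; the plotting choices are illustrative, not the code behind any of the figures described below.

  % Minimal sketch: compute and plot the first eigenfunction of the
  % L-shaped membrane using the built-in membrane function.
  L = membrane(1);   % first eigenfunction, sampled on a square grid
  surf(L)            % the surface behind all of the plots described below
  title('First eigenfunction of the L-shaped membrane')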


1964, Stanford

Ph.D. thesis. Two-dimensional contour plot. Calcomp plotter with ball-point pen drawing on paper mounted on a rotating drum.

1968, University of Michigan

Calcomp plotter. Math department individual study student project. Original 3d perspective and hidden line algorithms. Unfortunately, I don't remember the student's name.

1985, MATLAB 1.0

The first MathWorks documentation and the first surf plot. Apple Macintosh daisy wheel electric typewriter with perhaps 72 dots-per-inch resolution.

1987, MATLAB 3.5

Laser printer with much better resolution.

1992, MATLAB 4.0

Sun workstation with a color CRT display. Hot colormap, with color proportional to height.
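Something like that 1992 rendering can be sketched in a few lines of today's MATLAB. By default surf colors each face by its height, so applying the hot colormap gives color proportional to height. This is an illustrative approximation, not the original plotting code.

  % Sketch of a height-colored rendering with the hot colormap.
  L = membrane(1);
  surf(L)            % by default, face color follows the surface height
  colormap(hot)      % black-red-yellow-white, proportional to height
  colorbar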

1994, MATLAB 4.2

A handmade lighting model. I have to admit this looks pretty phony today.

1996, MATLAB 5.0

Good lighting and shading. But at the time this was just a nice graphics demonstration. It was a few more years before it became the official company logo.
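A lit, shaded surface in the same spirit can be sketched in current MATLAB as follows. The face color, light placement, and shading choices here are guesses for illustration, not the code that renders the official logo.

  % Sketch of a lit, smoothly shaded rendering of the membrane.
  L = membrane(1);
  surf(L, 'EdgeColor', 'none', ...          % hide the mesh lines
          'FaceColor', [0.9 0.2 0.2], ...   % an illustrative solid color
          'FaceLighting', 'gouraud');       % smooth shading
  camlight('headlight')                     % light at the camera position
  axis off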

1990, Nantucket Beach

A prototype 3d printer. Pure granular silicon, a few dozen grains per inch. (Thanks to Robin Nelson and David Eells for curating photos.)

More reading

Cleve's Corner, "The Growth of MATLAB and The MathWorks over Two Decades", MathWorks News & Notes, January 2006.



Published with MATLAB® R2014b


5 Comments

Bob Jones replied (1 of 5):
Dear Dr. Moler,

For the last year or so, I have been working on the Helmholtz problem as a hobby. I also did it while working on my physics Ph.D. in the early 1990s. Anyway, yesterday, I applied my new eigenvalue bounding method to your famous L-shape. With a modest effort (15 minutes of CPU), I coaxed my computer to calculate the lowest eigenvalue to 82 correct digits (did I count them right?):

9.639723844021941052711459262364823156267289525821906456109579700564035647863370390

I understand that you have a particular fondness for this number. Based on the convergence rate, I can probably get about 250 digits with 2 or 3 days of CPU time on my laptop. With a bigger computer, one can probably get a thousand digits with relative ease.

Most of my efforts have been on the regular pentagon. For example, with 2 days of CPU, I calculate the lowest Dirichlet eigenvalue of the unit-edged regular pentagon to 502 digits (at twice the convergence rate of the L-shape):

10.996427084559806648376217352434650641833359995632
37894580825875576367724334246467686955218722701415
76003245842094863020175628274679805263604950882189
07917111669971729458438577840231176346790319533010
11923430107693443760577351037965994015994422766013
66034288347043392716545024815696192116422461745439
59163018349746617500442725050308489108663578447310
92819681411540178947571787915793369719045227221102
00171725391320867384069050750319502551467771360357
70227635725742015466458835434076721938562837383080
92

I am fairly confident that these are all correct digits, but surprisingly, there are no other comparable results, so there is no way to be certain yet.

My hobby is actually getting a little out of control, as I am also in the process of calculating the lowest 8000+ pentagon Dirichlet eigenvalues to at least 60 digits (an arbitrary, but now seemingly small, number). I am doing one eigenvalue every five minutes on average. I can also do Neumann eigenvalues with equal difficulty and convergence rates.

I am a little embarrassed to say that I don't use MATLAB for this, but I am sure MATLAB can be coaxed into reproducing these results with a good multi-precision toolbox and trustworthy Bessel functions. I actually used the PARI/GP calculator with the GNU MP library.

Of course, I am preparing a little paper to share the technique, but I thought I would like to share these results with you now. The technique is based on an observation I made in 1993 but never did anything with until now, and a (now obviously clear) suggestion by Dr. Barnett to use the non-analytic vertex to get exponential convergence.

Enjoy, and Happy New Year!

-- Bob Jones
rsjones7 at yahoo dot com
Bob Jones replied (2 of 5):
1 hr CPU, 101 digits:

9.63972384402194105271145926236482315626728952582190
64561095797005640356478633703907228731650087967888

Lower bound: append 2157; upper bound: append 3349.

Convergence rate: about 4.51 basis functions per digit.
Bob Jones replied (3 of 5):
The eigenvalue of this eigenfunction, correct to just over 100 digits, is

9.63972384402194105271145926236482315626728952582190645610957970056403564786337039072287316500879678883
Cleve Moler replied (4 of 5):
Yes, I am interested in this number, but just seeing 100 digits is not nearly as interesting as knowing how it was computed.
Bob Jones replied (5 of 5):
Dear Dr. Moler,

You may be surprised to learn that I am applying your original point-matching method, but on steroids. Include enough digits in the intermediate calculations and it is numerically very well behaved, i.e., no spurious solutions or ill-conditioning appears in this shape (and probably many others that exhibit exponential convergence).

I use fractional-order Bessel functions and properly symmetrized sinusoids centered on the re-entrant angle. My matching points are uniformly distributed as 3i/(N+1) for i = 1..N along the length-3 matching edge (in the half-L), the number of matching points equals the number of terms in the expansion, and there are no interior points as in the GSVD approach, nor extra points on the edges, for example. Calculate the determinant and be clever about root finding (working with dL/L = 1E-150 and determinant values of 1E-200000 may scare people at first). Make sure the Bessel functions (and sinusoids, etc.) are good to such precision.

As I increment N (by 3 for the L-shape, so the same proportion of points stays in each segment), the approximate eigenvalue alternates above and below an asymptote while smoothly approaching it. As one increments N, the previous results can be used to help bracket the root. If there are no mistakes, it works perfectly from N = 20 to 700, so far. (Change the distribution even a little and it may fail, but it still works to some extent.) Note that the exponential convergence rate is as expected, about 4 to 5 basis functions per digit, so nothing new there.

I think the trade-off in numerics pays off as compared with other popular methods like the GSVD, especially if one pursues many digits. It works for any (non-degenerate, non-closed-form) eigenvalue as long as you can get enough expansion terms. (Closed-form eigenvalues converge many orders of magnitude faster, but with the same pattern.) Anyone can do it. To get the 150+ digits below, I use up to N = 708 and 711 matching points and perhaps 750 digits in the intermediate calculations. (Keep the precision well above the number of terms in the expansion.)

I was greatly affected by your now famous paper in the early 1990s, when I observed something similar (but not exactly; I wasn't expanding about the correct angle), but it only made it into my thesis. Only now did I decide to pick it up again, as a hobby. I am quite surprised that no one else saw that nice numerical behavior and the ease with which one can bound eigenvalues. It was only last week that I first tried the L-shape (among others) and it worked like a charm. I had to share it with you.

9.6397238440219410527114592623648231562672895258219
064561095797005640356478633703907228731650087967888
3115756686104335651595260970541019403854362864964220

Lower bound: append 2007; upper bound: append 0851.

Happy New Year!
Bob Jones
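For readers curious what the underlying point-matching idea looks like in ordinary double precision (nowhere near the digit counts above), here is a rough MATLAB sketch of a simplified variant: the corner-adapted Fourier-Bessel basis for the L-shaped membrane is evaluated at boundary matching points, and the eigenvalue is located where the smallest singular value of the column-normalized boundary matrix dips, rather than by the high-precision square-matrix determinant the comment describes. Every grid size, scan range, and variable name below is an illustrative choice.

  % Double-precision sketch of point matching for the L-shaped membrane.
  % Geometry: vertices (0,0),(1,0),(1,1),(-1,1),(-1,-1),(0,-1), with the
  % re-entrant corner of angle 3*pi/2 at the origin. The basis functions
  %   u_k(r,theta) = besselj(2*k/3, sqrt(lambda)*r) .* sin(2*k*theta/3)
  % satisfy the Helmholtz equation and already vanish on the two edges
  % meeting at the corner, so only the remaining boundary is matched.
  N  = 15;                                  % number of basis functions
  nu = 2*(1:N)/3;                           % fractional Bessel orders

  % Matching points along the rest of the boundary (midpoint spacing).
  V = [1 0; 1 1; -1 1; -1 -1; 0 -1];        % polyline of that boundary
  m = 10;                                   % points per unit length
  P = [];
  for j = 1:size(V,1)-1
      a = V(j,:);  b = V(j+1,:);
      n = round(m*norm(b - a));
      t = ((1:n)' - 0.5)/n;
      P = [P; repmat(a, n, 1) + t*(b - a)];
  end
  [th, r] = cart2pol(P(:,1), P(:,2));
  th = mod(th, 2*pi);                       % theta in [0, 3*pi/2] on this boundary

  % Matrix of basis-function values at the matching points. Near an
  % eigenvalue some combination nearly vanishes on the boundary, so the
  % smallest singular value of the column-normalized matrix dips.
  A = @(lam) besselj(repmat(nu, numel(r), 1), repmat(sqrt(lam)*r, 1, N)) ...
             .* sin(th*nu);
  colnorm = @(M) M * diag(1./sqrt(sum(M.^2, 1)));
  smin = @(lam) min(svd(colnorm(A(lam))));

  lams = linspace(9, 10.5, 301);            % scan around the first eigenvalue
  s = arrayfun(smin, lams);
  [~, i] = min(s);
  fprintf('first eigenvalue is near %.3f (published value 9.6397...)\n', lams(i))

With N = 15 and this coarse scan the dip should only locate the eigenvalue to a few digits; pushing toward dozens or hundreds of digits requires the extended-precision determinant and careful root bracketing that the comment describes.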