{"id":1094,"date":"2014-10-22T19:08:33","date_gmt":"2014-10-23T00:08:33","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=1094"},"modified":"2014-10-22T19:13:07","modified_gmt":"2014-10-23T00:13:07","slug":"mathworks-logo-part-two-finite-differences","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2014\/10\/22\/mathworks-logo-part-two-finite-differences\/","title":{"rendered":"MathWorks Logo, Part Two.  Finite Differences"},"content":{"rendered":"<div class=\"content\"><!--introduction--><p>After reviewing the state of affairs fifty years ago, I use classic finite difference methods, followed by extrapolation, to find the first eigenvalue of the region underlying the MathWorks logo.<\/p><!--\/introduction--><h3>Contents<\/h3><div><ul><li><a href=\"#2b1407c6-7696-437a-a884-aadb36e2e025\">Fifty Years Ago<\/a><\/li><li><a href=\"#06fdff70-f2cc-4ac2-97aa-e0cc23594d18\">My Calculations in 1964<\/a><\/li><li><a href=\"#1a509613-b929-4765-9819-a3916cbcea69\">Slow Convergence<\/a><\/li><li><a href=\"#a2a581af-7374-4445-a5a8-af3c67af31e4\">Difference Methods in MATLAB<\/a><\/li><li><a href=\"#98916d34-3a91-41ef-b92c-0c9532e33767\">My Calculations in 2014<\/a><\/li><li><a href=\"#5232f4ad-b379-4351-a0d9-2be549892de3\">Extrapolation<\/a><\/li><li><a href=\"#9b73c467-f941-4e86-811a-82932f9e87f2\">Eigenfunction<\/a><\/li><\/ul><\/div><h4>Fifty Years Ago<a name=\"2b1407c6-7696-437a-a884-aadb36e2e025\"><\/a><\/h4><p>My Ph. D. dissertation, submitted to the Stanford Math Department in 1965, was titled \"Finite Difference Methods for Eigenvalues of Laplace's Operator\".  The thesis was not just about the L-shaped region, but the L was my primary example.<\/p><p>My thesis advisor was George Forsythe, who had investigated the L in some of his own research.  At the time, many people were interested in methods for obtaining guaranteed upper and lower bounds for eigenvalues of differential operators.  
Forsythe had shown that certain kinds of finite difference methods could provide lower bounds for convex regions. But the L is not convex, so that made it a region to investigate.<\/p><p>Nobody knew the first eigenvalue very precisely.  Forsythe and Wolfgang Wasow published a book in 1960, \"Finite Difference Methods for Partial Differential Equations\" (John Wiley &amp; Sons), that features the L in several different sections.  In section 24.3 Forsythe references some calculations he had done in 1954 that led him to estimate that<\/p><p>$$ \\lambda_1 = 9.636 $$<\/p><p>He goes on to report an informal conjecture made in the late '50s by his colleague R. De Vogelaere that<\/p><p>$$ 9.63968 &lt; \\lambda_1 &lt; 9.63972 $$<\/p><p>I repeat these numbers here to give you some idea of the accuracy we were working with back then.  We will see in a later post that the right answer is just outside De Vogelaere's upper bound.<\/p><h4>My Calculations in 1964<a name=\"06fdff70-f2cc-4ac2-97aa-e0cc23594d18\"><\/a><\/h4><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/cleve\/thesis2.jpg\" alt=\"\"> <\/p><p>Here's a page from my thesis. The contour plot at the bottom of the page would eventually evolve into the MathWorks logo.  The plot was made on a <a href=\"http:\/\/computermuseum.informatik.uni-stuttgart.de\/dev\/ibm1130\/album\/rt_calcomp.html\">Calcomp plotter<\/a>, which involved a computer-controlled solenoid holding a ball-point or liquid ink pen moving across paper rolled on a rotating drum.  This was a very effective device and was used in computer centers worldwide for a number of years.<\/p><p>The calculations shown on this page give the first eigenvalue of the finite difference Laplacian for ten values of the mesh size $h$.  The finest mesh, $h$ = 1\/100, has 29,601 interior points.  
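That interior-point count follows from the geometry alone: the full square mesh has $(n-1)^2$ interior points and the missing quadrant removes $(n/2)^2$ of them, so with $h = 2/n = 1/100$, i.e. $n = 200$, the count is $199^2 - 100^2 = 29{,}601$. Here is a small Python sketch of that count, mimicking the layout of MATLAB's numgrid('L', n+1); the function name is my own:

```python
def l_interior_count(n):
    """Interior grid points of the L-shaped region at mesh size h = 2/n,
    following the layout of MATLAB's numgrid('L', n+1)."""
    m = n + 1                       # grid points per side, boundary included
    count = 0
    for i in range(1, m - 1):       # skip the boundary rows
        for j in range(1, m - 1):   # skip the boundary columns
            if not (i >= m // 2 and j <= m // 2):   # omit the missing quadrant
                count += 1
    return count

print(l_interior_count(200))   # 29601, matching the finest 1964 mesh
```

The same routine reproduces the matrix sizes tabulated later in this post, e.g. 1776 points for n = 50 and 7301 for n = 100.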
As I recall, the calculation of the first eigenvalue, which I did with a kind of inverse power method, took about half an hour on Stanford's campus mainframe, an IBM 7090.<\/p><p>Here is a plot of these values I computed fifty years ago.  I've added a black line at what we now know to be the limiting value, the first eigenvalue of the continuous differential operator.  You can see that the finite difference values are converging quite slowly.  Furthermore, they are converging from above.  Forsythe was not going to get his lower bound for this nonconvex region.<\/p><pre class=\"codeinput\">   thesis_graph\r\n<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/cleve\/logo_diffs_01.png\" alt=\"\"> <h4>Slow Convergence<a name=\"1a509613-b929-4765-9819-a3916cbcea69\"><\/a><\/h4><p>The slow rate of convergence of the finite difference eigenvalue with decreasing mesh size is caused by the fact that the corresponding eigenfunction is singular at the reentrant corner in the region. The gradient becomes infinite at that corner.  You can see the contour lines crowding together near the corner. If you tried to make a drum head or tambourine by stretching a membrane over an L-shaped frame, it would rip.<\/p><p>To be more precise, use polar coordinates, $r$ and $\\theta$, centered at the reentrant corner.  Note that to cover the interior of the region $\\theta$ will run from $0$ to $\\frac{3}{2}\\pi$. It turns out that, approaching the corner, the first eigenfunction has the asymptotic behavior<\/p><p>$$ u(r,\\theta)  \\approx r^\\alpha \\sin{\\alpha \\theta} $$<\/p><p>with $\\alpha = \\frac{2}{3}$.  
This implies that the $k$ -th derivatives of the eigenfunction have singularities that behave like<\/p><p>$$ \\partial^k u  \\approx r^{\\alpha-k}  $$<\/p><p>When this fact is combined with the Taylor series analysis of the discretization error of the five-point finite difference operator we find that the first eigenvalue converges at the rate<\/p><p>$$ \\lambda_{1,h} - \\lambda_1 \\approx O(h^{2 \\alpha}) = O(h^\\frac{4}{3}) $$<\/p><p>This is significantly slower than the $O(h^2)$ convergence rate obtained when the eigenfunction is not singular.<\/p><h4>Difference Methods in MATLAB<a name=\"a2a581af-7374-4445-a5a8-af3c67af31e4\"><\/a><\/h4><p>Now let's use my laptop and the sparse capabilities in MATLAB. The functions <tt>numgrid<\/tt>, <tt>delsq<\/tt>, <tt>spy<\/tt>, and <tt>eigs<\/tt> have been included in the <tt>sparfun<\/tt> directory since its introduction.  We begin by generating a small L-shaped numbering scheme.<\/p><pre class=\"codeinput\">   n = 10;\r\n   L = numgrid(<span class=\"string\">'L'<\/span>,n+1)\r\n<\/pre><pre class=\"codeoutput\">L =\r\n     0     0     0     0     0     0     0     0     0     0     0\r\n     0     1     5     9    13    17    21    30    39    48     0\r\n     0     2     6    10    14    18    22    31    40    49     0\r\n     0     3     7    11    15    19    23    32    41    50     0\r\n     0     4     8    12    16    20    24    33    42    51     0\r\n     0     0     0     0     0     0    25    34    43    52     0\r\n     0     0     0     0     0     0    26    35    44    53     0\r\n     0     0     0     0     0     0    27    36    45    54     0\r\n     0     0     0     0     0     0    28    37    46    55     0\r\n     0     0     0     0     0     0    29    38    47    56     0\r\n     0     0     0     0     0     0     0     0     0     0     0\r\n<\/pre><p>To illustrate how this scheme works, superimpose it on a finite difference grid.<\/p><pre class=\"codeinput\">   Lgrid(L)\r\n<\/pre><img 
decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/cleve\/logo_diffs_02.png\" alt=\"\"> <p>The statement<\/p><pre class=\"codeinput\">   A = delsq(L);\r\n<\/pre><p>constructs matrix representation of the five-point discrete Laplacian for the grid <tt>L<\/tt>.<\/p><pre class=\"codeinput\">   whos <span class=\"string\">A<\/span>\r\n   issym = isequal(A,A')\r\n   nnzA = nnz(A)\r\n<\/pre><pre class=\"codeoutput\">  Name       Size            Bytes  Class     Attributes\r\n\r\n  A         56x56             3156  double    sparse    \r\n\r\nissym =\r\n     1\r\nnnzA =\r\n   244\r\n<\/pre><p>So for this grid, the discrete Laplacian <tt>A<\/tt> is a 56-by-56 sparse double precision symmetric matrix with 244 nonzero elements. That's an average of<\/p><pre class=\"codeinput\">   ratio = nnz(A)\/size(A,1)\r\n<\/pre><pre class=\"codeoutput\">ratio =\r\n    4.3571\r\n<\/pre><p>nonzero elements per row.  Each interior grid point is connected to its four nearest neighbors.  For example, point number 44 is connected to points 35, 43, 45, and 53.  So the nonzero elements in column 44 are<\/p><pre class=\"codeinput\">   A(:,44)\r\n<\/pre><pre class=\"codeoutput\">ans =\r\n  (35,1)       -1\r\n  (43,1)       -1\r\n  (44,1)        4\r\n  (45,1)       -1\r\n  (53,1)       -1\r\n<\/pre><p>There are 4's on the diagonal and -1's is positions on the off-diagonals corresponding to connections between neighbors.  
They can be seen in the <tt>spy<\/tt> plot which shows the location of the nonzeros.<\/p><pre class=\"codeinput\">   spy(A)\r\n<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/cleve\/logo_diffs_03.png\" alt=\"\"> <p>Scale <tt>A<\/tt> by the square of the mesh size and then use <tt>eigs<\/tt> to find its five smallest eigenvalues.<\/p><pre class=\"codeinput\">   h = 2\/n\r\n   A = A\/h^2;\r\n   eigvals = eigs(A,5,<span class=\"string\">'sm'<\/span>)\r\n<\/pre><pre class=\"codeoutput\">h =\r\n    0.2000\r\neigvals =\r\n   30.3140\r\n   27.7177\r\n   19.0983\r\n   14.6708\r\n    9.6786\r\n<\/pre><h4>My Calculations in 2014<a name=\"98916d34-3a91-41ef-b92c-0c9532e33767\"><\/a><\/h4><p>Run for decreasing mesh size, up to the memory capacity of my laptop.<\/p><pre class=\"codeinput\">   type <span class=\"string\">L_finite_diffs<\/span>\r\n<\/pre><pre class=\"codeoutput\">\r\n% L_finite_diffs\r\n% Script for MathWorks Logo, Part Two\r\n% Finite Differences\r\n\r\n   opts.tol = 1.0e-12;\r\n   eigvals = zeros(13,5);\r\n   tic\r\n   for n = 50:50:650\r\n      L = numgrid('L',n+1);\r\n      h = 2\/n;\r\n      A = delsq(L)\/h^2;\r\n      e = eigs(A,5,'sm',opts);\r\n      eigvals(n\/50,:) = fliplr(e');\r\n      fprintf('%4.0f %10.6f %9.0f %10.0f %16.12f\\n', ...\r\n         n,h,size(A,1),nnz(A),e(5))\r\n   end\r\n   toc\r\n   save eigvals eigvals\r\n<\/pre><pre class=\"codeinput\">   type <span class=\"string\">L_finite_diffs_results<\/span>\r\n<\/pre><pre class=\"codeoutput\">\r\n     n       h       size(A)    nnz(A)     lambda_1,h\r\n\r\n     50  0.040000      1776       8684   9.661331286788\r\n    100  0.020000      7301      36109   9.649547111146\r\n    150  0.013333     16576      82284   9.645736333124\r\n    200  0.010000     29601     147209   9.643932372382\r\n    250  0.008000     46376     230884   9.642903412412\r\n    300  0.006667     66901     333309   9.642247498956\r\n    350  0.005714     91176     
454484   9.641797238078\r\n    400  0.005000    119201     594409   9.641471331746\r\n    450  0.004444    150976     753084   9.641225852818\r\n    500  0.004000    186501     930509   9.641035124947\r\n    550  0.003636    225776    1126684   9.640883203502\r\n    600  0.003333    268801    1341609   9.640759699387\r\n    650  0.003077    315576    1575284   9.640657572983\r\n\r\n   Elapsed time is 29.729023 seconds.\r\n<\/pre><h4>Extrapolation<a name=\"5232f4ad-b379-4351-a0d9-2be549892de3\"><\/a><\/h4><p>I know the Taylor series analysis of the discretization error involves terms with both $h^{4\/3}$ and $h^2$, so let's use these as the basis for extrapolation.  Use backslash to do a least squares fit.<\/p><pre class=\"codeinput\">   load <span class=\"string\">eigvals<\/span>\r\n   eigvals = eigvals(:,1);\r\n   n = (50:50:650)';\r\n   h = 2.\/n;\r\n   e = ones(size(h));\r\n   X = [e h.^(4\/3) h.^2];\r\n   c = X(4:end,:)\\eigvals(4:end);\r\n   lambda1 = c(1);\r\n\r\n   n = (50:10:650)';\r\n   h = 2.\/n;\r\n   fit = c(1) + c(2)*h.^(4\/3) + c(3)*h.^2;\r\n   plot(n(1:5:end),eigvals,<span class=\"string\">'o'<\/span>,n,fit,<span class=\"string\">'-'<\/span>,[0 700],[lambda1 lambda1],<span class=\"string\">'k-'<\/span>)\r\n   set(gca,<span class=\"string\">'xtick'<\/span>,100:100:600,<span class=\"string\">'xticklabel'<\/span>,<span class=\"keyword\">...<\/span>\r\n      num2str(sprintf(<span class=\"string\">'2\/%3.0f\\n'<\/span>,100:100:600)))\r\n   xlabel(<span class=\"string\">'h'<\/span>)\r\n<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/cleve\/logo_diffs_04.png\" alt=\"\"> <p>Frankly, I was surprised when the extrapolation of the finite difference eigenvalues produced this result.<\/p><pre class=\"codeinput\">   format <span class=\"string\">long<\/span>\r\n   lambda1\r\n<\/pre><pre class=\"codeoutput\">lambda1 =\r\n   9.639723753239963\r\n<\/pre><p>I know from techniques that I will describe in later posts 
that this agrees with the exact eigenvalue of the continuous differential operator to seven decimal places.  Before I wrote this blog, I had never tried this computation on computers with enough memory to handle meshes this fine, and so had never obtained this kind of accuracy from extrapolated finite difference eigenvalues.<\/p><h4>Eigenfunction<a name=\"9b73c467-f941-4e86-811a-82932f9e87f2\"><\/a><\/h4><p>Let's do a contour plot of the eigenfunction.  The tricky part is using the grid numbering in <tt>L<\/tt> to insert the eigenvector back onto the grid.<\/p><pre class=\"codeinput\">   close\r\n   n = 100;\r\n   h = 2\/n;\r\n   L = numgrid(<span class=\"string\">'L'<\/span>,n+1);\r\n   A = delsq(L)\/h^2;\r\n   [V,E] = eigs(A,5,<span class=\"string\">'sm'<\/span>);\r\n   L(L&gt;0) = -V(:,5);\r\n   contourf(fliplr(L),12);\r\n   axis <span class=\"string\">square<\/span>\r\n   axis <span class=\"string\">off<\/span>\r\n<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/cleve\/logo_diffs_05.png\" alt=\"\"> <p>The plot shows off <tt>parula<\/tt>, the new MATLAB default colormap introduced with Handle Graphics II in Release 2014b.  The colormap derives its name from the <a href=\"http:\/\/www.allaboutbirds.org\/guide\/Northern_Parula\/id\">parula<\/a>, which is a small warbler found throughout eastern North America that exhibits these colors.  As our contour plot demonstrates, the color intensity varies smoothly and uniformly as the contour levels increase.  This is not true of <tt>jet<\/tt>, the old default. 
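Stepping back to the convergence table, the predicted $O(h^{4/3})$ rate can also be checked numerically. In this short Python sketch, the eigenvalues and the extrapolated $\lambda_1$ are copied from above; halving $h$ should shrink the error by about $2^{4/3}$, and the observed order does climb toward $4/3$ as the $h^2$ term fades:

```python
import math

# lambda_{1,h} from the table above, keyed by n (h = 2/n), and the
# extrapolated limit lambda1.
lam = {100: 9.649547111146, 200: 9.643932372382,
       300: 9.642247498956, 600: 9.640759699387}
lam1 = 9.639723753239963

def observed_order(n):
    # Error ratio under mesh halving is about 2**p for an O(h^p) method.
    return math.log2((lam[n] - lam1) / (lam[2 * n] - lam1))

print(observed_order(100), observed_order(300))   # roughly 1.22 and 1.28
```

Both estimates sit below $4/3 \approx 1.333$ because the $h^2$ contribution is still visible at these mesh sizes, which is exactly why the extrapolation fits both terms.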
<a href=\"https:\/\/blogs.mathworks.com\/steve\/2014\/10\/20\/a-new-colormap-for-matlab-part-2-troubles-with-rainbows\/\">Steve Eddins<\/a> has more about <tt>parula<\/tt> in his blog.<\/p><script language=\"JavaScript\"> <!-- \r\n    function grabCode_6fc0540ed29a46ee8db3041274679c5d() {\r\n        \/\/ Remember the title so we can use it in the new page\r\n        title = document.title;\r\n\r\n        \/\/ Break up these strings so that their presence\r\n        \/\/ in the Javascript doesn't mess up the search for\r\n        \/\/ the MATLAB code.\r\n        t1='6fc0540ed29a46ee8db3041274679c5d ' + '##### ' + 'SOURCE BEGIN' + ' #####';\r\n        t2='##### ' + 'SOURCE END' + ' #####' + ' 6fc0540ed29a46ee8db3041274679c5d';\r\n    \r\n        b=document.getElementsByTagName('body')[0];\r\n        i1=b.innerHTML.indexOf(t1)+t1.length;\r\n        i2=b.innerHTML.indexOf(t2);\r\n \r\n        code_string = b.innerHTML.substring(i1, i2);\r\n        code_string = code_string.replace(\/REPLACE_WITH_DASH_DASH\/g,'--');\r\n\r\n        \/\/ Use \/x3C\/g instead of the less-than character to avoid errors \r\n        \/\/ in the XML parser.\r\n        \/\/ Use '\\x26#60;' instead of '<' so that the XML parser\r\n        \/\/ doesn't go ahead and substitute the less-than character. 
\r\n        code_string = code_string.replace(\/\\x3C\/g, '\\x26#60;');\r\n\r\n        copyright = 'Copyright 2014 The MathWorks, Inc.';\r\n\r\n        w = window.open();\r\n        d = w.document;\r\n        d.write('<pre>\\n');\r\n        d.write(code_string);\r\n\r\n        \/\/ Add copyright line at the bottom if specified.\r\n        if (copyright.length > 0) {\r\n            d.writeln('');\r\n            d.writeln('%%');\r\n            if (copyright.length > 0) {\r\n                d.writeln('% _' + copyright + '_');\r\n            }\r\n        }\r\n\r\n        d.write('<\/pre>\\n');\r\n\r\n        d.title = title + ' (MATLAB code)';\r\n        d.close();\r\n    }   \r\n     --> <\/script><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\"><br><a href=\"javascript:grabCode_6fc0540ed29a46ee8db3041274679c5d()\"><span style=\"font-size: x-small;        font-style: italic;\">Get \r\n      the MATLAB code <noscript>(requires JavaScript)<\/noscript><\/span><\/a><br><br>\r\n      Published with MATLAB&reg; R2014b<br><\/p><\/div><!--\r\n6fc0540ed29a46ee8db3041274679c5d ##### SOURCE BEGIN #####\r\n%% MathWorks Logo, Part Two.  Finite Differences\r\n% After reviewing the state of affairs fifty years ago, I use\r\n% classic finite difference methods, followed by extrapolation, to\r\n% find the first eigenvalue of the region underlying the MathWorks logo.\r\n\r\n%% Fifty Years Ago\r\n% My Ph. D. dissertation, submitted to the Stanford Math Department in 1965,\r\n% was titled \"Finite Difference Methods for Eigenvalues of Laplace's\r\n% Operator\".  The thesis was not just about the L-shaped region, but\r\n% the L was my primary example.\r\n\r\n%%\r\n% My thesis advisor was George Forsythe, who had investigated the L in\r\n% some of his own research.  At the time, many people were interested\r\n% in methods for obtaining guaranteed upper and lower bounds for eigenvalues\r\n% of differential operators.  
Forsythe had shown that certain kinds of\r\n% finite difference methods could provide lower bounds for convex regions.\r\n% But the L is not convex, so that made it a region to investigate.\r\n\r\n%%\r\n% Nobody knew the first eigenvalue very precisely.  Forsythe and Wolfgang \r\n% Wasow published a book in 1960, \"Finite Difference Methods for Partial\r\n% Differential Equations\" (John Wiley & Sons), that features the L in\r\n% several different sections.  In section 24.3 Forsythe references some\r\n% calculations he had done in 1954 that led him to estimate that\r\n%\r\n% $$ \\lambda_1 = 9.636 $$\r\n%\r\n% He goes on to report an informal conjecture made in the late '50s\r\n% from his colleague R. De Vogelaere that\r\n%\r\n% $$ 9.63968 < \\lambda_1 < 9.63972 $$\r\n\r\n%%\r\n% I repeat these numbers here to give you some idea of the accuracy we\r\n% were working with back then.  We will see in a later post that the\r\n% right answer is just outside De Vogelaere's upper bound.\r\n\r\n%% My Calculations in 1964\r\n%\r\n% <<thesis2.jpg>>\r\n%\r\n\r\n%%\r\n% Here's a page from my thesis.\r\n% The contour plot at the bottom of the page would eventually evolve into\r\n% the MathWorks logo.  The plot was made on a\r\n% <http:\/\/computermuseum.informatik.uni-stuttgart.de\/dev\/ibm1130\/album\/rt_calcomp.html\r\n% Calcomp plotter>, which involved a computer controlled solenoid holding\r\n% a ball-point or liquid ink pen moving across a paper rolled on a rotating\r\n% drum.  This was a very effective device and was used in computer centers\r\n% worldwide for a number of years.\r\n\r\n%%\r\n% The calculations shown on this page are the first eigenvalue of the finite\r\n% difference Laplacian for ten values of the mesh size $h$.  The finest mesh,\r\n% $h$ = 1\/100, has 29,601 interior points.  
As I recall, the calculation\r\n% of the first eigenvalue, which I did with a kind of inverse power method,\r\n% took about half an hour on Stanford's campus main frame, an IBM 7090.\r\n\r\n%%\r\n% Here is a plot of these values I computed fifty years ago.  I've added\r\n% a black line at what we now know to be the limiting value, the first\r\n% eigenvalue of the continuous differential operator.  You can see that\r\n% the finite difference values are converging quite slowly.  Furthermore,\r\n% they are converging from above.  Forsythe was not going to get his lower\r\n% bound for this nonconvex region.\r\n\r\n   thesis_graph\r\n\r\n%% Slow Convergence\r\n% The show rate of convergence of the finite difference eigenvalue with\r\n% decreasing mesh size is caused by the fact that the corresponding\r\n% eigenfunction is singular at the reentrant corner in the region.\r\n% The gradient becomes infinite at that corner.  You can see the contour\r\n% lines crowding together near the corner. If you tried to make a drum head\r\n% or tambourine by stretching a membrane over an L-shaped frame,\r\n% it would rip.\r\n\r\n%%\r\n% To be more precise, use polar coordinates, $r$ and $\\theta$, centered\r\n% at the reentrant corner.  Note that to cover the interior of the region\r\n% $\\theta$ will run from $0$ to $\\frac{3}{2}\\pi$.\r\n% It turns out that approaching the corner the first eigenfunction has\r\n% the asymptotic behavior\r\n%\r\n% $$ u(r,\\theta)  \\approx r^\\alpha \\sin{\\alpha \\theta} $$\r\n%\r\n% with $\\alpha = \\frac{2}{3}$.  
This implies that the $k$ -th derivatives\r\n% of the eigenfunction have singularities that behave like\r\n%\r\n% $$ \\partial^k u  \\approx r^{\\alpha-k}  $$\r\n%\r\n\r\n%%\r\n% When this fact is combined with the Taylor series analysis of the\r\n% discretization error of the five-point finite difference operator\r\n% we find that the first eigenvalue converges at the rate\r\n%\r\n% $$ \\lambda_{1,h} - \\lambda_1 \\approx O(h^{2 \\alpha}) = O(h^\\frac{4}{3}) $$\r\n%\r\n% This is significantly slower than the $O(h^2)$ convergence rate obtained\r\n% when the eigenfunction is not singular.\r\n\r\n%% Difference Methods in MATLAB\r\n% Now let's use my laptop and the sparse capabilities in MATLAB.\r\n% The functions |numgrid|, |delsq|, |spy|, and |eigs| have been included\r\n% in the |sparfun| directory since its introduction.  We begin by generating\r\n% a small L-shaped numbering scheme.\r\n\r\n   n = 10;\r\n   L = numgrid('L',n+1)\r\n\r\n%%\r\n% To illustrate how this scheme works, superimpose it on a finite\r\n% difference grid.\r\n\r\n   Lgrid(L)\r\n\r\n%%\r\n% The statement\r\n\r\n   A = delsq(L);\r\n\r\n%%\r\n% constructs matrix representation of the five-point discrete Laplacian\r\n% for the grid |L|.\r\n\r\n   whos A\r\n   issym = isequal(A,A')\r\n   nnzA = nnz(A)\r\n\r\n%%\r\n% So for this grid, the discrete Laplacian |A| is a 56-by-56 sparse\r\n% double precision symmetric matrix with 244 nonzero elements.\r\n% That's an average of\r\n\r\n   ratio = nnz(A)\/size(A,1)\r\n\r\n%%\r\n% nonzero elements per row.  Each interior grid point is connected \r\n% to its four nearest neighbors.  For example, point number 44 is connected\r\n% to points 35, 43, 45, and 53.  So the nonzero elements in column 44 are\r\n\r\n   A(:,44)\r\n\r\n%%\r\n% There are 4's on the diagonal and -1's is positions on the off-diagonals\r\n% corresponding to connections between neighbors.  
They can be seen in the\r\n% |spy| plot which shows the location of the nonzeros.\r\n\r\n   spy(A)\r\n\r\n%%\r\n% Scale |A| by the square of the mesh size and then use |eigs| to find\r\n% its five smallest eigenvalues.\r\n\r\n   h = 2\/n\r\n   A = A\/h^2;\r\n   eigvals = eigs(A,5,'sm')\r\n\r\n%% My Calculations in 2014\r\n% Run for decreasing mesh size, up to the memory capacity of my laptop.\r\n\r\n   type L_finite_diffs\r\n\r\n%%\r\n\r\n   type L_finite_diffs_results\r\n\r\n%% Extrapolation\r\n% I know the Taylor series analysis of the discretization error involves\r\n% terms with both $h^{4\/3}$ and $h^2$, so let's use these as the basis\r\n% for extrapolation.  Use backslash to do a least squares fit.\r\n\r\n   load eigvals\r\n   eigvals = eigvals(:,1);\r\n   n = (50:50:650)';\r\n   h = 2.\/n;\r\n   e = ones(size(h));\r\n   X = [e h.^(4\/3) h.^2];\r\n   c = X(4:end,:)\\eigvals(4:end);\r\n   lambda1 = c(1);\r\n\r\n   n = (50:10:650)';\r\n   h = 2.\/n;\r\n   fit = c(1) + c(2)*h.^(4\/3) + c(3)*h.^2;\r\n   plot(n(1:5:end),eigvals,'o',n,fit,'-',[0 700],[lambda1 lambda1],'k-')\r\n   set(gca,'xtick',100:100:600,'xticklabel',...\r\n      num2str(sprintf('2\/%3.0f\\n',100:100:600)))\r\n   xlabel('h')\r\n\r\n%%\r\n% Frankly, I was surprised when the extrapolation of the finite difference\r\n% eigenvalues produced this result.\r\n\r\n   format long\r\n   lambda1\r\n\r\n%%\r\n% I know from techniques that I will describe in later posts that this\r\n% agrees with the exact eigenvalue of the continuous differential\r\n% operator to seven decimal places.  Before I wrote this blog I never\r\n% tried this computation on computers with enough memory to handle\r\n% meshes this fine and was never able to obtain this kind of accuracy\r\n% from extrapolated finite difference eigenvalues.\r\n\r\n%% Eigenfunction\r\n% Let's do a contour plot of the eigenfunction.  
The tricky part is using\r\n% the grid numbering in |L| to insert the eigenvector back onto the grid.\r\n\r\n   close\r\n   n = 100;\r\n   h = 2\/n;\r\n   L = numgrid('L',n+1); \r\n   A = delsq(L)\/h^2;\r\n   [V,E] = eigs(A,5,'sm');\r\n   L(L>0) = -V(:,5);\r\n   contourf(fliplr(L),12);\r\n   axis square\r\n   axis off\r\n   \r\n%%\r\n% The plot shows off |parula|, the MATLAB default colormap with Handle\r\n% Graphics II and Release 2014b.  The colormap derives its name from the\r\n% <http:\/\/www.allaboutbirds.org\/guide\/Northern_Parula\/id parula>, which\r\n% is a small warbler found throughout eastern North America that exhibits\r\n% these colors.  As our contour plot demonstrates, the color intensity\r\n% varies smooth and uniformly as the contour levels increase.  This is\r\n% not true of |jet|, the old default.  \r\n% <https:\/\/blogs.mathworks.com\/steve\/2014\/10\/20\/a-new-colormap-for-matlab-part-2-troubles-with-rainbows\/\r\n% Steve Eddins> has more about |parula| in his blog.\r\n\r\n##### SOURCE END ##### 6fc0540ed29a46ee8db3041274679c5d\r\n-->","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/images\/cleve\/thesis2.jpg\" onError=\"this.style.display ='none';\" \/><\/div><!--introduction--><p>After reviewing the state of affairs fifty years ago, I use classic finite difference methods, followed by extrapolation, to find the first eigenvalue of the region underlying the MathWorks logo.... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/cleve\/2014\/10\/22\/mathworks-logo-part-two-finite-differences\/\">read more >><\/a><\/p>","protected":false},"author":78,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[24,13,4,25],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/1094"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/comments?post=1094"}],"version-history":[{"count":13,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/1094\/revisions"}],"predecessor-version":[{"id":1107,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/1094\/revisions\/1107"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media?parent=1094"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/categories?post=1094"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/tags?post=1094"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}