{"id":1030,"date":"2014-08-04T11:09:42","date_gmt":"2014-08-04T16:09:42","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=1030"},"modified":"2019-04-08T11:44:34","modified_gmt":"2019-04-08T16:44:34","slug":"gaussian-elimination-with-partial-pivoting","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2014\/08\/04\/gaussian-elimination-with-partial-pivoting\/","title":{"rendered":"Gaussian Elimination with Partial Pivoting"},"content":{"rendered":"<div class=\"content\"><!--introduction--><p>In rare cases, Gaussian elimination with partial pivoting is unstable. But the situations are so unlikely that we continue to use the algorithm as the foundation for our matrix computations.<\/p><!--\/introduction--><h3>Contents<\/h3><div><ul><li><a href=\"#e0a30251-67ff-423b-b30c-d9226479e925\">Pivot Growth<\/a><\/li><li><a href=\"#56ba2aac-6aea-44db-9f27-6c3c17f872cf\">Swap Rows<\/a><\/li><li><a href=\"#105b3618-0546-400a-8c3a-9fbd3c4d3a41\">Introduce Noise<\/a><\/li><li><a href=\"#10743308-55d8-4acc-b69b-f0d60d6935b8\">Growth Factor<\/a><\/li><li><a href=\"#7612c76c-0233-4832-becd-2995ccd7c1d7\">Average Case Growth<\/a><\/li><li><a href=\"#946484f6-d51e-4a7b-9c86-c4d2d3ab01df\">Worst Case Growth<\/a><\/li><li><a href=\"#78e4fb41-296a-48f1-84dd-0f3d4851de44\">Exponential Growth in Practice<\/a><\/li><li><a href=\"#c059221d-b661-4d19-be0f-ebbd8bbfc52d\">Complete Pivoting<\/a><\/li><li><a href=\"#ef45ee8d-c2d5-4180-9cc7-c6d2d90b124f\">lugui<\/a><\/li><li><a href=\"#9636a198-d45d-481b-8f17-b5bc708b143f\">References<\/a><\/li><\/ul><\/div><h4>Pivot Growth<a name=\"e0a30251-67ff-423b-b30c-d9226479e925\"><\/a><\/h4><p>I almost hesitate to bring this up.  Gaussian elimination with partial pivoting is potentially unstable.  We know of a particular test matrix, and have known about it for years, where the solution to simultaneous linear equations computed by our iconic backslash operator is less accurate than we typically expect.  
The solution is contaminated by unacceptably large roundoff errors associated with large elements encountered during the elimination process.  The offending matrix is generated by the following function, which is a special case of the <tt>gfpp<\/tt> function in Nick Higham's collection of MATLAB test matrices.<\/p><pre class=\"codeinput\">   type <span class=\"string\">gfpp<\/span>\r\n<\/pre><pre class=\"codeoutput\">\r\nfunction A = gfpp(n)\r\n%  A = gfpp(n)  Pivot growth of partial pivoting.\r\n\r\n   A = eye(n,n) - tril(ones(n,n),-1);\r\n   A(:,n) = 1;\r\n<\/pre><p>Here is the 8-by-8.<\/p><pre class=\"codeinput\">   n = 8\r\n   A = gfpp(n)\r\n<\/pre><pre class=\"codeoutput\">n =\r\n     8\r\nA =\r\n     1     0     0     0     0     0     0     1\r\n    -1     1     0     0     0     0     0     1\r\n    -1    -1     1     0     0     0     0     1\r\n    -1    -1    -1     1     0     0     0     1\r\n    -1    -1    -1    -1     1     0     0     1\r\n    -1    -1    -1    -1    -1     1     0     1\r\n    -1    -1    -1    -1    -1    -1     1     1\r\n    -1    -1    -1    -1    -1    -1    -1     1\r\n<\/pre><p>Gaussian elimination with partial pivoting does not actually do any pivoting with this particular matrix.  The first row is added to each of the other rows to introduce zeroes in the first column.  This produces twos in the last column.  As similar steps are repeated to create an upper triangular <tt>U<\/tt>, elements in the last column double with each step.  
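For readers following along without MATLAB, the doubling in the last column is easy to reproduce. Here is a NumPy sketch (a translation of the `gfpp` listing above; the helper name `lu_no_pivot` is mine). Because every pivot candidate in this matrix has magnitude one and the diagonal entry is found first, partial pivoting performs no row swaps, so plain elimination produces the same `U` that `lu` would:

```python
import numpy as np

def gfpp(n):
    """NumPy translation of Higham's gfpp test matrix:
    ones on the diagonal and in the last column, -1's below the diagonal."""
    A = np.eye(n) - np.tril(np.ones((n, n)), -1)
    A[:, -1] = 1
    return A

def lu_no_pivot(A):
    """Gaussian elimination with no pivoting; valid here because
    partial pivoting would not swap any rows for this matrix."""
    U = A.astype(float).copy()
    n = U.shape[0]
    for k in range(n - 1):
        m = U[k+1:, k] / U[k, k]              # multipliers (all -1 here)
        U[k+1:, k:] -= np.outer(m, U[k, k:])  # add row k to the rows below
    return U

U = lu_no_pivot(gfpp(8))
print(U[:, -1])   # last column doubles at each step: 1, 2, 4, ..., 128
```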
The element growth in the final triangular factor <tt>U<\/tt> is <tt>2^(n-1)<\/tt>.<\/p><pre class=\"codeinput\">   [L,U] = lu(A);\r\n   U\r\n<\/pre><pre class=\"codeoutput\">U =\r\n     1     0     0     0     0     0     0     1\r\n     0     1     0     0     0     0     0     2\r\n     0     0     1     0     0     0     0     4\r\n     0     0     0     1     0     0     0     8\r\n     0     0     0     0     1     0     0    16\r\n     0     0     0     0     0     1     0    32\r\n     0     0     0     0     0     0     1    64\r\n     0     0     0     0     0     0     0   128\r\n<\/pre><p>So if the matrix order <tt>n<\/tt> is set to the number of bits in the floating point word (plus one), the last diagonal element of <tt>U<\/tt> will grow to be <tt>1\/eps<\/tt>.<\/p><pre class=\"codeinput\">   n = 53\r\n   A = gfpp(n);\r\n   [L,U] = lu(A);\r\n   unn = U(n,n)\r\n<\/pre><pre class=\"codeoutput\">n =\r\n    53\r\nunn =\r\n   4.5036e+15\r\n<\/pre><p>If we try to use backslash to solve a linear system involving this matrix and a random right hand side, the back substitution will begin with <tt>U(n,n)<\/tt>. The roundoff errors can be expected to be as large as <tt>eps(U(n,n))<\/tt> and swamp the solution itself.<\/p><pre class=\"codeinput\">   b = randn(n,1);\r\n   x = A\\b;\r\n<\/pre><p>Ordinarily, we expect the solution computed by Gaussian elimination to have a residual on the order of roundoff error.  But in this case, the residual is many orders of magnitude larger.  Backslash has failed.<\/p><pre class=\"codeinput\">   r = b - A*x;\r\n   relative_residual = norm(r)\/(norm(A)*norm(x))\r\n<\/pre><pre class=\"codeoutput\">relative_residual =\r\n   3.7579e-05\r\n<\/pre><h4>Swap Rows<a name=\"56ba2aac-6aea-44db-9f27-6c3c17f872cf\"><\/a><\/h4><p>Anything we do to this matrix to provoke pivoting during the elimination will prevent the growth of the elements in <tt>U<\/tt>.  
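The failure, and the cure, can also be reproduced outside MATLAB: `np.linalg.solve` calls the same LAPACK partial-pivoting factorization that backslash uses. A sketch (the seed is an arbitrary choice of mine, and `np.linalg.norm` of a matrix is the Frobenius norm rather than MATLAB's 2-norm, which does not change the conclusion):

```python
import numpy as np

def gfpp(n):
    # Higham's pivot-growth matrix: ones on the diagonal and in the
    # last column, -1's below the diagonal.
    A = np.eye(n) - np.tril(np.ones((n, n)), -1)
    A[:, -1] = 1
    return A

n = 53
b = np.random.default_rng(0).standard_normal(n)

# Partial pivoting does no row swaps here, so element growth of
# 2^(n-1) = 1/eps contaminates the solve.
A = gfpp(n)
x = np.linalg.solve(A, b)
bad = np.linalg.norm(b - A @ x) / (np.linalg.norm(A) * np.linalg.norm(x))

# Swapping the first two rows provokes pivoting and restores accuracy.
A2 = gfpp(n)
A2[[0, 1]] = A2[[1, 0]]
x2 = np.linalg.solve(A2, b)
good = np.linalg.norm(b - A2 @ x2) / (np.linalg.norm(A2) * np.linalg.norm(x2))

print(bad, good)   # bad is many orders of magnitude larger than good
```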
One possibility is to swap the first two rows.<\/p><pre class=\"codeinput\">   A = gfpp(n);\r\n   A([1 2],:) = A([2 1],:);\r\n   [L,U] = lu(A);\r\n   unn = U(n,n)\r\n<\/pre><pre class=\"codeoutput\">unn =\r\n     2\r\n<\/pre><p>That's much better.  Now use backslash and look at the residual.<\/p><pre class=\"codeinput\">   x = A\\b;\r\n   r = b - A*x;\r\n   relative_residual = norm(r)\/(norm(A)*norm(x))\r\n<\/pre><pre class=\"codeoutput\">relative_residual =\r\n   1.7761e-17\r\n<\/pre><p>So the residual is now on the order of roundoff relative to <tt>A<\/tt> and <tt>x<\/tt>. Backslash works as expected.<\/p><h4>Introduce Noise<a name=\"105b3618-0546-400a-8c3a-9fbd3c4d3a41\"><\/a><\/h4><p>Another way to provoke pivoting is to perturb the matrix with a little white noise.<\/p><pre class=\"codeinput\">   A = gfpp(n);\r\n   A = A + randn(n,n);\r\n   [L,U] = lu(A);\r\n   unn = U(n,n)\r\n<\/pre><pre class=\"codeoutput\">unn =\r\n   -3.8699\r\n<\/pre><p>So <tt>U<\/tt> is well behaved.  Let's see about backslash.<\/p><pre class=\"codeinput\">   x = A\\b;\r\n   r = b - A*x;\r\n   relative_residual = norm(r)\/(norm(A)*norm(x))\r\n<\/pre><pre class=\"codeoutput\">relative_residual =\r\n   1.2870e-16\r\n<\/pre><p>Again the residual is on the order of roundoff and backslash works as expected.<\/p><h4>Growth Factor<a name=\"10743308-55d8-4acc-b69b-f0d60d6935b8\"><\/a><\/h4><p>Suppose we are using Gaussian elimination with any pivoting process to solve a system of linear equations involving a matrix $A$.  Let $a_{i,j}^{(k)}$ denote the elements in the matrix after the $k$ th step of the elimination.  The <i>growth factor<\/i> for that pivoting process is the quantity<\/p><p>$$ \\rho = { \\max_{i,j,k} |a_{i,j}^{(k)}| \\over  \\max_{i,j} |a_{i,j}| } $$<\/p><p>This growth factor is a crucial quantity in Wilkinson's roundoff error analysis.  If $\\rho$ is not too large, then the elimination algorithm is stable.  
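The definition translates directly into code. Here is a NumPy sketch (the function name is mine) that carries out the elimination with partial pivoting while recording the running maximum of the reduced matrices $a_{i,j}^{(k)}$:

```python
import numpy as np

def growth_factor(A):
    """rho = max_{i,j,k} |a_ij^(k)| / max_{i,j} |a_ij| for Gaussian
    elimination with partial pivoting.  The array A is overwritten in
    place by the successive reduced matrices a^(k)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    denom = np.abs(A).max()          # max |a_ij| of the original matrix
    biggest = denom
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # partial pivoting
        A[[k, p]] = A[[p, k]]
        m = A[k+1:, k] / A[k, k]
        A[k+1:, k:] -= np.outer(m, A[k, k:])  # a^(k+1), column k zeroed
        biggest = max(biggest, np.abs(A).max())
    return biggest / denom

n = 8
A = np.eye(n) - np.tril(np.ones((n, n)), -1)   # gfpp(n)
A[:, -1] = 1
print(growth_factor(A))   # 2^(n-1) = 128
```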
Unfortunately, the matrix we have just seen, <tt>gfpp<\/tt>, shows that the growth factor for partial pivoting is at least $2^{n-1}$.<\/p><p>At the time he was presenting his analysis in the early 1970s, Wilkinson knew about this matrix.  But he said it was a very special case.  He reported:<\/p><p>\r\n<p style=\"margin-left:3ex;\">\r\nIt is our experience that any substantial increase in size of elements of\r\nsuccessive <i>A<sub>n<\/sub><\/i> is extremely uncommon even with partial pivoting. ...\r\nNo example which has arisen naturally has in my experience given an\r\nincrease as large as 16.\r\n<\/p>\r\n<\/p><p>During the development of LINPACK in the late 1970s, a colleague at Argonne, J. T. Goodman, and I did some experiments with random matrices to check for element growth with partial pivoting.  We used several different element distributions and what were then matrices of modest order, namely values of $n$ between 10 and 50.  The largest growth factor we found was for a matrix of 0's and 1's of order 40.  The value was $\\rho$ = 23.  We never saw growth as large as linear, $\\rho = n$, let alone exponential, $\\rho = 2^{n-1}$.<\/p><h4>Average Case Growth<a name=\"7612c76c-0233-4832-becd-2995ccd7c1d7\"><\/a><\/h4><p>Nick Trefethen and Rob Schreiber published an important paper, \"Average-case Stability of Gaussian Elimination\", in 1990.  They present both theoretical and probabilistic models, and the results of extensive experiments.  They show that for many different types of matrices, the typical growth factor for partial pivoting does not increase exponentially with matrix order $n$.  It does not even grow linearly.  It actually grows like $n^{2\/3}$.  
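A small Monte Carlo experiment in the spirit of Trefethen and Schreiber's is easy to run. This sketch (the seed, matrix order, and number of trials are arbitrary choices of mine) measures the growth factor for Gaussian random matrices and finds values nowhere near linear in $n$, let alone exponential:

```python
import numpy as np

def growth_factor(A):
    """Growth factor rho for Gaussian elimination with partial pivoting:
    the largest element in any reduced matrix over the largest element
    in the original matrix."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    denom = np.abs(A).max()
    biggest = denom
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # partial pivoting
        A[[k, p]] = A[[p, k]]
        m = A[k+1:, k] / A[k, k]
        A[k+1:, k:] -= np.outer(m, A[k, k:])
        biggest = max(biggest, np.abs(A).max())
    return biggest / denom

rng = np.random.default_rng(1)
n, trials = 50, 20
rhos = [growth_factor(rng.standard_normal((n, n))) for _ in range(trials)]
print(max(rhos))   # typically on the order of n^(2/3) ~ 14, far below n
```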
Their final section, \"Conclusions\", begins:<\/p><p>\r\n<p style=\"margin-left:3ex;\">\r\nIs Gaussian elimination with partial pivoting stable on average?\r\nEverything we know on the subject indicates that the answer is emphatically\r\nyes, and that one needs no hypotheses beyond statistical properties to\r\naccount for the success of this algorithm during nearly half a century of\r\ndigital computation.\r\n<\/p>\r\n<\/p><h4>Worst Case Growth<a name=\"946484f6-d51e-4a7b-9c86-c4d2d3ab01df\"><\/a><\/h4><p>The Higham Brothers, Nick and Des Higham, published a paper in 1989 about growth factors in Gaussian elimination.  Most of their paper applies to any kind of pivoting and involves growth that is $O(n)$.  But they do have one theorem that applies only to partial pivoting and that characterizes all matrices that have $\\rho = 2^{n-1}$.  Essentially these matrices have to have the behavior we have seen in <tt>gfpp<\/tt>, namely no actual pivoting and each step adds a row to all the later rows, doubling the values in the last column in the process.<\/p><h4>Exponential Growth in Practice<a name=\"78e4fb41-296a-48f1-84dd-0f3d4851de44\"><\/a><\/h4><p>In two different papers, Stephen Wright in 1993 and Les Foster in 1994, the authors present classes of matrices encountered in actual computational practice for which Gaussian elimination with partial pivoting blows up with exponential growth in the elements in the last few columns of <tt>U<\/tt>. Wright's application is a two-point boundary value problem for an ordinary differential equation.  The discrete approximation produces a band matrix. Foster's application is a Volterra integral equation model for population dynamics.  The discrete approximation produces a lower trapezoidal matrix. In both cases the matrices are augmented by several final columns. Certain choices of parameters cause the elimination to proceed with no pivoting and result in exponential growth in the final columns. 
The behavior is similar to <tt>gfpp<\/tt> and the worst case growth in the Highams' theorem.<\/p><h4>Complete Pivoting<a name=\"c059221d-b661-4d19-be0f-ebbd8bbfc52d\"><\/a><\/h4><p>Complete pivoting requires $O(n^3)$ comparisons instead of the $O(n^2)$ required by partial pivoting.  More importantly, the repeated access to the entire matrix completely changes the memory access patterns.  With modern computer storage hierarchies, including cache and virtual memory, this is an all-important consideration.<\/p><p>The growth factor $\\rho$ with complete pivoting is interesting and I plan to investigate it in my next blog.<\/p><h4>lugui<a name=\"ef45ee8d-c2d5-4180-9cc7-c6d2d90b124f\"><\/a><\/h4><p><tt>lugui<\/tt> is one of the demonstration programs included with <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/37976-numerical-computing-with-matlab\">Numerical Computing with MATLAB<\/a>. I will also describe it in my next blog. For now, I will just use it to generate a graphic for this blog. Here is the result of a factorization, the lower triangular factor <tt>L<\/tt> in green and the upper triangular factor <tt>U<\/tt> in blue.<\/p><pre class=\"codeinput\">   A = gfpp(6);\r\n   lugui(A,<span class=\"string\">'partial'<\/span>)\r\n<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/cleve\/partial_pivot_blog_01.png\" alt=\"\"> <h4>References<a name=\"9636a198-d45d-481b-8f17-b5bc708b143f\"><\/a><\/h4><p>Nicholas J. Higham and Desmond J. Higham, \"Large Growth Factors in Gaussian Elimination with Pivoting\", <a href=\"http:\/\/www.maths.manchester.ac.uk\/~higham\/papers\/hihi89.pdf\">&lt;http:\/\/www.maths.manchester.ac.uk\/~higham\/papers\/hihi89.pdf<\/a>&gt;, SIAM J. Matrix Anal. Appl. 10, 155-164, 1989.<\/p><p>Lloyd N. Trefethen and Robert S. Schreiber, \"Average-case Stability of Gaussian Elimination\", <a href=\"https:\/\/courses.engr.illinois.edu\/cs591mh\/trefethen\/trefethen_schreiber.pdf\">&lt;https:\/\/courses.engr.illinois.edu\/cs591mh\/trefethen\/trefethen_schreiber.pdf<\/a>&gt;, SIAM J. Matrix Anal. Appl. 11, 335-360, 1990.<\/p><p>Stephen J. 
Wright, \"A Collection of Problems for Which Gaussian Elimination with Partial Pivoting is Unstable\", <a title=\"No longer working\">&lt;ftp:\/\/info.mcs.anl.gov\/pub\/tech_reports\/reports\/P266.ps.Z<\/a>&gt;, SIAM J. Sci. Statist. Comput. 14, 231-238, 1993.<\/p><p>Leslie Foster, \"Gaussian Elimination Can Fail In Practice\", <a href=\"http:\/\/www.math.sjsu.edu\/~foster\/geppfail.pdf\">&lt;http:\/\/www.math.sjsu.edu\/~foster\/geppfail.pdf<\/a>&gt;, SIAM J. Matrix Anal. Appl. 15, 1354-1362, 1994.<\/p><p style=\"text-align: right; font-size: xx-small; font-weight:lighter; font-style: italic; color: gray\">Published with MATLAB&reg; R2014a<br><\/p><\/div>","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/images\/cleve\/partial_pivot_blog_01.png\" onError=\"this.style.display ='none';\" \/><\/div><!--introduction--><p>In rare cases, Gaussian elimination with partial pivoting is unstable. But the situations are so unlikely that we continue to use the algorithm as the foundation for our matrix computations.... <a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/cleve\/2014\/08\/04\/gaussian-elimination-with-partial-pivoting\/\">read more >><\/a><\/p>","protected":false},"author":78,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[4,6,16],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/1030"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/comments?post=1030"}],"version-history":[{"count":10,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/1030\/revisions"}],"predecessor-version":[{"id":4656,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/1030\/revisions\/4656"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media?parent=1030"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/categories?post=1030"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/tags?post=1030
"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}