{"id":2159,"date":"2016-11-29T16:15:30","date_gmt":"2016-11-29T21:15:30","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=2159"},"modified":"2016-11-29T16:15:30","modified_gmt":"2016-11-29T21:15:30","slug":"four-fundamental-subspaces-of-linear-algebra-corrected","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2016\/11\/29\/four-fundamental-subspaces-of-linear-algebra-corrected\/","title":{"rendered":"Four Fundamental Subspaces of Linear Algebra, Corrected"},"content":{"rendered":"\r\n<div class=\"content\"><!--introduction--><p>(Please replace the erroneous posting from yesterday, Nov. 28, with this corrected version.)<\/p><p>Here is a very short course in Linear Algebra. The Singular Value Decomposition provides a natural basis for Gil Strang's Four Fundamental Subspaces.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/four_spaces.jpg\" alt=\"\"> <\/p><p>Screen shot from Gil Strang MIT\/MathWorks video lecture, <a href=\"http:\/\/www.youtube.com\/watch?v=ggWYkes-n6E\">\"The Big Picture of Linear Algebra\"<\/a>.<\/p><!--\/introduction--><h3>Contents<\/h3><div><ul><li><a href=\"#5ebb5ebe-4935-4381-92e8-7a86b323cd6d\">Gil Strang<\/a><\/li><li><a href=\"#e99f56eb-2028-4b93-9c8b-05e263f8921b\">The Four Subspaces<\/a><\/li><li><a href=\"#ab7b4099-f212-4017-8ea8-496763419fb8\">Dimension and rank.<\/a><\/li><li><a href=\"#333a0a41-a4ff-45ca-a7c5-94f82d609abc\">The Singular Value Decomposition<\/a><\/li><li><a href=\"#77a3ccb7-e47b-4600-92a7-7c86df32b90a\">Two Subspaces<\/a><\/li><li><a href=\"#364356f4-34e8-4a94-bd6d-c3d638d5bff5\">Two More Subspaces<\/a><\/li><li><a href=\"#eb30b1b9-6ec2-4e7d-84f7-d8937b983078\">Four Lines<\/a><\/li><li><a href=\"#183c5072-363f-4b56-a6bf-cb209af3958e\">References<\/a><\/li><\/ul><\/div><h4>Gil Strang<a name=\"5ebb5ebe-4935-4381-92e8-7a86b323cd6d\"><\/a><\/h4><p>Gil Strang tells me that he began to think about linear algebra in terms of 
four fundamental subspaces in the 1970's when he wrote the first edition of his textbook, <i>Introduction to Linear Algebra<\/i>. The fifth edition, which was published last May, features the spaces <a href=\"http:\/\/bookstore.siam.org\/wc14\">on the cover<\/a>.<\/p><p>The concept is a centerpiece in his <a href=\"http:\/\/ocw.mit.edu\/courses\/mathematics\/18-06-linear-algebra-spring-2010\/video-lectures\/lecture-10-the-four-fundamental-subspaces\">video lectures<\/a> for MIT course 18.06. It even found its way into the <a href=\"http:\/\/www.youtube.com\/watch?v=ZvL88xqYSak\">new video series<\/a> about ordinary differential equations that he and I made for MIT and MathWorks.  His paper included in the notes for 18.06 is referenced below.<\/p><h4>The Four Subspaces<a name=\"e99f56eb-2028-4b93-9c8b-05e263f8921b\"><\/a><\/h4><p>Suppose that $A$ is an $m$-by-$n$ matrix that maps vectors in $R^n$ to vectors in $R^m$. The four fundamental subspaces associated with $A$, two in $R^n$ and two in $R^m$, are:<\/p><div><ul><li>column space of $A$, the set of all $y$ in $R^m$   resulting from $y = Ax$,<\/li><li>row space of $A$, the set of all $x$ in $R^n$   resulting from $x = A^Ty$,<\/li><li>null space of $A$, the set of all $x$ in $R^n$   for which $Ax = 0$,<\/li><li>left null space of $A$, the set of all $y$ in $R^m$   for which $A^T y = 0$.<\/li><\/ul><\/div><p>The row space and the null space are <i>orthogonal<\/i> to each other and together span all of $R^n$. The column space and the left null space are also <i>orthogonal<\/i> to each other and together span all of $R^m$.<\/p><h4>Dimension and rank.<a name=\"ab7b4099-f212-4017-8ea8-496763419fb8\"><\/a><\/h4><p>The <i>dimension<\/i> of a subspace is the number of linearly independent vectors required to span that space. 
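<\/p><p>As a quick numerical aside, MATLAB's orth and null functions return orthonormal bases for these four subspaces directly, and the number of columns in each basis is the dimension of the corresponding subspace. Here is a sketch; the rank-one example matrix is arbitrary, chosen only for illustration.<\/p><pre class=\"language-matlab\">% An arbitrary 3-by-2 rank-one example, so m = 3, n = 2.\r\nA = [1 2; 3 6; 4 8];\r\nC = orth(A)     % orthonormal basis for the column space, in R^3\r\nR = orth(A')    % orthonormal basis for the row space, in R^2\r\nN = null(A)     % orthonormal basis for the null space, in R^2\r\nL = null(A')    % orthonormal basis for the left null space, in R^3\r\nR'*N            % zero: the row space is orthogonal to the null space\r\nC'*L            % zero: the column space is orthogonal to the left null space\r\n<\/pre><p>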
<i>The Fundamental Theorem of Linear Algebra<\/i> is<\/p><div><ul><li>The dimension of the row space is equal to the dimension of the column space.<\/li><\/ul><\/div><p>In other words, the number of linearly independent rows is equal to the number of linearly independent columns.  This may seem obvious, but it is actually a subtle fact that requires proof.<\/p><p>The <i>rank<\/i> of a matrix is this number of linearly independent rows or columns.<\/p><h4>The Singular Value Decomposition<a name=\"333a0a41-a4ff-45ca-a7c5-94f82d609abc\"><\/a><\/h4><p>The natural bases for the four fundamental subspaces are provided by the SVD, the Singular Value Decomposition, of $A$.<\/p><p>$$ A = U \\Sigma V^T $$<\/p><p>The matrices $U$ and $V$ are <i>orthogonal<\/i>.<\/p><p>$$ U^T U = I_m, \\ \\ V^T V = I_n $$<\/p><p>You can think of orthogonal matrices as multidimensional generalizations of two-dimensional rotations. The matrix $\\Sigma$ is <i>diagonal<\/i>, so its only nonzero elements are on the main diagonal.<\/p><p>The shape and size of these matrices are important.  The matrix $A$ is rectangular, say with $m$ rows and $n$ columns; $U$ is square, with the same number of rows as $A$; $V$ is also square, with the same number of columns as $A$; and $\\Sigma$ is the same size as $A$. Here is a picture of this equation when $A$ is tall and skinny, so $m &gt; n$. The diagonal elements of $\\Sigma$ are the <i>singular values<\/i>, shown as blue dots.  
All of the other elements of $\\Sigma$ are zero.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/USVT.png\" alt=\"\"> <\/p><p>The signs and the ordering of the columns in $U$ and $V$ can always be taken so that the singular values are nonnegative and arranged in decreasing order.<\/p><p>For any diagonal matrix like $\\Sigma$, it is clear that the rank, which is the number of independent rows or columns, is just the number of nonzero diagonal elements.<\/p><p>In MATLAB, the SVD is computed by the statement:<\/p><pre class=\"language-matlab\">[U,Sigma,V] = svd(A)\r\n<\/pre><p>With inexact floating-point computation, it is appropriate to take the rank to be the number of <i>nonnegligible<\/i> diagonal elements.  So the function<\/p><pre class=\"language-matlab\">r = rank(A)\r\n<\/pre><p>counts the number of singular values larger than a tolerance.<\/p><h4>Two Subspaces<a name=\"77a3ccb7-e47b-4600-92a7-7c86df32b90a\"><\/a><\/h4><p>Multiply both sides of $A = U\\Sigma V^T $ on the right by $V$. Since $V^T V = I$, we find<\/p><p>$$ AV = U\\Sigma $$<\/p><p>Here is the picture. I've drawn a green line after column $r$ to show the rank. 
The only nonzero elements of $\\Sigma$, the singular values, are the blue dots.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/AV.png\" alt=\"\"> <\/p><p>Write out this equation column by column.<\/p><p>$$ Av_j = \\sigma_j u_j, \\ \\ j = 1,...,r $$<\/p><p>$$ Av_j = 0, \\ \\ j = r+1,...,n $$<\/p><p>This says that $A$ maps the first $r$ columns of $V$ onto nonzero multiples of the first $r$ columns of $U$ and maps the remaining columns of $V$ onto zero.<\/p><div><ul><li>$U(:,1:r)$ spans the column space.<\/li><li>$V(:,r+1:n)$ spans the null space.<\/li><\/ul><\/div><h4>Two More Subspaces<a name=\"364356f4-34e8-4a94-bd6d-c3d638d5bff5\"><\/a><\/h4><p>Transpose the equation $A = U\\Sigma V^T $ and multiply both sides on the right by $U$.  Since $U^T U = I$, we find<\/p><p>$$ A^T U = V \\Sigma^T $$<\/p><p>Here's the picture, with the green line at the rank.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/ATU.png\" alt=\"\"> <\/p><p>Write this out column by column.<\/p><p>$$ A^T u_j = \\sigma_j v_j, \\ \\ j = 1,...,r $$<\/p><p>$$ A^T u_j = 0, \\ \\ j = r+1,...,m $$<\/p><p>This says that $A^T$ maps the first $r$ columns of $U$ onto nonzero multiples of the first $r$ columns of $V$ and maps the remaining columns of $U$ onto zero.<\/p><div><ul><li>$V(:,1:r)$ spans the row space.<\/li><li>$U(:,r+1:m)$ spans the left null space.<\/li><\/ul><\/div><h4>Four Lines<a name=\"eb30b1b9-6ec2-4e7d-84f7-d8937b983078\"><\/a><\/h4><p>Here is an example involving lines in two dimensions, so $m = n = 2$. 
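<\/p><p>Before working this small example by hand, the basis statements in the two preceding sections can be spot-checked numerically on any matrix. Here is a sketch using an arbitrary random matrix of rank two; each computed norm should be at roundoff level.<\/p><pre class=\"language-matlab\">% Spot-check the SVD bases on a random 5-by-4 matrix of rank 2.\r\nA = randn(5,2)*randn(2,4);\r\n[U,Sigma,V] = svd(A);\r\nr = rank(A);                                % r = 2 here\r\nnorm(A*V(:,1:r) - U(:,1:r)*Sigma(1:r,1:r))  % A*v_j = sigma_j*u_j\r\nnorm(A*V(:,r+1:end))                        % A maps V(:,r+1:n) to zero\r\nnorm(A'*U(:,r+1:end))                       % A' maps U(:,r+1:m) to zero\r\n<\/pre><p>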
Start with these vectors.<\/p><pre class=\"codeinput\">   u = [-3 4]'\r\n   v = [1 3]'\r\n<\/pre><pre class=\"codeoutput\">u =\r\n    -3\r\n     4\r\nv =\r\n     1\r\n     3\r\n<\/pre><p>The matrix $A$ is their outer product.<\/p><pre class=\"codeinput\">   A = u*v'\r\n<\/pre><pre class=\"codeoutput\">A =\r\n    -3    -9\r\n     4    12\r\n<\/pre><p>Compute the SVD.<\/p><pre class=\"codeinput\">   [U,Sigma,V] = svd(A)\r\n<\/pre><pre class=\"codeoutput\">U =\r\n   -0.6000    0.8000\r\n    0.8000    0.6000\r\nSigma =\r\n   15.8114         0\r\n         0         0\r\nV =\r\n    0.3162   -0.9487\r\n    0.9487    0.3162\r\n<\/pre><p>As expected, $\\Sigma$ has only one nonzero singular value, so the rank is $r = 1$.<\/p><p>The first left and right singular vectors are our starting vectors, normalized to have unit length.<\/p><pre class=\"codeinput\">   ubar = u\/norm(u)\r\n   vbar = v\/norm(v)\r\n<\/pre><pre class=\"codeoutput\">ubar =\r\n   -0.6000\r\n    0.8000\r\nvbar =\r\n    0.3162\r\n    0.9487\r\n<\/pre><p>The columns of $A$ are proportional to each other, and to $\\bar{u}$. So the column space is just the line generated by multiples of either column, and $\\bar{u}$ is the normalized basis vector for the column space. The columns of $A^T$ are proportional to each other, and to $\\bar{v}$. 
So $\\bar{v}$ is the normalized basis vector for the row space.<\/p><p>The only nonzero singular value is the product of the normalizing factors.<\/p><pre class=\"codeinput\">   sigma = norm(u)*norm(v)\r\n<\/pre><pre class=\"codeoutput\">sigma =\r\n   15.8114\r\n<\/pre><p>The second right and left singular vectors, that is, the second columns of $V$ and $U$, provide bases for the null spaces of $A$ and $A^T$.<\/p><p>Here is the picture.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/four_lines.png\" alt=\"\"> <\/p><h4>References<a name=\"183c5072-363f-4b56-a6bf-cb209af3958e\"><\/a><\/h4><p>Gilbert Strang, \"The Four Fundamental Subspaces: 4 Lines\", undated notes for MIT course 18.06, <a href=\"http:\/\/web.mit.edu\/18.06\/www\/Essays\/newpaper_ver3.pdf\">&lt;http:\/\/web.mit.edu\/18.06\/www\/Essays\/newpaper_ver3.pdf<\/a>&gt;<\/p><p>Gilbert Strang, <i>Introduction to Linear Algebra<\/i>, Wellesley-Cambridge Press, fifth edition, 2016, x+574 pages, <a href=\"http:\/\/bookstore.siam.org\/wc14\">&lt;http:\/\/bookstore.siam.org\/wc14<\/a>&gt;<\/p><p>Gilbert Strang, \"The Fundamental Theorem of Linear Algebra\", <i>The American Mathematical Monthly<\/i>, Vol. 100, No. 9 (Nov. 1993), pp. 
848-855, <a href=\"http:\/\/www.jstor.org\/stable\/2324660?seq=1#page_scan_tab_contents\">&lt;http:\/\/www.jstor.org\/stable\/2324660?seq=1#page_scan_tab_contents<\/a>&gt;, also available at <a href=\"http:\/\/www.souravsengupta.com\/cds2016\/lectures\/Strang_Paper1.pdf\">&lt;http:\/\/www.souravsengupta.com\/cds2016\/lectures\/Strang_Paper1.pdf<\/a>&gt;<\/p><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\">Published with MATLAB&reg; R2016b<br><\/p><\/div>","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/four_spaces.jpg\" onError=\"this.style.display ='none';\" \/><\/div><!--introduction--><p>(Please replace the erroneous posting from yesterday, Nov. 28, with this corrected version.)... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/cleve\/2016\/11\/29\/four-fundamental-subspaces-of-linear-algebra-corrected\/\">read more >><\/a><\/p>","protected":false},"author":78,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[6,8,30],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/2159"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/comments?post=2159"}],"version-history":[{"count":1,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/2159\/revisions"}],"predecessor-version":[{"id":2160,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/2159\/revisions\/2160"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media?parent=2159"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/categories?post=2159"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/tags?post=2159"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}