{"id":165,"date":"2012-07-02T12:16:14","date_gmt":"2012-07-02T17:16:14","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=165"},"modified":"2013-05-02T10:05:30","modified_gmt":"2013-05-02T15:05:30","slug":"symmetric-pair-decomposition","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2012\/07\/02\/symmetric-pair-decomposition\/","title":{"rendered":"Symmetric Pair Decomposition"},"content":{"rendered":"\r\n<!DOCTYPE html\r\n  PUBLIC \"-\/\/W3C\/\/DTD HTML 4.01 Transitional\/\/EN\">\r\n<style type=\"text\/css\">\r\n\r\nh1 { font-size:18pt; }\r\nh2.titlebg { font-size:13pt; }\r\nh3 { color:#4A4F55; padding:0px; margin:5px 0px 5px; font-family:Arial, Helvetica, sans-serif; font-size:11pt; font-weight:bold; line-height:140%; border-bottom:1px solid #d6d4d4; display:block; }\r\nh4 { color:#4A4F55; padding:0px; margin:0px 0px 5px; font-family:Arial, Helvetica, sans-serif; font-size:10pt; font-weight:bold; line-height:140%; border-bottom:1px solid #d6d4d4; display:block; }\r\n   \r\np { padding:0px; margin:0px 0px 20px; }\r\nimg { padding:0px; margin:0px 0px 20px; border:none; }\r\np img, pre img, tt img, li img { margin-bottom:0px; } \r\n\r\nul { padding:0px; margin:0px 0px 20px 23px; list-style:square; }\r\nul li { padding:0px; margin:0px 0px 7px 0px; background:none; }\r\nul li ul { padding:5px 0px 0px; margin:0px 0px 7px 23px; }\r\nul li ol li { list-style:decimal; }\r\nol { padding:0px; margin:0px 0px 20px 0px; list-style:decimal; }\r\nol li { padding:0px; margin:0px 0px 7px 23px; list-style-type:decimal; }\r\nol li ol { padding:5px 0px 0px; margin:0px 0px 7px 0px; }\r\nol li ol li { list-style-type:lower-alpha; }\r\nol li ul { padding-top:7px; }\r\nol li ul li { list-style:square; }\r\n\r\npre, tt, code { font-size:12px; }\r\npre { margin:0px 0px 20px; }\r\npre.error { color:red; }\r\npre.codeinput { padding:10px; border:1px solid #d3d3d3; background:#f7f7f7; }\r\npre.codeoutput { padding:10px 11px; margin:0px 0px 20px; 
color:#4c4c4c; }\r\n\r\n@media print { pre.codeinput, pre.codeoutput { word-wrap:break-word; width:100%; } }\r\n\r\nspan.keyword { color:#0000FF }\r\nspan.comment { color:#228B22 }\r\nspan.string { color:#A020F0 }\r\nspan.untermstring { color:#B20000 }\r\nspan.syscmd { color:#B28C00 }\r\n\r\n.footer { width:auto; padding:10px 0px; margin:25px 0px 0px; border-top:1px dotted #878787; font-size:0.8em; line-height:140%; font-style:italic; color:#878787; text-align:left; float:none; }\r\n.footer p { margin:0px; }\r\n\r\n  <\/style><div class=\"content\"><!--introduction--><p>An esoteric fact about matrices is that any real matrix can be written as the product of two symmetric matrices. I've known about this fact for years, but never seriously explored the computational aspects. So I'm using this post to clarify my own understanding of what I'll call the <i>symmetric pair decomposition<\/i>. It turns out that there are open questions. I don't think we know how to reliably compute the factors. 
But I also have to admit that, even if we could compute them, I don't know of any practical use.<\/p><!--\/introduction--><h3>Contents<\/h3><div><ul><li><a href=\"#1d55040b-1271-4bb6-9980-4afaca49b25c\">Theorem<\/a><\/li><li><a href=\"#a92c6244-f1e0-48eb-b7c6-485da865d328\">Almost Proof<\/a><\/li><li><a href=\"#11db6cb0-af8a-4454-852f-b891f81ef1b9\">Degrees of Freedom<\/a><\/li><li><a href=\"#def5f727-dca7-44e5-a7c7-023019e61a7d\">Moler's Rules<\/a><\/li><li><a href=\"#ac3122ca-d721-437e-b2d8-368675d6bb71\">LU Decomposition<\/a><\/li><li><a href=\"#0081e870-7858-450e-9fd2-3c101faf7c91\">Magic Square<\/a><\/li><li><a href=\"#81f7a44f-08bf-48f8-a88b-d01e2eb5d75c\">Nearly Defective<\/a><\/li><li><a href=\"#96adc58a-7542-49dd-a217-c6b2b498aca0\">A Better Decomposition<\/a><\/li><\/ul><\/div><h4>Theorem<a name=\"1d55040b-1271-4bb6-9980-4afaca49b25c\"><\/a><\/h4><p>Not many people know about this theorem.<\/p><p><i>Theorem: Any real matrix is equal to the product of two real symmetric matrices.<\/i><\/p><h4>Almost Proof<a name=\"a92c6244-f1e0-48eb-b7c6-485da865d328\"><\/a><\/h4><p>At first glance this theorem has nothing to do with eigenvalues. But here is the beginning of a proof, and a possible algorithm. Suppose that a real matrix $A$ has real, distinct eigenvalues. Then it can be diagonalized by the matrix $V$ of its eigenvectors.<\/p><p>$$A = V D V^{-1}$$<\/p><p>Because we are assuming there are no multiple eigenvalues, the matrix $V$ exists and is nonsingular, and the matrix $D$ is real and diagonal. Let<\/p><p>$$S_1 = V D V^T$$<\/p><p>$$S_2 = V^{-T} V^{-1}$$<\/p><p>Then $S_1$ and $S_2$ are real, symmetric, and their product is<\/p><p>$$S_1 S_2 = A$$<\/p><p>This argument is not a proof.  It just makes the theorem plausible. The challenge comes when the matrix has repeated eigenvalues and lacks a full set of eigenvectors, so it cannot be diagonalized. 
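When the eigenvalues are real and distinct, though, the construction is easy to try numerically. Here is a minimal sketch in Python with NumPy (my experiments below use MATLAB and the Symbolic Toolbox; this floating-point translation is my own assumption, not the post's code):

```python
import numpy as np

# Hedged sketch: for a matrix with real, distinct eigenvalues,
# build the two symmetric factors from A = V D V^{-1}.
A = np.array([[8., 1., 6.],
              [3., 5., 7.],
              [4., 9., 2.]])        # magic(3): eigenvalues 15, +/- 2*sqrt(6)

d, V = np.linalg.eig(A)             # columns of V are eigenvectors
D = np.diag(d)

S1 = V @ D @ V.T                    # S1 = V D V^T       (symmetric)
S2 = np.linalg.inv(V @ V.T)         # S2 = V^{-T} V^{-1} (symmetric)

print(np.allclose(S1, S1.T))        # True
print(np.allclose(S2, S2.T))        # True
print(np.allclose(S1 @ S2, A))      # True: S1*S2 recovers A
```

The cancellation S1 S2 = V D V^T (V V^T)^{-1} = V D V^{-1} = A needs nothing beyond a nonsingular V.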
A complete proof would transform the matrix to its Rational Canonical Form or its Jordan Canonical form and construct explicit symmetric factors for the blocks in the canonical form.<\/p><h4>Degrees of Freedom<a name=\"11db6cb0-af8a-4454-852f-b891f81ef1b9\"><\/a><\/h4><p>If $A$ is $n$ -by- $n$ and<\/p><p>$$A = S_1 S_2$$<\/p><p>where each of the symmetric matrices has $n(n+1)\/2$ independent elements, then this is $n^2$ nonlinear equations in $n^2+n$ unknowns. It looks like there could be an $n$-parameter family of solutions. In my almost proof, each eigenvector is determined only up to a scale factor. These $n$ scale factors show up in $S_1$ and $S_2$ in complicated, nonlinear ways. I suspect that allowing complex scale factors parameterizes the complete set of solutions, but I'm not sure.<\/p><h4>Moler's Rules<a name=\"def5f727-dca7-44e5-a7c7-023019e61a7d\"><\/a><\/h4><p>My two Golden Rules of computation are:<\/p><div><ul><li>The hardest things to compute are things that do not exist.<\/li><li>The next hardest things to compute are things that are not unique.<\/li><\/ul><\/div><p>For the symmetric pair decomposition, our obscure theorem says the decomposition exists, but the degrees of freedom observation says it is probably not unique. Worse yet, the only algorithm we have requires a full set of eigenvectors, which may not exist. We will have to worry about these things.<\/p><h4>LU Decomposition<a name=\"ac3122ca-d721-437e-b2d8-368675d6bb71\"><\/a><\/h4><p>The most important decomposition in numerical linear algebra, the one we use to solve systems of simultaneous linear equations, is the LU decomposition. It expresses a permuted matrix as the product of two triangular factors.<\/p><p>$$P A = L U$$<\/p><p>The permutation matrix $P$ gives us existence and numerical stability. 
Putting ones on the diagonal of $L$ eliminates $n$ degrees of freedom and gives us uniqueness.<\/p><h4>Magic Square<a name=\"0081e870-7858-450e-9fd2-3c101faf7c91\"><\/a><\/h4><p>Our first example involves one of my favorite matrices.<\/p><pre class=\"codeinput\">A = magic(3)\r\n<\/pre><pre class=\"codeoutput\">\r\nA =\r\n\r\n     8     1     6\r\n     3     5     7\r\n     4     9     2\r\n\r\n<\/pre><p>Use the Symbolic Toolbox to compute the eigenvalues and vectors exactly.<\/p><pre class=\"codeinput\">[V,D] = eig(sym(A))\r\n<\/pre><pre class=\"codeoutput\"> \r\nV =\r\n \r\n[ (2*6^(1\/2))\/5 - 7\/5, - (2*6^(1\/2))\/5 - 7\/5, 1]\r\n[ 2\/5 - (2*6^(1\/2))\/5,   (2*6^(1\/2))\/5 + 2\/5, 1]\r\n[                   1,                     1, 1]\r\n \r\n \r\nD =\r\n \r\n[ -2*6^(1\/2),         0,  0]\r\n[          0, 2*6^(1\/2),  0]\r\n[          0,         0, 15]\r\n \r\n<\/pre><p>Notice that the elements of <tt>V<\/tt> and <tt>D<\/tt> involve $\\sqrt{6}$ and so are irrational. Now let<\/p><pre class=\"codeinput\">S1 = simplify(V*D*V')\r\nS2 = simplify(inv(V*V'))\r\n<\/pre><pre class=\"codeoutput\"> \r\nS1 =\r\n \r\n[ 1047\/25, -57\/25,  27\/5]\r\n[  -57\/25, 567\/25, 123\/5]\r\n[    27\/5,  123\/5,    15]\r\n \r\n \r\nS2 =\r\n \r\n[ 3\/16, 1\/12,  1\/16]\r\n[ 1\/12,  1\/2,  -1\/4]\r\n[ 1\/16, -1\/4, 25\/48]\r\n \r\n<\/pre><p>The $\\sqrt{6}$ has disappeared. You can see that <tt>S1<\/tt> and <tt>S2<\/tt> are symmetric, have rational entries, and, as advertised, their product is<\/p><pre class=\"codeinput\">Product = S1*S2\r\n<\/pre><pre class=\"codeoutput\"> \r\nProduct =\r\n \r\n[ 8, 1, 6]\r\n[ 3, 5, 7]\r\n[ 4, 9, 2]\r\n \r\n<\/pre><p>Let's play with the scale factors a bit. 
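In floating point, any nonzero rescaling of the eigenvector columns works: the two factors move, but their product does not. A quick sketch in Python with NumPy (an assumed translation on my part; the symbolic experiment follows) makes the point:

```python
import numpy as np

# Hedged sketch: rescale the eigenvector columns by arbitrary
# nonzero factors and watch S1 and S2 change while S1*S2 = A holds.
A = np.array([[8., 1., 6.],
              [3., 5., 7.],
              [4., 9., 2.]])

d, V = np.linalg.eig(A)
s = np.array([2.0, -0.5, 3.0])       # arbitrary nonzero scale factors
W = V * s                            # column j of V scaled by s[j]

T1 = W @ np.diag(d) @ W.T            # rescaled symmetric pair
T2 = np.linalg.inv(W @ W.T)
S1 = V @ np.diag(d) @ V.T            # original first factor, for comparison

print(np.allclose(T1, S1))           # False: the factors move ...
print(np.allclose(T1 @ T2, A))       # True:  ... but the product does not
```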
I particularly like<\/p><pre class=\"codeinput\">V(:,3) = 2\r\nS1 = simplify(V*D*V')\/48\r\nS2 = 48*simplify(inv(V*V'))\r\n<\/pre><pre class=\"codeoutput\"> \r\nV =\r\n \r\n[ (2*6^(1\/2))\/5 - 7\/5, - (2*6^(1\/2))\/5 - 7\/5, 2]\r\n[ 2\/5 - (2*6^(1\/2))\/5,   (2*6^(1\/2))\/5 + 2\/5, 2]\r\n[                   1,                     1, 2]\r\n \r\n \r\nS1 =\r\n \r\n[ 181\/100,  89\/100, 21\/20]\r\n[  89\/100, 141\/100, 29\/20]\r\n[   21\/20,   29\/20,   5\/4]\r\n \r\n \r\nS2 =\r\n \r\n[  5,   0,  -1]\r\n[  0,  20, -16]\r\n[ -1, -16,  21]\r\n \r\n<\/pre><p>Now <tt>S1<\/tt> has decimal fraction entries, and <tt>S2<\/tt> has integer entries, including two zeros. Let's leave the symbolic world.<\/p><pre class=\"codeinput\">S1 = double(S1)\r\nS2 = double(S2)\r\nProduct = S1*S2\r\n<\/pre><pre class=\"codeoutput\">\r\nS1 =\r\n\r\n    1.8100    0.8900    1.0500\r\n    0.8900    1.4100    1.4500\r\n    1.0500    1.4500    1.2500\r\n\r\n\r\nS2 =\r\n\r\n     5     0    -1\r\n     0    20   -16\r\n    -1   -16    21\r\n\r\n\r\nProduct =\r\n\r\n     8     1     6\r\n     3     5     7\r\n     4     9     2\r\n\r\n<\/pre><p>I can't promise to get such pretty results with other examples.<\/p><h4>Nearly Defective<a name=\"81f7a44f-08bf-48f8-a88b-d01e2eb5d75c\"><\/a><\/h4><p>Suppose I want to compute the symmetric pair decomposition of this perturbation of a Jordan block.<\/p><pre class=\"codeinput\">e = sym(<span class=\"string\">'e'<\/span>,<span class=\"string\">'positive'<\/span>);\r\nA = [2 1 0 0; 0 2 1 0; 0 0 2 1; e 0 0 2]\r\n<\/pre><pre class=\"codeoutput\"> \r\nA =\r\n \r\n[ 2, 1, 0, 0]\r\n[ 0, 2, 1, 0]\r\n[ 0, 0, 2, 1]\r\n[ e, 0, 0, 2]\r\n \r\n<\/pre><p>Here is the eigenvalue decomposition. The vectors have been scaled so that the last component is equal to 1. 
The eigenvalues are located on a circle in the complex plane centered at 2, with a radius of <tt>e^(1\/4)<\/tt>, which is the signature of an eigenvalue of multiplicity 4.<\/p><pre class=\"codeinput\">[V,D] = eig(A);\r\nV = simplify(V)\r\nD = simplify(D)\r\n<\/pre><pre class=\"codeoutput\"> \r\nV =\r\n \r\n[ -i\/e^(3\/4),  i\/e^(3\/4), 1\/e^(3\/4), -1\/e^(3\/4)]\r\n[ -1\/e^(1\/2), -1\/e^(1\/2), 1\/e^(1\/2),  1\/e^(1\/2)]\r\n[  i\/e^(1\/4), -i\/e^(1\/4), 1\/e^(1\/4), -1\/e^(1\/4)]\r\n[          1,          1,         1,          1]\r\n \r\n \r\nD =\r\n \r\n[ 2 - e^(1\/4)*i,             0,           0,           0]\r\n[             0, e^(1\/4)*i + 2,           0,           0]\r\n[             0,             0, e^(1\/4) + 2,           0]\r\n[             0,             0,           0, 2 - e^(1\/4)]\r\n \r\n<\/pre><p>Here is the symmetric pair decomposition resulting from this eigenvalue decomposition.<\/p><pre class=\"codeinput\">S1 = simplify(V*D*V.')\r\nS2 = simplify(inv(V*V.'))\r\n<\/pre><pre class=\"codeoutput\"> \r\nS1 =\r\n \r\n[   0, 4\/e, 8\/e, 0]\r\n[ 4\/e, 8\/e,   0, 0]\r\n[ 8\/e,   0,   0, 4]\r\n[   0,   0,   4, 8]\r\n \r\n \r\nS2 =\r\n \r\n[   0,   0, e\/4,   0]\r\n[   0, e\/4,   0,   0]\r\n[ e\/4,   0,   0,   0]\r\n[   0,   0,   0, 1\/4]\r\n \r\n<\/pre><p>Well, this sort of does the job. <tt>S1<\/tt> and <tt>S2<\/tt> are symmetric and their product is equal to <tt>A<\/tt>.<\/p><pre class=\"codeinput\">Product = S1*S2\r\n<\/pre><pre class=\"codeoutput\"> \r\nProduct =\r\n \r\n[ 2, 1, 0, 0]\r\n[ 0, 2, 1, 0]\r\n[ 0, 0, 2, 1]\r\n[ e, 0, 0, 2]\r\n \r\n<\/pre><p>But I am worried that the factors are very badly scaled. As I make <tt>e<\/tt> smaller, the large elements in <tt>S1<\/tt> get larger, and the small elements in <tt>S2<\/tt> get smaller. The decomposition breaks down.<\/p><h4>A Better Decomposition<a name=\"96adc58a-7542-49dd-a217-c6b2b498aca0\"><\/a><\/h4><p>A better decomposition is just a rotation. 
These two matrices are symmetric and their product is <tt>A<\/tt>.<\/p><pre class=\"codeinput\">S2 = sym(rot90(eye(size(A))))\r\nS1 = A\/S2\r\nProduct = S1*S2\r\n<\/pre><pre class=\"codeoutput\"> \r\nS2 =\r\n \r\n[ 0, 0, 0, 1]\r\n[ 0, 0, 1, 0]\r\n[ 0, 1, 0, 0]\r\n[ 1, 0, 0, 0]\r\n \r\n \r\nS1 =\r\n \r\n[ 0, 0, 1, 2]\r\n[ 0, 1, 2, 0]\r\n[ 1, 2, 0, 0]\r\n[ 2, 0, 0, e]\r\n \r\n \r\nProduct =\r\n \r\n[ 2, 1, 0, 0]\r\n[ 0, 2, 1, 0]\r\n[ 0, 0, 2, 1]\r\n[ e, 0, 0, 2]\r\n \r\n<\/pre><p>Can I reproduce this decomposition by rescaling the eigenvectors? Here is code that uses the symbolic solve function to compute new scale factors.  If you want to see how it works, download this M-file using the link at the end of this post, remove the semicolons in this section, and run or <tt>publish<\/tt> it again.<\/p><pre class=\"codeinput\">s = sym(<span class=\"string\">'s'<\/span>,[4,1]);\r\nV = V*diag(s);\r\nT = simplify(inv(V*V.'));\r\nsoln = solve(T(:,1)-S2(:,1));\r\ns = [soln.s1(1); soln.s2(1); soln.s3(1); soln.s4(1)]\r\n<\/pre><pre class=\"codeoutput\"> \r\ns =\r\n \r\n  (e^(3\/4)*i)^(1\/2)\/2\r\n (-e^(3\/4)*i)^(1\/2)\/2\r\n            e^(3\/8)\/2\r\n   (-e^(3\/4))^(1\/2)\/2\r\n \r\n<\/pre><p>These scale factors are complex numbers with magnitude <tt>e^(3\/8)\/2<\/tt>. Let's rescale the eigenvectors.  
Of course, the eigenvalues don't change.<\/p><pre class=\"codeinput\">[V,D] = eig(A);\r\nV = simplify(V*diag(s))\r\n<\/pre><pre class=\"codeoutput\"> \r\nV =\r\n \r\n[ -(-1)^(3\/4)\/(2*e^(3\/8)),    (-1)^(1\/4)\/(2*e^(3\/8)), 1\/(2*e^(3\/8)), -i\/(2*e^(3\/8))]\r\n[ -(-1)^(1\/4)\/(2*e^(1\/8)), -1\/(-1)^(1\/4)\/(2*e^(1\/8)), 1\/(2*e^(1\/8)),  i\/(2*e^(1\/8))]\r\n[  ((-1)^(3\/4)*e^(1\/8))\/2,   -((-1)^(1\/4)*e^(1\/8))\/2,     e^(1\/8)\/2, -(e^(1\/8)*i)\/2]\r\n[  ((-1)^(1\/4)*e^(3\/8))\/2,  (1\/(-1)^(1\/4)*e^(3\/8))\/2,     e^(3\/8)\/2,  (e^(3\/8)*i)\/2]\r\n \r\n<\/pre><p>Now these eigenvectors produce the same stable decomposition as the rotation.<\/p><pre class=\"codeinput\">S1 = simplify(V*D*V.')\r\nS2 = simplify(inv(V*V.'))\r\n<\/pre><pre class=\"codeoutput\"> \r\nS1 =\r\n \r\n[ 0, 0, 1, 2]\r\n[ 0, 1, 2, 0]\r\n[ 1, 2, 0, 0]\r\n[ 2, 0, 0, e]\r\n \r\n \r\nS2 =\r\n \r\n[ 0, 0, 0, 1]\r\n[ 0, 0, 1, 0]\r\n[ 0, 1, 0, 0]\r\n[ 1, 0, 0, 0]\r\n \r\n<\/pre><p>Can carefully choosing the scaling of the eigenvectors be the basis for a sound numerical algorithm? I doubt it. We're still trying to compute something that is not unique, using factors that almost do not exist. 
It's pretty shaky.<\/p><script language=\"JavaScript\"> <!-- \r\n    function grabCode_35f02c9afa664ddd813f0286d373485b() {\r\n        \/\/ Remember the title so we can use it in the new page\r\n        title = document.title;\r\n\r\n        \/\/ Break up these strings so that their presence\r\n        \/\/ in the Javascript doesn't mess up the search for\r\n        \/\/ the MATLAB code.\r\n        t1='35f02c9afa664ddd813f0286d373485b ' + '##### ' + 'SOURCE BEGIN' + ' #####';\r\n        t2='##### ' + 'SOURCE END' + ' #####' + ' 35f02c9afa664ddd813f0286d373485b';\r\n    \r\n        b=document.getElementsByTagName('body')[0];\r\n        i1=b.innerHTML.indexOf(t1)+t1.length;\r\n        i2=b.innerHTML.indexOf(t2);\r\n \r\n        code_string = b.innerHTML.substring(i1, i2);\r\n        code_string = code_string.replace(\/REPLACE_WITH_DASH_DASH\/g,'--');\r\n\r\n        \/\/ Use \/x3C\/g instead of the less-than character to avoid errors \r\n        \/\/ in the XML parser.\r\n        \/\/ Use '\\x26#60;' instead of '<' so that the XML parser\r\n        \/\/ doesn't go ahead and substitute the less-than character. 
\r\n        code_string = code_string.replace(\/\\x3C\/g, '\\x26#60;');\r\n\r\n        copyright = 'Copyright 2012 The MathWorks, Inc.';\r\n\r\n        w = window.open();\r\n        d = w.document;\r\n        d.write('<pre>\\n');\r\n        d.write(code_string);\r\n\r\n        \/\/ Add copyright line at the bottom if specified.\r\n        if (copyright.length > 0) {\r\n            d.writeln('');\r\n            d.writeln('%%');\r\n            if (copyright.length > 0) {\r\n                d.writeln('% _' + copyright + '_');\r\n            }\r\n        }\r\n\r\n        d.write('<\/pre>\\n');\r\n\r\n        d.title = title + ' (MATLAB code)';\r\n        d.close();\r\n    }   \r\n     --> <\/script><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\"><br><a href=\"javascript:grabCode_35f02c9afa664ddd813f0286d373485b()\"><span style=\"font-size: x-small;        font-style: italic;\">Get \r\n      the MATLAB code <noscript>(requires JavaScript)<\/noscript><\/span><\/a><br><br>\r\n      Published with MATLAB&reg; 7.14<br><\/p><p class=\"footer\"><br>\r\n      Published with MATLAB&reg; 7.14<br><\/p><\/div><!--\r\n35f02c9afa664ddd813f0286d373485b ##### SOURCE BEGIN #####\r\n%% Symmetric Pair Decomposition\r\n% An esoteric fact about matrices is that any real matrix can be written as\r\n% the product of two symmetric matrices.\r\n% I've known about this fact for years, but never seriously explored\r\n% the computational aspects.\r\n% So I'm using this post to clarify my own understanding of\r\n% what I'll call the _symmetric pair decomposition_.\r\n% It turns out that there are open questions.\r\n% I don't think we know how to reliably compute the factors.\r\n% But I also have to admit that, even if we could compute them,\r\n% I don't know of any practical use.\r\n%\r\n%% Theorem\r\n% Not many people know about this theorem.\r\n%\r\n% _Theorem: Any real matrix is equal to the product of two real\r\n% symmetric 
matrices._\r\n%\r\n%% Almost Proof\r\n% At first glance this theorem has nothing to do with eigenvalues.\r\n% But here is the beginning of a proof, and a possible algorithm.\r\n% Suppose that a real matrix $A$ has real, distinct eigenvalues.\r\n% Then it can be diagonalized by the matrix $V$ of its eigenvectors.\r\n%\r\n% $$A = V D V^{-1}$$\r\n%\r\n% Because we are assuming there are no multiple eigenvalues,\r\n% the matrix $V$ exists and is nonsingular,\r\n% and the matrix $D$ is real and diagonal.\r\n% Let\r\n%\r\n% $$S_1 = V D V^T$$\r\n%\r\n%\r\n% $$S_2 = V^{-T} V^{-1}$$\r\n%\r\n% Then $S_1$ and $S_2$ are real, symmetric, and their product is\r\n%\r\n% $$S_1 S_2 = A$$\r\n%\r\n% This argument is not a proof.  It just makes the theorem plausible.\r\n% The challenge comes when the matrix has repeated eigenvalues and\r\n% lacks a full set of eigenvectors, so it cannot be diagonalized.\r\n% A complete proof would transform the matrix to its Rational Canonical\r\n% Form or its Jordan Canonical form and construct explicit symmetric factors\r\n% for the blocks in the canonical form.\r\n\r\n%% Degrees of Freedom\r\n% If $A$ is $n$ -by- $n$ and\r\n%\r\n% $$A = S_1 S_2$$\r\n%\r\n% where each of the symmetric matrices has $n(n+1)\/2$ independent elements,\r\n% then this is $n^2$ nonlinear equations in $n^2+n$ unknowns.\r\n% It looks like there could be an $n$-parameter family of solutions.\r\n% In my almost proof, each eigenvector is determined only up to a\r\n% scale factor.\r\n% These $n$ scale factors show up in $S_1$ and $S_2$ in complicated,\r\n% nonlinear ways.\r\n% I suspect that allowing complex scale factors parameterizes\r\n% the complete set of solutions, but I'm not sure.\r\n\r\n%% Moler's Rules\r\n% My two Golden Rules of computation are:\r\n%\r\n% * The hardest things to compute are things that do not exist.\r\n% * The next hardest things to compute are things that are not unique.\r\n%\r\n% For the symmetric pair decomposition, our obscure theorem says\r\n% 
the decomposition exists,\r\n% but the degrees of freedom observation says it is probably not unique.\r\n% Worse yet, the only algorithm we have requires a full set of eigenvectors,\r\n% which may not exist.\r\n% We will have to worry about these things.\r\n\r\n%% LU Decomposition\r\n% The most important decomposition in numerical linear algebra,\r\n% the one we use to solve systems of simultaneous linear equations,\r\n% is the LU decomposition.\r\n% It expresses a permuted matrix as the product of two triangular factors.\r\n%\r\n% $$P A = L U$$\r\n%\r\n% The permutation matrix $P$ gives us existence and numerical stability.\r\n% Putting ones on the diagonal of $L$ eliminates $n$ degrees of freedom and\r\n% give us uniqueness.\r\n% \r\n\r\n%% Magic Square\r\n%\r\n% Our first example involves one of my favorite matrices.\r\n\r\nA = magic(3)\r\n\r\n%%\r\n% Use the Symbolic Toolbox to compute the eigenvalues and vectors exactly.\r\n\r\n[V,D] = eig(sym(A))\r\n\r\n%%\r\n% Notice that the elements of |V| and |D| involve $\\sqrt{6}$\r\n% and so are irrational.\r\n% Now let\r\n\r\nS1 = simplify(V*D*V')\r\nS2 = simplify(inv(V*V'))\r\n\r\n%%\r\n% The $\\sqrt{6}$ has disappeared.\r\n% You can see that |S1| and |S2| are symmetric, have rational entries,\r\n% and, as advertised, their product is\r\n\r\nProduct = S1*S2 \r\n\r\n%%\r\n% Let's play with the scale factors a bit.\r\n% I particularly like\r\n\r\nV(:,3) = 2\r\nS1 = simplify(V*D*V')\/48\r\nS2 = 48*simplify(inv(V*V'))\r\n\r\n%%\r\n% Now |S1| has decimal fraction entries, and |S2| has integer entries,\r\n% including two zeros.\r\n% Let's leave the symbolic world.\r\n\r\nS1 = double(S1)\r\nS2 = double(S2)\r\nProduct = S1*S2\r\n\r\n%%\r\n% I can't promise to get such pretty results with other examples.\r\n\r\n%% Nearly Defective\r\n% Suppose I want to compute the symmetric pair decomposition of\r\n% this perturbation of a Jordan block.\r\n\r\ne = sym('e','positive');\r\nA = [2 1 0 0; 0 2 1 0; 0 0 2 1; e 0 0 
2]\r\n\r\n%%\r\n% Here is the eigenvalue decomposition.\r\n% The vectors have been scaled so that the last component is equal to 1.\r\n% The eigenvalues are located on a circle in the complex plane\r\n% centered at 2, with a radius of |e^(1\/4)|, which is the signature\r\n% of an eigenvalue of multiplicity 4.\r\n\r\n[V,D] = eig(A);\r\nV = simplify(V)\r\nD = simplify(D)\r\n\r\n%%\r\n% Here is the symmetric pair decomposition resulting from\r\n% this eigenvalue decomposition.\r\n\r\nS1 = simplify(V*D*V.')\r\nS2 = simplify(inv(V*V.'))\r\n\r\n%%\r\n% Well, this sort of does the job.\r\n% |S1| and |S2| are symmetric and their product is equal to |A|.\r\nProduct = S1*S2\r\n\r\n%%\r\n% But I am worried that the factors are very badly scaled.\r\n% As I make |e| smaller, the large elements in |S1| get larger,\r\n% and the small elements in |S2| get smaller.\r\n% The decomposition breaks down.\r\n\r\n%% A Better Decomposition\r\n% A better decomposition is just a rotation.\r\n% These two matrices are symmetric and their product is |A|.\r\n\r\nS2 = sym(rot90(eye(size(A))))\r\nS1 = A\/S2\r\nProduct = S1*S2\r\n\r\n%%\r\n% Can I reproduce this decomposition by rescaling the eigenvectors?\r\n% Here is code that uses the symbolic solve function to compute\r\n% new scale factors.  If you want to see how it works, download this\r\n% M-file using the link at the end of this post, remove the semicolons\r\n% in this section, and run or |publish| it again.\r\n\r\ns = sym('s',[4,1]);\r\nV = V*diag(s);\r\nT = simplify(inv(V*V.'));\r\nsoln = solve(T(:,1)-S2(:,1));\r\ns = [soln.s1(1); soln.s2(1); soln.s3(1); soln.s4(1)]\r\n\r\n%%\r\n% These scale factors are complex numbers with magnitude\r\n% |e^(3\/8)\/2|.\r\n% Let's rescale the eigenvectors.  
Of course, the eigenvalues don't change.\r\n\r\n[V,D] = eig(A);\r\nV = simplify(V*diag(s))\r\n\r\n%%\r\n% Now these eigenvectors produce the same stable decomposition as the rotation.\r\n\r\nS1 = simplify(V*D*V.')\r\nS2 = simplify(inv(V*V.'))\r\n\r\n%%\r\n% Can carefully choosing the scaling of the eigenvectors\r\n% be the basis for a sound numerical algorithm?\r\n% I doubt it.\r\n% We're still trying to compute something that is\r\n% not unique, using factors that almost do not exist.\r\n% It's pretty shaky.\r\n\r\n##### SOURCE END ##### 35f02c9afa664ddd813f0286d373485b\r\n-->","protected":false},"excerpt":{"rendered":"<!--introduction--><p>An esoteric fact about matrices is that any real matrix can be written as the product of two symmetric matrices. I've known about this fact for years, but never seriously explored the computational aspects. So I'm using this post to clarify my own understanding of what I'll call the <i>symmetric pair decomposition<\/i>. It turns out that there are open questions. I don't think we know how to reliably compute the factors. But I also have to admit that, even if we could compute them, I don't know of any practical use.... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/cleve\/2012\/07\/02\/symmetric-pair-decomposition\/\">read more >><\/a><\/p>","protected":false},"author":78,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[6],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/165"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/comments?post=165"}],"version-history":[{"count":17,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/165\/revisions"}],"predecessor-version":[{"id":187,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/165\/revisions\/187"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media?parent=165"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/categories?post=165"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/tags?post=165"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}