{"id":12428,"date":"2025-02-23T16:27:51","date_gmt":"2025-02-23T21:27:51","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=12428"},"modified":"2025-02-25T11:57:39","modified_gmt":"2025-02-25T16:57:39","slug":"two-flavors-of-svd","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2025\/02\/23\/two-flavors-of-svd\/","title":{"rendered":"Two Flavors of SVD"},"content":{"rendered":"\r\n<div class=\"content\"><!--introduction-->\r\n<p>MATLAB has two different ways to compute singular values. The easiest is to compute the singular values without the singular vectors. Use <tt>svd<\/tt> with one output argument, s1.<\/p>\r\n<pre class=\"language-matlab\">\r\n    s1 = svd(A)\r\n<\/pre>\r\n<p>The alternative is to use <tt>svd<\/tt> with three outputs. Ignore the first and third output and specify the second output to be a column vector, s2.<\/p>\r\n<pre class=\"language-matlab\">\r\n    [~,s2,~] = svd(A,'vector')\r\n<\/pre>\r\n<p>The MathWorks technical support team receives calls from observant users who notice that the two approaches might produce different singular values. Which is more accurate, s1 or s2? Which is faster? Which should we use?<\/p>\r\n<p>I found myself asking the same questions.<\/p>\r\n<p>A key feature of all our experiments is the <i>rank<\/i> of the matrix. 
Let's investigate three cases: a rank 2 matrix, a low rank matrix, and a full rank matrix.<\/p>\r\n<!--\/introduction-->\r\n<h3>Contents<\/h3>\r\n<div>\r\n<ul>\r\n<li>\r\n<a href=\"#fd3a68bb-1f20-4500-b3d1-bf2099fd49e9\">Checkerboard<\/a>\r\n<\/li>\r\n<li>\r\n<a href=\"#065e6691-eb01-4a82-ae37-a9b3059a260b\">Low Rank<\/a>\r\n<\/li>\r\n<li>\r\n<a href=\"#40d675c5-bb03-40f6-9a57-c5e2787e4265\">Full Rank<\/a>\r\n<\/li>\r\n<li>\r\n<a href=\"#5c02fb6b-8753-4d35-9fbe-2212f25326bd\">Timing<\/a>\r\n<\/li>\r\n<li>\r\n<a href=\"#4e2c5535-bc57-4b67-a0dc-6baf130b19c3\">Remarks<\/a>\r\n<\/li>\r\n<li>\r\n<a href=\"#e6ae1ded-1efe-4610-9266-c60ca3540466\">Software<\/a>\r\n<\/li>\r\n<\/ul>\r\n<\/div>\r\n<h4>Checkerboard<a name=\"fd3a68bb-1f20-4500-b3d1-bf2099fd49e9\"><\/a>\r\n<\/h4>\r\n<p>The first example, motivated by a recent query, is an 80-by-80 matrix of zeros and ones derived from the <tt>checkerboard<\/tt> function in the Image Processing Toolbox.<\/p>\r\n<pre class=\"codeinput\">\r\n    A = double(checkerboard > 0);\r\n<\/pre>\r\n<p>The rank of the checkerboard matrix is 2.<\/p>\r\n<pre class=\"codeinput\">\r\n    r = rank(A)\r\n<\/pre>\r\n<pre class=\"codeoutput\">\r\n    r =\r\n        2\r\n<\/pre>\r\n<p>The <tt>image<\/tt> function provides the same portrait of A as its <tt>spy<\/tt> plot.<\/p>\r\n<pre class=\"codeinput\">\r\n    imagem(A)\r\n    title('rank 2')\r\n<\/pre>\r\n<img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/checkerboard_blog_01.png\" alt=\"\"> <p>Let's begin with the exact singular values. The two nonzero singular values are both equal to 40. 
The zero singular value has multiplicity 78.<\/p>\r\n<pre class=\"codeinput\">\r\n    s = [40; 40; zeros(78,1)];\r\n    disp('  s =')\r\n    disp(s(1:5))\r\n    disp('     :')\r\n    disp(s(end-2:end))\r\n<\/pre>\r\n<pre class=\"codeoutput\"> \r\n    s =\r\n        40\r\n        40\r\n         0\r\n         0\r\n         0\r\n         :\r\n         0\r\n         0\r\n         0\r\n<\/pre>\r\n<p>A perfect plot of the singular values of a rank 2 matrix would look like this.<\/p>\r\n<pre class=\"codeinput\">\r\n    plotem(s)\r\n    title('rank 2')\r\n<\/pre>\r\n<img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/checkerboard_blog_02.png\" alt=\"\"> <p>What do we get when we actually compute the SVD of the checkerboard?<\/p>\r\n<p>The built-in SVD function uses Householder reflections to reduce the matrix to bidiagonal form. When the vectors are not required, a divide and conquer iteration then reduces the bidiagonal to diagonal. In all our examples, after divide and conquer has found the nonzero singular values, all that remains is roundoff error. Despite this fact, the iterations are continued in order to find all singular values \"to high relative error independent of their magnitudes.\" We have roundoff in roundoff, then roundoff in roundoff in roundoff, and so on.<\/p>\r\n<p>The following logarithmic plot of s1 for the checkerboard matrix shows the phenomenon that I like to call \"cascading roundoff\". There are lines at <tt>norm(A)<\/tt> and at<\/p>\r\n<pre class=\"codeinput\">\r\n    tol = eps(norm(A))\r\n<\/pre>\r\n<pre class=\"codeoutput\">\r\n    tol =\r\n        7.1054e-15\r\n<\/pre>\r\n<p>This is the tolerance initially involved in the convergence test. 
The steps in the plot are at powers of <tt>tol<\/tt>.<\/p>\r\n<pre class=\"codeinput\">\r\n    s1 = svd(A);\r\n    plotem(s1,inf)\r\n    title('rank 2')\r\n<\/pre>\r\n<img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/checkerboard_blog_03.png\" alt=\"\"> <p>If we specify three outputs, the iterative portion of the SVD function uses a traditional QR iteration with a more conservative convergence criterion instead of divide and conquer. There is no cascading roundoff. All the s2 singular values that would be zero with exact computation are of size <tt>tol<\/tt>.<\/p>\r\n<pre class=\"codeinput\">\r\n    [~,s2,~] = svd(A,'vector');\r\n    plotem(s1,s2,[-48,4])\r\n    title('rank 2')\r\n<\/pre>\r\n<img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/checkerboard_blog_04.png\" alt=\"\"> <p>The plots of s1 and s2 are very different, and neither plot looks like the plot of the exact answer. However, all three plots agree about the double value at 40. And all the disagreements, including the cascading roundoff in s1, are below the line at <tt>tol<\/tt>.<\/p>\r\n<p>The checkerboard example shows that s1 and s2 can be very different, but s1 and s2 are much closer to each other than either is to the right answer.<\/p>\r\n<h4>Low Rank<a name=\"065e6691-eb01-4a82-ae37-a9b3059a260b\"><\/a>\r\n<\/h4>\r\n<p>For a matrix with rank between 2 and full order n, we can use the venerable Hilbert matrix. 
We have a row vector j and a column vector k in a singleton expansion.<\/p>\r\n<pre class=\"codeinput\">\r\n    n = 80;\r\n    j = 1:n;\r\n    k = (1:n)';\r\n    A = 1.\/(k+j-1);\r\n<\/pre>\r\n<p>The effective rank turns out to be 17.<\/p>\r\n<pre class=\"codeinput\">\r\n    r = rank(A)\r\n<\/pre>\r\n<pre class=\"codeoutput\">\r\n    r =\r\n        17\r\n<\/pre>\r\n<p>The elements of this Hilbert matrix vary over three orders of magnitude, so a logarithmic image is appropriate.<\/p>\r\n<pre class=\"codeinput\">\r\n    imagem(floor(log2(A)))\r\n    title('low rank')\r\n<\/pre>\r\n<img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/checkerboard_blog_05.png\" alt=\"\"> <p>Compare the one-output and the three-output singular values,<\/p>\r\n<pre class=\"codeinput\">\r\n    s1 = svd(A);\r\n    [~,s2,~] = svd(A,'vector');\r\n    plotem(s1,s2,[-22,3])\r\n    title('low rank')\r\n<\/pre>\r\n<img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/checkerboard_blog_06.png\" alt=\"\"> <p>The results s1 and s2 agree about the first 17 values and all disagreements are below the line at <tt>tol<\/tt>.<\/p>\r\n<h4>Full Rank<a name=\"40d675c5-bb03-40f6-9a57-c5e2787e4265\"><\/a>\r\n<\/h4>\r\n<p>Using the same column vector k and row vector j in another example of singleton expansion produces a full rank matrix.<\/p>\r\n<pre class=\"codeinput\">\r\n    A = min(k,j);\r\n<\/pre>\r\n<p>Check that A has full rank.<\/p>\r\n<pre class=\"codeinput\">\r\n    r = rank(A)\r\n<\/pre>\r\n<pre class=\"codeoutput\">\r\n    r =\r\n        80\r\n<\/pre>\r\n<p>The logarithm is not needed for this image.<\/p>\r\n<pre class=\"codeinput\">\r\n    imagem(A)\r\n    title('full rank')\r\n<\/pre>\r\n<img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/checkerboard_blog_07.png\" alt=\"\"> <p>Compare the one-output and the three-output singular values,<\/p>\r\n<pre 
class=\"codeinput\">\r\n    s1 = svd(A);\r\n    [~,s2,~] = svd(A,'vector');\r\n    plotem(s1,s2,[-2,4])\r\n    title('full rank')\r\n<\/pre>\r\n<img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/checkerboard_blog_08.png\" alt=\"\"> <p>With full rank, all values of s1 and s2 are essentially equal. Any line at <tt>tol<\/tt> would be irrelevant.<\/p>\r\n<h4>Timing<a name=\"5c02fb6b-8753-4d35-9fbe-2212f25326bd\"><\/a>\r\n<\/h4>\r\n<p>Which is faster, s1 or s2?<\/p>\r\n<p>One extensive timing experiment involves matrices with full rank and with orders<\/p>\r\n<pre class=\"language-matlab\">\r\n    n = 250:250:5000  \r\n<\/pre>\r\n<p>The times measured for computing s1 and s2 are shown by the o's in the following plot.<\/p>\r\n<p>Since the time complexity for computing SVD is O(n^3), we fit the data by cubic polynomials. For large n, the time required is dominated by the n^3 terms. The ratio of the coefficients of these dominant terms is<\/p>\r\n<pre class=\"language-matlab\">\r\n    ratio = 1.17\r\n<\/pre>\r\n<p>In other words, for large problems s1 is about 17 percent faster than s2.<\/p>\r\n<pre class=\"codeinput\">\r\n    timefit \r\n<\/pre>\r\n<img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/cleve\/files\/checkerboard_blog_09.png\" alt=\"\"> <h4>Remarks<a name=\"4e2c5535-bc57-4b67-a0dc-6baf130b19c3\"><\/a>\r\n<\/h4>\r\n<p>I will admit to a personal preference for s2 over s1. I am more familiar with QR than I am with divide and conquer. As a result, I have more confidence in s2.<\/p>\r\n<p>I realize that the LAPACK divide and conquer driver DGESDD can achieve the stated goal of finding all singular values \"to high relative error independent of their magnitudes\" if the input A is bidiagonal and known exactly. But I don't see that in MATLAB with s1. I suspect that MATLAB does not make a special case for bidiagonal svd.<\/p>\r\n<p>I will be very happy to see any other examples. 
Please comment.<\/p>\r\n<h4>Software<a name=\"e6ae1ded-1efe-4610-9266-c60ca3540466\"><\/a>\r\n<\/h4>\r\n<p>This blog post is executable. You can <tt>publish<\/tt> it if you also have the three files in <a href=\"https:\/\/blogs.mathworks.com\/cleve\/files\/Checkerboard_mzip.m\">Checkerboard_mzip<\/a>.<\/p>\r\n<script language=\"JavaScript\"> <!-- \r\n    function grabCode_344eb3b8bac34d61ba92152693925f2e() {\r\n        \/\/ Remember the title so we can use it in the new page\r\n        title = document.title;\r\n\r\n        \/\/ Break up these strings so that their presence\r\n        \/\/ in the Javascript doesn't mess up the search for\r\n        \/\/ the MATLAB code.\r\n        t1='344eb3b8bac34d61ba92152693925f2e ' + '##### ' + 'SOURCE BEGIN' + ' #####';\r\n        t2='##### ' + 'SOURCE END' + ' #####' + ' 344eb3b8bac34d61ba92152693925f2e';\r\n    \r\n        b=document.getElementsByTagName('body')[0];\r\n        i1=b.innerHTML.indexOf(t1)+t1.length;\r\n        i2=b.innerHTML.indexOf(t2);\r\n \r\n        code_string = b.innerHTML.substring(i1, i2);\r\n        code_string = code_string.replace(\/REPLACE_WITH_DASH_DASH\/g,'--');\r\n\r\n        \/\/ Use \/x3C\/g instead of the less-than character to avoid errors \r\n        \/\/ in the XML parser.\r\n        \/\/ Use '\\x26#60;' instead of '<' so that the XML parser\r\n        \/\/ doesn't go ahead and substitute the less-than character. 
\r\n        code_string = code_string.replace(\/\\x3C\/g, '\\x26#60;');\r\n\r\n        copyright = 'Copyright 2025 The MathWorks, Inc.';\r\n\r\n        w = window.open();\r\n        d = w.document;\r\n        d.write('<pre>\\n');\r\n        d.write(code_string);\r\n\r\n        \/\/ Add copyright line at the bottom if specified.\r\n        if (copyright.length > 0) {\r\n            d.writeln('');\r\n            d.writeln('%%');\r\n            if (copyright.length > 0) {\r\n                d.writeln('% _' + copyright + '_');\r\n            }\r\n        }\r\n\r\n        d.write('<\/pre>\\n');\r\n\r\n        d.title = title + ' (MATLAB code)';\r\n        d.close();\r\n    }   \r\n     --> <\/script>\r\n<p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\">\r\n<br>\r\n<a href=\"javascript:grabCode_344eb3b8bac34d61ba92152693925f2e()\"><span style=\"font-size: x-small;        font-style: italic;\">Get \r\n      the MATLAB code <noscript>(requires JavaScript)<\/noscript>\r\n<\/span><\/a>\r\n<br>\r\n<br>\r\n      Published with MATLAB&reg; R2024b<br>\r\n<\/p>\r\n<\/div>\r\n<!--\r\n344eb3b8bac34d61ba92152693925f2e ##### SOURCE BEGIN #####\r\n%% Two Flavors of SVD\r\n% MATLAB has two different ways to compute singular values.\r\n% The easiest is to compute the singular values without the singular\r\n% vectors.  Use |svd| with one output argument, s1.\r\n%\r\n%   s1 = svd(A)\r\n%\r\n% The alternative is to use |svd| with three outputs.  
Ignore the \r\n% first and third output and specify the second output to be a\r\n% column vector, s2.\r\n%\r\n%   [~,s2,~] = svd(A,'vector')\r\n%\r\n% The MathWorks technical support team receives calls from observant\r\n% users who notice that the two approaches might produce different\r\n% singular values.\r\n% Which is more accurate, s1 or s2?\r\n% Which is faster?\r\n% Which should we use?\r\n%\r\n% I found myself asking the same questions.\r\n%\r\n% A key feature of all our experiments is the _rank_ of the matrix.\r\n% Let's investigate three cases: a rank 2 matrix, a low rank matrix,\r\n% and a full rank matrix.\r\n\r\n%% Checkerboard\r\n% The first example, motivated by a recent query,\r\n% is an 80-by-80 matrix of zeros and ones\r\n% derived from the |checkerboard| function in the\r\n% Image Processing Toolbox.\r\n\r\n    A = double(checkerboard > 0);\r\n\r\n%%\r\n% The rank of the checkerboard matrix is 2.\r\n\r\n    r = rank(A)\r\n\r\n%%\r\n% The |image| function provides the same portrait of A as its |spy| plot.\r\n\r\n    imagem(A)\r\n    title('rank 2')\r\n\r\n%%\r\n% Let's begin with the exact singular values.\r\n% The two nonzero singular values are both equal to 40.\r\n% The zero singular value has multiplicity 78.\r\n\r\n    s = [40; 40; zeros(78,1)];\r\n    disp('  s =')\r\n    disp(s(1:5))\r\n    disp('     :')\r\n    disp(s(end-2:end))\r\n\r\n%%\r\n% A perfect plot of the singular values of a rank 2 matrix\r\n% would look like this.\r\n\r\n    plotem(s)\r\n    title('rank 2')\r\n\r\n%%\r\n% What do we get when we actually compute the SVD of the checkerboard?\r\n%\r\n% The built-in SVD function uses Householder reflections to reduce\r\n% the matrix to bidiagonal form.  When the vectors are not required,\r\n% a divide and conquer iteration then reduces the bidiagonal to diagonal.\r\n% In all our examples, after divide and conquer has found the nonzero\r\n% singular values, all that remains is roundoff error.  
Despite this\r\n% fact, the iterations are continued in order to find all singular\r\n% values \"to high relative error independent of their magnitudes.\"\r\n% We have roundoff in roundoff, then roundoff in roundoff in roundoff,\r\n% and so on. \r\n%\r\n% The following logarithmic plot of s1 for the checkerboard matrix\r\n% shows the phenomenon that I like\r\n% to call \"cascading roundoff\". There are lines at |norm(A)| and at\r\n\r\n    tol = eps(norm(A))\r\n\r\n%%\r\n% This is the tolerance initially involved in the convergence test.\r\n% The steps in the plot are at powers of |tol|.\r\n\r\n    s1 = svd(A);\r\n    plotem(s1,inf)\r\n    title('rank 2')\r\n\r\n%%\r\n% If we specify three outputs, the iterative portion of the SVD function\r\n% uses a traditional QR iteration with a more conservative convergence \r\n% criterion instead of divide and conquer.\r\n% There is no cascading roundoff. All the s2 singular values\r\n% that would be zero with exact computation are of size |tol|.\r\n\r\n    [~,s2,~] = svd(A,'vector');\r\n    plotem(s1,s2,[-48,4])\r\n    title('rank 2')\r\n\r\n%%\r\n% The plots of s1 and s2 are very different, and neither plot looks like\r\n% the plot of the exact answer. However, all three plots agree about the\r\n% double value at 40.  And all the disagreements, including the cascading\r\n% roundoff in s1, are below the line at |tol|.\r\n%\r\n% The checkerboard example shows that s1 and s2 can be very different,\r\n% but s1 and s2 are much closer to each other than either is to the \r\n% right answer.\r\n\r\n\r\n%% Low Rank\r\n% For a matrix with rank between 2 and full order n, we can use the\r\n% venerable Hilbert matrix. 
We have a row vector j and a column vector k\r\n% in a singleton expansion.\r\n\r\n    n = 80;\r\n    j = 1:n;\r\n    k = (1:n)';\r\n    A = 1.\/(k+j-1);\r\n\r\n%%\r\n% The effective rank turns out to be 17.\r\n\r\n    r = rank(A)\r\n\r\n%%\r\n% The elements of this Hilbert matrix vary over three orders of\r\n% magnitude, so a logarithmic image is appropriate.\r\n\r\n    imagem(floor(log2(A)))\r\n    title('low rank')\r\n\r\n%%\r\n% Compare the one-output and the three-output singular values,\r\n\r\n    s1 = svd(A);\r\n    [~,s2,~] = svd(A,'vector');\r\n    plotem(s1,s2,[-22,3])\r\n    title('low rank')\r\n\r\n%%\r\n% The results s1 and s2 agree about the first 17 values and\r\n% all disagreements are below the line \r\n% at |tol|.\r\n\r\n%% Full Rank\r\n% Using the same column vector k and row vector j in another\r\n% example of singleton expansion produces a full rank matrix.\r\n\r\n    A = min(k,j);\r\n\r\n%%\r\n% Check that A has full rank.\r\n\r\n    r = rank(A)\r\n\r\n%%\r\n% The logarithm is not needed for this image.\r\n\r\n    imagem(A)\r\n    title('full rank')\r\n\r\n%%\r\n% Compare the one-output and the three-output singular values,\r\n\r\n    s1 = svd(A);\r\n    [~,s2,~] = svd(A,'vector');\r\n    plotem(s1,s2,[-2,4])\r\n    title('full rank')\r\n\r\n%%\r\n% With full rank, all values of s1 and s2 are essentially equal.\r\n% Any line at |tol| would be irrelevant.\r\n\r\n%% Timing\r\n% Which is faster, s1 or s2?\r\n%\r\n% One extensive timing experiment involves matrices with full rank\r\n% and with orders\r\n% \r\n%   n = 250:250:5000  \r\n%\r\n% The times measured for computing s1 and s2 are shown by the o's in\r\n% the following plot.\r\n%\r\n% Since the time complexity for computing SVD is O(n^3),\r\n% we fit the data by cubic polynomials.\r\n% For large n, the time required is dominated by the n^3 terms.\r\n% The ratio of the coefficients of these dominant terms is\r\n%\r\n%   ratio = 1.17\r\n%\r\n% In other words, for large problems s1 is about 17 
percent faster \r\n% than s2.\r\n\r\n    timefit \r\n\r\n%% Remarks\r\n% I will admit to a personal preference for s2 over s1.\r\n% I am more familiar with QR than I am with divide and conquer.\r\n% As a result, I have more confidence in s2.\r\n%\r\n% I realize that the LAPACK divide and conquer driver DGESDD can achieve\r\n% the stated goal of finding all singular values \"to high relative error\r\n% independent of their magnitudes\" if the input A is bidiagonal and\r\n% known exactly.  But I don't see that in MATLAB with s1.\r\n% I suspect that MATLAB does not make a special case for bidiagonal svd.\r\n%\r\n% I will be very happy to see any other examples.\r\n% Please comment.\r\n\r\n%% Software\r\n% This blog post is executable.  You can |publish| it\r\n% if you also have the three files in\r\n% <https:\/\/blogs.mathworks.com\/cleve\/files\/Checkerboard_mzip.m\r\n% Checkerboard_mzip>.\r\n##### SOURCE END ##### 344eb3b8bac34d61ba92152693925f2e\r\n-->\r\n","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/checkerboard_blog_01.png\" class=\"img-responsive attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"\" decoding=\"async\" loading=\"lazy\" \/><\/div><!--introduction-->\r\n<p>MATLAB has two different ways to compute singular values. The easiest is to compute the singular values without the singular vectors. Use <tt>svd<\/tt> with one output argument, s1.... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/cleve\/2025\/02\/23\/two-flavors-of-svd\/\">read more >><\/a><\/p>","protected":false},"author":78,"featured_media":12401,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[16,14,30],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/12428"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/comments?post=12428"}],"version-history":[{"count":8,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/12428\/revisions"}],"predecessor-version":[{"id":12458,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/12428\/revisions\/12458"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media\/12401"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media?parent=12428"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/categories?post=12428"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/tags?post=12428"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}