{"id":2261,"date":"2017-01-23T12:00:20","date_gmt":"2017-01-23T17:00:20","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=2261"},"modified":"2017-02-05T12:25:35","modified_gmt":"2017-02-05T17:25:35","slug":"ulps-plots-reveal-math-function-accurary","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2017\/01\/23\/ulps-plots-reveal-math-function-accurary\/","title":{"rendered":"Ulps Plots Reveal Math Function Accuracy"},"content":{"rendered":"<div class=\"content\"><!--introduction--><p>\"ULP\" stands for \"unit in the last place.\" An <i>ulps plot<\/i> samples a fundamental math function such as $\\sin{x}$, or a more esoteric function like a Bessel function.  The samples are compared with more accurate values obtained from a higher precision computation.  A plot of the accuracy, measured in ulps, reveals valuable information about the underlying algorithms.<\/p><!--\/introduction--><h3>Contents<\/h3><div><ul><li><a href=\"#131ecd09-7d55-4bec-b115-aa5d01d52ac8\">fdlibm<\/a><\/li><li><a href=\"#3927cb53-cfe6-4815-a5bc-25fb4150b347\">ulps plots<\/a><\/li><li><a href=\"#dfed2931-345a-4f18-b8d0-d311b2b82bd6\">sin<\/a><\/li><li><a href=\"#6d2ab49a-8ddc-46c3-b135-51bba6ac77d7\">tan<\/a><\/li><li><a href=\"#064e1e57-93f9-4339-8bfa-483670860b32\">atan<\/a><\/li><li><a href=\"#ee21b9cd-9362-4d72-a077-830c2c15c327\">exp<\/a><\/li><li><a href=\"#e60db9d1-68ce-434c-8c5c-379e4619c8bd\">Lambert W<\/a><\/li><li><a href=\"#7d98179d-0208-4ce3-af83-21a4cc9f0883\">Bessel functions<\/a><\/li><li><a href=\"#75220b2e-8bad-4de6-911d-47b4b83fa353\">erfinv<\/a><\/li><li><a href=\"#c5b5c88f-9ea9-4c6f-91be-4468a60ae3b1\">Code<\/a><\/li><\/ul><\/div><h4>fdlibm<a name=\"131ecd09-7d55-4bec-b115-aa5d01d52ac8\"><\/a><\/h4><p><tt>libm<\/tt> is the library of elementary math functions that supports the C compiler.  <tt>fdlibm<\/tt> is \"freely distributable\" source for <tt>libm<\/tt> developed and put into the public domain over 25 years ago by K. C. 
Ng and perhaps a few others at Sun Microsystems. I wrote about <tt>fdlibm<\/tt> in our newsletter in 2002, <a href=\"https:\/\/www.mathworks.com\/company\/newsletters\/articles\/the-tetragamma-function-and-numerical-craftsmanship.html\">The Tetragamma Function and Numerical Craftsmanship<\/a>.<\/p><p>Mathematically <tt>fdlibm<\/tt> shows immaculate craftsmanship.  We still use it today for our elementary transcendental functions.  And I suspect all other mathematical software projects do as well.  If they don't, they should.<\/p><h4>ulps plots<a name=\"3927cb53-cfe6-4815-a5bc-25fb4150b347\"><\/a><\/h4><p><tt>ulps(x)<\/tt> is the distance from <tt>x<\/tt> to the next larger floating point number.  It's the same as <tt>eps(x)<\/tt>.<\/p><p>To assess the accuracy of a computed value<\/p><pre class=\"language-matlab\">y = f(x)\r\n<\/pre><p>compare it with the more accurate value obtained from the Symbolic Math Toolbox<\/p><pre class=\"language-matlab\">Y = f(sym(x,<span class=\"string\">'f'<\/span>))\r\n<\/pre><p>The <tt>'f'<\/tt> flag says to convert <tt>x<\/tt> to a <tt>sym<\/tt> exactly, without trying to guess that it is an inverse power of 10 or the <tt>sqrt<\/tt> of something. The relative error in <tt>y<\/tt>, measured in units in the last place, is then<\/p><pre class=\"language-matlab\">u = (y - Y)\/eps(abs(Y))\r\n<\/pre><p>Since this is <i>relative<\/i> error, it is a stringent measure near the zeros of <tt>f(x)<\/tt>.<\/p><p>If <tt>y<\/tt> is the floating point number obtained by correctly rounding <tt>Y<\/tt> to double precision, then<\/p><pre class=\"language-matlab\">-0.5 &lt;= u &lt;= 0.5\r\n<\/pre><p>This is the best that can be hoped for.  Compute the exact mathematical value of <tt>f(x)<\/tt> and make a single rounding error to obtain the final result.  Half-ulp accuracy is difficult to obtain algorithmically, and too expensive in execution time.  
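<\/p><p>For a concrete check, here is a sketch (it assumes the Symbolic Math Toolbox is available and follows the formula above, with <tt>eps<\/tt> applied to the double result):<\/p><pre class="language-matlab">x = 0.5;\r\ny = sin(x);                      % double precision result\r\nY = sin(sym(x,'f'));             % higher precision reference\r\nu = double(y - Y)\/eps(abs(y))    % error in ulps; expect -1 &lt; u &lt; 1\r\n<\/pre><p>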
All of the functions in MATLAB that are derived from <tt>fdlibm<\/tt> have better than one-ulp accuracy.<\/p><pre class=\"language-matlab\">-1.0 &lt; u &lt; 1.0\r\n<\/pre><p>Each of the following plots involves 100,000 random arguments <tt>x<\/tt>, uniformly distributed in an appropriate interval.<\/p><h4>sin<a name=\"dfed2931-345a-4f18-b8d0-d311b2b82bd6\"><\/a><\/h4><p>We see about 0.8 ulp accuracy from this sample.  That's typical.<\/p><p>Argument reduction is the first step in computing $\\sin{x}$. An integer multiple $n$ of $\\pi\/2$ is subtracted from the argument to bring it into the interval<\/p><p>$$ -\\frac{\\pi}{4} \\le x - n \\frac{\\pi}{2} \\le \\frac{\\pi}{4} $$<\/p><p>Then, depending upon whether $n$ is odd or even, a polynomial approximation of degree 13 to either $\\sin$ or $\\cos$ gives the nearly correctly rounded result for the reduced argument, and hence for the original argument.  The ulps plot shows a slight degradation in accuracy at odd multiples of $\\pi\/4$, which are the extreme points for the polynomial approximations.<\/p><p>It is important to note that the accuracy is better than one ulp even near the end-points of the sample interval, $0$ and $2\\pi$. This is where $\\sin{x}$ approaches $0$ and the approximation must follow carefully so that the relative error remains bounded.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/sin.png\" alt=\"\"> <\/p><h4>tan<a name=\"6d2ab49a-8ddc-46c3-b135-51bba6ac77d7\"><\/a><\/h4><p>Again, roughly 0.8 ulp accuracy.<\/p><p>Similar argument reduction results in similar behavior near odd multiples of $\\pi\/4$.  In between these points, at $\\pi\/2$ and $3\\pi\/2$, $\\tan{x}$ has a pole and the approximation must follow suit. 
The algorithm uses a reciprocal and the identity<\/p><p>$$ \\tan x = -1\/\\tan{(x+\\frac{\\pi}{2})} $$<\/p><p>This comes close to dividing by zero as you approach a pole, but the resulting approximation remains better than one ulp.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/tan.png\" alt=\"\"> <\/p><h4>atan<a name=\"064e1e57-93f9-4339-8bfa-483670860b32\"><\/a><\/h4><p>Good to within slightly more than 0.8 ulp.  The underlying approximation is a piecewise polynomial with breakpoints at a few multiples of 1\/16 that are evident in the plot and marked on the axis.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/atan.png\" alt=\"\"> <\/p><h4>exp<a name=\"ee21b9cd-9362-4d72-a077-830c2c15c327\"><\/a><\/h4><p>About the same accuracy as the previous plots.<\/p><p>The argument reduction involves the key value<\/p><p>$$ r = \\ln{2} \\approx 0.6931 $$<\/p><p>and the identity<\/p><p>$$ \\exp{x} = 2^n \\exp{(x-n r)} $$<\/p><p>The resulting ulps plot shows the extremes of the error at odd multiples of $r\/2$.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/exp.png\" alt=\"\"> <\/p><h4>Lambert W<a name=\"e60db9d1-68ce-434c-8c5c-379e4619c8bd\"><\/a><\/h4><p>Now for two functions that are not in <tt>fdlibm<\/tt>. If you follow this blog, you might have noticed that I am a big fan of the Lambert W function.  I blogged about it a couple of years ago, <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2013\/09\/02\">The Lambert W Function<\/a>. The Wikipedia article is excellent, <a href=\"https:\/\/en.wikipedia.org\/wiki\/Lambert_W_function\">Lambert W Function<\/a>. And, you can just Google \"lambert w function\" for many more interesting links.<\/p><p>The Lambert W function is not available in MATLAB itself. 
The Symbolic Math Toolbox has an @double method that accesses the symbolic code for type double arguments.  Or, my blog post mentioned above has some simple (but elegant, if I do say so myself) code.<\/p><p>Neither code is one-ulp accurate.  The primary branch of the function has a zero at the origin.  As we get near that zero, the relative error measured in <i>ulps<\/i> is unbounded.  The absolute accuracy is OK, but the relative accuracy is not.  In fact, you might see a billion ulps error.  That's a <i>gigaulp<\/i>, or <i>gulp<\/i> for short.<\/p><p>As <tt>lambertw(x)<\/tt> crosses a power of two, the unit in the last place, <tt>eps(lambertw(x))<\/tt>, jumps by a factor of two.  Three of these points are marked by ticks on the x-axis in the ulps plot.<\/p><pre class=\"codeinput\">   <span class=\"keyword\">for<\/span> a = [1\/8 1\/4 1\/2]\r\n       z = fzero(@(x) lambertw(x)-a,.5)\r\n   <span class=\"keyword\">end<\/span>\r\n<\/pre><pre class=\"codeoutput\">z =\r\n    0.1416\r\nz =\r\n    0.3210\r\nz =\r\n    0.8244\r\n<\/pre><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/lambertw.png\" alt=\"\"> <\/p><h4>Bessel functions<a name=\"7d98179d-0208-4ce3-af83-21a4cc9f0883\"><\/a><\/h4><p>Our codes for Bessel functions have fairly good, although not one-ulp, absolute accuracy.  But they do not have high relative accuracy near the zeros.  
Here are the first five zeros, and an ulps plot covering the first two, for $J_0(x)$, the zero-th order Bessel function of the first kind.<\/p><pre class=\"codeinput\">   <span class=\"keyword\">for<\/span> a = (1:5)*pi\r\n       z = fzero(@(x) besselj(0,x), a)\r\n   <span class=\"keyword\">end<\/span>\r\n<\/pre><pre class=\"codeoutput\">z =\r\n    2.4048\r\nz =\r\n    5.5201\r\nz =\r\n    8.6537\r\nz =\r\n   11.7915\r\nz =\r\n   14.9309\r\n<\/pre><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/besselj0.png\" alt=\"\"> <\/p><h4>erfinv<a name=\"75220b2e-8bad-4de6-911d-47b4b83fa353\"><\/a><\/h4><p>Here is the inverse of the error function.  It looks very interesting, but I haven't investigated it yet.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/erfinv.png\" alt=\"\"> <\/p><h4>Code<a name=\"c5b5c88f-9ea9-4c6f-91be-4468a60ae3b1\"><\/a><\/h4><p>I have recently updated <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/59085-cleve-laboratory\">Cleve's Laboratory<\/a> in the MATLAB Central file exchange. 
The latest version includes <tt>ulpsapp.m<\/tt>, which generates the ulps plots in this blog.<\/p><script language=\"JavaScript\"> <!-- \r\n    function grabCode_81a9a7a597f740c696e6bc1c1b444b47() {\r\n        \/\/ Remember the title so we can use it in the new page\r\n        title = document.title;\r\n\r\n        \/\/ Break up these strings so that their presence\r\n        \/\/ in the Javascript doesn't mess up the search for\r\n        \/\/ the MATLAB code.\r\n        t1='81a9a7a597f740c696e6bc1c1b444b47 ' + '##### ' + 'SOURCE BEGIN' + ' #####';\r\n        t2='##### ' + 'SOURCE END' + ' #####' + ' 81a9a7a597f740c696e6bc1c1b444b47';\r\n    \r\n        b=document.getElementsByTagName('body')[0];\r\n        i1=b.innerHTML.indexOf(t1)+t1.length;\r\n        i2=b.innerHTML.indexOf(t2);\r\n \r\n        code_string = b.innerHTML.substring(i1, i2);\r\n        code_string = code_string.replace(\/REPLACE_WITH_DASH_DASH\/g,'--');\r\n\r\n        \/\/ Use \/x3C\/g instead of the less-than character to avoid errors \r\n        \/\/ in the XML parser.\r\n        \/\/ Use '\\x26#60;' instead of '<' so that the XML parser\r\n        \/\/ doesn't go ahead and substitute the less-than character. 
\r\n        code_string = code_string.replace(\/\\x3C\/g, '\\x26#60;');\r\n\r\n        copyright = 'Copyright 2017 The MathWorks, Inc.';\r\n\r\n        w = window.open();\r\n        d = w.document;\r\n        d.write('<pre>\\n');\r\n        d.write(code_string);\r\n\r\n        \/\/ Add copyright line at the bottom if specified.\r\n        if (copyright.length > 0) {\r\n            d.writeln('');\r\n            d.writeln('%%');\r\n            if (copyright.length > 0) {\r\n                d.writeln('% _' + copyright + '_');\r\n            }\r\n        }\r\n\r\n        d.write('<\/pre>\\n');\r\n\r\n        d.title = title + ' (MATLAB code)';\r\n        d.close();\r\n    }   \r\n     --> <\/script><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\"><br><a href=\"javascript:grabCode_81a9a7a597f740c696e6bc1c1b444b47()\"><span style=\"font-size: x-small;        font-style: italic;\">Get \r\n      the MATLAB code <noscript>(requires JavaScript)<\/noscript><\/span><\/a><br><br>\r\n      Published with MATLAB&reg; R2017a<br><\/p><\/div><!--\r\n81a9a7a597f740c696e6bc1c1b444b47 ##### SOURCE BEGIN #####\r\n%% Ulps Plots Reveal Math Function Accuracy\r\n% "ULP" stands for "unit in the last place."\r\n% An _ulps plot_ samples a fundamental math function such as $\\sin{x}$,\r\n% or a more esoteric function like a Bessel function.  The samples are\r\n% compared with more accurate values obtained from a higher precision\r\n% computation.  A plot of the accuracy, measured in ulps, reveals\r\n% valuable information about the underlying algorithms.\r\n\r\n%% fdlibm\r\n% |libm| is the library of elementary math functions that supports the\r\n% C compiler.  |fdlibm| is \"freely distributable\" source for |libm|\r\n% developed and put into the public domain over 25 years ago by K. C. 
Ng\r\n% and perhaps a few others at Sun Microsystems.\r\n% I wrote about |fdlibm| in our newsletter in 2002,\r\n% <https:\/\/www.mathworks.com\/company\/newsletters\/articles\/the-tetragamma-function-and-numerical-craftsmanship.html\r\n% The Tetragamma Function and Numerical Craftsmanship>.\r\n\r\n%%\r\n% Mathematically |fdlibm| shows immaculate craftsmanship.  We still use it\r\n% today for our elementary transcendental functions.  And I suspect all\r\n% other mathematical software projects do as well.  If they don't,\r\n% they should.\r\n\r\n\r\n%% ulps plots\r\n% |ulps(x)| is the distance from |x| to the next larger floating point\r\n% number.  It's the same as |eps(x)|.\r\n\r\n%%\r\n% To assess the accuracy of a computed value\r\n%\r\n%   y = f(x)\r\n%\r\n% compare it with the more accurate value obtained from the Symbolic\r\n% Math Toolbox\r\n%\r\n%   Y = f(sym(x,'f'))\r\n%\r\n% The |'f'| flag says to convert |x| to a |sym| exactly, without\r\n% trying to guess that it is an inverse power of 10 or the |sqrt|\r\n% of something.\r\n% The relative error in |y|, measured in units in the last place, is then\r\n%\r\n%   u = (y - Y)\/eps(abs(Y))\r\n%\r\n% Since this is _relative_ error, it is a stringent measure near the\r\n% zeros of |f(x)|.\r\n\r\n%%\r\n% If |y| is the floating point number obtained by correctly rounding |Y|\r\n% to double precision, then\r\n%\r\n%   -0.5 <= u <= 0.5\r\n%\r\n% This is the best that can be hoped for.  Compute the exact mathematical\r\n% value of |f(x)| and make a single rounding error to obtain the final\r\n% result.  Half-ulp accuracy is difficult to obtain algorithmically,\r\n% and too expensive in execution time.  
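For a concrete check, here is a sketch (it assumes\r\n% the Symbolic Math Toolbox is available and follows the formula above,\r\n% with |eps| applied to the double result):\r\n%\r\n%   x = 0.5;\r\n%   y = sin(x);                      % double precision result\r\n%   Y = sin(sym(x,'f'));             % higher precision reference\r\n%   u = double(y - Y)\/eps(abs(y))    % error in ulps; expect -1 < u < 1\r\n%\r\n% 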
All of the functions in MATLAB\r\n% that are derived from |fdlibm| have better than one-ulp accuracy.\r\n%\r\n%   -1.0 < u < 1.0\r\n%\r\n\r\n%%\r\n% Each of the following plots involves 100,000 random arguments |x|,\r\n% uniformly distributed in an appropriate interval.\r\n\r\n%% sin\r\n% We see about 0.8 ulp accuracy from this sample.  That's typical.\r\n\r\n%%\r\n% Argument reduction is the first step in computing $\\sin{x}$.\r\n% An integer multiple $n$ of $\\pi\/2$ is subtracted from the argument\r\n% to bring it into the interval\r\n%\r\n% $$ -\\frac{\\pi}{4} \\le x - n \\frac{\\pi}{2} \\le \\frac{\\pi}{4} $$\r\n%\r\n% Then, depending upon whether $n$ is odd or even, a polynomial\r\n% approximation of degree 13 to either $\\sin$ or $\\cos$ gives the\r\n% nearly correctly rounded result for the reduced argument, and hence\r\n% for the original argument.  The ulps plot shows a slight degradation\r\n% in accuracy at odd multiples of $\\pi\/4$, which are the extreme\r\n% points for the polynomial approximations.\r\n\r\n%%\r\n% It is important to note that the accuracy is better than one ulp\r\n% even near the end-points of the sample interval, $0$ and $2\\pi$.\r\n% This is where $\\sin{x}$ approaches $0$ and the approximation must\r\n% follow carefully so that the relative error remains bounded.\r\n%\r\n% <<sin.png>>\r\n%\r\n\r\n%% tan\r\n% Again, roughly 0.8 ulp accuracy.\r\n\r\n%%\r\n% Similar argument reduction results in similar behavior near odd\r\n% multiples of $\\pi\/4$.  In between these points, at $\\pi\/2$ and\r\n% $3\\pi\/2$, $\\tan{x}$ has a pole and the approximation must follow suit.\r\n% The algorithm uses a reciprocal and the identity\r\n%\r\n% $$ \\tan x = -1\/\\tan{(x+\\frac{\\pi}{2})} $$\r\n%\r\n% This comes close to dividing by zero as you approach a pole, but the\r\n% resulting approximation remains better than one ulp.\r\n%\r\n% <<tan.png>>\r\n%\r\n\r\n%% atan\r\n% Good to within slightly more than 0.8 ulp.  
The underlying approximation is a \r\n% piecewise polynomial with breakpoints at a few multiples of 1\/16\r\n% that are evident in the plot and marked on the axis.\r\n%\r\n% <<atan.png>>\r\n%\r\n\r\n%% exp\r\n% About the same accuracy as the previous plots.\r\n\r\n%%\r\n% The argument reduction involves the key value\r\n%\r\n% $$ r = \\ln{2} \\approx 0.6931 $$\r\n%\r\n% and the identity\r\n%\r\n% $$ \\exp{x} = 2^n \\exp{(x-n r)} $$\r\n%\r\n% The resulting ulps plot shows the extremes of the error at odd multiples of $r\/2$.\r\n%\r\n% <<exp.png>>\r\n%\r\n\r\n%% Lambert W\r\n% Now for two functions that are not in |fdlibm|.\r\n% If you follow this blog, you might have noticed that I am a big fan\r\n% of the Lambert W function.  I blogged about it a couple of years ago,\r\n% <https:\/\/blogs.mathworks.com\/cleve\/2013\/09\/02 The Lambert W Function>.\r\n% The Wikipedia article is excellent,\r\n% <https:\/\/en.wikipedia.org\/wiki\/Lambert_W_function Lambert W Function>.\r\n% And, you can just Google \"lambert w function\" for many more interesting\r\n% links.\r\n\r\n%%\r\n% The Lambert W function is not available in MATLAB itself.\r\n% The Symbolic Math Toolbox has an @double method that accesses\r\n% the symbolic code for type double arguments.  Or, my blog post\r\n% mentioned above has some simple (but elegant, if I do say so myself)\r\n% code.\r\n\r\n%%\r\n% Neither code is one-ulp accurate.  The primary branch of the function\r\n% has a zero at the origin.  As we get near that zero, the relative error\r\n% measured in _ulps_ is unbounded.  The absolute accuracy is OK, but\r\n% the relative accuracy is not.  In fact, you might see a billion ulps\r\n% error.  That's a _gigaulp_, or _gulp_ for short.\r\n\r\n%%\r\n% As |lambertw(x)| crosses a power of two, the unit in the last place,\r\n% |eps(lambertw(x))|, jumps by a factor of two.  
Three of these points\r\n% are marked by ticks on the x-axis in the ulps plot.\r\n\r\n   for a = [1\/8 1\/4 1\/2]\r\n       z = fzero(@(x) lambertw(x)-a,.5)\r\n   end\r\n   \r\n%%\r\n%\r\n% <<lambertw.png>>\r\n%\r\n\r\n%% Bessel functions\r\n% Our codes for Bessel functions have fairly good, although not one-ulp,\r\n% absolute accuracy.  But they do not have high relative accuracy near\r\n% the zeros.  Here are the first five zeros, and an ulps plot covering\r\n% the first two, for $J_0(x)$, the zero-th order Bessel\r\n% function of the first kind.\r\n\r\n   for a = (1:5)*pi\r\n       z = fzero(@(x) besselj(0,x), a)\r\n   end\r\n   \r\n%%\r\n%\r\n% <<besselj0.png>>\r\n\r\n%% erfinv\r\n% Here is the inverse of the error function.  It looks very interesting,\r\n% but I haven't investigated it yet.\r\n%\r\n% <<erfinv.png>>\r\n\r\n%% Code\r\n% I have recently updated \r\n% <https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/59085-cleve-laboratory\r\n% Cleve's Laboratory> in the MATLAB Central file exchange.\r\n% The latest version includes |ulpsapp.m|, which generates the ulps plots\r\n% in this blog.\r\n##### SOURCE END ##### 81a9a7a597f740c696e6bc1c1b444b47\r\n-->","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/sin.png\" onError=\"this.style.display ='none';\" \/><\/div><!--introduction--><p>\"ULP\" stands for \"unit in the last place.\" An <i>ulps plot<\/i> samples a fundamental math function such as $\\sin{x}$, or a more esoteric function like a Bessel function.  The samples are compared with more accurate values obtained from a higher precision computation.  A plot of the accuracy, measured in ulps, reveals valuable information about the underlying algorithms.... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/cleve\/2017\/01\/23\/ulps-plots-reveal-math-function-accurary\/\">read more >><\/a><\/p>","protected":false},"author":78,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[11,4,16,7,26,17],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/2261"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/comments?post=2261"}],"version-history":[{"count":2,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/2261\/revisions"}],"predecessor-version":[{"id":2329,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/2261\/revisions\/2329"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media?parent=2261"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/categories?post=2261"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/tags?post=2261"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}