{"id":1747,"date":"2018-07-30T15:50:23","date_gmt":"2018-07-30T19:50:23","guid":{"rendered":"https:\/\/blogs.mathworks.com\/developer\/?p=1747"},"modified":"2018-07-31T07:18:52","modified_gmt":"2018-07-31T11:18:52","slug":"semi-automated-testing","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/developer\/2018\/07\/30\/semi-automated-testing\/","title":{"rendered":"Semi-Automated Testing"},"content":{"rendered":"<div class=\"content\"><!--introduction--><p>I've been doing a bit of spelunking around the File Exchange and GitHub lately, and I've seen a little pattern emerge in the tests of surprisingly many projects. It looks like this:<\/p><!--\/introduction--><pre class=\"language-matlab\">\r\n<span class=\"keyword\">classdef<\/span> testDisk &lt; matlab.unittest.TestCase\r\n    \r\n    <span class=\"keyword\">properties<\/span>\r\n        map\r\n    <span class=\"keyword\">end<\/span>\r\n    \r\n    <span class=\"keyword\">methods<\/span> (TestClassSetup)\r\n        <span class=\"keyword\">function<\/span> createMap(testCase)\r\n            opt = sctool.scmapopt(<span class=\"string\">'trace'<\/span>,0,<span class=\"string\">'tol'<\/span>,1e-12);\r\n            p = polygon([4 2i -2+4i -3 -3-1i 2-2i]);\r\n            testCase.map = diskmap(p,opt);\r\n        <span class=\"keyword\">end<\/span>\r\n    <span class=\"keyword\">end<\/span>\r\n    \r\n    <span class=\"keyword\">methods<\/span> (Test)\r\n        \r\n        <span class=\"keyword\">function<\/span> testPlot(testCase)\r\n            fig = figure;\r\n            plot(testCase.map,4,3) <span class=\"comment\">% &lt;======= RIGHT HERE!<\/span>\r\n            close(fig);\r\n        <span class=\"keyword\">end<\/span>\r\n        \r\n    <span class=\"keyword\">end<\/span>\r\n    \r\n<span class=\"keyword\">end<\/span>\r\n\r\n<\/pre><p>The plot command shows and exercises the graphical features of this toolbox. 
If we just run this outside of test form, we can see it produces a cool result.<\/p><pre class=\"codeinput\">opt = sctool.scmapopt(<span class=\"string\">'trace'<\/span>,0,<span class=\"string\">'tol'<\/span>,1e-12);\r\np = polygon([4 2i -2+4i -3 -3-1i 2-2i]);\r\nmap = diskmap(p,opt);\r\nfig = figure;\r\nplot(map,4,3)\r\n<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/developer\/files\/manual_verification_01.png\" alt=\"\"> <p>By the way, in this case I am pulling from the <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/1316-schwarz-christoffel-toolbox?s_tid=prof_contriblnk\">Schwarz-Christoffel Toolbox<\/a>, which to my eye looks to be quite a nice package! Check out the <a href=\"http:\/\/www.math.udel.edu\/~driscoll\/SC\/guide.pdf\">User's Guide<\/a>.<\/p><p>The idea here is great, right? The developer of the project is looking to get coverage on one of the key capabilities of the package, the visualization. At a minimum, the test is indeed confirming that the plot code executes without error, which is a great step. However, I feel like this might speak to a common pain point. How do I verify things that are very hard to verify, like graphics? Before we throw our hands into the air and <a href=\"http:\/\/blogs.mathworks.com\/developer\/files\/flippingtables.png\">flip over any tables<\/a>, it's worth noting that we may have a few options. We certainly can get access to the data in the plot and numerically confirm that it is plotted as expected. We can also check the properties of the graphics primitives and so on and so forth. This is all true, but I think it risks missing the point. Sometimes you just want to look at the dang plot!<\/p><p>You might know exactly when the plot is right and when it is wrong. 
You might see subtle visual problems right away looking at it that would take forever to try to encode in a test covering every single property of every single graphics primitive you are working with.<\/p><p>Just let me look at the plot.<\/p><p>This test does just that, but it flashes the figure up on the screen and you have to look very closely (and quickly) or use a debugging workflow to get real insight and confirm the visualization is working correctly. A worse alternative is just to leave figures open and never close them. This litters your MATLAB environment every time you run the tests and makes it really hard to determine how each figure was produced and for which test. It doesn't work in a CI system workflow. In short, it makes it hard to verify the plots are correct, which means that we won't verify the plots are correct.<\/p><p>Know what we can do though? We can <tt><b>log<\/b><\/tt>! We can <tt><b>testCase.log<\/b><\/tt>! We've already gone through the hard work of creating these figures and visualizations. Why don't we log them and see them later? We can do that pretty easily because we have a <a href=\"https:\/\/www.mathworks.com\/help\/matlab\/ref\/matlab.unittest.diagnostics.figurediagnostic-class.html\">FigureDiagnostic<\/a> class that takes a figure handle and saves it away as both a <tt><b>.fig<\/b><\/tt> file and a <tt><b>.png<\/b><\/tt> file. That way we can log it away and open it up after the test run. If we were verifying anything (like the plot data or graphics attributes), we could also just use these diagnostics as the diagnostics input on the verification or assertion methods we are using. 
For the test above, let's log it:<\/p><pre class=\"language-matlab\">\r\n<span class=\"keyword\">classdef<\/span> testDisk &lt; matlab.unittest.TestCase\r\n    \r\n    <span class=\"keyword\">properties<\/span>\r\n        map\r\n    <span class=\"keyword\">end<\/span>\r\n    \r\n    \r\n    <span class=\"keyword\">methods<\/span> (TestClassSetup)\r\n        <span class=\"keyword\">function<\/span> createMap(testCase)\r\n            opt = sctool.scmapopt(<span class=\"string\">'trace'<\/span>,0,<span class=\"string\">'tol'<\/span>,1e-12);\r\n            p = polygon([4 2i -2+4i -3 -3-1i 2-2i]);\r\n            testCase.map = diskmap(p,opt);\r\n        <span class=\"keyword\">end<\/span>\r\n    <span class=\"keyword\">end<\/span>\r\n    \r\n    \r\n    <span class=\"keyword\">methods<\/span> (Test)\r\n        \r\n        <span class=\"keyword\">function<\/span> testPlot(testCase)\r\n            import <span class=\"string\">matlab.unittest.diagnostics.Diagnostic<\/span>;\r\n            import <span class=\"string\">matlab.unittest.diagnostics.FigureDiagnostic<\/span>;\r\n            \r\n            fig = figure;\r\n            <span class=\"comment\">% Let's use addTeardown instead because https:\/\/blogs.mathworks.com\/developer\/2015\/07\/27\/addteardown\/<\/span>\r\n            testCase.addTeardown(@close, fig);\r\n            plot(testCase.map,4,3);\r\n            \r\n            <span class=\"comment\">% Now we log it for fun and for profit.<\/span>\r\n            testCase.log(3, <span class=\"keyword\">...<\/span>\r\n                Diagnostic.join(<span class=\"string\">'Please confirm there are concentric convex sets in the lower left.'<\/span>, <span class=\"keyword\">...<\/span>\r\n                FigureDiagnostic(fig)));\r\n            \r\n        <span class=\"keyword\">end<\/span>\r\n        \r\n    <span class=\"keyword\">end<\/span>\r\n    \r\n<span class=\"keyword\">end<\/span>\r\n\r\n<\/pre><p>I've put a nice description on there so we know what we are 
looking for in the figure. I did this by joining a string description with our <tt><b>FigureDiagnostic<\/b><\/tt> using <tt><b>Diagnostic.join<\/b><\/tt>. Also, I've logged it at level 3, which corresponds to the <tt><b>Detailed<\/b><\/tt> level of the <a href=\"https:\/\/www.mathworks.com\/help\/matlab\/ref\/matlab.unittest.verbosity-class.html\"><tt><b>Verbosity<\/b><\/tt><\/a> enumeration.   This means it won't show up if I just run the standard <tt><b>runtests<\/b><\/tt> call:<\/p><pre class=\"codeinput\">runtests(<span class=\"string\">'tests\/testDisk.m'<\/span>)\r\n<\/pre><pre class=\"codeoutput\">Running testDisk\r\n....\r\nDone testDisk\r\n__________\r\n\r\n\r\nans = \r\n\r\n  1&times;4 TestResult array with properties:\r\n\r\n    Name\r\n    Passed\r\n    Failed\r\n    Incomplete\r\n    Duration\r\n    Details\r\n\r\nTotals:\r\n   4 Passed, 0 Failed, 0 Incomplete.\r\n   0.80408 seconds testing time.\r\n\r\n<\/pre><p>...but it will if I run at a higher level of logging:<\/p><pre class=\"codeinput\">runtests(<span class=\"string\">'tests\/testDisk.m'<\/span>,<span class=\"string\">'Verbosity'<\/span>,<span class=\"string\">'Detailed'<\/span>)\r\n<\/pre><pre class=\"codeoutput\"> Running testDisk\r\n  Setting up testDisk\r\n  Done setting up testDisk in 0.01131 seconds\r\n   Running testDisk\/testForwardMap\r\n   Done testDisk\/testForwardMap in 0.0076177 seconds\r\n   Running testDisk\/testInverseMap\r\n   Done testDisk\/testInverseMap in 0.0071096 seconds\r\n   Running testDisk\/testCenter\r\n   Done testDisk\/testCenter in 0.0082754 seconds\r\n   Running testDisk\/testPlot\r\n[Detailed] Diagnostic logged (2018-07-30T15:42:18): \r\nPlease confirm there are concentric convex sets in the lower left.\r\nFigure saved to:\r\n--&gt; <a 
href=\"https:\/\/blogs.mathworks.com\/developer\/files\/manual_verification_01.png\">\/private\/var\/folders\/bm\/6qgg87js1bb7fpr2p475bcwh0002wp\/T\/094eb448-615a-4667-95e2-0a6b62b81eae\/Figure_2d16d47d-a44a-4425-9507-84bb27afcf26.fig<\/a>\r\n--&gt; <a href=\"https:\/\/blogs.mathworks.com\/developer\/files\/manual_verification_01.png\">\/private\/var\/folders\/bm\/6qgg87js1bb7fpr2p475bcwh0002wp\/T\/094eb448-615a-4667-95e2-0a6b62b81eae\/Figure_2d16d47d-a44a-4425-9507-84bb27afcf26.png<\/a>\r\n\r\n   Done testDisk\/testPlot in 1.3447 seconds\r\n  Tearing down testDisk\r\n  Done tearing down testDisk in 0 seconds\r\n Done testDisk in 1.379 seconds\r\n__________\r\n\r\n\r\nans = \r\n\r\n  1&times;4 TestResult array with properties:\r\n\r\n    Name\r\n    Passed\r\n    Failed\r\n    Incomplete\r\n    Duration\r\n    Details\r\n\r\nTotals:\r\n   4 Passed, 0 Failed, 0 Incomplete.\r\n   1.379 seconds testing time.\r\n\r\n<\/pre><p>Great! Now we can see links in the test log pointing to images of the plot as well as a figure file. This is nice, but I am just getting started. 
Let's see this workflow when we generate a test report:<\/p><pre class=\"codeinput\">import <span class=\"string\">matlab.unittest.plugins.TestReportPlugin<\/span>;\r\n\r\nrunner = matlab.unittest.TestRunner.withTextOutput;\r\nrunner.addPlugin(TestReportPlugin.producingHTML(<span class=\"string\">'Verbosity'<\/span>,3))\r\nrunner.run(testsuite(<span class=\"string\">'tests'<\/span>))\r\n<\/pre><pre class=\"codeoutput\">Running testAnnulus\r\n \r\nNumber of iterations: 32\r\nNumber of function evaluations: 91\r\nFinal norm(F(x)): 1.27486e-09\r\nNumber of restarts for secant methods: 1\r\n...\r\nDone testAnnulus\r\n__________\r\n\r\nRunning testDisk\r\n....\r\nDone testDisk\r\n__________\r\n\r\nRunning testExterior\r\n...\r\nDone testExterior\r\n__________\r\n\r\nRunning testHalfplane\r\n...\r\nDone testHalfplane\r\n__________\r\n\r\nRunning testRectangle\r\n...\r\nDone testRectangle\r\n__________\r\n\r\nRunning testStrip\r\n...\r\nDone testStrip\r\n__________\r\n\r\nGenerating report. Please wait.\r\n    Preparing content for the report.\r\n    Adding content to the report.\r\n    Writing report to file.\r\nReport has been saved to: <a href=\"https:\/\/blogs.mathworks.com\/developer\/files\/manual_verification_report.pdf\">\/private\/var\/folders\/bm\/6qgg87js1bb7fpr2p475bcwh0002wp\/T\/tp86d8e3a7_aedb_45fa_a82e_0ceb6430ee87\/index.html<\/a>\r\n\r\nans = \r\n\r\n  1&times;19 TestResult array with properties:\r\n\r\n    Name\r\n    Passed\r\n    Failed\r\n    Incomplete\r\n    Duration\r\n    Details\r\n\r\nTotals:\r\n   19 Passed, 0 Failed, 0 Incomplete.\r\n   8.7504 seconds testing time.\r\n\r\n<\/pre><p>This is where it really starts to get beautiful. 
Now we have a full report that we can view at our leisure and confirm that all the visualizations are correct.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/developer\/files\/testreportsnapshot.png\" alt=\"\"> <\/p><p>We've run the whole test suite and have captured the figures for <i><b>all the tests<\/b><\/i>, not just this one. We are now in the realm of <i>semi-automated<\/i> testing. There are some things that really need a human to take a look at to confirm correctness. However, the entirety of the test run and test setup can still be automated! This can still be done via a CI system so you don't have to remember to run the tests and look over the plots every time you change the code. You simply let the automation do it. For things that need manual verification, you can always log away the artifacts in a PDF or HTML report and confirm periodically, or prior to release. If there is a bug, you can mine the artifacts from all your CI builds to see where and when it was introduced.<\/p><p>You can even extend this approach to add an expected image to the report. So if you log a known-good expected image and then use the test code to generate the image for each software change, you can look at the actual image and the expected image right next to each other and confirm that they match. Beautiful. Full test automation is clearly the ideal to strive for, but in those cases where you really need to look at a picture, let the framework and your CI system do all the work for you in setting it up, and you can just quickly and efficiently verify that it is correct.<\/p><p>Happy semi-automated testing!<\/p><p>P.S. 
Take a look at the full report generated in PDF form <a href=\"https:\/\/blogs.mathworks.com\/developer\/files\/manual_verification_report.pdf\">here<\/a>.<\/p><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\"><br>Published with MATLAB&reg; R2018a<br><\/p><\/div>","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img src=\"https:\/\/blogs.mathworks.com\/developer\/files\/testreportsnapshot.png\" class=\"img-responsive attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"\" decoding=\"async\" loading=\"lazy\" \/><\/div><!--introduction--><p>I've been doing a bit of spelunking around the File Exchange and GitHub lately, and I've seen a little pattern emerge in the tests of surprisingly many projects. It looks like this:... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/developer\/2018\/07\/30\/semi-automated-testing\/\">read more >><\/a><\/p>","protected":false},"author":90,"featured_media":1757,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[7],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/posts\/1747"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/users\/90"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/comments?post=1747"}],"version-history":[{"count":20,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/posts\/1747\/revisions"}],"predecessor-version":[{"id":1795,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/posts\/1747\/revisions\/1795"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/media\/1757"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/media?parent=1747"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/categories?post=1747"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/tags?post=1747"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}