{"id":483,"date":"2016-04-11T22:36:14","date_gmt":"2016-04-11T22:36:14","guid":{"rendered":"https:\/\/blogs.mathworks.com\/developer\/?p=483"},"modified":"2016-04-12T03:15:36","modified_gmt":"2016-04-12T03:15:36","slug":"performance-ab-testing","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/developer\/2016\/04\/11\/performance-ab-testing\/","title":{"rendered":"Performance Review Criteria 1: Peer Comparison"},"content":{"rendered":"<div class=\"content\"><p>Have you ever wondered how to compare the relative speed of two snippets of MATLAB code? Who am I kidding? Of course, we all have. I am just as sure that you already have a solution for this. I know this because all around the internet I have seen people comparing the speed of two algorithms, and everyone does it differently. Also, too many times there are problems with the way it is done and it produces an inaccurate result. For example, often people just rely on straight usage of tic\/toc and they gather just one code sample. This sample may be misleading because it can be an outlier or otherwise not truly representative of the expected value of the typical runtime. Also, typically these solutions don't take into account warming up the code. This can also be misleading because one algorithm may take longer to initialize but have better long term performance. Even if the options at play all have the same warmup time, the first algorithm may be penalized with warmup while subsequent algorithms may unfairly benefit from the overall code warmup.<\/p><p>The <a href=\"https:\/\/www.mathworks.com\/help\/matlab\/ref\/timeit.html\"><b><tt>timeit<\/tt><\/b><\/a> function definitely helps here, and I am happy when I see these conversations using  <b><tt>timeit<\/tt><\/b> to measure the code performance. However, even <b><tt>timeit<\/tt><\/b> doesn't fit all usage models. 
While <b><tt>timeit<\/tt><\/b> works fantastically for timing snippets of code, it brings along the following considerations:<\/p><div><ul><li>Each snippet of code to be measured must be placed inside a function.<\/li><li>The measured code includes all of the code inside that function, possibly including initialization code.<\/li><li>It returns a single estimate of the expected value of the runtime rather than a distribution.<\/li><li>It only measures a single function at any given time. Comparing multiple algorithms requires multiple functions and multiple calls to <b><tt>timeit<\/tt><\/b>.<\/li><\/ul><\/div><p>With the performance testing framework in R2016a comes another workflow for comparing the runtime of two (or more) implementations of an algorithm. This can be done quite easily and intuitively by writing a performance test using the <a href=\"https:\/\/www.mathworks.com\/help\/matlab\/matlab_prog\/write-script-based-unit-tests.html\">script based testing<\/a> interface.<\/p><p>To demonstrate this I am going to bring back a blast from the past and talk about Sarah's <a href=\"https:\/\/blogs.mathworks.com\/loren\/2008\/06\/25\/speeding-up-matlab-applications\/\">guest post<\/a> on Loren's blog back in 2008. That post does a great job of describing how to speed up MATLAB code, and I think the concepts there are still very applicable. I'd like to use the code developed in that post to show how easily we can compare the performance of each of the code improvements she made. While we are at it, we can check whether the improvements are still improvements in today's MATLAB, and we can see how we do against 2008.<\/p><p>Enough talking, let's get into the code. The nice thing here is that today we can just place all of this code into a simple script based test! In script based testing, the individual tests are separated by code sections (lines beginning with %%). 
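<\/p><p>For instance, a tiny hypothetical test script with a shared variable section and two measured sections might look like this:<\/p><pre class=\"language-matlab\">x = rand(1000);  <span class=\"comment\">% shared variable section: runs once, is not measured<\/span>\r\n\r\n<span class=\"comment\">%% Square with the power operator<\/span>\r\ny1 = x.^2;\r\n\r\n<span class=\"comment\">%% Square with elementwise multiplication<\/span>\r\ny2 = x.*x;\r\n<\/pre><p>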
When we use the performance test framework to measure code, we will measure the content in each test section separately.<\/p><p>In fact, we don't even need to produce a separate function to initialize our variables like we did in that post; we can just do this once in the script based test's shared variable section. In a test script, the shared variable section consists of any code that appears before the first explicit code section (the first line beginning with %%). So we can put the entire experiment of Sarah's post into one simple script:<\/p><pre class=\"language-matlab\">\r\n<span class=\"comment\">% Performance Test demonstrating Sarah's blog improvements. This can all be<\/span>\r\n<span class=\"comment\">% placed into a single script with the code we want to compare each in<\/span>\r\n<span class=\"comment\">% their own code section. This first section is the shared variable<\/span>\r\n<span class=\"comment\">% section. It will not be measured, but can be used to initialize variables<\/span>\r\n<span class=\"comment\">% that will be available to all code sections independently.<\/span>\r\n\r\n<span class=\"comment\">% Initialize the grid and initial and final points<\/span>\r\nnx1 = 50;\r\nnx2 =  50;\r\nx1l =  0;\r\nx1u = 100;\r\nx2l =  0;\r\nx2u = 100;\r\n\r\nx1 = linspace(x1l,x1u,nx1+1);\r\nx2 = linspace(x2l,x2u,nx2+1);\r\n\r\nlimsf1 = 1:nx1+1;\r\nlimsf2 = 1:nx2+1;\r\n\r\nlimsi1 = limsf1;\r\nlimsi2 = limsf2;\r\n\r\n<span class=\"comment\">% Initialize other variables<\/span>\r\nt = 1;\r\nsigmax1 = 0.5;\r\nsigmax2 = 1;\r\nsigma = t * [sigmax1^2 0; 0 sigmax2^2];\r\ninvSig = inv(sigma);\r\ndetSig = det(sigma);\r\n\r\nexpF = [1 0; 0 1];\r\nn = size(expF, 1);\r\ngausThresh = 10;\r\n\r\nsmall = 0; \r\nsubs = []; \r\nvals = [];\r\n\r\n\r\n<span class=\"comment\">%% Initial Code<\/span>\r\n<span class=\"comment\">% The initial code will serve as a baseline and will be the starting point<\/span>\r\n<span class=\"comment\">% for our performance optimizations. 
This solution iterates through all<\/span>\r\n<span class=\"comment\">% possible initial and final positions and calculates the values of<\/span>\r\n<span class=\"comment\">% exponent and out if exponent &gt; gausThresh.<\/span>\r\n\r\n<span class=\"keyword\">for<\/span> i1 = 1:nx1+1\r\n    <span class=\"keyword\">for<\/span> i2 = 1:nx2+1\r\n        <span class=\"keyword\">for<\/span> f1 = limsf1\r\n            <span class=\"keyword\">for<\/span> f2 = limsf2\r\n\r\n                <span class=\"comment\">% Initial and final position<\/span>\r\n                xi = [x1(i1) x2(i2)]';\r\n                xf = [x1(f1) x2(f2)]';\r\n\r\n                exponent = 0.5 * (xf - expF * xi)'<span class=\"keyword\">...<\/span>\r\n                    * invSig * (xf - expF * xi);\r\n\r\n                <span class=\"keyword\">if<\/span> exponent &gt; gausThresh\r\n                    small = small + 1;\r\n                <span class=\"keyword\">else<\/span>\r\n                    out = 1 \/ (sqrt((2 * pi)^n * detSig))<span class=\"keyword\">...<\/span>\r\n                        * exp(-exponent);\r\n                    subs = [subs; i1 i2 f1 f2];\r\n                    vals = [vals; out];\r\n                <span class=\"keyword\">end<\/span>\r\n\r\n            <span class=\"keyword\">end<\/span>\r\n        <span class=\"keyword\">end<\/span>\r\n    <span class=\"keyword\">end<\/span>\r\n<span class=\"keyword\">end<\/span>\r\n\r\n<span class=\"comment\">%% Code with Preallocation<\/span>\r\n<span class=\"comment\">% Sarah's first improvement was to leverage preallocation to reduce the<\/span>\r\n<span class=\"comment\">% time spent allocating memory and constantly growing the arrays.<\/span>\r\n\r\n<span class=\"comment\">% Initial guess for preallocation<\/span>\r\nmm = min((nx1+1)^2*(nx2+1)^2, 10^6);\r\nsubs = zeros(mm,4);\r\nvals = zeros(mm,1);\r\n\r\ncounter = 0;\r\n\r\n<span class=\"comment\">% Iterate through all possible initial<\/span>\r\n<span class=\"comment\">% and 
final positions<\/span>\r\n<span class=\"keyword\">for<\/span> i1 = 1:nx1+1\r\n    <span class=\"keyword\">for<\/span> i2 = 1:nx2+1\r\n        <span class=\"keyword\">for<\/span> f1 = limsf1\r\n            <span class=\"keyword\">for<\/span> f2 = limsf2\r\n\r\n                xi = [x1(i1) x2(i2)]'; <span class=\"comment\">% Initial position<\/span>\r\n                xf = [x1(f1) x2(f2)]'; <span class=\"comment\">% Final position<\/span>\r\n\r\n                exponent = 0.5 * (xf - expF * xi)'<span class=\"keyword\">...<\/span>\r\n                    * invSig * (xf - expF * xi);\r\n\r\n                <span class=\"comment\">% Increase preallocation if necessary<\/span>\r\n                <span class=\"keyword\">if<\/span> counter == length(vals)\r\n                    subs = [subs; zeros(mm, 4)];\r\n                    vals = [vals; zeros(mm, 1)];\r\n                <span class=\"keyword\">end<\/span>\r\n\r\n                <span class=\"keyword\">if<\/span> exponent &gt; gausThresh\r\n                    small = small + 1;\r\n                <span class=\"keyword\">else<\/span>\r\n                    <span class=\"comment\">% Counter introduced<\/span>\r\n                    counter = counter + 1;\r\n                    out = 1 \/ (sqrt((2 * pi)^n * detSig))<span class=\"keyword\">...<\/span>\r\n                        * exp(-exponent);\r\n                    subs(counter,:) = [i1 i2 f1 f2];\r\n                    vals(counter) = out;\r\n                <span class=\"keyword\">end<\/span>\r\n\r\n            <span class=\"keyword\">end<\/span>\r\n        <span class=\"keyword\">end<\/span>\r\n    <span class=\"keyword\">end<\/span>\r\n<span class=\"keyword\">end<\/span>\r\n\r\n<span class=\"comment\">% Remove zero components that came from preallocation. Compute the mask<\/span>\r\n<span class=\"comment\">% before shrinking vals so the rows of subs stay in sync with vals.<\/span>\r\nkeep = vals &gt; 0;\r\nvals = vals(keep);\r\nsubs = subs(keep,:);\r\n\r\n<span class=\"comment\">%% Vectorize the Inner Two Loops<\/span>\r\n<span class=\"comment\">% The second step was to leverage vectorization. 
Sarah outlines how to<\/span>\r\n<span class=\"comment\">% vectorize the inner two loops to gain substantial speed improvement.<\/span>\r\n\r\nvals = cell(nx1+1,nx2+1); <span class=\"comment\">% Cell preallocation<\/span>\r\nsubs = cell(nx1+1,nx2+1); <span class=\"comment\">% Cell preallocation<\/span>\r\n\r\n[xind,yind] = meshgrid(limsf1,limsf2);\r\nxyindices = [xind(:)' ; yind(:)'];\r\n\r\n[x,y] = meshgrid(x1(limsf1),x2(limsf2));\r\nxyfinal = [x(:)' ; y(:)'];\r\n\r\nexptotal = zeros(length(xyfinal),1);\r\n\r\n<span class=\"comment\">% Loop over all possible combinations of positions<\/span>\r\n<span class=\"keyword\">for<\/span> i1 = 1:nx1+1\r\n    <span class=\"keyword\">for<\/span> i2 = 1:nx2+1\r\n\r\n        xyinitial = repmat([x1(i1);x2(i2)],1,length(xyfinal));\r\n\r\n        expa = 0.5 * (xyfinal - expF * xyinitial);\r\n        expb = invSig * (xyfinal - expF * xyinitial);\r\n        exptotal(:,1) = expa(1,:).*expb(1,:)+expa(2,:).*expb(2,:);\r\n\r\n        index = find(exptotal &lt; gausThresh);\r\n        expreduced = exptotal(exptotal &lt; gausThresh);\r\n\r\n        out = 1 \/ (sqrt((2 * pi)^n * detSig)) * exp(-(expreduced));\r\n        vals{i1,i2} = out;\r\n        subs{i1,i2} = [i1*ones(1,length(index)) ; <span class=\"keyword\">...<\/span>\r\n            i2*ones(1,length(index)); xyindices(1,index); <span class=\"keyword\">...<\/span>\r\n            xyindices(2,index)]' ;\r\n\r\n    <span class=\"keyword\">end<\/span>\r\n<span class=\"keyword\">end<\/span>\r\n\r\n<span class=\"comment\">% Reshape and convert output so it is in a<\/span>\r\n<span class=\"comment\">% simple matrix format<\/span>\r\nvals = cell2mat(vals(:));\r\nsubs = cell2mat(subs(:));\r\n\r\nsmall = ((nx1+1)^2*(nx2+1)^2)-length(subs);\r\n\r\n<span class=\"comment\">%% Vectorize the Inner Three Loops<\/span>\r\n<span class=\"comment\">% Let's take the vectorization approach one more level and remove all but<\/span>\r\n<span class=\"comment\">% the outermost loop.<\/span>\r\n\r\n<span 
class=\"comment\">% ndgrid gives a matrix of all the possible combinations<\/span>\r\n[aind,bind,cind] = ndgrid(limsi2,limsf1,limsf2);\r\n[a,b,c] = ndgrid(x2,x1,x2);\r\n\r\nvals = cell(nx1+1,nx2+1);  <span class=\"comment\">% Cell preallocation<\/span>\r\nsubs = cell(nx1+1,nx2+1);  <span class=\"comment\">% Cell preallocation<\/span>\r\n\r\n<span class=\"comment\">% Convert grids to single vector to use in a single loop<\/span>\r\nb = b(:); aind = aind(:); bind = bind(:); cind = cind(:);\r\n\r\nexpac = a(:)-c(:); <span class=\"comment\">% Calculate x2-x1<\/span>\r\n\r\n<span class=\"comment\">% Iterate through initial x1 positions (i1)<\/span>\r\n<span class=\"keyword\">for<\/span> i1 = limsi1\r\n\r\n    exbx1= b-x1(i1);\r\n    expaux = invSig(2)*exbx1.*expac;\r\n    exponent = 0.5*(invSig(1)*exbx1.*exbx1+expaux);\r\n\r\n    index = find(exponent &lt; gausThresh);\r\n    expreduced = exponent(exponent &lt; gausThresh);\r\n\r\n    vals{i1} = 1 \/ (sqrt((2 * pi)^n * detSig))<span class=\"keyword\">...<\/span>\r\n        .*exp(-expreduced);\r\n\r\n    subs{i1} = [i1*ones(1,length(index));\r\n        aind(index)' ; bind(index)';<span class=\"keyword\">...<\/span>\r\n        cind(index)']';\r\n\r\n<span class=\"keyword\">end<\/span>\r\n\r\nvals = cell2mat(vals(:));\r\nsubs = cell2mat(subs(:));\r\n\r\nsmall = ((nx1+1)^2*(nx2+1)^2)-length(subs);\r\n\r\n<span class=\"comment\">%% Final Solution<\/span>\r\n<span class=\"comment\">% Putting it all together, Sarah demonstrated the combination of preallocation,<\/span>\r\n<span class=\"comment\">% vectorization, and reducing unnecessary calculations in the loop.<\/span>\r\n\r\nconst=1 \/ (sqrt((2 * pi)^n * detSig));\r\n \r\n<span class=\"comment\">% ndgrid gives a matrix of all the possible combinations<\/span>\r\n<span class=\"comment\">% of position, except limsi1 which we iterate over<\/span>\r\n\r\n[aind,bind,cind] = ndgrid(limsi2,limsf1,limsf2);\r\n[a,b,c] = ndgrid(x2,x1,x2);\r\n\r\nvals = cell(nx1+1,nx2+1);  <span 
class=\"comment\">% Cell preallocation<\/span>\r\nsubs = cell(nx1+1,nx2+1);  <span class=\"comment\">% Cell preallocation<\/span>\r\n\r\n<span class=\"comment\">% Convert grids to single vector to<\/span>\r\n<span class=\"comment\">% use in a single for-loop<\/span>\r\nb = b(:);\r\naind = aind(:);\r\nbind = bind(:);\r\ncind = cind(:);\r\n\r\nexpac= a(:)-c(:);\r\nexpaux = invSig(2)*expac.*expac;\r\n\r\n<span class=\"comment\">% Iterate through initial x1 positions<\/span>\r\n\r\n<span class=\"keyword\">for<\/span> i1 = limsi1\r\n\r\n    expbx1= b-x1(i1);\r\n    exponent = 0.5*(invSig(1)*expbx1.*expbx1+expaux);\r\n\r\n    <span class=\"comment\">% Find indices where exponent &lt; gausThresh<\/span>\r\n    index = find(exponent &lt; gausThresh);\r\n\r\n    <span class=\"comment\">% Find and keep values where exp &lt; gausThresh<\/span>\r\n\r\n    expreduced = exponent(exponent &lt; gausThresh);\r\n\r\n    vals{i1} = const.*exp(-expreduced);\r\n\r\n    subs{i1} = [i1*ones(1,length(index));\r\n        aind(index)' ; bind(index)';<span class=\"keyword\">...<\/span>\r\n        cind(index)']';\r\n<span class=\"keyword\">end<\/span>\r\n\r\nvals = cell2mat(vals(:));\r\nsubs = cell2mat(subs(:));\r\n\r\nsmall = ((nx1+1)^2*(nx2+1)^2)-length(subs);\r\n\r\n<\/pre><p>Sorry for the big code dump. Take a look back at the <a href=\"https:\/\/blogs.mathworks.com\/loren\/2008\/06\/25\/speeding-up-matlab-applications\/\">original post<\/a> for an in depth explanation of each code section and how they improve upon one another. However, just look at the higher level structure of this code and it implications:<\/p><div><ul><li>First of all all of the code is in the same script. This means that all code under question is in close proximity to that being compared.<\/li><li>We leverage the shared variable section to share common variable definitions. 
This single-sources these definitions to ensure we are comparing apples to apples (in fact, the original post had a <a href=\"https:\/\/blogs.mathworks.com\/loren\/2008\/06\/25\/speeding-up-matlab-applications\/#comment-32690\">bug in it<\/a> precisely because these variables weren't single-sourced!).<\/li><li>Since the code is in the same script, the steps can be described relative to one another and published for more streamlined communication.<\/li><li>You can run it all together at once using the new <a href=\"https:\/\/www.mathworks.com\/help\/matlab\/ref\/runperf.html\"><b><tt>runperf<\/tt><\/b><\/a> function!<\/li><\/ul><\/div><p>When you run this with runperf, the shared variable section runs first and initializes the variables just once. After that, each code section is run repeatedly: first to warm up the code in the section, and then to gather multiple measurements so that the result is robust to statistical sampling error. The function collects enough samples that the mean of the samples has a 5% relative margin of error at a 95% confidence level. It then returns the full data as a MeasurementResult array with one element per code snippet. Take a look:<\/p><pre class=\"codeinput\">measResult = runperf(<span class=\"string\">'comparisonTest'<\/span>)\r\n<\/pre><pre class=\"codeoutput\">Running comparisonTest\r\n..........\r\n..........\r\n..........\r\n..........\r\nDone comparisonTest\r\n__________\r\n\r\n\r\nmeasResult = \r\n\r\n  1x5 MeasurementResult array with properties:\r\n\r\n    Name\r\n    Valid\r\n    Samples\r\n    TestActivity\r\n\r\nTotals:\r\n   5 Valid, 0 Invalid.\r\n\r\n<\/pre><p>Each element of this result array contains a wealth of information you might want to leverage to analyze the different algorithms' performance. Each element has a name derived from the code section's title to identify the code snippet being analyzed. 
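<\/p><p>For example, two quick sanity checks on the result array (a sketch using the properties listed in the output above):<\/p><pre class=\"codeinput\">all([measResult.Valid])  <span class=\"comment\">% did every section produce valid measurements?<\/span>\r\n{measResult.Name}        <span class=\"comment\">% one name per code section<\/span>\r\n<\/pre><p>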
Also, and importantly, every code snippet's measured sample times are included in the Samples property, which is a MATLAB table containing the measurement information (excluding the samples taken to warm up the code).<\/p><pre class=\"codeinput\">measResult.Samples\r\n<\/pre><pre class=\"codeoutput\">\r\nans = \r\n\r\n               Name               MeasuredTime         Timestamp              Host         Platform           Version                      RunIdentifier            \r\n    __________________________    ____________    ____________________    _____________    ________    _____________________    ____________________________________\r\n\r\n    comparisonTest\/InitialCode    19.032          11-Apr-2016 16:18:21    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/InitialCode    18.956          11-Apr-2016 16:18:40    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/InitialCode    19.697          11-Apr-2016 16:19:00    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/InitialCode    18.802          11-Apr-2016 16:19:19    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n\r\n\r\nans = \r\n\r\n                    Name                    MeasuredTime         Timestamp              Host         Platform           Version                      RunIdentifier            \r\n    ____________________________________    ____________    ____________________    _____________    ________    _____________________    ____________________________________\r\n\r\n    comparisonTest\/CodeWithPreallocation    17.613          11-Apr-2016 16:20:47    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/CodeWithPreallocation    17.533          11-Apr-2016 16:21:04    MyMachineName    
maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/CodeWithPreallocation    17.476          11-Apr-2016 16:21:22    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/CodeWithPreallocation    17.574          11-Apr-2016 16:21:39    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n\r\n\r\nans = \r\n\r\n                      Name                      MeasuredTime         Timestamp              Host         Platform           Version                      RunIdentifier            \r\n    ________________________________________    ____________    ____________________    _____________    ________    _____________________    ____________________________________\r\n\r\n    comparisonTest\/VectorizeTheInnerTwoLoops    0.16654         11-Apr-2016 16:21:40    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/VectorizeTheInnerTwoLoops    0.16728         11-Apr-2016 16:21:41    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/VectorizeTheInnerTwoLoops    0.16495         11-Apr-2016 16:21:41    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/VectorizeTheInnerTwoLoops    0.16462         11-Apr-2016 16:21:41    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n\r\n\r\nans = \r\n\r\n                       Name                       MeasuredTime         Timestamp              Host         Platform           Version                      RunIdentifier            \r\n    __________________________________________    ____________    ____________________    _____________    ________    _____________________    ____________________________________\r\n\r\n    comparisonTest\/VectorizeTheInnerThreeLoops  
  0.054535        11-Apr-2016 16:21:41    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/VectorizeTheInnerThreeLoops    0.053873        11-Apr-2016 16:21:41    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/VectorizeTheInnerThreeLoops    0.053708        11-Apr-2016 16:21:41    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/VectorizeTheInnerThreeLoops      0.0535        11-Apr-2016 16:21:41    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n\r\n\r\nans = \r\n\r\n                Name                MeasuredTime         Timestamp              Host         Platform           Version                      RunIdentifier            \r\n    ____________________________    ____________    ____________________    _____________    ________    _____________________    ____________________________________\r\n\r\n    comparisonTest\/FinalSolution    0.051026        11-Apr-2016 16:21:42    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/FinalSolution    0.050669        11-Apr-2016 16:21:42    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/FinalSolution    0.050521        11-Apr-2016 16:21:42    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/FinalSolution    0.052291        11-Apr-2016 16:21:42    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n\r\n<\/pre><p>As you can see above each of the tables are separate for each code section being measured, but are just begging to be concatenated together into one table:<\/p><pre class=\"codeinput\">allSamples = vertcat(measResult.Samples)\r\n<\/pre><pre 
class=\"codeoutput\">\r\nallSamples = \r\n\r\n                       Name                       MeasuredTime         Timestamp              Host         Platform           Version                      RunIdentifier            \r\n    __________________________________________    ____________    ____________________    _____________    ________    _____________________    ____________________________________\r\n\r\n    comparisonTest\/InitialCode                      19.032        11-Apr-2016 16:18:21    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/InitialCode                      18.956        11-Apr-2016 16:18:40    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/InitialCode                      19.697        11-Apr-2016 16:19:00    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/InitialCode                      18.802        11-Apr-2016 16:19:19    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/CodeWithPreallocation            17.613        11-Apr-2016 16:20:47    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/CodeWithPreallocation            17.533        11-Apr-2016 16:21:04    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/CodeWithPreallocation            17.476        11-Apr-2016 16:21:22    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/CodeWithPreallocation            17.574        11-Apr-2016 16:21:39    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/VectorizeTheInnerTwoLoops       0.16654        11-Apr-2016 16:21:40    
MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/VectorizeTheInnerTwoLoops       0.16728        11-Apr-2016 16:21:41    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/VectorizeTheInnerTwoLoops       0.16495        11-Apr-2016 16:21:41    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/VectorizeTheInnerTwoLoops       0.16462        11-Apr-2016 16:21:41    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/VectorizeTheInnerThreeLoops    0.054535        11-Apr-2016 16:21:41    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/VectorizeTheInnerThreeLoops    0.053873        11-Apr-2016 16:21:41    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/VectorizeTheInnerThreeLoops    0.053708        11-Apr-2016 16:21:41    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/VectorizeTheInnerThreeLoops      0.0535        11-Apr-2016 16:21:41    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/FinalSolution                  0.051026        11-Apr-2016 16:21:42    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/FinalSolution                  0.050669        11-Apr-2016 16:21:42    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/FinalSolution                  0.050521        11-Apr-2016 16:21:42    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n    comparisonTest\/FinalSolution                  
0.052291        11-Apr-2016 16:21:42    MyMachineName    maci64      9.0.0.341360 (R2016a)    ef8f9d2d-5028-48ad-ab9f-422f9719b711\r\n\r\n<\/pre><p>Great, now we have all the data together, and using varfun we can apply whatever statistic we prefer to these samples to get an easy comparison. This flexibility is important: sometimes we want to observe the expected value of the runtime, in which case we would use the mean. However, you may prefer the median to be more robust to outliers. You might also note that performance noise is additive, which makes a case for using the min, and even the max might be desired to get a better idea of worst-case performance. The resulting data structure gives you all the samples collected and lets you decide how you'd like to analyze them. Let's do this today with the median. This is done by calling varfun with <tt>InputVariables<\/tt> set to <tt>MeasuredTime<\/tt> and <tt>GroupingVariables<\/tt> set to the Name, so the statistic is computed separately for each code snippet we are comparing:<\/p><pre class=\"codeinput\">overView = varfun(@median, allSamples, <span class=\"keyword\">...<\/span>\r\n    <span class=\"string\">'InputVariables'<\/span>, <span class=\"string\">'MeasuredTime'<\/span>, <span class=\"string\">'GroupingVariables'<\/span>, <span class=\"string\">'Name'<\/span>)\r\n<\/pre><pre class=\"codeoutput\">\r\noverView = \r\n\r\n                       Name                       GroupCount    median_MeasuredTime\r\n    __________________________________________    __________    ___________________\r\n\r\n    comparisonTest\/InitialCode                    4               18.994           \r\n    comparisonTest\/CodeWithPreallocation          4               17.554           \r\n    comparisonTest\/VectorizeTheInnerTwoLoops      4              0.16574           \r\n    comparisonTest\/VectorizeTheInnerThreeLoops    4             0.053791           \r\n    comparisonTest\/FinalSolution                  4             0.050847           \r\n\r\n<\/pre><p>There we go: a concise, high-level view of each of our algorithms' performance, obtained by calling runperf on our script and taking the median of the <tt>MeasuredTime<\/tt>. This script can also be published to communicate our options, so we can describe the differences clearly and weigh things like code readability against code performance:<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/developer\/files\/2016PublishedPerfTest.png\" alt=\"\"> <\/p><p>So how did we do against 2008? Sarah's recommendations still hold true, and each of her improvements still produces a corresponding improvement in the runtime performance. Also, if you compare the runtimes of these code snippets against those from 2008, you can see that the overall times for all algorithms are substantially faster (with <tt>nx1<\/tt> = <tt>nx2<\/tt> = 50). Woot Woot! While this is certainly an unfair comparison, since I am not running this performance analysis on the same hardware that Sarah did 8 years ago, it is still encouraging to see progress, reflecting both advancements in hardware and the wealth of performance improvements applied to MATLAB over the years, not least the new execution engine in R2015b. 
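<\/p><p>To put a number on the cumulative gain, we can normalize the medians in <tt>overView<\/tt> against the baseline (a sketch that assumes the row order shown above, with the initial code first):<\/p><pre class=\"codeinput\"><span class=\"comment\">% Speedup of each variant relative to the initial code<\/span>\r\nspeedup = overView.median_MeasuredTime(1) .\/ overView.median_MeasuredTime\r\n<\/pre><p>With the <tt>nx1<\/tt> = <tt>nx2<\/tt> = 50 data above, this works out to a speedup of roughly 370x for the final solution over the initial code.<\/p><p>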
I did the same thing for <tt>nx1<\/tt> = <tt>nx2<\/tt> = 100, and it also showed both significant improvement and consistency:<\/p><pre class=\"codeinput\">load <span class=\"string\">n100Results.mat<\/span>\r\nn100Comparison\r\n<\/pre><pre class=\"codeoutput\">\r\nn100Comparison = \r\n\r\n                       Name                       GroupCount    median_MeasuredTime\r\n    __________________________________________    __________    ___________________\r\n\r\n    comparisonTest\/InitialCode                    4             888.36             \r\n    comparisonTest\/CodeWithPreallocation          4             271.55             \r\n    comparisonTest\/VectorizeTheInnerTwoLoops      4              2.817             \r\n    comparisonTest\/VectorizeTheInnerThreeLoops    4             1.2057             \r\n    comparisonTest\/FinalSolution                  4             1.0468             \r\n\r\n<\/pre><p>Here is one more quick trick. Rather than seeing a single statistical value for each algorithm, we have access to all measured points so we can see the distribution. 
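<\/p><p>Before plotting, we can get a quick numeric look at the spread per snippet by reusing the concatenated <tt>allSamples<\/tt> table from earlier, swapping the median for the standard deviation:<\/p><pre class=\"codeinput\">varfun(@std, allSamples, <span class=\"string\">'InputVariables'<\/span>, <span class=\"string\">'MeasuredTime'<\/span>, <span class=\"string\">'GroupingVariables'<\/span>, <span class=\"string\">'Name'<\/span>)\r\n<\/pre><p>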
Note that in this case the distribution itself is somewhat less interesting because of the low sample sizes needed to achieve the relative margin of error, but it does provide some extra insight, especially with larger sample sizes.<\/p><pre class=\"codeinput\">ax = axes;\r\nhold(ax);\r\n<span class=\"keyword\">for<\/span> result = measResult\r\n    h = histogram(ax, result.Samples.MeasuredTime, <span class=\"string\">'BinWidth'<\/span>,  0.025*median(result.Samples.MeasuredTime));\r\n<span class=\"keyword\">end<\/span>\r\nlegend(measResult.Name,<span class=\"string\">'Location'<\/span>,<span class=\"string\">'SouthOutside'<\/span>);\r\n<\/pre><pre class=\"codeoutput\">Current plot held\r\n<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/developer\/files\/2016performanceAB_01.png\" alt=\"\"> <p>Because Sarah improved the code so drastically (we are talking orders of magnitude here), we need to use a log scale on the x-axis to see all the different distributions clearly. Nice!<\/p><pre class=\"codeinput\">ax.XScale = <span class=\"string\">'log'<\/span>;\r\n<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/developer\/files\/2016performanceAB_02.png\" alt=\"\"> <p>How have you measured the performance of two algorithms against each other in the past? Do you see this as helping in your performance analysis workflows? 
Let me know your thoughts in the comments.<\/p><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\"><br>\r\n      Published with MATLAB&reg; R2016a<br><\/p><\/div>","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img src=\"https:\/\/blogs.mathworks.com\/developer\/files\/2016performanceAB_02.png\" class=\"img-responsive attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"\" decoding=\"async\" loading=\"lazy\" \/><\/div><p>Have you ever wondered how to compare the relative speed of two snippets of MATLAB code? Who am I kidding? Of course, we all have. I am just as sure that you already have a solution for this. I know... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/developer\/2016\/04\/11\/performance-ab-testing\/\">read more >><\/a><\/p>","protected":false},"author":90,"featured_media":502,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[13,7],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/posts\/483"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/users\/90"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/comments?post=483"}],"version-history":[{"count":14,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/posts\/483\/revisions"}],"predecessor-version":[{"id":506,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/posts\/483\/revisions\/506"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/media\/502"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/media?parent=483"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/categories?post=483"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/tags?post=483"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}