{"id":1901,"date":"2018-10-05T18:26:56","date_gmt":"2018-10-05T22:26:56","guid":{"rendered":"https:\/\/blogs.mathworks.com\/developer\/?p=1901"},"modified":"2018-10-05T18:34:47","modified_gmt":"2018-10-05T22:34:47","slug":"keepmeasuring","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/developer\/2018\/10\/05\/keepmeasuring\/","title":{"rendered":"Just Keep Swimming"},"content":{"rendered":"<div class=\"content\"><p>Remember Dory?<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"http:\/\/blogs.mathworks.com\/developer\/files\/y2018Dory.jpg\" alt=\"\"> <\/p><p><i>Image Credit: Silvio Tanaka [ <a href=\"https:\/\/creativecommons.org\/licenses\/by\/2.0\">CC BY 2.0<\/a> ], <a href=\"https:\/\/commons.wikimedia.org\/wiki\/File:Paracanthurus_hepatus.jpg\">via Wikimedia Commons<\/a><\/i><\/p><p>The model of persistence in the face of difficult circumstances, the hilarious and free-spirited fish of the vast ocean expanse, the adopted aunt of our lovable Nemo, that Dory?<\/p><p>Well, she was ahead of her time. Who knew the wisdom of her sage advice: just keep swimming.<\/p><p>We need a little bit of that sometimes when we are performance testing. Specifically, we need to just keep swimming (so to speak) when we are measuring code that is just too fast. For example, let's take the example from the <a href=\"https:\/\/blogs.mathworks.com\/developer\/2018\/09\/18\/jmeter-results-jenkins\/\">last post<\/a> (CQ's matrix library). 
The performance tests we wrote here look like this:<\/p><pre class=\"language-matlab\">\r\n<span class=\"keyword\">classdef<\/span> tMatrixLibrary &lt; matlab.perftest.TestCase\r\n    \r\n    <span class=\"keyword\">properties<\/span>(TestParameter)\r\n        TestMatrix = struct(<span class=\"string\">'midSize'<\/span>, magic(600),<span class=\"keyword\">...<\/span>\r\n            <span class=\"string\">'largeSize'<\/span>, magic(1000));\r\n    <span class=\"keyword\">end<\/span>\r\n    \r\n    <span class=\"keyword\">methods<\/span>(Test)\r\n        <span class=\"keyword\">function<\/span> testSum(testCase, TestMatrix)\r\n            matrix_sum(TestMatrix);\r\n        <span class=\"keyword\">end<\/span>\r\n        \r\n        <span class=\"keyword\">function<\/span> testMean(testCase, TestMatrix)\r\n            matrix_mean(TestMatrix);\r\n        <span class=\"keyword\">end<\/span>\r\n        \r\n        <span class=\"keyword\">function<\/span> testEig(testCase, TestMatrix)\r\n            \r\n            testCase.assertReturnsTrue(@() size(TestMatrix,1) == size(TestMatrix,2), <span class=\"keyword\">...<\/span>\r\n                <span class=\"string\">'Eig only works on square matrix'<\/span>);\r\n            testCase.startMeasuring;\r\n            matrix_eig(TestMatrix);\r\n            testCase.stopMeasuring;\r\n            \r\n        <span class=\"keyword\">end<\/span>\r\n    <span class=\"keyword\">end<\/span>\r\n<span class=\"keyword\">end<\/span>\r\n\r\n<\/pre><p>Here you can see that we tested against a \"medium size\" problem and a \"large size\" problem. (Un)conveniently missing, however, is a \"small size\" problem. Why is this? 
Well, why don't we add one...<\/p><pre class=\"language-matlab\">\r\n<span class=\"keyword\">classdef<\/span> tMatrixLibrary_v2 &lt; matlab.perftest.TestCase\r\n    \r\n    <span class=\"keyword\">properties<\/span>(TestParameter)\r\n        TestMatrix = struct(<span class=\"string\">'smallSize'<\/span>, magic(100), <span class=\"string\">'midSize'<\/span>, magic(600),<span class=\"keyword\">...<\/span>\r\n            <span class=\"string\">'largeSize'<\/span>, magic(1000));\r\n    <span class=\"keyword\">end<\/span>\r\n    \r\n    <span class=\"keyword\">methods<\/span>(Test)\r\n        <span class=\"keyword\">function<\/span> testSum(testCase, TestMatrix)\r\n            matrix_sum(TestMatrix);\r\n        <span class=\"keyword\">end<\/span>\r\n        \r\n        <span class=\"keyword\">function<\/span> testMean(testCase, TestMatrix)\r\n            matrix_mean(TestMatrix);\r\n        <span class=\"keyword\">end<\/span>\r\n        \r\n        <span class=\"keyword\">function<\/span> testEig(testCase, TestMatrix)\r\n            \r\n            testCase.assertReturnsTrue(@() size(TestMatrix,1) == size(TestMatrix,2), <span class=\"keyword\">...<\/span>\r\n                <span class=\"string\">'Eig only works on square matrix'<\/span>);\r\n            testCase.startMeasuring;\r\n            matrix_eig(TestMatrix);\r\n            testCase.stopMeasuring;\r\n            \r\n        <span class=\"keyword\">end<\/span>\r\n    <span class=\"keyword\">end<\/span>\r\n<span class=\"keyword\">end<\/span>\r\n\r\n<\/pre><p>...along with a quick function to check the validity of the result and we'll find out:<\/p><pre class=\"language-matlab\">\r\n<span class=\"keyword\">function<\/span> checkResults(results)\r\ndisp(newline)\r\ndispFrame\r\n<span class=\"keyword\">if<\/span> ~all([results.Valid])\r\n    disp(<span class=\"string\">'Oh no Dory, some measurements were invalid!'<\/span>)\r\n\r\n<span class=\"keyword\">else<\/span>\r\n    disp(<span class=\"string\">'Thanks Dory, 
you''re the best! All our measurements are good.'<\/span>)\r\n<span class=\"keyword\">end<\/span>\r\ndispFrame\r\n\r\n<span class=\"keyword\">end<\/span>\r\n<span class=\"keyword\">function<\/span> dispFrame\r\ndisp(<span class=\"string\">':::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::'<\/span>);\r\n<span class=\"keyword\">end<\/span>\r\n\r\n<\/pre><pre class=\"codeinput\">results = runperf(<span class=\"string\">'tMatrixLibrary_v2'<\/span>);\r\ncheckResults(results)\r\n<\/pre><pre class=\"codeoutput\">Running tMatrixLibrary_v2\r\n........\r\n================================================================================\r\ntMatrixLibrary_v2\/testSum(TestMatrix=smallSize) was filtered.\r\n    Test Diagnostic: The MeasuredTime should not be too close to the precision of the framework.\r\n================================================================================\r\n.. .......... .......... .......... .......\r\n================================================================================\r\ntMatrixLibrary_v2\/testMean(TestMatrix=smallSize) was filtered.\r\n    Test Diagnostic: The MeasuredTime should not be too close to the precision of the framework.\r\n================================================================================\r\n...\r\n.......... .......... .......... .......... .Warning: Target Relative Margin of Error not met after running the MaxSamples\r\nfor tMatrixLibrary_v2\/testMean(TestMatrix=largeSize). \r\n.........\r\n.......... 
.....\r\nDone tMatrixLibrary_v2\r\n__________\r\n\r\nFailure Summary:\r\n\r\n     Name                                              Failed  Incomplete  Reason(s)\r\n    ===============================================================================================\r\n     tMatrixLibrary_v2\/testSum(TestMatrix=smallSize)               X       Filtered by assumption.\r\n    -----------------------------------------------------------------------------------------------\r\n     tMatrixLibrary_v2\/testMean(TestMatrix=smallSize)              X       Filtered by assumption.\r\n\r\n\r\n:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\r\nOh no Dory, some measurements were invalid!\r\n:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\r\n<\/pre><p>Uh oh, that doesn't look ideal. It looks like some of our small-sized measurements weren't valid. They weren't valid because the framework recognized that the code's execution time was too close to its measurable precision. The framework determined that the measurement would have been garbage, so rather than risk providing bad data it proactively filtered the tests that were too fast and marked their results as invalid. Well, I still want to measure this fast case, so what do we do? 
Typically, what we see happening is people wrapping their code with a static <b><tt>for<\/tt><\/b> loop, like so:<\/p><pre class=\"language-matlab\">\r\n<span class=\"keyword\">classdef<\/span> tMatrixLibrary_v3 &lt; matlab.perftest.TestCase\r\n    \r\n    <span class=\"keyword\">properties<\/span>(TestParameter)\r\n        TestMatrix = struct(<span class=\"string\">'smallSize'<\/span>, magic(100), <span class=\"string\">'midSize'<\/span>, magic(600),<span class=\"keyword\">...<\/span>\r\n            <span class=\"string\">'largeSize'<\/span>, magic(1000));\r\n    <span class=\"keyword\">end<\/span>\r\n    \r\n    <span class=\"keyword\">methods<\/span>(Test)\r\n        <span class=\"keyword\">function<\/span> testSum(testCase, TestMatrix)\r\n            <span class=\"keyword\">for<\/span> idx = 1:1000\r\n                matrix_sum(TestMatrix);\r\n            <span class=\"keyword\">end<\/span>\r\n        <span class=\"keyword\">end<\/span>\r\n        \r\n        <span class=\"keyword\">function<\/span> testMean(testCase, TestMatrix)\r\n            <span class=\"keyword\">for<\/span> idx = 1:1000\r\n                matrix_mean(TestMatrix);\r\n            <span class=\"keyword\">end<\/span>\r\n        <span class=\"keyword\">end<\/span>\r\n        \r\n        <span class=\"keyword\">function<\/span> testEig(testCase, TestMatrix)\r\n            \r\n            testCase.assertReturnsTrue(@() size(TestMatrix,1) == size(TestMatrix,2), <span class=\"keyword\">...<\/span>\r\n                <span class=\"string\">'Eig only works on square matrix'<\/span>);\r\n            testCase.startMeasuring;\r\n            matrix_eig(TestMatrix);\r\n            testCase.stopMeasuring;\r\n            \r\n        <span class=\"keyword\">end<\/span>\r\n    <span class=\"keyword\">end<\/span>\r\n<span class=\"keyword\">end<\/span>\r\n\r\n<\/pre><pre class=\"codeinput\">results = runperf(<span class=\"string\">'tMatrixLibrary_v3'<\/span>);\r\ncheckResults(results)\r\n<\/pre><pre 
class=\"codeoutput\">Running tMatrixLibrary_v3\r\n.......... .......... .......... .......... ..........\r\n.......... .......... ..\r\nDone tMatrixLibrary_v3\r\n__________\r\n\r\n\r\n\r\n:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\r\nThanks Dory, you're the best! All our measurements are good.\r\n:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\r\n<\/pre><p>Well, the results look good at least. Think of it as if you were measuring the weight of a feather using a kitchen scale. Weighing one feather will give you a bunk measurement, so instead you gather together 1000 or more feathers, put them in a box, and weigh the whole thing to get a good idea of the average weight of a feather.<\/p><p>Problem solved? Not really. There are some big drawbacks to this approach. Let's enumerate:<\/p><div><ol><li>I had to choose a number of iterations to test against. What's more, the choice was arbitrary. How do I know that I am not right on the edge of framework precision? If I am, I am likely to see tests that are sporadically too fast to measure! In addition, the framework precision is machine dependent, so different machines will have different precision thresholds. There is no one good number to go with, so it's anyone's guess. Not comforting.<\/li><li><i><b>#ohmygoodness<\/b><\/i> was this slow! Why was it slow? Because I had to run 1000 iterations of everything, including the larger matrix sizes. These larger sizes don't need to be run in a loop, but to maintain an apples-to-apples comparison they must be if the smaller sizes are.<\/li><li>This approach falls flat when comparing algorithms against each other. 
If one algorithm needs 1000 iterations but the other only needs 750, we quickly get into comparing apples and oranges and lose insight into our true relative performance.<\/li><li>Let's say we diligently track our code performance over time, and furthermore we do a bang-up job optimizing our critical code and make vast improvements in its performance. This improvement may require that we \"up\" the iteration count, since 1000 iterations may suddenly become too fast to measure as a result of our optimizations, and we now need to measure 10,000 iterations. Once we do this, however, all future measurements are on a different scale than our historical data. Lame.<\/li><li>Finally, and perhaps as a root cause of some of the apples\/oranges troubles discussed above, it is simply not a true measurement of the code's execution time.<\/li><\/ol><\/div><p>So, the takeaway: once your code hits the limits of how fast we can measure something, we can add a static <b><tt>for<\/tt><\/b> loop and maybe hobble along, but it's really not the best experience.<\/p><p><i>Enter <b>R2018b<\/b>.<\/i><\/p><p>In R2018b the performance testing framework has a new <b><tt>keepMeasuring<\/tt><\/b> method on the <b><tt>matlab.perftest.TestCase<\/tt><\/b> class to support measuring faster code. How is this used? 
Put it in a while loop and let the framework determine the right number of iterations:<\/p><pre class=\"language-matlab\">\r\n<span class=\"keyword\">classdef<\/span> tMatrixLibrary_final &lt; matlab.perftest.TestCase\r\n    \r\n    <span class=\"keyword\">properties<\/span>(TestParameter)\r\n        TestMatrix = struct(<span class=\"string\">'smallSize'<\/span>, magic(100), <span class=\"string\">'midSize'<\/span>, magic(600),<span class=\"keyword\">...<\/span>\r\n            <span class=\"string\">'largeSize'<\/span>, magic(1000));\r\n    <span class=\"keyword\">end<\/span>\r\n    \r\n    <span class=\"keyword\">methods<\/span>(Test)\r\n        <span class=\"keyword\">function<\/span> testSum(testCase, TestMatrix)\r\n            <span class=\"keyword\">while<\/span> testCase.keepMeasuring\r\n                matrix_sum(TestMatrix);\r\n            <span class=\"keyword\">end<\/span>\r\n        <span class=\"keyword\">end<\/span>\r\n        \r\n        <span class=\"keyword\">function<\/span> testMean(testCase, TestMatrix)\r\n            <span class=\"keyword\">while<\/span> testCase.keepMeasuring\r\n                matrix_mean(TestMatrix);\r\n            <span class=\"keyword\">end<\/span>\r\n        <span class=\"keyword\">end<\/span>\r\n        \r\n        <span class=\"keyword\">function<\/span> testEig(testCase, TestMatrix)\r\n            \r\n            testCase.assertReturnsTrue(@() size(TestMatrix,1) == size(TestMatrix,2), <span class=\"keyword\">...<\/span>\r\n                <span class=\"string\">'Eig only works on square matrix'<\/span>);\r\n            <span class=\"keyword\">while<\/span> testCase.keepMeasuring\r\n                matrix_eig(TestMatrix);\r\n            <span class=\"keyword\">end<\/span>\r\n            \r\n        <span class=\"keyword\">end<\/span>\r\n    <span class=\"keyword\">end<\/span>\r\n<span class=\"keyword\">end<\/span>\r\n\r\n<\/pre><pre class=\"codeinput\">results = runperf(<span 
class=\"string\">'tMatrixLibrary_final'<\/span>);\r\ncheckResults(results)\r\n<\/pre><pre class=\"codeoutput\">Running tMatrixLibrary_final\r\n.......... .......... .......... .......... ..........\r\n.......... .......... ..\r\nDone tMatrixLibrary_final\r\n__________\r\n\r\n\r\n\r\n:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\r\nThanks Dory, you're the best! All our measurements are good.\r\n:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\r\n<\/pre><p>Ah! Isn't that lovely? I didn't need to hard-code any static <b><tt>for<\/tt><\/b> loop counts; we were able to accurately measure significantly faster code, and the measured values returned are the actual times taken rather than values inflated by an arbitrary scaling factor. Also, since the code that needs more iterations is the fast code, I didn't even notice any difference in total test time, whereas the slower tests didn't need any extra iterations at all for an accurate measurement. So nice. 
Look at the sample summary and you can see the real time taken:<\/p><pre class=\"codeinput\">sampleSummary(results)\r\n<\/pre><pre class=\"codeoutput\">\r\nans =\r\n\r\n  9&times;7 table\r\n\r\n                           Name                            SampleSize       Mean       StandardDeviation       Min          Median         Max    \r\n    ___________________________________________________    __________    __________    _________________    __________    __________    __________\r\n\r\n    tMatrixLibrary_final\/testSum(TestMatrix=smallSize)         4         1.4886e-05        1.242e-07        1.4807e-05    1.4833e-05    1.5071e-05\r\n    tMatrixLibrary_final\/testSum(TestMatrix=midSize)           4         0.00060309       1.4084e-05        0.00058393    0.00060658    0.00061526\r\n    tMatrixLibrary_final\/testSum(TestMatrix=largeSize)         4           0.003275       6.2535e-05         0.0031897     0.0032854     0.0033395\r\n    tMatrixLibrary_final\/testMean(TestMatrix=smallSize)        4         1.5534e-05       3.4959e-07        1.5221e-05    1.5535e-05    1.5847e-05\r\n    tMatrixLibrary_final\/testMean(TestMatrix=midSize)          4         0.00059933       9.2749e-06        0.00058845    0.00060012    0.00060865\r\n    tMatrixLibrary_final\/testMean(TestMatrix=largeSize)        4          0.0032668       4.8834e-05         0.0031958     0.0032859     0.0032997\r\n    tMatrixLibrary_final\/testEig(TestMatrix=smallSize)         4           0.003086       6.1447e-05           0.00304     0.0030639     0.0031762\r\n    tMatrixLibrary_final\/testEig(TestMatrix=midSize)           4            0.16333         0.001356           0.16232       0.16287       0.16527\r\n    tMatrixLibrary_final\/testEig(TestMatrix=largeSize)         4            0.39709       0.00096197           0.39613       0.39704       0.39813\r\n\r\n<\/pre><p>It is worth noting that this is not a silver bullet. 
There still is some framework overhead in the keepMeasuring method that prevents us from measuring some really fast code. Think about it like measuring a group of feathers, but if each feather comes individually wrapped in a small packet, there comes a point where we are measuring the overhead of the packet rather than the actual feather. So, while there is still some code that will be too fast to measure (don't expect a valid measurement of 1+1 please), using the <b><tt>keepMeasuring<\/tt><\/b> method as shown opened up 2 orders of magnitude in allowable precision in our experiments.<\/p><p>Have fun, and like Dory, just keep measuring y'all!<\/p><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\">Published with MATLAB&reg; R2018b<br><\/p><\/div>","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"http:\/\/blogs.mathworks.com\/developer\/files\/y2018Dory.jpg\" onError=\"this.style.display ='none';\" \/><\/div><p>Remember Dory? Image Credit: Silvio Tanaka [ CC BY 2.0 ], via Wikimedia CommonsThe model of persistence in the face of difficult circumstances, the hilarious and free spirited fish of the vast ocean... <a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/developer\/2018\/10\/05\/keepmeasuring\/\">read more >><\/a><\/p>","protected":false},"author":90,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[13,7],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/posts\/1901"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/users\/90"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/comments?post=1901"}],"version-history":[{"count":9,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/posts\/1901\/revisions"}],"predecessor-version":[{"id":1919,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/posts\/1901\/revisions\/1919"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/media?p
arent=1901"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/categories?post=1901"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/tags?post=1901"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}