{"id":761,"date":"2016-10-24T18:30:04","date_gmt":"2016-10-24T18:30:04","guid":{"rendered":"https:\/\/blogs.mathworks.com\/developer\/?p=761"},"modified":"2016-10-28T17:17:42","modified_gmt":"2016-10-28T17:17:42","slug":"passing-diagnostics","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/developer\/2016\/10\/24\/passing-diagnostics\/","title":{"rendered":"Passing the Test"},"content":{"rendered":"<div class=\"content\"><p><a href=\"https:\/\/blogs.mathworks.com\/developer\/2016\/09\/29\/tap-with-yaml\/\">Last time<\/a> we saw that the new version 13 YAML output can provide a richer CI system experience because the diagnostics are in a standard structured format that can be read by a machine (the CI system) and presented to a human (you) much more cleanly.<\/p><p>One thing I didn't mention is that the TAPPlugin (along with several other <a title=\"https:\/\/www.mathworks.com\/help\/matlab\/ref\/matlab.unittest.plugins.html (link no longer works)\">plugins<\/a>) now also supports passing diagnostics! 
It's as easy as an additional Name\/Value pair when creating the plugin:<\/p><pre class=\"language-matlab\">\r\nimport <span class=\"string\">matlab.unittest.TestRunner<\/span>;\r\nimport <span class=\"string\">matlab.unittest.plugins.TAPPlugin<\/span>;\r\nimport <span class=\"string\">matlab.unittest.plugins.ToFile<\/span>;\r\n\r\n<span class=\"keyword\">try<\/span>\r\n    suite = testsuite(<span class=\"string\">'unittest'<\/span>);\r\n    runner = TestRunner.withTextOutput(<span class=\"string\">'Verbosity'<\/span>,3);\r\n    <span class=\"comment\">% Add the TAP plugin<\/span>\r\n    tapFile = fullfile(getenv(<span class=\"string\">'WORKSPACE'<\/span>), <span class=\"string\">'testResults.tap'<\/span>);\r\n    \r\n    runner.addPlugin(TAPPlugin.producingVersion13(ToFile(tapFile), <span class=\"keyword\">...<\/span>\r\n        <span class=\"string\">'IncludingPassingDiagnostics'<\/span>,true));\r\n    results = runner.run(suite)\r\n<span class=\"keyword\">catch<\/span> e\r\n    disp(getReport(e,<span class=\"string\">'extended'<\/span>));\r\n    exit(1);\r\n<span class=\"keyword\">end<\/span>\r\nexit;\r\n\r\n<\/pre><p>With that simple change, the output includes not only the diagnostics from failing qualifications but also the diagnostics from passing qualifications. All of this information can now be stored in your Jenkins logs. Take a look:<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/developer\/files\/y2016_PassingDiag_Overview.png\" alt=\"\"> <\/p><p>A couple of things to note right off the bat. First, we can see that the tests I've zoomed in on here are passing. How do we know that without digging into the text? 
The colors tell us: look for the green!<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/developer\/files\/y2016_PassingDiag_Green.png\" alt=\"\"> <\/p><p>Second, we can actually see that some of these tests passed without any qualifications at all. Since we see no passing diagnostics, we can infer that there was no call to <b><tt>verifyEqual<\/tt><\/b> or <b><tt>assertTrue<\/tt><\/b> or anything else. Is this expected? Not sure. There are certainly times when a test does not need to perform any verification step, and executing the code without error is all the verification needed, but if I were a bettin' man I'd bet that most of the time there is an appropriate verification that can and should be done. This view makes such things clearer.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/developer\/files\/y2016_PassingDiag_NoQualifications.png\" alt=\"\"> <\/p><p>Why might we want to include passing diagnostics? Usually we are most concerned with the test failures, and we don't really need any insight into passing qualifications. While this is definitely true, including the passing diagnostics may prove desirable in a number of ways:<\/p><p><b>Diagnostics as proof<\/b> - Showing the passing diagnostics gives more information to the user, including even something as simple as the test name. When called upon to prove the behavior of your software at any point in time (or any revision in your SCM system), keeping a history of your test runs and <b><i>how<\/i><\/b> they passed (as opposed to merely <b><i>that<\/i><\/b> they passed) may prove valuable.<\/p><p><b>Diagnostics to find root causes<\/b> - Unfortunately, test suites aren't perfect. They are very good defenses against the introduction of bugs, but sometimes bugs can slip past their fortifications. 
However, when that happens you can look back into previously passing results and possibly gain insight into why the test suite didn't catch the bug. For example, here the test passes the <b><tt>verifyEmpty<\/tt><\/b> call, but if the correct behavior called for an empty double of size <tt>1x0<\/tt>, these diagnostics would show that it was actually a <tt>0x0<\/tt> cell array, which was unexpected and would explain why the tests failed to catch the introduction of the bug.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/developer\/files\/y2016_PassingDiag_verifyEmptyWrong.png\" alt=\"\"> <\/p><p><b>Diagnostics as specification<\/b> - Finally, including the passing diagnostics can provide self-documenting test intent. This is particularly true if you leverage test diagnostics as the last input argument to your verification. The test writer can provide more context on the expected behavior of the test in the language of the domain, which can therefore act as a specification when combined with such reports. This is an important and oft-misunderstood point. In order to support the display of passing diagnostics, when writing tests we should phrase the diagnostics we supply in a way that does not depend on whether the test passed or failed. They should simply state the expected behavior. You can see in this example that:<\/p><pre>The vertices cell array should be empty by default.<\/pre><p>is much better than:<\/p><pre>The vertices cell array was not empty by default.<\/pre><p>A common mistake is to assume these descriptions only ever apply to failures when in fact they apply to both passing and failing conditions.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/developer\/files\/y2016_PassingDiag_TestDiags.png\" alt=\"\"> <\/p><p>What are the downsides of leveraging passing diagnostics? Really it boils down to performance and verbosity. 
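<\/p><p>If you want the richer record without paying for it on every run, one option is to decide per CI job. Here is a minimal sketch (the <b><tt>NIGHTLY<\/tt><\/b> environment variable is purely hypothetical, standing in for whatever flag your CI configuration sets):<\/p><pre class=\"language-matlab\">\r\nimport matlab.unittest.plugins.TAPPlugin;\r\nimport matlab.unittest.plugins.ToFile;\r\n\r\ntapFile = fullfile(getenv('WORKSPACE'), 'testResults.tap');\r\nif ~isempty(getenv('NIGHTLY'))\r\n    % Nightly jobs keep the full record, passing diagnostics included\r\n    plugin = TAPPlugin.producingVersion13(ToFile(tapFile), ...\r\n        'IncludingPassingDiagnostics', true);\r\nelse\r\n    % Per-commit jobs stay lean with failure diagnostics only\r\n    plugin = TAPPlugin.producingVersion13(ToFile(tapFile));\r\nend\r\n<\/pre><p>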
Not everyone will want to wade through all the passing diagnostics to analyze the failures, and not everyone will want to incur the extra time\/space performance overhead that will come with including them. What about you? Do you find you need to analyze in more depth what happens in passing tests?<\/p><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\">Published with MATLAB&reg; R2016b<br><\/p><\/div>","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img src=\"https:\/\/blogs.mathworks.com\/developer\/files\/y2016_PassingDiag_Overview.png\" class=\"img-responsive attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"\" decoding=\"async\" loading=\"lazy\" \/><\/div><p>Last time we saw that the new version 13 YAML output can provide a richer CI system experience because the diagnostics are in a standard structured format that can be read by a machine (the CI... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/developer\/2016\/10\/24\/passing-diagnostics\/\">read more >><\/a><\/p>","protected":false},"author":90,"featured_media":766,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[4],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/posts\/761"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/users\/90"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/comments?post=761"}],"version-history":[{"count":12,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/posts\/761\/revisions"}],"predecessor-version":[{"id":796,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/posts\/761\/revisions\/796"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/media\/766"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/media?parent=761"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/categories?post=761"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/developer\/wp-json\/wp\/v2\/tags?post=761"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}