{"id":1052,"date":"2014-05-15T10:42:53","date_gmt":"2014-05-15T15:42:53","guid":{"rendered":"https:\/\/blogs.mathworks.com\/steve\/?p=1052"},"modified":"2014-05-15T10:42:53","modified_gmt":"2014-05-15T15:42:53","slug":"quantization-issues-when-testing-image-processing-code-2","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/steve\/2014\/05\/15\/quantization-issues-when-testing-image-processing-code-2\/","title":{"rendered":"Quantization issues when testing image processing code"},"content":{"rendered":"<div class=\"content\"><p>Today I have for you an insider's view of a subtle aspect of testing image processing software (such as the Image Processing Toolbox!).<\/p><p>I've written several times in this blog about testing software. Years ago I wrote about the testing framework I put on the File Exchange, and more recently (<a href=\"https:\/\/blogs.mathworks.com\/steve\/2013\/03\/12\/matlab-software-testing-tools-old-and-new-r2013a\/\">12-Mar-2013<\/a>) I've promoted the new testing framework added to MATLAB a few releases ago.<\/p><p>In an <a href=\"http:\/\/ieeexplore.ieee.org\/xpl\/articleDetails.jsp?arnumber=5235131&amp;refinements%3D4281757781%26sortType%3Dasc_p_Sequence%26filter%3DAND%28p_IS_Number%3A5232784%29\">article<\/a> I wrote for the IEEE\/AIP magazine <i>Computing in Science and Engineering<\/i> a few years ago, I described the difficulties of comparing floating-point values when writing tests. Because floating-point arithmetic operations are subject to round-off error and other numerical issues, you generally have to use a tolerance when checking an output value for correctness. 
Sometimes it might be appropriate to use an <i>absolute tolerance<\/i>:<\/p><p>$$|a - b| \\leq T$$<\/p><p>And sometimes it might be more appropriate to use a <i>relative tolerance<\/i>:<\/p><p>$$ \\frac{|a - b|}{\\max(|a|,|b|)} \\leq T $$<\/p><p>But for testing image processing code, this isn't the whole story, as I was reminded a year ago by a question from <a href=\"http:\/\/third-bit.com\">Greg Wilson<\/a>. Greg, the force behind <a href=\"http:\/\/software-carpentry.org\">Software Carpentry<\/a>, was asked by a scientist in a class about how to test image processing code, such as a \"simple\" edge detector. Greg and I had an email conversation about this, which Greg then <a href=\"http:\/\/software-carpentry.org\/blog\/2013\/03\/testing-image-processing.html\">summarized on the Software Carpentry blog<\/a>.<\/p><p>This was my initial response:<\/p><p><i>Whenever there is a floating-point computation that is then quantized to produce an output image, comparing actual versus expected can be tricky. I had to learn to deal with this early in my MathWorks software developer days. Two common scenarios in which this occurs:<\/i><\/p><div><ul><li><i>Rounding a floating-point computation to produce an integer-valued output image<\/i><\/li><li><i>Thresholding a floating-point computation to produce a binary image (such as many edge detection methods)<\/i><\/li><\/ul><\/div><p><i>The problem is that floating-point round-off differences can turn a floating-point value that should be a 0.5 or exactly equal to the threshold into a value that's a tiny bit below. For testing, this means that the actual and expected images are exactly the same...except for a small number of pixels that are off by one. In a situation like this, the actual image can change because you changed the compiler's optimization flags, used a different compiler, used a different processor, used a multithreaded algorithm with dynamic allocation of work to the different threads, etc. 
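As a concrete illustration of the effect just described — sketched here in Python rather than MATLAB, since that is easier to show inline — two algebraically equivalent computations can land on opposite sides of a threshold or a rounding boundary:

```python
import math

# A value that is mathematically 1.0, but accumulates round-off error:
a = sum([0.1] * 10)          # 0.9999999999999999 in IEEE double precision
b = 1.0

# Thresholding: the two "equal" values fall on opposite sides.
print(a >= 1.0)              # False
print(b >= 1.0)              # True

# Quantizing to an 8-bit intensity: the results are off by one.
print(math.floor(a * 255))   # 254
print(math.floor(b * 255))   # 255
```

The same pixel value, computed with a different summation order, compiler, or thread partitioning, can therefore come out one gray level apart.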
So to compare actual against expected, I wrote a test assertion function that passes if the actual is the same as the expected except for a small percentage of pixels that are allowed to be different by 1.<\/i><\/p><p>Greg immediately asked the obvious follow-up question: What is a \"small percentage\"?<\/p><p><i>There isn't a general rule. With filtering, for example, some choices of filter coefficients could lead to a lot of \"int + 0.5\" values; other coefficients might result in few or none. I start with either an exact equality test or a floating-point tolerance test, depending on the computation. If there are some off-by-one values, I spot-check them to verify whether they are caused by a floating-point round-off plus quantization issue. If it all looks good, then I set the tolerance based on what's happening in that particular test case and move on. If you tied me down and forced me to pick a typical number, I'd say 1%.<\/i><\/p><p>P.S. Greg gets some credit for indirectly influencing the testing tools in MATLAB. He's the one who prodded me to turn my toy testing project into something slightly more than a toy and make it available to MATLAB users. The existence and popularity of that File Exchange contribution then had a positive influence on the decision of the MathWorks Test Tools Team to create a full-blown testing framework for MATLAB. 
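An assertion of the kind described above might be sketched as follows — in Python/NumPy rather than MATLAB, with a hypothetical function name and the 1% figure as the default tolerance; this is an illustration, not the actual MathWorks test utility:

```python
import numpy as np

def assert_images_almost_equal(actual, expected, max_diff_fraction=0.01):
    """Pass if actual equals expected except for a small fraction of
    pixels that are off by exactly 1 (quantization round-off)."""
    if actual.shape != expected.shape:
        raise AssertionError("image sizes differ")
    # Compare in a signed integer type so the subtraction cannot wrap.
    diff = np.abs(actual.astype(np.int64) - expected.astype(np.int64))
    if np.any(diff > 1):
        raise AssertionError("some pixels differ by more than 1")
    fraction = np.count_nonzero(diff) / diff.size
    if fraction > max_diff_fraction:
        raise AssertionError(
            f"{fraction:.2%} of pixels are off by one; "
            f"allowed {max_diff_fraction:.2%}")

# A single off-by-one pixel in a 100x100 image (0.01%) passes:
expected = np.zeros((100, 100), dtype=np.uint8)
actual = expected.copy()
actual[0, 0] = 1
assert_images_almost_equal(actual, expected)
```

A pixel that differs by 2, or off-by-one pixels exceeding the allowed fraction, would still fail — only the benign quantization jitter is tolerated.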
Thanks, Greg!<\/p><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\">Published with MATLAB&reg; R2014a<br><\/p><\/div>","protected":false},"excerpt":{"rendered":"<p>Today I have for you an insider's view of a subtle aspect of testing image processing software (such as the Image Processing Toolbox!).I've written several times in this blog about testing software.... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/steve\/2014\/05\/15\/quantization-issues-when-testing-image-processing-code-2\/\">read more >><\/a><\/p>","protected":false},"author":42,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/posts\/1052"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/users\/42"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/comments?post=1052"}],"version-history":[{"count":3,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/posts\/1052\/revisions"}],"predecessor-version":[{"id":1055,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/posts\/1052\/revisions\/1055"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/media?parent=1052"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/categories?post=1052"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/tags?post=1052"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}