{"id":731,"date":"2012-12-18T13:53:57","date_gmt":"2012-12-18T18:53:57","guid":{"rendered":"https:\/\/blogs.mathworks.com\/steve\/?p=731"},"modified":"2019-11-01T09:08:22","modified_gmt":"2019-11-01T13:08:22","slug":"counting-objects-without-bias","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/steve\/2012\/12\/18\/counting-objects-without-bias\/","title":{"rendered":"Counting objects without bias"},"content":{"rendered":"<div class=\"content\"><p>In today's post I want to look at two methods for counting objects per unit area. For example, how can we estimate the number of objects per unit area in this image:<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/2012\/rice-bw-2.png\" alt=\"\"> <\/p><p>I could simply count the number of connected components and divide by the total image area. (For this post I'll assume that each pixel has an area of 1.0.)<\/p><pre class=\"codeinput\">url = <span class=\"string\">'https:\/\/blogs.mathworks.com\/images\/steve\/2012\/rice-bw-2.png'<\/span>;\r\nbw = imread(url);\r\ncc = bwconncomp(bw)\r\n<\/pre><pre class=\"codeoutput\">\r\ncc = \r\n\r\n    Connectivity: 8\r\n       ImageSize: [256 256]\r\n      NumObjects: 95\r\n    PixelIdxList: {1x95 cell}\r\n\r\n<\/pre><pre class=\"codeinput\">objects_per_unit_area = cc.NumObjects \/ (size(bw,1) * size(bw,2))\r\n<\/pre><pre class=\"codeoutput\">\r\nobjects_per_unit_area =\r\n\r\n    0.0014\r\n\r\n<\/pre><p>This method, however, is wrong according to <i>The Image Processing Handbook<\/i>, John C. Russ, 5th ed., CRC Press, 2007, pp. 565-567. \"When features intersect the edge of the field of view, it is not proper to count all features that can be seen.\" Russ gives two better methods.<\/p><p>The first method ignores objects touching a pair of adjacent edges, such as the bottom and right edges. 
Russ points out that this is \"equivalent to counting each feature by its lower right corner,\" and he draws an analogy to counting people in a room by counting noses. (I'll have to ask the developers on the <a href=\"https:\/\/www.mathworks.com\/products\/computer-vision\/\">Computer Vision System Toolbox<\/a> team for help with nose detection.)<\/p><p>The Image Processing Toolbox has a function, <tt>imclearborder<\/tt>, for removing <b>all<\/b> objects touching the image borders. With a little creativity, we can use that function to identify all objects that do not touch the bottom and right borders.<\/p><p>We start by padding the image with 0s on the left and top.<\/p><pre class=\"codeinput\">bw2 = padarray(bw,[1 1],0,<span class=\"string\">'pre'<\/span>);\r\n<\/pre><p>Because of the padding, there are no objects touching the left and top borders of <tt>bw2<\/tt>. Calling <tt>imclearborder<\/tt> on this image, then, will remove all objects touching the bottom and right borders in the original image, but it will leave objects touching the top and left borders alone.<\/p><pre class=\"codeinput\">bw3 = imclearborder(bw2);\r\n<\/pre><p>Now let's remove the padded column on the left and padded row on the top.<\/p><pre class=\"codeinput\">bw4 = bw3(2:end,2:end);\r\nimshow(bw4)\r\n<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/2012\/counting_features_01.png\" alt=\"\"> <p>What's the object count per unit area now?<\/p><pre class=\"codeinput\">cc = bwconncomp(bw4);\r\nobjects_per_unit_area_2 = cc.NumObjects \/ (size(bw4,1) * size(bw4,2))\r\n<\/pre><pre class=\"codeoutput\">\r\nobjects_per_unit_area_2 =\r\n\r\n    0.0013\r\n\r\n<\/pre><p>The first method, which counted all objects in the original image, was off by about 10.5%.<\/p><pre class=\"codeinput\">abs(objects_per_unit_area_2 - objects_per_unit_area) \/ <span class=\"keyword\">...<\/span>\r\n    
objects_per_unit_area\r\n<\/pre><pre class=\"codeoutput\">\r\nans =\r\n\r\n    0.1053\r\n\r\n<\/pre><p>Another method described by Russ is to do a weighted count of all objects that do not touch any edge. The weighted count compensates for the relative likelihood that an object with a certain horizontal and vertical extent would touch the edge of a certain field of view. The equation for the weighted count of each object is<\/p><p>$$ C = \\frac{W_x W_y}{(W_x - F_x) (W_y - F_y)} $$<\/p><p>where $W_x$ and $W_y$ are the dimensions of the image in the x and y directions, and $F_x$ and $F_y$ are the maximum projected dimensions of the object in those directions.<\/p><p>We make use of the Image Processing Toolbox function <tt>regionprops<\/tt> to compute the bounding box of each object, from which we can determine $F_x$ and $F_y$.<\/p><p>Start by clearing <b>all<\/b> objects that touch any image border.<\/p><pre class=\"codeinput\">bw5 = imclearborder(bw);\r\n<\/pre><p>Next, compute the bounding boxes of all the remaining objects.<\/p><pre class=\"codeinput\">s = regionprops(bw5,<span class=\"string\">'BoundingBox'<\/span>);\r\n<\/pre><p>For example, here's the bounding box of the fifth detected object:<\/p><pre class=\"codeinput\">s(5)\r\n<\/pre><pre class=\"codeoutput\">\r\nans = \r\n\r\n    BoundingBox: [17.5000 166.5000 20 24]\r\n\r\n<\/pre><p>I'm going to use an advanced bit of MATLAB syntax here to convert the structure array <tt>s<\/tt> into an ordinary P-by-4 matrix containing all the bounding boxes together. 
The second line of code displays the bounding boxes of the first seven objects.<\/p><pre class=\"codeinput\">bboxes = cat(1,s.BoundingBox);\r\nbboxes(1:7,:)\r\n<\/pre><pre class=\"codeoutput\">\r\nans =\r\n\r\n    9.5000   86.5000   25.0000   17.0000\r\n   10.5000   18.5000   26.0000   16.0000\r\n   17.5000   37.5000   19.0000   23.0000\r\n   17.5000  108.5000   10.0000   29.0000\r\n   17.5000  166.5000   20.0000   24.0000\r\n   23.5000   59.5000    9.0000   31.0000\r\n   28.5000  212.5000   17.0000   28.0000\r\n\r\n<\/pre><p>$F_x$ is the third column, and $F_y$ is the fourth column.<\/p><pre class=\"codeinput\">Fx = bboxes(:,3);\r\nFy = bboxes(:,4);\r\n<\/pre><p>$W_x$ and $W_y$ are computed from the size of the image.<\/p><pre class=\"codeinput\">[Wy,Wx] = size(bw);\r\n<\/pre><p>Now compute a vector of weighted counts, one for each object that doesn't touch a border.<\/p><pre class=\"codeinput\">C = (Wx * Wy) .\/ ((Wx - Fx) .* (Wy - Fy));\r\nC(1:7)\r\n<\/pre><pre class=\"codeoutput\">\r\nans =\r\n\r\n    1.1871\r\n    1.1872\r\n    1.1868\r\n    1.1736\r\n    1.1970\r\n    1.1792\r\n    1.2027\r\n\r\n<\/pre><p>The last step is to sum the counts and divide by the total image area.<\/p><pre class=\"codeinput\">objects_per_unit_area_3 = sum(C) \/ (Wx*Wy)\r\n<\/pre><pre class=\"codeoutput\">\r\nobjects_per_unit_area_3 =\r\n\r\n    0.0012\r\n\r\n<\/pre><p>Russ comments that this last method may work better for images containing nonconvex objects that may touch image boundaries more than once.<\/p><p>Have you needed to do this kind of calculation in your own work? 
Leave a comment and tell us about your application.<\/p><script language=\"JavaScript\"> <!-- \r\n    function grabCode_d7c69a5dd26b4993b3f6ebf1622b48a6() {\r\n        \/\/ Remember the title so we can use it in the new page\r\n        title = document.title;\r\n\r\n        \/\/ Break up these strings so that their presence\r\n        \/\/ in the Javascript doesn't mess up the search for\r\n        \/\/ the MATLAB code.\r\n        t1='d7c69a5dd26b4993b3f6ebf1622b48a6 ' + '##### ' + 'SOURCE BEGIN' + ' #####';\r\n        t2='##### ' + 'SOURCE END' + ' #####' + ' d7c69a5dd26b4993b3f6ebf1622b48a6';\r\n    \r\n        b=document.getElementsByTagName('body')[0];\r\n        i1=b.innerHTML.indexOf(t1)+t1.length;\r\n        i2=b.innerHTML.indexOf(t2);\r\n \r\n        code_string = b.innerHTML.substring(i1, i2);\r\n        code_string = code_string.replace(\/REPLACE_WITH_DASH_DASH\/g,'--');\r\n\r\n        \/\/ Use \/x3C\/g instead of the less-than character to avoid errors \r\n        \/\/ in the XML parser.\r\n        \/\/ Use '\\x26#60;' instead of '<' so that the XML parser\r\n        \/\/ doesn't go ahead and substitute the less-than character. 
\r\n        code_string = code_string.replace(\/\\x3C\/g, '\\x26#60;');\r\n\r\n        copyright = 'Copyright 2012 The MathWorks, Inc.';\r\n\r\n        w = window.open();\r\n        d = w.document;\r\n        d.write('<pre>\\n');\r\n        d.write(code_string);\r\n\r\n        \/\/ Add copyright line at the bottom if specified.\r\n        if (copyright.length > 0) {\r\n            d.writeln('');\r\n            d.writeln('%%');\r\n            if (copyright.length > 0) {\r\n                d.writeln('% _' + copyright + '_');\r\n            }\r\n        }\r\n\r\n        d.write('<\/pre>\\n');\r\n\r\n        d.title = title + ' (MATLAB code)';\r\n        d.close();\r\n    }   \r\n     --> <\/script><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\"><br><a href=\"javascript:grabCode_d7c69a5dd26b4993b3f6ebf1622b48a6()\"><span style=\"font-size: x-small;        font-style: italic;\">Get \r\n      the MATLAB code <noscript>(requires JavaScript)<\/noscript><\/span><\/a><br><br>\r\n      Published with MATLAB&reg; R2012b<br><\/p><\/div><!--\r\nd7c69a5dd26b4993b3f6ebf1622b48a6 ##### SOURCE BEGIN #####\r\n%%\r\n% In today's post I want to look at two methods for counting objects per\r\n% unit area. For example, how can we estimate the number of objects per\r\n% unit area in this image:\r\n%\r\n% <<https:\/\/blogs.mathworks.com\/images\/steve\/2012\/rice-bw-2.png>>\r\n%\r\n% I could simply count the number of connected components and divide by the\r\n% total image area. (For this post I'll assume that each pixel has an area\r\n% of 1.0.)\r\n\r\nurl = 'https:\/\/blogs.mathworks.com\/images\/steve\/2012\/rice-bw-2.png';\r\nbw = imread(url);\r\ncc = bwconncomp(bw)\r\n\r\n%%\r\n\r\nobjects_per_unit_area = cc.NumObjects \/ (size(bw,1) * size(bw,2))\r\n\r\n%%\r\n% This method, however, is wrong according to _The Image Processing\r\n% Handbook_, John C. 
Russ, 5th ed., CRC Press, 2007, pp. 565-567. \"When\r\n% features intersect the edge of the field of view, it is not proper to\r\n% count all features that can be seen.\" Russ gives two better methods.\r\n%\r\n% The first method ignores objects touching a pair of adjacent edges, such\r\n% as the bottom and right edges. Russ points out that this is \"equivalent to\r\n% counting each feature by its lower right corner,\" and he draws an analogy\r\n% to counting people in a room by counting noses. (I'll have to ask the\r\n% developers on the <https:\/\/www.mathworks.com\/products\/computer-vision\/\r\n% Computer Vision System Toolbox> team for help with nose detection.)\r\n%\r\n% The Image Processing Toolbox has a function, |imclearborder|, for\r\n% removing *all* objects touching the image borders. With a little creativity,\r\n% we can use that function to identify all objects that do not touch the\r\n% bottom and right borders. \r\n%\r\n% We start by padding the image with 0s on the left and top.\r\n\r\nbw2 = padarray(bw,[1 1],0,'pre');\r\n\r\n%%\r\n% Because of the padding, there are no objects touching the left and top\r\n% borders of |bw2|. 
Calling |imclearborder| on this image, then, will\r\n% remove all objects touching the bottom and right borders in the original\r\n% image, but it will leave objects touching the top and left borders alone.\r\n\r\nbw3 = imclearborder(bw2);\r\n\r\n%%\r\n% Now let's remove the padded column on the left and padded row on the top.\r\n\r\nbw4 = bw3(2:end,2:end);\r\nimshow(bw4)\r\n\r\n%%\r\n% What's the object count per unit area now?\r\n\r\ncc = bwconncomp(bw4);\r\nobjects_per_unit_area_2 = cc.NumObjects \/ (size(bw4,1) * size(bw4,2))\r\n\r\n%%\r\n% The first method, which counted all objects in the original image, was\r\n% off by about 10.5%.\r\n\r\nabs(objects_per_unit_area_2 - objects_per_unit_area) \/ ...\r\n    objects_per_unit_area\r\n\r\n%%\r\n% Another method described by Russ is to do a weighted count of all objects\r\n% that do not touch any edge. The weighted count compensates for the\r\n% relative likelihood that an object with a certain horizontal and vertical\r\n% extent would touch the edge of a certain field of view. 
The equation for\r\n% the weighted count of each object is\r\n%\r\n% $$ C = \\frac{W_x W_y}{(W_x - F_x) (W_y - F_y)} $$\r\n%\r\n% where $W_x$ and $W_y$ are the dimensions of the image in the x and y\r\n% directions, and $F_x$ and $F_y$ are the maximum projected dimensions of\r\n% the object in those directions.\r\n%\r\n% We make use of the Image Processing Toolbox function |regionprops| to\r\n% compute the bounding box of each object, from which we can determine\r\n% $F_x$ and $F_y$.\r\n%\r\n% Start by clearing *all* objects that touch any image border.\r\n\r\nbw5 = imclearborder(bw);\r\n\r\n%%\r\n% Next, compute the bounding boxes of all the remaining objects.\r\n\r\ns = regionprops(bw5,'BoundingBox');\r\n\r\n%%\r\n% For example, here's the bounding box of the fifth detected object:\r\n\r\ns(5)\r\n\r\n%%\r\n% I'm going to use an advanced bit of MATLAB syntax here to convert the\r\n% structure array |s| into an ordinary P-by-4 matrix containing all the\r\n% bounding boxes together. The second line of code displays the bounding\r\n% boxes of the first seven objects.\r\n\r\nbboxes = cat(1,s.BoundingBox);\r\nbboxes(1:7,:)\r\n\r\n%%\r\n% $F_x$ is the third column, and $F_y$ is the fourth column.\r\n\r\nFx = bboxes(:,3);\r\nFy = bboxes(:,4);\r\n\r\n%%\r\n% $W_x$ and $W_y$ are computed from the size of the image.\r\n\r\n[Wy,Wx] = size(bw);\r\n\r\n%%\r\n% Now compute a vector of weighted counts, one for each object that doesn't\r\n% touch a border.\r\n\r\nC = (Wx * Wy) .\/ ((Wx - Fx) .* (Wy - Fy));\r\nC(1:7)\r\n\r\n%%\r\n% The last step is to sum the counts and divide by the total image area.\r\n\r\nobjects_per_unit_area_3 = sum(C) \/ (Wx*Wy)\r\n\r\n%%\r\n% Russ comments that this last method may work better for images containing\r\n% nonconvex objects that may touch image boundaries more than once.\r\n%\r\n% Have you needed to do this kind of calculation in your own work? 
Leave a\r\n% comment and tell us about your application.\r\n##### SOURCE END ##### d7c69a5dd26b4993b3f6ebf1622b48a6\r\n-->","protected":false},"excerpt":{"rendered":"<p>In today's post I want to look at two methods for counting objects per unit area. For example, how can we estimate the number of objects per unit area in this image: I could simply count the number... <a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/steve\/2012\/12\/18\/counting-objects-without-bias\/\">read more >><\/a><\/p>","protected":false},"author":42,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[561,46,404,76,36,344,168,190],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/posts\/731"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/users\/42"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/comments?post=731"}],"version-history":[{"count":12,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/posts\/731\/revisions"}],"predecessor-version":[{"id":3807,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/posts\/731\/revisions\/3807"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/media?parent=731"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/categories?post=731"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/tags?post=731"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}