In a comment on my "Connected component labeling - Part 6" post, Martin Isenburg asked "what is the rationale behind the design decision to have bwlabeln work on individual pixels rather than on runs of pixels?"
Excellent question. I'm happy to answer "why did they do it that way" algorithm questions—when I know the answer, that is!
To summarize the issue: bwlabel performs two-dimensional connected component labeling by analyzing the adjacency relationships of runs of pixels. bwlabeln, on the other hand, analyzes the adjacency relationships of individual pixels.
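To make the distinction concrete, here is a minimal Python sketch (purely illustrative, not the toolbox's actual implementation) of the run-extraction step that a run-based labeler like bwlabel starts from: each row of the binary image is decomposed into maximal runs of foreground pixels, and the labeler then reasons about adjacency between runs rather than between individual pixels.

```python
def runs_in_row(row):
    """Return (start, stop) index pairs for the maximal runs of 1s in a binary row."""
    runs = []
    start = None
    for i, v in enumerate(row):
        if v and start is None:
            start = i                      # a run begins here
        elif not v and start is not None:
            runs.append((start, i - 1))    # the run ended at the previous pixel
            start = None
    if start is not None:
        runs.append((start, len(row) - 1)) # run extends to the end of the row
    return runs

# One row of a binary image decomposes into two runs:
print(runs_in_row([0, 1, 1, 0, 0, 1, 1, 1]))  # → [(1, 2), (5, 7)]
```

In 2-D this compression pays off because two runs in adjacent rows either overlap or they don't, which is a cheap interval test.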
Background: Late in the development cycle of Image Processing Toolbox version 3, we decided, based on a rising tide of user feedback, to add multidimensional support to the toolbox. This was a lot of work, and we only had a few months to do it. Connected component labeling was on the list: 2-D labeling already existed in the toolbox, but we needed to add support for labeling in arbitrary dimensions.
Several toolbox functions, including connected-component labeling, depended on some definition of pixel connectivity. The existing 2-D functions accepted 4 or 8 as connectivity definitions. For 3-D, we could add 6, 18, and 26 to the list, but what should we do for arbitrary dimensions? We decided to support a very general way of defining connectivity. Specifically, to define the desired multidimensional connectivity, you could provide a connectivity array, called CONN in the documentation. CONN is a 3-by-3-by- ... -by-3 logical array, symmetric about its center element. This gives the user the ability to define whatever connectivity they desire.
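As an illustration of how a CONN-style array generalizes the familiar connectivities, here is a Python sketch (using a dict of offsets instead of a MATLAB logical array; the function name and representation are my own, not the toolbox's) that builds the "minimal" connectivity in N dimensions, where only face-adjacent neighbors are connected. In 2-D this reproduces 4-connectivity and in 3-D it reproduces 6-connectivity.

```python
from itertools import product

def minimal_conn(ndims):
    """Build a 3x3x...x3 connectivity pattern as {offset tuple: bool},
    marking the center and its face-adjacent neighbors — the N-D analogue
    of 4-connectivity (2-D) and 6-connectivity (3-D)."""
    conn = {}
    for offset in product((-1, 0, 1), repeat=ndims):
        # Face-adjacent means at most one coordinate differs from the center.
        conn[offset] = sum(abs(d) for d in offset) <= 1
    return conn

conn3 = minimal_conn(3)
# 6 face neighbors plus the center element itself:
print(sum(conn3.values()))  # → 7
```

The maximal connectivity (8 in 2-D, 26 in 3-D) would simply set every element of the 3-by-3-by-...-by-3 array true, and anything in between, as long as it is symmetric about the center, is also a legal connectivity.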
As part of the overall multidimensional development effort, we had already created code for arbitrary-dimensional neighborhood iteration (see toolbox/images/images/private/neighborhood.cpp for the details). By combining that code with the union-find technique, it was possible to implement arbitrary-dimension, arbitrary-connectivity labeling quickly and robustly, without much code.
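The pixel-wise union-find approach can be sketched as follows. This is an illustrative Python version, hardcoded to 2-D and 4-connectivity for brevity, whereas the toolbox code iterates over a neighborhood derived from CONN in any dimension; all names here are my own.

```python
def label(image, height, width):
    """Two-pass pixel-wise connected component labeling with union-find,
    using 4-connectivity, on a flat binary image (row-major list)."""
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # Pass 1: assign provisional labels, merging with north/west neighbors.
    labels = [0] * (height * width)
    next_label = 1
    for r in range(height):
        for c in range(width):
            i = r * width + c
            if not image[i]:
                continue
            neighbors = []
            if r > 0 and image[i - width]:
                neighbors.append(labels[i - width])   # north
            if c > 0 and image[i - 1]:
                neighbors.append(labels[i - 1])       # west
            if not neighbors:
                parent[next_label] = next_label       # new equivalence class
                labels[i] = next_label
                next_label += 1
            else:
                labels[i] = neighbors[0]
                for n in neighbors[1:]:
                    union(neighbors[0], n)            # record equivalence

    # Pass 2: resolve each provisional label to a compact final label.
    roots = {}
    out = [0] * (height * width)
    for i, lab in enumerate(labels):
        if lab:
            root = find(lab)
            out[i] = roots.setdefault(root, len(roots) + 1)
    return out

img = [1, 1, 0, 1,
       0, 1, 0, 1,
       1, 0, 0, 1]
print(label(img, 3, 4))  # → [1, 1, 0, 2, 0, 1, 0, 2, 3, 0, 0, 2]
```

Note how the per-pixel work is just a loop over neighbor offsets; swapping in a different connectivity, or a different dimensionality, only changes which offsets are visited, which is exactly why this approach composed so cleanly with the generic neighborhood-iteration code.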
I thought about carrying over the run-length encoding idea to multiple dimensions, but I suspected the code needed to perform arbitrary-dimension arbitrary-connectivity adjacency analysis on runs would be complicated and error-prone. I was also pretty sure that there would be cases, depending on the dimensionality, the connectivity definition, and the specific image characteristics, where doing a run-length encoding pass would be counterproductive.
So it all came down to a judgment call. Based on available time and resources, the long list of functions that needed multidimensional implementations, and the possibility that the run-length technique might be inefficient in some cases, we decided to go for the pixel-wise union-find implementation.