Steve on Image Processing with MATLAB
https://blogs.mathworks.com/steve
Steve Eddins is an enthusiastic amateur French horn player. Also, he has developed image processing and MATLAB software for MathWorks since 1993. Steve coauthored the book Digital Image Processing Using MATLAB.

An Example of Function Argument Validation for an Image Utility Function
Fri, 09 Jun 2023
https://blogs.mathworks.com/steve/2023/06/09/an-example-of-function-argument-validation-for-an-image-utility-function/
While I was working on a prototype yesterday, I found myself writing a small bit of code that turned out to be so useful, I wondered why I had never done it before. I'd like to show it to you. It'll be an opportunity to explore some recent advances in MATLAB that make useful function syntaxes easy to write.
I wrote a simple function called imageSizeInXY. This function takes a MATLAB Graphics Image object and returns its total width and height in the spatial x-y units of the axes.
function out = imageSizeInXY(im)
 
...
 
end
(I'll show the full implementation below.) I was using the function like this.
A = imread("peppers.png");
im = imshow(A, XData = [-1 1], YData = [-1 1]);
axis on
imageSizeInXY(im)
ans = 1×2
2.0039 2.0052
But this function only works if I have captured the image object in a variable. Usually, I call imshow with no output argument, and so I don't have the image object:
imshow(A, XData = [-1 1], YData = [-1 1])
I thought to myself, it would sure be nice if I could just call imageSizeInXY with no input arguments, and it would just automatically tell me about the image that's displayed in the current figure. After thinking about this briefly, I wrote the following:
function out = imageSizeInXY(im)
    arguments
        im (1,1) matlab.graphics.primitive.Image = findimage
    end

    ...

end

function im = findimage
    image_objects = imhandles(imgca);
    if isempty(image_objects)
        error("No image object found.");
    end
    im = image_objects(1);
end
The first interesting piece here is the arguments block, a MATLAB language feature introduced in R2019b. The arguments block, and the function argument validation features more generally, help you implement function behavior with helpful error messages and useful defaults while writing very little code.
Let's look more closely at the beginning of the function.
function out = imageSizeInXY(im)
    arguments
        im (1,1) matlab.graphics.primitive.Image = findimage
    end
The arguments block lets us specify constraints and default behavior for the function's input arguments. The "(1,1)" part says that the input argument, im, must have size $1 \times 1$. In other words, it must be a scalar. The next part, "matlab.graphics.primitive.Image," indicates that im is required to be that class (or convertible to it).
MATLAB will enforce these size and type constraints automatically, issuing specific error messages if they are not met.
[Image: image-size-in-xy-errors.png (validation error messages)]
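To see the validation in action, you can deliberately pass a wrong input. A small, hypothetical sketch (the exact message text depends on the MATLAB release):
try
    imageSizeInXY("not an image")   % wrong type: fails the class validation
catch err
    disp(err.message)               % MATLAB's automatically generated message
end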
Note that you can get the full class name of MATLAB graphics objects by calling class:
class(im)
ans = 'matlab.graphics.primitive.Image'
The "= findimage" portion is used to automatically provide a default value for the input im. This can be a constant value or an expression. In this case, findimage is an expression that calls the local function that I have written:
function im = findimage
    image_objects = imhandles(imgca);
    if isempty(image_objects)
        error("No image object found.");
    end
    im = image_objects(1);
end
The functions imgca and imhandles are Image Processing Toolbox utilities. The function imgca returns the most recent current axes object that contains one or more images, and imhandles returns the image objects within it.
The last few lines are just me being careful to handle the cases where there are no image objects to be found, or where there are multiple image objects in the axes.
The full implementation, including the simple size calculation, is below.
function out = imageSizeInXY(im)
    arguments
        im (1,1) matlab.graphics.primitive.Image = findimage
    end

    pixel_width = extent(im.XData,size(im.CData,2));
    pixel_height = extent(im.YData,size(im.CData,1));

    x1 = im.XData(1) - (pixel_width / 2);
    x2 = im.XData(end) + (pixel_width / 2);

    y1 = im.YData(1) - (pixel_height / 2);
    y2 = im.YData(end) + (pixel_height / 2);

    out = [(x2 - x1) (y2 - y1)];
end

function im = findimage
    image_objects = imhandles(imgca);
    if isempty(image_objects)
        error("No image object found.");
    end
    im = image_objects(1);
end

function e = extent(data,num_pixels)
    if (num_pixels <= 1)
        e = 1;
    else
        e = (data(2) - data(1)) / (num_pixels - 1);
    end
end
Now I don't have to remember to call imshow with an output argument in order to use imageSizeInXY.
If you haven't already explored using function argument validation and arguments blocks for writing your own MATLAB functions, I encourage you to give them a try. It's really worth it!
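If you want a starting template, here is a minimal, hypothetical sketch of the pattern; the function name scaleImage, its arguments, and the validators chosen are made up for illustration:
function y = scaleImage(im,factor)
    arguments
        im {mustBeNumeric}                         % any numeric array
        factor (1,1) double {mustBePositive} = 2   % positive scalar, default 2
    end
    y = im * factor;
end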

Quantitative Video Analysis: Measuring a Container Filling with Liquid
Wed, 12 Apr 2023
https://blogs.mathworks.com/steve/2023/04/12/quantitative-video-analysis-measuring-a-container-filling-with-liquid/

Today's post is by guest blogger Isaac Bruss. Isaac has been a course developer for MathWorks since 2019. His PhD is in computational physics from UMass Amherst. Isaac has helped launch and support several courses on Coursera, on topics such as robotics, machine learning, and computer vision. Isaac's post is based on a lesson from the Image Processing for Engineering and Science Specialization on Coursera.

Introduction

In this post, I’ll analyze a video of a container filling with liquid. Now, that might not sound like the most exciting thing in the world, but imagine, for example, you're a quality control engineer responsible for ensuring that the rate of filling bottles is consistent. You need to determine if this rate is constant or varies over time. To answer this question, you’d need to process each individual frame from the video file. But where would you start?
Well, in this case, I started out with the Video Viewer App to become more familiar with the video. This app allows you to see the video's size, frame rate, and total number of frames along the bottom bar. By playing the video and navigating between frames, I can quickly get a sense of what I'm working with.
[Animation: movie-player.gif (exploring the video in the Video Viewer app)]

Creating the Segmentation Function

With the goal of measuring the liquid within the container, I needed a segmentation function that could separate the liquid from the background. To help ensure a robust algorithm, I export a few representative frames on which to create and test the segmentation function.
[Animation: export-frames.gif (exporting representative frames)]
The Color Thresholder App is a powerful tool for interactively segmenting images based on color. This app is particularly useful for images that have distinct color differences between the object of interest and the background. In this case, I used the region selection tool to highlight pixels of interest within the L*a*b* color space. This allowed me to isolate the dark liquid from both the purple background and white foam at the top.
[Animation: color-thresholder.gif (segmenting in the Color Thresholder app)]
Once I'm satisfied with my thresholding results, I export this segmentation function from the app as a “.m” file named “liquidMask”.
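To give a feel for what the app produces, here is a rough, hypothetical sketch of the shape of such an exported function; the threshold values below are placeholders, not the app's actual output:
function BW = liquidMask(RGB)
    % Work in the L*a*b* color space used during interactive thresholding.
    lab = rgb2lab(RGB);
    % Placeholder thresholds: the app writes out the values chosen in the UI.
    BW = (lab(:,:,1) < 40) & (lab(:,:,2) > -15) & (lab(:,:,2) < 15);
end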

Check the Segmentation Function

This custom function was developed using only the few frames I exported from the video. But to be confident in the results, I need to ensure that it accurately isolates the dark liquid in all the frames. To do this, I first create a video reader object.
v = VideoReader("liquidVideo.mp4")
v =
  VideoReader with properties:

   General Properties:
            Name: 'liquidVideo.mp4'
            Path: 'C:\MATLAB Drive\Play\Blog Posts\video analysis'
        Duration: 24
     CurrentTime: 0.1000
       NumFrames: <Calculating...>

   Video Properties:
           Width: 120
          Height: 216
       FrameRate: 10
    BitsPerPixel: 24
     VideoFormat: 'RGB24'
Then I loop through the frames using the hasFrame function. Inside the loop, each frame is passed into the custom liquidMask function, and the resulting mask is displayed side-by-side using the montage function.
while hasFrame(v)            % Loop through all frames
    img = readFrame(v);      % Read a frame
    bw = liquidMask(img);    % Apply the custom segmentation function
    montage({img,bw},'BorderSize',[20,20],'BackgroundColor',[0.5,0.5,0.5]);
    drawnow
end
[Animation: mask-generation.gif (original frames alongside their binary masks)]
The resulting segmented video frames look great! Further morphological operations to better segment the region of interest are possible, but to get a quick example up and running they aren’t needed.

Analyzing Liquid Percentage

Now I can move on to the final part: calculating the percentage of the container that is filled with liquid at each frame.
To get started, I need to rewind the video to the beginning using the CurrentTime property of the VideoReader.
v.CurrentTime = 0;
Next, I initialize two variables to store values for each frame: the percentage of the container filled, and the time stamp.
nFrames = v.NumFrames;
percents = zeros(nFrames,1);
times = zeros(nFrames,1);
Now I’m ready to use a for-loop to read through the video, segment each image, and calculate the percentage of the container filled at each frame. The percentage calculation is done by counting the number of pixels that are filled with liquid and dividing it by the total number of pixels.
for i = 1:nFrames                   % Loop through all frames
    img = readFrame(v);             % Read a frame
    bw = liquidMask(img);           % Create the binary mask using the custom function
    liquidPart = nnz(bw);           % Calculate the number of liquid (true) pixels
    perc = liquidPart/numel(bw);    % Calculate percentage filled with liquid
    time = v.CurrentTime;           % Mark the current time
    percents(i) = perc;             % Save the fill-percentage value
    times(i) = time;                % Save the time stamp
end
Now that the data has been gathered, I plot the percentage of the container filled versus time to see how quickly the liquid fills the container.
figure
plot(times,percents*100)
title('Percentage of Container Filled with Liquid vs Time')
xlabel('Time (seconds)')
ylabel('Percentage Filled')
Look at that! The fill rate is not consistent. From the plot, I see that the flow slowed down around 8 seconds, increased at 15 seconds, and slowed again at 20 seconds.
This post is based on a lesson from the Image Processing for Engineering and Science Specialization on Coursera. There you’ll be able to complete many projects like this one, from segmenting cracks in images of concrete to counting cars in a video of traffic.
___
Thanks, Isaac!

Revised Circularity Measurement in regionprops (R2023a)
Tue, 21 Mar 2023
https://blogs.mathworks.com/steve/2023/03/21/revised-circularity-measurement-in-regionprops-r2023a/

For some shapes, especially ones with a small number of pixels, a commonly-used method for computing circularity often results in a value which is biased high, and which can be greater than 1. In releases prior to R2023a, the function regionprops used this common method. In R2023a, the computation method has been modified to correct for the bias.
For the full story, read on!

What Is Circularity?

Our definition of circularity is:
$c = \frac{4\pi a}{p^2}$
where a is the area of a shape and p is its perimeter. It is a unitless quantity that lies in the range $[0,1]$. A true circle has a circularity of 1. (For a circle, $a = \pi r^2$ and $p = 2\pi r$, so $c = 4\pi(\pi r^2)/(2\pi r)^2 = 1$.) Here are the estimated circularity measurements for some simple shapes.
url = "https://blogs.mathworks.com/steve/files/simple-shapes.png";
A = imread(url);
p = regionprops("table",A,["Circularity" "Centroid"])
p = 3×2 table
          Centroid           Circularity
    ____________________     ___________
    104.0036    107.9564       0.9999
    327.0258    141.0007       0.6223
    619.9921    107.9377       0.1514
imshow(A)
for k = 1:height(p)
    text(p.Centroid(k,1),p.Centroid(k,2),string(p.Circularity(k)),...
        Color = "yellow",...
        BackgroundColor = "blue",...
        HorizontalAlignment = "center",...
        VerticalAlignment = "middle",...
        FontSize = 14)
end
title("Circularity measurements for some simple shapes")

Circularity > 1??

Some users have been understandably puzzled to find that, in previous releases, regionprops sometimes returned values greater than 1 for circularity. For example:
url = "https://blogs.mathworks.com/steve/files/small-circle.png";
B = imread(url);
imshow(B)
outlinePixels(size(B))
text(7,7,"1.38",...
Color = "yellow",...
BackgroundColor = "blue",...
HorizontalAlignment = "center",...
VerticalAlignment = "middle",...
FontSize = 14)
title("Circularity as measured in previous releases")

Measuring Perimeter

Before going on, I want to pause to note that, while circularity can be a useful measurement, it can also be quite tricky. One can get lost in fractal land when trying to measure perimeter precisely; see Benoit Mandelbrot's famous paper, "How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension," described on Wikipedia. Then, add the difficulties introduced by the fact that we are working with digital images, whose pixels we sometimes think of as points on a discrete grid, and sometimes as squares with unit area. See, for example, MATLAB Answers MVP John D'Errico's commentary on this topic.
In fact, measuring perimeter is what gets the common circularity computation method in trouble. Measuring the perimeter of objects in digital images is usually done by tracing along from one pixel center to the next, and then adding up weights that are determined by the angles between successive segments.
b = bwboundaries(B);
imshow(B)
outlinePixels(size(B))
hold on
plot(b{1}(:,1),b{1}(:,2),Color = [0.850 0.325 0.098],...
Marker = "*")
hold off
A few different schemes have been used to assign the weights in order to minimize systematic bias in the perimeter measurement. The weights used by regionprops are those given in Vossepoel and Smeulders, "Vector Code Probability and Metrication Error in the Representation of Straight Lines of Finite Length," Computer Graphics and Image Processing, vol. 20, pp. 347-364, 1982. See the discussion in Luengo, "Measuring boundary length," Cris' Image Analysis Blog: Theory, Methods, Algorithms, Applications, https://www.crisluengo.net/archives/310/, last accessed 20-Mar-2023.
The cause of the circularity computation bias can be seen in the figure above. The perimeter is being calculated based on a path that connects pixel centers. Some of the pixels, though, lie partially outside the red curve. The perimeter, then, is being calculated along a curve that does not completely contain all of the square pixels; the curve is just a bit too small. Relative to the area computation, then, the perimeter computation is being underestimated. Since perimeter is in the denominator of the circularity formula, the resulting value is too high.
Can we fix the problem by tracing the perimeter path differently? Let's try with a digitized square at a 45-degree angle.
mask = [ ...
0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0 0 0 0 0 0
0 0 0 0 0 1 1 1 0 0 0 0 0
0 0 0 0 1 1 1 1 1 0 0 0 0
0 0 0 1 1 1 1 1 1 1 0 0 0
0 0 1 1 1 1 1 1 1 1 1 0 0
0 1 1 1 1 1 1 1 1 1 1 1 0
0 0 1 1 1 1 1 1 1 1 1 0 0
0 0 0 1 1 1 1 1 1 1 0 0 0
0 0 0 0 1 1 1 1 1 0 0 0 0
0 0 0 0 0 1 1 1 0 0 0 0 0
0 0 0 0 0 0 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 ];
Here is the path traced by regionprops for computing the perimeter.
x = [1 6 11 6 1] + 1;
y = [6 1 6 11 6] + 1;
imshow(mask)
outlinePixels(size(mask))
hold on
plot(x,y,Color = [0.850 0.325 0.098])
hold off
The perimeter of the red square is $4 \cdot 5 \sqrt{2} \approx 28.3$. That would correspond to an area of $(5 \sqrt{2})^2 = 50$. However, the number of white squares is 61. The estimated perimeter and area are inconsistent with each other, which will result in a biased circularity measurement.
Can we fix it by tracing the perimeter along the outer points of the pixels, like this?
x = [1.5 6.5 7.5 12.5 12.5 7.5 6.5 1.5 1.5];
y = [6.5 1.5 1.5 6.5 7.5 12.5 12.5 7.5 6.5];
imshow(mask)
outlinePixels(size(mask))
hold on
plot(x,y,Color = [0.850 0.325 0.098])
hold off
Now the calculated perimeter is going to be too high because it corresponds to an area that is much greater than 61, the number of white pixels.
perim = sum(sqrt(sum(diff([x(:) y(:)]).^2,2)))
perim = 32.2843
For a square, that perimeter corresponds to this area:
area = (perim/4)^2
area = 65.1421
This time, the estimated perimeter is too high with respect to the estimated area, and so the computed circularity will be biased low.
What if we compute the perimeter by tracing along the pixel edges?
x = [1.5 2.5 2.5 3.5 3.5 4.5 4.5 5.5 5.5 6.5 6.5 ...
7.5 7.5 8.5 8.5 9.5 9.5 10.5 10.5 11.5 11.5 12.5 12.5 ...
11.5 11.5 10.5 10.5 9.5 9.5 8.5 8.5 7.5 7.5 6.5 6.5 ...
5.5 5.5 4.5 4.5 3.5 3.5 2.5 2.5 1.5 1.5];
y = [7.5 7.5 8.5 8.5 9.5 9.5 10.5 10.5 11.5 11.5 ...
12.5 12.5 11.5 11.5 10.5 10.5 9.5 9.5 8.5 8.5 ...
7.5 7.5 6.5 6.5 5.5 5.5 4.5 4.5 3.5 3.5 2.5 2.5 1.5 1.5 ...
2.5 2.5 3.5 3.5 4.5 4.5 5.5 5.5 6.5 6.5 7.5];
imshow(mask)
outlinePixels(size(mask))
hold on
plot(x,y,Color = [0.850 0.325 0.098], LineWidth = 5)
hold off
But now the calculated perimeter is WAY too high with respect to the area, so the computed circularity would be very low.
perim = sum(sqrt(sum(diff([x(:) y(:)]).^2,2)))
perim = 44

Bias Correction Model

After all that, we decided to leave the perimeter tracing method alone. Instead, we came up with a simple approximate model for the circularity bias and then used that model to derive a correction factor.
Consider a circle with radius r. We can model the bias by computing circularity using area that is calculated from r and perimeter that is calculated from $r - 1/2$. The result, $c'$, is the model of the biased circularity.
$c' = \frac{1}{1 - \frac{1}{r} + \frac{0.25}{r^2}} = \frac{1}{\left(1 - \frac{0.5}{r}\right)^2}$
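To see where this expression comes from, substitute an area of $\pi r^2$ and a perimeter of $2\pi\left(r - \frac{1}{2}\right)$ into the circularity formula:
$c' = \frac{4\pi \left(\pi r^2\right)}{\left[2\pi\left(r - \frac{1}{2}\right)\right]^2} = \frac{r^2}{\left(r - \frac{1}{2}\right)^2} = \frac{1}{\left(1 - \frac{0.5}{r}\right)^2}$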
How well does $c'$ match what the old regionprops computation was returning? Here is a comparison performed on digitized circles with a range of radii.
[Image: bias-model-comparison-for-circles.png (model vs. old regionprops circularity for digitized circles)]
But what should we do for general shapes? After all, if we already know it's a circle, then there's no big need to compute the circularity metric, right?
Here's the strategy: Given the computed area, a, determine an equivalent radius, $r_{\text{eq}}$, which is the radius of a circle with the same area. Then, use $r_{\text{eq}}$ to derive a bias correction factor:
$c_{\text{corrected}} = c\left(1 - \frac{0.5}{r_{\text{eq}}}\right)^2 = c\left(1-\frac{0.5}{\sqrt{a/\pi}}\right)^2$
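In code, the correction is a one-liner. A minimal sketch, where a and c stand for the measured area and the uncorrected circularity of a region:
r_eq = sqrt(a/pi);                  % radius of the circle with area a
c_corrected = c*(1 - 0.5/r_eq)^2;   % bias-corrected circularity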

Does It Work?

We just made a big assumption there, computing $r_{\text{eq}}$ as if our shape is a circle. Can we really do that? Well, we did a bunch of experiments, using various kinds of shapes, scaled over a large range of sizes, and rotated at various angles. Here are the results for one experiment, using an equilateral triangle with a base rotated off the horizontal axis by 20 degrees. The blue curve is the circularity computed in previous releases; the red curve is the corrected circularity as computed in R2023a, and the horizontal orange line is the theoretical value.
[Image: equilateral-triangle-20-degrees.png (circularity vs. size for an equilateral triangle rotated 20 degrees)]
We saw similar curves for a broad range of shapes. For some shapes with a theoretical circularity value below around 0.2, the correction did not improve the result. On balance, however, we decided that the correction method was worthwhile and that regionprops should be changed to incorporate it.
If you need to get the previously computed value in your code, see the Version History section of the regionprops reference page for instructions.

Helper Function

function outlinePixels(sz)
    M = sz(1);
    N = sz(2);
    for k = 1.5:(N-0.5)
        xline(k, Color = [.7 .7 .7]);
    end
    for k = 1.5:(M-0.5)
        yline(k, Color = [.7 .7 .7]);
    end
end

Identifying Border-Touching Objects Using imclearborder or regionprops
Fri, 17 Feb 2023
https://blogs.mathworks.com/steve/2023/02/17/objects-touching-border-imclearborder-regionprops/

I have seen some requests and questions related to identifying objects in a binary image that are touching the image border. Sometimes the question relates to the use of imclearborder, and sometimes the question is about regionprops. Today, I'll show you how to tackle the problem both ways.

Using imclearborder

I'll be using this binary version of the rice.png sample image from the Image Processing Toolbox.
url = "https://blogs.mathworks.com/steve/files/rice-bw-1.png";
A = imread(url);
imshow(A)
The function imclearborder removes all objects touching the border.
B = imclearborder(A);
imshow(B)
That seems to be the opposite of what we want. We can, however, convert this into the desired result by using the MATLAB element-wise logical operators, such as & (and), | (or), and ~ (not). In words, we want the foreground pixels that are in A and are not in B. As a MATLAB expression, it looks like this:
C = A & ~B;
imshow(C)

Using regionprops

The function regionprops can compute all sorts of properties of binary image objects. Here is a simple example that computes the area and centroid of each object in our sample image. I'm using the form of regionprops that returns a table.
props = regionprops("table",A,["Area" "Centroid"])
props = 97×2 table
    Area        Centroid
    ____    __________________
    138      5.9855     35.0870
    121      6.4463     62.0579
     67      2.1045     85.1940
    178      4.5056    117.5674
    157      9.4650    141.2675
    286     11.8007    168.5350
    195     12.0872    205.2974
    135      9.4296    223.0296
    176     21.3693     95.9716
    208     24.1971     26.3125
    199     28.1859     48.2563
    188     22.1011    122.6011
    216     28.2778    179.4630
    143     28.3427    251.9720
    ⋮
My technique for finding border-touching objects with regionprops uses the BoundingBox property, so include that property along with any other properties that you want to measure.
props = regionprops("table",A,["Area" "Centroid" "BoundingBox"])
props = 97×3 table
    Area        Centroid                   BoundingBox
    ____    __________________    _____________________________
    138      5.9855     35.0870     0.5000     22.5000    13    22
    121      6.4463     62.0579     0.5000     53.5000    13    16
     67      2.1045     85.1940     0.5000     73.5000     4    23
    178      4.5056    117.5674     0.5000    102.5000    10    28
    157      9.4650    141.2675     0.5000    132.5000    20    16
    286     11.8007    168.5350     0.5000    152.5000    26    33
    195     12.0872    205.2974     0.5000    199.5000    24    12
    135      9.4296    223.0296     0.5000    218.5000    20    10
    176     21.3693     95.9716     9.5000     86.5000    25    17
    208     24.1971     26.3125    10.5000     18.5000    26    16
    199     28.1859     48.2563    17.5000     37.5000    19    23
    188     22.1011    122.6011    17.5000    108.5000    10    29
    216     28.2778    179.4630    17.5000    166.5000    20    24
    143     28.3427    251.9720    18.5000    245.5000    23    11
    ⋮
For any particular object, BoundingBox is a four-element vector containing the left, top, width, and height of the bounding box. For example, here is the bounding box of the 20th object:
props.BoundingBox(20,:)
ans = 1×4
35.5000 113.5000 9.0000 28.0000
By comparing these values to the size of the image, we can identify which objects touch the image border.
Start by determining the objects that touch specific borders.
left_coordinate = props.BoundingBox(:,1);
props.TouchesLeftBorder = (left_coordinate == 0.5);
top_coordinate = props.BoundingBox(:,2);
props.TouchesTopBorder = (top_coordinate == 0.5);
right_coordinate = left_coordinate + props.BoundingBox(:,3);
bottom_coordinate = top_coordinate + props.BoundingBox(:,4);
[M,N] = size(A);
props.TouchesRightBorder = (right_coordinate == (N+0.5));
props.TouchesBottomBorder = (bottom_coordinate == (M+0.5))
props = 97×7 table
    Area        Centroid                   BoundingBox              TouchesLeftBorder    TouchesTopBorder    TouchesRightBorder    TouchesBottomBorder
    ____    __________________    _____________________________    _________________    ________________    __________________    ___________________
    138      5.9855     35.0870     0.5000     22.5000    13    22          1                   0                    0                     0
    121      6.4463     62.0579     0.5000     53.5000    13    16          1                   0                    0                     0
     67      2.1045     85.1940     0.5000     73.5000     4    23          1                   0                    0                     0
    178      4.5056    117.5674     0.5000    102.5000    10    28          1                   0                    0                     0
    157      9.4650    141.2675     0.5000    132.5000    20    16          1                   0                    0                     0
    286     11.8007    168.5350     0.5000    152.5000    26    33          1                   0                    0                     0
    195     12.0872    205.2974     0.5000    199.5000    24    12          1                   0                    0                     0
    135      9.4296    223.0296     0.5000    218.5000    20    10          1                   0                    0                     0
    176     21.3693     95.9716     9.5000     86.5000    25    17          0                   0                    0                     0
    208     24.1971     26.3125    10.5000     18.5000    26    16          0                   0                    0                     0
    199     28.1859     48.2563    17.5000     37.5000    19    23          0                   0                    0                     0
    188     22.1011    122.6011    17.5000    108.5000    10    29          0                   0                    0                     0
    216     28.2778    179.4630    17.5000    166.5000    20    24          0                   0                    0                     0
    143     28.3427    251.9720    18.5000    245.5000    23    11          0                   0                    0                     1
    ⋮
Finally, compute whether objects touch any border using |, the element-wise OR operator.
props.TouchesAnyBorder = props.TouchesLeftBorder | ...
    props.TouchesTopBorder | ...
    props.TouchesRightBorder | ...
    props.TouchesBottomBorder
props = 97×8 table
    Area        Centroid                   BoundingBox              TouchesLeftBorder    TouchesTopBorder    TouchesRightBorder    TouchesBottomBorder    TouchesAnyBorder
    ____    __________________    _____________________________    _________________    ________________    __________________    ___________________    ________________
    138      5.9855     35.0870     0.5000     22.5000    13    22          1                   0                    0                     0                      1
    121      6.4463     62.0579     0.5000     53.5000    13    16          1                   0                    0                     0                      1
     67      2.1045     85.1940     0.5000     73.5000     4    23          1                   0                    0                     0                      1
    178      4.5056    117.5674     0.5000    102.5000    10    28          1                   0                    0                     0                      1
    157      9.4650    141.2675     0.5000    132.5000    20    16          1                   0                    0                     0                      1
    286     11.8007    168.5350     0.5000    152.5000    26    33          1                   0                    0                     0                      1
    195     12.0872    205.2974     0.5000    199.5000    24    12          1                   0                    0                     0                      1
    135      9.4296    223.0296     0.5000    218.5000    20    10          1                   0                    0                     0                      1
    176     21.3693     95.9716     9.5000     86.5000    25    17          0                   0                    0                     0                      0
    208     24.1971     26.3125    10.5000     18.5000    26    16          0                   0                    0                     0                      0
    199     28.1859     48.2563    17.5000     37.5000    19    23          0                   0                    0                     0                      0
    188     22.1011    122.6011    17.5000    108.5000    10    29          0                   0                    0                     0                      0
    216     28.2778    179.4630    17.5000    166.5000    20    24          0                   0                    0                     0                      0
    143     28.3427    251.9720    18.5000    245.5000    23    11          0                   0                    0                     1                      1
    ⋮
I'll finish with a quick visual sanity check of the results.
L = bwlabel(A);
L_touches_border = ismember(L,find(props.TouchesAnyBorder));
imshowpair(A,L_touches_border)

Object Sort Order for bwlabel, bwconncomp, and regionprops
Mon, 30 Jan 2023
https://blogs.mathworks.com/steve/2023/01/30/bwlabel-bwconncomp-regionprops-order/

Someone recently asked me about the order of objects found by the functions bwlabel, bwconncomp, and regionprops. In this binary image, for example, what accounts for the object order?
[Image: rice-bw-ul-numbered.png (binary image with objects numbered in label order)]
The functions bwlabel and bwconncomp both scan for objects in the same direction. They look down each column, starting with the left-most column. This animation illustrates the search order:
[Animation: rice-animated.gif (column-wise search order)]
Because of this search procedure, the object order depends on the top, left-most pixel in each object. Specifically, the order is a lexicographic sort of $(c,r)$ coordinates, where c and r are the column and row coordinates of the top, left-most pixel.
If you pass a binary image to regionprops, the resulting object order is the same as for bwconncomp, and that is because regionprops calls bwconncomp under the hood.
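You can check this ordering for yourself. A quick sketch, assuming A is a binary image: the smallest linear index in each object's PixelIdxList is the column-major position of its top, left-most pixel, and those indices increase with the object number.
cc = bwconncomp(A);
first_pixel = cellfun(@min,cc.PixelIdxList);   % top, left-most pixel of each object
issorted(first_pixel)                          % returns true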
Sometimes, people who are working with images of text ask to change things so that the search proceeds along rows instead of columns. The motivation is to get the object order to be the same as the order of characters on each line. (Assuming a left-to-right language, that is.) That generally doesn't work well, though. Consider this text fragment:
[Image: about-larry-75dpi.png (text fragment)]
With a row-wise search, the letter "b" would be found first, followed by "L" and then "t." This animation shows why that happens:
[Animation: about-larry-animated.gif (row-wise search order)]
If you want a different sort order, I recommend returning the regionprops output as a table, and then you can use sortrows. Here are a couple of examples.
url = "https://blogs.mathworks.com/steve/files/rice-bw-ul.png";
A = imread(url);
imshow(A)
props = regionprops("table",A,["Centroid" "Area"])
props = 11×2 table
    Area        Centroid
    ____    __________________
    138      5.9855    35.0870
     87      5.3678    60.3563
    208     24.1971    26.3125
    199     28.1859    48.2563
     61     23.8197     4.3443
     20     26.7500    62.3500
    189     43.6349    19.8783
     97     43.8454    58.5361
    115     58.2087     7.0696
     70     61.7857    58.2286
     26     62.9231    31.8846
You could sort the objects according to their centroids, with the primary sort in the vertical direction and the secondary sort in the horizontal direction.
props.Centroid_x = props.Centroid(:,1);
props.Centroid_y = props.Centroid(:,2);
props_by_row = sortrows(props,["Centroid_y" "Centroid_x"])
props_by_row = 11×4 table
    Area        Centroid          Centroid_x    Centroid_y
    ____    __________________    __________    __________
     61     23.8197     4.3443     23.8197        4.3443
    115     58.2087     7.0696     58.2087        7.0696
    189     43.6349    19.8783     43.6349       19.8783
    208     24.1971    26.3125     24.1971       26.3125
     26     62.9231    31.8846     62.9231       31.8846
    138      5.9855    35.0870      5.9855       35.0870
    199     28.1859    48.2563     28.1859       48.2563
     70     61.7857    58.2286     61.7857       58.2286
     97     43.8454    58.5361     43.8454       58.5361
     87      5.3678    60.3563      5.3678       60.3563
     20     26.7500    62.3500     26.7500       62.3500
imshow(A)
for k = 1:height(props_by_row)
    x = props_by_row.Centroid_x(k);
    y = props_by_row.Centroid_y(k);
    text(x,y,string(k),Color = "yellow", BackgroundColor = "blue", FontSize = 16, ...
        FontWeight = "bold", HorizontalAlignment = "center",...
        VerticalAlignment = "middle");
end
For another example, you could sort by area.
props_by_area = sortrows(props,"Area","descend")
props_by_area = 11×4 table
    Area        Centroid          Centroid_x    Centroid_y
    ____    __________________    __________    __________
    208     24.1971    26.3125     24.1971       26.3125
    199     28.1859    48.2563     28.1859       48.2563
    189     43.6349    19.8783     43.6349       19.8783
    138      5.9855    35.0870      5.9855       35.0870
    115     58.2087     7.0696     58.2087        7.0696
     97     43.8454    58.5361     43.8454       58.5361
     87      5.3678    60.3563      5.3678       60.3563
     70     61.7857    58.2286     61.7857       58.2286
     61     23.8197     4.3443     23.8197        4.3443
     26     62.9231    31.8846     62.9231       31.8846
     20     26.7500    62.3500     26.7500       62.3500
imshow(A)
for k = 1:height(props_by_area)
    x = props_by_area.Centroid_x(k);
    y = props_by_area.Centroid_y(k);
    text(x,y,string(k),Color = "yellow", BackgroundColor = "blue", FontSize = 16, ...
        FontWeight = "bold", HorizontalAlignment = "center", ...
        VerticalAlignment = "middle");
end
title("Objects sorted by area (biggest to smallest)")

Fast Local Sums, Integral Images, and Integral Box Filtering
Mon, 09 Jan 2023
https://blogs.mathworks.com/steve/2023/01/09/integral-image-box-filtering/

In May 2006, I wrote about a technique for computing fast local sums of an image. Today, I want to update that post with additional information about integral image and integral box filtering features in the Image Processing Toolbox.
First, let's review the concept of local sums.
A = magic(7)
A = 7×7
    30    39    48     1    10    19    28
    38    47     7     9    18    27    29
    46     6     8    17    26    35    37
     5    14    16    25    34    36    45
    13    15    24    33    42    44     4
    21    23    32    41    43     3    12
    22    31    40    49     2    11    20
The local sum of the $3 \times 3$ neighborhood around the (2,3) element of A can be computed this way:
submatrix = A(1:3, 2:4)
submatrix = 3×3
    39    48     1
    47     7     9
     6     8    17
local_sum_2_3 = sum(submatrix,"all")
local_sum_2_3 = 182
In image processing applications, we are often interested in the local sum divided by the number of elements in the neighborhood, which is 9 in this case:
local_sum_2_3/9
ans = 20.2222
Computing all the local sums for the matrix, divided by the neighborhood size, is the same as performing 2-D filtering with a local mean filter, like this:
B = imfilter(A,ones(3,3)/9);
B(2,3)
ans = 20.2222
This is often called a box filter. Computing an $N \times N$ box filter for an entire $P \times Q$ image would seem to require approximately $PQN^2$ operations. In other words, for a fixed image size, we might expect the box filter computational time to increase in proportion to $N^2$, where N is the local sum window size.
Let's see how long imfilter takes with respect to a changing local sum neighborhood size.
I = rand(3000,4000);
nn = 15:10:205;
imfilter_times = zeros(size(nn));
for k = 1:length(nn)
    n = nn(k);
    f = @() imfilter(I,ones(n,n)/n^2);
    imfilter_times(k) = timeit(f);
end
plot(nn,imfilter_times)
title("Box filter execution time using imfilter")
xlabel("window size")
ylabel("time (s)")
The function imfilter is doing better than expected, with an execution time that increases only linearly with the window size. (That's because imfilter is taking advantage of filter separability, which I have written about previously.)
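Separability means that the 2-D box kernel factors into a column vector times a row vector, so two 1-D filtering passes produce the same result as one 2-D pass, up to floating-point round-off. A quick sketch using the same image I:
B1 = imfilter(I,ones(5,5)/25);                      % one 2-D pass
B2 = imfilter(imfilter(I,ones(5,1)/5),ones(1,5)/5); % two 1-D passes
max(abs(B1(:) - B2(:)))                             % round-off-level difference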
But we can compute local sums even faster, by using something called an integral image, also known as a summed-area table. Let's go back to the magic square example and use the function integralImage.
Ai = integralImage(A)
Ai = 8×8
     0     0     0     0     0     0     0     0
     0    30    69   117   118   128   147   175
     0    68   154   209   219   247   293   350
     0   114   206   269   296   350   431   525
     0   119   225   304   356   444   561   700
     0   132   253   356   441   571   732   875
     0   153   297   432   558   731   895  1050
     0   175   350   525   700   875  1050  1225
Each element of Ai is the cumulative sum of all the elements in A that are above and to the left. Ai(4,3), which is 206, is the sum of all the elements of A(1:3,1:2).
A(1:3,1:2)
ans = 3×2
    30    39
    38    47
    46     6
sum(ans,"all")
ans = 206
Ai(4,3)
ans = 206
Expressed as a general formula, the integral image is $A_i(m,n) = \sum_{p=1}^{m-1} \sum_{q=1}^{n-1} A(p,q)$.
The really interesting thing about having the integral image, Ai, is that it lets you compute the sum of any rectangular submatrix of A by an addition and subtraction of just four numbers. Specifically, the sum of all the elements in A(m1:m2,n1:n2) is Ai(m2+1,n2+1) - Ai(m1,n2+1) - Ai(m2+1,n1) + Ai(m1,n1). Consider computing the sum of all elements in A(3:5,2:6).
m1 = 3;
m2 = 5;
n1 = 2;
n2 = 6;
A(m1:m2,n1:n2)
ans = 3×5
     6     8    17    26    35
    14    16    25    34    36
    15    24    33    42    44
sum(ans,"all")
ans = 375
Ai(m2+1,n2+1) - Ai(m1,n2+1) - Ai(m2+1,n1) + Ai(m1,n1)
ans = 375
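Incidentally, the integral image itself is cheap to compute; it is just a cumulative sum along each dimension, plus a leading row and column of zeros. A sketch that reproduces the output of integralImage:
Ai2 = zeros(size(A) + 1);
Ai2(2:end,2:end) = cumsum(cumsum(A,1),2);   % running sums down columns, then across rows
isequal(Ai,Ai2)                             % returns true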
If we have the integral image, then, we can compute every element of the box filter output by adding just four terms together. In other words, the box filter computation can be independent of the box filter window size!
The Image Processing Toolbox function integralBoxFilter does this for you. You give it the integral image, and it returns the result of the box filter applied to the original image.
Let's see how long that takes as we vary the box filter window size.
integral_box_filter_times = zeros(1,length(nn));
for k = 1:length(nn)
    n = nn(k);
    h = @() integralBoxFilter(Ai,n);
    integral_box_filter_times(k) = timeit(h);
end
plot(nn,integral_box_filter_times, ...
    nn,imfilter_times)
legend(["integralBoxFilter" "imfilter"], ...
    Location = "northwest")
title("Box filter execution time")
xlabel("window size")
ylabel("time (s)")
That looks great! As expected, integralBoxFilter is fast, with execution time that is independent of the window size. To be completely fair, though, we should include the time required to compute the integral image.
integral_box_filter_total_times = zeros(1,length(nn));
for k = 1:length(nn)
    n = nn(k);
    r = @() integralBoxFilter(integralImage(I),n);
    integral_box_filter_total_times(k) = timeit(r);
end
plot(nn,imfilter_times,...
    nn,integral_box_filter_times,...
    nn,integral_box_filter_total_times)
title("Box filter execution time")
xlabel("window size")
ylabel("time (s)")
legend(["imfilter" "integralBoxFilter" "integralImage + integralBoxFilter"],...
    Location = "northwest")
From the plot, we can see that using the integral image for box filtering is the way to go, even when you take into consideration the time needed to compute the integral image.

Objects and Polygonal Boundaries
Wed, 28 Dec 2022
https://blogs.mathworks.com/steve/2022/12/28/objects-and-polygonal-boundaries/

In some of my recent perusal of image processing questions on MATLAB Answers, I have come across several questions involving binary image objects and polygonal boundaries, like this.
[Image: binary-objects-polygonal-boundaries.png (binary objects and a polygonal boundary)]
The questions vary but are similar:
  • How can I determine which objects touch the polygon boundary?
  • How can I get rid of objects outside the boundary?
  • How can I get rid of objects outside or touching the boundary?
  • How can I get rid of objects within a certain distance of the boundary?
Today I want to show you how to answer all these questions and more besides, using these fundamental operations:
  • drawpolygon and createMask to create a binary "mask" image from a polygon
  • logical operators on binary images
  • imreconstruct to reconstruct original binary image shapes from partial ones
  • bwdist to compute the distance transform
  • relational operators to form masks from distance transform images
(Or, as an alternative to drawpolygon, images.roi.Polygon, and createMask, you can use roipoly and poly2mask.)
Here is the image I'll be using today:
image_url = "https://blogs.mathworks.com/steve/files/rice-bw-1.png";
A = imread(image_url);
imshow(A)
And here is my polygon.
xy = [ ...
    118  31
     35 116
     35 219
    219 245
    161 186
    237  56];
I'll be drawing it using a helper function, showpoly, which is at the end of this post. The helper function uses drawpolygon, like this:
imshow(A)
p = drawpolygon(Position = xy, FaceAlpha = 0.5)
p =
  Polygon with properties:

      Position: [6×2 double]
         Label: ''
         Color: [0 0.4470 0.7410]
        Parent: [1×1 Axes]
       Visible: on
      Selected: 0
Note that drawpolygon creates an images.roi.Polygon (or just Polygon). The Polygon object is one of a variety of region-of-interest (ROI) objects in the Image Processing Toolbox. You can customize many aspects of its appearance and behavior, and it is useful for implementing apps that work with ROIs.
Of particular interest for today's topic is the Polygon function called createMask. This function returns a binary image with white pixels inside the polygon.
mask = createMask(p);
imshow(mask)
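If you don't need the interactive ROI object, you can compute essentially the same mask directly from the vertex list with poly2mask (x coordinates first, then y), assuming the image occupies the default spatial coordinate system:
mask2 = poly2mask(xy(:,1),xy(:,2),size(A,1),size(A,2));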

Objects completely or partially inside the polygon

Now we can start answering the object-polygon questions. First up: which objects are inside the polygon, either completely or partially?
One of the things that makes MATLAB fun for image processing is using logical operators with binary images. Below, I compute the logical AND of the original binary image with the polygon mask image.
B = A & mask;
imshow(B)
showpoly
Next, I apply morphological reconstruction using imreconstruct. This function takes a marker image and expands its objects outward, while constraining the expansion to include only object pixels in the second input. This is a convenient way to convert partial objects back into their full, original shapes. The result, C, is an image containing objects that are completely or partially within the polygon.
C = imreconstruct(B,A);
imshow(C)
showpoly

Objects touching the perimeter of the polygon

Next question: which objects touch the edge, or perimeter, of the polygon? I'll start with bwperim on the polygon mask to compute a perimeter mask.
perimeter_mask = bwperim(mask);
imshow(perimeter_mask)
Going back to the logical & operator gives us the perimeter pixels that are part of any of the objects.
D = A & perimeter_mask;
imshow(D)
Morphological reconstruction is useful again here, to "reconstruct" the full shapes of objects from the object pixels that lie along the polygon perimeter.
J = imreconstruct(D,A);
imshow(J)
showpoly();

Objects outside the polygon

Third question: which objects lie completely outside the polygon? Recall that image C, computed above, contains the objects that are completely or partially inside the polygon. The objects that we want, then, are the objects that are in A but not in C. You can compute this using A & ~C (or, equivalently, xor(A,C), since the objects in C are a subset of those in A).
objects_outside = A & ~C;
imshow(objects_outside)
showpoly();
What about the object near the upper right polygon vertex? Is it really completely outside the polygon?
axis([215 230 45 60])
It looks like a sliver of white is inside the polygon. Why is that?
I'll explain by showing the polygon mask, the outside objects, the polygon itself, and the pixel grid. I'll use imshowpair to display both the polygon mask and the outside-objects mask. (The function pixelgrid is on the File Exchange.)
figure
imshowpair(objects_outside,mask)
showpoly();
axis([215 230 45 60])
pixelgrid
When a polygon only partially covers a pixel, createMask uses tie-breaking rules to determine whether that pixel is inside or outside. You can see the tie-breaking rules play out in several pixels above.

Objects inside and at least 20 pixels away from the polygon boundary

The fourth question is which objects are not only inside the polygon, but at least 20 pixels away from the polygon boundary? To answer this question, I'll bring in another tool: the distance transform, as computed by bwdist.
First, though, let's start by applying the logical NOT (or complement) operator, ~. Applying it to the polygon mask image gives us a mask showing the pixels that are outside the polygon. I'll turn the axis box on so you can see the extent of the exterior mask.
exterior_mask = ~mask;
imshow(exterior_mask)
axis on
The distance transform computes, for every image pixel, the distance between that pixel and the nearest foreground pixel. If a pixel is itself a foreground pixel, then the distance transform at that pixel is 0. By computing the distance transform of the exterior mask, we can find out how far every pixel is from the closest pixel in the polygon exterior.
G = bwdist(exterior_mask);
(Alternative approach: consider the buffer method of polyshape object.)
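A tiny numeric example may make the behavior of bwdist concrete:
bw = logical([1 0 0 0]);
bwdist(bw)    % returns [0 1 2 3]: distance to the nearest true pixel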
When displaying a distance transform as an image, the autoranging syntax of imshow is useful. Just specify [] as the second argument when displaying intensity values, and the minimum value will get automatically scaled to black, and the maximum intensity value will get automatically scaled to white.
imshow(G,[])
Now let's compute a mask of pixels at least 20 pixel units away from the exterior mask by using a relational operator, >=.
interior_mask_with_buffer = (G >= 20);
imshow(interior_mask_with_buffer)
showpoly();
Using the techniques described above, compute an image containing the objects that are completely or partially outside the buffered interior mask.
H = imreconstruct(A & ~interior_mask_with_buffer, A);
imshow(H)
Next, find the objects that are in A but not in H.
I = A & ~H;
imshowpair(I,interior_mask_with_buffer)
showpoly();

Objects not within 20 pixels of the polygon border

The final question for today is which objects are not within 20 pixels of the polygon border. Start by computing the distance transform of the perimeter mask.
J = bwdist(perimeter_mask);
imshow(J,[])
Use a relational operator to compute a buffered perimeter mask.
perimeter_mask_with_buffer = (J <= 20);
imshow(perimeter_mask_with_buffer)
Use the techniques described above to find all of the objects that are completely or partially within the buffered perimeter mask region.
K = A & perimeter_mask_with_buffer;
L = imreconstruct(K,A);
imshow(L)
The final answer is the objects that are in the original image, A, but not in L.
M = A & ~L;
imshowpair(M,perimeter_mask_with_buffer)
showpoly();
This post demonstrated a number of MATLAB and toolbox functions and operators that work well together:
  • createMask (or poly2mask) to create a binary "mask" image from polygons
  • Logical operators on binary images to compute things like "in this image or mask but not in this other one"
  • imreconstruct to reconstruct original binary image shapes from partial ones
  • bwdist to compute the distance transform, which tells you how far away every image pixel is from some binary image mask
  • Relational operators to form masks from distance transform images
Have fun with these.
And Happy New Year!
function showpoly
    % Helper function
    xy = [ ...
        118  31
         35 116
         35 219
        219 245
        161 186
        237  56];

    drawpolygon(Position = xy, FaceAlpha = 0.5);
end

Avoid Using JPEG for Image Analysis
Mon, 19 Dec 2022
https://blogs.mathworks.com/steve/2022/12/19/avoid-jpeg-for-image-analysis/

Note added 20-Dec-2022: I wrote this post with lossy JPEG in mind. If you work with DICOM data, see Jeff Mather's note in the comments below about the use of lossless JPEG in DICOM.

I have recently been reading a lot of image processing and image analysis questions on MATLAB Answers. Anyone who does the same will quickly come across comments and answers posted by the prolific Image Analyst, a MATLAB Answers MVP who has posted an astounding 34,600+ answers in the last 11 years.
I always enjoy the side commentary that I sometimes find in these answers. For example, I came across this code comment in one answer:
% This is a horrible image. NEVER use JPG format
% for image analysis. Use PNG, TIFF, or BMP instead.
Today, I want to take the opportunity to endorse this statement and to elaborate on it a bit. Why do we not love JPEG files for image analysis?
The reason is that JPEG is a lossy image compression format. Lossy compression methods achieve substantial reduction in file size by using "inexact approximations and partial data discarding to represent the content." With JPEG compression, roughly speaking, pixels are grouped into blocks, and then the pixel data in each block is quantized and partially discarded. The method is called lossy because you can't get the exact original image pixel data back from a JPEG file; information has been lost.
I've written about lossy vs. lossless compression before; see my 02-May-2013 post.
Usually, this form of compression is acceptable for viewing photographs because the compression method exploits properties of human visual perception so that the information loss is relatively imperceptible. In image analysis applications, though, when you are trying to automatically detect or measure things, the pixel data imperfections in the JPEG file could mess it up.
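You can observe the loss directly in MATLAB. A small sketch, writing to a temporary JPEG file (the quality setting 75 is arbitrary):
A = imread("peppers.png");              % any lossless source image
imwrite(A,"temp.jpg",Quality = 75);     % lossy save
B = imread("temp.jpg");
isequal(A,B)                            % false: pixel values have changed
max(abs(double(A(:)) - double(B(:))))   % worst-case per-pixel error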
Let me show you a couple of different versions of this picture:
[Image: IMG_6129-reduced-size.png (the original photograph)]
The original picture was shot in RAW mode (which is lossless) and saved as a 4032x3024 PNG file, with a file size of 23 MB.
Here is a highly magnified view of the center of the picture. You can see the individual pixels.
[Image: lossless-magnified.png (magnified view of the lossless version)]
I created a JPEG version of this file with a file size of only about 1 MB. If you look at the entire picture, the JPEG version looks like the original.
[Image: lossy-version-reduced.png (the JPEG version at full size)]
But if you look at a highly magnified view, you can see that pixel data has been partially discarded in a block-wise fashion.
[Image: lossy-magnified.png (magnified view of the JPEG version)]
You can probably imagine how this might affect object detection and measurement.
So, as Image Analyst says, stick to lossless formats such as PNG or TIFF for your image analysis applications.

Finding the Channels
Mon, 12 Dec 2022
https://blogs.mathworks.com/steve/2022/12/12/finding-the-channels/

Last winter, Aktham asked on MATLAB Answers how to find the channels in this image.
url = "https://www.mathworks.com/matlabcentral/answers/uploaded_files/880740/image.jpeg";
A = imread(url);
imshow(A)
Image credit: Aktham Shoukry. Used with permission.
Here is an example of what Aktham meant by "channel."
[Image: channel-example-overlay.png (an example channel highlighted)]
This post shows one way to accomplish the task, using erosion, morphological reconstruction, and some image arithmetic.
The channels are relatively thin in one dimension, either horizontally or vertically. So, a morphological erosion can be constructed that will keep a portion of each grain, while completely eliminating the channels.
First, a bit of preprocessing.
Ag = im2gray(A);
B = imcomplement(Ag);
imshow(B)
The channels are about 10 pixels thick, while the square-ish "grains" are about 50-by-50. Let's erode with a square structuring element with a size that's in between.
C = imerode(B,ones(25,25));
imshow(C)
Next, use morphological reconstruction to restore the original grain shapes.
D = imreconstruct(C,B);
imshow(D)
Complement the image to make the grains dark again.
E = imcomplement(D);
imshow(E)
Subtracting the original grayscale image from E gets us very close.
F = E - Ag;
imshow(F)
G = imbinarize(F);
imshow(G)
A final cleanup of the small objects, using either bwareafilt or bwareaopen, gives us our result.
H = bwareaopen(G,50);
imshow(H)
The function imoverlay gives us a nice visualization of the result. (You could also use imfuse.)
K = imoverlay(Ag,H,[.8500 .3250 .0980]);
imshow(K)
Note, however, that our result isn't quite perfect. There are partially cropped-off grains along the top border that have been identified as channels. Normally, I would recommend using imclearborder as a preprocessing step, but in this case, imclearborder eliminates too many of the desired channels.
I am open to suggestions about how to refine this method. Leave a comment and let me know what you think.

Using Active Contour Automation in the Medical Image Labeler
Fri, 18 Nov 2022
https://blogs.mathworks.com/steve/2022/11/18/using-active-contour-automation-in-the-medical-image-labeler/

Woo hoo! The Image Processing Toolbox team has just created a new product:
Medical Imaging Toolbox™
[Image: medicalimagelabeler-app-ref-page-graphic.png (the Medical Image Labeler app)]
Shipped with the R2022b release a couple of months ago, this product provides apps, functions, and workflows for designing and testing diagnostic imaging applications. You can perform 3D rendering and visualization, multimodal registration, and segmentation and labeling of radiology images. The toolbox also lets you train predefined deep learning networks (with Deep Learning Toolbox™). I'm looking forward to writing about this product and its capabilities.
I've been talking recently with Sailesh, one of the product developers, about the Medical Image Labeler. This app is for labeling ground truth data in 2-D and 3-D medical images. With this app, you can:
  • Import multiple 2-D images or 3-D image volumes.
  • View images as slice planes or volumes with anatomical orientation markers and scale bars.
  • Create multiple pixel label definitions to label regions of interest. Label pixels using automatic algorithms such as flood fill, semi-automatic techniques such as interpolation, and manual techniques such as painting by superpixels.
  • Write, import, and use your own custom automation algorithm to automatically label ground truth data.
  • Export the labeled ground truth data as a groundTruthMedical object. You can use this object to share labels with colleagues or for training semantic segmentation deep learning networks.
The Medical Image Labeler supports 2-D images and image sequences stored in the DICOM and NIfTI file formats. An image sequence is a series of images related by time, such as ultrasound data. The app supports 3-D image volume data stored in the DICOM (single or multifile volume), NIfTI, and NRRD file formats. See Get Started with Medical Image Labeler.
I asked Sailesh for his thoughts about what to tell people in this blog about Medical Image Labeler. He commented that the app is intended for people working towards any kind of AI-assisted CAD (computer-aided diagnosis). It doesn't replace a doctor or diagnostician; rather, it assists with tasks related to their research.
Sailesh also mentioned some specific capabilities that he'd like to highlight. I've prepared a 3-minute video of one of those capabilities: using active contours to automate object labeling. This tool is one of several in Medical Image Labeler that are intended to save time in the labeling process. Check it out:
[Video: using active contour automation in the Medical Image Labeler]
There are a few other things about the Medical Image Labeler that I hope to show you soon.
If you have suggestions for the Medical Image Labeler, or for Medical Imaging Toolbox in general, add a comment below.