# Steve on Image Processing

## Counting objects without bias

In today's post I want to look at two methods for counting objects per unit area. For example, how can we estimate the number of objects per unit area in this image:

I could simply count the number of connected components and divide by the total image area. (For this post I'll assume that each pixel has an area of 1.0.)

url = 'http://blogs.mathworks.com/images/steve/2012/rice-bw-2.png';
bw = imread(url);
cc = bwconncomp(bw)

cc =

Connectivity: 8
ImageSize: [256 256]
NumObjects: 95
PixelIdxList: {1x95 cell}


objects_per_unit_area = cc.NumObjects / (size(bw,1) * size(bw,2))

objects_per_unit_area =

0.0014



This method, however, is wrong according to The Image Processing Handbook, John C. Russ, 5th ed., CRC Press, 2007, pp. 565-567. "When features intersect the edge of the field of view, it is not proper to count all features that can be seen." Russ gives two better methods.

The first method ignores objects touching a pair of adjacent edges, such as the top and left edges. Russ points out that this is "equivalent to counting each feature by its lower right corner," and he draws an analogy to counting people in a room by counting noses. (I'll have to ask the developers on the Computer Vision System Toolbox team for help with nose detection.)

The Image Processing Toolbox has a function, imclearborder, for removing all objects touching the image borders. With a little creativity, we can use that function to identify all objects that do not touch the top and left borders.

We start by padding the image with 0s on the left and top.

bw2 = padarray(bw,[1 1],0,'pre');


Because of the padding, there are no objects touching the left and top borders of bw2. Calling imclearborder on this image, then, will remove all objects touching the bottom and right borders in the original image, but it will leave objects touching the top and left borders alone.

bw3 = imclearborder(bw2);


Now let's remove the padded column on the left and padded row on the top.

bw4 = bw3(2:end,2:end);
imshow(bw4)


What's the object count per unit area now?

cc = bwconncomp(bw4);
objects_per_unit_area_2 = cc.NumObjects / (size(bw4,1) * size(bw4,2))

objects_per_unit_area_2 =

0.0013



The first method, which counted all objects in the original image, was off by about 10.5%.

abs(objects_per_unit_area_2 - objects_per_unit_area) / ...
objects_per_unit_area

ans =

0.1053



Another method described by Russ is to compute a weighted count of all objects that do not touch any edge. The weighted count compensates for the relative likelihood that an object with a given horizontal and vertical extent would touch the edge of a given field of view. The equation for the weighted count of each object is

$$C = \frac{W_x W_y}{(W_x - F_x) (W_y - F_y)}$$

where $W_x$ and $W_y$ are the dimensions of the image in the x and y directions, and $F_x$ and $F_y$ are the maximum projected dimensions of the object in those directions.
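As a quick sanity check (my own, not part of the original post): for a hypothetical object with a 20-by-24 bounding box in a 256-by-256 field of view, the correction factor works out to just under 1.2:

```matlab
% Hypothetical example: one object with a 20-by-24 bounding box
% in a 256-by-256 field of view.
Wx = 256; Wy = 256;   % field-of-view dimensions
Fx = 20;  Fy = 24;    % maximum projected object dimensions
C = (Wx * Wy) / ((Wx - Fx) * (Wy - Fy))
% C is approximately 1.1970
```

So an interior object of that size counts as slightly more than one object, compensating for the similarly sized objects discarded at the edges.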

We make use of the Image Processing Toolbox function regionprops to compute the bounding box of each object, from which we can determine $F_x$ and $F_y$.

Start by clearing all objects that touch any image border.

bw5 = imclearborder(bw);


Next, compute the bounding boxes of all the remaining objects.

s = regionprops(bw5,'BoundingBox');


For example, here's the bounding box of the fifth detected object:

s(5)

ans =

BoundingBox: [17.5000 166.5000 20 24]



I'm going to use an advanced bit of MATLAB syntax here to convert the structure array s into an ordinary P-by-4 matrix containing all the bounding boxes together. The second line of code displays the bounding boxes of the first seven objects.

bboxes = cat(1,s.BoundingBox);
bboxes(1:7,:)

ans =

9.5000   86.5000   25.0000   17.0000
10.5000   18.5000   26.0000   16.0000
17.5000   37.5000   19.0000   23.0000
17.5000  108.5000   10.0000   29.0000
17.5000  166.5000   20.0000   24.0000
23.5000   59.5000    9.0000   31.0000
28.5000  212.5000   17.0000   28.0000



$F_x$ is the third column, and $F_y$ is the fourth column.

Fx = bboxes(:,3);
Fy = bboxes(:,4);


$W_x$ and $W_y$ are computed from the size of the image.

[Wy,Wx] = size(bw);


Now compute a vector of weighted counts, one for each object that doesn't touch a border.

C = (Wx * Wy) ./ ((Wx - Fx) .* (Wy - Fy));
C(1:7)

ans =

1.1871
1.1872
1.1868
1.1736
1.1970
1.1792
1.2027



The last step is to sum the counts and divide by the total image area.

objects_per_unit_area_3 = sum(C) / (Wx*Wy)

objects_per_unit_area_3 =

0.0012



Russ comments that this last method may work better for images containing nonconvex objects that may touch image boundaries more than once.

Get the MATLAB code

Published with MATLAB® R2012b

## Don’t Photoshop it…MATLAB it! Image Effects with MATLAB (Part 4)

I'd like to welcome back guest blogger Brett Shoelson for the continuation of his series of posts on implementing image special effects in MATLAB. Brett, a contributor for the File Exchange Pick of the Week blog, has been doing image processing with MATLAB for almost 20 years now.


#### Out Standing in the Field

In the three previous entries in this guest series (part 1, part 2, part 3) I've worked my way through some approaches to creating special image effects with MATLAB. I started easy, and increased the difficulty level as I progressed. In this fourth post, I'm going to create the zebra image above. You might guess that doing so will entail segmenting the zebra--certainly the most difficult part of this problem.

I previously wrote that I can usually get a good segmentation mask working in one or more grayscale representations of a color image. While I stand by that statement, I will also say that sometimes color does provide good information with which to create a segmentation mask. In fact, I'm going to use color to segment the zebra in this image, and then use the manual masking approach (the imfreehand-mediated approach I used in the previous post) to fine-tune the mask.

#### Segmenting by Color

There are several approaches to segmenting using color information. If I wanted to create a mask of a single user-selected color, for instance, I could use impixelregion to explore colors in a region, or I could invoke impixel to click-select color samples.

URL = 'http://blogs.mathworks.com/pick/files/ZebraInField.jpg';
%Note that the |im2double| conversion conveniently scales the intensities to [0,1]
img = im2double(imread(URL));
figure, imshow(img);
impixelregion


impixelregion provides a convenient tool for exploring the RGB (or grayscale) values underneath a small rectangle; as you drag that rectangle over your image, the intensity values are updated and displayed in a separate figure. In this image, for instance, we might recognize that the grass has an RGB intensity of roughly [0.75 0.65 0.5]. By specifying a "tolerance" of 0.05 (i.e., 5 percent of the color range) we can readily create a mask of "not-zebra" by selecting all pixels that have those approximate red, green, and blue values:

targetColor = [0.75 0.65 0.5];
tolerance = 0.05;
mask = img(:,:,1) >= targetColor(1) - tolerance & ...
    img(:,:,1) <= targetColor(1) + tolerance & ...
    img(:,:,2) >= targetColor(2) - tolerance & ...
    img(:,:,2) <= targetColor(2) + tolerance & ...
    img(:,:,3) >= targetColor(3) - tolerance & ...
    img(:,:,3) <= targetColor(3) + tolerance;


Recognize here that each of those constraints on the binary variable "mask" is simply an additional logical AND that I'm applying. I could easily provide different tolerances for ranges above and below each of R, G, and B. (A GUI with sliders might facilitate that interaction!) I could also create additional masks in a similar manner and combine them, using the logical OR operation, until I achieved the desired segmentation.
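To reduce that repetition, the per-channel range tests can be wrapped in a small anonymous function (a sketch of my own, not from the original post), reusing the targetColor and tolerance values from above:

```matlab
% inRange tests one colorplane against a target value and a tolerance
inRange = @(plane, c, tol) plane >= c - tol & plane <= c + tol;
mask = inRange(img(:,:,1), targetColor(1), tolerance) & ...
       inRange(img(:,:,2), targetColor(2), tolerance) & ...
       inRange(img(:,:,3), targetColor(3), tolerance);
```

This form also makes it easy to pass a different tolerance for each channel.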

Instead of going down that path, though, I'd like to demonstrate another useful approach to color segmentation. In this approach, rather than manually selecting colors on which to base the segmentation mask, I'm going to let MATLAB do the work. The function rgb2ind quantizes an image into a user-specified number of colors. Each of those "quantum levels" can be used to create a unique mask of the image. For instance, here I quantize the zebra into 16 colors and display the binary mask that each index represents:

nColors = 16;
X = rgb2ind(img,nColors);
% (X ranges from 0 to nColors-1)
for ii = 0:nColors-1
subplot(4,4,ii+1)
imshow(ismember(X,ii));
title(sprintf('X = %d',ii));
end


Now at a glance I can construct a segmentation mask of the zebra, selecting only the indices that have a significant component within the area of interest:

mask = ismember(X,[0,2,4,5,7,10:12,14,15]);


Clearly, there's a tradeoff between including more indices to "solidify" the zebra mask, and increasing the amount of background "noise" included in the segmentation. I like this as a starting point, so I'm going to use this and start refining:

mask = bwareaopen(mask,100); %Remove small blobs


Normally, I would use imclearborder to remove the large white region at the top of the image, and then, using regionprops, determine the areas of each connected blob. The documentation for regionprops shows how one could easily use that information to eliminate all but the largest object in the resulting image. (That would presumably leave only the zebra.) But in this case, that would also eliminate all the small isolated regions around the zebra's legs, and I would like to keep those. So instead, I'm going to quickly encircle the zebra using imfreehand, and use the resulting mask to eliminate all peripheral blobs.

manualMask = imfreehand;
posns = getPosition(manualMask); % vertices of the drawn outline
% Convert the ROI object into a binary mask, and use it to discard
% all blobs outside the drawn region:
keepRegion = createMask(manualMask);
mask = mask & keepRegion;

#### Now to Use the Segmented Zebra

Yes! I like that mask, and can do a lot with it. Conveniently, the function roifilt2 allows me to operate on only a masked portion of an image. That function, though, works only on 2-D images--not on RGB images. However, when it makes sense to do so, I can modify the red, green, and blue planes independently, and reconstruct the RGB image using cat. For instance:

processImage{1} = @(x) imadjust(x,[0.05; 0.16],[1.00; 0.11], 1.20); %Red plane
processImage{2} = @(x) imadjust(x,[0.10; 0.83],[0.00; 1.00], 1.00); %Green plane
processImage{3} = @(x) imadjust(x,[0.00; 0.22],[1.00; 0.00], 1.10); %Blue plane
% Apply each adjustment only under the mask, one plane at a time:
r = roifilt2(img(:,:,1), mask, processImage{1});
g = roifilt2(img(:,:,2), mask, processImage{2});
b = roifilt2(img(:,:,3), mask, processImage{3});
enhancedImg = cat(3,r,g,b);
figure,imshow(enhancedImg)


Creating the "normal" zebra in the pastel field is a bit trickier, since I can't do a decorrelation stretch plane-by-plane. So instead, I'm going to create the "pastel effect" using decorrstretch, and then reset the values underneath the masked zebra back to their original values.

imgEnhanced = decorrstretch(img); %Operating on the original image
% Now reset the red plane to the masked value of the original image
r = imgEnhanced(:,:,1); % Get the enhanced red plane
tmp = img(:,:,1); % Get the original red plane
%Replace the enhanced red under the mask with the original red:
r(mask) = tmp(mask);
% Same for green
g = imgEnhanced(:,:,2);
tmp = img(:,:,2);
g(mask) = tmp(mask);
% And blue
b = imgEnhanced(:,:,3);
tmp = img(:,:,3);
b(mask) = tmp(mask);
% Now reconstruct the enhanced image
imgEnhanced = cat(3,r,g,b);
% And tweak it with imadjust to intensify colors
imgEnhanced = imadjust(imgEnhanced,[0.1; 0.8],[0.00; 1.00], 1.00);
%figure, imshow(imgEnhanced); title('Voila!')
%(This reproduces the zebra at the top of this post!)


#### Thanks, a Note on Image Segmentation, and a Challenge

If you are a regular reader of "Steve on Image Processing," then you'll know that I've commandeered Steve's blog for the past several weeks to present my vision of using MATLAB to create special image effects. I'd like to thank Steve for hosting this guest series; I always enjoy reading Steve's blog, and I'm very pleased to contribute to it.

In the course of this guest series, I shared my GUIs for adjusting image intensities and for trying out different morphological operations and structuring elements. I previously mentioned that segmentation is often the most difficult part of an image processing problem. So, in the spirit of the coming holidays, here's one more GUI that I'd like to share with you: SegmentTool provides an interactive environment for segmenting images. It currently includes tabs for edge detection, thresholding, Hough transforms, using regional and extended minima and maxima, and color-based segmentation. Take it for a test drive and let me know what you think, or what else you'd like to see incorporated.

This post concludes this guest series, but doesn't nearly exhaust the creative ways in which one can manipulate images using MATLAB and the Image Processing Toolbox. So I'd like to close with a challenge:

• What Effects Can You Create with MATLAB? Share your own cool image effects with us! Show us how, using only MATLAB (and MathWorks Toolboxes, of course), you can create cool image effects. I will catalog, and share on the MATLAB Central File Exchange, useful or fun approaches to creating special effects with images.

Note that it's not just the altered images that I want; please include the code (or at least, pseudo-code, with detail sufficient to allow other users to reproduce the effect). I'll gladly send some swag to anyone who shares an effect that gets included in the special effects gallery. Send your code to me at brett.shoelson@mathworks.com.

Happy MATLABbing!

All images copyright Brett Shoelson; used with permission.


Published with MATLAB® R2012b

## Don’t Photoshop it…MATLAB it! Image Effects with MATLAB (Part 3)

I'd like to welcome back guest blogger Brett Shoelson for the continuation of his series of posts on implementing image special effects in MATLAB. Brett, a contributor for the File Exchange Pick of the Week blog, has been doing image processing with MATLAB for almost 20 years now.


#### Put Me In the Zoo!

So far (post 1, post 2) in this guest series on creating special image effects with MATLAB, I've applied processes to the entire image. For instance, I created the image above by starting with the contrast-enhanced elephant from my previous post. I created the pseudocolor effect by decorrelation-stretching (using decorrstretch) the resulting image. And then I enhanced the colors a bit using imadjust.

URL = 'http://blogs.mathworks.com/pick/files/ElephantProfile.jpg';
img = imread(URL); %(the contrast enhancement from the previous post is assumed applied)
colorElephant = decorrstretch(img);
colorElephant = imadjust(colorElephant,[0.10; 0.79],[0.00; 1.00], 1.10);


(Decorrelation stretching is useful for visualizing, and sometimes analyzing, imagery by reducing inter-plane autocorrelation levels in an image. It is often used with multispectral or hyperspectral images.)

In this installment, I wanted to take an ordinary photograph of a giraffe and modify it to give the giraffe something out-of-the-ordinary. But to do so, I needed to segment the image--to create a binary mask with 1's in the locations of my regions of interest, and 0's elsewhere. There are many approaches to segmenting images--indeed, there are many approaches in the Image Processing Toolbox. So if we wanted to color the spots of a giraffe, for instance, we'd first have to find an appropriate method of segmenting the spots. (Without a doubt, getting a good segmentation is the most difficult part of many image processing problems!)

URL = 'http://blogs.mathworks.com/pick/files/GiraffeProfile.jpg';
img = imread(URL);
img = imadjust(img,[0.20; 0.80],[0.00; 0.80], 1.10);
hAx = axes; %Notice that I created a handle to the axes; I will use it later.
imshow(img);


Next, I want to segment the giraffe's spots. Usually (but not always), when I'm trying to segment an RGB image, I find I can get a good mask working with one (or sometimes multiple) grayscale representation(s) of the color image. I like to look at the individual colorplanes and at different colorspace representations of the image as well. ExploreRGB makes that exploration easy:

As we can see in this image of the ExploreRGB GUI, the giraffe's spots are fairly well segmented for us in the "saturation" plane of the HSV colorspace. So let's start there for our segmentation of the spots.

grayImg = rgb2hsv(img);
grayImg = grayImg(:,:,2);
grayImg =  imadjust(grayImg,[0.6; 1.0],[0.00; 1.00], 1.00);
spotMask = im2bw(grayImg,graythresh(grayImg));

Now, if I were happy with that as a segmentation mask, I could move on. But I only wanted the giraffe's spots--not its head or mane. I could continue to try different programmatic approaches; remember that I can combine these logical masks in any way that makes sense. But instead, I'm going to take my first cue from Photoshop and go old-school: I'm going to manually exclude the portions of the giraffe that I don't want to colorize. So how do I do that?

#### imfreehand to the Rescue!

A few years back (R2008a, I believe), we refactored our region-of-interest (ROI) tools. I want to draw a freehand region, and use it to help mask my image. So, using imfreehand, I quickly encircle the region I want to exclude. (I say "quickly," but you generally want to take your time with this step; getting a good segmentation mask is crucial to getting a good effect!)

manualMask = imfreehand;
[m,n,~] = size(grayImg);
% Convert the drawn outline into an m-by-n binary mask of the region to exclude:
posns = getPosition(manualMask);
excludeMask = poly2mask(posns(:,1),posns(:,2),m,n);

Now I can combine that with my original spot mask, with the logical combination "...AND NOT....":

spotMask  = spotMask & ~excludeMask;


In displaying the result (below left), I recognize that I trimmed the mask manually a bit tighter than I intended, so I will dilate it a bit (imdilate), and then exclude small regions (fewer than 10 connected pixels) using bwareaopen. The result is shown below on the right.

spotMask = spotMask & imdilate(~excludeMask,strel('disk',2));
spotMask = bwareaopen(spotMask,10); %Discard regions of fewer than 10 pixels


That's close to my desired mask, but I'm going to tweak it a bit more by filling holes in the spots (imfill) and by manipulating it with some morphological operations (MorphTool again!). One more application of bwareaopen (this time, keeping only very large blobs), and I have the segmentation mask looking the way I want it.

spotMask = imfill(spotMask,'holes');
spotMask = bwareaopen(spotMask,2000);

#### Now to Color the Spots

From this point, coloring the spots was reasonably straightforward: I simply labeled the spots (bwlabel) and colored the labeled regions by calling label2rgb. (Also, I specified "jet" as my colormap and used the "shuffle" option to randomly assign map colors to the spots.) Next, I set the "hold" property of the current axes (i.e., the one containing the original color image), and displayed the colored spots on top of the original image. Finally, I modified the "AlphaData" property of the colored-spot image to set its transparency to 0.25.

L = bwlabel(spotMask);
colormask = label2rgb(L,'jet','k','shuffle'); %Black background, shuffled jet colors
% Using the handle to the axes that contains the original image, make
% that axes current, and set its "hold" property on.
axes(hAx)
hold on
h = imshow(colormask); %I created a handle here, too.
colortrans = 0.25;
set(h,'AlphaData',colortrans); %Display the colored spots at 25% opacity


Now all that remains is to capture the visualization as an RGB image. I'm simply going to start with a copy of the original giraffe image, and modify it colorplane-by-colorplane, scaling each by 0.75 (i.e., 1-colortrans) and adding the scaled RGB components of the colored-spot image:

imgEnhanced = img;
for ii = 1:3
    imgEnhanced(:,:,ii) = (1-colortrans)*img(:,:,ii) + ...
        colortrans*colormask(:,:,ii);
end


#### Next Up: Out Standing in the Field

All images copyright Brett Shoelson; used with permission.


Published with MATLAB® R2012b

## Don’t Photoshop it…MATLAB it! Image Effects with MATLAB (Part 2)

I'd like to welcome back guest blogger Brett Shoelson for the continuation of his series of posts on implementing image special effects in MATLAB. Brett, a contributor for the File Exchange Pick of the Week blog, has been doing image processing with MATLAB for almost 20 years now.


#### imadjust as an Image Enhancement Tool

In my previous post in this guest series, I introduced my image adjustment GUI, and used it to enhance colors in modified versions of images of a mandrill and of two zebras. For both of those images, I operated on all colorplanes uniformly; i.e., whatever I did to the red plane, I also did to green and blue. The calling syntax for imadjust is as follows:

imgOut = imadjust(imgIn,[low_in; high_in],[low_out; high_out],gamma);

The default inputs are:

imgOut = imadjust(imgIn,[0; 1],[0; 1],1);

Different input parameters will produce different effects. In fact, imadjust should often be the starting point for simply correcting illumination issues with an image:

URL = 'http://blogs.mathworks.com/pick/files/DrinkingZebra1.jpg';
img = imread(URL);
enhanced = imadjust(img,[0.00; 0.35],[0.00; 1.00], 1.00);
subplot(1,2,1);imshow(img);title('Original');
subplot(1,2,2);imshow(enhanced);title('Enhanced');


You may recall that when I modified the image of two zebras in my previous post, I not only increased low_in, but I also reversed (and tweaked) the values for low_out and high_out:

imgEnhanced = imadjust(imgEnhanced,[0.30; 0.85],[0.90; 0.00], 0.90);

In reversing those input values, I effectively reversed the image. In fact, for a grayscale image, calling

imgOut = imadjust(imgIn,[0; 1],[1; 0],1); % Note the reversal of low_out and high_out


is equivalent to calling imgOut = imcomplement(imgIn):

img = imread('cameraman.tif');
img1 = imadjust(img,[0.00; 1.00],[1.00; 0.00], 1.00);
img2 = imcomplement(img);
assert(isequal(img1,img2))% No error is thrown!
figure;subplot(1,2,1);imshow(img);xlabel('Original image courtesy MIT');
subplot(1,2,2);imshow(img1);


Now recognize that ImadjustGUI calls imadjust behind the scenes, using the standard syntax. If you read the documentation for imadjust carefully, you will learn that the parameter inputs low_in, high_in, low_out, high_out, and gamma need not be scalars. In fact, if those parameters are specified appropriately as 1-by-3 vectors, then imadjust operates separately on the red, green, and blue colorplanes:

newmap = imadjust(map,[low_in; high_in],[low_out; high_out],gamma)
% ...transforms the colormap associated with an indexed image.
% If low_in, high_in, low_out, high_out, and gamma are scalars, then the
% same mapping applies to red, green, and blue components.
%
% Unique mappings for each color component are possible when low_in and
% high_in are both 1-by-3 vectors, low_out and high_out are both 1-by-3 vectors,
% or gamma is a 1-by-3 vector.


That works for adjusting colormaps; it also works for adjusting images. As a result, you can readily reverse individual colorplanes of an input RGB image, and in doing so, create some cool effects!
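For example, here is a minimal sketch of my own (using the 'peppers.png' image that ships with MATLAB) that reverses only the red plane while leaving green and blue untouched:

```matlab
img = imread('peppers.png');
% 1-by-3 rows: one column per colorplane (R, G, B).
% Swapping low_out and high_out in the first column reverses red only.
imgOut = imadjust(img, [0 0 0; 1 1 1], [1 0 0; 0 1 1]);
figure, imshow(imgOut)
```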

#### Andy Warhol Meets an Elephant

Andy Warhol famously created iconic images of Marilyn Monroe and other celebrities, casting them in startling, unexpected colors, and sometimes tiling them to create memorable effects. We can easily produce similar effects by reversing and saturating individual colorplanes of RGB images. (I wrote ImadjustGUI to facilitate, interactively, those plane-by-plane intensity adjustments.)

#### Reading and Pre-Processing the Elephant

First, of course, we read and display the elephant:

URL = 'http://blogs.mathworks.com/pick/files/ElephantProfile.jpg';
img = imread(URL);

He's a wrinkly old fellow (below left). I'd like to bring out those wrinkles by enhancing contrast in the image. There are a few ways to do that, but I learned about my favorite way by reading through the "Gray-Scale Morphology" section of DIPUM, 2nd Ed. Specifically, the authors of this (most excellent) book indicate (on page 529) that one can combine tophat and bottomhat filters to enhance contrast. (I built the appropriate combination of those filters behind the "Contrast Enhancement" button of MorphTool.) So, using MorphTool-generated code:

SE = strel('disk',18);
% Add the tophat (bright details) and subtract the bottomhat (dark details):
imgEnhanced = imsubtract(imadd(img,imtophat(img,SE)),imbothat(img,SE));

Now, operating with imadjust plane by plane, reversing the red and blue planes, and modifying the gamma mapping, I can easily find my way to several interesting effects. For instance:

imgEnhanced1 = imadjust(imgEnhanced,[0.00 0.00 0.00; 1.00 0.38 0.40],[1.00 0.00 0.70; 0.20 1.00 0.40], [4.90 4.00 1.70]);
imgEnhanced2 = imadjust(imgEnhanced,[0.13 0.00 0.30; 0.75 1.00 1.00],[0.00 1.00 0.50; 1.00 0.00 0.27], [5.90 0.80 4.10]);


So, two more of those interesting effects, and then we can compose the four-elephants image above:

imgEnhanced3 = imadjust(img,[0.20 0.00 0.09; 0.83 1.00 0.52],[0.00 0.00 1.00; 1.00 1.00 0.00], [1.10 2.70 1.00]);
imgEnhanced4 = imadjust(img,[0.20 0.00 0.00; 0.70 1.00 1.00],[1.00 0.90 0.00; 0.00 0.90 1.00], [1.30 1.00 1.00]);

I also wanted to flip two of those enhanced images. fliplr makes it easy to flip a 2-dimensional matrix, but it doesn't work on RGB images. So I flipped them plane-by-plane, and concatenated (cat) the flipped planes in the third (z-) dimension into new RGB images:

r = fliplr(imgEnhanced2(:,:,1));
g = fliplr(imgEnhanced2(:,:,2));
b = fliplr(imgEnhanced2(:,:,3));
imgEnhanced2 = cat(3,r,g,b);
% Flip imgEnhanced4 the same way:
r = fliplr(imgEnhanced4(:,:,1));
g = fliplr(imgEnhanced4(:,:,2));
b = fliplr(imgEnhanced4(:,:,3));
imgEnhanced4 = cat(3,r,g,b);
CompositeImage = [imgEnhanced1 imgEnhanced2; imgEnhanced3 imgEnhanced4];


#### Next Up: Put Me In the Zoo!

All images except "cameraman" copyright Brett Shoelson; used with permission.


Published with MATLAB® R2012b

## Don’t Photoshop it … MATLAB it! Image Effects with MATLAB (Part 1)

I'd like to introduce guest blogger Brett Shoelson, who has prepared a series of posts on implementing image special effects in MATLAB. Brett, a contributor for the File Exchange Pick of the Week blog, has been doing image processing with MATLAB for almost 20 years now.


#### A Milestone, and a New Camera

My wife and kids and I recently returned from celebrating a milestone birthday--I'll not say which milestone!--in Namibia, where we spent a couple of weeks on Safari in Etosha National Park. The morning we were scheduled to leave the US, I began kicking myself: I simply couldn't make this trip without having a better camera. (I had been passionate about photography many years ago, but have more recently satisfied myself with snapshots from a small digital point-and-shoot camera.) So hours before our intercontinental flight, I found myself camera-shopping in a big-box electronics store, and after some deliberation, coming home with a new Nikon D5100 and a 55-300mm zoom lens.

Our trip was truly memorable--due in no small part to the 1200 photographs I took in the course of the vacation.

Yes, 1200 pictures. That's a lot of pictures--that I knew no one, including myself, would ever want to wade through. So while I was snapping away, capturing images just like the ones the cameras clicking to my left and right were capturing, I hatched a plan: I would create and share a small gallery of safari images set apart by the special effects I was designing for them in my head.

#### A Challenge: Use MATLAB to Create a Gallery of Safari Effects

Special effects--that's Photoshop's wheelhouse, right? In fact, I have a rarely-used license for Adobe's flagship product, and it made sense that this project would have me dusting it off. But here's the rub: I am a dyed-in-the-wool MATLABber--one who spends a lot of time helping customers tackle their image processing challenges using MATLAB and the Image Processing Toolbox. Could I implement these effects in MATLAB? What challenges would I face? And how could I overcome them?

It always rankles me just a bit when I hear people--_knowing_ my penchant for processing images with MATLAB--ask me to "Photoshop" an image for them. No, I won't Photoshop it for you. But I will MATLAB it for you! So, new goal: create a gallery of special effects for my safari photos using only MATLAB (and Toolboxes, of course!), and blog about the experience. And before I get a flood of emails, let me point out that I know that this is not what MATLAB is about, and that it makes more sense to do artistic image manipulations using a special-effects-oriented tool. But I liked the challenge that doing it in MATLAB afforded. And I liked the opportunity to create (and share) more broadly useful tools that helped me achieve my goals.

This image was relatively easy to create. (That's what I mean by getting the effect "for free.") I've used entropy filtering several times in the past; when I've needed to quantify the local orderedness or randomness of an image, I think of entropy filtering. The Toolbox function entropyfilt makes doing so simple. But I've also noticed that when you apply the function to an RGB image, sometimes the results are visually arresting. For example, using the mandrill image that ships with MATLAB, we can easily produce an interesting effect:

load mandrill
img = ind2rgb(X,map);
subplot(1,2,1);imshow(img);
enhanced = entropyfilt(img,getnhood(strel('disk',1)));
enhanced = enhanced/max(enhanced(:));
subplot(1,2,2);imshow(enhanced);


Beyond the simple application of entropy filtering to an RGB image (and division by the maximum intensity of the image), I should address two other important aspects of that code block. First, notice that I specified a neighborhood using a disk-shaped structuring element (strel) of radius 1. And after the division by the maximum intensity, I modified the intensity mapping to enhance colors in the image using the imadjust function. So how did I select that structuring element? And how could I improve the imadjust-mediated enhancement using non-default input parameters?

#### Two GUI Tools to Manage the Manipulations

To answer the first question, allow me to chat briefly about MorphTool, an interactive GUI I shared on the MATLAB Central File Exchange. MorphTool provides an interface for trying out different combinations of morphological operators and different shapes and sizes of structuring elements. So after I read in the image, I called it up in MorphTool, selected "entropy filtering," and changed the structuring element until I got the results I was after:

To answer the second question, I should point out that I also uploaded a file to the File Exchange many years ago (long before I was a MathWorker) that allowed interactive manipulation of the inputs to imadjust. I recently took that down--it was old and poorly implemented--and replaced it with a new GUI that is much more robust, better implemented, and convertible to an App.

So, quite simply, I loaded the morphologically altered images of the mandrill and the zebras into ImadjustGUI and started playing with sliders until I got the desired results. Then I pressed the "Export" button and got auto-generated code that reproduces those results:

To create the stylized image of the two zebras above:

URL = 'http://blogs.mathworks.com/pick/files/TwoZebras.jpg';
img = imread(URL);
imgEnhanced = entropyfilt(img,getnhood(strel('disk',4)));
imgEnhanced = imgEnhanced/max(imgEnhanced(:));
imgEnhanced = imadjust(imgEnhanced,[0.30; 0.85],[0.90; 0.00], 0.90);
imshow(imgEnhanced)


Steve was kind enough to allow me to share my experiences in his blog. In my next guest post, I'll step up the complexity a bit...working my way up to the zebra in the field, at the top of this post!

#### Next Up: Andy Warhol Meets an Elephant

All images except "mandrill" copyright Brett Shoelson; used with permission.


Published with MATLAB® R2012b

## Computer vision and image processing in R2012b

A lot happened in R2012b for products related to image processing.


#### Computer Vision System Toolbox

The Computer Vision System Toolbox added a Kalman filter system object and a Hungarian assignment algorithm function, both for object tracking. The insertObjectAnnotation function is also useful for object tracking. The vision.PointTracker system object tracks points using the KLT feature tracker algorithm.

The vision.PeopleDetector system object uses HOG features and a trained SVM classifier to find unoccluded people in upright positions.

showMatchedFeatures is a nice visualization function to examine corresponding feature matches between a pair of images.

Several new examples and tutorials have been added to the doc, such as Motion-Based Multiple Object Tracking.

Side note: the office building atrium shown in the picture above is being torn up as I write this. I can't wait to see what the new design looks like.

For more information, see the R2012b Computer Vision System Toolbox Release Notes.

#### Image Processing Toolbox

The Image Processing Toolbox added several new functions in the R2012b release, including:

• imhistmatch for histogram matching
• multithresh and imquantize for multilevel thresholding
• bwlookup, a replacement for applylut that offers MATLAB Coder support.
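The two new thresholding functions are designed to be used together; here is a minimal sketch using a demo image that ships with the toolbox:

```matlab
% Sketch: segment an image into three levels with two automatic thresholds.
I = imread('coins.png');        % grayscale demo image
thresh = multithresh(I, 2);     % two threshold levels
seg = imquantize(I, thresh);    % label each pixel 1, 2, or 3
imshow(label2rgb(seg))
```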

Several functions run faster now, including:

• histeq
• imrotate
• intlut

Image gradient computations were available in the toolbox before, but they were somewhat hidden (as optional output arguments from the edge function). We hope they will be much more discoverable now. We thought most people computing the image gradient are more interested in the gradient magnitude, so that's what imgradient computes. To get the horizontal and vertical gradient components, call imgradientxy instead.
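A minimal sketch of the two gradient functions, again using a shipping demo image:

```matlab
% Sketch: gradient magnitude/direction vs. horizontal/vertical components.
I = im2double(imread('coins.png'));
[Gmag, Gdir] = imgradient(I);    % magnitude and direction (default Sobel)
[Gx, Gy] = imgradientxy(I);      % horizontal and vertical components
imshowpair(Gmag, hypot(Gx, Gy), 'montage')   % the two magnitudes agree
```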

Histogram matching is a useful way to view a series of images so that they are all adjusted to have roughly the same contrast and brightness.
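A quick sketch of the idea, manufacturing a second image with altered contrast from a toolbox demo image:

```matlab
% Sketch: match an image's histogram to a reference image's.
A = imread('pout.tif');             % reference image (ships with the toolbox)
B = imadjust(A, [0 1], [0.3 1]);    % the same scene with altered contrast
Bm = imhistmatch(B, A);             % pull B's histogram back toward A's
imshowpair(A, Bm, 'montage')
```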

If you have a MATLAB Coder license, you'll find that you can now generate C code for calls to bwmorph and the new bwlookup.

#### Image Acquisition Toolbox

The Image Acquisition Toolbox added a new adaptor for Point Grey cameras, including FireWire, GigE Vision, USB 2, and Bumblebee 2.

It has also added support for Matrox Orion HD hardware.

GigE Vision devices are now easier to install and configure.

Get the MATLAB code

Published with MATLAB® R2012b

## MATLAB R2012b

A month ago, MathWorks shipped its annual September release, R2012b. (MathWorks follows a twice-yearly release schedule for the entire product line.) Several of my fellow bloggers have already described various aspects of the new release. Today I'll take a look at MATLAB, following my typical practice of highlighting a few changes to MATLAB that are particularly interesting to me.

### Contents

#### On the Mac

I became a Mac owner for the first time this summer, so I've been learning more about Macs and about MATLAB on the Mac for the past few months. I was interested to see some improvements in R2012b for the Mac, such as support for full-screen view mode.

#### Faster Math

I work on the same hallway as the MATLAB Math team, so I get to eavesdrop on what they're doing. As usual, they've sped up and added multithreading capability to more MATLAB math functions, including the special functions airy, psi, and several Bessel functions. Keep it up, team!

#### Really Big TIFF Files

The original TIFF image file format was limited to a file size of 4 GB or less. A variant called BigTIFF was designed to break through this limit. The function imread can now read BigTIFF files. Writing BigTIFF files is also possible, although the process is more complicated. You'll need to use the Tiff class.
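Here is a minimal sketch of writing a BigTIFF file with the Tiff class; the filename is arbitrary, and the tag list below is roughly the minimum for a simple grayscale image:

```matlab
% Sketch: write a grayscale uint8 image into a BigTIFF file.
img = uint8(magic(256));
t = Tiff('example_big.tif', 'w8');   % 'w8' creates a BigTIFF container
t.setTag('ImageLength', size(img,1));
t.setTag('ImageWidth',  size(img,2));
t.setTag('Photometric', Tiff.Photometric.MinIsBlack);
t.setTag('BitsPerSample', 8);
t.setTag('SamplesPerPixel', 1);
t.setTag('PlanarConfiguration', Tiff.PlanarConfiguration.Chunky);
t.write(img);
t.close();
```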

#### Multiplatform Support for Excel files

Several release cycles ago, we started working on improving Excel file support on platforms other than Windows. This release continues that trend, with several multiplatform Excel file enhancements.

#### Discoverability of Publish

My friend Loren Shure regularly travels all over the place giving talks about MATLAB. After most of her talks, she tells this tale: "Hey, guess what new MATLAB feature the users in name-your-city were most excited about? Publish!" And then we all laugh kind of sadly, because the Publish feature was added to MATLAB eight years ago!

This story illustrates one of the biggest reasons why MATLAB looks a lot different in R2012b. The organization and presentation of capabilities in the new "Toolstrip" at the top of MATLAB is intended to make it easier for MATLAB users to find, learn, and use important MATLAB capabilities. For example, here's a screen clip of the PUBLISH tab on the Toolstrip:

#### Refining Document Searches

Finally, I wanted to mention the ability to refine documentation searches in several different ways. If I search for "connected components", the search results page looks like this:

From this point, I can refine my search. Am I interested in results from the Image Processing Toolbox, the Bioinformatics Toolbox, or the Computer Vision System Toolbox? Am I doing sequence analysis or image analysis? Do I want to see function reference pages or examples?

Get the MATLAB code

Published with MATLAB® R2012b

## Image processing in Orlando next week (ICIP 2012)

I'm flying down to Orlando, FL on Saturday afternoon for the IEEE International Conference on Image Processing (ICIP 2012). I'll be doing two workshops there on Sunday and Monday of next week.

The first one is MATLAB Techniques for Image Processing. This will be on Sunday during the tutorial portion of the program. It's from 14:00 to 17:00. I hope that even experienced MATLAB users and image processing experts will pick up some useful tips in this session.

The second one is Take Control of Your Code: Software Development Tools and Techniques for Engineers and Scientists. This one is scheduled for Monday evening, 18:30 to 20:00. If you are not a professional software developer, but you have to write (and maintain) a lot of code to get your engineering job or research done, I hope to help you with some things I've learned over the years as a MathWorks software developer.

I believe that space is still available for both of these sessions. The Take Control of Your Code talk was filled earlier in the summer, but late last week it was moved to a larger room.

Spandan Tiwari, who wrote the circle-finding blog post earlier this month, is coming too. If you are going to ICIP, we'd love to talk with you. We'll be the ones in the MathWorks shirts.

## Detecting Circular Objects in Images

Today's blog post was written by Spandan Tiwari, a developer on the Image Processing Toolbox team. Thanks, Spandan!

This example shows how you can use imfindcircles to automatically detect circles or circular objects in an image. It also shows how you can use viscircles to visualize the detected circles.

I was looking for an interesting image to show the use of imfindcircles when my fellow MathWorker Ashish reminded me of the delicious M&M image from Steve's earlier post: What color is green?

url = 'http://blogs.mathworks.com/images/steve/2010/mms.jpg';
rgb = imread(url);
imshow(rgb)


Besides having plenty of sweet circles to detect, there are a few interesting things going on in this image from a circle detection point-of-view:

1. There are M&Ms of different colors, which have different contrasts with respect to the background. On one end, the black ones clearly stand out on this background. On the other end, the yellow M&Ms do not contrast well with the background.
2. Notice how several M&Ms are close together and almost touching each other. Touching and overlapping object boundaries are usually a challenging scenario for object detection.

So let's see how we can use imfindcircles to find the M&Ms.

First we need to specify a radius range to search for the circles. A quick way to find the appropriate radius range is to use the interactive tool imdistline to get an approximate estimate of the radii of various objects.

d = imdistline;


imdistline creates a draggable tool that can be moved to fit across an M&M; we can read off the numbers to get an idea of the radius. Most M&Ms have radii in the range of 16-19 pixels. I will use a slightly larger range of 15-20 pixels just to be sure. But first, let's remove the imdistline tool.

delete(d);


So, let's call imfindcircles on this image with the search radius of [15 20] pixels and see what we get. Before we do that, it is a good practice to ask whether the objects are brighter or darker than the background. This is needed because, by default, imfindcircles finds circular objects that are brighter than the background. In scenarios where the objects of interest are darker than the background, we need to set the parameter 'ObjectPolarity' to 'dark'. To answer whether the M&Ms in our image are brighter or darker than the background, let's look at the grayscale version of the image.

gray_image = rgb2gray(rgb);
imshow(gray_image);


We see that the background is quite bright and most of the M&Ms are probably darker than the background. So we will call imfindcircles with the parameter 'ObjectPolarity' set to 'dark'.

[centers, radii] = imfindcircles(rgb,[15 20],'ObjectPolarity','dark')

centers =

     []


radii =

     []



Well, the outputs centers and radii are empty, which means that no circles were found. This happens frequently because imfindcircles is a circle detector, and, like most detectors, it has an internal detection threshold that determines its sensitivity. In simple terms, the detector's confidence in a particular (circle) detection has to be greater than a certain level before it is considered a valid detection. imfindcircles has a parameter 'Sensitivity' which can be used to control this internal threshold, and consequently, the sensitivity of the algorithm. This is similar to the sensitivity control on the motion detectors used in home security systems. Coming back to our M&M image, it is possible that at the default sensitivity level all the circle detections fall below the internal threshold, which is why we don't see any detections. 'Sensitivity', which is a number in the range [0 1], is set to 0.85 by default. So let's try increasing 'Sensitivity' to, say, 0.9.

[centers, radii] = imfindcircles(rgb,[15 20],'ObjectPolarity','dark','Sensitivity',0.9)

centers =

97.2101  108.7405
349.1440   44.8708
430.4268  243.4405
368.8914  210.2235
167.4823   48.9834
256.0455  280.1922
356.3265   86.3372
191.0240  357.9989
305.4986  190.8765
210.4295  207.9475
121.3942  141.0331
396.4924  184.1435
444.3143  299.8000
363.3456  247.3126
461.6538  208.0843


radii =

   18.5272
18.1409
18.2202
18.0592
18.8884
17.7121
18.6441
17.7396
17.7436
18.2228
17.9140
18.1599
17.8539
18.5682
17.8709



OK, so now we found some circles - 15 to be precise. centers contains the locations of circle centers and radii contains the estimated radii of those circles.

We can use function viscircles, which was also shipped in R2012a, to draw these circles on the image. The variables centers and radii can be passed directly to viscircles.

imshow(rgb);
h = viscircles(centers,radii);



The circle centers seem correctly positioned and their corresponding radii seem to match well to the actual M&Ms. But still we are missing quite a few of them. What's up with that?

Let's try to increase 'Sensitivity' even more, to 0.95, and see what happens.

[centers, radii] = imfindcircles(rgb,[15 20],'ObjectPolarity','dark','Sensitivity',0.95);
size(centers)

ans =

34     2



So increasing 'Sensitivity' gets us 34 circles. Let's plot these circles on the image again.

delete(h);
h = viscircles(centers,radii);


Much better.

Now, under the hood, imfindcircles has two different methods for finding circles. So far we have been using the default method, which is the phase coding method. There's another method, popularly called the two-stage method, that is available in imfindcircles. Let's try to use the two-stage method and see what we get.

[centers, radii, metric] = imfindcircles(rgb,[15 20], ...
'ObjectPolarity','dark','Sensitivity',0.95,'Method','twostage');

delete(h);
h = viscircles(centers,radii);



OK, great! The two-stage method detects more circles at the same 'Sensitivity' of 0.95.

In general, these two methods are complementary in that they have different strengths. In my experience, the phase coding method is typically faster and slightly more robust to noise than the two-stage method. But it can also need higher 'Sensitivity' levels to get the same number of detections as the two-stage method, as we saw here.

Coming back to our image, it is curious that we are not picking up the yellow M&Ms. Hmmm, the yellow M&Ms do not have a strong contrast with the background. In fact, they seem to have intensities very similar to the background's. Is it possible that the yellow M&Ms are not really 'darker' than the background, as we have been assuming? Let's check by looking at the grayscale version of this image again.

imshow(gray_image);


Indeed! The yellow M&Ms are almost the same intensity as the background, maybe even brighter. This suggests that we should change 'ObjectPolarity' to 'bright' to detect the yellow ones.

[centersBright, radiiBright] = imfindcircles(rgb,[15 20],'ObjectPolarity','bright','Sensitivity',0.95);


And let's draw the bright circles in a different color, say blue, by changing the 'EdgeColor' parameter in viscircles.

imshow(rgb);
hBright = viscircles(centersBright,radiiBright,'EdgeColor','b');


OK, so we got one of them. That's a start! These yellow ones are hard to find because they don't stand out as well as the others on this background. Let me introduce another parameter which might help us here - 'EdgeThreshold'. To find circles, imfindcircles uses only the edge pixels in the image. These edge pixels are essentially pixels with high gradient values. The 'EdgeThreshold' parameter controls how high the gradient value at a pixel has to be before it is considered an edge pixel and included in the computation. A high value (closer to 1) of this parameter will allow only the strong edges (higher gradient values) to be included, whereas a low value (closer to 0) is more permissive and includes even the weaker edges (lower gradient values). In the case of the yellow M&Ms, since the contrast is low, we expect the boundary pixels (on the circumference of the M&M) to have low gradient values. So let's lower the 'EdgeThreshold' parameter to ensure that they are included in the computation.

[centersBright, radiiBright, metricBright] = imfindcircles(rgb,[15 20], ...
'ObjectPolarity','bright','Sensitivity',0.95,'EdgeThreshold',0.1);

delete(hBright);
hBright = viscircles(centersBright,radiiBright,'EdgeColor','b');


All right! We are now picking up most of the yellow ones, and some green ones too. Let's also draw the other M&Ms, which we found earlier, in red.

h = viscircles(centers,radii);


We are still missing a few of them, such as the yellow one in the top left corner. I am sure we can adjust the parameters to pick those up as well. However, at some point on our journey to find all the M&Ms, we will see some false circles showing up (M&Ms where there are none!). This highlights that there is always a trade-off between how many true circles you can find (detection rate) and how many false circles you get with them (false alarm rate). The two go hand in hand, i.e., getting more of the actual circles means a greater likelihood of getting false circles.

So, now that we have most of our circles let's have some fun. Let's try and draw these M&Ms in their own respective colors.

First let's collect all our circle centers and radii together.

centers = round([centers; centersBright]);
radii = [radii; radiiBright];


Also, let's smooth our image to remove some of the lighting and writing artifacts (the 'm's) from our M&Ms.

filt_rgb = imfilter(rgb,ones(5)/25);


Now we will extract the colors at the centers of the circles. For this, I first make a list of all the elements of the array that I want to get. Remember, the center locations are stored in centers. But I want to get the color value at these locations, which means that I have to extract the values from all three planes (along the third dimension) of this RGB image at those center locations.

To give an example, let's say we have two circles with centers example_centers = [10 20 ; 40 50 ]

So to get the color values we need to get the values from the following elements of the matrix. Here each row shows the element location in the 3-dimensional image matrix.

locs = [10 20 1 ;
40 50 1 ;
10 20 2 ;
40 50 2 ;
10 20 3 ;
40 50 3 ]


There are many different ways you can do this in MATLAB. I like to do it using repmat and a simple trick using kron, which is the function for computing the Kronecker product of two matrices.

example_centers = [10 20; 40 50];
locs = [repmat(example_centers,3,1) ...
kron((1:3)',ones(size(example_centers,1),1))]

locs =

10    20     1
40    50     1
10    20     2
40    50     2
10    20     3
40    50     3



Similarly we can make a list of all the elements we need to extract for all the circle centers in centers. Note that the first and second columns of centers have the column and row location of the circle centers, respectively. Since we want the first column in 'locs' to be the row location we switch the columns of centers using fliplr.

locs = [repmat(fliplr(centers),3,1) kron((1:3)',ones(size(centers,1),1))];


Next, we convert these locations to linear indices using sub2ind.

center_idx = sub2ind(size(rgb),locs(:,1),locs(:,2),locs(:,3));


Finally we extract these elements from the smoothed M&M image.

cent_color = filt_rgb(center_idx);


Since 'cent_color' is a vector, we reshape it to get it in 3-column format where the three columns correspond to R, G, and B colors for the circle centers.

cent_color = reshape(cent_color,size(centers,1),3);


Now that we have the colors corresponding to each circle, we can plot these circles with the right colors. Remember to normalize the colors between [0-1] for plotting.

imshow(rgb)

for i = 1:size(centers,1)
    % Draw each circle in its own center color, normalized to [0 1]
    viscircles(centers(i,:), radii(i), 'EdgeColor', double(cent_color(i,:))/255);
end


That looks about right. If you look closely, some circles are not exactly the same color as the M&Ms. This is because there is variation in the color even within an M&M. This variation can be seen on a closer look, as shown in Steve's post What color is green?.

Now that I have the circle colors extracted, what else can I do? Maybe I can simulate an M&M image of my own. Hmmm, what do I need to do for that? I see broadly two steps. The first step is to paint the M&Ms with the right colors at the right locations. The second step is to fill in the background color.

For the first step of painting the M&Ms, we will use a simple trick from Linear Systems theory. Linear systems theory tells us that when we convolve a function with a shifted impulse (Dirac delta) function, we get the same function shifted to the location of the impulse, i.e. $f(t)*d(t-T) = f(t-T)$, where $f$ is the function, $d$ is the impulse function, and $*$ denotes convolution. Essentially, convolving a function with an impulse creates a copy of the function centered at the impulse location.
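We can see this identity at work on a tiny 1-D example:

```matlab
% Convolving with a shifted impulse copies the function to the impulse location.
f = [1 2 3];                % a small "function"
d = [0 0 0 1 0 0 0];        % impulse shifted to position 4
conv(d, f)                  % returns [0 0 0 1 2 3 0 0 0]
```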

The first thing we need to do is create a blank canvas similar to our original image.

cartoon_rgb = zeros(size(rgb),class(rgb));


Next we fill in the right color values at the center locations for all the M&Ms in the image.

cartoon_rgb(center_idx) = cent_color(:);
imshow(cartoon_rgb)


These individual pixels of different colors are like digital 'impulses' (single-pixel wide, with height corresponding to a color). Next we need to make a 'function', which in our case is just a circular mask (because the M&Ms are circular), to convolve with these impulses and make a copy of it at each of the impulse locations. For simplicity, I use a circle of radius 17 pixels for all the M&Ms even though they have slightly different radii.

radius = 17;
% Build a binary circular mask of that radius (one plausible construction;
% the listing above did not show this step).
[x,y] = meshgrid(-radius:radius);
circ_mask = double(x.^2 + y.^2 <= radius^2);


Finally we convolve the image with the colored impulses with the circular mask to paint the M&Ms at the right locations.

cartoon_rgb = imfilter(cartoon_rgb,circ_mask,'conv');
imshow(cartoon_rgb)


A word of caution about this trick. You might have noticed we are working with multiple impulses simultaneously, i.e., we are not convolving the circular mask with just one impulse but with a string of impulses. We are getting away with that because our impulses (circles) are well separated (more than 2 x radius = 34 pixels apart) and the circles don't overlap. If that were not the case, we would see unwanted colors in the overlapping parts of the circles due to superposition of the masks.
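A tiny 1-D example shows what happens when the impulses are too close together:

```matlab
% Two impulses only 2 samples apart, closer than the mask width of 3:
f = [1 1 1];
d = [1 0 1 0 0];
conv(d, f)        % returns [1 1 2 1 1 0 0]; the copies add where they overlap
```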

So now we have painted the colored M&Ms. All we have to do now is to paint the background. I just pick an RGB color to match the background.

bg_color = [235; 218; 175];


Next we need to find all the pixels that are part of the background. We do this using the same convolution trick as before. We make an image with the 'impulses'.

bw = false(size(rgb,1),size(rgb,2));
bw(sub2ind(size(bw),centers(:,2),centers(:,1))) = true;


Then we convolve this image with the circular mask to paint the circles. Then we complement the image to highlight the background (in white) and record the locations (linear indices) of the pixels belonging to the background.

bw = imfilter(bw,circ_mask);
bw = imcomplement(bw);
imshow(bw)
bg_idx_gray = find(bw);


Now we will fill these pixel locations with the background color. Since this is a color image, we need to do some work to fill in the colors. We do this in a way similar to what we did to extract the colors at the circle center locations. First we find the linear indices of all the elements in the color image matrix that need to be filled.

bg_idx_color = [bg_idx_gray; numel(bw)+bg_idx_gray; 2*numel(bw)+bg_idx_gray];


Next we make a vector that has the color values that go into those elements. We do this by repeating the color values in bg_color once for each pixel location.

bg_fill_values = kron(bg_color,ones(size(bg_idx_gray)));


Finally we fill the color values in the background locations.

cartoon_rgb(bg_idx_color) = bg_fill_values;


That's it. We have made our simulated M&M image and now let's see how it has come out. To view this, I will use a new function, imshowpair, introduced in R2012a. It has some great options for viewing and comparing two images. I will use the 'montage' option to view the original and simulated M&M images side-by-side. There are several other cool options in imshowpair, such as 'blend', which is my favorite. You should check it out.

So here you go: spot the difference!

imshowpair(rgb,cartoon_rgb,'montage');
title('Spot the difference!');
axis off;


Happy circle hunting!

Get the MATLAB code

Published with MATLAB® 7.14

## Wrapping up the analysis of Cody solutions

Today I'm wrapping up my analysis of the results of my Cody problem on eliminating unnecessary polygon vertices.

### Contents

#### A solution by Cris

Here is Cris' first correct solution.

function P = simplify_polygon_108897_luengo(P)
N = size(P,1);
if N<=2
return
end
% assume last point is same as first one -- remove
%    in production code I'd run through the whole list and remove
%    duplicate points.
P(end,:) = [];
N = N-1
% I use 0-based indexing to use MOD more easily, always index with ii+1!
ii = 0; % The point under consideration
first = NaN; % The first point we did not remove
while N>=3 && ii~=first
p1 = P(mod(ii-1,N)+1,:);
p2 = P(ii+1,:);
p3 = P(mod(ii+1,N)+1,:);
v1 = p2-p1;
v2 = p3-p2;
disp([v1;v2])
if ( v1(1)*v2(2) == v1(2)*v2(1) ) && any(v1.*v2>0)
% the two vectors are in the same direction: delete the middle point
P(ii+1,:) = [];
N = N-1;
ii = mod(ii,N); % in case this was the last point
else
% keep the point & move on
if isnan(first)
first = ii;
end
ii = mod(ii+1,N);
end
end
% add first point to end again
P(end+1,:) = P(1,:);
end


This code walks forward (in the while loop) through the polygon vertices. Each iteration of the while loop checks one vertex to see if it lies on the line segment between the previous vertex and the next one. If so, the code deletes that vertex.

When I first formulated this problem, I wasn't certain whether a fully vectorized solution was possible. I thought it might be necessary to remove vertices one at a time, as Cris does here.

Cris uses a few different techniques that I have often used in my own code.

#### Handling Edge Cases

I prefer to write code that handles edge cases naturally, without conditional logic. However, when edge cases can't be handled smoothly by the main algorithm without adding conditional logic, I often handle them with special-case code and an early return at the beginning.

N = size(P,1);
if N<=2
return
end


#### Preprocess and Postprocess the Vertex List

For his algorithm, Cris didn't want the first polygon vertex to appear again at the end of the list. So he removed it at the beginning and added it back at the end.

P(end,:) = [];

     -SNIP-
% add first point to end again
P(end+1,:) = P(1,:);


#### Use the mod Function for Circular Indexing

Suppose you have an index into a vector, and you want the "next" element in the vector, where "next" means go back to the beginning if you are at the end. One could use conditional logic to achieve this:

if i < length(v)
i = i + 1;
else
i = 1;
end


Or you could use the mod (modulo) function. You just have to account for 1-based indexing in MATLAB.

i = mod(i, length(v)) + 1;


The MATLAB function circshift is implemented using exactly this idea. Here's the key part of circshift:

% Loop through each dimension of the input matrix to
% calculate shifted indices
for k = 1:numDimsA
m      = sizeA(k);
idx{k} = mod((0:m-1)-p(k), m)+1;
end

% Perform the actual conversion by indexing
% into the input matrix
b = a(idx{:});


#### Using the MATLAB File Compare Tool

Have you ever used the MATLAB File Comparison Tool? It can be very useful.

I was interested to see how the code changed when Cris submitted his second correct solution. Here's what the File Comparison Tool (available from the Current Folder Browser) showed:

I don't see any algorithm changes. It looks like Cris has started to tweak his solution to make it smaller (as measured by the Cody solution scoring system).

#### A Solution by Sven

function Ps = simplify_polygon(P)
% Handle degenerate cases!
if size(P,1)<3
Ps = P;
return;
end
% Normalise the difference between successive points
pdiff = diff(P([end-1 1:end],:)); % Append the 2nd-to-last
npdiff = bsxfun(@rdivide, pdiff, sqrt(sum(pdiff.^2,2)));
% Any successive normalised differences that don't change can be discarded
keepIdxs = find(any(diff(npdiff),2));
% Append the first/last points (the question states that P(1,:)==P(end,:))
Ps = P(keepIdxs([1:end 1]),:);
end


Sven uses diff and bsxfun to get a vectorized solution. He also handles edge cases up front and postprocesses the vertex list.

#### A Solution by Richard

Richard got into the game early with Cris and Sven. Here is his first correct solution.

function Ps = simplify_polygon(P)
if ismember(size(P, 1), [0 1])
Ps = P;
return
end

   dP = diff(P);
dP = bsxfun(@times, 1./hypot(dP(:, 1), dP(:, 2)), dP);
idx = find(abs(1 - dot(dP, circshift(dP, 1), 2)) > 1e-10);
Ps = P([idx; idx(1)], :);
end

This solution is somewhat similar to Sven's. Note the use of hypot and circshift.

#### Tag Team Effort

Sven and Richard engaged in a tag team effort to make their solutions smaller and smaller. Their creative burst certainly contributed to one of the highest "average solutions per solver" seen on Cody:

Here's one of the smallest solutions (at least, it's one of the smallest that doesn't use a Neural Network Toolbox function):

function P = simplify_polygon(P)
try
diag(sum(abs(diff(P)),2)) \ diff(P);
P(any(ans - circshift(ans,1),2),:);
P = vertcat(ans, ans(1,:));
end


No, I wouldn't write production code like this. But it's clever, and understanding how it works can teach us useful things about MATLAB.

#### Output Argument = Input Argument

Did you know that you can use the same variable name in both the input argument list and the output argument list of a function? In other words, you can do this:

  function P = simplify_polygon(P)

The effect is to initialize the output argument value to the value of the input argument. It's equivalent to this:

  function P_out = simplify_polygon(P_in)
P_out = P_in;

My favorite example of using this technique shows up as a solution to another of my Cody problems, swapping two values.

  function [b,a] = swap(a,b)

That's the entire function!
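Assuming the swap function above is on the path, calling it looks like this:

```matlab
[x, y] = swap(1, 2);
% x is now 2 and y is now 1: the outputs are simply the inputs
% bound in reversed order.
```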

Richard and Sven use this technique in combination with a try block to avoid having to explicitly write code to handle the special cases.

#### Automatic Output Variable, ans

When a function returns an output argument and you don't assign it to a variable, MATLAB automatically creates a variable called ans to hold the result. You have probably seen this behavior in the Command Window:

>> magic(3)
ans =
     8     1     6
3     5     7
4     9     2
>> sum(ans)
ans =
    15    15    15

But this behavior also happens when you are running functions. The behavior is exploited in these two lines:

   diag(sum(abs(diff(P)),2)) \ diff(P);
P(any(ans - circshift(ans,1),2),:);

The call to diag produces an output argument that isn't stored explicitly in a variable. The next line can get that result by using the automatic variable ans.

#### The Importance of Formulating Your Problem Carefully

This Cody problem was originally inspired by the problem of postprocessing the output of the Image Processing Toolbox function bwboundaries in order to remove unneeded vertices. This function produces polygons whose vertices have integer coordinates and which contain many collinear vertices along horizontal and vertical line segments. My selection of test cases was heavily influenced by this use case. However, I posed the problem on Cody as a more general problem: "Eliminate unnecessary polygon vertices." As I pointed out last week, solving this problem robustly in general can be quite complicated because of floating-point arithmetic issues.

Richard and Sven were very aware of this:

#### Thanks to All the Solvers!

Although I have focused on a few particular solutions from three different people today, this problem sparked a very creative collection of solutions from many people. I wish I could have spent more time analyzing and discussing all of the solutions.

Thanks for giving it a try, everyone, and I hope you enjoyed it!

Cody solvers, if you'd like to add a comment to this post, please tell us about a particularly interesting Cody problem that you think we should go look at.

Get the MATLAB code

Published with MATLAB® 7.14

Steve Eddins is a software development manager in the MATLAB and image processing areas at MathWorks. Steve coauthored Digital Image Processing Using MATLAB. He writes here about image processing concepts, algorithm implementations, and MATLAB.

These postings are the author's and don't necessarily represent the opinions of MathWorks.