Steve on Image Processing

More on segmenting in a*-b* space

Posted by Steve Eddins

I'm back to looking at M&Ms today. (Previous posts: 17-Dec-2010, 23-Dec-2010, 30-Dec-2010, 11-Jan-2011)

url = '';
rgb = imread(url);

Last time I showed how I used imfreehand to segment the region in the a*-b* plane corresponding to the green M&Ms.
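For reference, the interactive step from that earlier post looked roughly like this (a sketch, not the original code; the histogram variable H and the display scaling are assumptions carried over from below):

% Display the a*-b* histogram and trace the green cluster by hand.
imshow(H, [0 1000], 'InitialMagnification', 300)
h = imfreehand;                  % interactively draw around the blob
freehand_mask = h.createMask();  % logical mask of the traced region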

This time I'll use connected-components labeling and regionprops to segment the image based on all the colors, including the desk background.

I saved the previously computed a*-b* histogram in a MAT-file online; here's how to retrieve and display it.

matfile_url = '';
temp_matfile = [tempname '.mat'];
urlwrite(matfile_url, temp_matfile);
s = load(temp_matfile);
H = s.H;
imshow(H, [0 1000], 'InitialMagnification', 300, 'XData', [-100 100], ...
    'YData', [-100 100])
axis on

Next, let's threshold the image. (Magic Number Alert! I chose the threshold manually based on visual inspection of pixel values.)

mask = H > 100;
imshow(mask, 'InitialMagnification', 300, 'XData', [-100 100], ...
    'YData', [-100 100])
axis on
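As an aside, if you'd rather not hand-pick the value 100, one data-driven alternative (not in the original post) is Otsu's method applied to the log-scaled counts:

% Log-scale the histogram counts so a few very tall bins don't
% dominate, then let graythresh (Otsu's method) pick the cut.
Hn = mat2gray(log(1 + H));
mask2 = Hn > graythresh(Hn);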

So we've got seven "blobs". Let's measure their centroids.

props = regionprops(mask, s.H, 'Centroid');
props(1)

ans = 

    Centroid: [20.2143 82.5357]

centers = cat(1, props.Centroid)
centers =

   20.2143   82.5357
   45.4706   92.2353
   49.2973   29.0541
   51.6029   67.1765
   52.2667   53.4667
   77.8000   67.8000
   80.1724   84.4483

These centroid values are in the intrinsic pixel coordinates of the image. To convert them to a* and b* values, we have to scale and shift.

ab_centers = 2*centers - 102
ab_centers =

  -61.5714   63.0714
  -11.0588   82.4706
   -3.4054  -43.8919
    1.2059   32.3529
    2.5333    4.9333
   53.6000   33.6000
   58.3448   66.8966

a_centers = ab_centers(:,1);
b_centers = ab_centers(:,2);

Next question: where are these centers, and what are the regions in the a*-b* plane closest to each one? The voronoi function shows you both.

hold on
voronoi(a_centers, b_centers, 'r')
hold off

To perform a nearest-neighbor classification of all the pixels, let's compute the Delaunay triangulation, from which we can easily do the nearest-neighbor calculation.

dt = DelaunayTri(a_centers, b_centers)
dt = 


      Constraints: []
                X: [7x2 double]
    Triangulation: [7x3 double]

Compute the a* and b* values of all the pixels:

lab = lab2double(applycform(rgb, makecform('srgb2lab')));
L = lab(:,:,1);
a = lab(:,:,2);
b = lab(:,:,3);

For every pixel in the original image, find the closest a*-b* centroid by using the nearestNeighbor function with the Delaunay triangulation.

X = nearestNeighbor(dt, a(:), b(:));
X = reshape(X, size(a));

I would like to make a colormap of the seven colors in our segmentation. We have a* and b* values for each of the colors, but not L* values. We could just make up a constant L* value. Instead, I'll compute the mean L* value for all the pixels closest to the centroid of each of the histogram blobs.

L_mean = zeros(size(a_centers));
for k = 1:numel(L_mean)
    L_mean(k) = mean(L(X == k));
end
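The same per-label means can also be computed without a loop, using accumarray (a minor variation on the code above):

% One call to accumarray averages L over the pixels assigned to each
% of the seven nearest-neighbor labels in X.
L_mean = accumarray(X(:), L(:), [numel(a_centers) 1], @mean);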

Now we convert the L*a*b* values corresponding to each of our seven colors back to RGB using applycform.

map = applycform([L_mean, a_centers, b_centers], makecform('lab2srgb'))
map =

    0.2587    0.8349    0.1934
    0.8724    0.8465    0.0271
    0.1095    0.4111    0.6803
    0.8382    0.7591    0.5286
    0.3304    0.2993    0.2751
    0.7264    0.2055    0.2008
    0.9653    0.3595    0.0690

And finally we can use X and map to display our segmented result as an indexed image.

close all
imshow(X, map)
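Equivalently (a variation, not from the post), you can bake the colormap into a truecolor image with label2rgb:

% label2rgb maps each label in X through the 7-color map directly.
imshow(label2rgb(X, map))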

Not bad!

Unless I have some inspiration between now and next week, I might be ready to let this image go and search for something else to write about.

Get the MATLAB code

Published with MATLAB® 7.11

12 Comments

Hi Steve,

Thank you very much for your blog. I’m taking a course on medical image processing and am slowly reading my way through all of your posts.

If you need a topic suggestion, may I suggest something on automating the identification of reflections in X-ray crystallography (in 3D space from 2D images taken at different angles/rotations)? I'm having trouble identifying points on different images (with different intensities) as resulting from the same reflection. Thanks!

This is just a note to let people know that these are not the ACTUAL lab values like you’d get on a spectrophotometer or colorimeter. They’re arbitrary or relative because no known color standard (such as the X-rite colorChecker Chart) was used to develop the correct transform from rgb to lab. These values are just those using arbitrary exposure (input colors) and the “book formula” for converting rgb to lab. That said, this technique CAN be used to segment colors, and even can be used on different images as long as the images had similar exposures. For accurate color measurement though (to get the “true” lab values), you’d need to calibrate against known color standards.

Mark—The camera I used produces color values in sRGB, which is a device-independent space. Presumably the camera manufacturer did the calibration in order to produce sRGB values. However, given that sRGB is based on a certain white-point adaptation, and given that L*a*b* is based on a reference white, it is true that we don’t have enough information to infer XYZ values. Accurate colorimetry wasn’t really the point of this post, though.

Can you clarify the meaning of the code line in which you scale and shift to convert pixel coordinates to a* b* values?

ab_centers = 2*centers - 102

I’m not following that step. What is the nature of the scale? And of the shift by 102?


Brett—Sorry about that. When I was writing this post I knew that I was zooming by that point kind of fast, but the post was getting long and I was running out of time. In the 2-D histogram I computed, the left-most column corresponded to an a* value of -100, and the right-most column corresponded to an a* value of 100. But regionprops doesn’t know about those values. It returns a centroid in the pixel coordinate system where the left-most column is 1 and the right-most column is 101, the number of columns in the matrix H.

ab_centers = 2*centers - 102

is just the equation of the line that maps 1 to -100 and 101 to 100.
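As a quick sanity check (an addition, not part of the original reply), you can derive the same line from the two endpoint constraints — column 1 maps to a* = -100 and column 101 maps to a* = 100:

% Fit the unique line through (1, -100) and (101, 100).
p = polyfit([1 101], [-100 100], 1)   % returns [2 -102]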

I like your post very much. I'm working on a similar project in YCbCr. Question: I want to display an image and use the data cursor to explore Cb or Cr values. There are two ways I thought of to accomplish this, but I'm not sure how to do either:
1. Is there a way to display a YCbCr image directly? I tried imshow to no avail.
2. I tried to find the datatip function imshow calls in order to modify it. No luck.

Help, Steve!

Joel—imshow only supports the RGB space. It does not call a datatip function. You might want to take a look at impixelinfo.

To answer Joel's question, you can show the image in RGB and then make a custom datatip to explore the colors in another colorspace. For example:

dcm_obj = datacursormode(fig);
set(dcm_obj, 'UpdateFcn', @myupdatefcn)

%%%% put this in a separate file myupdatefcn.m
function txt = myupdatefcn(empt, event_obj)
pos = round(get(event_obj, 'Position'));
img = get(get(event_obj, 'Target'), 'CData');   % the displayed RGB image
ycbcr = rgb2ycbcr(img(pos(2), pos(1), :));      % convert the clicked pixel
txt = {['Y: ',  num2str(ycbcr(1))], ...
       ['Cb: ', num2str(ycbcr(2))], ...
       ['Cr: ', num2str(ycbcr(3))]};

sRGB values aren't reliable, or maybe I just don't understand them. True, the camera claims to deliver sRGB values, but you know that if you put the camera in manual mode and adjust the exposure time, you can get any sRGB values you want out of the image. You want your white object in your scene to be [255 255 255]? No problem. You want your white object to be [152 152 152]? Again, no problem. And the lab values for those two would be different. The "true" lab color of the object didn't change, but I can get any rgb or lab color I want from it. Strangely enough, the X-rite ColorChecker chart gives sRGB values for the chips, but who ever actually gets those? Pretty much no one. But I agree with you that accurate colorimetry isn't the point of this post, and this is a good method for segmenting colors. (I do have a Powerpoint tutorial on accurate colorimetry from RGB images, if anyone is interested. Well, pretty accurate – there's a more complicated way to make it even better that's not covered in the tutorial.)

I am working on a project where I have to segment colors in an image, so this post is very interesting for me! However, when I run the same code, not enough colors are included. For example, I would like to distinguish between white, grey, and blue, but the code as it is does not do that with my image. Do you have a suggestion for how to do this? I have tried adjusting the threshold, but this still isn't enough. I would like to distinguish roads from blue sky, but it classifies them in the same class. Could you help me with this? Thanks in advance, and thanks for your post!

Actually, the blue sky does get distinguished from the grey road (using a threshold of 30 or 20, for example), but the white clouds are grouped in the same cluster as the grey road. I am trying to include the luminance (L*) value to distinguish them, but the voronoi algorithm only works for 2-dimensional data. Do you have suggestions?

These postings are the author's and don't necessarily represent the opinions of MathWorks.