Steve on Image Processing and MATLAB

Concepts, algorithms & MATLAB

What color is green?

Posted by Steve Eddins

"Why do you have M&Ms on your desk?" my friend Nausheen wanted to know. Well, the truth is, for playing around with color images, M&Ms are simply irresistible.

url = '';
rgb = imread(url);

I can think of a few different things we could try with this image. First let's tackle the question, "What color is green?"

That is, can we quantify the color of the green M&Ms? There are a couple of challenges. The first is that there is some variation in lighting in this image from top to bottom. For example, suppose we pick two pixels from within the regions marked below:

imshow(rgb)
hold on
plot(212, 26, 'wo', 'MarkerSize', 12)
plot(235, 378, 'wo', 'MarkerSize', 12)
hold off

Let's display an image containing two giant pixels, one chosen from the upper M&M and the other chosen from the lower one.

twopixels = [rgb(26,212,:), rgb(378,235,:)];
imshow(twopixels, 'InitialMagnification', 'fit')
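For reference, here's a small addition of my own: printing the raw RGB triplets of those same two pixels makes the difference numeric rather than just visual (the coordinates are the ones plotted above).

```matlab
% Print the raw RGB triplets for the two chosen pixels.
% squeeze collapses each 1-by-1-by-3 slice down to a 3-element vector.
upper_pixel = squeeze(rgb(26, 212, :))'    % pixel from the upper M&M
lower_pixel = squeeze(rgb(378, 235, :))'   % pixel from the lower M&M
```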

You can see that's quite a difference in shade. But even within the same M&M there's a lot of color variation. Let's examine two pixels on a single M&M.

imshow(rgb)
axis([220 250 370 400])
hold on
plot(228, 382, 'wo', 'MarkerSize', 12)
plot(237, 389, 'wo', 'MarkerSize', 12)
hold off
twopixels = [rgb(382,228,:), rgb(389,237,:)];
imshow(twopixels, 'InitialMagnification', 'fit')
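As an aside of mine (not from the post itself), here is one hedged hint at why other color spaces can help: in HSV, the hue of two pixels from the same M&M is often much closer than their raw RGB values, since shading mostly affects the value component.

```matlab
% Convert to HSV and compare the two pixels from the same M&M.
hsvimg = rgb2hsv(rgb);
p1 = squeeze(hsvimg(382, 228, :))'   % [hue, saturation, value]
p2 = squeeze(hsvimg(389, 237, :))'
% Expect the hue components to be similar, while value differs with shading.
```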

So where do we go from here? Well, next time I'll probably explore one or two other color spaces, and I also plan to show you how to compute and display a two-dimensional histogram.
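As a rough preview of such a two-dimensional histogram (my own sketch, assuming the Image Processing Toolbox for the sRGB-to-L*a*b* conversion), one can bin the a* and b* channels jointly and display the counts as an image:

```matlab
% For uint8 input, applycform returns uint8 L*a*b* values with a* and b*
% offset by 128, so both channels land in [0, 255].
lab  = applycform(rgb, makecform('srgb2lab'));
abin = floor(double(lab(:,:,2)) / 8) + 1;    % 32 bins for a*
bbin = floor(double(lab(:,:,3)) / 8) + 1;    % 32 bins for b*
counts = accumarray([abin(:) bbin(:)], 1, [32 32]);
imshow(log(1 + counts), [])                  % log scale reveals sparse bins
```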


Published with MATLAB® 7.11



12 Comments (Oldest to Newest)

Sven replied on : 1 of 12
Looking forward to it Steve. I recently got back from the doc with the instruction to "monitor the size and shape of some freckles/moles". In true nerd form, I'm taking fortnightly photos with a ruler visible in the images, and plan to come to the next visit armed with some plots of area and aspect ratio over time. As with m&ms, putting skin pigments into the right colour space is gonna be important for proper segmentation.
Mark Hayworth replied on : 5 of 12
Steve: This seems like a good time for me to suggest that you include a 3D color-space gamut inspection utility in the Image Processing Toolbox. For example, there is an ImageJ plugin that does this. Don't judge it only by the screenshot (which is not really representative of what it can do); you have to download it and try it. It can do lots of things and I use it constantly (because there is no such functionality in MATLAB). Please consider it for a future version. It will let you see 3D scatterplots of your pixels in various color spaces so that you can see clusters and understand how successful color segmentation might be. And you can rotate the color space to help you understand what's there. Let me know what you think of it, and if it's something that could someday be added to MATLAB.
Pat replied on : 7 of 12
At RIT we offer a PhD in Color Science. [Disclaimer] I'm not in Color Science, but the answer to "What is Green" is "it depends". Camera color perception depends on: the camera, the camera settings, the white-balance algorithm in the camera, the camera exposure, the light source (or sources), the location of the light sources, the location of the camera, and many other factors. When you answer the question, "What is Green", you must also answer the question, "What is ~not~ Green"? So, if you have only m&m's to worry about, you only need to worry about separating the green m&m's from the background and the other m&m's. However, if there were other colored candies in the picture (I don't know, say maybe skittles) then your discrimination algorithm has to be more complicated. In your case I would consider putting a reference color patch in the image -- say a gray card or a gray background for starters. Best Regards
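Pat's gray-card suggestion could be sketched roughly like this (the card coordinates below are entirely hypothetical; in practice you would locate the card in your own image). The idea is to scale each channel so the reference patch averages out to a neutral gray.

```matlab
card   = double(rgb(1:20, 1:20, :));   % assumed location of a gray card
target = mean(card(:));                % neutral gray level to aim for
balanced = rgb;
for k = 1:3
    gain = target / mean(mean(card(:,:,k)));
    balanced(:,:,k) = uint8(double(rgb(:,:,k)) * gain);
end
imshow(balanced)
```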
Faruk replied on : 8 of 12
Hi all, I wrote some code to count the m's. I missed only one yellow :). I think the big challenge is the reflection of the window. What are your comments?
clc, clear all, close all

url = '';
rgb = imread(url);
[N, M, ~] = size(rgb);
Im_out = zeros(N, M);              % pre-allocate output image

se = strel('disk', 15);            % structuring element for the background

% search each color layer
for k = 1:3
    I = rgb(:,:,k);

    % background cancellation
    background = imopen(I, se);
    I2 = I - background;

    % adjust intensity
    I3 = imadjust(I2, [0 0.1], [0 1]);

    % threshold, then cancel the small noise objects
    bw = im2bw(I3, graythresh(I3));
    bw = bwareaopen(bw, 50);

    % sum the three layer outputs
    Im_out = Im_out + double(bw);
end

% convert the sum to logical and measure the blobs
s = regionprops(Im_out > 0, 'BoundingBox', 'Centroid');

imshow(rgb)
title(sprintf('Number of Detected m''s = %d', length(s)))
% centroids = cat(1, s.Centroid);
% plot(centroids(:,1), centroids(:,2), 'b*')
hold on
for k = 1:length(s)
    rectangle('Position', s(k).BoundingBox, 'EdgeColor', 'k', 'LineWidth', 2);
end
hold off
Juan replied on : 11 of 12
Hi, Steve. This is more of a fundamental question regarding color and color algorithms from a person who understands too little of image processing. I need to compare colors: sample vs. standard. pH-meter paper changes color when pH changes. The color standard is visually compared with the sample. Leaving aside camera, lighting, material, and environmental conditions, is there a color model (RGB, L*a*b*, etc.) that offers inherently less error than the others? I don't know if this question even makes sense. By error I mean standard deviation from the pixel value (yes, ideally just a unique value) of the standard color. I would take the average of the pixel values from the sample color picture and measure the deviation from the pixel value of the standard color. Which color model would you suggest? Thanks.
Steve replied on : 12 of 12
Juan—You might not know much about image processing, but I know absolutely nothing about quality control for pH paper. Therefore my thoughts about this might be completely bogus. I don't think you can "leave aside" lighting, camera, and material. If you don't have carefully controlled lighting and a camera that produces calibrated pixel data, then I don't see how any error metric will be meaningful. L*a*b* seems like a reasonable color model to use because it is designed to correspond to human perception, and pH paper is meant to be "read" by human observers. To use L*a*b* space you'll need to pick a reference white, which might be the color values for unexposed pH paper under fixed lighting conditions. But to get into L*a*b* space accurately, you'll need controlled lighting and calibrated camera data.
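A minimal sketch of that L*a*b* comparison (my addition, not Steve's code; `sample_rgb` and `standard_rgb` are assumed 1-by-1-by-3 uint8 colors, and the conversion's default white point is used rather than a custom reference):

```matlab
cform = makecform('srgb2lab');
sample_lab   = lab2double(applycform(sample_rgb, cform));
standard_lab = lab2double(applycform(standard_rgb, cform));
% Euclidean distance in L*a*b* is the classic Delta E (CIE76) error metric.
deltaE = norm(squeeze(sample_lab - standard_lab))
```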