Last week I posed this question: How does MATLAB associate the value of a particular matrix element with a color displayed on the screen? Let's start by exploring MATLAB's two basic pixel-color display models:
- Matrix element values specify pixel colors directly
- Matrix element values specify pixel colors indirectly, through the figure's colormap
The matrix (or array) of pixel values is stored in the Handle Graphics image object's CData property. If the CData array is a three-dimensional array with size M-by-N-by-3, then the pixel values specify the colors directly as a mix of red (first plane), green (second plane), and blue (third plane). We sometimes call such an image a "truecolor" image. (I believe this term originated in the computer graphics display industry. Someone please correct me if I'm wrong.)
Here's an illustrative image with just three pixels: red, blue, and yellow.
plane_1 = [1 0 1];
plane_2 = [0 0 1];
plane_3 = [0 1 0];
rgb = cat(3, plane_1, plane_2, plane_3);
size(rgb)
image(rgb)
axis image
title('Truecolor image with one red, one blue, and one yellow pixel')
ans = 1 3 3
With truecolor images, changing the colormap has no effect on the image colors displayed.
colormap(hot) title('Changing the figure colormap does not affect the pixel colors')
If the image CData is two-dimensional, then the CData values are treated as lookup indices into the figure's colormap. As an example, let's use an indexed image that ships with MATLAB, clown.mat (Ned's favorite).
s = load('clown')   % This is the functional form of load. This form returns
                    % a structure whose fields are the variables stored
                    % in the MAT-file.
s = 
          X: [200x320 double]
        map: [81x3 double]
    caption: [2x1 char]
The X and map variables stored in clown.mat are both necessary to display the image correctly. X contains the pixel values, and map contains the associated colormap.
To get the color of the (5,5) pixel, first see what X(5,5) is:
s.X(5,5)
ans = 61
Then use that value as a row index into the colormap matrix, map:
s.map(61,:)
ans = 0.9961 0.5781 0.1250
So the (5,5) pixel has a lot of red, some green, and a small amount of blue.
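The two lookup steps can be combined into a single expression; here's a minimal sketch, assuming the clown data has been loaded into the structure s as above:

```matlab
% The pixel value is a row index into the M-by-3 colormap matrix,
% so the full lookup is just nested indexing.
pixel_rgb = s.map(s.X(5,5), :)
```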
Displaying the image requires two MATLAB commands, one to create the image and one to set the figure's colormap:
image(s.X)
colormap(s.map)
title('Indexed image')
Unlike truecolor images, indexed images are affected by changes in the figure's colormap.
colormap(cool) title('Indexed image displays incorrectly if you use the wrong colormap')
In my first post on this subject, I suggested that there might really be three pixel-color display models in MATLAB instead of two. The third display model is a variation of the indexed image model.
I'll talk about that next time.
Published with MATLAB® 7.1
76 Comments
To blog reader Adam from North Carolina State University: “You may be right. I may be crazy.” – Billy Joel
First, I'd like to admit that I find myself waiting for new posts on this blog. (Consider it a huge compliment!)
I personally find the color issue more appealing, and I have a question regarding your last post.
Is a colormap a "hybrid" between RGB and XYZ, stating Y (luminance) in the map and the hue in the RGB values of the map?
So what we actually have here is that every pixel has "4" bytes of info: the luminance plus the red, green, and blue values?
And one last thing: unless I'm mistaken, shouldn't the RGB values represent (on the screen) the luminance as well?
Ike – Compliments are always accepted! The three values, R, G, and B, are sufficient. A separate luminance value isn’t needed. For example, [R G B] = [0 0 0] is black, and [1 1 1] is white. [.7 .7 .7] is an intermediate shade of gray.
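To see this in action, the three triples above can be displayed as a tiny truecolor image, in the spirit of the three-pixel example in the post:

```matlab
% Identical R, G, and B values produce black, gray, and white;
% no separate luminance channel is required.
shades = cat(3, [0 .7 1], [0 .7 1], [0 .7 1]);
image(shades)
axis image
title('Black, gray, and white from [R G B] triples alone')
```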
Image Processing Toolbox function makecform can be used to convert from RGB space to XYZ space. For example:
cform = makecform('srgb2xyz');
rgb_colors = [.9 .5 .1];
xyz_colors = applycform(rgb_colors, cform);
Steve: great topics, regarding transformations and colors. I just discovered your blog and it’s great. Congratulations. Oh, almost forgot: the Digital Image Processing with Matlab book is amazing too.
I'd only like to suggest topics about segmentation (active contours, perhaps?). Greetings from Brazil.
Vitor – thanks for your topic suggestions. I’ll add them to my list.
I have read and implemented examples from your book; it was a nice experience. Now I want to clear up one thing regarding chain codes: your chain code works only on closed boundaries, not on open boundaries.
Dept. of CS and IT,
Dr. Babasaheb Ambedkar Marathwada University,
Aurangabad(MS) 431004 India
Vikas – I don’t understand your comment. Are you asking a question, or just making an observation? If a question, can you clarify it?
I find this website very informative. I have one question about the std2 function: is it a first standard deviation? I mean, if I were to calculate the standard deviation of an array, would it be the values within one standard deviation?
Priya – See the documentation (description and algorithm) for std2.
Thank you steve,
I have one more question, please. How can one overlay a cropped image onto an original image? I have the coordinates of the cropped image, but how do I overlay it onto the original image after performing region operations?
Priya – See step 7 of this Image Processing Toolbox example: https://www.mathworks.com/help/images/examples/registering-an-image-using-normalized-cross-correlation.html
I am trying to process images for classification and localization using neural network tools. I stored histograms of the images, giving each a variable name and a Boolean target. I try to train the network, but it always gives messages that the matrices are not correctly sized. I do not know how to solve this problem.
About 2 weeks ago I asked you a question about image segmentation and region-growing algorithms. You asked to see my code and examples of images in which I'm having difficulty identifying the object. I emailed you these images, but I haven't gotten any feedback yet. Have you found any good way to segment them?
I have another question. I have my M-file in a folder called MATLAB CODE. This is the current directory, "c:\MATLAB CODE", and I use uigetfile to choose an image. The filename and pathname of the image are returned correctly, but somehow when I try to read the image using analyze75read it gives me an error.
I tried putting the image file in the same folder as MATLAB CODE, so that my current directory reads "c:\MATLAB CODE\S01", and then analyze75read works!
My question is: does analyze75read work only if the particular folder is the current directory?
It is really inconvenient to keep putting all the images in the same folder as the MATLAB code!
Please let me know if I am mistaken, or if there is another way around this problem.
Thanks and regards,
I didn't mean to put pressure on you. I apologize for that; I was just trying to find out whether or not you got my email with the attachments.
Najma – I’m sorry, but I can’t be of much help with neural networks. That’s outside my area of expertise.
Priya – A query came into tech support a few days ago about this very question. Did that come from you? A fix is in the works. A temporary work-around is to add the directory containing your images to the MATLAB path.
Yes it did come from me. Thanks for your answer. I did put all the files I wanted in the set path.
Just an FYI, you can read an Analyze image into MATLAB. There is existing code in a MATLAB program called SPM, but we at the BIR (the writers of Analyze) have some internal code as well. It is unsupported, but it does work. Others can follow up with me if they would like.
David – the Image Processing Toolbox now has Analyze read capability. It is fully supported.
Hey Steve, I'm new to MATLAB and need some help finding the local variance of a pixel within a local neighborhood. What functions can help me achieve this? Please guide me.
I am new to image processing. Can you tell me what information we get from the mean, standard deviation, variance, and median of pixel intensity?
Subrajeet – A local mean operator is sometimes used to smooth or reduce noise in an image. A local standard deviation operator is sometimes used as a measure of texture variation. Variance is directly related to standard deviation. A local median operator is also used to reduce noise. It can preserve edges better than the local mean.
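For readers who want to experiment, here's a rough sketch of these local operators; stdfilt and medfilt2 are Image Processing Toolbox functions, and the 5-by-5 neighborhood size is just an arbitrary choice:

```matlab
I = im2double(imread('cameraman.tif'));
h = ones(5,5)/25;                   % averaging kernel
local_mean = conv2(I, h, 'same');   % smoothing / noise reduction
local_std  = stdfilt(I, ones(5));   % texture measure
local_med  = medfilt2(I, [5 5]);    % edge-preserving noise reduction
```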
Hi Steve, you said that a local mean operator is sometimes used to smooth or reduce noise in an image, that a local standard deviation operator is sometimes used as a measure of texture variation, and that a local median operator also reduces noise while preserving edges better than the local mean.
Is there any justification for why we do these mean or median replacements?
Subrajeet – I recommend that you explore one or more image processing textbooks to read about the background behind these ideas. For example, you might try Digital Image Processing by Gonzalez and Woods.
Shravan – Try the function stdfilt in the Image Processing Toolbox.
Sreenivas – Comments on this blog should be at least somewhat relevant to the topic of discussion. I usually delete general requests for MATLAB code, since this forum is not intended for that purpose.
I have the task of processing a picture named honda.jpg. I have to change the color of the car from blue to green.
Then I have to put some other picture in the background. That means I have to replace the white-colored pixels with some other picture of my choice.
How do I set the origin of the affine transformation? You mention that the rotation and scaling default to (0,0). Can I make the transform origin a point in the base image?
Michael—You can use the UData and VData input parameters of the imtransform function to make the origin be wherever you want. Also, you can use the XData and YData parameters to control the output space rectangle over which the result is computed. See my July 7, 2006 post for an example.
Is there any particular technique for segmenting an indexed image by thresholding? Common practice is to threshold a grayscale or RGB image. Should we analyze the pixel values or the colormap components of an indexed image when applying a threshold?
DIP greetings from Malaysia,
Rudi—It usually isn’t meaningful to perform thresholding or other mathematical operations directly on the index values of an indexed image. I suggest that you convert to grayscale first. You can use ind2gray.
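A minimal sketch of that suggestion, using the clown image from the post (the 0.5 threshold is just an example value):

```matlab
s = load('clown');
I = ind2gray(s.X, s.map);   % convert the indexed image to grayscale first
bw = I > 0.5;               % then threshold the grayscale values
imshow(bw)
```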
I'm trying to use a cross-correlation technique to calculate the velocity of objects between two images. I have a reference saying that the highest peak of the correlation matrix determines the number of pixels the object has moved during the time between the two images. How can I do this?
Thank you Steve. I saw the product and used my own photos to practice (sample time 0.14 s). The result was imax=19096, max_c=0.58043, xpeak=308, ypeak=312, c=621x609. I'm confused about this... did 312 pixels move in 0.14 s? I need to relate this to distance.
Rosy—I think you may have stopped too soon in looking at that product demo. Continue with step 4. You’ll see that the translation offset between the two images is calculated by subtracting the image size from xpeak and ypeak.
Hi Steve. I followed your suggestion and got the following results. When I compare two consecutive photos (1-2), the movement is minimal: max_c=0.7869, imax=189095, ypeak=311, xpeak=305, corr_offset=0, rect_offset=0, offset=0, xoffset=0, yoffset=0. Next I used photos 1-4, where the change is more evident: max_c=0.3268, imax=189718, ypeak=313, xpeak=306, corr_offset=1 2, rect_offset=0 0, offset=1 2, xoffset=1, yoffset=2. Do I have to analyze every 3 photos instead of every 1? How can I translate the result to velocity? Pixels per frame? Or can I get mm per second?
Rosy—If you know the pixel spacing in physical units (e.g., mm per pixel), then compute the Euclidean distance corresponding to the translation offset and then scale by the pixel spacing. Then scale by the number of frames per second to get mm per second.
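In code, that calculation might look like the following sketch; the offsets, pixel spacing, and frame interval are placeholder values for illustration:

```matlab
xoffset = 1; yoffset = 2;                % translation offset in pixels
mm_per_pixel = 0.25;                     % assumed pixel spacing
seconds_per_frame = 0.145;               % assumed frame interval
d_pixels = sqrt(xoffset^2 + yoffset^2);  % Euclidean distance in pixels
velocity = d_pixels * mm_per_pixel / seconds_per_frame   % mm per second
```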
I did this (I hope it's correct): Euclidean_Distance = sqrt(sum(xoffset - yoffset).^2) = 1. The reference is 0.25 mm/pixel, so Distance = 0.25*Euclidean_Distance = 0.25. The time between frames is 0.145 s, and I used photos 1-4, so Total_time = number_of_frames*time_between_frames = 0.435. Therefore the velocity is 0.25/0.435 = 0.5747 mm/s. Do you think that is right?
Rosy—Your calculations seem reasonable.
Thank you Steve, I really appreciate your help!
Hi, Steve. I have a question: which function can I use to transform an indexed image into a truecolor image myself? I have a 640x480 matrix and display the image with the jet(64) colormap. I want to transform this matrix into 640x480x3 truecolor format. I wrote a for-loop to do the job, but the code was inefficient. Does MATLAB have a function that achieves this?
theMax = max(indImage(:));
theMin = min(indImage(:));
tmpIndex = (indImage >= theMin + (theMax-theMin)/64*(x-1)) & (indImage < theMin + (theMax-theMin)/64*x);
Sorry, I just found ind2rgb could do the job I asked yesterday. I should just read the MATLAB help more carefully!
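For other readers looking for the same thing, here's a sketch of the ind2rgb approach:

```matlab
% ind2rgb maps each element of an index matrix through a colormap,
% producing an M-by-N-by-3 truecolor array in one call.
indImage = gray2ind(mat2gray(rand(480, 640)), 64);  % example index data
rgb = ind2rgb(indImage, jet(64));
size(rgb)   % 480-by-640-by-3
```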
Sir, can you send me code for detecting players in a cricket video using a seeded region-growing algorithm?
Karthick—Well, cricket player detection is not something I’ve ever written code for, nor is it on my list of things to do soon. :-) There are several tracking demos in the Video and Image Processing Blockset. You might be able to get some algorithm inspiration from those.
Thank you for your great work in the area of image processing. Is it possible to find missing ICs on a printed circuit board using image processing? If so, what should the approach be? Please reply.
Thank you in advance.
Santhosh—From your description of the problem, it sounds like you might have a reference image available of what the circuit board is supposed to look like. How about using image subtraction or absolute difference?
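A rough sketch of the absolute-difference idea, assuming the two board images are already aligned; the filenames and threshold here are hypothetical:

```matlab
ref  = im2double(imread('board_reference.png'));   % hypothetical filenames
test = im2double(imread('board_under_test.png'));
D = imabsdiff(ref, test);        % large values where the boards differ
bw = im2bw(D, 0.2);              % arbitrary threshold; tune as needed
bw = bwareaopen(bw, 50);         % discard small difference specks
imshow(bw)
```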
Hello sir, how do I segment a video using MATLAB?
I require MATLAB code for basic image processing topics so that I can implement them: histogram equalization, spatial filtering, and the other filters used in image processing.
Please help me with these topics: either give me a link where I can find code for them, or suggest some other way.
Haris—You can find these functions, including code, in the Image Processing Toolbox.
I'm taking up the work of calculating the velocity of objects again (I wrote to you on August 21st), but I'm not sure about the calculation of the Euclidean distance corresponding to the translation offset:
offset= corr_offset + rect_offset;
xoffset = 1
yoffset = 4
Euclidean_Distance = sqrt(sum(xoffset - yoffset).^2) = 3
Euclidean_Distance = sqrt((xoffset).^2 + (yoffset).^2) = 4.12
Which is correct?
Rosy—The second form.
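That second form is the standard Euclidean norm of the offset vector; in MATLAB it can also be written with hypot or norm:

```matlab
xoffset = 1; yoffset = 4;
d1 = sqrt(xoffset^2 + yoffset^2)   % 4.1231
d2 = hypot(xoffset, yoffset)       % same result
d3 = norm([xoffset yoffset])       % same result
```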
Very informative post – thanks!
Hi, I am writing code to measure the width of arteries in pictures taken with a camera. I was thinking of doing this using the pixel values of the image. By making the background darker and the arteries lighter, I wanted MATLAB to start measuring vertically at a point where the pixel value rises above a certain number, stop at a point where it falls below that number, and display the image. I'm wondering if I could get some help. Thank you.
Ato—It sounds like you already have a very specific notion of the algorithm you want to implement. What kind of help are you seeking?
I'm experimenting, trying to learn... How come when I converted your red, blue, and yellow RGB image to grayscale and displayed it using the same steps, I only got a big blue rectangle?
After copying and putting your code into matlab, I did:
gray = rgb2gray(rgb);
Why didn’t I get 3 rectangles in shades of gray?
When you use the image with two-dimensional CData instead of three-dimensional (truecolor) CData, you need to specify the desired colormap as well. Note my use of colormap in several places in this post. Or you could use the Image Processing Toolbox function imshow, which handles such details automatically.
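To illustrate both options with the three-pixel example, here's a sketch assuming 2-D CData computed with rgb2gray from the double-valued rgb array in the post:

```matlab
g = rgb2gray(rgb);     % 2-D CData with double values in [0, 1]
imagesc(g)             % scale the values into the full colormap range
colormap(gray(256))    % and use a grayscale colormap
axis image
% or, more simply:
figure, imshow(g)      % imshow handles type and colormap details for you
```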
Hi, I don't know how to convert a grayscale image to an RGB image. I am new to this field and struggling hard. Can you help me?
Darshan—You can concatenate the grayscale image three times along the third dimension, like this:
rgb = cat(3, I, I, I);
I am very happy with your kind reply. My aim is to convert an RGB image to grayscale first, and then convert it back. I have written code for this,
but I couldn't get the colored image back. Why? If you have any other ideas, please help me.
Darshan—You can’t. When you converted the image to grayscale, you threw the color information away. You can’t get it back.
I am working on color images, but I am facing a problem converting YCbCr back to true colors. Is there a method? Please help.
Thanks for the reply. I tried to use ycbcr2rgb, but I was not able to get the true colors. Actually, I am trying to perform a wavelet transform on color images. I tried to use a for-loop over the RGB planes and then concatenate, but I was not able to get it. I don't know what the problem is. Please help.
decimage(:,:,i) = [ca1r ch1r; cv1r cd1r];
I tried this way also, but it's not displaying. Please help.
Liya—I can’t help you with debugging your code. You might consider posting a note to the MATLAB newsgroup, comp.soft-sys.matlab. If you do, you’ll need to give a more specific description of your problem, because “it’s not displaying” is too vague.
Hello Steve. I need some help converting truecolor images to 8-bit grayscale. I tried the function rgb2gray but it did not help. Guide me, please.
Anurag—Here’s a sample:
RGB = imread('peppers.png');
imshow(RGB)
I = rgb2gray(RGB);
figure, imshow(I)
imwrite(I, 'peppers_gray.png')
Hi, my problem is how to create a function that converts an image from RGB to XYZ and then from XYZ to L*a*b*, and make it "executable", meaning I can run it and see the result.
I also have another problem, which is how to create another function for searching an image using the command regionprops????
I hope I will get an answer from you.
Imen—Use makecform and applycform for converting between RGB and LAB. I don’t understand your question about regionprops; that function is for computing geometric measurements of image regions.
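The conversion chain might be sketched like this; the function name here is made up, and makecform('srgb2lab') would also do the two steps in one call:

```matlab
function lab = rgb2lab_sketch(rgb)
% Convert an sRGB image to L*a*b* by way of XYZ.
cform_xyz = makecform('srgb2xyz');
cform_lab = makecform('xyz2lab');
xyz = applycform(rgb, cform_xyz);
lab = applycform(xyz, cform_lab);
```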
I would like to display the RGB planes as separate images, each in shades of its own color. That is, I break the RGB image into R, G, and B planes of 8-bit values, and would now like to display them as variations of black-to-red, black-to-green, and black-to-blue. Could you please guide me on how I can achieve this in MATLAB?
I have tried taking one plane and concatenating it with planes of zeros, but this doesn't work.
Shashidhara—What do you mean by “this doesn’t work”? What exactly did you try, what was the result, and why was the result not satisfactory?
Thanks for kind reply.
Here is the part of the code that I am working with;
[m,map] = imread('xyz.bmp');
x = m(:,:,1);
y = zeros(size(m));
y(:,:,1) = x;
figure(1);
imwrite(y,'xyz_RED.bmp','bmp');
I would like to display and write an image varying only in "black-red" space, but I get a blank image filled with red; i.e., the image information that should be there
is not seen (or is lost). Could you please help me solve this?
Shashidhara—You’re using an indexed-image syntax to read the image, and then the rest of the code is assuming that the image is in truecolor format. Also, you are initializing y as a double-precision array but then you’re assigning uint8 values into it, so you’re going to have a dynamic range scaling problem. I’d like to suggest that you take a look at the Introduction section for the Image Processing Toolbox User Guide to get information about image type and data type conventions.
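A corrected sketch along those lines, assuming xyz.bmp is actually a truecolor BMP (the filename comes from the comment):

```matlab
rgb = imread('xyz.bmp');         % truecolor syntax: one output argument
y = zeros(size(rgb), 'uint8');   % match the uint8 class of the pixel data
y(:,:,1) = rgb(:,:,1);           % keep only the red plane
imshow(y)
imwrite(y, 'xyz_RED.bmp', 'bmp');
```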
I am quite new to MATLAB, and have a particular problem or question. Can I take a grayscale image and do some calculations on the pixel values? Say I want to increase the value of pixels below 50 by 10. After doing such calculations, I found that although the resulting matrix showed the calculated results, whenever I used imwrite to write it to a file, or used imshow, it showed either a totally black or a totally white image. I feel I need to convert the resulting matrix to grayscale. I tried mat2gray and some other commands available in the toolbox, but with no luck. I hope you can help me.
Yuv—Please contact technical support.
Thanks for your input. Your comments really helped me solve the problem. I cast y to uint8 and it works fine.