In my last post on pixel colors, I described the truecolor and indexed image display models in MATLAB, and I promised to discuss an important variation on the indexed image model. That variation is the scaled indexed image, and the relevant MATLAB image display function is imagesc.
Suppose, for example, we use a small magic square:
A = magic(5)
A =

    17    24     1     8    15
    23     5     7    14    16
     4     6    13    20    22
    10    12    19    21     3
    11    18    25     2     9
Now let's display A using image and a 256-color grayscale colormap:
map = gray(256);
image(A)
colormap(map)
title('Displayed using image')
The displayed image is very dark. That's because the element values of A vary between 1 and 25, so only the first 25 entries of the colormap are being used. All these values are dark.
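To see the arithmetic behind this, here is a Python sketch (using NumPy as a stand-in for MATLAB, so the 0-based indexing is an artifact of the language) of what direct mapping does with these values:

```python
import numpy as np

# gray(256) in MATLAB is a 256-row colormap whose intensities are
# linspace(0, 1, 256). Model just the intensity column here.
gray_map = np.linspace(0.0, 1.0, 256)

# With direct CDataMapping, a pixel value k selects colormap row k, so the
# values 1..25 in magic(5) only ever reach the first 25 rows.
used = gray_map[:25]

print(used.max())   # brightest gray actually used: 24/255, about 0.094
```

All 25 of those entries are within the darkest 10% of the colormap, which is why the displayed image looks nearly black.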
Compare that with using imagesc:
imagesc(A)
colormap(map)
title('Displayed using imagesc')
Here's what's going on. The lowest value of A, which is 1, is displayed using the first colormap color, which is black. The highest value of A, which is 25, is displayed using the last colormap color, which is white. All the other values between 1 and 25 are mapped linearly onto the colormap. For example, the value 12 is displayed using the 118th colormap color, which is an intermediate shade of gray.
You can switch colormaps on the fly, and the values of A will be mapped onto the new colormap.
colormap(jet)
title('Scaled image using jet colormap')
Let's dig into the low-level Handle Graphics properties that are controlling these behaviors. Image objects have a property called CDataMapping.
close all
h = image(A);
get(h, 'CDataMapping')
close
ans = direct
You can see that its default value is 'direct'. This means that values of A are used directly as indices into the colormap. Compare that with using imagesc:
h = imagesc(A);
get(h, 'CDataMapping')
close
ans = scaled
The imagesc function creates an image whose CDataMapping property is 'scaled'. Values of A are scaled to form indices into the colormap. For a colormap with m rows, the specific formula is:

index = fix((C - c_min) / (c_max - c_min) * m) + 1

where C is a value in A, the result is clamped to the range [1, m], and c_min and c_max come from the CLim property of the axes object.
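A small Python sketch of this scaled mapping (a model of the behavior described here, not MATLAB's actual implementation) reproduces the numbers from the magic-square example:

```python
import math

def scaled_index(C, cmin, cmax, m):
    """Map data value C onto rows 1..m of a colormap, given axes CLim [cmin, cmax]."""
    idx = math.floor((C - cmin) / (cmax - cmin) * m) + 1
    return min(max(idx, 1), m)   # out-of-range values saturate at the ends

# For A = magic(5) displayed with gray(256), CLim is [1 25]:
print(scaled_index(12, 1, 25, 256))   # 118, the intermediate gray mentioned above
print(scaled_index(1, 1, 25, 256))    # 1, black
print(scaled_index(25, 1, 25, 256))   # 256, white
```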
h = imagesc(A);
get(gca, 'CLim')
close
ans = 1 25
It's not a coincidence that the CLim (color limits) vector contains the minimum and maximum values of A. The imagesc function does that by default. But you can also specify your own color limits using an optional second argument to imagesc:
imagesc(A, [10 15])
colormap(gray)
title('imagesc(A, [10 15])')
Values of 10 and below were displayed as black, values of 15 and above were displayed as white, and values between 10 and 15 were displayed as shades of gray.
Scaled image display is very important to engineering and scientific applications of image processing, because often what we are looking at isn't a "picture" in the ordinary sense. Instead, it's an array containing measurements in some sort of physical unit that's not related to light intensity. For example, I showed this image in my first "All about pixel colors" post:
This is a scaled-image display of a matrix containing terrain elevations in meters.
Published with MATLAB® 7.1
52 Comments
Hi Steve. This scaled-indexed image is not technically a model, but a “subset” of a scaled one, isn’t it?
Anyway, your blog is a very good reading. Thanks.
Vitor – Did you mean that “direct indexed” is a subset of “scaled indexed”? If so, then yes, that’s a valid way of looking at it. But I think most people think of them and use them quite differently, as do I. Also, the HG image object properties encourage thinking of them as two different models.
Hey Steve, can you tell me how to measure the length of a line using image processing in MATLAB?
Suvrat – I assume that “calculate the Euclidean distance between the end-points of the line segment” isn’t the answer you are looking for. However, the way you phrased your question is very vague, so that’s about all I can say. If you can be a lot more specific about your scenario, then maybe I can be more helpful.
Sorry wasn’t specific the last time. I actually had difficulty finding the start and end points of a crooked line from in a hazy image. Then I had to find the length of the curving line.
Suvrat – What procedure did you follow to find the start and end points of the line, and what problems did you have? For computing the length, look at the code for the ComputePerimeter function inside regionprops.m.
I used the colorgrad function with code for determining the threshold, then erosion to extract lines from a picture of the palm of a person. The main problem I faced was that the lines are not clearly demarcated. Also, there are several lines, and I have to choose from among them, though this might be the easy part.
Suvrat – I understand your question better now, but I don’t have a good answer for you. You are in the territory of serious algorithm development now. Successful methods for such problems are almost always heavily tailored to the characteristics of a specific data set, so any general advice I could give is probably not very useful.
I guess you're right, Steve. Thanks a lot. I'll try to figure it out.
I am Charlie, and I started using MATLAB recently. I am working on image processing. Please help me extract pixel information from an image; then I want to divide all the pixels by the center pixel.
How can I do this?
Your help is appreciated. Thanks in advance.
Charlie – I’m sorry, but I do not understand your question.
Steve, thanks for the help with image processing. The listing of the various ways to view an image is useful, but I’m hoping to find one more method. Is there a clean way to view an image WITH the pixel’s value? What I’m looking for is the jet-colormap’ed image with the raw values (17, 24, 1, etc) superimposed on each pixel. I realize that I could probably do something like this by using multiple axes, but I want to be able to zoom in and have the pixel & numeric value remain together. Is there a good way to handle this?
Jon – Try the Pixel Region Tool in the Image Processing Toolbox.
I have a simple question that I would like to ask you.
I import a RGB JPEG image, convert it to a greyscale image with 256 shades of grey, and then extract the colormap to be used for comparing with other images (generated as simulations within Matlab). Here is the code segment:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Import JPEG image as an 8-bit RGB image
myimage = imread('image.jpg');
% Adjust the image to grayscale. This is now a Jx-by-Jy array where
% I(i,j) gives a numeric value for a shade of gray between 1 and 256.
I = 0.2989*myimage(:,:,1) + 0.5870*myimage(:,:,2) + 0.1140*myimage(:,:,3);
% Display the grayscale image:
figure;
colormap(gray(256));
image([0 2],[0 2],I);   % notice how the first two arguments set the axis data
title('Original image')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
So my question is how can I do the equivalent thing, but in color. Essentially what I am asking is how do I extract the colormap of the original image. If I replace ‘ colormap(gray(256))’ with e.g., colormap(jet); then the image coloring is all wrong (i.e., different from the original image).
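As an aside, the weighted sum in Marcus's snippet is the standard ITU-R BT.601 luma conversion. A NumPy sketch of that arithmetic (the 1x1 image is a made-up example; note the cast to floating point first, since doing the per-channel products in uint8 would round and saturate):

```python
import numpy as np

# A hypothetical 1x1 RGB pixel with channel values 200, 100, 50.
rgb = np.array([[[200, 100, 50]]], dtype=np.uint8)
r, g, b = [rgb[..., i].astype(float) for i in range(3)]

# BT.601 luma weights, as in the snippet above.
gray = 0.2989 * r + 0.5870 * g + 0.1140 * b

print(gray[0, 0])   # 0.2989*200 + 0.5870*100 + 0.1140*50 = 124.18
```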
Marcus—Your original image doesn’t have a colormap associated with it, so I really don’t know what you mean by “extract the colormap of the original image.” Can you clarify what you are trying to do?
This question refers to your reply no. 15 above, asking for more information about what I’m trying to do. I know very little about image manipulation, so I apologize beforehand if my description is not very good.
Let me first explain what the snippet of code above (reply no. 14) does. It takes a RGB JPEG image and converts it to a FIG file, which has a grayscale color map with 256 shades of gray. Using a branch of mathematics called optimal control theory I then generate images (FIG files) that are close to this original ‘target’ image, using the SAME colormap (with the PCOLOR command). In theory an image generated using the optimal control algorithm may exactly match the original image that I loaded (i.e., look the same). Here is my problem. I want to do the same thing but in color. I want to take my original color JPEG image and convert it to a Matlab FIG file that looks the same. Thus it will have a colormap associated with it that can be used in the generation of the images from the optimal control algorithm.
I hope this is enough information to explain what I am doing!
It sounds sort of like you want RGB2IND, which will convert your RGB image into an indexed image and a colormap.
the other Steve
Marcus—I don’t really understand your emphasis on FIG files. A FIG file simply contains the data necessary to reconstruct and display a MATLAB figure window. It isn’t an image. If you have the original color image data from the JPEG file, why not compare that image directly? Converting to an indexed image and saving as a FIG file seems like a very roundabout way to compare outcomes. However, if that’s really what you want to do, then I agree with Steve Lord—you might find RGB2IND useful.
Thanks, both Steves. I’m getting there, but there is still a problem.
OK. So I use my numerical algorithm to generate MATLAB figures, and I wish to compare these figures with my original loaded indexed image (from RGB2IND). And so that I'm not ‘comparing apples with oranges’ I generate these images using the colormap associated with the original indexed image. So Steve E., you are right. I don't need to convert to a FIG file, but I do need to convert to an indexed file in order to extract a colormap.
Now here is my problem, which is a little more complicated than I first described. After converting my original RGB image to an indexed file I then interpolate this indexed image onto a triangular mesh. (I need to do this because the numerical algorithm for generating the images is a Finite Element Method, where all unknowns lie on the vertices of the triangles.) But the interpolated image looks nothing like the original image (it looks random). Any ideas why I cannot interpolate the values of the indexed image onto a triangular mesh? The relevant code is given below.
Thank you again for any help you can provide.
% Import JPEG image as an 8-bit RGB image
image = imread('image.jpg');
% Convert image to an indexed image with 256 colors with colormap ‘map’
[indexed_image,map] = rgb2ind(image,256);
% Display indexed image
title('Original indexed image')
% Extract the no. of rows and columns
[Jx,Jy] = size(indexed_image);
% Construct 'meshgrid' on [0,2]-by-[0,2] to be used for interpolating
% onto triangular mesh
IndexIx = 1:Jx;
IndexIy = 1:Jy;
hx = 2/(Jx-1);
hy = 2/(Jy-1);
x = (IndexIx-1)*hx;
y = (IndexIy-1)*hy;
[X,Y] = meshgrid(x,y);
% Interpolate indexed image onto the vertices of the triangular mesh
interpolated_image = interp2(X',Y',indexed_image,xx,yy,'*bicubic');
% Plot interpolated image onto triangular mesh
title('Image interpolated onto triangular mesh')
view ( 2 )
% Note: xx and yy give the coordinates of the vertices on the triangular
% mesh, while t gives a list of nodes for the triangles.
Marcus—In general, doing any kind of math (interpolation, filtering, etc.) directly on the index values of an indexed image is a bad idea. The index values have no meaning other than as lookup indices into the colormap. 5 could be the color red, and 7 could be the color blue. If you interpolate midway between them and get 6, what does that mean? There could literally be any color in the 6th slot in the colormap, totally unrelated to red or blue. This is why I remain skeptical of the notion of doing some kind of quantitative analysis based on comparing indexed images.
But I do have one possibly constructive suggestion: Use a different RGB2IND syntax. Specifically, use the one where you specify the colormap. In your code you are using the syntax where you specify the desired number of colormap colors. With that syntax, RGB2IND produces a colormap optimized for that particular image. So you are probably comparing two indexed images with different colormaps.
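Steve's red/blue/6th-slot point can be made concrete with a small NumPy sketch (the 10-entry colormap here is invented for illustration):

```python
import numpy as np

# A tiny hypothetical colormap. Entries 5 and 7 (1-based) are red and blue;
# entry 6 happens to hold yellow, completely unrelated to either neighbor.
cmap = np.zeros((10, 3))
cmap[4] = [1.0, 0.0, 0.0]   # index 5: red
cmap[5] = [1.0, 1.0, 0.0]   # index 6: yellow (could be anything)
cmap[6] = [0.0, 0.0, 1.0]   # index 7: blue

midway_index = (5 + 7) // 2                    # interpolating indices gives 6
color_from_index = cmap[midway_index - 1]      # -> yellow
true_midway_color = (cmap[4] + cmap[6]) / 2    # actual midway color: dark purple

print(color_from_index)    # [1. 1. 0.]
print(true_midway_color)   # [0.5 0.  0.5]
```

Interpolating the index array lands on whatever color happens to sit in slot 6, while interpolating the colors themselves gives the expected mixture.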
Steve – but now we have come full circle! If I do as you suggest and choose a colormap beforehand, i.e. use the syntax X = rgb2ind(RGB, map), then the image will be colored differently from the original. Which explains why I was saying at the start that I wanted to extract a colormap from the original image. I already did as you suggest using map = gray(256). The interpolation works very nicely, but it’s not in color!
Marcus—So extract the colormap from the original, but then use the extracted colormap when you make the 2nd indexed image for comparison.
Steve – I would like to say that you have shown incredible patience with my questions. Thank you! We are almost there.
So how do I extract a good colormap from the original image? If I do, e.g.,
A = imread('image.jpg');
map = colormap;
B = rgb2ind(A,map);
the image B looks nothing like image A?
Marcus—Aha! Maybe we are indeed almost there. (You’ve shown even more patience with my answers.) I see a critical point of confusion in your last snippet of code. If image.jpg contains an RGB image, then image(A) displays the pixel colors directly, meaning that the figure colormap is not used. When you type map = colormap you are just getting the default figure colormap, which unfortunately has nothing whatever to do with the image being displayed!
So use rgb2ind to compute a colormap for the original image, like this:
A = imread('image.jpg');
imshow(A)
[X_A,map] = rgb2ind(A, 256);
imshow(X_A,map)   % or image(X_A), colormap(map)
Then when you get your second image, convert it to indexed using the colormap previously computed by rgb2ind.
B = imread('second_image.jpg');
X_B = rgb2ind(B, map);
imshow(X_B, map)
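The essence of rgb2ind with a fixed colormap can be sketched in Python (an approximation: this is plain nearest-color lookup, ignoring the dithering that rgb2ind applies by default):

```python
import numpy as np

def nearest_index(pixel, cmap):
    """1-based index of the colormap row closest to pixel (RGB values in [0,1])."""
    distances = np.sum((cmap - pixel) ** 2, axis=1)
    return int(np.argmin(distances)) + 1

# A hypothetical 3-color map: black, red, white.
cmap = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [1.0, 1.0, 1.0]])

print(nearest_index(np.array([0.9, 0.1, 0.1]), cmap))   # 2: closest to red
print(nearest_index(np.array([0.9, 0.9, 0.8]), cmap))   # 3: closest to white
```

Because both images are quantized against the same map, their index arrays refer to the same colors and can be compared meaningfully.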
Sorry Steve, but as before the interpolated image is still a mess (a random bunch of dots) – see code in reply no. 19.
Also, I’m not trying to compare one JPEG image with another JPEG image, but a single JPEG ‘target’ image (interpolated onto a triangular mesh after conversion with RGB2IND) with MATLAB figures resulting from numerical solutions on the same mesh.
Marcus—As I mentioned before, you can’t interpolate indexed images by interpolating the index array. I’m afraid I’m going to have to give up; I still suspect the whole idea is flawed.
Thanks for trying Steve. Just one last comment. If we do
A = imread('image.jpg')
map = bone(256); % for example
B = rgb2ind(A,map);
and then apply the interpolation process to image B it WORKS, but the color doesn’t necessarily match the original image. So you CAN apply interpolation to indexed images provided the colormap is in some ordered state.
Hey Steve. I have to use MATLAB to manipulate my research results. My problem is that I don't know how to calibrate my images. I have written a small script to convert pixels into centimeters. For a specific distance, I know how many pixels I have; for example, 22 cm is 260 pixels in my image. So I found the conversion from pixels to centimeters: it's 11.8 pixels/cm. But after that I don't know how to write the script so that it makes the conversion for every image. Can you help me, please? Thanks.
Stella—Can you be specific about the desired inputs and outputs of your script? What form do they take?
I have a sequence (about 250 frames) and I want each frame to have real dimensions. I record the movement of dye in water, so I need to know how many centimeters it moves every 5 seconds. So I know the time in each frame, and I know that pixels*0.08645 = cm (because, as I said before, 260 pixels == 22 cm).
Stella—I’m afraid I’m still not getting it. If you already know how to convert from pixel units to centimeters, then what else do you need?
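For reference, the calibration Stella describes is a single scale factor. A Python sketch (note that 22/260 works out to about 0.0846 cm per pixel, slightly different from the 0.08645 she quotes):

```python
# Calibration from a known object: 260 pixels span 22 cm.
PX_SPAN = 260.0
CM_SPAN = 22.0
cm_per_px = CM_SPAN / PX_SPAN   # about 0.0846 cm per pixel

def px_to_cm(px):
    """Convert a pixel distance in the image to centimeters."""
    return px * cm_per_px

print(px_to_cm(118))   # e.g. a 118-pixel displacement is about 9.98 cm
```

The same factor applies to every frame of the sequence, as long as the camera geometry does not change between frames.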
This is my first message to MATLAB Central. I have random points X and Y, where X and Y are each "m times 1" vectors. Then, I generate triangular meshing via TRI = delaunay(X,Y), which gives me an "n times 1" vector.
At the same time, I have a set of surface data of “n times 1” vector. May I know how to surf the surface data (“n times 1”) on the information of TRI (“n times 1”) ?
From the help file, I know that if using the command: trisurf(tri,X,Y,rand(size(X),1)), I can plot the distributed point data. So, my question is, how to plot the distributed surface data, instead of the point data, based on the information of TRI?
Gabriel—You have posted a comment to my image processing blog. Since your question doesn’t appear to be related to image processing, perhaps you meant to post a message to the MATLAB newsgroup, comp.soft-sys.matlab? Go to MATLAB Central and click on “MATLAB Newsgroup.”
oh, sorry for the misplacement. All because I saw no. 19 and the use of “trisurf” has reminded me of this problem. Thank you for the clue. I think I would post there.
I have an image question, which is hopefully relevant to this blog.
I have a MATLAB model (a finite element model) and want to map an image [imread('image.jpg')] onto the generated nodes of my model. Then I want to displace all the nodes, expecting to see the stretching of the image.
My question is:
Is there any function that can assign every image pixel (or gradient level) to every node (or data point), or to the surface between the nodes, so that the image is shown stretched when the nodes are displaced?
Thank you for your time.
Gabriel—You can texture-map an image onto a surface object. See the MATLAB Handle Graphics documentation about surfaces and texture mapping.
I want the source code for rotating an image in digital image processing using MATLAB.
V.Ravi—See the code in the Image Processing Toolbox.
Have a look at the following code. I have an image (apple.bmp) which is grayscale. I want to display an indexed image on top of it with a transparency value associated with it. But I get some weird pink areas on my image. Could you please let me know why this is happening?
label_color(1,:)=[0 0 0];
label_color(2,:)=[1 0 0];
label_color(3,:)=[0 1 0];
label_color(4,:)=[0 0 1];
label_color(5,:)=[1 1 0];
label_color(6,:)=[0 1 1];
label_color(7,:)=[1 0 1];
Sridharan—When imshow displays a grayscale image, it installs a gray colormap in the figure. In your second call to imshow, it replaced the figure’s gray colormap with your label_color colormap. That’s why your gray image starts to look pink. A figure can only have one colormap active at a time. I suggest that you convert your images to truecolor in order to get the desired effect.
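The truecolor approach Steve suggests can be sketched numerically in Python (the base intensity, label color, and alpha value are all made up for illustration):

```python
import numpy as np

# Blend a label color over a gray base image wherever the label mask is set.
alpha = 0.4                               # transparency of the overlay
gray = np.full((2, 2, 3), 0.6)            # gray base image replicated to RGB
label_rgb = np.zeros((2, 2, 3))
label_rgb[0, 0] = [1.0, 0.0, 0.0]         # one labeled pixel, colored red
mask = np.zeros((2, 2), dtype=bool)
mask[0, 0] = True

out = gray.copy()
out[mask] = (1 - alpha) * gray[mask] + alpha * label_rgb[mask]

print(out[0, 0])   # blended pixel: [0.76 0.36 0.36]
print(out[1, 1])   # unlabeled pixel stays gray: [0.6 0.6 0.6]
```

Because the result is a truecolor array, no figure colormap is involved, so the gray background and the colored labels can coexist in one image.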
Steve, we are taking a color image and converting it into a gray image; then a wavelet transform is applied to that gray image. My question to you is: can we recover the original color image from that wavelet-transformed gray image?
Smita—When you converted your color image to gray, you discarded information. You can’t get it back.
Why am I having trouble using the imcrop function with a scaled image? Or, rather, why does the image not appear scaled anymore when I try to do imcrop on it? I have 16bit TIFF files with values that go from 0 to like 1500. I can use the imagesc function to display them just fine. But what I want to do then is to create regions of interest on such a scaled image. I was trying to use imcrop, but as soon as I put in [ROI,rect] = imcrop(I,hot); the scaled image that was on the screen is replaced by a solid black rectangle. Why is this, and how do I make it work how I want?
Thank you if you can help!
Alex—The default colormap returned by hot has only 64 colors, so it doesn’t make sense to use it to display a 16-bit image with values that go from 0 to 1500. When I try that, I get an image that’s all white, which is what I would expect. You might have better luck with something like this:
% Display using the autoscaling syntax of imshow
imshow(I, [])
% Crop
[ROI,rect] = imcrop;
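The autoscaling syntax stretches the actual data range to the display range, regardless of the data type's nominal range. A Python sketch of that scaling (the sample values are hypothetical):

```python
import numpy as np

# Hypothetical 16-bit data whose values occupy only part of the uint16 range.
I = np.array([[0, 750, 1500]], dtype=np.uint16)

# Stretch the actual min..max of the data to [0, 1] for display.
lo, hi = float(I.min()), float(I.max())
scaled = (I.astype(float) - lo) / (hi - lo)

print(scaled)   # [[0.  0.5 1. ]]
```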
Now what if you wanted to take that original imagesc(A) and highlight a few different pixels to get the average relative to the original array A? Thanks!
Greg—I don’t understand your question. Which pixels? Highlight them how? And how is highlighting related to getting the average?
I have a gray image (values ranging from 0 to 5) and I want to color pixels which have certain value (say 1.2).
How can I do that?
Hope you can help.
Ilan—Try my imoverlay function on the MATLAB Central File Exchange. Compute the mask using something like this:
bw = abs(A - 1.2) <= tolerance;
I wonder how these linear re-scalings work when you have -Inf values, for example. More precisely, when I plot the log-intensity map of a magnitude spectrogram of a sound (so that the components with zero intensity turn into -Inf), imagesc shows an acceptable map. However, if I try to save that log-intensity matrix to an image or ASCII file myself, I fail. I need to know how imagesc copes with these situations. Can you help me with that?
Marco—I think that imagesc just ignores nonfinite values when computing the range.
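A Python sketch of that behavior (an assumption based on Steve's reply, not MATLAB's actual source): compute the color limits from the finite values only.

```python
import numpy as np

# Log of a magnitude spectrum: zero intensity produces -inf.
data = np.log(np.array([0.0, 1.0, 10.0, 100.0]))   # first entry is -inf

# Ignore nonfinite values when computing the display range.
finite = data[np.isfinite(data)]
clim = (finite.min(), finite.max())

print(clim)   # the -inf is left out of the range: (0.0, log(100))
```

Applying the same idea before writing the matrix to a file (e.g. replacing the -Inf entries with the finite minimum) may be what's needed to save the map outside of imagesc.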
When using imagesc, the data is mapped LINEARLY to the colors of the colormap. Is there a way to change the mapping function to, e.g., a logarithmic one?
(I mean, alternatively I could modify the data and then also modify the tick labels on the colorbar accordingly… but changing the mapping function would be much more elegant.)
Stephan—No, there’s nothing like that. I would love to see a more general data-to-color mapping capability available in MATLAB for image display. I’ll mention it to the graphics team.
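The workaround Stephan alludes to (transform the data, then relabel the colorbar ticks) can be sketched in Python; the sample data and tick positions are illustrative:

```python
import numpy as np

# Data spanning several decades: apply the nonlinear mapping yourself.
data = np.array([1.0, 10.0, 100.0, 1000.0])
log_data = np.log10(data)               # display this with the linear mapping

# Then label the colorbar ticks in the original units.
tick_positions = np.array([0.0, 1.0, 2.0, 3.0])   # positions on the log scale
tick_labels = 10.0 ** tick_positions              # values to print at each tick

print(log_data)      # [0. 1. 2. 3.]
print(tick_labels)   # [   1.   10.  100. 1000.]
```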