I wrote previously that most spatial image transformation implementations use inverse mapping. The Image Processing Toolbox function imtransform is implemented using this technique.
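As a reminder, inverse mapping works backward: for each output pixel location, apply the inverse transformation to find the corresponding input-space location, and interpolate the input image there. A minimal sketch (the particular transformation and coordinates here are made up for illustration):

```matlab
% Inverse mapping for a single output pixel, sketched.
% Convention: (u,v) is input space, (x,y) is output space.
tform = maketform('affine', [2 0 0; 0 2 0; 0 0 1]);  % 2x magnification

x = 100; y = 50;               % an output-space location
uv = tforminv(tform, [x y]);   % map backward into input space

% uv is [50 25]; the output pixel value would be interpolated
% from the input image in the neighborhood of (50, 25).
```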
Here's an interesting issue that arose during the design of imtransform: How does it know where the output image is located in the x-y plane? In other words, in the inverse mapping diagram below, how does it know exactly where the output grid should be?
There are three basic problems you can have with the output grid. It can be too small; it can be too big; or it can be in the wrong place entirely. I'll illustrate these situations with an output grid that is the same size as the input grid, and that is also in the same place.
In the first example, the spatial transformation magnifies the input image. The output grid doesn't cover enough territory in output space to capture the entire transformed image.
In the next example, the spatial transformation shrinks the input image. As a result, the output grid covers too much territory. The black output pixels below are output image pixels that aren't needed to capture the entire transformed image.
In the final example, the spatial transformation moves the input somewhere else. Maybe the transformation is simply a 1000-pixel horizontal translation. In this situation, the output grid doesn't contain any of the transformed image!
When we were designing imtransform, we thought that any of these scenarios would likely result in frustrated users calling tech support. We tried to avoid this by making imtransform do "the right thing."
Next time, I'll describe the calculation imtransform does to automatically produce the results expected by users. (Almost all the time, that is.) If you want a preview, take a look at the function findbounds.
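As a rough preview, for an affine transformation the bounds calculation amounts to transforming the corners of the input image's bounding box forward and taking the extremes. A sketch using the 1000-pixel translation example from above (the image size is invented; non-affine transformations may require findbounds to do more work than this):

```matlab
% maketform's affine matrix T satisfies [u v 1]*T = [x y 1],
% so this T is a 1000-pixel horizontal translation.
tform = maketform('affine', [1 0 0; 0 1 0; 1000 0 1]);

% Input-space bounding box as [xmin ymin; xmax ymax], assumed here
% for a 400-by-300 image in the default spatial coordinate system.
inbounds = [0.5 0.5; 400.5 300.5];

outbounds = findbounds(tform, inbounds)
% outbounds is [1000.5 0.5; 1400.5 300.5]: the automatically chosen
% output grid follows the image to its translated location, so the
% result isn't an all-black image.
```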
7 Comments
I don’t have any question regarding your post above, but I am wondering if you could give some suggestions/advice on problems I am facing in image processing.
I have numerous images containing some objects. I have been trying to do image segmentation in MATLAB, i.e., to extract the object of interest, but so far I can only get parts of it (not the whole object). The rest of the object is more or less transparent and hence couldn’t be detected. Can you tell me if there is any way to obtain a whole object from a few disconnected parts of it?
Thanks a lot for your help
Tien – send me a sample image.
I can’t find your email address here. Can you tell me how to send you the sample image?
Tien – my name at the upper right corner has a mailto link. The address is firstname.lastname@example.org.
I sent a couple of sample images to your email account last week (June 7th). I just want to confirm that you’ve received them.
BTW, I don’t expect you to solve the problem for me. All I’d like to ask for is some direction/ideas to work on.
Thanks in advance,
I want to align two images. I have pose parameters T1, T2, S, theta, which are respectively the translation, scaling, and rotation. But I have the third problem you mention in your post. How can I solve it?
Mahsa – Take a look at my other spatial transformation blog posts, especially the ones about translation and controlling the output grid.
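In short, imtransform lets you control the output grid directly via its 'XData' and 'YData' parameters, which specify the region of output space the output image should cover. A sketch, assuming a pure translation (the demo image and numbers are just for illustration):

```matlab
A = imread('cameraman.tif');   % any grayscale demo image works
tform = maketform('affine', [1 0 0; 0 1 0; 1000 0 1]);  % shift right 1000

% Default behavior: imtransform chooses an output grid (via findbounds)
% that contains the translated image.
B1 = imtransform(A, tform);

% Explicit grid: pin the output to the input's original location.
% Because the image has moved 1000 pixels away, B2 is entirely fill
% values -- the "wrong place" scenario from the post, made on purpose.
B2 = imtransform(A, tform, 'XData', [1 size(A,2)], ...
                           'YData', [1 size(A,1)]);
```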