Steve on Image Processing

Spatial transformations: Three-dimensional rotation

Posted by Steve Eddins

Blog reader Stephen N., who's been following my posts about spatial transformations, asked me last week how to rotate a three-dimensional image.

The Image Processing Toolbox function tformarray is a very general multidimensional spatial transformer and can be used for three-dimensional rotation. Here's how.

Make a three-dimensional blob

First, let's make a three-dimensional image containing a blob that will easily show the effect of a rotation.

[x,y,z] = ndgrid(-1:.025:1);
blob = z <= 0 & z >= -0.75 & x.^2 + y.^2 <= sqrt(0.25);
blob = blob | (z > 0 & (abs(x) + abs(y) <= (0.5 - z)));

Display the blob using isosurface and patch.

p = patch(isosurface(blob,0.5));
set(p, 'FaceColor', 'red', 'EdgeColor', 'none');
daspect([1 1 1]);
view(3)
camlight
lighting gouraud

Make a 3-D affine tform struct

We want to rotate the blob about its own center. For me, the simplest way to construct an affine transform matrix that will do that is to use three steps:

1. Translate the middle of the blob to the origin.

2. Rotate the blob.

3. Translate the rotated blob back to its starting location.

Here's the first translation:

blob_center = (size(blob) + 1) / 2
blob_center =

    41    41    41

T1 = [1 0 0 0
    0 1 0 0
    0 0 1 0
    -blob_center 1]
T1 =

     1     0     0     0
     0     1     0     0
     0     0     1     0
   -41   -41   -41     1

Now here's the rotation. In this example we'll rotate about the second dimension.

theta = pi/8;
T2 = [cos(theta)  0      -sin(theta)   0
    0             1              0     0
    sin(theta)    0       cos(theta)   0
    0             0              0     1]
T2 =

    0.9239         0   -0.3827         0
         0    1.0000         0         0
    0.3827         0    0.9239         0
         0         0         0    1.0000

And here's the final translation.

T3 = [1 0 0 0
    0 1 0 0
    0 0 1 0
    blob_center 1]
T3 =

     1     0     0     0
     0     1     0     0
     0     0     1     0
    41    41    41     1

The forward mapping is the composition of T1, T2, and T3.

T = T1 * T2 * T3
T =

    0.9239         0   -0.3827         0
         0    1.0000         0         0
    0.3827         0    0.9239         0
  -12.5691         0   18.8110    1.0000

tform = maketform('affine', T);

Let's do a quick sanity check: the tform struct should map the blob center to itself.

tformfwd(blob_center, tform)
ans =

    41    41    41

What the tformarray inputs mean

Now let's see how to make the inputs to the tformarray function. The syntax of tformarray is B = tformarray(A, T, R, TDIMS_A, TDIMS_B, TSIZE_B, TMAP_B, F).

A is the input array, and T is the tform struct.

R is a resampler struct produced by the makeresampler function. You tell makeresampler the type of interpolation you want, as well as how to handle array boundaries.

R = makeresampler('linear', 'fill');

TDIMS_A specifies how the dimensions of the input array correspond to the dimensions of the spatial transformation represented by the tform struct. Here I'll use the simplest form, in which each spatial transformation dimension corresponds to the same input array dimension. (Don't worry about the details here. One of these days I'll write a blog posting showing an example of when you might want to do something different with this dimension mapping.)

TDIMS_A = [1 2 3];

TDIMS_B specifies how the dimensions of the output array correspond to the dimensions of the spatial transformation.

TDIMS_B = [1 2 3];

TSIZE_B is the size of the output array.

TSIZE_B = size(blob);

TMAP_B is unused when you have a tform struct. Just specify it to be empty.

TMAP_B = [];

F specifies the values to use outside the boundaries of the input array.

F = 0;

Call tformarray to transform the blob

blob2 = tformarray(blob, tform, R, TDIMS_A, TDIMS_B, TSIZE_B, TMAP_B, F);

Display the rotated blob

clf
p = patch(isosurface(blob2,0.5));
set(p, 'FaceColor', 'red', 'EdgeColor', 'none');
daspect([1 1 1]);
view(3)
camlight
lighting gouraud



Published with MATLAB® 7.2

Comments

The makeresampler function allows only ‘nearest’, ‘linear’, and ‘cubic’ as interpolation methods. Is there an easy way to implement ‘spline’ interpolation? How would that work?

But the resampling ‘Type’ can be ‘custom’, and then a ‘ResampleFcn’ can be defined, which could be a spline function, right? Do you have an example of that?

I want a program that lets me convert several 2-D slices into a 3-D image. Can you help me, please?
Thank you.

Hey, I have a 3-D control grid, and I know the coordinates of my original control grid points and the new control grid points. I just want tformarray to deform my 3-D image according to my deformed control grid, but I can't get the syntax right. What's the correct way to input this data? I'm trying to use TMAP_B for this purpose, but it keeps giving me errors about my dimensions not matching.
Ax = list of new points x coordinate
Ay = etc. for y.
Az…
.
.
Bz = list of old points z coordinate
I'm inputting TMAP_B as [Ay Ax Az By Bx Bz];

and my control grid resolution does not match my image resolution.

thanks

Dan—If your control grid resolution does not match your image resolution, then it sounds like you want to infer a three-dimensional transformation from your control points and then apply that transformation to your image. tformarray doesn’t do that. In fact, there isn’t a function in the Image Processing Toolbox that infers 3-D transformations. The TMAP_B can be used when you know exactly which input-space location corresponds to every output-space location. For an example that shows how to use TMAP_B in this fashion, see my function iminterpn.

I looked at the iminterpn function but I’m not entirely sure how to get it to do what I want.

I have a routine that can calculate the transformation by determining where each individual pixel is moved because of the control grid. The problem is that the pixels are moved by non-integer amounts, and when you expand an object the spaces in between the pixels aren't filled by this method.

tformarray only appears to work as an interpolator from what I've seen. Can I use it to reconstruct the image if I can tell it where each individual pixel is moved to? I.e., fill in the space between sparse pixels and stay true to the original image?

Dan—The problem you describe with pixels not being filled in by your method is characteristic of forward mapping spatial transformation algorithms. Inverse mapping algorithms do not have this problem. See my April 2006 and May 2006 posts for more information. tformarray and imtransform use inverse mapping. If I understand your description correctly, then yes, I think tformarray can do what you want.

Hi Steve,

I have had a lot of help from your blog posts, and they have solved many of my problems. However, one obstacle remains: I am trying to rotate an entire CT image set (512x512x100). Using tformarray this should be simple, following the example, and for rotation in the axial plane it works fine. In the Y-plane, however, there is the problem that the Z-direction is sampled differently from X and Y. I tried scaling/rescaling in the translation matrices, but this did not work. Do you have any ideas? Perhaps this can be dealt with using the input arguments of tformarray?

Koen—Unlike imtransform, tformarray does not offer syntaxes to scale any of the axes. You will have to create an affine scaling matrix yourself, and then compose it (via matrix multiplication) with your affine rotation matrix.
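
For concreteness, here is a minimal sketch of that composition, reusing the T1, T2, and T3 matrices from the post and assuming (purely for illustration) that the slice spacing along the third array dimension is 2.5 times the in-plane spacing:

% Sketch only: compose an anisotropic scaling with the rotation so the
% rotation happens in physically isotropic coordinates. z_scale is an
% assumed voxel-spacing ratio (slice spacing / pixel spacing).
z_scale = 2.5;
S  = diag([1 1 z_scale 1]);      % array coordinates -> physical coordinates
Si = diag([1 1 1/z_scale 1]);    % and back again

% Translate to the center (T1), scale to physical units, rotate (T2),
% scale back, translate back (T3).
T = T1 * S * T2 * Si * T3;
tform = maketform('affine', T);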

Steve, can you take a look at this piece of code, which I wrote based on your example?

Why are the results in fig 3 and 4 not the same?

——–

close all
clear all

dims = 17;

cube = zeros([dims dims dims]);

[x,y] = meshgrid(-floor(dims/2):floor(dims/2),-floor(dims/2):floor(dims/2));
cube(:,ceil(dims/2),:) = sqrt(x.^2+y.^2) < dims/2;

figure(1);
p = patch(isosurface(cube,0.5));
set(p, 'FaceColor', 'red', 'EdgeColor', 'none');
daspect([1 1 1]);
view(3)
camlight
lighting gouraud

phi = pi/2;

center = ceil(dims/2);

P1 = [1 0 0 -center; 0 1 0 -center; 0 0 1 -center; 0 0 0 1]';
P2 = [1 0 0 center; 0 1 0 center; 0 0 1 center; 0 0 0 1]';
R = [cos(phi) 0 -sin(phi) 0; 0 1 0 0; sin(phi) 0 cos(phi) 0;0 0 0 1]';

T = maketform('affine',P1*R*P2);
S = makeresampler('linear', 'fill');

%% check transform

index = find(cube); 
[y , x, z] = ind2sub(size(cube),index);

figure(2);
scatter3(x,y,z);
view(3)

tesp = tformfwd(T,[x y z]);
figure(3);
scatter3(tesp(:,1),tesp(:,2),tesp(:,3));
view(3)

%% apply transform to cube and show results

cubeR = tformarray(cube,T,S,[1 2 3],[1 2 3],size(cube),[],0);

figure(4);
p = patch(isosurface(cubeR,0.5));
set(p, 'FaceColor', 'red', 'EdgeColor', 'none');
daspect([1 1 1]);
view(3)
camlight
lighting gouraud

Daniel—Because you’ve reversed the dimension order in the output of ind2sub, compared to the dimension order you’re passing to tformfwd and tformarray. In your call to tformarray, you are mapping the first, second, and third dimensions of the mathematical transformation to the first, second, and third dimensions of the input and output arrays. Try this instead:

cubeR = tformarray(cube,T,S,[2 1 3],[2 1 3],size(cube),[],0);

New to 3D graphics:
How should one specify a 3-D volume for transformation functions such as tformarray when starting with an m-by-n matrix, where m is the number of points and columns 1, 2, and 3 correspond to x, y, and z? Thanks

Sorry. If we have a matrix specifying 3D coordinates:

point     x     y     z
  1      x1    y1    z1
  2      x2    y2    z2
  ...
  m      xm    ym    zm
and we want to transform it, how should we input this 3D volume appropriately for tformarray?

Thanks a lot Steve for your reply!
It's true, I still sometimes get confused by MATLAB indexing, i.e. (y,x,z) instead of (x,y,z) ;)

Hi Steve,

How can one inform tformarray about the range of output-space coordinates? I need to translate a 3-D stack in the XY plane but want the output-space range to be the same as the input range. Currently, I have used the 'XData' and 'YData' options with imtransform and am applying it to each slice in sequence.

thanks.

Shalin—tformarray is a low-level function that works only in the natural MATLAB array indexing coordinates. You can control the output size with the TSIZE_B parameter, but you’ll need to fold any spatial coordinate system scaling into the definition of the transformation.
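
As a minimal sketch of what that looks like for a pure in-plane translation (here A stands in for the 3-D stack, and dr/dc are placeholder shifts in row/column units):

% Sketch only: translate a 3-D stack within the row/column plane while
% keeping the output range equal to the input range (TSIZE_B = size(A)).
dr = 10;  dc = -5;               % assumed shifts, in array units
T = [1  0  0  0
     0  1  0  0
     0  0  1  0
     dr dc 0  1];
tform = maketform('affine', T);
R = makeresampler('linear', 'fill');
B = tformarray(A, tform, R, [1 2 3], [1 2 3], size(A), [], 0);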

Hi Mr. Steve,

I’m doing research in iris biometric images.

I’m trying to rotate a 2D image in yaw, pitch and roll, i.e. in a range of 30º. It only works fine with yaw (with the pitch and roll set to 0º).

For pitch and roll, I can only get good visual results in the range:
-1 < (pitch, roll) < 1

I = imread('something');

Y=-10;  
P=0.02;
R = 0;

yawMatrix = [cosd(Y) -sind(Y) 0; sind(Y) cosd(Y) 0; 0 0 1];
pitchMatrix = [cosd(P) 0 sind(P); 0 1 0; -sind(P) 0 cosd(P)];
rollMatrix = [1 0 0; 0 cosd(R) -sind(R); 0 sind(R) cosd(R)];

T = yawMatrix * pitchMatrix * rollMatrix;

t_proj = maketform('projective',T);
I_projective = imtransform(I,t_proj, 'size', size(I), 'fill', 128);

imshow(I_projective);

Thanks in advance for any help.

Best regards

Rui—Two thoughts come to mind that might help you with debugging your code. First, make sure that your T matrix is not transposed from what maketform expects. Review the reference page for maketform to be sure. Second, if the image doesn't look like what you expect, the problem could be with your spatial transformation. Use tforminv to transform points from output space back to input space and make sure the spatial warping function is what you expect it to be.
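
A minimal sketch of that second check, assuming I and t_proj are the image and projective tform from the code above:

% Sketch only: map the output-image corners back into input space with
% tforminv to see whether the warp behaves as expected.
w = size(I, 2);  h = size(I, 1);
out_corners = [1 1; w 1; w h; 1 h];          % (x,y) corners of the output image
in_corners  = tforminv(t_proj, out_corners)  % where each corner comes from in the input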

Hi Steve

Very useful code, but what if parts of my rotated and translated object fall outside the original boundaries? How can I define the boundaries so that the output object is contained entirely in the new volume? (Suppose I want to rotate a long cylinder by 45 degrees; obviously, one of the dimensions becomes larger, but standard tformarray is going to cut that part off, no?) I can find the expected bounds of the output volume (findbounds), but how do I feed them to tformarray? I tried to pad the 3-D array with zeros (depending on the outcome of findbounds), but I think I am doing something wrong, because when I pad the matrix my center position changes, and as such my tform changes too.

Thanks in advance

Jan.

Jan—The function tformarray does not have the convenience syntaxes of imtransform for setting up alternative coordinate system mappings. Any coordinate system scaling or translating that you might need, for example to shift the image over in order to capture all of it, has to be incorporated into the spatial transformation function itself.
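
Here is a minimal sketch of one way to do that, assuming blob and the composed 4-by-4 matrix T from the post; findbounds estimates where the output lands, and the extra translation is folded into the tform:

% Sketch only: enlarge the output and shift the rotated result into view.
in_bounds  = [1 1 1; size(blob)];             % low/high corners of the input
out_bounds = findbounds(maketform('affine', T), in_bounds);

shift   = 1 - floor(out_bounds(1,:));         % move the low corner to subscript 1
TSIZE_B = ceil(out_bounds(2,:)) + shift;      % large enough to hold the high corner

T_shift = eye(4);
T_shift(4,1:3) = shift;                       % translation in array coordinates

tform2 = maketform('affine', T * T_shift);    % rotate, then shift into view
R = makeresampler('linear', 'fill');
blob2 = tformarray(blob, tform2, R, [1 2 3], [1 2 3], TSIZE_B, [], 0);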

Thanks for the answer, Steve. I thought I had some kind of solution by estimating the bounds using the findbounds function. By pre- or post-padding the 3-D array and recalculating the TFORM structure (based on a new translation due to the padding), I tried to fit the entire volume. Yet sometimes this does not work (strangely enough, in the Z-direction): part of my volume is still out of bounds. Is this due to the approximation in the findbounds algorithm? Or am I doing something wrong?

Thanks

Jan.

Jan—What kind of tform are you using? Does it have a forward mapping? You can find out by trying to use tformfwd. If your tform does not have a forward mapping, then it’s much harder for findbounds to figure out where the transformed image is going to “land” in output space; it has to use a search method.

Hi Steve,

Using tformarray, I am able to reformat axial slices of my CT data to obtain coronal and sagittal views. However, I would like to reformat my axial slices along an oblique axis (say 15 degrees from the horizontal) to simulate gantry tilt. How can I do this?

Thanks

Mark—I don’t know anything about that, but it sounds like something that could be implemented using a 3-D affine transform.

Hi,

I am working on trajectory analysis. Let's say I already have the trajectory corresponding to a car moving on the ground. I can plot the trajectory with plot(x,y). Now I want to create a small JPEG of the car and superimpose it on the trajectory plot to show the position of the car. For this, I need to insert the JPEG onto the (x,y) plot multiple times and also rotate it. Imagine a car going around in a circle, and you have to plot its position and orientation, say, every second. Can you help?

Imon—Form an affine transformation matrix that rotates and shifts the image appropriately, and then use imtransform to warp the image. Alternatively, just rotate the image using imrotate, display the image superimposed on your plot, and give the image object the appropriate XData and YData properties so that it appears in the desired location.
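
A minimal sketch of the imrotate approach, with an assumed image file ('car.jpg'), an example circular trajectory, and a made-up pixels-to-plot-units scale factor:

% Sketch only: superimpose a rotated car image on a trajectory plot.
car = imread('car.jpg');                      % assumed image file

t = linspace(0, 2*pi, 50);                    % example circular trajectory
x = 10*cos(t);  y = 10*sin(t);
plot(x, y, 'b-'); hold on; axis equal

k   = 10;                                     % draw the car at the k-th sample
ang = atan2(y(k+1) - y(k), x(k+1) - x(k)) * 180/pi;   % heading, in degrees

rotated = imrotate(car, ang, 'bilinear', 'loose');
h = size(rotated, 1);  w = size(rotated, 2);
scale = 0.02;                                 % assumed pixels-to-plot-units factor

% Place the rotated image at (x(k), y(k)). With the default axes YDir the
% image appears flipped top-to-bottom; use flipud(rotated) if that matters.
image('CData', rotated, ...
      'XData', x(k) + scale*[-w w]/2, ...
      'YData', y(k) + scale*[-h h]/2);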

Hi,
I have rotated an image through 45 degrees; then, to obtain the original image, I rotated it through -45 degrees. I'm not getting the exact original image back. How can I get the original image back?
Reji

Reji—Since rotation involves interpolation, you can’t expect to get the original image back exactly except for angles that are multiples of 90 degrees.
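
A quick sketch that demonstrates this, using a checkerboard test image (the exact error values depend on the interpolation method used):

% Sketch only: rotating by 45 degrees and back is not lossless.
I = checkerboard(8);
J = imrotate(imrotate(I, 45, 'bilinear', 'crop'), -45, 'bilinear', 'crop');
max(abs(I(:) - J(:)))          % nonzero: the round trip changes the image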

Hi Steve,

This info is very valuable. I found the answer to a question I was trying to solve, i.e., how to draw a thick-walled cylinder in MATLAB.

The thing I would like to ask is how to figure out the dimensions and coordinates where MATLAB is going to place the cylinder.

Also, why can't the isovalue be greater than or equal to 1?

This is the modified code I took from your explanation:

[x,y,z] = ndgrid(-1:.025:1);
blob = z <= 0 & z >= -0.75 & x.^2 + y.^2 >= sqrt(0.25) & x.^2 + y.^2 <= sqrt(0.35);

p = patch(isosurface(blob,0.5));
set(p, 'FaceColor', 'green', 'EdgeColor', 'none');
daspect([1 1 1]);
view(3)
camlight
lighting gouraud  

I appreciate your help.

Thanks

I am new to 3-D processing in MATLAB. I have a question about how to assign an arbitrary coordinate system to a 3-D image volume and use that coordinate system to visualize the data.

Say I have a 100*50*60 volume. For each index (i,j,k) in the MATLAB array, I know the corresponding x, y, and z in my coordinate system.

MATLAB by default takes the second index as x and the first as y. How can I force it to use a custom coordinate system for this volume?

Thanks,
Vijay

Steve,
Kindly refer to your response no. 8 to Dan.
I have two 3-D images (e.g., I1 and I2) in which I know the correspondences of four control points. How can I apply a three-dimensional spatial transformation to I1? And what is the minimum number of control points I need to specify for rigid and affine transformations, respectively, in 3-D?

Sohaib—You can use tformarray, as I’ve described in this post, to apply a three-dimensional affine transform. I believe the three-dimensional affine transform matrix has 12 free parameters and that you will need 6 pairs of control points. See the subfunction findAffineTransform in cp2tform.m if you want to see how the math works for inferring the transform matrix.

Steve, I think I didn't make myself clear.
I am facing exactly the same scenario and problem as Dan: I have a 3-D image I1, and I know the old and new x, y, z locations of only four control points. I don't know the transformation matrix. What I want is to infer a three-dimensional transformation from these four control points and then apply that transformation to image I1. You said that tformarray can't do that, so what should I do?

Sohaib—I guess I'm the one who wasn't clear. I said that tformarray doesn't do that, meaning that tformarray doesn't infer transformations. tformarray can be used, however, to APPLY a 3-D transformation once it is inferred. Because there isn't a toolbox function that infers a 3-D affine transform, I suggested that you look at the math in findAffineTransform in cp2tform.m to see how you might formulate the problem yourself.

And … I don’t think you have enough control points.

Hi Steve,
I am just curious about the internal structure of tformarray, I mean how it changes the coordinate values. As far as I know, maketform stores both forward and inverse mapping data. How does tformarray use the forward and inverse data from maketform?

Hi Steve,
Thanks for this blog! One question: I would like to use tformarray for a general affine transform. However, when performing a simple downscaling of one of my axes, in the example below the x-axis by a factor of 3, I get nonsensical results. In MATLAB 7.5.0 under Windows XP, the code below produces an image with the x-axis scaled down, but with the tiles having different sizes in that dimension. Am I using tformarray incorrectly?

Thanks.
Esben


img = checkerboard(8);
[n,m] = size(img);
Tf = [1 0 0; 0 1/3 0 ; 0 0 1];

scaled_img = tformarray(img, maketform('affine',Tf'),makeresampler('linear', 'fill'),(1:2),(1:2), [n round(m/3)] ,[],0);
figure; imagesc(scaled_img), colormap(gray);

OK, thinking about the above example, I realize that tformarray in that case simply resamples the original image along the x-axis, and that the result makes sense. But let me then give an example that illustrates my real problem.

Below I generate an image with a grid of 4-by-4 black blocks separated by white vertical and horizontal bars. When I try to downscale THAT image, it looks as if I'm zooming in or something. Any suggestions on what is happening and how to perform the scaling properly?

Thanks,
Esben

%******* Generate grid image************
N = 64;
img = zeros(N);
nlevels = 2;
nlines = 2^nlevels - 1;
for i = 1:nlines
    c(i) = i*(N+1)/2^nlevels;
end
c = round(c);
r = c;
img(:,c) = 255;
img(r,:) = 255;
figure; imagesc(img), colormap(gray);
%**************************************

Tf = [1 0 0; 0 1/3 0 ; 0 0 1];

scaled_img = tformarray(img, maketform('affine',Tf'),makeresampler('linear', 'fill'),(1:2),(1:2), [N round(N/3)] ,[],0);
figure; imagesc(scaled_img), colormap(gray);

Steve, kindly refer to post no. 40.
I was able to infer the 3-D affine transformation matrix from the correspondence of 4 pairs of control points by modifying cp2tform.m. I don't know why you were insisting on 6 pairs of CPs, when each CP gives three equations (x, y, z), so 4 CPs give 12 equations in total.
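
For readers following along, here is a minimal sketch of that estimation, assuming in_pts and out_pts are hypothetical N-by-3 arrays of corresponding points (N >= 4, not all coplanar); it is a plain least-squares fit rather than the toolbox's own code:

% Sketch only: infer a 3-D affine transform from matching control points.
N = size(in_pts, 1);
A = [in_pts, ones(N,1)] \ out_pts;     % 4-by-3 solution of [P 1]*A = Q

T = [A, [0 0 0 1]'];                   % pad to the 4-by-4 form maketform expects
tform3d = maketform('affine', T);

% Check: the forward mapping should reproduce the output points.
max(abs(tformfwd(in_pts, tform3d) - out_pts))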

Hello, I am finding your website very useful. Thank you.
I want to rotate a 3-by-N array, with a translation and a rotation about the translated point. If I do not want to move back to the original point, could you advise me on the method?
This is my code. Please assume there is a 3-by-36 array as input.


% [row col] = size(nodes);  row = 3, col = 36
% Then I build the translation and rotation matrices. I want to
% rotate only about the Z-axis.
% The original center is (0,0,0), but I moved the center to
% (10, 10, 5), for example.

T_orig = [ 1 0 0 0
           0 1 0 0
           0 0 1 0
           -center 1 ];
       
T_Z = [ cos(angle_phi) -sin(angle_phi) 0 0
        sin(angle_phi) cos(angle_phi) 0 0
        0 0 1 0
        0 0 0 1 ];
    
T_rotation = T_orig*T_Z;

tform = maketform('affine', T_rotation);
ctr_CK = tformfwd(center, tform);

R = makeresampler('linear', 'fill');
TDIMS_A = [1 2 3];
TDIMS_B = [1 2 3];
TSIZE_B = size(nodes);
TMAP_B = [];
F = 0;
updated_nodes = tformarray(nodes, tform, R, TDIMS_A, TDIMS_B, TSIZE_B, TMAP_B, F);

% However, I get an error message that TDIMS_B doesn't agree with the size of the matrix 'nodes'. I don't know how to solve this. Would you please let me know what I should do?
Thank you.

The input is 36 three-dimensional points. I have put the input data below; I hope it is understandable. Sorry.

nodes =
  Columns 1 through 10

   79.5386   78.1686   73.2929   65.0666   54.0777   41.3673   49.1616   32.7268   15.7368    0.0000
   61.4152   74.4752   86.6625   96.9287  104.2670  107.9266  142.0418  146.4543  145.7531  140.2281
   23.5205   32.9263   42.5296   52.0072   60.9239   68.8030   30.9668   41.6868   51.5158   59.9798

  Columns 11 through 20

  -12.9752  -22.2586  -48.8102  -57.8819  -62.5865  -63.2595  -60.4900  -54.9741  -78.1809  -71.3186
  130.8819  119.1110  141.4561  126.7170  111.1074   95.9237   82.1304   70.3948   61.3904   49.3629
   66.8406   72.1102   32.6452   41.1365   48.3762   54.4947   59.6777   64.1139   27.6846   34.6415

  Columns 21 through 30

  -62.5377  -52.4885  -41.8070  -31.1141  -33.4687  -21.7499  -10.4092   -0.0000    8.9438   15.9230
   39.6041   32.3650   27.7684   25.8271    5.9872    4.2376    4.8986    7.8331   12.8004   19.4477
   41.0146   46.9134   52.4313   57.6386   25.0391   31.8120   38.4243   44.9103   51.2759   57.4881

  Columns 31 through 36

   33.2693   41.5772   47.6631   51.1125   51.6301   49.1109
    6.3126   14.0158   23.1268   33.1120   43.3054   52.9263
   26.2360   33.7998   41.4568   49.1327   56.6819   63.8760

Silvia—Then you should not be using tformarray. The functions for transforming points (as opposed to transforming two-dimensional or multidimensional image arrays) are tformfwd and tforminv.

Those are 3-D points; I plot (x,y,z) in 3-D. For example, column 31 is (x=33.2693, y=6.3126, z=26.2360). I used meshgrid to create the 3-D coordinates, and that works. But the thing I cannot manage is the rotation after the central axis has been moved to another position. I want to rotate those points when the central axis is shifted. The points are currently centered on the axis at (0,0,0). Is it impossible to use tformarray?

Silvia—tformarray is not for transforming points. tformfwd and tforminv are for transforming points. tformarray actually calls tforminv as part of its computation for transforming a multidimensional array.
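
A minimal sketch of that, reusing the nodes array and the tform built from T_orig*T_Z in the code above (tformfwd wants one point per row, hence the transposes):

% Sketch only: transform the 3-by-36 point list with tformfwd.
pts = nodes';                        % 36-by-3, one (x,y,z) point per row
new_pts = tformfwd(pts, tform);      % apply the affine transform to the points
updated_nodes = new_pts';            % back to 3-by-36 if desired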

Hi Steve, I managed it without using tformarray. Yes, it wasn't that complex; as usual, the difficult way is not the best solution. Thank you for your comments.

I would like to specify the origin of the rotation to be [0,0,0] instead of the center of the blob (I thought this was how rotation is normally done), as well as use my own rotation matrix.

I have simply replaced blob_center with blob_center = [0,0,0], and the resulting matrix is filled with 0s; it seems most of the data end up out of range.

Is it because the data are out of range?
What would be a good solution to this problem?

Kim—I disagree with you about how image rotation is “normally done,” and the problem you are having is the main reason why. The software is just doing what you asked it to do: rotate the image around the location (0,0,0), which isn’t contained in the image at all. The rotated image therefore moves partially or completely “out of view” of the output space region you are computing. For two-dimensional examples of this, see my 12-May-2006 post.

Hi Steve, Thanks for your reply.
I would like to specify the problem in a bit more detail.

I have two types of data.
(data type 1) blob
blob = [ 0 0 0; 0 1 0; 0 0 1 ];
(data type 2) x,y,z coordinate of objects
object1 = [2,2];
object2 = [ 3,3];

I have some objects which I want to align.
The objects are currently separate (in data type 2), and I have calculated a rotation matrix that would align them.
When I visualize the objects in (data type 2) after applying the rotation matrix, they are all nicely aligned.
I guess here the rotation is done with the origin at [0,0,0].

Now I would like to rotate (data type 1).
However, this does not seem to work.
Could you suggest a good way to rotate (data type 1) with the same rotation matrix I created from (data type 2)?

Was I clear with my question?
Thank you in advance.

Hi Steve,
I wonder if I can draw the blob rotation in a different coordinate system.
For example, this is my modification.
x_axis = -(blob_center+50):1:blob_center+50;
y_axis = -(blob_center+50):1:blob_center+50;
z_axis = -(blob_center+50):1:blob_center+50;
Then I applied the blob rotation in a figure based on my modified coordinates. But when the rotation location becomes negative, i.e., blob_center = -10 -10 -10, the blob disappears. Could you let me know how I can draw it with a negative blob_center?
Thank you.

Hi Steve,
I want to add one more question.
Due to a file format issue, I need to change coordinates.
For example, I shifted blob_center to (0,0,0) for the initial rotation; then I would like to translate the rotated blob to a different location, i.e., (-10,-10,-10). Is that possible?
That's why I asked the previous question. Please consider it and give me some comments. Thank you.

Silvia—Unlike imtransform, tformarray does not have the option to specify alternate coordinate systems for the input and output spaces. However, you can still use tformarray for your purpose by incorporating the necessary coordinate system scaling and origin translation directly into your affine transform matrix.

I have a problem with tformarray:
I rotate a matrix around its center, and here is my transformation. Say my matrix I is of size 100-by-100.

T_translation1 = [1 0 0;0 1 0;-50 -50 1];
T_rot = [cost -sint 0;sint cost 0;0 0 1];
T_translation2 = [1 0 0;0 1 0;50 50 1];
Transform_affine = maketform('affine',T_translation1*T_rot*T_translation2);
Resample = makeresampler('nearest','fill');
[I_Rot] = tformarray(I, Transform_affine, Resample, [1 2], [1 2], size(I), [], 0);

With this, even though T_rot should give a counterclockwise rotation, after performing tformarray I find that I_Rot is always rotated in the opposite direction. Can you tell me where I am going wrong?
I am really stuck on this part, and it would be very helpful if you could give some suggestions here.

Thanks

Hi Steve,

I have the same problem as Silvia at no. 58.

imtransform can define alternate coordinate systems for the input and output spaces, which lets you change the rotation center. However, we can't do that with tformarray directly.

You answered that we can add extra scaling and translation. Do you mean, for example, that if your blob rotation center is at (0,0,0) and I want to change it to (10,10,10), then before I use 'tformarray' I add an extra 10-voxel translation in each direction and then put that into 'maketform'?

Cheers!

Sorry, after reading your post more carefully, I think the method is "translate the center [10 10 10] to the origin + rotation + translation back",

which is

blob_center = [10 10 10];

T1 = [1 0 0 0
0 1 0 0
0 0 1 0
-blob_center 1];

theta = pi/8;
T2 = [cos(theta) 0 -sin(theta) 0
0 1 0 0
sin(theta) 0 cos(theta) 0
0 0 0 1];

T3 = [1 0 0 0
0 1 0 0
0 0 1 0
blob_center 1];

T = T1 * T2 * T3;

Then we rotate from the center. Am I right?

Cheers.

Hi Steve,

I was reading your answer at number 25:

“Any coordinate system scaling or translating that you might need, for example to shift the image over in order to capture all of it, has to be incorporated into the spatial transformation function itself.”

I don't understand how to incorporate the changes in the coordinates into the spatial transformation function. Can you give me a few more details? I am doing an affine transformation on a volume.

Also, where can I get more info about the internal structure of a TFORM struct? I couldn’t seem to find it online.

Thanks, your example was very helpful.

Sundar—When using tformarray, you should be aware that it maps dimensions differently than imtransform. With tformarray, the first dimension of the mathematical warping function corresponds to the first subscript dimension of the array. The second dimension of the mathematical warping function corresponds to the second subscript dimension of the array, and so on. For the first two dimensions, this order is the opposite of the X-Y convention used by imtransform. This might be the reason your rotation is working in a different direction than you expect.
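
A minimal sketch of that dimension swap for a 2-D rotation, assuming a grayscale image I; one way to convert between the two conventions is to swap the first two rows and columns of the 3-by-3 affine matrix:

% Sketch only: the same rotation expressed in both conventions.
theta = pi/6;
T_xy = [ cos(theta)  sin(theta)  0
        -sin(theta)  cos(theta)  0
         0           0           1];       % x-y (imtransform) convention

swap = [2 1 3];
T_rc = T_xy(swap, swap);                   % row-column (tformarray) convention

B = tformarray(I, maketform('affine', T_rc), ...
    makeresampler('linear', 'fill'), [1 2], [1 2], size(I), [], 0);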

Hi Steve,
I have the 3-D coordinates of a volume, like:
Index = find(V);
[x,y,z] = ind2sub(size(V), Index);
Now, after applying a few transformations, the new coordinates become Xn, Yn, Zn, which are non-integer values. The number of grid points is the same for both, but the new coordinates' maximum extent is bigger than the old one: for the old coordinates the maximum value was 128, and now it is 300. I want to interpolate V onto Xn, Yn, Zn and fill the empty grid points with values based on interpolation. Would you please help me out by giving some suggestions on how I could do that?
Thanks in advance.

Haque—If I understand your description correctly, you could just use tformarray for the whole operation. But if you want to implement the pieces yourself as you describe, then it sounds like you could use interp3 for your missing piece.
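
A minimal sketch of the interp3 route, assuming tform is the 3-D affine transform and out_size is a placeholder output size; note that the whole output grid is built in memory, so this is only practical for moderate sizes:

% Sketch only: inverse-map an output grid into the input volume and sample
% V there with interp3.
out_size = [300 300 300];                            % assumed output size
[r, c, p] = ndgrid(1:out_size(1), 1:out_size(2), 1:out_size(3));

% Map output subscripts back to input subscripts (inverse mapping).
in_pts = tforminv([r(:) c(:) p(:)], tform);

% interp3 uses meshgrid ordering (x = columns, y = rows), hence the swap.
Vq = interp3(V, in_pts(:,2), in_pts(:,1), in_pts(:,3), 'linear', 0);
Vq = reshape(Vq, out_size);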

Hi Steve,

I worked on your example code, but I could not get what I really wanted.

I want to rotate a 3-D image, but when I rotate it, the rotated image is cropped.

I need help; could you please help me?

Hi Steve!

I'm trying to use the approach described in this blog to rotate a 3-D volume. The volume's main axis is oriented along a z' direction, which differs from the z direction of the Cartesian coordinate system.

In fact up to this point what I have is the volume and the set of unit vectors defining the local coordinate system:
x’ = [0.7071 0.7071 0];
y’ = [-0.6802 0.6802 0.2735];
z’ = [0.1934 -0.1934 0.9619];

What I want is to rotate this volume so as to obtain a new one with its main axis parallel to z-axis ([0 0 1]). In order to do so I’m using the sequence of affine transformations:
– Translation to the origin (T1 matrix)
– Rotation (T2 matrix)
– Translation back to the starting location (T3 matrix)

I'm having problems defining the angle (theta) of the rotation matrix T2, since at this point all I have are the unit vectors of my local coordinate system.

What would be the approach in this case?
Thanks!
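
One possible way is to skip the angle altogether and build the rotation block directly from the local axes. A minimal sketch, assuming the three unit vectors above are orthonormal and using maketform's row-vector convention (the columns of the 3-by-3 block are the local axes, so z' maps to [0 0 1]):

% Sketch only: build T2 from the local coordinate axes instead of an angle.
xp = [ 0.7071  0.7071  0     ];
yp = [-0.6802  0.6802  0.2735];
zp = [ 0.1934 -0.1934  0.9619];

T2 = eye(4);
T2(1:3,1:3) = [xp.' yp.' zp.'];   % columns are the local unit vectors

% With row vectors, [0.1934 -0.1934 0.9619 1]*T2 is approximately [0 0 1 1],
% so the volume's main axis ends up along z. Compose with T1 and T3 as in
% the post.

As noted in the reply to Sundar above, tformarray applies the transform in array subscript (row, column, page) order rather than x-y-z, so the axes may need to be permuted accordingly.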

These postings are the author's and don't necessarily represent the opinions of MathWorks.