Back in October I introduced the concept of filter separability. A two-dimensional filter s is said to be separable if it can be written as the convolution of two one-dimensional filters v and h: s = conv2(v, h), which for a column vector v and a row vector h is just the outer product v*h.
I said then that "next time" I would explain how to determine whether a given filter is separable. Well, I guess I got side-tracked, but I'm back on topic now.
This question gave me one of my earliest opportunities at The MathWorks to wander down to company co-founder Cleve's office and ask for advice. I asked, "How can I determine if a matrix is an outer product of two vectors?" Cleve was very helpful, as he always is, although I was a little embarrassed afterward that I hadn't figured it out myself. "Go look at the rank function," Cleve told me.
Of course. If a matrix is an outer product of two vectors, its rank is 1. Here are the key lines of code in rank:
dbtype 15:19 rank
15      s = svd(A);
16      if nargin==1
17         tol = max(size(A)') * eps(max(s));
18      end
19      r = sum(s > tol);
So the test is this: The rank of A is the number of nonzero singular values of A, with some numerical tolerance based on eps and the size of A.
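For readers who want to try this outside MATLAB, here is a rough NumPy translation of that test. This is a sketch of the same idea, not MATLAB's actual implementation; `numerical_rank` is my own name for it.

```python
import numpy as np

def numerical_rank(A, tol=None):
    """Count singular values above a tolerance, mirroring MATLAB's rank."""
    s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
    if tol is None:
        # same idea as max(size(A)') * eps(max(s)) in the MATLAB code above
        tol = max(A.shape) * np.spacing(s[0])
    return int(np.sum(s > tol))

averaging = np.ones((5, 5)) / 25.0
print(numerical_rank(averaging))   # prints 1: the averaging filter is separable
```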
Let's try it with a few common filters.
An averaging filter should be obvious:
averaging = ones(5,5) / 25;
rank(averaging)
ans = 1
The Sobel kernel:
sobel = [-1 0 1; -2 0 2; -1 0 1];
rank(sobel)
ans = 1
The two-dimensional Gaussian is the only radially symmetric function that is also separable:
gaussian = fspecial('gaussian');
rank(gaussian)
ans = 1
A disk is not separable:
disk = fspecial('disk');
rank(disk)
ans = 5
So how can we determine the outer product vectors? The answer is to go back to the svd function. Here's a snippet from the doc: [U,S,V] = svd(X) produces a diagonal matrix S of the same dimension as X, with nonnegative diagonal elements in decreasing order, and unitary matrices U and V so that X = U*S*V'.
A rank 1 matrix has only one nonzero singular value, so U*S*V' becomes U(:,1) * S(1,1) * V(:,1)'. This is basically the outer product we were seeking. Therefore, we want the first columns of U and V. (We have to remember also to use the nonzero singular value as a scale factor.)
Let's try this with the Gaussian filter.
[U,S,V] = svd(gaussian)
U =
   -0.1329    0.9581   -0.2537
   -0.9822   -0.1617   -0.0959
   -0.1329    0.2364    0.9625

S =
    0.6420         0         0
         0    0.0000         0
         0         0    0.0000

V =
   -0.1329   -0.6945   -0.7071
   -0.9822    0.1880    0.0000
   -0.1329   -0.6945    0.7071
Now get the horizontal and vertical vectors from the first columns of U and V.
v = U(:,1) * sqrt(S(1,1))
v =
   -0.1065
   -0.7870
   -0.1065
h = V(:,1)' * sqrt(S(1,1))
h =
   -0.1065   -0.7870   -0.1065
I have chosen, somewhat arbitrarily, to split the scale factor, S(1,1), "equally" between v and h.
Now check to make sure this works:
gaussian - v*h
ans =
  1.0e-015 *
   -0.0243   -0.1527   -0.0243
   -0.0139   -0.1110   -0.0139
   -0.0035         0   -0.0035
Except for normal floating-point roundoff differences, gaussian and v*h are equal.
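The whole procedure fits in a few lines. Here is a hedged NumPy sketch of it; `separate_filter` is my own name, not a MATLAB or toolbox function, and note that `numpy.linalg.svd` returns V already transposed.

```python
import numpy as np

def separate_filter(A):
    """If A is numerically rank 1, return v (column) and h (row) with A ~= v @ h."""
    U, s, Vt = np.linalg.svd(A)
    tol = max(A.shape) * np.spacing(s[0])
    if np.sum(s > tol) != 1:
        raise ValueError("filter is not separable")
    # split the single nonzero singular value "equally" between the two factors
    v = U[:, :1] * np.sqrt(s[0])
    h = Vt[:1, :] * np.sqrt(s[0])
    return v, h

sobel = np.array([[-1.0, 0, 1], [-2, 0, 2], [-1, 0, 1]])
v, h = separate_filter(sobel)
print(np.allclose(sobel, v @ h))   # prints True
```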
You can find code similar to this in the MATLAB function filter2, as well as in the Image Processing Toolbox function imfilter.
Published with MATLAB® 7.3
25 Comments (Oldest to Newest)
I have been looking for a method to factor a rank 1 matrix into the outer product of two vectors, and this is the only place I’ve seen it described. The SVD is a bit of overkill for the problem in fact. The definition of a rank 1 matrix is that the columns (or rows) are not linearly independent. This means the columns are multiples of each other. Therefore the matrix can be factored by finding the column multiples using e.g.
sum(m(:,2))./sum(m(:,1)). The factorization is simply expressed as m(:,1)*[alpha, beta, gamma ….] where the right-hand vector is a vector of the scale factors. Obviously you can adjust the values in the two vectors afterwards, and you have to take some care if one column has a very small sum. This seems so simple when you see it; that perhaps explains why no one bothers to describe it.
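The column-ratio idea in the comment above could be sketched like this in NumPy (a hedged illustration, and `factor_rank1_by_ratios` is a name of my own; as the commenter notes, it assumes the matrix really is rank 1 and no column has a near-zero sum):

```python
import numpy as np

def factor_rank1_by_ratios(m):
    """Factor a known rank-1 matrix without an SVD, via column-sum ratios."""
    # every column is a multiple of the first; column sums recover the multiples
    scales = m.sum(axis=0) / m[:, 0].sum()     # the [alpha, beta, gamma, ...] row
    return m[:, :1], scales[np.newaxis, :]

m = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])   # a rank-1 test matrix
col, scales = factor_rank1_by_ratios(m)
print(np.allclose(m, col * scales))   # prints True
```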
Steve—Thanks for your comment. I’ll look into it when I get a little more time.
Do you have any suggestions on using SVD for noise filtering in images?
Thank you. Great stuff!
Tsega—No, not really.
Steve—Well, I guess it depends on your perspective. From my point of view, I’m pretty happy just calling svd. No worries about writing and testing new numerical codes, adjusting values, worrying about low column sums, etc. And svd crunches a 33-by-33 Gaussian filter on my computer in just 90 microseconds. Also, the output from svd is used to test for rank equal 1, not just for computing the resulting outer product vectors of a known rank-1 matrix.
Interesting method, but I don't think that this algorithm can be used for a variety of tasks.
Spiele—On the contrary—separable convolution is extremely common in image processing. The MATLAB function filter2 and the Image Processing Toolbox function imfilter both test for separability using the technique described here.
Thanks a lot for this code. Is there any similar procedure to separate a 3-dimensional kernel (e.g., a Gaussian and derivatives of a Gaussian)?
Francesco—There might be, but I’ve never looked into it.
“The two-dimensional Gaussian is the only radially symmetric function that is also separable”
i like to filter by 2-d gaussians with non-identity covariance. if the covariance is diagonal, so that the axes of the gaussian are aligned with the space, the rank is still 1 (the rows are all multiples of each other). i would have thought this would still be true for general covariances, since they just rotate the axes, but this doesn’t seem to be true. i see that the rows are no longer multiples, but shouldn’t the rotation be absorbed in U and V, leaving S with a single significant entry?
orientation=pi/6; %note rank=1 when orientation=0,pi/2,…
rot=[cos(orientation) -sin(orientation); sin(orientation) cos(orientation)];
[U S V]=svd(kernel);
Erik—I don’t see why it would. Anway, think of a vertical line – all the rows are identical (rank=1), but you can rotate it 45 degrees into the identity matrix. That’s about as well-conditioned a full-rank matrix as you’ll get. :-)
ah, right, sorry. i was thinking about it wrong…
my intuition was that the computational complexity of 2-d filtering shouldn’t be affected by just rotating the kernel (the output at any pixel depends on the same number of other pixels, in the same ways, just in different directions). or in other words, rotating a kernel doesn’t change the amount of information it contains, its frequency content, etc. it seems to me that some important properties of a matrix must be conserved regardless of applying imrotate to it?
so, it is surprising to me that separability depends on the rotation, since complexity depends on separability. if the 2-d filtering were done in fourier space, would the complexity difference between separable vs. inseparable go away? why don’t imfilter/convn transform to the fourier domain?
Erik—Good questions. Remember that, practically speaking, when we filter a 2-D signal with a 1-D filter using a spatial-domain implementation, it’s difficult to do it in any direction other than vertical or horizontal. Regarding FFT-based implementations: Yes, an FFT-based implementation of convolution would be insensitive to rotation of the kernel. But for the small-ish kernels so commonly used in image processing, spatial domain implementations are often faster and always take up much less memory. When you’re comparing computational complexity of different approaches, you have to remember that the computational complexity curves often have a crossing point.
Erik—Also, separability of the kernel provides no speed benefit in FFT-based implementations. Computation of the two-dimensional discrete Fourier transform is always separable, independent of the separability of the kernel.
I was wondering, is there any algorithm you know of for forming a general filter from the sum of separable filters?
For example if a filter PSF has rank 2 could you form this from two separable filters?
Jack—Yes, using singular value decomposition (SVD). I believe there is code on the MATLAB Central File Exchange that does this.
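For reference, here is a hedged NumPy sketch of that idea (the function name is mine, not the File Exchange code Steve mentions): each significant singular value contributes one separable, outer-product term, and the terms sum back to the original filter.

```python
import numpy as np

def separable_terms(A, tol=1e-10):
    """Decompose A into a sum of rank-1 (separable) filters via the SVD."""
    U, s, Vt = np.linalg.svd(A)
    # one outer-product term per singular value above the tolerance
    return [np.outer(U[:, k] * s[k], Vt[k, :]) for k in range(int(np.sum(s > tol)))]

A = np.outer([1.0, 2.0], [3.0, 4.0]) + np.outer([0.0, 1.0], [1.0, -1.0])  # rank 2
terms = separable_terms(A)
print(len(terms), np.allclose(A, sum(terms)))   # prints: 2 True
```

So a rank-2 PSF can indeed be applied as two separable filtering passes whose results are added.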
Can you tell me more about the separable convolution and how to use it when dealing with a spatio-temporal Gaussian kernel along all three dimensions x, y, t?
Thanks a lot!
Fraternite—Separable convolution is filtering with a one-dimensional filter along each dimension, and this idea extends to any number of dimensions. What more would you like to know about it?
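As a concrete illustration of Steve's answer, separable N-D filtering is just a 1-D convolution applied along each axis in turn. The helper below is my own NumPy sketch, not toolbox code:

```python
import numpy as np

def separable_filter_nd(volume, kernels_1d):
    """Convolve one 1-D kernel along each axis of an N-D array in turn."""
    out = np.asarray(volume, dtype=float)
    for axis, k in enumerate(kernels_1d):
        out = np.apply_along_axis(np.convolve, axis, out, k, mode='same')
    return out

g = np.array([0.25, 0.5, 0.25])      # a small 1-D smoothing kernel
video = np.random.rand(8, 8, 6)      # tiny "video": rows x columns x frames
smoothed = separable_filter_nd(video, [g, g, g])
print(smoothed.shape)                # prints (8, 8, 6)
```

The third pass, along the last axis, is exactly the "convolve along time" step: each pixel's values across frames are treated as a 1-D signal.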
Along each dimension? Does that mean that when I am processing a video, I will have to use 3 separable one-dimensional filters and convolve the video along x, y, t with these filters one by one? However, I am quite confused about how to convolve the video. The question is: how do you convolve a one-dimensional filter with a video along the time dimension?
Thanks a lot!
Fraternite—You asked me what separable filtering means in multiple dimensions, and I gave a brief response. As to whether you “have to” do it, I have no idea. You haven’t given us any information about what you need to accomplish by filtering.
Thanks a lot! There is one more thing I want to know: is it true that N-D convolution can be easily done using imfilter? Then how does one decide what the kernel is? A 3-D Gaussian kernel must be a lot different from the one mentioned above.
Fraternite—Yes, imfilter can handle inputs of any dimension. Asking “how to decide what is the kernel” is kind of like asking “what book should I read?” I can’t give a meaningful answer without knowing what you are trying to accomplish by filtering.
Sorry, let me talk about it now. I want to use imfilter to convolve the video using a spatio-temporal Gaussian kernel. In image processing, this is called scale-space theory. Assigning sigma and tau certain values, it can detect objects at different scales. The spatio-temporal Gaussian kernel is something like this: 1/sqrt((2*pi)^3 * sigma^4 * tau^2) * exp(-(x^2+y^2)/(2*sigma^2) - t^2/(2*tau^2)).
Thanks a lot!
Fraternite—You can form a multidimensional filter kernel directly from your formula and use it to filter a three-dimensional array. However, you’ll be limited by your available memory for how many frames you’ll be able to load into a single multidimensional array.
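For concreteness, here is a hedged NumPy sketch of sampling that formula on a grid; sigma, tau, and the grid sizes are arbitrary choices of mine:

```python
import numpy as np

sigma, tau = 1.5, 1.0    # spatial and temporal scales (arbitrary example values)
x, y, t = np.meshgrid(np.arange(-3.0, 4), np.arange(-3.0, 4),
                      np.arange(-2.0, 3), indexing='ij')
kernel = (1.0 / np.sqrt((2 * np.pi) ** 3 * sigma ** 4 * tau ** 2)
          * np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)
                   - t ** 2 / (2 * tau ** 2)))
kernel /= kernel.sum()   # normalize the sampled filter so it sums to 1
print(kernel.shape)      # prints (7, 7, 5)
```

Because the exponent separates into an x part, a y part, and a t part, this kernel is separable along all three dimensions, so the rank-1 test from the post applies to each 2-D slice.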