Steve on Image Processing

Image deblurring – Introduction

Posted by Steve Eddins,

I'd like to introduce guest blogger Stan Reeves. Stan is a professor in the Department of Electrical and Computer Engineering at Auburn University. He serves as an associate editor for IEEE Transactions on Image Processing. His research activities include image restoration and reconstruction, optimal image acquisition, and medical imaging.

Over the next few months, Stan plans to contribute several blogs here on the general topic of image deblurring in MATLAB.

Image deblurring (or restoration) is an old problem in image processing, but it continues to attract the attention of researchers and practitioners alike. A number of real-world problems from astronomy to consumer imaging find applications for image restoration algorithms. Plus, image restoration is an easily visualized example of a larger class of inverse problems that arise in all kinds of scientific, medical, industrial and theoretical problems. Besides that, it's just fun to apply an algorithm to a blurry image and then see immediately how well you did.

To deblur the image, we need a mathematical description of how it was blurred. (If that's not available, there are algorithms to estimate the blur. But that's for another day.) We usually start with a shift-invariant model, meaning that every point in the original image spreads out the same way in forming the blurry image. We model this with convolution:

g(m,n) = h(m,n)*f(m,n) + u(m,n)

where * is 2-D convolution, h(m,n) is the point-spread function (PSF), f(m,n) is the original image, and u(m,n) is noise (usually considered independent identically distributed Gaussian). This equation originates in continuous space but is shown already discretized for convenience.

Actually, a blurred image is usually a windowed version of the output g(m,n) above, since the original image f(m,n) isn't ordinarily zero outside of a rectangular array. Let's go ahead and synthesize a blurred image so we'll have something to work with. If we assume f(m,n) is periodic (generally a rather poor assumption!), the convolution becomes circular convolution, which can be implemented with FFTs via the convolution theorem.

If we model out-of-focus blurring using geometric optics, we can obtain a PSF using fspecial and then implement circular convolution:

% Form PSF as a disk of radius 4 pixels
h = fspecial('disk',4);
% Read image and convert to double for FFT
cam = im2double(imread('cameraman.tif'));
hf = fft2(h,size(cam,1),size(cam,2));
cam_blur = real(ifft2(hf.*fft2(cam)));
imshow(cam_blur)

A similar result can be computed using imfilter with appropriate settings.
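One way to write that (a sketch, using names from the code above) is:

```matlab
% The 'circular' boundary option matches the periodic assumption, and
% 'conv' makes imfilter perform convolution rather than correlation
cam_blur2 = imfilter(cam, h, 'circular', 'conv');
imshow(cam_blur2)
```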

You'll immediately notice that the circular convolution caused the pants and tripod to wrap around and blur into the sky. I told you that periodicity of the input image was a poor assumption! :-) But we won't worry about that for the time being.

Now we need to add some noise. If we define peak SNR (PSNR) in dB as

PSNR = 20 log10( (f_max - f_min) / sigma_u )

then the noise scaling is given by

sigma_u = 10^(-PSNR/20) * (f_max - f_min)

Now we add noise to get a 40 dB PSNR:

sigma_u = 10^(-40/20)*abs(1-0);   % image dynamic range here is [0,1]
cam_blur_noise = cam_blur + sigma_u*randn(size(cam_blur));
imshow(cam_blur_noise)

The inverse filter is the simplest solution to the deblurring problem. If we ignore the noise term, we can implement the inverse by dividing by the FFT of h(m,n) and performing an inverse FFT of the result. People who work with image restoration love to begin with the inverse filter. It's really great because it's simple and the results are absolutely terrible. That means that any new-and-improved image restoration algorithm always looks good by comparison! Let me show you what I mean:

cam_inv = real(ifft2(fft2(cam_blur_noise)./hf));
imshow(cam_inv)

Something must be wrong, right? Well, nothing is wrong with the code. But it is definitely wrong to think that one can ignore noise. To see why, look at the frequency response magnitude of the PSF:

hf_abs = abs(hf);
surf([-127:128]/128,[-127:128]/128,fftshift(hf_abs))
shading interp, camlight, colormap jet
xlabel('PSF FFT magnitude')

We see right away that the magnitude response of the blur has some very low values. When we divide by this pointwise, we are also dividing the additive noise term by these same low values, resulting in a huge amplification of the noise--enough to completely swamp the image itself.
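A quick numerical check makes this concrete (assuming hf_abs from the code above):

```matlab
% The smallest values of the PSF magnitude response set the noise gain
% of the inverse filter
min(hf_abs(:))        % near-zero spectral magnitudes...
max(1./hf_abs(:))     % ...become enormous amplification factors
```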

Now we can apply a very simple trick to attempt our dramatic and very satisfying improvement. We simply zero out the frequency components in the inverse filter result for which the PSF frequency response is below a threshold.

cam_pinv = real(ifft2((abs(hf) > 0.1).*fft2(cam_blur_noise)./hf));
imshow(cam_pinv)
xlabel('pseudo-inverse restoration')

For comparison purposes, we repeat the blurred and noisy image.

imshow(cam_blur_noise)
xlabel('blurred image with noise')

This result is obviously far better than the first attempt! It still contains noise but at a much lower level. It's not dramatic and satisfying, but it's a step in the right direction. You can see some distortion due to the fact that some of the frequencies have not been restored. In general, some of the higher frequencies have been eliminated, which causes some blurring in the result as well as ringing. The ringing is due to the Gibbs phenomenon -- an effect in which a steplike transition becomes "wavy" due to missing frequencies.
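The Gibbs effect is easy to demonstrate in one dimension (a tiny illustration; the signal length and the choice of which bins to discard are arbitrary):

```matlab
% Zero out the high-frequency bins of a step and the reconstruction
% becomes "wavy" near the edges
x = [zeros(1,128) ones(1,128)];
X = fft(x);
X(30:228) = 0;              % discard the higher-frequency bins
x_trunc = real(ifft(X));
plot(x_trunc)               % overshoot and ripples near the step
```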

A similar but slightly improved result can be obtained with a different form of the pseudo-inverse filter. By adding a small number delta^2 to the denominator |H|^2, we get nearly the inverse filter unless |H| is comparable to or smaller than delta. That is, if we let

H_I(u,v) = conj(H(u,v)) / ( |H(u,v)|^2 + delta^2 )

then

H_I(u,v) ≈ 1/H(u,v)  when |H(u,v)| >> delta

and

H_I(u,v) ≈ 0  when |H(u,v)| << delta

which is like the previous pseudo-inverse filter but with a smooth transition between the two extremes. To implement this in MATLAB with delta^2 = 1e-2, we do:

cam_pinv2 = real(ifft2(fft2(cam_blur_noise).*conj(hf)./(abs(hf).^2 + 1e-2)));
imshow(cam_pinv2)
xlabel('alternative pseudo-inverse restoration')

As you can see, this produces better results. This is due to a smoother transition between restoration and noise smoothing in the frequency components.

I hope to look at some further improvements in future blogs as well as some strategies for dealing with more real-world assumptions.

- by Stan Reeves, Department of Electrical and Computer Engineering, Auburn University


Get the MATLAB code

Published with MATLAB® 7.4

86 Comments

This is great. I remember coding all this junk up by hand in C and C++. I was doing good enough to get the forward DFT to work.

Can’t wait to see the next better steps.

- Thomaz

This is a great and very interesting topic to blog about – thanks.

While dividing by hf:
cam_pinv = real(ifft2((abs(hf) > 0.1).*fft2(cam_blur_noise)./hf));
good care should be taken not to divide by zero – is that right?

Adi,

You are correct! I probably should’ve been more explicit about that. If hf has a value that is identically zero, then the inverse does not exist and MATLAB will complain with a “Warning: Divide by zero.” In this case, whatever one might do to avoid dividing by zero will lead to some type of pseudo-inverse implementation like the two I suggested in the blog.

In the example I showed, hf is not identically zero (or machine zero), so the division by zero works without complaint.

Thanks for giving me the opportunity to clarify this.

Hi, I am a student from Mexico and my thesis is about deblurring. My problem is when a picture (out of focus) has an object with sparkle (a reflection of light). How can I solve this?

Thank you.

Roberto—Can you be more specific? What have you tried, and what problem does the sparkle cause for your solution?

Hello, i am a student conducting a study on deblurring techniques used in MATLAB and your work really helped me alot.

Sorry, but I'm having a hard time studying the PSNR and H_I. Can you please explain further the equations for the PSNR and H_I? A link to more information about the equations would also be highly appreciated. Thank you.

Edward—PSNR is a measurement that appears fairly often in papers about image compression and image deblurring methods. It simply normalizes the noise standard deviation (σ_u) by the image dynamic range and applies a log scale. Dr. Reeves wanted to add synthetic noise to the image corresponding to a 40 dB PSNR, and so he used the formula to determine the corresponding noise level.

The definition of H_I provides a way to define an operator that approximately equals the inverse of the blur filter where the magnitude of the blur filter is relatively large (in the frequency domain), and that approximately equals 0 where the magnitude of the blur filter is near 0.

Hi, Code does not work MATLAB R2007

>>cam_blur = real(ifft2(hf.*fft2(cam)));
??? Error using ==> times
Number of array dimensions must match for binary array op.

Tom—Like almost all of the blog posts here, this blog was generated by using the MATLAB publish feature, running and capturing code from a MATLAB script file. It does work in the latest shipping version of MATLAB (R2007a). I just verified this by copying and pasting code lines from my browser directly into MATLAB. You might try clicking on the “Get the MATLAB code” link at the bottom of the posting, and then saving the result into an M-file that you can run directly in MATLAB.

Steve, I tried both the way, pasting from browser and getting MATLAB code. Same results. See size difference here

>> size(cam)

ans =

400 400 3

>> size(hf)

ans =

400 400

>>

Tom—The file cameraman.tif ships with the Image Processing Toolbox, and it's a grayscale image. Use that one, not the image embedded on this web page.

Most of the papers and books in the field of deblurring deal with either truecolor or grayscale images.
I'm curious about the possibilities of deblurring a Bayer-pattern-encoded image. I've not seen much work in that field.

Waiting for the next post…

please can u tell me the code for bicubic spline interpolation for performing color image zooming

Is the above technique similar to Wiener Filtering?

I am currently working on an image which has been blurred by a degradation function and there is some additive noise. The PSF function used for degrading the image is not known and there is no information about the noise component too.I know this can be done using Wiener Filtering(Optimal Filtering). So how should I go about it?

Sagiv–This is a good observation about deblurring Bayer pattern images. There has been a little work done on this. I remember seeing a paper at the Computational Imaging Conference of the SPIE Electronic Imaging Symposium a few years ago. Also, my former PhD student Manu Parmar and I had a paper on this subject at ICIP this year — “Bayesian restoration of color images using a non-homogenous cross-channel prior”.

Tejas–Yes, the technique above is similar to Wiener filtering if delta is large enough. It is identical to a Wiener filter if the noise-to-signal power-spectral ratio is constant. I hope to elaborate on this if I can find time soon for another blog.

The case of an unknown PSF is not addressed by a Wiener filter. Other techniques have to be brought in to identify the blur and then deblur or do blur ID and deblurring simultaneously.

Hi Steve, I was waiting for your reply to my question about Wiener Filtering Technique. I did implement the Image deblurring technique which you have suggested and it works pretty well. However I wanted to know how you have calculated the term (1e-2) which essentially is the ratio of P(n)/P(s).

Sagiv and Tejas—I’m sorry, Dr. Reeves’ replies were caught in the spam filter. I just unblocked them.

I chose delta (1e-2) by trial and error. There are some automated ways of choosing it — maximum likelihood and generalized cross-validation, for instance — but that was “beyond the scope of this blog.” :-)

Hi Steve,
Is there a function for generating a 3D ‘motion’ blur filter? The fspecial function generates only 2D filters. I found a fspecial3 function on this website but sadly it does not have the ‘motion’ filter option. Thanks!

Hi steve,
I am working on a couple of images and i need to focus a part of the image which is not in focus(either the foreground or the background)
tried running the fourier analysis on the full image but to no good..want to run it on only the part of the image which is out of focus
how can i select just a part of the image and try running the deblurring algorithms??
which one wuld be the best for the particular image on the website
plz help

charon–this is a good question. This is a very common but challenging situation. I can think of three reasonable options: 1) Use an iterative deblurring algorithm that defines the blur in a way that varies over the image. This would involve some programming as well as knowledge of the inner workings of image restoration algorithms. 2) Crop out a region of the image that is affected by the defocus blur and try restoring that part by itself. 3) Using the PSF of the defocus blur, blur the non-blurred part of the image and taper it off as you get inside the blurred region. Then deblur the entire image with the PSF. Hope that helps.
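Option 2, for instance, could be as simple as this sketch (the region coordinates are hypothetical, and it assumes h and cam_blur_noise from the post):

```matlab
% Crop a defocused region and deblur it alone with a Wiener filter
roi = cam_blur_noise(50:200, 60:220);
roi_deblurred = deconvwnr(roi, h, 1e-2);   % 1e-2 = assumed noise-to-signal ratio
imshow(roi_deblurred)
```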

can you check the website http://epsilonminussemimoron.blogspot.com
hv put up the two images which i am currently working on.the problem is that i hv no idea how to model the blurred part of the image as …
if i find a reasonable way to model the blur i dont know how to selectively work on only the deblurred part of the image..
plz help

Charon—You face an algorithm development project, because there isn’t a function in the Image Processing Toolbox that does exactly what you want. I think Dr. Reeves has given you some reasonable suggestions about how to get started.

Charon–I don’t know if you mean that you don’t know how to find the PSF, or you don’t know how to indicate which part of the image is blurred with a known PSF. If it’s the former, I may blog on this some time in the future. If it’s the latter, you can create a binary image where 1 indicates a blurred pixel and 0 indicates a non-blurred pixel. Then you’ll have to figure out how to use that to calculate a weight to blur the image again and then do a weighted combination of that image and the originally blurred image (assuming you’re trying #3 in my previous comment). Remember, I’m trying to give very general ideas here, not provide exact details for blog participants’ applications. That is left as an exercise for the reader. :-)

There are a lot of works on variational (PDE) approach, especially total-variation
regularization, for image deblurring and denoising in the literature.
Why are these methods not implemented in image processing
toolbox? Do they work well in comparison to the methods here?

Thanks.

Yon—There are lots of image processing methods not in the Image Processing Toolbox. At any given time there are hundreds of enhancements that customers are asking us for. It is a matter of developer bandwidth and prioritization of work. With respect to deblurring functions, the first ones we chose to tackle were the most well-established and widely used.

Hey Steve
I need to Blurr a 3D image.can u help me out.and im facin another problem.if i using imwrite function in a loop to write multiple 2D images.i want to change the output filename every time.how to do it.As it takes hard coded filename..

Mriya—Use imfilter. The filename input to imwrite isn’t hard-coded. You can pass in any filename string you want.

thnx but i want to write multiple files of different names in a loop and im giving a parmeter to function which tells how many files to create …
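One way to do that (a sketch; the names stack and n_files are hypothetical) is to build each filename with sprintf:

```matlab
% Write n_files images, generating a different filename on each pass
for k = 1:n_files
    imwrite(stack(:,:,k), sprintf('slice_%03d.png', k));
end
```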

Hi,
Mathematically, convolution is commutative: a*b = b*a. It means that if I have an original image and a blurred version, I can use deconvolution to find the PSF – all I have to do is switch the roles of the kernel(=the PSF) and the input (the original image).
Is it true in practice? It seems the deblurring algorithms assume a PSF which is very small as compared to the area of the image. When trying to use deconvolve to estimate the PSF, I get a blank image (all zeros). Where am I wrong?

Thanks!

Ita—I tried your problem using deconvwnr with cameraman.tif and a version of cameraman.tif blurred by a 5-by-5 constant PSF, and it seemed to do a decent job in recovering the PSF. The output did suffer from ringing, but that’s typical for many deblurring algorithms along intensity discontinuities. Since you didn’t say what function you tried to use, as well as how you called it, I can’t give a guess about why you might have gotten all zeros.
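The role-swap can also be sketched directly in the frequency domain, mirroring the pseudo-inverse filter from the post (assumes cam and cam_blur from the code above and ignores noise; the threshold is an assumption):

```matlab
% Estimate the PSF spectrum as G./F wherever the image spectrum F
% is not too small, then transform back
F = fft2(cam);
G = fft2(cam_blur);
h_est = real(ifft2((abs(F) > 1e-3).*G./F));
```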

hey, am doing a project on deblurring a uniformly and non uniformly blurred image .How do i estimate the psf?? can u tell me the matlab codes to estimate the psf?

Swati—You might want to take another look at the doc for deconvblind. The PSF you pass as the 2nd input argument is an initial estimate. The function returns the final PSF estimate as the 2nd output argument.

Hi.

Is this the same sort of thing that you would use to deblur an image taken with a fixed-focal length camera when the image wasn’t at the focal point?

Thanks.

Sure—Sure. You’ll need a blur model; that is, an estimation of the point spread function. Or you could try deconvblind, which can be used to estimate both the deblurred image and the blur model.

Interesting. I’ve been having a bit of trouble finding an answer to the following question – If you know the details of the camera (focal length, f-stop, etc.) is that sufficient to mathematically derive the point spread function? IE does the PSF take on a specific form that can be reverse-engineered from a camera spec?

Thank you, steve.
I am an undergraduate student. Recently I have started learning about the image processing, image restoration. The deblurring part is fascinating me.
In this experiment you yourself create a distortion (PSF) and noise, convolve it with the original image, and then restore something close to the original image with the help of the inverse filter (this is deconvolution, if I am right).
What if the picture is blurred? And you have to make it deblurred? Do you apply the same process?
Can you help me out, I am still in learning phase, and sorry for my stupidity

Amit—You need to either provide an estimate of the PSF function, or you use something like deconvblind to simultaneously estimate the PSF and the deblurred image.

Hey, i really need your help on this. i’m need to deblur an image which was directly captured in blurred form by the cam. So, i need a code to estimate the psf so i can use the deconvolution procedure. M really stuck as i cant find that anywhere. if you can gimme a link to follow or some source, that would be amazing! thanks

Hi Steve!

I’m learning deconvolution and I kind of understood what you got here but what if I had a stack of images and needed to perform a 3D deconvolution? How would I fit the fftn function here to work with a gaussian PSF? Thanks for your time and care ;)

Ray—All of the Image Processing Toolbox deblurring functions support multidimensional images and PSFs. If you’re coding things up yourself based on the math, there’s no significant difference between the two-dimensional and n-dimensional cases, other than using fftn instead of fft2.

Ok Steve, I got it! :)
But now that I made it, or think I did… my implementation is not succeeding. Instead of deconvolving, it’s blurring the images and I think this was supposed to work…

Below is what I got for a simple deconvolution function with “stack_img(x, y, z)” as the input:

sigma = 2; rad = 32;
psf1 = fspecial('gaussian', [rad rad], 2*sigma) +eps;
psf2 = fspecial('gaussian', [rad rad], sigma) +eps;
psf3 = fspecial('gaussian', [rad rad], 2*sigma) +eps;

psf_3D = cat(3, psf1, psf2, psf3); % '3D': 2 inverted gaussian cones

fpsf = fftn(psf_3D, size(stack_img));
img_temp = ifftn((fftn(stack_img)).*conj(fpsf))./abs(fpsf.^2 + 1e-2);
img_out = real(img_temp);

Thanks again!

Ray—That doesn’t look like a very plausible three-dimensional PSF to me. Are you really just trying to do several two-dimensional deblurring operations? Then do that in a loop. You might want to give the deconvwnr function a try.

I wonder how to calculate the deblur parameter and deblur type? Without these information, it is hard to deblur the degraded image.

Yuan—Usually the only time you know the blur function is when you have some model of the physical imaging processing that includes the blurring mechanism. Deblurring when you don’t know the blur function is called blind deblurring in the literature. You might try the function deconvblind.
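A minimal sketch of the deconvblind call (the initial PSF guess here is an arbitrary assumption, and cam_blur_noise is the blurred, noisy image from the post):

```matlab
% Blind deblurring: estimate the image and the PSF together
psf_init = fspecial('gaussian', 7, 1);   % hypothetical initial guess
[restored, psf_est] = deconvblind(cam_blur_noise, psf_init);
imshow(restored)
```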

Hi there, I would like to perform an inverse gaussian filter in matlab. Is there any pre-written function already available in the toolbox? I have searched for it but perhaps I am looking in the wrong place. Otherwise, how should I write the matlab code? Could you kindly guide me? Thank you very much.

Ting—The Image Processing Toolbox contains deconvolution functions, including deconvwnr, deconvreg, deconvlucy, and deconvblind.

I try to do a normalization process to a chest radiograph image to obtain the soft tissue for the lungs separated from the ribs, to do this I have to do Gaussian blurring to the image, so can you help me in doing this please.

Hi Steve,
I am working on iterative blind deconvolution method. Can you explain how can we connect the codes above (given) by you to interative blind deconvolution method. Any help or suggestion will really be appreaciated.

Thanks

As I understand it, you receive the blurred signal plus noise, so you can't know the blurred signal alone without the noise; so can the formula with the division really be applied?

Matlab programming — sinusoidal noise
hi,

MATLAB programming.

Please, can you tell me how to add sinusoidal noise to a grayscale image?
‘imnoise’ can add different kinds of noises, ‘randerr’ can also be used to create noise in an image, but they dont serve my purpose…

I want to add sinusoidal noise…I googled a lot but in vain..

can you help me? please..

Kind Regards..
Henan

——————————————————————————–

Henan—You can generate any kind of synthetic signal you like and then just add it to your image data using “+”. See my 13-Jan-2006 post for some sample code to generate a sinusoidal signal.
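For example (a sketch; the amplitude of 0.1 and the 8 cycles across the width are arbitrary choices):

```matlab
% Add a horizontal sinusoidal pattern to a grayscale image
I = im2double(imread('cameraman.tif'));
[c, ~] = meshgrid(1:size(I,2), 1:size(I,1));
I_noisy = I + 0.1*sin(2*pi*8*c/size(I,2));
imshow(I_noisy, [])
```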

Please replace this picture of yours, the one with the cat. It’s a little creepy, reminds me of the villains from Bond movies.

. — Thanks for the suggestion. I might change it, especially because my wife doesn’t like it either.

May I suggest in turn that you replace your name (.) with something else?

It’s a little creepy, reminds me of the villains from Bond movies.

You say that like it’s a bad thing, Ernst Stavro Blofeld wasn’t all bad, he worked towards nuclear disarmament and punished a crooked investment banker in Thunderball. ;^)
BTW I ‘ll chime in on chorus, wonderful blog. Just bought the 2ed of DIP using Matlab. Looking to register some complex images (Time lapse movies of confocal microscope data, 50 image series, each with 26 Z-stacks…)

I need to do image blurring using the method of energy minmization: solving euler lagrange equations,

I have no back ground in the field of image processing ,

So can you suggest me some literature which can help me to start from scratch?

That would be so nice of you

Hi thanks for sharing these with us.. I am working on image motion deblurring. I have been looking for motion blur models recently. Although i have found built in function to simulate linear motion blur,

h = fspecial('motion', len, theta);

i dont know how to simulate rotational motion blur on matlab, could you please help about this ? Is there any report or example code script about this ?

Kind Regards,
Philip

Hi Dr. Reeves and Dr. Eddins, I enjoyed the article. Very interesting blog.

To answer Jeff’s question about PSF of a camera lens (Comment 45): If you know the details of the camera (focal length, f-stop, etc.) is that sufficient to mathematically derive the point spread function? IE does the PSF take on a specific form that can be reverse-engineered from a camera spec?

I’ve been working at a major lens company for three years. As far as I know, the PSF or MTF of a camera lens varies from lens to lens, here I’m assuming you’re talking about the optical part of a camera, the lens.

A simple analogy would be: just because two people have the same name doesn’t mean they are the same person. Focal length and maximum aperture help to put the particular lens into a particular category of optical characteristics, but the variation inside that category is probably more than you can imagine.

Matter of fact, all major lens manufacturers are spending a lot of money on developing lens MTF (PSF) testing packages.

I do not know exactly what the reason is: when I execute the code given by you here, it shows an error at the fft2 function, and I am unable to rectify it. I have been trying for 5 days but could not do it. Can you please tell me the reason? I am very much interested in the algorithm given by you here.

Sarath—Are you sure you’re running exactly the same code I’m running? Click on the “Get the MATLAB code” link at the bottom of the post.

Hi Steve – I am working on blind image deblurring myself as well. The above-mentioned process of yours works fine for artificially blurred images, i.e. we blur an image with a specific PSF and we can later deblur it using blind and non-blind deblurring techniques.

But the problem is that for real life images the process of de-blurring doesn’t provide any significant results. Any tips?

Hi Steve!

thanks for this article. It is written very well and I believe it serves as a nice starting point for people who are interested in image deblurring, because you present very clearly the main “ingredients” (and problems) of deconvolution.

Perhaps I missed something in your article, but I would like to know how this basic approach that you presented here relates to other well-known approaches like Richardson-Lucy algorithm, or Wiener deconvolution.

Thanks.

Thanks alot Steve for such helpful blog

can u plz let me knw what sould b result of this.

cam_in = real(ifft2(fft2(cam_blur)./fft2(cam)));
imshow(cam_in)

what i guessed is that it would be h (PSF) .I m getting all zeros.

kindly also guide me..
What should i do if i have real image and blurred image..and
I want to estimate PSF using these images…???

Thank you for sharing valuable information.

I am working on a set of images from a microscope which have varying regions of focus and blur. (The section on the slide was thick, so the images get out of focus in certain regions because of the uneven surface.)
How do I begin with solving the problem.
I know the specifications of the camera and the distance the object was placed etc.

Thanks!

These postings are the author's and don't necessarily represent the opinions of MathWorks.