{"id":180,"date":"2007-11-02T07:00:25","date_gmt":"2007-11-02T11:00:25","guid":{"rendered":"https:\/\/blogs.mathworks.com\/steve\/2007\/11\/02\/image-deblurring-wiener-filter\/"},"modified":"2019-10-23T15:25:29","modified_gmt":"2019-10-23T19:25:29","slug":"image-deblurring-wiener-filter","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/steve\/2007\/11\/02\/image-deblurring-wiener-filter\/","title":{"rendered":"Image deblurring &#8211; Wiener filter"},"content":{"rendered":"<div xmlns:mwsh=\"https:\/\/www.mathworks.com\/namespace\/mcode\/v1\/syntaxhighlight.dtd\" class=\"content\">\r\n   <introduction>\r\n      <p><i>I'd like to welcome back guest blogger Stan Reeves, professor of Electrical and Computer Engineering at Auburn University.\r\n            Stan will be writing a few blogs here about image deblurring.<\/i><\/p>\r\n   <\/introduction>\r\n   <p>In my <a href=\"https:\/\/blogs.mathworks.com\/steve\/2007\/08\/13\/image-deblurring-introduction\/\">last blog<\/a>, I looked at image deblurring using an inverse filter and some variations.  The inverse filter does a terrible job because it divides in the frequency domain by numbers that are very small, which amplifies any observation noise in\r\n      the image.  In this blog, I'll look at a better approach, based on the Wiener filter.\r\n   <\/p>\r\n   <p>I will continue to assume a shift-invariant blur model with independent additive noise, which is given by<\/p>\r\n   <p>y(m,n) = h(m,n)*x(m,n) + u(m,n)<\/p>\r\n   <p>where * is 2-D convolution, h(m,n) is the point-spread function (PSF), x(m,n) is the original image, and u(m,n) is noise.<\/p>\r\n   <p>The Wiener filter can be understood better in the frequency domain. 
Suppose we want to design a frequency-domain filter G(k,l)\r\n      so that the restored image is given by\r\n   <\/p>\r\n   <p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/180\/restore_blog2_eq13106.png\"> <\/p>\r\n   <p>We can choose G(k,l) so that we minimize<\/p>\r\n   <p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/180\/restore_blog2_eq14134.png\"> <\/p>\r\n   <p>E[] is the expected value of the expression.  We assume that both the noise and the signal are random processes and are independent\r\n      of one another.  The minimizer of this expression is given by\r\n   <\/p>\r\n   <p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/180\/restore_blog2_eq72763.png\"> <\/p>\r\n   <p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/180\/restore_blog2_eq70226.png\"> <\/p>\r\n   <p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/180\/restore_blog2_eq60317.png\"> <\/p>\r\n   <p>This filter gives the minimum mean-square error estimate of X(k,l).  Now that sounds really great until we realize that we\r\n      must supply the signal and noise power spectra (or equivalently the autocorrelation functions of the signal and noise).\r\n   <\/p>\r\n   <p>The noise power spectrum is fairly easy.  We usually assume white noise, which makes the noise power spectrum a constant.\r\n       But how do we determine what this constant is?  
If\r\n   <\/p>\r\n   <p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/180\/restore_blog2_eq48674.png\"> <\/p>\r\n   <p>then the noise power spectrum is given by<\/p>\r\n   <p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/180\/restore_blog2_eq9344.png\"> <\/p>\r\n   <p>for an MxN image.  This noise variance may be known based on knowledge of the image acquisition process or may be estimated\r\n      from the local variance of a smooth region of the image.\r\n   <\/p>\r\n   <p>The signal power spectrum is a little more challenging in principle, since it is not flat.  However, we have two factors working\r\n      in our favor: 1) most images have fairly similar power spectra, and 2) the Wiener filter is insensitive to small variations\r\n      in the signal power spectrum.\r\n   <\/p>\r\n   <p>Consider two very different images -- the cameraman and house.<\/p><pre style=\"background: #F9F7F3; padding: 10px; border: 1px solid rgb(200,200,200)\">cam = im2double(imread(<span style=\"color: #A020F0\">'cameraman.tif'<\/span>));\r\nhouse_url = <span style=\"color: #A020F0\">'https:\/\/blogs.mathworks.com\/images\/steve\/180\/house.tif'<\/span>;\r\nhouse = im2double(imread(house_url));\r\n\r\nsubplot(1,2,1)\r\nimshow(cam)\r\ncolormap(gray(256))\r\ntitle(<span style=\"color: #A020F0\">'cameraman'<\/span>)\r\nsubplot(1,2,2)\r\nimshow(house)\r\ntitle(<span style=\"color: #A020F0\">'house'<\/span>)<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/180\/restore_blog2_01.jpg\"> <p>We show the actual (log) spectrum for these two images.<\/p><pre style=\"background: #F9F7F3; padding: 10px; border: 1px solid rgb(200,200,200)\">cf = abs(fft2(cam)).^2;\r\nhf = abs(fft2(house)).^2;\r\nsubplot(1,2,1)\r\nsurf([-127:128]\/128,[-127:128]\/128,log(fftshift(cf)+1e-6))\r\nshading <span style=\"color: #A020F0\">interp<\/span>, 
colormap <span style=\"color: #A020F0\">gray<\/span>\r\ntitle(<span style=\"color: #A020F0\">'cameraman power spectrum'<\/span>)\r\nsubplot(1,2,2)\r\nsurf([-127:128]\/128,[-127:128]\/128,log(fftshift(hf)+1e-6))\r\nshading <span style=\"color: #A020F0\">interp<\/span>, colormap <span style=\"color: #A020F0\">gray<\/span>\r\ntitle(<span style=\"color: #A020F0\">'house power spectrum'<\/span>)<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/180\/restore_blog2_02.jpg\"> <p>Let's see what happens if we restore the cameraman with the actual power spectrum.<\/p><pre style=\"background: #F9F7F3; padding: 10px; border: 1px solid rgb(200,200,200)\">h = fspecial(<span style=\"color: #A020F0\">'disk'<\/span>,4);\r\ncam_blur = imfilter(cam,h,<span style=\"color: #A020F0\">'circular'<\/span>);\r\n<span style=\"color: #228B22\">% 40 dB PSNR<\/span>\r\nsigma_u = 10^(-40\/20)*abs(1-0);\r\ncam_blur_noise = cam_blur + sigma_u*randn(size(cam_blur));\r\n\r\ncam_wnr = deconvwnr(cam_blur_noise,h,numel(cam)*sigma_u^2.\/cf);\r\nsubplot(1,1,1)\r\nimshow(cam_wnr)\r\ncolormap(gray(256))\r\ntitle(<span style=\"color: #A020F0\">'restored image with exact spectrum'<\/span>)<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/180\/restore_blog2_03.jpg\"> <p>For comparison purposes, we restore the cameraman using the power spectrum obtained from the house image.<\/p><pre style=\"background: #F9F7F3; padding: 10px; border: 1px solid rgb(200,200,200)\">cam_wnr2 = deconvwnr(cam_blur_noise,h,numel(cam)*sigma_u^2.\/hf);\r\nimshow(cam_wnr2)\r\ncolormap(gray(256))\r\ntitle(<span style=\"color: #A020F0\">'restored image with house spectrum'<\/span>)<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/180\/restore_blog2_04.jpg\"> <p>Visually, the two are very similar in quality.  
In terms of mean-square error (MSE), the former is better (lower), as the\r\n      theory predicts:\r\n   <\/p><pre style=\"background: #F9F7F3; padding: 10px; border: 1px solid rgb(200,200,200)\">format <span style=\"color: #A020F0\">short<\/span> <span style=\"color: #A020F0\">e<\/span>\r\nmse1 = mean((cam(:)-cam_wnr(:)).^2)\r\nmse2 = mean((cam(:)-cam_wnr2(:)).^2)<\/pre><pre style=\"font-style:oblique\">\r\nmse1 =\r\n\r\n  2.9905e-003\r\n\r\n\r\nmse2 =\r\n\r\n  3.9191e-003\r\n\r\n<\/pre><p>In both cases, the spectrum is concentrated in the low frequencies and falls away at higher frequencies.  A smoothed version\r\n      of the spectra would look even more similar.  Rather than using the power spectrum from a specific image, one can either average\r\n      a large number of images or use a simple model of the power spectrum or autocorrelation function.\r\n   <\/p>\r\n   <p>A common model for the image autocorrelation function is<\/p>\r\n   <p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/180\/restore_blog2_eq84552.png\"> <\/p>\r\n   <p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/180\/restore_blog2_eq34279.png\"> <\/p>\r\n   <p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/180\/restore_blog2_eq136456.png\"> <\/p>\r\n   <p>We calculate these parameters for the cameraman, averaging over horizontal and vertical shifts to get a single parameter for\r\n      the correlation coefficient.  
Then we calculate an autocorrelation function.\r\n   <\/p><pre style=\"background: #F9F7F3; padding: 10px; border: 1px solid rgb(200,200,200)\">sigma2_x = var(cam(:))\r\nmean_x = mean(cam(:))\r\ncam_r = circshift(cam,[1 0]);\r\ncam_c = circshift(cam,[0 1]);\r\nrho_mat = corrcoef([cam(:); cam(:)],[cam_r(:); cam_c(:)])\r\nrho = rho_mat(1,2);\r\n[rr,cc] = ndgrid([-128:127],[-128:127]);\r\nr_x = sigma2_x*rho.^sqrt(rr.^2+cc.^2) + mean_x^2;\r\n\r\nsurf([-128:127],[-128:127],r_x)\r\naxis <span style=\"color: #A020F0\">tight<\/span>\r\nshading <span style=\"color: #A020F0\">interp<\/span>, camlight, colormap <span style=\"color: #A020F0\">jet<\/span>\r\ntitle(<span style=\"color: #A020F0\">'image autocorrelation model approximation'<\/span>)<\/pre><pre style=\"font-style:oblique\">\r\nsigma2_x =\r\n\r\n  5.9769e-002\r\n\r\n\r\nmean_x =\r\n\r\n  4.6559e-001\r\n\r\n\r\nrho_mat =\r\n\r\n  1.0000e+000  9.4492e-001\r\n  9.4492e-001  1.0000e+000\r\n\r\n<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/180\/restore_blog2_05.jpg\"> <p>From this we calculate another restored image:<\/p><pre style=\"background: #F9F7F3; padding: 10px; border: 1px solid rgb(200,200,200)\">cam_wnr3 = deconvwnr(cam_blur_noise,h,sigma_u^2,r_x);\r\nimshow(cam_wnr3)\r\ncolormap(gray(256))\r\ntitle(<span style=\"color: #A020F0\">'restored image using correlation model'<\/span>)\r\nmse3 = mean((cam(:)-cam_wnr3(:)).^2)<\/pre><pre style=\"font-style:oblique\">\r\nmse3 =\r\n\r\n  3.5255e-003\r\n\r\n<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/180\/restore_blog2_06.jpg\"> <p>The restored image from this method is better than the restored image using the house spectrum but not quite as good as the\r\n      one using the exact cameraman spectrum.  
All in all, we can see that the exact noise-to-signal spectrum isn't necessary to\r\n      yield good results.\r\n   <\/p>\r\n   <p><i>- by Stan Reeves, Department of Electrical and Computer Engineering, Auburn University<\/i><\/p><script language=\"JavaScript\">\r\n<!--\r\n\r\n    function grabCode_43a29fd9845b4bbbba5e80e590ae9888() {\r\n        \/\/ Remember the title so we can use it in the new page\r\n        title = document.title;\r\n\r\n        \/\/ Break up these strings so that their presence\r\n        \/\/ in the Javascript doesn't mess up the search for\r\n        \/\/ the MATLAB code.\r\n        t1='43a29fd9845b4bbbba5e80e590ae9888 ' + '##### ' + 'SOURCE BEGIN' + ' #####';\r\n        t2='##### ' + 'SOURCE END' + ' #####' + ' 43a29fd9845b4bbbba5e80e590ae9888';\r\n    \r\n        b=document.getElementsByTagName('body')[0];\r\n        i1=b.innerHTML.indexOf(t1)+t1.length;\r\n        i2=b.innerHTML.indexOf(t2);\r\n \r\n        code_string = b.innerHTML.substring(i1, i2);\r\n        code_string = code_string.replace(\/REPLACE_WITH_DASH_DASH\/g,'--');\r\n\r\n        \/\/ Use \/x3C\/g instead of the less-than character to avoid errors \r\n        \/\/ in the XML parser.\r\n        \/\/ Use '\\x26#60;' instead of '<' so that the XML parser\r\n        \/\/ doesn't go ahead and substitute the less-than character. 
\r\n        code_string = code_string.replace(\/\\x3C\/g, '\\x26#60;');\r\n\r\n        author = '';\r\n        copyright = '';\r\n\r\n        w = window.open();\r\n        d = w.document;\r\n        d.write('<pre>\\n');\r\n        d.write(code_string);\r\n\r\n        \/\/ Add author and copyright lines at the bottom if specified.\r\n        if ((author.length > 0) || (copyright.length > 0)) {\r\n            d.writeln('');\r\n            d.writeln('%%');\r\n            if (author.length > 0) {\r\n                d.writeln('% _' + author + '_');\r\n            }\r\n            if (copyright.length > 0) {\r\n                d.writeln('% _' + copyright + '_');\r\n            }\r\n        }\r\n\r\n        d.write('<\/pre>\\n');\r\n      \r\n      d.title = title + ' (MATLAB code)';\r\n      d.close();\r\n      }   \r\n      \r\n-->\r\n<\/script><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\"><br><a href=\"javascript:grabCode_43a29fd9845b4bbbba5e80e590ae9888()\"><span style=\"font-size: x-small;        font-style: italic;\">Get \r\n            the MATLAB code \r\n            <noscript>(requires JavaScript)<\/noscript><\/span><\/a><br><br>\r\n      Published with MATLAB&reg; 7.5<br><\/p>\r\n<\/div>\r\n<!--\r\n43a29fd9845b4bbbba5e80e590ae9888 ##### SOURCE BEGIN #####\r\n%%\r\n% _I'd like to welcome back guest blogger Stan Reeves, professor of \r\n% Electrical and Computer Engineering at Auburn University.  Stan will be\r\n% writing a few blogs here about image deblurring._\r\n\r\n%%\r\n% In my <https:\/\/blogs.mathworks.com\/steve\/2007\/08\/13\/image-deblurring-introduction\/\r\n% last blog>, I looked at image deblurring using an inverse filter and\r\n% some variations.  The inverse filter does a terrible job due to the fact\r\n% that it divides in the frequency domain by numbers that are very small,\r\n% which amplifies any observation noise in the image.  
In this blog, I'll\r\n% look at a better approach, based on the Wiener filter.\r\n\r\n%%\r\n% I will continue to assume a\r\n% shift-invariant blur model with independent additive noise, which is\r\n% given by\r\n\r\n%%\r\n% y(m,n) = h(m,n)*x(m,n) + u(m,n)\r\n\r\n%%\r\n% where * is 2-D convolution, h(m,n) is the point-spread function (PSF),\r\n% x(m,n) is the original image, and u(m,n) is noise. \r\n\r\n%%\r\n% The Wiener filter can be understood better in the frequency domain.\r\n% Suppose we want to design a frequency-domain filter G(k,l) so that the\r\n% restored image is given by\r\n\r\n%%\r\n%\r\n% $$\hat{X}(k,l) = G(k,l)Y(k,l)$$\r\n%\r\n\r\n%%\r\n% We can choose G(k,l) so that we minimize\r\n\r\n%%\r\n%\r\n% $$E[|X(k,l) - G(k,l)Y(k,l)|^2]$$\r\n%\r\n\r\n%%\r\n% E[] is the expected value of the expression.  We assume that both the\r\n% noise and the signal are random processes and are independent of one\r\n% another.  The minimizer of this expression is given by\r\n\r\n%%\r\n%\r\n% $$G(k,l) = \frac{H^*(k,l)}{|H(k,l)|^2 + S_u(k,l)\/S_x(k,l)}$$\r\n%\r\n\r\n%%\r\n%\r\n% $$\mbox{where } S_x(k,l) = \mbox{signal power spectrum}$$\r\n%\r\n\r\n%%\r\n%\r\n% $$\mbox{and } S_u(k,l) = \mbox{noise power spectrum}$$\r\n%\r\n\r\n%%\r\n% This filter gives the minimum mean-square error estimate of X(k,l).  Now\r\n% that sounds really great until we realize that we must supply the signal\r\n% and noise power spectra (or equivalently the autocorrelation functions of\r\n% the signal and noise).  \r\n\r\n%%\r\n% The noise power spectrum is fairly easy.  We usually assume white noise,\r\n% which makes the noise power spectrum a constant.  But how do we determine\r\n% what this constant is?  If \r\n\r\n%%\r\n%\r\n% $$\sigma_u^2 = \mbox{the variance at each pixel}$$\r\n%\r\n\r\n%%\r\n% then the noise power spectrum is given by\r\n\r\n%%\r\n%\r\n% $$S_u(k,l) = MN\sigma_u^2$$\r\n%\r\n\r\n%%\r\n% for an MxN image.  
This noise variance may be known based on knowledge of\r\n% the image acquisition process or may be estimated from the local variance\r\n% of a smooth region of the image.\r\n\r\n%%\r\n% The signal power spectrum is a little more challenging in principle,\r\n% since it is not flat.  However, we have two factors working in our favor:\r\n% 1) most images have fairly similar power spectra, and 2) the Wiener\r\n% filter is insensitive to small variations in the signal power spectrum.\r\n\r\n%%\r\n% Consider two very different images REPLACE_WITH_DASH_DASH the cameraman and house.\r\n\r\ncam = im2double(imread('cameraman.tif')); \r\nhouse_url = 'https:\/\/blogs.mathworks.com\/images\/steve\/180\/house.tif';\r\nhouse = im2double(imread(house_url));\r\n\r\nsubplot(1,2,1)\r\nimshow(cam)\r\ncolormap(gray(256))\r\ntitle('cameraman')\r\nsubplot(1,2,2)\r\nimshow(house)\r\ntitle('house')\r\n\r\n%%\r\n% We show the actual (log) spectrum for these two images.\r\ncf = abs(fft2(cam)).^2;\r\nhf = abs(fft2(house)).^2;\r\nsubplot(1,2,1)\r\nsurf([-127:128]\/128,[-127:128]\/128,log(fftshift(cf)+1e-6))\r\nshading interp, colormap gray\r\ntitle('cameraman power spectrum')\r\nsubplot(1,2,2)\r\nsurf([-127:128]\/128,[-127:128]\/128,log(fftshift(hf)+1e-6))\r\nshading interp, colormap gray\r\ntitle('house power spectrum')\r\n\r\n%%\r\n% Let's see what happens if we restore the cameraman with the actual power\r\n% spectrum.\r\n\r\nh = fspecial('disk',4);\r\ncam_blur = imfilter(cam,h,'circular');\r\n% 40 dB PSNR\r\nsigma_u = 10^(-40\/20)*abs(1-0);\r\ncam_blur_noise = cam_blur + sigma_u*randn(size(cam_blur));\r\n\r\ncam_wnr = deconvwnr(cam_blur_noise,h,numel(cam)*sigma_u^2.\/cf);\r\nsubplot(1,1,1)\r\nimshow(cam_wnr)\r\ncolormap(gray(256))\r\ntitle('restored image with exact spectrum')\r\n\r\n%%\r\n% For comparison purposes, we restore the cameraman using the power\r\n% spectrum obtained from the house image.\r\n\r\ncam_wnr2 = 
deconvwnr(cam_blur_noise,h,numel(cam)*sigma_u^2.\/hf);\r\nimshow(cam_wnr2)\r\ncolormap(gray(256))\r\ntitle('restored image with house spectrum')\r\n\r\n%%\r\n% Visually, the two are very similar in quality.  In terms of mean-square\r\n% error (MSE), the former is better (lower), as the theory predicts:\r\n\r\nformat short e\r\nmse1 = mean((cam(:)-cam_wnr(:)).^2)\r\nmse2 = mean((cam(:)-cam_wnr2(:)).^2)\r\n\r\n%%\r\n% In both cases, the spectrum is concentrated in the low frequencies and\r\n% falls away at higher frequencies.  A smoothed version of the spectra\r\n% would look even more similar.  Rather than using the power spectrum from\r\n% a specific image, one can either average a large number of images or use\r\n% a simple model of the power spectrum or autocorrelation function.\r\n\r\n%%\r\n% A common model for the image autocorrelation function is\r\n\r\n%%\r\n%\r\n% $$r_x(m,n) = \sigma_x^2\rho^{\sqrt{m^2 + n^2}} + \bar{X}^2$$\r\n%\r\n\r\n%%\r\n%\r\n% $$\bar{X} = \mbox{mean value of the image}$$\r\n%\r\n\r\n%%\r\n%\r\n% $$\rho = \mbox{correlation coefficient between pixels one pixel apart}$$\r\n%\r\n\r\n%%\r\n% We calculate these parameters for the cameraman, averaging over\r\n% horizontal and vertical shifts to get a single parameter for the\r\n% correlation coefficient.  
Then we calculate an autocorrelation function.\r\n\r\nsigma2_x = var(cam(:))\r\nmean_x = mean(cam(:))\r\ncam_r = circshift(cam,[1 0]);\r\ncam_c = circshift(cam,[0 1]);\r\nrho_mat = corrcoef([cam(:); cam(:)],[cam_r(:); cam_c(:)])\r\nrho = rho_mat(1,2);\r\n[rr,cc] = ndgrid([-128:127],[-128:127]);\r\nr_x = sigma2_x*rho.^sqrt(rr.^2+cc.^2) + mean_x^2;\r\n\r\nsurf([-128:127],[-128:127],r_x)\r\naxis tight\r\nshading interp, camlight, colormap jet\r\ntitle('image autocorrelation model approximation')\r\n\r\n%%\r\n% From this we calculate another restored image:\r\n\r\ncam_wnr3 = deconvwnr(cam_blur_noise,h,sigma_u^2,r_x);\r\nimshow(cam_wnr3)\r\ncolormap(gray(256))\r\ntitle('restored image using correlation model')\r\nmse3 = mean((cam(:)-cam_wnr3(:)).^2)\r\n\r\n%%\r\n% The restored image from this method is better than the restored image\r\n% using the house spectrum but not quite as good as the one using the exact\r\n% cameraman spectrum.  All in all, we can see that the exact\r\n% noise-to-signal spectrum isn't necessary to yield good results.\r\n%\r\n% _- by Stan Reeves, Department of Electrical and Computer Engineering,\r\n% Auburn University_\r\n##### SOURCE END ##### 43a29fd9845b4bbbba5e80e590ae9888\r\n-->","protected":false},"excerpt":{"rendered":"<p>\r\n   \r\n      I'd like to welcome back guest blogger Stan Reeves, professor of Electrical and Computer Engineering at Auburn University.\r\n            Stan will be writing a few blogs here about image... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/steve\/2007\/11\/02\/image-deblurring-wiener-filter\/\">read more >><\/a><\/p>","protected":false},"author":42,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[11],"tags":[208,50,218,434,58,436,428,426,400,430,292,56,390,370,76,36,224,308,206,162,396,386,194,72,398,52,432],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/posts\/180"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/users\/42"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/comments?post=180"}],"version-history":[{"count":1,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/posts\/180\/revisions"}],"predecessor-version":[{"id":3572,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/posts\/180\/revisions\/3572"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/media?parent=180"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/categories?post=180"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/tags?post=180"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}