{"id":165,"date":"2007-08-13T13:41:36","date_gmt":"2007-08-13T17:41:36","guid":{"rendered":"https:\/\/blogs.mathworks.com\/steve\/2007\/08\/13\/image-deblurring-introduction\/"},"modified":"2019-10-23T13:46:49","modified_gmt":"2019-10-23T17:46:49","slug":"image-deblurring-introduction","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/steve\/2007\/08\/13\/image-deblurring-introduction\/","title":{"rendered":"Image deblurring &#8211; Introduction"},"content":{"rendered":"<div xmlns:mwsh=\"https:\/\/www.mathworks.com\/namespace\/mcode\/v1\/syntaxhighlight.dtd\" class=\"content\">\r\n   <introduction>\r\n      <p><i>I'd like to introduce guest blogger <a href=\"http:\/\/www.eng.auburn.edu\/users\/reevesj\/\">Stan Reeves<\/a>. Stan is a professor in the Department of Electrical and Computer Engineering at Auburn University.  He serves as an associate\r\n            editor for IEEE Transactions on Image Processing.  His research activities include image restoration and reconstruction, optimal\r\n            image acquisition, and medical imaging.<\/i><\/p>\r\n      <p><i>Over the next few months, Stan plans to contribute several blogs here on the general topic of image deblurring in MATLAB.<\/i><\/p>\r\n   <\/introduction>\r\n   <p>Image deblurring (or restoration) is an old problem in image processing, but it continues to attract the attention of researchers\r\n      and practitioners alike.  A number of real-world problems from astronomy to consumer imaging find applications for image restoration\r\n      algorithms.  Plus, image restoration is an easily visualized example of a larger class of inverse problems that arise in all\r\n      kinds of scientific, medical, industrial and theoretical problems.  Besides that, it's just fun to apply an algorithm to a\r\n      blurry image and then see immediately how well you did.\r\n   <\/p>\r\n   <p>To deblur the image, we need a mathematical description of how it was blurred.  
(If that's not available, there are algorithms\r\n      to estimate the blur.  But that's for another day.) We usually start with a shift-invariant model, meaning that every point\r\n      in the original image spreads out the same way in forming the blurry image.  We model this with convolution:\r\n   <\/p>\r\n   <p>g(m,n) = h(m,n)*f(m,n) + u(m,n)<\/p>\r\n   <p>where * is 2-D convolution, h(m,n) is the point-spread function (PSF), f(m,n) is the original image, and u(m,n) is noise (usually\r\n      considered independent identically distributed Gaussian).  This equation originates in continuous space but is shown already\r\n      discretized for convenience.\r\n   <\/p>\r\n   <p>Actually, a blurred image is usually a windowed version of the output g(m,n) above, since the original image f(m,n) isn't\r\n      ordinarily zero outside of a rectangular array.  Let's go ahead and synthesize a blurred image so we'll have something to\r\n      work with.  If we assume f(m,n) is periodic (generally a rather poor assumption!), the convolution becomes circular convolution,\r\n      which can be implemented with FFTs via the convolution theorem.\r\n   <\/p>\r\n   <p>If we model out-of-focus blurring using geometric optics, we can obtain a PSF using <tt>fspecial<\/tt> and then implement circular convolution:\r\n   <\/p>\r\n   <p>Form PSF as a disk of radius 4 pixels<\/p><pre style=\"background: #F9F7F3; padding: 10px; border: 1px solid rgb(200,200,200)\">h = fspecial(<span style=\"color: #A020F0\">'disk'<\/span>,4);\r\n<span style=\"color: #228B22\">% Read image and convert to double for FFT<\/span>\r\ncam = im2double(imread(<span style=\"color: #A020F0\">'cameraman.tif'<\/span>));\r\nhf = fft2(h,size(cam,1),size(cam,2));\r\ncam_blur = real(ifft2(hf.*fft2(cam)));\r\nimshow(cam_blur)<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/165\/restore_blog1_01.jpg\"> <p>A similar result can be computed using <tt>imfilter<\/tt> 
with appropriate settings.\r\n   <\/p>\r\n   <p>You'll immediately notice that the circular convolution caused the pants and tripod to wrap around and blur into the sky.\r\n       I told you that periodicity of the input image was a poor assumption!  :-)  But we won't worry about that for the time being.\r\n   <\/p>\r\n   <p>Now we need to add some noise.  If we define peak SNR (PSNR) as<\/p>\r\n   <p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/165\/restore_blog1_eq97143.png\"> <\/p>\r\n   <p>then the noise scaling is given by<\/p>\r\n   <p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/165\/restore_blog1_eq43245.png\"> <\/p>\r\n   <p>Now we add noise to get a 40 dB PSNR:<\/p><pre style=\"background: #F9F7F3; padding: 10px; border: 1px solid rgb(200,200,200)\">sigma_u = 10^(-40\/20)*abs(1-0);\r\ncam_blur_noise = cam_blur + sigma_u*randn(size(cam_blur));\r\nimshow(cam_blur_noise)<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/165\/restore_blog1_02.jpg\"> <p>The inverse filter is the simplest solution to the deblurring problem. If we ignore the noise term, we can implement the inverse\r\n      by dividing by the FFT of h(m,n) and performing an inverse FFT of the result. People who work with image restoration love\r\n      to begin with the inverse filter.  It's really great because it's simple and the results are absolutely terrible.  That means\r\n      that any new-and-improved image restoration algorithm always looks good by comparison!  
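<\/p>\r\n   <p>(A quick sanity check: the PSNR definition above relates the noise variance to the image range, so we can plug the measured noise back in; with the nominal range abs(1-0), the result should come out close to 40 dB.)<\/p><pre style=\"background: #F9F7F3; padding: 10px; border: 1px solid rgb(200,200,200)\"><span style=\"color: #228B22\">% Estimate the realized PSNR from the measured noise<\/span>\r\npsnr_est = 10*log10(abs(1-0)^2\/var(cam_blur_noise(:) - cam_blur(:)))<\/pre>\r\n   <p>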
Let me show you what I mean:\r\n   <\/p><pre style=\"background: #F9F7F3; padding: 10px; border: 1px solid rgb(200,200,200)\">cam_inv = real(ifft2(fft2(cam_blur_noise).\/hf));\r\nimshow(cam_inv)<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/165\/restore_blog1_03.jpg\"> <p>Something must be wrong, right?  Well, nothing is wrong with the code. But it is definitely wrong to think that one can ignore\r\n      noise.  To see why, look at the frequency response magnitude of the PSF:\r\n   <\/p><pre style=\"background: #F9F7F3; padding: 10px; border: 1px solid rgb(200,200,200)\">hf_abs = abs(hf);\r\nsurf([-127:128]\/128,[-127:128]\/128,fftshift(hf_abs))\r\nshading <span style=\"color: #A020F0\">interp<\/span>, camlight, colormap <span style=\"color: #A020F0\">jet<\/span>\r\nxlabel(<span style=\"color: #A020F0\">'PSF FFT magnitude'<\/span>)<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/165\/restore_blog1_04.jpg\"> <p>We see right away that the magnitude response of the blur has some very low values.  When we divide by this pointwise, we\r\n      are also dividing the additive noise term by these same low values, resulting in a huge amplification of the noise--enough\r\n      to completely swamp the image itself.\r\n   <\/p>\r\n   <p>Now we can apply a very simple trick to attempt our dramatic and very satisfying improvement.  
We simply zero out the frequency\r\n      components in the inverse filter result for which the PSF frequency response is below a threshold.\r\n   <\/p><pre style=\"background: #F9F7F3; padding: 10px; border: 1px solid rgb(200,200,200)\">cam_pinv = real(ifft2((abs(hf) &gt; 0.1).*fft2(cam_blur_noise).\/hf));\r\nimshow(cam_pinv)\r\nxlabel(<span style=\"color: #A020F0\">'pseudo-inverse restoration'<\/span>)<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/165\/restore_blog1_05.jpg\"> <p>For comparison purposes, we repeat the blurred and noisy image.<\/p><pre style=\"background: #F9F7F3; padding: 10px; border: 1px solid rgb(200,200,200)\">imshow(cam_blur_noise)\r\nxlabel(<span style=\"color: #A020F0\">'blurred image with noise'<\/span>)<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/165\/restore_blog1_06.jpg\"> <p>This result is obviously far better than the first attempt!  It still contains noise but at a much lower level.  It's not\r\n      dramatic and satisfying, but it's a step in the right direction.  You can see some distortion because some of\r\n      the frequencies have not been restored.  In general, some of the higher frequencies have been eliminated, which causes some\r\n      blurring in the result as well as ringing. The ringing is due to the Gibbs phenomenon -- an effect in which a steplike transition\r\n      becomes \"wavy\" due to missing frequencies.\r\n   <\/p>\r\n   <p>A similar but slightly improved result can be obtained with a different form of the pseudo-inverse filter.  By adding a small\r\n      number delta^2 to the divisor, we get nearly the same quotient unless the divisor is in the same range as or smaller\r\n      than delta^2.  
That is, if we let\r\n   <\/p>\r\n   <p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/165\/restore_blog1_eq28106.png\"> <\/p>\r\n   <p>then<\/p>\r\n   <p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/165\/restore_blog1_eq92782.png\"> <\/p>\r\n   <p>and<\/p>\r\n   <p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/165\/restore_blog1_eq56823.png\"> <\/p>\r\n   <p>which is like the previous pseudo-inverse filter but with a smooth transition between the two extremes. To implement this\r\n      in MATLAB, we do:\r\n   <\/p><pre style=\"background: #F9F7F3; padding: 10px; border: 1px solid rgb(200,200,200)\">cam_pinv2 = real(ifft2(fft2(cam_blur_noise).*conj(hf).\/(abs(hf).^2 + 1e-2)));\r\nimshow(cam_pinv2)\r\nxlabel(<span style=\"color: #A020F0\">'alternative pseudo-inverse restoration'<\/span>)<\/pre><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/images\/steve\/165\/restore_blog1_07.jpg\"> <p>As you can see, this produces better results.  
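<\/p>\r\n   <p>(To attach a rough number to \"better\", one simple option is to compare the mean-squared error of each restoration against the original image. The exact values depend on the random noise realization, but the smooth pseudo-inverse should typically come out lower.)<\/p><pre style=\"background: #F9F7F3; padding: 10px; border: 1px solid rgb(200,200,200)\"><span style=\"color: #228B22\">% Compare restorations to the original (values vary with the noise)<\/span>\r\nmse_pinv = mean((cam_pinv(:)-cam(:)).^2);\r\nmse_pinv2 = mean((cam_pinv2(:)-cam(:)).^2);\r\n[mse_pinv mse_pinv2]<\/pre>\r\n   <p>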
This is due to a smoother transition between restoration and noise smoothing\r\n      in the frequency components.\r\n   <\/p>\r\n   <p>I hope to look at some further improvements in future blogs as well as some strategies for dealing with more real-world assumptions.<\/p>\r\n   <p><i>- by Stan Reeves, Department of Electrical and Computer Engineering, Auburn University<\/i><\/p><script language=\"JavaScript\">\r\n<!--\r\n\r\n    function grabCode_9dfac47bf4de4c1492d88c00c8af3196() {\r\n        \/\/ Remember the title so we can use it in the new page\r\n        title = document.title;\r\n\r\n        \/\/ Break up these strings so that their presence\r\n        \/\/ in the Javascript doesn't mess up the search for\r\n        \/\/ the MATLAB code.\r\n        t1='9dfac47bf4de4c1492d88c00c8af3196 ' + '##### ' + 'SOURCE BEGIN' + ' #####';\r\n        t2='##### ' + 'SOURCE END' + ' #####' + ' 9dfac47bf4de4c1492d88c00c8af3196';\r\n    \r\n        b=document.getElementsByTagName('body')[0];\r\n        i1=b.innerHTML.indexOf(t1)+t1.length;\r\n        i2=b.innerHTML.indexOf(t2);\r\n \r\n        code_string = b.innerHTML.substring(i1, i2);\r\n        code_string = code_string.replace(\/REPLACE_WITH_DASH_DASH\/g,'--');\r\n\r\n        \/\/ Use \/x3C\/g instead of the less-than character to avoid errors \r\n        \/\/ in the XML parser.\r\n        \/\/ Use '\\x26#60;' instead of '<' so that the XML parser\r\n        \/\/ doesn't go ahead and substitute the less-than character. 
\r\n        code_string = code_string.replace(\/\\x3C\/g, '\\x26#60;');\r\n\r\n        author = '';\r\n        copyright = '';\r\n\r\n        w = window.open();\r\n        d = w.document;\r\n        d.write('<pre>\\n');\r\n        d.write(code_string);\r\n\r\n        \/\/ Add author and copyright lines at the bottom if specified.\r\n        if ((author.length > 0) || (copyright.length > 0)) {\r\n            d.writeln('');\r\n            d.writeln('%%');\r\n            if (author.length > 0) {\r\n                d.writeln('% _' + author + '_');\r\n            }\r\n            if (copyright.length > 0) {\r\n                d.writeln('% _' + copyright + '_');\r\n            }\r\n        }\r\n\r\n        d.write('<\/pre>\\n');\r\n      \r\n      d.title = title + ' (MATLAB code)';\r\n      d.close();\r\n      }   \r\n      \r\n-->\r\n<\/script><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\"><br><a href=\"javascript:grabCode_9dfac47bf4de4c1492d88c00c8af3196()\"><span style=\"font-size: x-small;        font-style: italic;\">Get \r\n            the MATLAB code \r\n            <noscript>(requires JavaScript)<\/noscript><\/span><\/a><br><br>\r\n      Published with MATLAB&reg; 7.4<br><\/p>\r\n<\/div>\r\n<!--\r\n9dfac47bf4de4c1492d88c00c8af3196 ##### SOURCE BEGIN #####\r\n%%\r\n% _I'd like to introduce guest blogger \r\n% <http:\/\/www.eng.auburn.edu\/users\/reevesj\/ Stan Reeves>. \r\n% Stan is a\r\n% professor in the Department of Electrical and Computer Engineering at\r\n% Auburn University.  He serves as an associate editor for IEEE\r\n% Transactions on Image Processing.  
His research activities include image\r\n% restoration and reconstruction, optimal image acquisition, and medical\r\n% imaging._\r\n%\r\n% _Over the next few months, Stan plans to contribute several blogs here on\r\n% the general topic of image deblurring in MATLAB._\r\n\r\n%%\r\n% Image deblurring (or restoration) is an old problem in image processing, but it continues\r\n% to attract the attention of researchers and practitioners alike.  A\r\n% number of real-world problems from astronomy to consumer imaging find\r\n% applications for image restoration algorithms.  Plus, image restoration\r\n% is an easily visualized example of a larger class of inverse problems\r\n% that arise in all kinds of scientific, medical, industrial and\r\n% theoretical problems.  Besides that, it's just fun to apply an\r\n% algorithm to a blurry image and then see\r\n% immediately how well you did.\r\n\r\n%%\r\n% To deblur the image, we need a mathematical description of\r\n% how it was blurred.  (If that's not available, there are algorithms to\r\n% estimate the blur.  But that's for another day.) We usually start with a\r\n% shift-invariant model, meaning that every point in the original image\r\n% spreads out the same way in forming the blurry image.  We model this with\r\n% convolution:\r\n\r\n%%\r\n% g(m,n) = h(m,n)*f(m,n) + u(m,n)\r\n\r\n%%\r\n% where * is 2-D convolution, h(m,n) is the point-spread function (PSF), f(m,n) is the original image,\r\n% and u(m,n) is noise (usually considered independent identically\r\n% distributed Gaussian).  This equation originates in continuous space but\r\n% is shown already discretized for convenience. \r\n\r\n%%\r\n% Actually, a blurred image is usually a windowed version of the output\r\n% g(m,n) above, since the original image f(m,n) isn't ordinarily zero\r\n% outside of a rectangular array.  Let's go ahead and synthesize a blurred\r\n% image so we'll have something to work with.  
If we assume f(m,n) is\r\n% periodic (generally a rather poor assumption!), the convolution becomes\r\n% circular convolution, which can be implemented with FFTs via the\r\n% convolution theorem.\r\n\r\n%%\r\n% If we model out-of-focus blurring using geometric optics, we can obtain a\r\n% PSF using |fspecial| and then implement circular convolution:\r\n\r\n%%\r\n% Form PSF as a disk of radius 4 pixels\r\nh = fspecial('disk',4); \r\n% Read image and convert to double for FFT\r\ncam = im2double(imread('cameraman.tif')); \r\nhf = fft2(h,size(cam,1),size(cam,2));\r\ncam_blur = real(ifft2(hf.*fft2(cam)));\r\nimshow(cam_blur)\r\n\r\n%%\r\n% A similar result can be computed using |imfilter| with appropriate\r\n% settings.\r\n\r\n%%\r\n% You'll immediately notice that the circular convolution caused the pants\r\n% and tripod to wrap around and blur into the sky.  I told you that\r\n% periodicity of the input image was a poor assumption!  :-)  But we won't\r\n% worry about that for the time being.\r\n\r\n%%\r\n% Now we need to add some noise.  If we define peak SNR (PSNR) as\r\n\r\n%%\r\n% \r\n% $$\\mbox{PSNR} = 10 \\log_{10}\\frac{[gmax - gmin]^2}{\\sigma_u^2}$$\r\n% \r\n\r\n%%\r\n% then the noise scaling is given by\r\n\r\n%%\r\n% \r\n% $$\\sigma_u = 10^{-\\mbox{PSNR}\/20}|gmax - gmin|$$\r\n% \r\n\r\n%%\r\n% Now we add noise to get a 40 dB PSNR:\r\n\r\n%%\r\nsigma_u = 10^(-40\/20)*abs(1-0);\r\ncam_blur_noise = cam_blur + sigma_u*randn(size(cam_blur));\r\nimshow(cam_blur_noise)\r\n\r\n%%\r\n% The inverse filter is the simplest solution to the deblurring problem.\r\n% If we ignore the noise term, we can implement the inverse by dividing by\r\n% the FFT of h(m,n) and performing an inverse FFT of the result. \r\n% People who work with image restoration love to begin with the inverse\r\n% filter.  It's really great because it's simple and the results are\r\n% absolutely terrible.  
That means that any new-and-improved image\r\n% restoration algorithm always looks good by comparison!  Let me show you\r\n% what I mean:\r\n\r\n%%\r\ncam_inv = real(ifft2(fft2(cam_blur_noise).\/hf));\r\nimshow(cam_inv)\r\n\r\n%%\r\n% Something must be wrong, right?  Well, nothing is wrong with the code.\r\n% But it is definitely wrong to think that one can ignore noise.  To see\r\n% why, look at the frequency response magnitude of the PSF:\r\n\r\n%%\r\nhf_abs = abs(hf);\r\nsurf([-127:128]\/128,[-127:128]\/128,fftshift(hf_abs))\r\nshading interp, camlight, colormap jet\r\nxlabel('PSF FFT magnitude')\r\n\r\n%%\r\n% We see right away that the magnitude response of the blur has some very\r\n% low values.  When we divide by this pointwise, we are also dividing the\r\n% additive noise term by these same low values, resulting in a huge\r\n% amplification of the noiseREPLACE_WITH_DASH_DASHenough to completely swamp the image itself.\r\n\r\n%%\r\n% Now we can apply a very simple trick to attempt our dramatic and very\r\n% satisfying improvement.  We simply zero out the frequency\r\n% components in the inverse filter result for which the PSF frequency\r\n% response is below a threshold.\r\n\r\n%%\r\ncam_pinv = real(ifft2((abs(hf) > 0.1).*fft2(cam_blur_noise).\/hf));\r\nimshow(cam_pinv)\r\nxlabel('pseudo-inverse restoration')\r\n\r\n%%\r\n% For comparison purposes, we repeat the blurred and noisy image.\r\n\r\n%%\r\nimshow(cam_blur_noise)\r\nxlabel('blurred image with noise')\r\n\r\n%%\r\n% This result is obviously far better than the first attempt!  It still\r\n% contains noise but at a much lower level.  It's not dramatic and satisfying, but it's a \r\n% step in the right direction.  You can see some\r\n% distortion because some of the frequencies have not been\r\n% restored.  
In general, some of the higher frequencies have been\r\n% eliminated, which causes some blurring in the result as well as ringing.\r\n% The ringing is due to the Gibbs phenomenon REPLACE_WITH_DASH_DASH an effect in which a\r\n% steplike transition becomes \"wavy\" due to missing frequencies.\r\n\r\n%%\r\n% A similar but slightly improved result can be obtained with a different form of\r\n% the pseudo-inverse filter.  By adding a small number delta^2 to the\r\n% divisor, we get nearly the same quotient unless the divisor is in the\r\n% same range as or smaller than delta^2.  That is, if we let\r\n\r\n%%\r\n%\r\n% $$H_I = \\frac{H^\\ast}{|H|^2 + \\delta^2}$$\r\n%\r\n\r\n%%\r\n% then\r\n\r\n%%\r\n%\r\n% $$H_I \\approx \\frac{1}{H}\\ \\ \\ \\mbox{if}\\ \\ \\ |\\delta| << |H|$$\r\n%\r\n\r\n%%\r\n% and\r\n\r\n%%\r\n%\r\n% $$H_I \\approx 0\\ \\ \\ \\mbox{if}\\ \\ \\ |\\delta| >> |H|$$\r\n%\r\n\r\n%%\r\n% which is like the previous pseudo-inverse filter but with a smooth\r\n% transition between the two extremes. To implement this in MATLAB, we do:\r\n\r\n%%\r\ncam_pinv2 = real(ifft2(fft2(cam_blur_noise).*conj(hf).\/(abs(hf).^2 + 1e-2)));\r\nimshow(cam_pinv2)\r\nxlabel('alternative pseudo-inverse restoration')\r\n\r\n%%\r\n% As you can see, this produces better results.  This is due to a smoother transition\r\n% between restoration and noise smoothing in the frequency components.\r\n\r\n%%\r\n% I hope to look at some further improvements in future blogs as well as\r\n% some strategies for dealing with more real-world assumptions.\r\n%\r\n% _- by Stan Reeves, Department of Electrical and Computer Engineering,\r\n% Auburn University_\r\n##### SOURCE END ##### 9dfac47bf4de4c1492d88c00c8af3196\r\n-->","protected":false},"excerpt":{"rendered":"<p>\r\n   \r\n      I'd like to introduce guest blogger Stan Reeves. Stan is a professor in the Department of Electrical and Computer Engineering at Auburn University.  He serves as an associate\r\n          ... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/steve\/2007\/08\/13\/image-deblurring-introduction\/\">read more >><\/a><\/p>","protected":false},"author":42,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[11],"tags":[208,218,58,392,400,292,394,390,76,36,396,200,386,190,398,94],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/posts\/165"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/users\/42"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/comments?post=165"}],"version-history":[{"count":1,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/posts\/165\/revisions"}],"predecessor-version":[{"id":3554,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/posts\/165\/revisions\/3554"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/media?parent=165"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/categories?post=165"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/steve\/wp-json\/wp\/v2\/tags?post=165"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}