{"id":1280,"date":"2019-02-21T20:40:33","date_gmt":"2019-02-21T20:40:33","guid":{"rendered":"https:\/\/blogs.mathworks.com\/deep-learning\/?p=1280"},"modified":"2019-02-26T13:28:40","modified_gmt":"2019-02-26T13:28:40","slug":"image-to-image-regression","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/deep-learning\/2019\/02\/21\/image-to-image-regression\/","title":{"rendered":"Image-to-Image Regression"},"content":{"rendered":"Today I'd like to talk about the basic concepts of setting up a network to train on an image-to-image regression problem.\r\nThis demo came about for two reasons:\r\n<ol>\r\n \t<li>There are quite a few questions on <a href=\"http:\/\/mathworks.com\/matlabcentral\/answers\">MATLAB Answers<\/a> about image-to-image deep learning problems.<\/li>\r\n \t<li>I\u2019m planning a future in-depth post with an image processing\/deep learning expert, where we\u2019ll be getting into the weeds on regression, and it would be good to understand the basics to keep up with him.<\/li>\r\n<\/ol>\r\nSo, let\u2019s dive into the concept of image-to-image deep learning problems in MATLAB.\r\n<img decoding=\"async\" loading=\"lazy\" width=\"772\" height=\"325\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/02\/semanticseg2.png\" alt=\"\" class=\"alignnone size-full wp-image-1364\" \/>\r\n<h6><\/h6>\r\nTypically, deep learning problems can be divided into classification and regression problems. 
Classification is the problem that most people are familiar with, and one we write about often.\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" class=\"aligncenter wp-image-1314 size-medium\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/02\/2019-02-21_13-25-18-300x195.png\" alt=\"\" width=\"300\" height=\"195\" \/>\r\n<h6><\/h6>\r\n<p style=\"text-align: center;\">Given an image, predict which category an object belongs to.<\/p>\r\n\r\n<h6><\/h6>\r\nIn regression problems, there are no longer discrete categories. The output could be a non-discrete value: for example, given an image, output <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/examples\/convert-classification-network-into-regression-network.html\">the rotation value<\/a>.\r\n<h6><\/h6>\r\nAlong the same lines, given an image, predict a new image!\r\n<h6><\/h6>\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"550\" height=\"205\" class=\"alignnone size-full wp-image-1318\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/02\/semanticseg.png\" alt=\"\" \/>\r\n<h3><\/h3>\r\n<h6><\/h6>\r\n<h6><\/h6>\r\n<h6><\/h6>\r\n<strong>To learn more about the concept<\/strong> of image-to-image deep learning, we can start with a simple example in the documentation:\r\n<a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/examples\/remove-noise-from-color-image-using-pretrained-neural-network.html\">https:\/\/www.mathworks.com\/help\/deeplearning\/examples\/remove-noise-from-color-image-using-pretrained-neural-network.html<\/a>\r\n<h6><\/h6>\r\nThis is a great introduction to the topic, and it's explained well in the example. Plus, if you\u2019re trying to denoise an image, that example solves the problem, so you're done!\r\n<h6><\/h6>\r\nHowever, the goal of this post is to understand how to create our own custom deep learning algorithm <strong>from scratch<\/strong>. The hardest part is getting the data set up. 
Everything else should be <em>reasonably<\/em> straightforward.\r\n<h6><\/h6>\r\n<h6><\/h6>\r\n<h2>All About Datastores<\/h2>\r\n<h6><\/h6>\r\nDatastores deserve a post of their own, but let me just say: if you can appreciate and master datastores, you can conquer the world.\r\n<h6><\/h6>\r\nAt a high level, datastores make sense: they are an efficient way of bringing in data for deep learning (and other) applications. You don\u2019t have to deal with memory management, and deep learning functions know how to handle a datastore as an input. This is all good.\r\n<em>\u201cHow do I get datastores to work for image-to-image deep learning training data?\u201d<\/em> Great question!\r\n<h6><\/h6>\r\n<h3>randomPatchExtractionDatastore<\/h3>\r\nI'm going to recommend using this handy function called <a href=\"https:\/\/www.mathworks.com\/help\/images\/ref\/randompatchextractiondatastore.html\">Random Patch Extraction Datastore<\/a>, which is what I use in the example below.\r\n<h6><\/h6>\r\nWe\u2019re not exactly short with our naming convention here, but you get a great idea of what you\u2019re getting with this function!\r\n<h6><\/h6>\r\nExtracting random patches of your images is a great way to generate more training samples, especially if you're low on data. The algorithm needs enough data samples to train accurately, so we can cut the images into smaller pieces and deliver more examples for the network to learn from.\r\n<h6><\/h6>\r\nThis function takes an input datastore, a corresponding output datastore, and a patch size.\r\n<h6><\/h6>\r\n<h2>The code:<\/h2>\r\n<h6><\/h6>\r\nOur problem is going to be image deblurring. 
And we're going to set this up from scratch.\r\n<h6><\/h6>\r\nI have a perfect final image:\r\n<img decoding=\"async\" loading=\"lazy\" width=\"654\" height=\"453\" class=\"alignnone size-full wp-image-1332\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/02\/image1.png\" alt=\"\" \/>\r\n<h6><\/h6>\r\nI blur the image:\r\n<img decoding=\"async\" loading=\"lazy\" width=\"654\" height=\"453\" class=\"alignnone size-full wp-image-1334\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/02\/image2.png\" alt=\"\" \/>\r\n<h6><\/h6>\r\nand I put all of my data into individual folders.\r\n<pre>imagesDir = '.';\r\ntrainImagesDir = fullfile(imagesDir,'iaprtc12','images','02');\r\nexts = {'.jpg','.bmp','.png'};\r\ntrainImages = imageDatastore(trainImagesDir,'FileExtensions',exts);\r\nblurredDir = createTrainingSet(trainImages);\r\nblurredImages = imageDatastore(blurredDir,'FileExtensions','.mat','ReadFcn',@matRead);<\/pre>\r\n<h6><\/h6>\r\nThe blurred image is my input, the perfect\/original image is my output. 
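\r\n<h6><\/h6>\r\nIn case you're curious, the blurring inside my createTrainingSet helper function (the full code is at the end of this post) is nothing fancy: it's a motion point-spread function applied with imfilter. The LEN and THETA values are just the ones I picked for this demo:\r\n<pre>% create a motion-blur point-spread function and apply it to image I\r\nLEN = 11;\r\nTHETA = 11;\r\nPSF = fspecial('motion', LEN, THETA);\r\nblurredImage = imfilter(I, PSF, 'conv', 'circular');<\/pre>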
This felt backwards, but I reminded myself: I want the network to see a blurry image and output the clean image as a final result.\r\n<h6><\/h6>\r\nVisualize the input and output images:\r\n<h6><\/h6>\r\n<pre>ii = randi(200);\r\nim_orig = trainImages.readimage(ii);\r\nim_blurred = blurredImages.readimage(ii);\r\n\r\nimshow(im_orig);\r\ntitle('Clean Image - Final Result');\r\nfigure; imshow(im_blurred);\r\ntitle('Blurred Image - Input');<\/pre>\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" class=\"aligncenter wp-image-1344 size-medium\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/02\/image0a-300x208.png\" alt=\"\" width=\"300\" height=\"208\" \/>\r\n\r\n<img decoding=\"async\" loading=\"lazy\" class=\"aligncenter wp-image-1346 size-medium\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/02\/image0b-300x208.png\" alt=\"\" width=\"300\" height=\"208\" \/>\r\n<h6><\/h6>\r\nSet up data augmentation for even more variety in the training images.\r\n<pre>augmenter = imageDataAugmenter( ...\r\n    'RandRotation',@()randi([0,1],1)*90, ...\r\n    'RandXReflection',true);<\/pre>\r\nThis will randomly rotate the input images by 0 or 90 degrees, and allow for random reflection in the X direction.\r\n<h6><\/h6>\r\nThen our random patch extraction datastore is used to compile the input and output images in a way the trainNetwork command will understand.\r\n<pre>miniBatchSize = 64;\r\npatchSize = [40 40];\r\npatchds = randomPatchExtractionDatastore(blurredImages,trainImages,patchSize, ...\r\n    'PatchesPerImage',64, ...\r\n    'DataAugmentation',augmenter);\r\npatchds.MiniBatchSize = miniBatchSize;<\/pre>\r\n<h6><\/h6>\r\n<h2>Network layers<\/h2>\r\nTo set up an image-to-image regression network, let's start with a set of layers <em>almost<\/em> right for our example.\r\n<h6><\/h6>\r\nComputer Vision Toolbox has the function <a href=\"https:\/\/www.mathworks.com\/help\/vision\/ref\/unetlayers.html\">unetLayers<\/a> that allows you to set up the layers of a semantic segmentation 
network (U-Net) quickly.\r\n<h6><\/h6>\r\n<pre>lgraph = unetLayers([40 40 3],3,'EncoderDepth',3);<\/pre>\r\nWe have to alter this slightly to fit our problem by adding an L2 loss layer:\r\nremove the last 2 layers and replace them with a regression layer.\r\n<pre>lgraph = lgraph.removeLayers('Softmax-Layer');\r\nlgraph = lgraph.removeLayers('Segmentation-Layer');\r\nlgraph = lgraph.addLayers(regressionLayer('name','regressionLayer'));\r\nlgraph = lgraph.connectLayers('Final-ConvolutionLayer','regressionLayer');<\/pre>\r\n<h6><\/h6>\r\nThe Deep Network Designer app (deepNetworkDesigner) will also remove and connect layers for you, as shown below.\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"600\" height=\"338\" class=\"alignnone size-full wp-image-1300\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/02\/networkDesigner.gif\" alt=\"\" \/>\r\n<h6><\/h6>\r\nSet the training parameters:\r\n<pre>maxEpochs = 100;\r\nepochIntervals = 1;\r\ninitLearningRate = 0.1;\r\nlearningRateFactor = 0.1;\r\nl2reg = 0.0001;\r\noptions = trainingOptions('sgdm', ...\r\n    'Momentum',0.9, ...\r\n    'InitialLearnRate',initLearningRate, ...\r\n    'LearnRateSchedule','piecewise', ...\r\n    'LearnRateDropPeriod',10, ...\r\n    'LearnRateDropFactor',learningRateFactor, ...\r\n    'L2Regularization',l2reg, ...\r\n    'MaxEpochs',maxEpochs, ...\r\n    'MiniBatchSize',miniBatchSize, ...\r\n    'GradientThresholdMethod','l2norm', ...\r\n    'Plots','training-progress', ...\r\n    'GradientThreshold',0.01);<\/pre>\r\n<h6><\/h6>\r\nand train:\r\n<pre>modelDateTime = datestr(now,'dd-mmm-yyyy-HH-MM-SS');\r\nnet = trainNetwork(patchds,lgraph,options);\r\nsave(['trainedNet-' modelDateTime '-Epoch-' num2str(maxEpochs*epochIntervals) ...\r\n    'ScaleFactors-' num2str(234) '.mat'],'net','options');<\/pre>\r\n<h6><\/h6>\r\n(...8 hours later...)\r\n<h6><\/h6>\r\nI came back this morning and\u2026 I have a fully trained network!\r\n<h6><\/h6>\r\nNow the quality may not be the best for 
deblurring images, because my main intention was to show the setup of the training images and the network. <em>But I have a network that really tries.<\/em>\r\n<h6><\/h6>\r\nShow the original image and the blurred image:\r\n<pre>testImages = imageDatastore('iaprtc12\/images\/40\/');\r\ntestImage = testImages.readimage(randi(400));\r\n\r\nLEN = 21;\r\nTHETA = 11;\r\nPSF = fspecial('motion', LEN, THETA);\r\n\r\nblurredImage = imfilter(testImage, PSF, 'conv', 'circular');\r\nimshow(blurredImage);\r\ntitle('Blurry Image');\r\n\r\nfigure; imshow(testImage);\r\ntitle('Original Image');<\/pre>\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"654\" height=\"453\" class=\"alignnone size-full wp-image-1336\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/02\/image2a.png\" alt=\"\" \/>\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"654\" height=\"453\" class=\"alignnone size-full wp-image-1338\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/02\/image2b.png\" alt=\"\" \/>\r\n<h6><\/h6>\r\n... and create a 'deblurred' image from the network:\r\n<h6><\/h6>\r\n<pre>Ideblurred = activations(net,blurredImage,'regressionLayer');\r\nfigure; imshow(Ideblurred)\r\nIapprox = rescale(Ideblurred);\r\nIapprox = im2uint8(Iapprox);\r\nimshow(Iapprox)\r\ntitle('Denoised Image')<\/pre>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"654\" height=\"453\" class=\"alignnone size-full wp-image-1340\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/02\/image2c.png\" alt=\"\" \/>\r\n\r\n<h6><\/h6>\r\n<em>Update: I know it says denoised rather than deblurred; I copied the code from another example and forgot to switch the title.<\/em>\r\n<h6><\/h6>\r\nKeep in mind, the quality of the network was not the point, though now I\u2019m very curious to keep working on and improving this network. That\u2019s all today! 
I hope you found this useful \u2013 I had a great time playing in MATLAB, and I hope you do too.\r\n\r\n<h6><\/h6><h6><\/h6>\r\nUPDATE: I changed a few training parameters and ran the network again. If you're planning on running this code, I would highly suggest training with these parameters: \r\n<pre>options = trainingOptions('adam','InitialLearnRate',1e-4,'MiniBatchSize',64,...\r\n        'Shuffle','never','MaxEpochs',50,...\r\n        'Plots','training-progress');\r\n<\/pre>\r\nThe results are much better:\r\n<img decoding=\"async\" loading=\"lazy\" width=\"700\" height=\"495\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/02\/denoisedFlower-1.png\" alt=\"\" class=\"alignnone size-full wp-image-1396\" \/>\r\n\r\n\r\n<script language=\"JavaScript\"> <!-- \r\n    function grabCode_a710f144b80042c592b9fe35aab1fc59() {\r\n        \/\/ Remember the title so we can use it in the new page\r\n        title = document.title;\r\n\r\n        \/\/ Break up these strings so that their presence\r\n        \/\/ in the Javascript doesn't mess up the search for\r\n        \/\/ the MATLAB code.\r\n        t1='a710f144b80042c592b9fe35aab1fc59 ' + '##### ' + 'SOURCE BEGIN' + ' #####';\r\n        t2='##### ' + 'SOURCE END' + ' #####' + ' a710f144b80042c592b9fe35aab1fc59';\r\n    \r\n        b=document.getElementsByTagName('body')[0];\r\n        i1=b.innerHTML.indexOf(t1)+t1.length;\r\n        i2=b.innerHTML.indexOf(t2);\r\n \r\n        code_string = b.innerHTML.substring(i1, i2);\r\n        code_string = code_string.replace(\/REPLACE_WITH_DASH_DASH\/g,'--');\r\n\r\n        \/\/ Use \/x3C\/g instead of the less-than character to avoid errors \r\n        \/\/ in the XML parser.\r\n        \/\/ Use '\\x26#60;' instead of '<' so that the XML parser\r\n        \/\/ doesn't go ahead and substitute the less-than character. 
\r\n        code_string = code_string.replace(\/\\x3C\/g, '\\x26#60;');\r\n\r\n        copyright = 'Copyright 2018 The MathWorks, Inc.';\r\n\r\n        w = window.open();\r\n        d = w.document;\r\n        d.write('<pre>\\n');\r\n        d.write(code_string);\r\n\r\n        \/\/ Add copyright line at the bottom if specified.\r\n        if (copyright.length > 0) {\r\n            d.writeln('');\r\n            d.writeln('%%');\r\n            if (copyright.length > 0) {\r\n                d.writeln('% _' + copyright + '_');\r\n            }\r\n        }\r\n\r\n        d.write('<\/pre>\\n');\r\n\r\n        d.title = title + ' (MATLAB code)';\r\n        d.close();\r\n    }   \r\n     --> <\/script><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\">Copyright 2018 The MathWorks, Inc.<br><a href=\"javascript:grabCode_a710f144b80042c592b9fe35aab1fc59()\"><span class=\"get_ml_code\">Get the MATLAB code <noscript>(requires JavaScript)<\/noscript><\/span><\/a><br><br>\r\n      <br><\/p><!--\r\na710f144b80042c592b9fe35aab1fc59 ##### SOURCE BEGIN #####\r\n\r\n%% My very own deep learning script for image-to-image regression\r\n% Reduce blur in images - starting from scratch\r\n% Main Points are using randomPatchExtractionDatastore\r\n% and UnetLayers to design the network.\r\n%% start by downloading a set of images\r\n% % FYI - downloading these images was a bear - I would highly recommend\r\n% % doing this once, and saving to local directory. 
\r\n% imagesDir = '.';\r\n% url = 'http:\/\/www-i6.informatik.rwth-aachen.de\/imageclef\/resources\/iaprtc12.tgz';\r\n% downloadIAPRTC12Data(url,imagesDir);\r\n%%\r\n% Our problem is going to be image deblurring.\r\n% There is currently no customized datastore for this problem, so we need to start from scratch.\r\nimagesDir = '.';\r\ntrainImagesDir = fullfile(imagesDir,'iaprtc12','images','02');\r\nexts = {'.jpg','.bmp','.png'};\r\ntrainImages = imageDatastore(trainImagesDir,'FileExtensions',exts);\r\n%% create the training set (helper function below)\r\n% you need a datastore for the input images\r\n% input for this network is the blurry image\r\n\r\nblurredDir = createTrainingSet(trainImages);\r\n\r\n% the output datastore is just trainImages (the clean originals)\r\n\r\nblurredImages = imageDatastore(blurredDir,'FileExtensions','.mat','ReadFcn',@matRead);\r\n\r\n%% Blog code: visualize\r\nii = randi(200);\r\nim_orig = trainImages.readimage(ii);\r\nim_blurred = blurredImages.readimage(ii);\r\n\r\nimshow(im_orig);\r\ntitle('Clean Image - Final Result');\r\nfigure; imshow(im_blurred);\r\ntitle('Blurred Image - Input');\r\n\r\n%% add data augmentation\r\naugmenter = imageDataAugmenter( ...\r\n    'RandRotation',@()randi([0,1],1)*90, ...\r\n    'RandXReflection',true);\r\n\r\n%% patch datastore time!\r\nminiBatchSize = 64;\r\npatchSize = [40 40];\r\npatchds = randomPatchExtractionDatastore(blurredImages,trainImages,patchSize, ...\r\n    'PatchesPerImage',64, ...\r\n    'DataAugmentation',augmenter);\r\npatchds.MiniBatchSize = miniBatchSize;\r\n\r\n%% training time!\r\n% define the network. This is up to the problem you're trying to solve. 
I'm\r\n% copying from another example here.\r\n\r\nlgraph = unetLayers([40 40 3],3,'encoderDepth',3);\r\nlgraph = lgraph.removeLayers('Softmax-Layer');\r\nlgraph = lgraph.removeLayers('Segmentation-Layer');\r\nlgraph = lgraph.addLayers(regressionLayer('name','regressionLayer'));\r\nlgraph = lgraph.connectLayers('Final-ConvolutionLayer','regressionLayer');\r\n%% OR use deep network designer here...\r\n% deepNetworkDesigner\r\n\r\n%% training options\r\nmaxEpochs = 100;\r\nepochIntervals = 1;\r\ninitLearningRate = 0.1;\r\nlearningRateFactor = 0.1;\r\nl2reg = 0.0001;\r\noptions = trainingOptions('sgdm', ...\r\n    'Momentum',0.9, ...\r\n    'InitialLearnRate',initLearningRate, ...\r\n    'LearnRateSchedule','piecewise', ...\r\n    'LearnRateDropPeriod',10, ...\r\n    'LearnRateDropFactor',learningRateFactor, ...\r\n    'L2Regularization',l2reg, ...\r\n    'MaxEpochs',maxEpochs ,...\r\n    'MiniBatchSize',miniBatchSize, ...\r\n    'GradientThresholdMethod','l2norm', ...\r\n    'Plots','training-progress', ...\r\n    'GradientThreshold',0.01);\r\n%% Train!!\r\n% this takes quite a while, (around 8 hours to complete training)\r\nmodelDateTime = datestr(now,'dd-mmm-yyyy-HH-MM-SS');\r\nnet = trainNetwork(patchds,lgraph,options);\r\nsave(['trainedVDSR-' modelDateTime '-Epoch-' num2str(maxEpochs*epochIntervals) 'ScaleFactors-' num2str(234) '.mat'],'net','options');\r\n\r\n\r\n%% Test the final images\r\ntestImages = imageDatastore('iaprtc12\/images\/40\/');\r\ntestImage = testImages.readimage(randi(400));\r\n\r\nLEN = 11;\r\nTHETA = 11;\r\nPSF = fspecial('motion', LEN, THETA);\r\nnoise_mean = 0;\r\n    noise_var = 0.0001;\r\nblurredImage = imfilter(testImage, PSF, 'conv', 'circular');\r\nblurred_noisy = imnoise(blurredImage, 'gaussian', ...\r\n                        noise_mean, noise_var);\r\nfigure(1);\r\nimshow(blurredImage);\r\ntitle('Blurry Image');\r\n\r\nfigure(2); imshow(testImage);\r\ntitle('Original Image');\r\n\r\n%\r\nIdeblurred = 
activations(net,blurredImage,'regressionLayer');\r\nfigure(3); imshow(Ideblurred)\r\nIapprox = rescale(Ideblurred);\r\nIapprox = im2uint8(Iapprox);\r\nimshow(Iapprox)\r\ntitle('Denoised Image')\r\n%%\r\nfunction [blurredDir] = createTrainingSet(imds)\r\n\r\nLEN = 11;\r\nTHETA = 11;\r\nPSF = fspecial('motion', LEN, THETA);\r\n\r\nwhile hasdata(imds)\r\n\r\n    [I,info] = read(imds);\r\n\r\n    blurredImage = imfilter(I, PSF, 'conv', 'circular');\r\n\r\n    [filePath, fileName, ~] = fileparts(info.Filename);\r\n\r\n    if ~isfolder([filePath filesep 'blurredImages2'])\r\n        mkdir([filePath filesep 'blurredImages2']);\r\n    end\r\n    extn = '.mat';\r\n\r\n    save([filePath filesep 'blurredImages2' filesep fileName extn],'blurredImage');\r\n\r\n    blurredDir = [filePath filesep 'blurredImages2'];\r\n\r\nend\r\nend\r\n\r\n\r\n##### SOURCE END ##### a710f144b80042c592b9fe35aab1fc59\r\n-->\r\n\r\n\r\n","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/02\/semanticseg2.png\" onError=\"this.style.display ='none';\" \/><\/div><p>Today I'd like to talk about the basic concepts of setting up a network to train on an image-to-image regression problem.\r\nThis demo came about for two reasons:\r\n\r\n \tThere are quite a few questions... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2019\/02\/21\/image-to-image-regression\/\">read more >><\/a><\/p>","protected":false},"author":156,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[5],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/1280"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/users\/156"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/comments?post=1280"}],"version-history":[{"count":42,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/1280\/revisions"}],"predecessor-version":[{"id":1382,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/1280\/revisions\/1382"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/media?parent=1280"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/categories?post=1280"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/tags?post=1280"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}