{"id":6931,"date":"2021-05-10T11:15:48","date_gmt":"2021-05-10T15:15:48","guid":{"rendered":"https:\/\/blogs.mathworks.com\/deep-learning\/?p=6931"},"modified":"2021-06-15T12:23:28","modified_gmt":"2021-06-15T16:23:28","slug":"semantic-segmentation-for-medical-imaging","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/deep-learning\/2021\/05\/10\/semantic-segmentation-for-medical-imaging\/","title":{"rendered":"Semantic Segmentation for Medical Imaging"},"content":{"rendered":"<em>The following post is\u00a0by Dr.\u00a0<a href=\"https:\/\/udayton.edu\/directory\/udri\/sensorsoftwaresystems\/narayanan-barath.php\">Barath Narayanan<\/a>,\u00a0University of Dayton Research Institute\u00a0(UDRI) with co-authors: Dr.\u00a0Russell C. Hardie, and Redha Ali.<\/em>\r\n<h6><\/h6>\r\n<table style=\"height: 161px;\" width=\"410\">\r\n<tbody>\r\n<tr>\r\n<td><img decoding=\"async\" loading=\"lazy\" width=\"293\" height=\"293\" class=\"alignnone size-full wp-image-6933\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/04\/image1.jpeg\" alt=\"\" \/><\/td>\r\n<td><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-6935 \" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/04\/image2-300x300.jpeg\" alt=\"\" width=\"291\" height=\"291\" \/><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<h6><\/h6>\r\nIn this blog, we apply Deep Learning based segmentation to skin lesions in dermoscopic images to aid in melanoma detection.\r\n<h6><\/h6>\r\n<strong>Affiliations:<\/strong>\r\n<h6><\/h6>\r\n*<a href=\"https:\/\/udayton.edu\/udri\/\">Sensors and Software Systems, University of Dayton Research Institute<\/a>, 300 College Park, Dayton, OH, 45469\r\n<h6><\/h6>\r\n**<a href=\"https:\/\/udayton.edu\/engineering\/departments\/electrical_and_computer\/index.php\">Department of Electrical and Computer Engineering, University of Dayton<\/a>, 300 College Park, Dayton, 
OH,\r\n45469\r\n<h6><\/h6>\r\n<h2>Background<\/h2>\r\n<h6><\/h6>\r\nSkin lesion segmentation is an important step in Computer-Aided Diagnosis (CAD) of melanoma. In this blog, we present a Convolutional Neural Network (CNN) based segmentation approach applied to skin lesions in dermoscopic images. Early detection and diagnosis of melanoma significantly increases one's survival rate.\r\n<h6><\/h6>\r\nPlease cite the following article if you're using any part of the code for your research.\r\n<h6><\/h6>\r\n[1] <a href=\"https:\/\/www.linkedin.com\/in\/redhaali\/\">Ali, R.<\/a>, <a href=\"https:\/\/udayton.edu\/directory\/engineering\/electrical_and_computer\/hardie_russell.php\">Hardie, R. C.<\/a>, <a href=\"https:\/\/udayton.edu\/directory\/udri\/sensorsoftwaresystems\/narayanan-barath.php\">Narayanan, B. N.<\/a>, &amp; De Silva, S. (2019, July). \"<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9058245\">Deep learning ensemble methods for skin lesion analysis towards melanoma detection<\/a>\". In 2019 IEEE National Aerospace and Electronics Conference (NAECON) (pp. 311-316). IEEE.\r\n<h6><\/h6>\r\nThe dataset utilized in this blog is taken from <a href=\"https:\/\/challenge.isic-archive.com\/data#2018\">ISIC 2018<\/a>. Instructions about the dataset are provided at the end of this post.\r\n<h6><\/h6>\r\n<h2>Load the Dataset and Resize<\/h2>\r\n<h6><\/h6>\r\n<table>\r\n<tbody>\r\n<tr>\r\n<td style=\"width: 80%; padding: 10px;\">Raw images are loaded using <span style=\"font-family: courier;\">imageDatastore<\/span>, a computationally efficient function for collecting image information. Load the ground truth masks using <span style=\"font-family: courier;\">pixelLabelDatastore<\/span>. The white region in the ground truth mask indicates the \"lesion\" class, and the rest of the image belongs to the \"background\" class. 
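As a quick sanity check (a minimal sketch, assuming the mask files are PNGs in the ground truth folder described at the end of this post), one can list the unique pixel values of a single mask:\r\n<pre><span class=\"comment\">% Sanity check: each mask should contain only 0 (background) and 255 (lesion)<\/span>\r\nmaskFiles=dir(fullfile('ISIC2018_Task1_Training_GroundTruth','*.png'));\r\nfirstMask=imread(fullfile(maskFiles(1).folder,maskFiles(1).name));\r\ndisp(unique(firstMask(:))');<\/pre>\r\n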
The function <span style=\"font-family: courier;\">pixelLabelImageDatastore<\/span> pairs each raw image with its corresponding ground truth mask. Let's visualize a few random images from the dataset for reference. Then, resize all images to 224 x 224 for the deep learning network.\r\n<h6><\/h6>\r\n<\/td>\r\n<td>\r\n\r\n<div id=\"attachment_6975\" style=\"width: 160px\" class=\"wp-caption alignnone\"><img aria-describedby=\"caption-attachment-6975\" decoding=\"async\" loading=\"lazy\" class=\"wp-image-6975 size-thumbnail\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/04\/ISIC_0000014_segmentation-150x150.png\" alt=\"\" width=\"150\" height=\"150\" \/><p id=\"caption-attachment-6975\" class=\"wp-caption-text\">Labeled image showing pixels as either background (black) or foreground \"lesion\" (white)<\/p><\/div><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<pre><span class=\"comment\">% Clear workspace<\/span>\r\nclear; close all; clc;\r\n\r\n<span class=\"comment\">% All images<\/span>\r\nimds=imageDatastore('ISIC2018_Task1-2_Training_Input','IncludeSubfolders',true);\r\n\r\n<span class=\"comment\">% Define class names and their corresponding IDs<\/span>\r\nclassNames=[\"Lesion\",\"Background\"];\r\nlabelIDs=[255,0];\r\n\r\n<span class=\"comment\">% Create a pixelLabelDatastore holding the ground truth pixel labels<\/span>\r\npxds=pixelLabelDatastore('ISIC2018_Task1_Training_GroundTruth',classNames,labelIDs);\r\n\r\n<span class=\"comment\">% Create a pixel label image datastore of all images<\/span>\r\npximds=pixelLabelImageDatastore(imds,pxds);\r\n\r\n<span class=\"comment\">% Number of images<\/span>\r\ntotal_num_images=length(pximds.Images);\r\n\r\n<span class=\"comment\">% Pick 4 random images to visualize<\/span>\r\nperm=randperm(total_num_images,4);\r\n\r\nfigure;\r\n<span class=\"comment\">% Visualize the images with mask boundaries<\/span>\r\nfor idx=1:length(perm)\r\n    \r\n    <span class=\"comment\">% Extract filename for the title<\/span>\r\n    [~,filename]=fileparts(pximds.Images{perm(idx)});\r\n    
subplot(2,2,idx);\r\n    imshow(imread(pximds.Images{perm(idx)}));\r\n    hold on;\r\n    visboundaries(imread(pximds.PixelLabelData{perm(idx)}),'Color','r');\r\n    title(sprintf('%s',filename),'Interpreter',\"none\");\r\nend<\/pre>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"466\" height=\"362\" class=\"alignnone size-full wp-image-6945\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/04\/figu1.png\" alt=\"\" \/>\r\n<pre><span class=\"comment\">% Desired image size<\/span>\r\nimageSize=[224 224 3];\r\n\r\n<span class=\"comment\">% Create a pixel label image datastore of all resized images<\/span>\r\npximdsResz=pixelLabelImageDatastore(imds,pxds,'OutputSize',imageSize);\r\n\r\n<span class=\"comment\">% Clear all variables except the necessary ones<\/span>\r\nclearvars -except pximdsResz classNames total_num_images imageSize\r\n\r\n<\/pre>\r\n<h2>Split the Dataset - Training, Validation and Testing<\/h2>\r\n<pre><span class=\"comment\">% Randomly select 100 images for testing from the dataset<\/span>\r\ntest_idx=randperm(total_num_images,100);\r\n\r\n<span class=\"comment\">% The rest of the indices are utilized for training and validation<\/span>\r\ntrain_valid_idx=setdiff(1:total_num_images,test_idx);\r\n\r\n<span class=\"comment\">% Randomly pick 100 images for validation from the training dataset<\/span>\r\nvalid_idx=train_valid_idx(randperm(length(train_valid_idx),100));\r\n\r\n<span class=\"comment\">% The rest of the indices are used for training<\/span>\r\ntrain_idx=setdiff(train_valid_idx,valid_idx);\r\n\r\n<span class=\"comment\">% Train dataset<\/span>\r\npximdsTrain=partitionByIndex(pximdsResz,train_idx);\r\n\r\n<span class=\"comment\">% Validation dataset<\/span>\r\npximdsValid=partitionByIndex(pximdsResz,valid_idx);\r\n\r\n<span class=\"comment\">% Test dataset<\/span>\r\npximdsTest=partitionByIndex(pximdsResz,test_idx);<\/pre>\r\n<h6><\/h6>\r\n<h2>Deep Learning Approach<\/h2>\r\n<h6><\/h6>\r\nDefine the CNN and the training parameters.\r\n<h6><\/h6>\r\nIn this blog, we study the performance of the DeepLab v3+ network. DeepLab v3+ is a CNN for semantic image segmentation. It utilizes an encoder-decoder architecture with dilated (atrous) convolutions and skip connections to segment images. In [1], we present an ensemble approach combining the U-Net and DeepLab v3+ networks. Here, we solely focus on the DeepLab v3+ network with the ResNet50 architecture. Feel free to change the hyperparameters and observe the performance.\r\n<h6><\/h6>\r\n<strong>Notes:<\/strong>\r\n<h6><\/h6>\r\n<ul>\r\n \t<li>Make sure to install the Deep Learning Toolbox Model for ResNet-50 Network support package through the Add-On Explorer.<\/li>\r\n \t<li>The input normalization might take about 5-10 minutes due to the resolution of the original images. Training time per epoch is about 10 minutes on an NVIDIA GeForce GTX 1070.<\/li>\r\n \t<li>You can also set the execution environment to 'multi-gpu' in the training options if you have access to more than one GPU.<\/li>\r\n<\/ul>\r\n<pre><span class=\"comment\">% Number of classes<\/span>\r\nnumClasses=length(classNames);\r\n\r\n<span class=\"comment\">% Network<\/span>\r\nlgraph=deeplabv3plusLayers(imageSize,numClasses,'resnet50');\r\n\r\n<span class=\"comment\">% Define the training options for the network<\/span>\r\noptions=trainingOptions('sgdm',...\r\n    'InitialLearnRate',0.03,...\r\n    'Momentum',0.9,...\r\n    'L2Regularization',0.0005,...\r\n    'MaxEpochs',20,...\r\n    'MiniBatchSize',32,...\r\n    'VerboseFrequency',20,...\r\n    'LearnRateSchedule','piecewise',... 
\r\n    'ExecutionEnvironment','gpu',...\r\n    'Shuffle','every-epoch',...\r\n    'ValidationData',pximdsValid,...\r\n    'ValidationFrequency',50,...\r\n    'ValidationPatience',4,...\r\n    'Plots','training-progress',...\r\n    'GradientThresholdMethod','l2norm',...\r\n    'GradientThreshold',0.05);\r\n\r\n<span class=\"comment\">% Train the network<\/span>\r\nnet=trainNetwork(pximdsTrain,lgraph,options);<\/pre>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1280\" height=\"646\" class=\"alignnone size-full wp-image-6947\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/04\/figu2.png\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<h2>Testing and Performance Analysis<\/h2>\r\n<h6><\/h6>\r\nNow, let's study the performance of the network on the test set in terms of the following metrics:\r\n<h6><\/h6>\r\n<ul>\r\n \t<li>Pixel classification accuracy: global and mean<\/li>\r\n \t<li>Intersection over Union (IoU): weighted and mean<\/li>\r\n \t<li>Normalized confusion matrix<\/li>\r\n<\/ul>\r\n\r\nIn [1], we also study the performance in terms of the Jaccard index and Dice coefficient.\r\n<h6><\/h6>\r\n<pre><span class=\"comment\">% Semantic segmentation of the test dataset using the trained network<\/span>\r\n[pxdspredicted]=semanticseg(pximdsTest,net,'WriteLocation',tempdir);\r\n\r\n<span class=\"comment\">% Evaluation<\/span>\r\nmetrics=evaluateSemanticSegmentation(pxdspredicted,pximdsTest);<\/pre>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1129\" height=\"342\" class=\"alignnone size-full wp-image-6949\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/04\/figu3.png\" alt=\"\" \/>\r\n<pre><span class=\"comment\">% Normalized confusion matrix<\/span>\r\nnormConfMatData=metrics.NormalizedConfusionMatrix.Variables;\r\nfigure\r\nh=heatmap(classNames,classNames,100*normConfMatData);\r\nh.XLabel='Predicted Class';\r\nh.YLabel='True Class';\r\nh.Title='Normalized Confusion Matrix (%)';<\/pre>\r\n<h6><\/h6>\r\n<img 
decoding=\"async\" loading=\"lazy\" width=\"707\" height=\"506\" class=\"alignnone size-full wp-image-6951\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/04\/figu4.png\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<h2>Visual Inspection<\/h2>\r\n<h6><\/h6>\r\nIn this section, we visually inspect the results by visualizing both the predicted and actual masks for a given image.\r\n<pre><span class=\"comment\"> % Number of Images<\/span>\r\nnum_test_images=length(pximdsTest.Images);\r\n\r\n<span class=\"comment\">% Pick any random 2 images<\/span>\r\nperm=randperm(num_test_images,2);\r\n\r\n<span class=\"comment\">% Visualize the images with Mask<\/span>\r\nfor idx=1:length(perm)\r\n    \r\n    <span class=\"comment\">% Extract filename for the title<\/span>\r\n    [~,filename]=fileparts(pximdsTest.Images{idx});\r\n    \r\n    <span class=\"comment\">% Read the original file and resize it for network purposes<\/span>\r\n    I=imread(pximdsTest.Images{perm(idx)});\r\n    I=imresize(I,[imageSize(1) imageSize(2)],'bilinear');\r\n    \r\n    figure;\r\n    image(I);\r\n    hold on;\r\n    \r\n   <span class=\"comment\"> % Read the actual mask and resize it for visualization<\/span>\r\n    actual_mask=imread(pximdsTest.PixelLabelData{perm(idx)});\r\n    actual_mask=imresize(actual_mask,[imageSize(1) imageSize(2)],'bilinear');\r\n    \r\n    <span class=\"comment\">% Ground Truth<\/span>\r\n    visboundaries(actual_mask,'Color','r');\r\n        \r\n    <span class=\"comment\">% Predicted by the Algorithm<\/span>\r\n    predicted_image=(uint8(readimage(pxdspredicted,perm(idx)))); % Values are 1 and 2\r\n    predicted_results=uint8(~(predicted_image-1)); % Conversion to binary and reverse the polarity to match with the labelIds\r\n    \r\n    <span class=\"comment\">% Predicted result<\/span>\r\n    visboundaries(predicted_results,'Color','g');\r\n    title(sprintf('%s Red- Actual, Green - Predicted',filename),'Interpreter',\"none\");\r\n    \r\n    
imwrite(mat2gray(predicted_results),sprintf('%s.png',filename));\r\n    \r\nend<\/pre>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"595\" height=\"471\" class=\"alignnone size-full wp-image-6953\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/04\/figu5.png\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"584\" height=\"455\" class=\"alignnone size-full wp-image-6955\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/04\/figu6.png\" alt=\"\" \/>\r\n<h2>Conclusions<\/h2>\r\nIn this blog, we have presented a simple deep learning-based segmentation approach applied to skin lesions in dermoscopic images to aid in melanoma detection. The segmentation algorithm using DeepLab v3+ with the ResNet50 architecture performed relatively well, with good IoU and pixel classification accuracy. Combining these results with other existing architectures could provide a boost in performance. Feel free to study the performance under different hyperparameter settings and architectures. In our paper [1], we fused the results of DeepLab v3+ with a U-Net architecture. Segmentation of skin lesions would serve as a valuable preprocessing step for classification algorithms for the detection of melanoma.\r\n<h2>Dataset Instructions<\/h2>\r\nPlease cite the following articles if you're using the dataset.\r\n<h6><\/h6>\r\n[2] Noel Codella, Veronica Rotemberg, Philipp Tschandl, M. Emre Celebi, Stephen Dusza, David Gutman, Brian Helba, Aadi Kalloo, Konstantinos Liopyris, Michael Marchetti, Harald Kittler, Allan Halpern: \u201cSkin Lesion Analysis Toward Melanoma Detection 2018: A Challenge Hosted by the International Skin Imaging Collaboration (ISIC)\u201d, 2018; https:\/\/arxiv.org\/abs\/1902.03368\r\n<h6><\/h6>\r\n[3] Tschandl, P., Rosendahl, C. &amp; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. 
Data 5, 180161 doi:10.1038\/sdata.2018.161 (2018).\r\n<h6><\/h6>\r\nDownload the <strong><a href=\"https:\/\/challenge.isic-archive.com\/data#2018\">ISIC 2018<\/a><\/strong> Task 1,2 Training Data (10.4 GB) and the Training Ground Truth (26 MB) for Task 1. After downloading the zip files, extract them into the respective folders (\"ISIC2018_Task1-2_Training_Input\" and \"ISIC2018_Task1_Training_GroundTruth\", as expected by the script). The dataset contains 2594 images in total. Note that we solely utilize \"Task 1 - Training Data\" to study the performance of the system, as its ground truth is publicly available.\r\n<h6><\/h6>\r\n<h2>Biography<\/h2>\r\n<h6><\/h6>\r\n&nbsp;\r\n<table>\r\n<tbody>\r\n<tr>\r\n<td style=\"width: 15%; padding: 10px;\"><img decoding=\"async\" loading=\"lazy\" width=\"320\" height=\"213\" class=\"alignnone size-full wp-image-6939\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/04\/image3.jpeg\" alt=\"\" \/><\/td>\r\n<td><strong>Barath Narayanan <\/strong>graduated with M.S. and Ph.D. degrees in Electrical Engineering from the <a href=\"https:\/\/udayton.edu\/\">University of Dayton<\/a> (UD) in 2013 and 2017, respectively. He currently holds a joint appointment as a Research Scientist at UDRI's Software Systems Group and as Adjunct Faculty for the ECE department at UD. He graduated with distinction from <a href=\"https:\/\/www.srmist.edu.in\/\">SRM University<\/a>, Chennai, India in 2012 with a Bachelor\u2019s degree in Electrical and Electronics Engineering. 
His research interests include deep learning, machine learning, computer vision, and pattern recognition.<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<h6><\/h6>\r\n<table>\r\n<tbody>\r\n<tr>\r\n<td style=\"width: 15%; padding: 10px;\"><img decoding=\"async\" loading=\"lazy\" width=\"199\" height=\"155\" class=\"alignnone size-full wp-image-6937\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/04\/image1.png\" alt=\"\" \/><\/td>\r\n<td><strong>Redha Ali<\/strong> received his B.S. in Computer Science and Information Technology from the College of Electronic Technology, Bani Walid, Libya, in 2012. He completed his M.S. in Electrical and Computer Engineering from the University of Dayton in 2016. His Master's thesis work and publication are in the field of image and video denoising. He is currently pursuing his Ph.D. research in medical imaging at the <a href=\"https:\/\/udayton.edu\/\">University of Dayton<\/a>. His applied research interests include medical image processing, deep learning, machine learning, computer vision, and video restoration and enhancement.<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<h6><\/h6>\r\n<table>\r\n<tbody>\r\n<tr>\r\n<td style=\"width: 15%; padding: 10px;\"><img decoding=\"async\" loading=\"lazy\" width=\"199\" height=\"192\" class=\"alignnone size-full wp-image-6941\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/04\/image4.jpeg\" alt=\"\" \/><\/td>\r\n<td><strong>Dr. Russell C. Hardie<\/strong> graduated Magna Cum Laude from <a href=\"https:\/\/www.loyola.edu\/\">Loyola University<\/a> in Baltimore, Maryland in 1988 with a B.S. degree in Engineering Science. He obtained M.S. and Ph.D. degrees in Electrical Engineering from the <a href=\"https:\/\/www.udel.edu\/\">University of Delaware<\/a> in 1990 and 1992, respectively. Dr. 
Hardie served as a Senior Scientist at Earth Satellite Corporation (Now MDA) in Maryland prior to his appointment at the <a href=\"https:\/\/udayton.edu\/\">University of Dayton<\/a> in 1993. He is currently a Full Professor in the Department of Electrical and Computer Engineering and holds a joint appointment with the Department of Electro-Optics and Photonics.<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<h6><\/h6>\r\n<h6><\/h6>\r\nHave a question or comment for the authors? Leave a comment below.","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/04\/image1.jpeg\" onError=\"this.style.display ='none';\" \/><\/div><p>The following post is\u00a0by Dr.\u00a0Barath Narayanan,\u00a0University of Dayton Research Institute\u00a0(UDRI) with co-authors: Dr.\u00a0Russell C. Hardie, and Redha Ali.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nIn this blog, we apply Deep... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2021\/05\/10\/semantic-segmentation-for-medical-imaging\/\">read more >><\/a><\/p>","protected":false},"author":156,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[9],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/6931"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/users\/156"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/comments?post=6931"}],"version-history":[{"count":31,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/6931\/revisions"}],"predecessor-version":[{"id":7424,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/6931\/revisions\/7424"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/media?parent=6931"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/categories?post=6931"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/tags?post=6931"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}