{"id":4321,"date":"2020-07-07T23:28:10","date_gmt":"2020-07-07T21:28:10","guid":{"rendered":"https:\/\/blogs.mathworks.com\/student-lounge\/?p=4321"},"modified":"2020-09-11T19:37:16","modified_gmt":"2020-09-11T17:37:16","slug":"yolov2-object-detection-data-labelling-to-neural-networks-in-matlab","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/student-lounge\/2020\/07\/07\/yolov2-object-detection-data-labelling-to-neural-networks-in-matlab\/","title":{"rendered":"YOLOv2 Object Detection: Data Labelling to Neural Networks in MATLAB"},"content":{"rendered":"<p>Today in this blog, we will talk about the complete workflow of Object Detection using Deep Learning. You will learn the step by step approach of Data Labeling, training a YOLOv2 Neural Network, and evaluating the network in MATLAB. The <a href=\"https:\/\/drive.google.com\/drive\/u\/0\/folders\/1bhohhPoZy03ffbM_rl8ZUPSvJ5py8rM-\">data used in this example<\/a> is from a RoboNation Competition team.<\/p>\n<h1><strong>I. <\/strong><strong>Data Pre-Processing<\/strong><\/h1>\n<p>The first step towards a data science problem is to prepare your data. Below are the few steps that you should perform to process your dataset.<\/p>\n<ol>\n<li>Download the dataset and its subfolder and add them to the MATLAB path.<\/li>\n<li>Resize the image\u2019s size to <em>416x416X3<\/em> to account for the YOLOv2 architecture, using the function <a href=\"https:\/\/www.mathworks.com\/help\/releases\/R2020a\/matlab\/ref\/imresize.html\"><strong><em>imresize<\/em><\/strong>.<\/a><\/li>\n<li>Split the complete dataset into train, validation, and test data, to avoid overfitting and optimize the training dataset accuracy.<\/li>\n<\/ol>\n<p><em>Note: The <a href=\"https:\/\/drive.google.com\/drive\/u\/0\/folders\/1bhohhPoZy03ffbM_rl8ZUPSvJ5py8rM-\">dataset provided<\/a> has already been resized and divided in folders to make it easier for sharing; these steps only need to be done if you are using your own data. 
Refer to the \u2018code files\u2019 folder at <a href=\"https:\/\/github.com\/mathworks-robotics\/deep-learning-for-object-detection-yolov2\">this GitHub repo<\/a> to check the code for it.<\/em><\/p>\n<h1><strong>II. <\/strong><strong>Data Labelling<\/strong><\/h1>\n<p>You need labelled images for a supervised learning approach. Hence the next step is to label the objects of interest in the dataset.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-4325 size-full\" src=\"https:\/\/blogs.mathworks.com\/racing-lounge\/files\/2020\/07\/Pic_1.png\" alt=\"screenshot\" width=\"975\" height=\"317\" \/><\/p>\n<h2>A. Create Ground Truth<\/h2>\n<p>With the <a href=\"https:\/\/www.mathworks.com\/help\/driving\/ug\/get-started-with-the-ground-truth-labeler.html\">Ground Truth Labeler app<\/a> or the <a href=\"https:\/\/www.mathworks.com\/help\/vision\/ref\/videolabeler-app.html\">Video Labeler app<\/a>, you can label the objects <a href=\"https:\/\/www.youtube.com\/watch?v=V2e0cygY9Vg&amp;t=16s\">by using the in-built algorithms of the app<\/a> or by <a href=\"https:\/\/www.youtube.com\/watch?v=Y36D1fJZkT0\">integrating your own custom algorithms within the app<\/a>. To create the ground truth in this example, we used the Ground Truth Labeler app, but you can achieve the same results and workflow with the Video Labeler app as well.<\/p>\n<p>Once you have labelled images, you export the ground truth as a ground truth data object for each of the train, test, and validation datasets. Next, you create the training data from the ground truth object by using the function <a href=\"https:\/\/www.mathworks.com\/help\/releases\/R2020a\/vision\/ref\/objectdetectortrainingdata.html\">objectDetectorTrainingData<\/a>. 
You feed this training data into your network.<\/p>\n<pre>trainingData = objectDetectorTrainingData(gTruthResizedTrain,'SamplingFactor',1,...\r\n\u00a0\u00a0\u00a0 'WriteLocation','TrainingData');<\/pre>\n<p>To learn how you can implement the above steps, check out the video linked below.<\/p>\n<p><iframe loading=\"lazy\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/g_Vj1ASBcYo?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen><\/iframe><\/p>\n<h1><strong>III. <\/strong><strong>Design &amp; Train YOLOv2 Network<\/strong><\/h1>\n<p>Now your data is ready. Let\u2019s talk about the neural network.<\/p>\n<p><strong>So, what is a YOLOv2 Network? &#8211; <\/strong><em><u>You only look once (YOLO)<\/u><\/em> is an object detection system targeted for real-time processing. It uses a single-stage object detection network, which is faster than two-stage deep learning object detectors such as regions with convolutional neural networks (Faster R-CNN).<\/p>\n<p>The YOLOv2 model runs a deep learning CNN on an input image to produce network predictions. The object detector decodes the predictions and generates bounding boxes.<\/p>\n<h2>A. Design YOLOv2 network layers<\/h2>\n<p>You can design a custom YOLOv2 model layer by layer from scratch. The model should always start with an input layer, followed by the detection subnetwork containing a series of convolution, batch normalization, and ReLU (Rectified Linear Unit) layers. 
These layers are then connected to MATLAB\u2019s inbuilt yolov2TransformLayer and yolov2OutputLayer.<\/p>\n<p><a href=\"https:\/\/www.mathworks.com\/help\/vision\/ref\/nnet.cnn.layer.yolov2transformlayer.html\">yolov2TransformLayer<\/a>\u00a0transforms the raw CNN output into a form required to produce object detections.\u00a0<a href=\"https:\/\/www.mathworks.com\/help\/vision\/ref\/nnet.cnn.layer.yolov2outputlayer.html\">yolov2OutputLayer<\/a>\u00a0defines the anchor box parameters and implements the loss function used to train the detector.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-4323 size-full\" src=\"https:\/\/blogs.mathworks.com\/racing-lounge\/files\/2020\/07\/pics_2.png\" alt=\"\" width=\"620\" height=\"131\" \/><\/p>\n<p>Following the above approach, you use the <strong>imageInputLayer<\/strong>\u00a0function to define the image input layer with a minimum image size (128x128x3 used here). Use your best judgement based on the dataset and objects that need to be detected.<\/p>\n<pre>inputLayer = imageInputLayer([128 128 3],'Name','input','Normalization','none');\r\nfilterSize = [3 3];<\/pre>\n<p>Next come the middle layers. 
Following the basic approach of the <a href=\"https:\/\/arxiv.org\/pdf\/1612.08242.pdf\">YOLO9000<\/a>\u00a0paper, use a repeated batch of <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/nnet.cnn.layer.convolution2dlayer.html?searchHighlight=Convolution2dLayer&amp;s_tid=doc_srchtitle\">Convolution2dLayer<\/a>, <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/nnet.cnn.layer.batchnormalizationlayer.html?searchHighlight=batchnormalizationlayer&amp;s_tid=doc_srchtitle\">Batch Normalization Layer<\/a>, <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/nnet.cnn.layer.relulayer.html?searchHighlight=relu%20layer&amp;s_tid=doc_srchtitle\">ReLU Layer<\/a>, and <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/nnet.cnn.layer.maxpooling2dlayer.html?s_tid=doc_ta\">Max Pooling Layer<\/a>.<\/p>\n<pre>middleLayers = [\r\n \u00a0\u00a0 convolution2dLayer(filterSize, 16, 'Padding', 1,'Name','conv_1',...\r\n \u00a0\u00a0 'WeightsInitializer','narrow-normal')\r\n \u00a0\u00a0 batchNormalizationLayer('Name','BN1')\r\n \u00a0\u00a0 reluLayer('Name','relu_1')\r\n \u00a0\u00a0 maxPooling2dLayer(2, 'Stride',2,'Name','maxpool1')\r\n \u00a0\u00a0 convolution2dLayer(filterSize, 32, 'Padding', 1,'Name', 'conv_2',...\r\n \u00a0\u00a0 'WeightsInitializer','narrow-normal')\r\n \u00a0\u00a0 batchNormalizationLayer('Name','BN2')\r\n \u00a0\u00a0 reluLayer('Name','relu_2')\r\n \u00a0\u00a0 maxPooling2dLayer(2, 'Stride',2,'Name','maxpool2')\r\n \u00a0\u00a0 convolution2dLayer(filterSize, 64, 'Padding', 1,'Name','conv_3',...\r\n \u00a0\u00a0 'WeightsInitializer','narrow-normal')\r\n \u00a0\u00a0 batchNormalizationLayer('Name','BN3')\r\n \u00a0\u00a0 reluLayer('Name','relu_3')\r\n \u00a0\u00a0 maxPooling2dLayer(2, 'Stride',2,'Name','maxpool3')\r\n \u00a0\u00a0 convolution2dLayer(filterSize, 128, 'Padding', 1,'Name','conv_4',...\r\n \u00a0\u00a0 'WeightsInitializer','narrow-normal')\r\n 
\u00a0\u00a0 batchNormalizationLayer('Name','BN4')\r\n \u00a0\u00a0 reluLayer('Name','relu_4')\r\n\u00a0\u00a0\u00a0 ];<\/pre>\n<p>At the end, combine the input &amp; middle layers and convert them into a layer graph object so that you can manipulate the layers. You will use this layer graph for assembling the final network in step C below.<\/p>\n<pre>lgraph = layerGraph([inputLayer; middleLayers]);<\/pre>\n<p>Another parameter you require is the number of classes. You can calculate it from your training data table, which contains one column of image file names followed by one column of boxes per class.<\/p>\n<pre>numClasses = size(trainingData,2)-1;<\/pre>\n<h2>B. Define Anchor boxes<\/h2>\n<p>Before assembling the final network, you need to understand the concept of anchor boxes in the YOLO architecture. Anchor boxes are a set of predefined bounding boxes of a certain height and width. They are defined to capture the scale and aspect ratio of specific object classes you want to detect and are typically chosen based on object sizes in your training datasets. You can define several anchor boxes, each for a different object size. The use of anchor boxes enables a network to detect multiple objects, objects of different scales, and overlapping objects. You can study details about the <a href=\"https:\/\/www.mathworks.com\/help\/vision\/ug\/anchor-box-basics.html\">basics of anchor boxes<\/a> here.<\/p>\n<p>The anchor boxes are selected based on the scale and size of objects in the training data. You can <a href=\"https:\/\/www.mathworks.com\/help\/vision\/examples\/estimate-anchor-boxes-from-training-data.html\">Estimate Anchor Boxes Using Clustering<\/a>\u00a0to determine a good set of anchor boxes based on the training data. Using this procedure, the anchor boxes for the dataset used in this example are:<\/p>\n<pre>Anchors = [43 59\r\n \u00a0\u00a0 18 22\r\n \u00a0\u00a0 23 29\r\n\u00a0\u00a0\u00a0 84 109];<\/pre>\n<h2>C. 
Assemble YOLOv2 network<\/h2>\n<p>The final step is to assemble all the above pieces of the network into a YOLOv2 architecture, using the function <a href=\"https:\/\/www.mathworks.com\/help\/vision\/ref\/yolov2layers.html?s_tid=doc_ta\">yolov2Layers<\/a>. This function adds an inbuilt subnetwork of YOLO layers along with yolov2TransformLayer and yolov2OutputLayer.<\/p>\n<pre>lgraph = yolov2Layers([128 128 3],numClasses,Anchors,lgraph,'relu_4');<\/pre>\n<p>&#8216;relu_4&#8217;\u00a0is the feature extraction layer. The features extracted from this layer are given as input to the YOLO v2 object detection subnetwork. You can specify any network layer except the fully connected layer as the feature layer.<\/p>\n<p>You can visualize the <strong>lgraph<\/strong>\u00a0using the <strong>network analyzer<\/strong> app.<\/p>\n<pre>analyzeNetwork(lgraph);<\/pre>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-4327 size-large\" src=\"https:\/\/blogs.mathworks.com\/racing-lounge\/files\/2020\/07\/pic_3-1024x1012.png\" alt=\"\" width=\"1024\" height=\"1012\" \/><\/p>\n<h2>D. Train the Network<\/h2>\n<p>You can train the network once you have your layers ready. You now want to work on your model\u2019s training options.<\/p>\n<p>For training a network, you always provide the algorithm with some <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/trainingoptions.html?s_tid=doc_ta\">training options<\/a>. These options control how the network learns. Changing these options can help you improve the network\u2019s performance.<\/p>\n<p><em>Learning rate, mini-batch size, and number of epochs<\/em> are some of the important training options to consider. 
These determine how fast your network learns and how many data samples it trains on in each round of training.<\/p>\n<p>For this example, based on the size of the dataset, I trained the network with the stochastic gradient descent with momentum (SGDM) solver for 80 epochs, with an initial learning rate of 0.001 and a mini-batch size of 16. I used a lower learning rate to give the network more time to learn, considering the size of the data, and adjusted the epochs and mini-batch size accordingly. You should modify the options based on your dataset.<\/p>\n<pre>options = trainingOptions('sgdm', ...\r\n \u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0'InitialLearnRate',0.001, ...\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 'Verbose',true,'MiniBatchSize',16,'MaxEpochs',80,...\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 'Shuffle','every-epoch','VerboseFrequency',50, ...\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 'DispatchInBackground',true,...\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 'ExecutionEnvironment','auto');<\/pre>\n<p>Once you have your training data, network, and training options, train the detector using the YOLOv2 training function &#8211; <a href=\"https:\/\/www.mathworks.com\/help\/vision\/ref\/trainyolov2objectdetector.html?s_tid=doc_ta\">trainYOLOv2ObjectDetector<\/a>.<\/p>\n<pre>[detectorYolo2, info] = trainYOLOv2ObjectDetector(trainingData,lgraph,options);<\/pre>\n<h2>E. 
Detect objects with the detector<\/h2>\n<p>Once you have the detector, you can check the results visually by running the detector on the images in the validation dataset.<\/p>\n<p>You can do so by first creating a table to hold the results.<\/p>\n<pre>results = table('Size',[height(ValidationData) 3],...\r\n \u00a0\u00a0 'VariableTypes',{'cell','cell','cell'},...\r\n\u00a0\u00a0\u00a0 'VariableNames',{'Boxes','Scores', 'Labels'});<\/pre>\n<p>Then initialize a Deployable Video Player to view the image stream.<\/p>\n<pre>depVideoPlayer = vision.DeployableVideoPlayer;<\/pre>\n<p>And then loop through all the images in the validation set.<\/p>\n<pre>for i = 1:height(ValidationData)\r\n\r\n \u00a0\u00a0 % Read the image\r\n\u00a0\u00a0\u00a0 I = imread(ValidationData.imageFilename{i});\r\n\r\n \u00a0\u00a0 % Run the detector.\r\n\u00a0\u00a0\u00a0 [bboxes,scores,labels] = detect(detectorYolo2,I);\r\n\r\n \u00a0\u00a0 % Annotate and display the image if any objects were detected.\r\n \u00a0\u00a0 if ~isempty(bboxes)\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 I = insertObjectAnnotation(I,'Rectangle',bboxes,cellstr(labels));\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 depVideoPlayer(I);\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 pause(0.1);\r\n\r\n\u00a0\u00a0\u00a0 end\u00a0\u00a0\u00a0\r\n\r\n \u00a0\u00a0 % Collect the results in the results table\r\n \u00a0\u00a0 results.Boxes{i} = floor(bboxes);\r\n \u00a0\u00a0 results.Scores{i} = scores;\r\n\u00a0\u00a0\u00a0 results.Labels{i} = labels;\r\n\r\nend<\/pre>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-4343 size-full\" src=\"https:\/\/blogs.mathworks.com\/racing-lounge\/files\/2020\/07\/yolo_small_final.gif\" alt=\"\" width=\"400\" height=\"400\" \/><\/p>\n<h2>F. 
Evaluate<\/h2>\n<p>Once you have a trained detector and have visually confirmed the detections on the validation data, you can compute numerical evaluation metrics and plot the results on the test data. Run the same detection loop as above over the test images to collect a results table for the test set.<\/p>\n<p>MATLAB offers the built-in functions <a href=\"https:\/\/www.mathworks.com\/help\/vision\/ref\/evaluatedetectionprecision.html\">evaluateDetectionPrecision<\/a> and <a href=\"https:\/\/www.mathworks.com\/help\/vision\/ref\/evaluatedetectionmissrate.html?s_tid=doc_ta\">evaluateDetectionMissRate<\/a> to evaluate precision metrics and miss rate metrics, respectively.<\/p>\n<pre>[ap, recall, precision] = evaluateDetectionPrecision(results, TestData(:,2:end),threshold);\r\n\r\n[am,fppi,missRate] = evaluateDetectionMissRate(results, TestData(:,2:end),threshold);<\/pre>\n<p>When it comes to the miss rate and precision, an important parameter is the overlap threshold. The threshold parameter specifies how much a bounding box returned by the detector must overlap the bounding box of the same object in the ground truth for the detection to count as correct. The overlap is calculated as the\u00a0<a href=\"https:\/\/en.wikipedia.org\/wiki\/Jaccard_index\">Intersection over Union (IoU) or Jaccard index<\/a>. As shown in the plots below for the same detection and ground truth data, changing the value of the threshold parameter drastically changes the value of the evaluation metric. 
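<\/p>\n<p>To get a feel for IoU, you can compute it for a pair of boxes with the Computer Vision Toolbox function bboxOverlapRatio, whose default ratio type is intersection over union (the box values below are made up for illustration):<\/p>\n<pre>% Boxes in [x y width height] form\r\ndetBox = [100 100 50 50];\u00a0 % hypothetical detector output\r\ngtBox\u00a0 = [110 105 50 50];\u00a0 % hypothetical ground truth box\r\niou = bboxOverlapRatio(detBox, gtBox)\u00a0 % default 'ratioType' is 'Union', i.e. IoU\r\n% intersection area 1800, union area 3200, so IoU = 0.5625<\/pre>\n<p>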
Pick an overlap threshold value that best suits your application, and keep in mind that a higher threshold means you are expecting your detection results to overlap a larger area of the ground truth.<\/p>\n<p><strong>Threshold value 0.7<\/strong><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-4329 size-full\" src=\"https:\/\/blogs.mathworks.com\/racing-lounge\/files\/2020\/07\/pic_5.png\" alt=\"\" width=\"560\" height=\"420\" \/><\/p>\n<p><strong>Threshold value 0.3<\/strong><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-4331 size-full\" src=\"https:\/\/blogs.mathworks.com\/racing-lounge\/files\/2020\/07\/pic_6.png\" alt=\"\" width=\"560\" height=\"420\" \/><\/p>\n<p>Check out the video below to learn how you can work through the above-described steps of designing and training a YOLOv2 network.<\/p>\n<p><iframe loading=\"lazy\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/xOvuQ6DY_4w?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen><\/iframe><\/p>\n<h1><strong>IV. <\/strong><strong>Import Python-based model<\/strong><\/h1>\n<p>Another approach for training the network is to import models developed in Python or other 3<sup>rd<\/sup>-party frameworks into MATLAB. 
One way is to follow the workflow below:<\/p>\n<p>Convert your Python model into an ONNX model &#8211;&gt; import the ONNX model into MATLAB<\/p>\n<p>To learn more about how you can import a pre-trained YOLOv2 ONNX model into MATLAB and train it on your custom dataset, check out the blog &amp; video below.<\/p>\n<p>Blog: <a href=\"https:\/\/towardsdatascience.com\/yolov2-object-detection-from-onnx-model-in-matlab-3bb25568aa15\">YOLOv2 Object Detection from ONNX Model in MATLAB<\/a><\/p>\n<p>Video: <a href=\"https:\/\/www.youtube.com\/watch?v=5bnIYH6P-vE&amp;list=PLn8PRpmsu08oLufaYWEvcuez8Rq7q4O7D&amp;index=46&amp;t=0s\">Import Pretrained Deep Learning Networks into MATLAB<\/a><\/p>\n<h1><strong>Summary<\/strong><\/h1>\n<p>Some key takeaways:<\/p>\n<ul>\n<li>You can use the <a href=\"https:\/\/www.mathworks.com\/help\/driving\/ug\/get-started-with-the-ground-truth-labeler.html\">Ground Truth Labeler app<\/a> or the <a href=\"https:\/\/www.mathworks.com\/help\/vision\/ref\/videolabeler-app.html\">Video Labeler app<\/a> to label your images, either using the apps\u2019 inbuilt algorithms or by importing your own custom algorithms. <a href=\"https:\/\/www.mathworks.com\/help\/vision\/ug\/choose-a-labeling-app.html\">Check out this doc page to choose the appropriate labeling tool for your application<\/a><\/li>\n<li>You can design a neural network using MATLAB\u2019s inbuilt layer functions<\/li>\n<li>The <a href=\"https:\/\/www.mathworks.com\/help\/vision\/ref\/yolov2layers.html?s_tid=doc_ta\">yolov2Layers<\/a> function adds a subnetwork of YOLO layers at the end of your own or a pretrained network<\/li>\n<li>You can import Python-based ONNX models into MATLAB and retrain them on your own dataset<\/li>\n<\/ul>\n<p>Hence, we learned that you can avoid the difficulty of working across different platforms by developing a complete Object Detection model in the single environment of MATLAB. 
Check out the next post to learn how you can deploy this model on your NVIDIA Jetson.<\/p>\n","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img src=\"https:\/\/blogs.mathworks.com\/student-lounge\/files\/2020\/07\/Pic_1.png\" class=\"img-responsive attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"screenshot\" decoding=\"async\" loading=\"lazy\" \/><\/div>\n<p>Today in this blog, we will talk about the complete workflow of Object Detection using Deep Learning. You will learn the step by step approach of Data Labeling, training a YOLOv2 Neural Network, and&#8230; <a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/student-lounge\/2020\/07\/07\/yolov2-object-detection-data-labelling-to-neural-networks-in-matlab\/\">read more >><\/a><\/p>\n","protected":false},"author":163,"featured_media":4325,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[365,14,11,12],"tags":[102,363,104,417,161,419],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/posts\/4321"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/users\/163"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/comments?post=4321"}],"version-history":[{"count":16,"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/posts\/4321\/revisions"}],"predecessor-version":[{"id":6627,"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/posts\/4321\/revisions\/6627"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/media\/4325"}],"wp:attachment":[{"href":"https:\/\/blogs.mat
hworks.com\/student-lounge\/wp-json\/wp\/v2\/media?parent=4321"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/categories?post=4321"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/tags?post=4321"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}