{"id":298,"date":"2018-06-22T17:59:52","date_gmt":"2018-06-22T17:59:52","guid":{"rendered":"https:\/\/blogs.mathworks.com\/deep-learning\/?p=298"},"modified":"2021-04-06T15:51:53","modified_gmt":"2021-04-06T19:51:53","slug":"deep-learning-in-action-part-1","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/deep-learning\/2018\/06\/22\/deep-learning-in-action-part-1\/","title":{"rendered":"Deep Learning in Action &#8211; part 1"},"content":{"rendered":"<span style=\"font-size: 16px\">\r\nHello Everyone! Allow me to quickly introduce myself. My name is <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/profile\/authors\/4758135-johanna-pingel\">Johanna<\/a>, and Steve has allowed me to take over the blog from time to time to talk about deep learning.\r\n<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">\r\nToday I\u2019d like to kick off a series called:\r\n<\/span>\r\n<p style=\"margin: 1% 13%;background-color: #86c5da;text-align: center\"><span style=\"color: #ffffff;font-size: 20px\">\u201cDeep Learning in Action:\r\n<\/span><span style=\"color: #ffffff;font-size: 16px\">Cool projects created at MathWorks<\/span><span style=\"color: #ffffff;font-size: 20px\">\u201d<\/span><\/p>\r\n\r\n&nbsp;\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">\r\nThis series aims to give you insight into what we\u2019re working on at MathWorks: I\u2019ll show some demos, give you access to the code, and maybe even post a video or two.\r\n<\/span>\r\n<span style=\"font-size: 16px\">\r\nToday\u2019s demo is called \"<strong>Pictionary<\/strong>\" and it\u2019s the first article in a series of posts, including:<\/span>\r\n<h6><\/h6>\r\n&nbsp;\r\n<ul>\r\n \t<li>3D Point Cloud Segmentation using CNNs<\/li>\r\n \t<li>GPU Coder<\/li>\r\n \t<li>Age Detection<\/li>\r\n \t<li>And maybe a few more!<\/li>\r\n<\/ul>\r\n&nbsp;\r\n\r\n<hr width=\"50%\" \/>\r\n\r\n<span style=\"color: #e67e22;font-size: 20px\"><strong>Demo: 
Pictionary<\/strong><\/span>\r\n<h6><\/h6>\r\n\r\n<span style=\"font-size: 14px\"><em>Pictionary refers to a game in which one person\/team draws an object and the other person\/team tries to guess what the object is.<\/em><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">The developer of the Pictionary demo is actually \u2026 me! This demo came about when a MathWorks developer posted on an internal message board:<\/span>\r\n\r\n<span style=\"font-size: 16px\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-450 size-full aligncenter\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2018\/06\/InternalPosting-1.png\" alt=\"\" width=\"693\" height=\"508\" \/>\r\nWe already had an <a href=\"https:\/\/www.mathworks.com\/help\/nnet\/examples\/create-simple-deep-learning-network-for-classification.html\">example<\/a> of handwritten digit classification with the MNIST dataset, but this was a unique spin on that concept. Thus, the idea of creating a Pictionary example was born.<\/span>\r\n<h6><\/h6>\r\n<span style=\"color: #e67e22;font-size: 20px\"><strong>Read the images in the dataset<\/strong><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">\r\nThe first challenge [and honestly, the hardest part of the example] was reading in the images. Each object category is stored as a file containing many drawings; for example, there\u2019s an \u201cant\u201d category with thousands of hand-drawn ants stored in a JSON file. 
\u00a0Each line of the file looks something like this:\r\n<\/span>\r\n<h6><\/h6>\r\n<div style=\"width: 80%;max-width: 100em;height: 8em;padding: 1em;margin: auto;overflow: auto\">{\"word\":\"ant\",\"countrycode\":\"US\",\"timestamp\":\"2017-03-27 00:14:57.31033 UTC\",\"recognized\":true,\"key_id\":\"5421013154136064\",\"drawing\":[[[27,17,16,21,34,50,49,34,23,17],[47,58,73,81,84,67,54,46,47,51]],[[22,0],[51,18]],[[41,46,43],[45,11,0]],[[53,65,64,69,91,119,135,148,159,158,149,126,87,68,62],[68,68,58,51,36,34,38,48,64,78,85,90,90,83,73]],[[161,175],[70,69]],[[180,177,176,187,206,226,244,250,250,245,233,207,188,180,180],[68,67,61,50,42,40,48,58,72,80,87,89,83,76,71]],[[73,61],[85,113]],[[95,94],[88,126]],[[140,157],[90,118]],[[199,201,208],[90,116,122]],[[234,242,255],[89,105,112]]]}<\/div>\r\n&nbsp;\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">\r\nCan you see the image? Me neither. The image is stored as a series of x,y connector points. If we pull the x,y points out of the file, we can see the drawing start to take shape.<\/span>\r\n<h6><\/h6>\r\n<table style=\"border: 2px solid black\">\r\n<tbody>\r\n<tr>\r\n<td style=\"border: 2px solid black;padding: 5px\"><strong>Stroke<\/strong><\/td>\r\n<td style=\"border: 2px solid black;padding: 5px\"><strong>X Values<\/strong><\/td>\r\n<td style=\"border: 2px solid black;padding: 5px\"><strong>Y Values<\/strong><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"border: 1px solid black;padding: 5px\">1<\/td>\r\n<td style=\"border: 1px solid black;padding: 5px\">27,17,16,21,34,50,49,34,23,17<\/td>\r\n<td style=\"border: 1px solid black;padding: 5px\">47,58,73,81,84,67,54,46,47,51<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"border: 1px solid black;padding: 5px\">2<\/td>\r\n<td style=\"border: 1px solid black;padding: 5px\">22,0<\/td>\r\n<td style=\"border: 1px solid black;padding: 5px\">51,18<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"border: 1px solid black;padding: 5px\">3<\/td>\r\n<td style=\"border: 1px solid black;padding: 
5px\">41,46,43<\/td>\r\n<td style=\"border: 1px solid black;padding: 5px\">45,11,0<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"border: 1px solid black;padding: 5px\">4<\/td>\r\n<td style=\"border: 1px solid black;padding: 5px\">53,65,64,69,91,119,135,148,159,158,149,126,87,68,62<\/td>\r\n<td style=\"border: 1px solid black;padding: 5px\">68,68,58,51,36,34,38,48,64,78,85,90,90,83,73<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n&nbsp;\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">\r\nThe idea of the file is to capture individual \u201cstrokes,\u201d i.e. what was drawn without lifting the pen. Let\u2019s take Stroke #1:<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">The X and Y values plotted on the image look like this:<\/span>\r\n<table>\r\n<tbody>\r\n<tr>\r\n<td>Full Image:<\/td>\r\n<td>Zoomed In:<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"border: 2px solid pink\">\r\n\r\n<div id=\"attachment_446\" style=\"width: 310px\" class=\"wp-caption alignleft\"><img aria-describedby=\"caption-attachment-446\" decoding=\"async\" loading=\"lazy\" class=\"wp-image-446 size-medium\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2018\/06\/points-300x278.png\" alt=\"\" width=\"300\" height=\"278\" \/><p id=\"caption-attachment-446\" class=\"wp-caption-text\">X,Y values from input file plotted in pink<\/p><\/div><\/td>\r\n<td style=\"border: 2px solid pink\">\r\n\r\n<div id=\"attachment_448\" style=\"width: 310px\" class=\"wp-caption alignnone\"><img aria-describedby=\"caption-attachment-448\" decoding=\"async\" loading=\"lazy\" class=\"wp-image-448 size-medium\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2018\/06\/zoomedInPoints-300x273.png\" alt=\"\" width=\"300\" height=\"273\" \/><p id=\"caption-attachment-448\" class=\"wp-caption-text\">Same image, just zoomed in<\/p><\/div><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<span style=\"font-size: 
16px\">\u00a0<\/span>\r\n\r\n&nbsp;\r\n\r\n&nbsp;\r\n\r\n&nbsp;\r\n\r\n&nbsp;\r\n\r\n&nbsp;\r\n\r\n&nbsp;\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">And then we play a quick game of \u201cconnect the dots\u201d and we get our first stroke resembling a drawing. Connecting the dots is fairly easy in MATLAB with a function called iptui.intline:<\/span>\r\n<pre>&gt;&gt; help iptui.intline\r\n[X, Y] = intline(X1, X2, Y1, Y2) computes an approximation to the line segment joining \r\n(X1, Y1) and (X2, Y2) with integer coordinates.<\/pre>\r\n<div id=\"attachment_428\" style=\"width: 310px\" class=\"wp-caption alignnone\"><img aria-describedby=\"caption-attachment-428\" decoding=\"async\" loading=\"lazy\" class=\"wp-image-428 size-medium\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2018\/06\/ant_drawaing_piece-300x278.png\" alt=\"\" width=\"300\" height=\"278\" \/><p id=\"caption-attachment-428\" class=\"wp-caption-text\">X,Y values plotted (pink) and \"strokes\" connecting them (yellow)<\/p><\/div>\r\n\r\n<span style=\"font-size: 16px\">\r\nWe do that for the remaining strokes, and we get:<\/span>\r\n\r\n<div id=\"attachment_438\" style=\"width: 310px\" class=\"wp-caption alignnone\"><img aria-describedby=\"caption-attachment-438\" decoding=\"async\" loading=\"lazy\" class=\"wp-image-438 size-medium\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2018\/06\/FullAnt-300x198.png\" alt=\"\" width=\"300\" height=\"198\" \/><p id=\"caption-attachment-438\" class=\"wp-caption-text\">The yellow coloring is just for visual emphasis. The actual images will have a black background and white drawing.<\/p><\/div>\r\n\r\n<span style=\"font-size: 16px\">\r\nFinally! A drawing slightly resembling an ant. 
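<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">\r\nPutting the pieces together \u2013 decode one line of the JSON file, then connect the dots of each stroke \u2013 looks roughly like this. Treat it as a sketch of the approach: the file name, the 256x256 canvas size, and the variable names are my choices for illustration, not code shipped with the demo.<\/span>\r\n<pre><span style=\"color: #7cb96e\">% sketch: read one drawing from the JSON file and rasterize its strokes<\/span>\r\nfid = fopen('ant.ndjson');         <span style=\"color: #7cb96e\">% file name assumed<\/span>\r\nrecord = jsondecode(fgetl(fid));   <span style=\"color: #7cb96e\">% one JSON object per line<\/span>\r\nfclose(fid);\r\n\r\nim = false(256);                   <span style=\"color: #7cb96e\">% canvas size assumed<\/span>\r\nfor k = 1:numel(record.drawing)\r\n    stroke = record.drawing{k};    <span style=\"color: #7cb96e\">% 2-by-N: row 1 = x values, row 2 = y values<\/span>\r\n    x = stroke(1,:) + 1;           <span style=\"color: #7cb96e\">% coordinates in the file are 0-based<\/span>\r\n    y = stroke(2,:) + 1;\r\n    for p = 1:numel(x)-1           <span style=\"color: #7cb96e\">% connect consecutive dots<\/span>\r\n        [xi,yi] = iptui.intline(x(p),x(p+1),y(p),y(p+1));\r\n        im(sub2ind(size(im),yi,xi)) = true;   <span style=\"color: #7cb96e\">% rows are y, columns are x<\/span>\r\n    end\r\nend\r\nimshow(im)<\/pre>\r\n<span style=\"font-size: 16px\">\r\nSince the strokes have different lengths, jsondecode should return the drawing field as a cell array, which is why each stroke is indexed with curly braces.<\/span>\r\n<span style=\"font-size: 16px\">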
<\/span>\r\n\r\n<span style=\"font-size: 16px\">\r\nNow that we can create images from these x,y points, we can wrap this in a function and quickly repeat it for all the ants in the file, and for multiple categories too.<\/span>\r\n\r\n&nbsp;\r\n\r\n&nbsp;\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">\r\nNow, this dataset assumes that people drew with a pencil, or something thin, since the lines are only 1 pixel thick. We can quickly change the thickness of the drawing with image processing tools, like image dilation. I imagined that people would be playing this on a whiteboard with markers, so training on thicker lines better matches that scenario. <\/span>\r\n\r\n&nbsp;\r\n<pre>larger_im = imdilate(im2,strel('disk',3));\r\n<\/pre>\r\n<span style=\"font-size: 16px\">And while we\u2019re cleaning things up, let\u2019s center the image too:<\/span>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"300\" height=\"167\" class=\"alignnone size-medium wp-image-436\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2018\/06\/editedAntImage-300x167.png\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">For this example, I pulled 5000 training images and 500 test images per category. There are many (many many!) 
more example images available in the files, so feel free to increase these numbers if you\u2019re so inclined.<\/span>\r\n<h6><\/h6>\r\n<span style=\"color: #e67e22;font-size: 20px\"><strong>Create and train the network\r\n<\/strong><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">\r\nNow that our dataset is ready to go, let\u2019s start training.\r\n<\/span>\r\n<span style=\"font-size: 16px\">\r\nHere\u2019s the structure of the network:\r\n<\/span>\r\n<pre>layers = [\r\n\u00a0\u00a0\u00a0 imageInputLayer([256 256 1])\r\n\r\n\u00a0\u00a0\u00a0 convolution2dLayer(3,16,<span style=\"color: #a020f0\">'Padding'<\/span>,1)\r\n\u00a0\u00a0\u00a0 batchNormalizationLayer\r\n\u00a0\u00a0\u00a0 reluLayer\r\n\r\n\u00a0\u00a0\u00a0 maxPooling2dLayer(2,<span style=\"color: #a020f0\">'Stride'<\/span>,2)\r\n\r\n\u00a0\u00a0\u00a0 convolution2dLayer(3,32,<span style=\"color: #a020f0\">'Padding'<\/span>,1)\r\n\u00a0\u00a0\u00a0 batchNormalizationLayer\r\n\u00a0\u00a0\u00a0 reluLayer\r\n\r\n\u00a0\u00a0\u00a0 maxPooling2dLayer(2,<span style=\"color: #a020f0\">'Stride'<\/span>,2)\r\n\r\n\u00a0\u00a0\u00a0 convolution2dLayer(3,64,<span style=\"color: #a020f0\">'Padding'<\/span>,1)\r\n\u00a0\u00a0\u00a0 batchNormalizationLayer\r\n\u00a0\u00a0\u00a0 reluLayer\r\n\r\n\u00a0\u00a0\u00a0 fullyConnectedLayer(5)\r\n\u00a0\u00a0\u00a0 softmaxLayer\r\n\u00a0\u00a0\u00a0 classificationLayer];<\/pre>\r\n<span style=\"font-size: 16px\">\r\nHow did I pick this specific structure? Glad you asked. I stole from other people who had already created a network. 
This specific network structure is 99% accurate on the MNIST dataset, so I figured it was a good starting point for these handwritten drawings.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">\r\nHere\u2019s a handy plot created with this code:<\/span>\r\n<pre>lgraph = layerGraph(layers);\r\nplot(lgraph)\r\n<\/pre>\r\n<h4>\u00a0<img decoding=\"async\" loading=\"lazy\" width=\"88\" height=\"300\" class=\"alignnone size-medium wp-image-452\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2018\/06\/boringLayerGraph-88x300.png\" alt=\"\" \/><\/h4>\r\n<span style=\"font-size: 16px\">\r\nI\u2019ll admit, this is a fairly boring layer graph since it\u2019s all a straight line, but if you were working with <a href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2018\/02\/14\/create-a-simple-dag-network\/\">DAG networks<\/a>, you could easily see the connections of a complicated network.<\/span>\r\n<h4><\/h4>\r\n<span style=\"font-size: 16px\">\r\nI trained this with a zippy NVIDIA P100 GPU in roughly 20 minutes. The test images set aside earlier give an accuracy of roughly 90%. For an autonomous driving scenario, I would need to go back and refine the algorithm. For a game of Pictionary, this is a perfectly acceptable number in my opinion.<\/span>\r\n<h6><\/h6>\r\n<pre>predLabelsTest = net.classify(uint8(imgDataTest));\r\ntestAccuracy = sum(predLabelsTest == labelsTest') \/ numel(labelsTest)<\/pre>\r\n<blockquote>testAccuracy = 0.8996<\/blockquote>\r\n<h6><\/h6>\r\n<span style=\"color: #e67e22;font-size: 20px\"><strong>Debug the network<\/strong><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">\r\nLet\u2019s drill down into the accuracy to give more insight into the trained network.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">\r\nOne way to look at the predictions for each category is a confusion matrix. A very simple option is to build one as a heatmap. 
This works similarly to a confusion matrix \u2013 assuming you have the same number of images in each category \u2013 which we do: 500 test images per category.<\/span>\r\n<pre><span style=\"color: #7cb96e\">% compare the predicted labels against the actual labels<\/span>\r\n\r\ntt = table(predLabelsTest, categorical(labelsTest'),'VariableNames',{'Predicted','Actual'});\r\nfigure('name','confusion matrix'); heatmap(tt,'Actual','Predicted');<\/pre>\r\n<span style=\"font-size: 16px\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-434 size-full\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2018\/06\/confusion.png\" alt=\"\" width=\"840\" height=\"630\" \/>\r\nOne thing that pops out is that ants and wristwatches tend to confuse the classifier. This seems like reasonable confusion. If we were confusing wine glasses with ants, then we might have a problem.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">\r\nThere are two possible sources of error in our Pictionary classifier: <\/span>\r\n<ol>\r\n \t<li><span style=\"font-size: 16px\">The person <span style=\"text-decoration: underline\">guessing<\/span> can\u2019t identify the object, or <\/span><\/li>\r\n \t<li><span style=\"font-size: 16px\">The person <span style=\"text-decoration: underline\">drawing<\/span> doesn\u2019t describe the object well enough.<\/span><\/li>\r\n<\/ol>\r\n&nbsp;\r\n\r\n<span style=\"font-size: 16px\">If we ask a classifier to be 100% accurate, we are assuming that the person drawing never does a poor job for any of the object categories. 
<em>Highly unlikely.<\/em><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">\r\nDrilling down even further, let\u2019s look at the 67 ants that were misclassified.<\/span>\r\n<pre><span style=\"color: #7cb96e\">% pick out the images where the predicted label doesn't match the actual<\/span>\r\n\r\nidx = find(predLabelsTest ~= labelsTest');\r\nloser_ants = idx(idx &lt;= 500);\r\n\r\nmontage(imgDataTest(:,:,1,loser_ants));<\/pre>\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-432 size-large\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2018\/06\/ants-1024x851.png\" alt=\"\" width=\"1024\" height=\"851\" \/>\r\nI\u2019m going to go out on a limb and say that at least 18 of these ants shouldn\u2019t be called ants at all. In defense of my classifier, let\u2019s say that you were playing Pictionary, and someone drew this: <\/span>\r\n<pre><span style=\"color: #7cb96e\">% select an image from the bunch <\/span>\r\nii = 169;\r\nimg = squeeze(uint8(imgDataTest(:,:,1,ii)));\r\n\r\nactualLabel = labelsTest(ii);\r\npredictedLabel = net.classify(img);\r\n\r\nimshow(img,[]);\r\ntitle([<span style=\"color: #a020f0\">'Predicted: ' <\/span>char(predictedLabel) <span style=\"color: #a020f0\">', Actual: ' <\/span> char(actualLabel)])<\/pre>\r\n<h6>\u00a0<img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-430 size-full\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2018\/06\/ant_umbrella.png\" alt=\"\" width=\"476\" height=\"391\" \/><\/h6>\r\n<span style=\"font-size: 16px\">\r\nWhat are the chances you would call that an ant?? 
If a computer classifies this as an umbrella, is that really an error??<\/span>\r\n<h6><\/h6>\r\n<span style=\"color: #e67e22;font-size: 20px\"><strong>Try the classifier on new images<\/strong><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">\r\nNow, the whole point of this example was to see how the classifier handles new images in real life. I drew an ant...<\/span>\r\n\r\n<div id=\"attachment_442\" style=\"width: 310px\" class=\"wp-caption alignnone\"><img aria-describedby=\"caption-attachment-442\" decoding=\"async\" loading=\"lazy\" class=\"wp-image-442 size-medium\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2018\/06\/my_ant-300x219.png\" alt=\"\" width=\"300\" height=\"219\" \/><p id=\"caption-attachment-442\" class=\"wp-caption-text\">My very own ant drawing<\/p><\/div>\r\n\r\n<span style=\"font-size: 16px\">...and the trained model can now tell me what it thinks it is. Let\u2019s throw in a confidence rating too.<\/span>\r\n<h6><\/h6>\r\n<pre>s = snapshot(webcam); \r\nmyDrawing = segmentImage(s(:,:,2));\r\n\r\nmyDrawing = imresize(myDrawing,[256,256]); <span style=\"color: #7cb96e\">% resize to the 256x256 input size the network expects<\/span>\r\n\r\n[predval,conf] = net.classify(uint8(myDrawing));\r\nimshow(myDrawing);\r\ntitle(string(predval)+ sprintf(<span style=\"color: #a020f0\">' %.1f%%'<\/span>,max(conf)*100));<\/pre>\r\n&nbsp;\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"363\" height=\"362\" class=\"alignnone size-full wp-image-444\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2018\/06\/myAnt.png\" alt=\"\" \/>\r\n\r\n<span style=\"font-size: 16px\">\r\nI used a segmentation function, created with image processing tools, that finds the object I drew and inverts the result so the drawing is white on a black background, matching the training images.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">\r\nLooks like my Pictionary skills are good enough!<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 16px\">\r\nThis code is on <a 
href=\"https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/66968-image-processing-and-computer-vision-with-matlab--code-examples\">FileExchange<\/a>, and you can see this example in a\u00a0<a href=\"https:\/\/www.mathworks.com\/videos\/image-processing-and-computer-vision-with-matlab-1524489939916.html\">webinar<\/a> I recorded with my colleague Gabriel Ha.<\/span>\r\n<span style=\"font-size: 16px\">\r\nLeave me a comment below if you have any questions. Join me next time when I talk to a MathWorks engineer about using CNNs for Point Cloud segmentation!<\/span>\r\n<h6><\/h6>","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2018\/06\/InternalPosting-1.png\" onError=\"this.style.display ='none';\" \/><\/div><p>\r\nHello Everyone! Allow me to quickly introduce myself. My name is Johanna, and Steve has allowed me to take over the blog from time to time to talk about deep learning.\r\n\r\n\r\n\r\nToday I\u2019d like to kick... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2018\/06\/22\/deep-learning-in-action-part-1\/\">read more >><\/a><\/p>","protected":false},"author":156,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[9],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/298"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/users\/156"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/comments?post=298"}],"version-history":[{"count":47,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/298\/revisions"}],"predecessor-version":[{"id":481,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/298\/revisions\/481"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/media?parent=298"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/categories?post=298"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/tags?post=298"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}