{"id":5049,"date":"2020-09-30T09:31:55","date_gmt":"2020-09-30T13:31:55","guid":{"rendered":"https:\/\/blogs.mathworks.com\/deep-learning\/?p=5049"},"modified":"2021-04-06T15:45:44","modified_gmt":"2021-04-06T19:45:44","slug":"new-deep-learning-examples","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/deep-learning\/2020\/09\/30\/new-deep-learning-examples\/","title":{"rendered":"New Deep Learning Examples"},"content":{"rendered":"<span style=\"font-size: 14px;\">There are over 35 new deep learning related examples in the latest release. That\u2019s a lot to cover, and the release notes can get a bit dry, so I brought in reinforcements. I asked members of the documentation team to share a new example they created and answer a few questions about why they\u2019re excited about it. Feel free to ask questions in the comments section below!<\/span>\r\n<!--more-->\r\n<h6><\/h6>\r\n<span style=\"font-size: 19px; color: #c45c06;\">New Deep Network Designer Example<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">Deep Network Designer (DND) has been Deep Learning Toolbox\u2019s flagship app since 2018. Last release (20a) introduced training inside the app, but you could only train for image classification. In 20b training is massively expanded to cover many more deep learning applications.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\"><strong>The new feature<\/strong> allows for importing and visualizing new data types, which enables workflows such as time-series, image-to-image regression, and semantic segmentation. 
<a href=\"http:\/\/www.mathworks.com\/help\/deeplearning\/ug\/create-simple-semantic-segmentation-network-in-deep-network-designer.html\"><strong><span style=\"text-decoration: underline;\">This example<\/span><\/strong><\/a> shows how to train a semantic segmentation network using DND.<\/span>\r\n<h6><\/h6>\r\n<table width=\"100%\">\r\n<tbody>\r\n<tr>\r\n<td style=\"padding: 20px;\">\r\n<p style=\"text-align: center;\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-5063 size-full aligncenter\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2020\/09\/DNDExample1a.png\" alt=\"\" width=\"570\" height=\"507\" \/><\/p>\r\n<p style=\"text-align: center;\">Deep Network Designer visualization of input data<\/p>\r\n<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\"><em>Give us the highlights:<\/em> There is much more flexibility in the app this release; you can import any datastore and train any network that works with <span style=\"font-family: courier;\">trainNetwork<\/span>. This opens up time-series training and image-to-image regression workflows. You can also visualize the input data directly in the app prior to training. Although this is a simple example, it walks through each of these steps and trains fairly quickly.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\"><em>Any challenges when creating the example?<\/em>\u00a0Not challenges per se, but this example touches on a lot of components: semantic segmentation, image processing, computer vision, and how to use and explain it all within the context of the app. 
Also, the algorithm uses <a href=\"https:\/\/www.mathworks.com\/help\/vision\/ref\/unetlayers.html\"><span style=\"font-family: courier;\">unetLayers<\/span><\/a>, so I got to read up on that too.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\"><em>What else?<\/em><\/span>\r\n<ul>\r\n \t<li><span style=\"font-size: 14px;\">I also created a \u201c<a href=\"http:\/\/www.mathworks.com\/help\/deeplearning\/ug\/import-data-into-deep-network-designer.html\">concept page<\/a>\u201d, which was a result of trying and testing the app. I wanted to offer data that you can immediately use with the app. A lot of other examples require cleaning and preprocessing, so instead, I wanted to deliver out-of-the-box data that you can just run. If you start with a blank MATLAB session, you can run any of these code snippets (time series, image, pixels, etc.), which can be used as a starting point for the DND workflow. <\/span><span style=\"font-size: 14px;\">This is the code for quickly importing the digits data:<\/span><\/li>\r\n<\/ul>\r\n<pre>dataFolder = fullfile(toolboxdir('nnet'),'nndemos','nndatasets','DigitDataset');\r\n\r\nimds = imageDatastore(dataFolder, 'IncludeSubfolders',true, ...\r\n    'LabelSource','foldernames');\r\n\r\nimageAugmenter = imageDataAugmenter('RandRotation',[1,2]);\r\naugimds = augmentedImageDatastore([28 28],imds,'DataAugmentation',imageAugmenter);\r\n\r\naugimds = shuffle(augimds);<\/pre>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">Use these code snippets as a starting point and try adapting them for your own data set and application!<\/span>\r\n<h6><\/h6>\r\n<ul>\r\n \t<li><span style=\"font-size: 14px;\">There\u2019s also an image-to-image regression example I created for those interested in a semantic segmentation alternative. 
<a href=\"http:\/\/www.mathworks.com\/help\/deeplearning\/ug\/image-to-image-regression-in-deep-network-designer.html\"><strong>This example<\/strong><\/a> also walks through the complete DND workflow, using image deblurring.<\/span><\/li>\r\n<\/ul>\r\n<span style=\"font-size: 14px;\"><em>Besides your work, any recommendations for other examples to try in 20b?<\/em><\/span>\r\n<ul>\r\n \t<li><span style=\"font-size: 14px;\">LIME is pretty cool: it\u2019s interesting (and useful) to see what a network has actually learned (<em>we cover that in the next featured example below!<\/em>)<\/span><\/li>\r\n \t<li><span style=\"font-size: 14px;\">There\u2019s a new style transfer example <a href=\"https:\/\/github.com\/matlab-deep-learning\/artistic-style-transfer\"><strong>available on GitHub<\/strong><\/a>\u00a0too!<\/span><\/li>\r\n<\/ul>\r\n<span style=\"font-size: 14px;\"><em>Thanks to Jess for the insight and recommendations! <a href=\"http:\/\/www.mathworks.com\/help\/deeplearning\/ug\/create-simple-semantic-segmentation-network-in-deep-network-designer.html\">The example is here<\/a>. <\/em><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 19px; color: #c45c06;\">Visualize predictions with imageLIME<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">Grad-CAM and occlusion sensitivity have been available in Deep Learning Toolbox for a release or two to visualize the areas of the data that make the network predict a specific class. This release features a new visualization technique called LIME. <a href=\"http:\/\/www.mathworks.com\/help\/deeplearning\/ug\/understand-network-predictions-using-lime.html\"><strong>This new example<\/strong><\/a> uses <strong>imageLIME<\/strong> for visualizations.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\"><em>What is LIME - in 60 seconds or less?<\/em>\u00a0LIME stands for local interpretable model-agnostic explanation, and it\u2019s slightly more complicated than the Grad-CAM algorithm to implement. 
You provide a single data point to the LIME algorithm, and the algorithm perturbs the data to generate a bunch of sample data. The algorithm then uses those samples to fit a simple regression model that has the same classification behavior as the deep network. Because of the perturbations, you can see which parts of the data are most important for the class prediction. Model-agnostic means it doesn\u2019t matter how you got your original model; the technique simply shows scores localized to changes in the initial piece of data.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\"><strong>The main thing to keep in mind is<\/strong>\u00a0LIME computes maps of feature importance to show you the areas of the image that have the most influence on the class score. These regions are essential for that particular class prediction, because if they are removed in the perturbed images, the score goes down.<\/span>\r\n<h6><\/h6>\r\n<p style=\"text-align: center;\"><img decoding=\"async\" loading=\"lazy\" width=\"300\" height=\"249\" class=\"size-medium wp-image-5067 alignright\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2020\/09\/limeImage2-300x249.png\" alt=\"\" \/><\/p>\r\n\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\"><em>As an aside, I have to ask: What\u2019s with the image of the dog?<\/em> We use it a lot for deep learning: it ships with MATLAB and it\u2019s nice to show! The dog\u2019s name is Sherlock and she belongs to a developer at MathWorks. We decided to use this image for the example because we use the same image with occlusion sensitivity and Grad-CAM. Using the same image for all visualizations can help you compare and highlight the similarities or differences between the algorithms. In fact, in the example we <a href=\"http:\/\/www.mathworks.com\/help\/deeplearning\/ug\/understand-network-predictions-using-lime.html#UnderstandNetworkPredictionsUsingLIMEExample-4\"><strong>compare LIME with Grad-CAM<\/strong><\/a>. 
<\/span>\r\n<h6><\/h6>\r\nYou can visualize all 3 algorithms side-by-side below:\r\n<table width=\"720\">\r\n<tbody>\r\n<tr>\r\n<td style=\"border: 1px solid black; text-align: center;\" width=\"33%\"><span style=\"font-size: 15px;\">Image LIME<\/span><\/td>\r\n<td style=\"border: 1px solid black; text-align: center;\"><span style=\"font-size: 15px;\">Grad-CAM<\/span><\/td>\r\n<td style=\"border: 1px solid black; text-align: center;\"><span style=\"font-size: 15px;\">Occlusion Sensitivity<\/span><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"border: 1px solid black; padding: 10px;\"><img decoding=\"async\" loading=\"lazy\" width=\"321\" height=\"328\" class=\"alignnone size-full wp-image-5147\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2020\/09\/sidebyside1a-1.png\" alt=\"\" \/><\/td>\r\n<td style=\"border: 1px solid black; padding: 10px;\"><img decoding=\"async\" loading=\"lazy\" width=\"220\" height=\"218\" class=\"alignnone size-full wp-image-5149\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2020\/09\/sidebyside2a-1.png\" alt=\"\" \/><\/td>\r\n<td style=\"border: 1px solid black; padding: 10px;\"><img decoding=\"async\" loading=\"lazy\" width=\"219\" height=\"215\" class=\"alignnone size-full wp-image-5151\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2020\/09\/sidebyside3a-1.png\" alt=\"\" \/><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"border: 1px solid black; text-align: center;\">Link to <a href=\"http:\/\/www.mathworks.com\/help\/deeplearning\/ug\/understand-network-predictions-using-lime.html\">example<\/a><\/td>\r\n<td style=\"border: 1px solid black; text-align: center;\">Link to <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ug\/gradcam-explains-why.html\">example<\/a><\/td>\r\n<td style=\"border: 1px solid black; text-align: center;\">Link to <a href=\"http:\/\/www.mathworks.com\/help\/deeplearning\/ref\/occlusionsensitivity.html\">example<\/a><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<span 
style=\"font-size: 14px;\">A side-by-side comparison of the available visualization algorithms. All algorithms can show heat maps, but LIME can also show the nice superpixel regions (as seen above).\u00a0<\/span>\r\n\r\n<span style=\"font-size: 14px;\">LIME results can also be plotted by showing only the few most important features:<\/span>\r\n<h6><\/h6>\r\n<p style=\"text-align: center;\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-5167 size-full\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2020\/09\/UnderstandNetworkPredictionsUsingLIMEExample_05.png\" alt=\"\" width=\"404\" height=\"317\" \/><\/p>\r\n\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\"><em>Any challenges when creating the example?<\/em> This example is a nice continuation of the other visualization work that came before, so it was fairly straightforward to create. The only challenge was deciding on the name of the function: Should we call it imageLIME, just LIME, or even deepLIME? We debated this for a while.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\"><em>What else?<\/em><\/span>\r\n<ul>\r\n \t<li><span style=\"font-size: 14px;\">As the name implies, the function <span style=\"font-family: courier;\">imageLIME<\/span> is used primarily for images. However, it works with any network that uses an <span style=\"font-family: courier;\">imageInputLayer<\/span>, so it can work with time-series, spectral, or <span style=\"text-decoration: underline;\"><a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ug\/classify-text-data-using-convolutional-neural-network.html\">even text data<\/a><\/span><\/span>.<\/li>\r\n \t<li><span style=\"font-size: 14px;\">Here\u2019s a really fun example my colleague created as an extension of this example. 
She showed the algorithm a picture of many zoo animals, and then used LIME to home in on a particular animal.<\/span><\/li>\r\n<\/ul>\r\n<h2><\/h2>\r\n<table>\r\n<tbody>\r\n<tr>\r\n<td width=\"50%\" style= \"padding-top:10px;\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-5059 size-full\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2020\/09\/GettyImages-984312730.jpg\" alt=\"\" width=\"724\" height=\"483\" \/><\/td>\r\n<td style =\"padding-left:10px; padding-bottom:20px;\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-5109 size-medium\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2020\/09\/gettyTigerOutput2-300x300.png\" alt=\"\" width=\"300\" height=\"300\" \/><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<span style=\"font-size: 14px;\">This example makes LIME work almost like a semantic segmentation network for animal detection!<\/span>\r\n<pre>net = googlenet;\r\ninputSize = net.Layers(1).InputSize(1:2);\r\nimg = imread(\"animals.jpg\");\r\n\r\nimg = imresize(img,inputSize);\r\nimshow(img)\r\n\r\nclassList = [categorical(\"tiger\") categorical (\"lion\") categorical(\"leopard\")];\r\n[map,featMap,featImp] = imageLIME(net,img,classList);\r\nfullMask = zeros(inputSize);\r\n\r\nfor j = 1:numel(classList)\r\n  [~,idx] = max(featImp(:,j));\r\n  mask = ismember(featMap,idx);\r\n  fullMask = fullMask + mask;\r\nend\r\n\r\nmaskedImg = uint8(fullMask).*img;\r\nimshow(maskedImg)<\/pre>\r\n<span style=\"font-size: 14px;\"><em>Besides this example, any other examples you like for 20b?<\/em><\/span>\r\n<ul>\r\n \t<li><span style=\"font-size: 14px;\"><span style=\"font-family: courier;\">minibatchqueue<\/span> is new. It\u2019s not super flashy but it\u2019s very useful. <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/minibatchqueue.html\">Minibatchqueue<\/a> is a new way to manage and process data for the custom training workflow. 
It\u2019s just a nicer way of managing data for custom training loops, and much cleaner to read.<\/span><\/li>\r\n<\/ul>\r\n<span style=\"font-size: 14px;\"><em>Thanks to Sophia for the info and recommendations, especially the animal images!! <a href=\"http:\/\/www.mathworks.com\/help\/deeplearning\/ug\/understand-network-predictions-using-lime.html\">The example is here<\/a>. <\/em><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 19px; color: #c45c06;\">New Feature Input for <span style=\"font-family: courier;\">trainNetwork<\/span><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">Our final feature today highlights a more advanced example. Prior to this release, network support was limited to image or sequence data. 20b introduced a new input layer: <span style=\"font-family: courier;\">featureInputLayer<\/span>.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">This layer unlocks a new type of data: <em>generic data<\/em>. The data no longer needs to be continuous, such as time-series data. The data featured in this example is gear shift data: each column corresponds to a single value from a sensor, such as temperature, and each row is an observation.<\/span>\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"894\" height=\"315\" class=\"alignnone size-full wp-image-5075\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2020\/09\/advancedImage4-1.png\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">Link to example is <a href=\"http:\/\/www.mathworks.com\/help\/deeplearning\/ug\/train-network-on-data-set-of-numeric-features.html\"><strong>here.<\/strong><\/a><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\"><em>This workflow sounds like a traditional machine learning workflow. Any concerns about overlap?<\/em> This example closes the gap between traditional machine learning and deep learning, allowing the user to explore both ML and DL. 
Prior to this release, individual feature data could only work in a traditional machine learning workflow.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\"><em>Do you expect feature inputs to run faster than other deep learning examples?<\/em> Of course, I have to say, \u201cit depends,\u201d but I\u2019ve found this example trains very fast (a few seconds). You\u2019re not dealing with large images, so it could train faster. Also, if your model is simpler, as this one is, you may need less time as well.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\"><em>Any challenges when creating the example?<\/em>\u00a0After you get the data in, it\u2019s basically the same workflow as training any network, but keep in mind two things when going through this workflow:<\/span>\r\n<ol>\r\n \t<li><span style=\"font-size: 14px;\">You can\u2019t do image-based convolutions, but with data like this the network may not need to be as complex.<\/span><\/li>\r\n \t<li><span style=\"font-size: 14px;\">Another thing to consider is categorical data. Sensor data can sometimes be a string value, such as \u201con\u201d or \u201coff,\u201d rather than a numeric value. 
Deep learning networks won\u2019t accept these values as <em>input<\/em>, so you have to use <span style=\"font-family: courier;\">onehotencode<\/span> (also a new function that makes this workflow possible), which converts the categorical labels to binary vectors.<\/span><\/li>\r\n<\/ol>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"552\" height=\"570\" class=\"alignnone size-full wp-image-5077\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2020\/09\/advancedImage5-1.png\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">Showing an easy implementation of <span style=\"font-family: courier;\">onehotencode<\/span><\/span>\r\n<h6><\/h6>\r\n<ul>\r\n \t<li><span style=\"font-size: 14px;\">Another extension of this example is using both image <em>and <\/em>feature data in the same network. <a href=\"http:\/\/www.mathworks.com\/help\/deeplearning\/ug\/train-network-on-image-and-feature-data.html\"><strong>This example<\/strong><\/a> uses handwritten digits (images) and the angle (features) as input. It uses a custom training loop to handle the different inputs, but it unlocks a brand-new network type.<\/span><\/li>\r\n<\/ul>\r\n<em><span style=\"font-size: 14px;\">Besides this example, any other examples you like for 20b?<\/span><\/em>\r\n<ul>\r\n \t<li><span style=\"font-size: 14px;\">If you\u2019re doing seriously low-level stuff, you\u2019ll be using the model function option to define and train your network. <span style=\"text-decoration: underline;\"><a href=\"http:\/\/www.mathworks.com\/help\/deeplearning\/ug\/initialize-learnable-parameters-for-custom-training-loop.html\">This concept page<\/a><\/span> shows all built-in layers, the default initializations, and how to implement it yourself.<\/span><\/li>\r\n \t<li><span style=\"font-size: 14px;\">There\u2019s also a new flowchart page that shows which training method will be best for your particular deep learning problem. 
This will help you decide between the simpler <span style=\"font-family: courier;\">trainNetwork<\/span> option and the more advanced custom training loop options.<\/span><\/li>\r\n<\/ul>\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1276\" height=\"715\" class=\"alignnone size-full wp-image-5079\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2020\/09\/flowchart6-1.png\" alt=\"\" \/>\r\n\r\n<span style=\"font-size: 14px;\">A flowchart to help you determine which training style to use. Link to the full flowchart example is <span style=\"text-decoration: underline;\"><a href=\"http:\/\/www.mathworks.com\/help\/deeplearning\/ug\/training-deep-learning-models-in-matlab.html\"><strong>here<\/strong><\/a><\/span>.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\"><em>Thanks to Ieuan for the info and recommendations! <a href=\"http:\/\/www.mathworks.com\/help\/deeplearning\/ug\/train-network-on-data-set-of-numeric-features.html\">Link to the example is here<\/a>.<\/em><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">Straight from the doc team to you! I have to say I\u2019m a fan of the \u201cconcept pages\u201d and hope that trend continues! Thanks again to Jess, Sophia, and Ieuan. I hope you found this informative and helpful. If you have any questions for the team, leave a comment below!<\/span>\r\n<h6><\/h6>\r\n&nbsp;","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2020\/09\/DNDExample1a.png\" onError=\"this.style.display ='none';\" \/><\/div><p>There are over 35 new deep learning related examples in the latest release. That\u2019s a lot to cover, and the release notes can get a bit dry, so I brought in reinforcements. I asked members of the... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2020\/09\/30\/new-deep-learning-examples\/\">read more >><\/a><\/p>","protected":false},"author":156,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[9],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/5049"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/users\/156"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/comments?post=5049"}],"version-history":[{"count":73,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/5049\/revisions"}],"predecessor-version":[{"id":6101,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/5049\/revisions\/6101"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/media?parent=5049"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/categories?post=5049"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/tags?post=5049"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}