{"id":27,"date":"2017-10-06T07:00:03","date_gmt":"2017-10-06T07:00:03","guid":{"rendered":"https:\/\/blogs.mathworks.com\/deep-learning\/?p=27"},"modified":"2021-04-06T15:52:57","modified_gmt":"2021-04-06T19:52:57","slug":"deep-learning-with-matlab-r2017b","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/deep-learning\/2017\/10\/06\/deep-learning-with-matlab-r2017b\/","title":{"rendered":"Deep Learning with MATLAB R2017b"},"content":{"rendered":"<div class=\"content\"><!--introduction--><p>The <a href=\"https:\/\/www.mathworks.com\/products\/new_products\/latest_features.html?s_tid=hp_release_2017b\">R2017b release<\/a> of MathWorks products shipped just two weeks ago, and it includes many new capabilities for deep learning. Developers on several product teams have been working hard on these capabilities, and everybody is excited to see them make it into your hands. Today, I'll give you a little tour of what you can expect when you get a chance to update to the new release.<\/p><!--\/introduction--><h3>Contents<\/h3><div><ul><li><a href=\"#967cb0d6-fc69-4bcf-b011-ddc0ba52db42\">New network types and pretrained networks<\/a><\/li><li><a href=\"#b94472f2-f322-45d4-a9f7-f5ebe4a382c2\">New layer types<\/a><\/li><li><a href=\"#46ee0f47-b980-455f-9416-88151e2d8ea9\">Improvements in network training<\/a><\/li><li><a href=\"#36edbc58-9b23-43dc-aad8-a32d0f5764d4\">Semantic segmentation<\/a><\/li><li><a href=\"#11a97a12-e6ac-4d2a-ad55-59ac15d1682e\">Deployment to embedded systems<\/a><\/li><li><a href=\"#57fbde48-94f4-4051-8c4a-a3a8574071e4\">For more information<\/a><\/li><\/ul><\/div><h4>New network types and pretrained networks<a name=\"967cb0d6-fc69-4bcf-b011-ddc0ba52db42\"><\/a><\/h4><p>The heart of deep learning for MATLAB is, of course, the <a href=\"https:\/\/www.mathworks.com\/products\/neural-network.html\">Neural Network Toolbox<\/a>. 
The Neural Network Toolbox introduced two new types of networks that you can build, train, and apply: directed acyclic graph (DAG) networks, and long short-term memory (LSTM) networks.<\/p><p>In a DAG network, a layer can have inputs from multiple layers instead of just one. A layer can also output to multiple layers. Here's a sample from the example <a href=\"https:\/\/www.mathworks.com\/help\/nnet\/examples\/create-and-train-dag-network.html\">Create and Train DAG Network for Deep Learning<\/a>.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2017\/10\/dag-plot.png\" alt=\"\"> <\/p><p>You can try out the pretrained GoogLeNet model, which is a DAG network that you can load using <tt>googlenet<\/tt>.<\/p><p>Experiment also with <a href=\"https:\/\/www.mathworks.com\/help\/nnet\/ug\/long-short-term-memory-networks.html\">long short-term memory (LSTM) networks<\/a>, which have the ability to learn long-term dependencies in time-series data.<\/p><h4>New layer types<a name=\"b94472f2-f322-45d4-a9f7-f5ebe4a382c2\"><\/a><\/h4><p>There's a pile of new layer types, too: batch normalization, transposed convolution, max unpooling, leaky ReLU, clipped ReLU, addition, and depth concatenation.<\/p><p>My colleague <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/profile\/authors\/692126-joe-hicklin\">Joe<\/a> used the Neural Network Toolbox to define his own type of network layer based on a paper he read a couple of months ago. I'll show you his work in detail a little later this fall.<\/p><h4>Improvements in network training<a name=\"46ee0f47-b980-455f-9416-88151e2d8ea9\"><\/a><\/h4><p>When you train your networks, you can now plot the training progress. You can also validate network performance and automatically halt training based on the validation metrics. 
Plus, you can find optimal network parameters and training options using <a href=\"https:\/\/www.mathworks.com\/help\/nnet\/examples\/deep-learning-using-bayesian-optimization.html\">Bayesian optimization<\/a>.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2017\/10\/accuracy-plot-600w.png\" alt=\"\"> <\/p><p><a href=\"https:\/\/www.mathworks.com\/help\/nnet\/ug\/preprocess-images-for-deep-learning.html\">Automatic image preprocessing and augmentation<\/a> is now available for network training. Image augmentation is the idea of increasing the training set by randomly applying transformations, such as resizing, rotation, reflection, and translation, to the available images.<\/p><h4>Semantic segmentation<a name=\"36edbc58-9b23-43dc-aad8-a32d0f5764d4\"><\/a><\/h4><p>As an image processing algorithms person, I am especially intrigued by the new semantic segmentation capability, which lets you classify pixel regions and visualize the results.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2017\/10\/semantic-segmentation-example.png\" alt=\"\"> <\/p><p>See <a href=\"https:\/\/www.mathworks.com\/help\/vision\/examples\/semantic-segmentation-using-deep-learning.html\">\"Semantic Segmentation Using Deep Learning\"<\/a> for a detailed example using the <a href=\"http:\/\/mi.eng.cam.ac.uk\/research\/projects\/VideoRec\/CamVid\/\">CamVid dataset<\/a> from the University of Cambridge.<\/p><h4>Deployment to embedded systems<a name=\"11a97a12-e6ac-4d2a-ad55-59ac15d1682e\"><\/a><\/h4><p>If you are implementing deep learning methods in embedded systems, take a look at <a href=\"https:\/\/www.mathworks.com\/products\/gpu-coder.html\">GPU Coder<\/a>, a brand new product in the R2017b release. GPU Coder generates CUDA from MATLAB code for deep learning, embedded vision, and autonomous systems. 
The generated code is well optimized, as you can see from this performance benchmark plot.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2017\/10\/gpu-coder-benchmark-plot.png\" alt=\"\"> <\/p><p><i>MathWorks benchmarks of inference performance of AlexNet using GPU acceleration, Titan XP GPU, Intel&reg; Xeon&reg; CPU E5-1650 v4 at 3.60GHz, cuDNN v5, and Windows 10. Software versions: MATLAB (R2017b), TensorFlow (1.2.0), MXNet (0.10), and Caffe2 (0.8.1).<\/i><\/p><h4>For more information<a name=\"57fbde48-94f4-4051-8c4a-a3a8574071e4\"><\/a><\/h4><p>I have just scratched the surface of the deep learning capabilities in the ambitious R2017b release.<\/p><p>Here are some additional sources of information.<\/p><div><ul><li><a href=\"https:\/\/www.mathworks.com\/products\/new_products\/latest_features.html?s_tid=hp_release_2017b\">R2017b Highlights<\/a><\/li><li>Neural Network Toolbox (<a href=\"https:\/\/www.mathworks.com\/help\/nnet\/index.html\">doc<\/a>, <a href=\"https:\/\/www.mathworks.com\/help\/nnet\/release-notes.html\">release notes<\/a>)<\/li><li>Parallel Computing Toolbox (<a href=\"https:\/\/www.mathworks.com\/help\/distcomp\/index.html\">doc<\/a>, <a href=\"https:\/\/www.mathworks.com\/help\/distcomp\/release-notes.html\">release notes<\/a>)<\/li><li>Computer Vision System Toolbox (<a href=\"https:\/\/www.mathworks.com\/help\/vision\/index.html\">doc<\/a>, <a href=\"https:\/\/www.mathworks.com\/help\/vision\/release-notes.html\">release notes<\/a>)<\/li><li>Image Processing Toolbox (<a href=\"https:\/\/www.mathworks.com\/help\/images\/index.html\">doc<\/a>, <a href=\"https:\/\/www.mathworks.com\/help\/images\/release-notes.html\">release notes<\/a>)<\/li><li>GPU Coder (<a href=\"https:\/\/www.mathworks.com\/products\/gpu-coder.html\">product info<\/a>)<\/li><\/ul><\/div><script language=\"JavaScript\"> <!-- \r\n    function grabCode_bb89d9d0e7d142308863528639bee3e1() {\r\n        \/\/ 
Remember the title so we can use it in the new page\r\n        title = document.title;\r\n\r\n        \/\/ Break up these strings so that their presence\r\n        \/\/ in the Javascript doesn't mess up the search for\r\n        \/\/ the MATLAB code.\r\n        t1='bb89d9d0e7d142308863528639bee3e1 ' + '##### ' + 'SOURCE BEGIN' + ' #####';\r\n        t2='##### ' + 'SOURCE END' + ' #####' + ' bb89d9d0e7d142308863528639bee3e1';\r\n    \r\n        b=document.getElementsByTagName('body')[0];\r\n        i1=b.innerHTML.indexOf(t1)+t1.length;\r\n        i2=b.innerHTML.indexOf(t2);\r\n \r\n        code_string = b.innerHTML.substring(i1, i2);\r\n        code_string = code_string.replace(\/REPLACE_WITH_DASH_DASH\/g,'--');\r\n\r\n        \/\/ Use \/x3C\/g instead of the less-than character to avoid errors \r\n        \/\/ in the XML parser.\r\n        \/\/ Use '\\x26#60;' instead of '<' so that the XML parser\r\n        \/\/ doesn't go ahead and substitute the less-than character. \r\n        code_string = code_string.replace(\/\\x3C\/g, '\\x26#60;');\r\n\r\n        copyright = 'Copyright 2017 The MathWorks, Inc.';\r\n\r\n        w = window.open();\r\n        d = w.document;\r\n        d.write('<pre>\\n');\r\n        d.write(code_string);\r\n\r\n        \/\/ Add copyright line at the bottom if specified.\r\n        if (copyright.length > 0) {\r\n            d.writeln('');\r\n            d.writeln('%%');\r\n            if (copyright.length > 0) {\r\n                d.writeln('% _' + copyright + '_');\r\n            }\r\n        }\r\n\r\n        d.write('<\/pre>\\n');\r\n\r\n        d.title = title + ' (MATLAB code)';\r\n        d.close();\r\n    }   \r\n     --> <\/script><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\"><br><a href=\"javascript:grabCode_bb89d9d0e7d142308863528639bee3e1()\"><span style=\"font-size: x-small;        font-style: italic;\">Get \r\n      the MATLAB code <noscript>(requires 
JavaScript)<\/noscript><\/span><\/a><br><br>\r\n      Published with MATLAB&reg; R2017b<br><\/p><\/div><!--\r\nbb89d9d0e7d142308863528639bee3e1 ##### SOURCE BEGIN #####\r\n%%\r\n% The\r\n% <https:\/\/www.mathworks.com\/products\/new_products\/latest_features.html?s_tid=hp_release_2017b\r\n% R2017b release> of MathWorks products shipped just two weeks ago, and it\r\n% includes many new capabilities for deep learning. Developers on several\r\n% product teams have been working hard on these capabilities, and everybody\r\n% is excited to see them make it into your hands. Today, I'll give you a\r\n% little tour of what you can expect when you get a chance to update to the\r\n% new release.\r\n%\r\n%% New network types and pretrained networks\r\n%\r\n% The heart of deep learning for MATLAB is, of course, the \r\n% <https:\/\/www.mathworks.com\/products\/neural-network.html Neural Network\r\n% Toolbox>. The Neural Network Toolbox introduced two new types of networks\r\n% that you can build, train, and apply: directed acyclic graph (DAG)\r\n% networks, and long short-term memory (LSTM) networks.\r\n%\r\n% In a DAG network, a layer can have inputs from multiple layers instead of\r\n% just one. A layer can also output to multiple layers. 
Here's a\r\n% sample from the example \r\n% <https:\/\/www.mathworks.com\/help\/nnet\/examples\/create-and-train-dag-network.html \r\n% Create and Train DAG Network for Deep Learning>.\r\n%\r\n% <<https:\/\/blogs.mathworks.com\/deep-learning\/files\/2017\/10\/dag-plot.png>>\r\n%\r\n% You can try out the pretrained GoogLeNet model, which is a DAG network that you\r\n% can load using |googlenet|.\r\n%\r\n% Experiment also with \r\n% <https:\/\/www.mathworks.com\/help\/nnet\/ug\/long-short-term-memory-networks.html \r\n% long short-term memory (LSTM) networks>, which have\r\n% the ability to learn long-term dependencies in time-series data.\r\n%\r\n%% New layer types\r\n%\r\n% There's a pile of new layer types, too: batch normalization, transposed\r\n% convolution, max unpooling, leaky ReLU, clipped ReLU, addition,\r\n% and depth concatenation.\r\n%\r\n% My colleague <https:\/\/www.mathworks.com\/matlabcentral\/profile\/authors\/692126-joe-hicklin \r\n% Joe> used the Neural Network Toolbox to define his own type\r\n% of network layer based on a paper he read a couple of months ago. I'll\r\n% show you his work in detail a little later this fall.\r\n%\r\n%% Improvements in network training\r\n%\r\n% When you train your networks, you can now plot the training progress. You\r\n% can also validate network performance and automatically halt training\r\n% based on the validation metrics. Plus, you can find optimal network parameters\r\n% and training options using \r\n% <https:\/\/www.mathworks.com\/help\/nnet\/examples\/deep-learning-using-bayesian-optimization.html \r\n% Bayesian optimization>.\r\n%\r\n% <<https:\/\/blogs.mathworks.com\/deep-learning\/files\/2017\/10\/accuracy-plot-600w.png>>\r\n%\r\n% <https:\/\/www.mathworks.com\/help\/nnet\/ug\/preprocess-images-for-deep-learning.html \r\n% Automatic image preprocessing and augmentation> is now available for\r\n% network training. 
Image augmentation is the idea of increasing the\r\n% training set by randomly applying transformations, such as resizing,\r\n% rotation, reflection, and translation, to the available images. \r\n%\r\n%% Semantic segmentation\r\n%\r\n% As an image processing algorithms person, I am especially intrigued by\r\n% the new semantic segmentation capability, which lets you classify pixel\r\n% regions and visualize the results.\r\n%\r\n% <<https:\/\/blogs.mathworks.com\/deep-learning\/files\/2017\/10\/semantic-segmentation-example.png>>\r\n%\r\n% See <https:\/\/www.mathworks.com\/help\/vision\/examples\/semantic-segmentation-using-deep-learning.html \r\n% \"Semantic Segmentation Using Deep Learning\"> for a detailed example\r\n% using the <http:\/\/mi.eng.cam.ac.uk\/research\/projects\/VideoRec\/CamVid\/ \r\n% CamVid dataset> from the University of Cambridge.\r\n%\r\n%% Deployment to embedded systems\r\n%\r\n% If you are implementing deep learning methods in embedded systems, take a\r\n% look at <https:\/\/www.mathworks.com\/products\/gpu-coder.html \r\n% GPU Coder>, a brand new product in the R2017b release. GPU Coder\r\n% generates CUDA from MATLAB code for deep learning, embedded vision, and\r\n% autonomous systems. The generated code is well optimized, as you can see from\r\n% this performance benchmark plot.\r\n%\r\n% <<https:\/\/blogs.mathworks.com\/deep-learning\/files\/2017\/10\/gpu-coder-benchmark-plot.png>>\r\n%\r\n% _MathWorks benchmarks of inference performance of AlexNet using GPU\r\n% acceleration, Titan XP GPU, Intel(R) Xeon(R) CPU E5-1650 v4 at 3.60GHz,\r\n% cuDNN v5, and Windows 10. 
Software versions: MATLAB (R2017b),\r\n% TensorFlow (1.2.0), MXNet (0.10), and Caffe2 (0.8.1)._\r\n%\r\n%% For more information\r\n% \r\n% I have just scratched the surface of the deep learning capabilities \r\n% in the ambitious R2017b release.\r\n%\r\n% Here are some additional sources of information.\r\n%\r\n% * <https:\/\/www.mathworks.com\/products\/new_products\/latest_features.html?s_tid=hp_release_2017b \r\n% R2017b Highlights>\r\n% * Neural Network Toolbox (<https:\/\/www.mathworks.com\/help\/nnet\/index.html doc>, \r\n% <https:\/\/www.mathworks.com\/help\/nnet\/release-notes.html release notes>)\r\n% * Parallel Computing Toolbox (<https:\/\/www.mathworks.com\/help\/distcomp\/index.html doc>, \r\n% <https:\/\/www.mathworks.com\/help\/distcomp\/release-notes.html release notes>)\r\n% * Computer Vision System Toolbox (<https:\/\/www.mathworks.com\/help\/vision\/index.html doc>, \r\n% <https:\/\/www.mathworks.com\/help\/vision\/release-notes.html release notes>)\r\n% * Image Processing Toolbox (<https:\/\/www.mathworks.com\/help\/images\/index.html doc>, \r\n% <https:\/\/www.mathworks.com\/help\/images\/release-notes.html release notes>)\r\n% * GPU Coder (<https:\/\/www.mathworks.com\/products\/gpu-coder.html product info>)\r\n\r\n\r\n##### SOURCE END ##### bb89d9d0e7d142308863528639bee3e1\r\n-->","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2017\/10\/dag-plot.png\" onError=\"this.style.display ='none';\" \/><\/div><p>The R2017b release of MathWorks products shipped just two weeks ago, and it includes many new capabilities for deep learning. Developers on several product teams have been working hard on these... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2017\/10\/06\/deep-learning-with-matlab-r2017b\/\">read more >><\/a><\/p>","protected":false},"author":42,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[9],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/27"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/users\/42"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/comments?post=27"}],"version-history":[{"count":3,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/27\/revisions"}],"predecessor-version":[{"id":31,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/27\/revisions\/31"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/media?parent=27"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/categories?post=27"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/tags?post=27"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}