{"id":589,"date":"2018-10-04T12:41:53","date_gmt":"2018-10-04T12:41:53","guid":{"rendered":"https:\/\/blogs.mathworks.com\/deep-learning\/?p=589"},"modified":"2021-04-06T15:51:31","modified_gmt":"2021-04-06T19:51:31","slug":"3-trends-in-deep-learning","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/deep-learning\/2018\/10\/04\/3-trends-in-deep-learning\/","title":{"rendered":"3 Trends in Deep Learning"},"content":{"rendered":"<span style=\"font-size: 14px;\"><em>And how MATLAB helps you take advantage of them.<\/em><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">Last post*, Steve Eddins wrote about some of the <a href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2018\/09\/21\/deep-learning-with-matlab-r2018b\/\">new features<\/a> in the latest release. Today, I\u2019d like to talk about how these new features fit into some larger trends we\u2019re seeing in deep learning.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">You may have noticed we continue to add more features for the <em>intermediate stage<\/em> of deep learning: the stage between building your first model and having a finished product.<\/span>\r\n<h6><\/h6>\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1548\" height=\"459\" class=\"alignnone size-full wp-image-609\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2018\/10\/2018-10-02_14-48-01_2.png\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">This void between starting and finishing is where we see a lot of engineers spend a huge portion of their time on tasks such as:<\/span>\r\n<h6><\/h6>\r\n\r\n<ul style=\"margin-left: 20px;\">\r\n \t<span style=\"font-size: 14px;\"><li>Increasing the accuracy of a model with parameter tuning.<\/li><\/span>\r\n\r\n \t<span style=\"font-size: 14px;\"><li>Converting models to C or CUDA to take advantage of speed and hardware.<\/li><\/span>\r\n\r\n \t<span style=\"font-size: 14px;\"><li>Experimenting 
with new network architectures for transfer learning.<\/li><\/span>\r\n<\/ul>\r\n\r\n\r\n<span style=\"font-size: 14px;\">\r\nWe are starting to see new trends emerge in response to these tedious and time-consuming tasks.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">\r\nWhile the following trends aren\u2019t MATLAB-specific, our latest release has the capabilities to fully embrace this intermediate stage of deep learning. If you\u2019re past <em>\u201cWhat is Deep Learning?\u201d<\/em>, read on to explore 3 trends that emerge once you get into the weeds with deep learning.<\/span>\r\n\r\n<h4><\/h4>\r\n\r\n<hr width=\"50%\" \/>\r\n\r\n<span style=\"color: #567dbc; font-size: 18px;\"><strong>\r\nTrend #1: Cloud Computing<\/strong><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">We all know training complicated networks takes time. Adding to that, techniques like <a href=\"https:\/\/www.mathworks.com\/help\/stats\/bayesian-optimization-workflow.html\">Bayesian optimization<\/a> \u2013 which will train your network multiple times with different training parameters \u2013 can provide powerful results at a cost: more time. An option to alleviate some of this pain is to move from local resources to clusters (HPC) or the cloud. 
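To make the idea concrete, here is a minimal sketch of scaling training out in MATLAB. This is an illustration, not the post's own code: `XTrain`, `YTrain`, and the layer array are placeholders, and it assumes Parallel Computing Toolbox plus multiple GPUs (local, cluster, or cloud) are available. The key point is that the change is often just the `ExecutionEnvironment` training option:

```matlab
% Placeholder network; substitute your own data and layers.
layers = [ ...
    imageInputLayer([28 28 1])
    convolution2dLayer(3,16,'Padding','same')
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

% 'multi-gpu' uses all GPUs on the local machine; 'parallel' can instead
% use a cluster or cloud pool when one is configured.
options = trainingOptions('sgdm', ...
    'ExecutionEnvironment','multi-gpu', ...
    'MaxEpochs',10, ...
    'Verbose',false);

net = trainNetwork(XTrain,YTrain,layers,options);
```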
The cloud is emerging as a great resource: it offers the latest hardware and multiple GPUs at a time, and you pay for resources only when you need them.<\/span>\r\n<h4><\/h4>\r\n<span style=\"font-size: 14px;\">\r\nHow MATLAB helps you with this trend:<\/span>\r\n<h6><\/h6>\r\n\r\n<ul style=\"margin-left: 20px;\">\r\n\r\n \t<span style=\"font-size: 14px;\"><li>Check out the MATLAB <a href=\"https:\/\/www.mathworks.com\/cloud.html\">cloud computing<\/a> page.<\/li><\/span>\r\n\r\n \t<span style=\"font-size: 14px;\"><li>Find MATLAB-specific NVIDIA GPU Cloud (NGC) support in our <a href=\"https:\/\/www.mathworks.com\/help\/cloudcenter\/ug\/matlab-deep-learning-container-on-dgx.html\">documentation<\/a>.<\/li><\/span>\r\n \t<span style=\"font-size: 14px;\"><li>There's also a walk-through video on how to set up MATLAB and NGC <a href=\"https:\/\/www.mathworks.com\/videos\/setting-up-the-matlab-deep-learning-container-on-ngc-1537515024196.html\">here<\/a>.<\/li><\/span>\r\n\r\n<\/ul>\r\n<h6><\/h6>\r\n<h4><\/h4>\r\n<span style=\"color: #567dbc; font-size: 18px;\"><strong>\r\nTrend #2: Interoperability<\/strong><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">\r\nLet\u2019s face it: there isn\u2019t a single framework that is \u2018best-in-class\u2019 at everything in deep learning from start to finish. The trend of interoperability between deep learning frameworks, primarily through <a href=\"http:\/\/Onnx.ai\">ONNX.ai<\/a>, is allowing users to switch in and out of frameworks at their convenience. 
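As a hedged sketch of what that switching looks like in MATLAB (this assumes the ONNX converter support package is installed, and the file names and `net` are placeholders):

```matlab
% Export a trained MATLAB network to the ONNX format so another
% framework can load it.
exportONNXNetwork(net,'myModel.onnx');

% Import an ONNX model (e.g., one trained in another framework) into
% MATLAB as a network object ready for prediction or transfer learning.
net2 = importONNXNetwork('theirModel.onnx','OutputLayerType','classification');
```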
MathWorks is part of a collective pushing this trend forward, which is why it\u2019s a great time to check out a variety of deep learning frameworks.<\/span>\r\n<h4><\/h4>\r\n<span style=\"font-size: 14px;\">\r\nHow MATLAB helps you with this trend:<\/span>\r\n<h6><\/h6>\r\n<ul style=\"margin-left: 20px;\">\r\n \t<span style=\"font-size: 14px;\"><li>MATLAB has ONNX <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/importonnxnetwork.html\">import<\/a> and <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/exportonnxnetwork.html\">export<\/a> capabilities through <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/67296-deep-learning-toolbox-converter-for-onnx-model-format\">this support package<\/a>.<\/li><\/span>\r\n<\/ul>\r\n<h6><\/h6>\r\n<span style=\"color: #567dbc; font-size: 18px;\"><strong>\r\nTrend #3: Multi-deployment options\r\n<\/strong><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">\r\nLet\u2019s say you\u2019ve made it to the finish line: you have a deep learning model that performs the task you envisioned. Now you need to get the model to its final destination. Multi-deployment can have various definitions, so let me define it as \u201cdeploying your model to the right location depending on your specific need.\u201d This could be the web, your phone, embedded processors, or GPUs.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">\r\nIf your target is GPUs, CUDA provides the best and most efficient processing through code optimization. Yes, CUDA has been around for a while, but optimization libraries like <a href=\"https:\/\/www.mathworks.com\/videos\/pedestrian-detection-on-a-nvidia-gpu-with-tensorrt-1521713607470.html\">TensorRT<\/a> and Thrust are worth a look. 
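For a taste of how this looks in practice, here is a minimal sketch of targeting TensorRT through GPU Coder. It assumes GPU Coder and its TensorRT support are installed; `myPredict` is a hypothetical entry-point function that runs your network on a single 224-by-224 RGB image:

```matlab
% Configure GPU Coder to generate CUDA code, with the deep learning
% layers mapped onto the TensorRT library.
cfg = coder.gpuConfig('mex');
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');

% Generate a MEX target from the hypothetical entry-point myPredict,
% specifying the size and type of its input argument.
codegen -config cfg myPredict -args {ones(224,224,3,'single')}
```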
It\u2019s not unheard of for TensorRT to speed up ordinary CUDA code by 30%, and that\u2019s on top of the 200% speedup you can get by converting framework-specific code to CUDA.<\/span> (We'll talk about specific numbers in future posts on performance.)\r\n<h4><\/h4>\r\n<span style=\"font-size: 14px;\">\r\nHow MATLAB helps you with this trend:\r\n<\/span>\r\n\r\n&nbsp;\r\n<ul style=\"margin-left: 20px;\">\r\n \t<span style=\"font-size: 14px;\"><li>GPU Coder is the product to watch. It was named Embedded Vision\u2019s 2018 <a href=\"https:\/\/www.embedded-vision.com\/product-awards-2018-winners\">product of the year<\/a>.<\/li><\/span>\r\n\r\n \t<span style=\"font-size: 14px;\"><li>MATLAB has coder tools and support packages for various devices, including iOS, Android, and FPGAs, to name a few.<\/li><\/span>\r\n\r\n \t<span style=\"font-size: 14px;\"><li>Though not deep learning specific, I've heard <a href=\"https:\/\/www.mathworks.com\/products\/matlab\/app-designer.html\">App Designer<\/a> now supports Web App deployment.<\/li><\/span>\r\n<\/ul>\r\n\r\n\r\n<span style=\"font-size: 14px;\">\r\n<span style=\"color: #567dbc; font-size: 14px;\"><strong>With our most recent release,\r\n<\/strong><\/span>\r\nMATLAB has the capabilities to let you fully embrace these trends, and we'll continue to respond as the trends change and evolve. 
This release is a particularly good one for new deep learning features, and I encourage you to take a deeper look.<\/span>\r\n\r\n&nbsp;\r\n\r\n&nbsp;\r\n\r\n<hr \/>\r\n\r\n<strong>*Introductions:<\/strong>\r\n\r\nAs Steve mentioned in his last post, I\u2019ll be taking over the blog, and I\u2019m very excited for this new challenge!\r\n\r\nFor those interested, I want to introduce myself and talk about my vision for this blog.\r\n<h6><\/h6>\r\n<h6><\/h6>\r\nAs you may have seen, I\u2019ve been warming up for this role as a guest blogger writing about \"deep learning in action\" (<a href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2018\/06\/22\/deep-learning-in-action-part-1\/\">part 1<\/a>, <a href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2018\/07\/20\/deep-learning-in-action-part-2\/\">part 2<\/a>, and <a href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2018\/08\/21\/deep-learning-in-action-part-3\/\">part 3<\/a>). Prior to that, I took over the \u2018pick of the week\u2019 blog and wrote about our <a href=\"https:\/\/blogs.mathworks.com\/pick\/2017\/06\/02\/deep-learning-tutorial-series\/\">deep learning tutorial series<\/a>. I maintain the <a href=\"https:\/\/www.mathworks.com\/solutions\/deep-learning.html\">MATLAB for Deep Learning<\/a> content, and I appear in a few <a href=\"https:\/\/www.mathworks.com\/videos\/introduction-to-deep-learning-machine-learning-vs-deep-learning-1489503513018.html\">videos<\/a> on our site from time to time.\r\n<h6><\/h6>\r\n<strong>Background:<\/strong> I've been at MathWorks for 5 years. I started as an Application Engineer, which meant I got to travel to customer sites and present to customers, specializing in image processing and computer vision. My theory is that most of you reading this have never experienced a MathWorks seminar, and I\u2019d like that to change. 
I now work in marketing**: my job is making sure everyone knows about the capabilities of our tools and how to use them to solve problems, and this blog fits well within that job description.\r\n<h6><\/h6>\r\n<h6><\/h6>\r\n<em>**Some people shudder when they hear the word marketing. It\u2019s still a technical role; I just happen to also like spending time on better wording, formatting, and visualizations. It\u2019s a win-win for you. You\u2019ll see!<\/em>\r\n<h6><\/h6>\r\n<strong>Blog Vision:<\/strong> My vision for this blog in one word is <strong><em>access<\/em><\/strong>. I have access to insider information because I work here. My goal for this blog is to be the source of that information. I want to\u00a0talk about the behind-the-scenes deep learning things that you may not see by reading the documentation. While I can\u2019t share future plans, I can give insight, demos, developer Q&amp;A time, and code you won\u2019t find in the product. That\u2019s the vision. I hope you\u2019ll join me on this journey!\r\n<h6><\/h6>\r\n<h6><\/h6>\r\n<h6><\/h6>","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2018\/10\/2018-10-02_14-48-01_2.png\" onError=\"this.style.display ='none';\" \/><\/div><p>And how MATLAB helps you take advantage of them.\r\n\r\nLast post*, Steve Eddins wrote about some of the new features in the latest release. Today, I\u2019d like to talk about how these new features fit into... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2018\/10\/04\/3-trends-in-deep-learning\/\">read more >><\/a><\/p>","protected":false},"author":156,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[9],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/589"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/users\/156"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/comments?post=589"}],"version-history":[{"count":6,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/589\/revisions"}],"predecessor-version":[{"id":613,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/589\/revisions\/613"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/media?parent=589"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/categories?post=589"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/tags?post=589"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}