{"id":5835,"date":"2021-01-26T09:54:58","date_gmt":"2021-01-26T14:54:58","guid":{"rendered":"https:\/\/blogs.mathworks.com\/deep-learning\/?p=5835"},"modified":"2021-04-06T15:45:30","modified_gmt":"2021-04-06T19:45:30","slug":"deep-learning-visualizations","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/deep-learning\/2021\/01\/26\/deep-learning-visualizations\/","title":{"rendered":"Deep Learning Visualizations"},"content":{"rendered":"<span style=\"font-size: 14px;\">Evaluating deep learning model performance can be done in a variety of ways. A confusion matrix answers some questions about model performance, but not all. How do we know that the model is identifying the right features? <\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">Let's walk through some of the easy ways to explore deep learning models using visualization, with links to documentation examples for more information.<\/span>\r\n<h6><\/h6>\r\n<h2>Background: Data and Model Information<\/h2>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">For these visualizations, I'm using models created by my colleague Heather Gorr. You can find the code she used to train the models on GitHub <strong><a href=\"https:\/\/github.com\/hgorr\/deep-learning-wild-animals\">here<\/a><\/strong>.<\/span>\r\n\r\n<span style=\"font-size: 14px;\">The basic premise of the models and code is to use wildlife data to see if MATLAB can correctly identify classes of animals in the wild. 
The sample images look something like this:<\/span>\r\n<h6><\/h6>\r\n<table>\r\n<tbody>\r\n<tr>\r\n<td style=\"text-align: center;\">Bear\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5839\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bear2.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">Bear\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5841\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bear3.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">Bear\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5843\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bear5.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">Bighorn Sheep\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5845\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bhs2.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">Bighorn Sheep\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5847\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bhs3.jpg\" alt=\"\" \/><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"text-align: center;\">Cow\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5849\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/cow1.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">Cow\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5851\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/cow2.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: 
center;\">Coyote\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5853\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/coyote1.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">Coyote\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5855\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/coyote2.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">Dog\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5863\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/dog1a.jpg\" alt=\"\" \/><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">The two models I will use today were trained on the following classes of images:<\/span>\r\n<ol>\r\n \t<li><span style=\"font-size: 14px;\">\u00a0[ Bear | Not-Bear ] All bears should be classified as \"bear\", anything else should be \"not bear\"<\/span><\/li>\r\n \t<li><span style=\"font-size: 14px;\">\u00a0[ Bear | Cow | Sheep ]: 3 classes of animals that must fit into the categories<\/span><\/li>\r\n<\/ol>\r\n<span style=\"font-size: 14px;\">Here are the sample images run through each network.<\/span>\r\n<h6><\/h6>\r\n<h3>Bear | Not-Bear Classifier (net1)<\/h3>\r\n<table>\r\n<tbody>\r\n<tr>\r\n<td style=\"text-align: center;\">bear\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5839\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bear2.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">bear\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5841\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bear3.jpg\" alt=\"\" \/><\/td>\r\n<td 
style=\"text-align: center;\">bear\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5843\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bear5.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">not bear\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5845\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bhs2.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">not bear\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5847\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bhs3.jpg\" alt=\"\" \/><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"text-align: center;\">not bear\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5849\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/cow1.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">not bear\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5851\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/cow2.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">not bear\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5853\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/coyote1.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">not bear\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5855\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/coyote2.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">not bear\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" 
height=\"1080\" class=\"alignnone size-full wp-image-5863\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/dog1a.jpg\" alt=\"\" \/><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<h3>Bear | Cow | Sheep Classifier (net2)<\/h3>\r\n<table>\r\n<tbody>\r\n<tr>\r\n<td style=\"text-align: center;\">bear\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5839\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bear2.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">bear\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5841\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bear3.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">bear\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5843\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bear5.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">bighorn sheep\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5845\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bhs2.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">bighorn sheep\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5847\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bhs3.jpg\" alt=\"\" \/><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"text-align: center;\">cow\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5849\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/cow1.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">cow\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" 
height=\"1080\" class=\"alignnone size-full wp-image-5851\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/cow2.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">bear\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5853\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/coyote1.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">bear\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5855\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/coyote2.jpg\" alt=\"\" \/><\/td>\r\n<td style=\"text-align: center;\">cow\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1920\" height=\"1080\" class=\"alignnone size-full wp-image-5863\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/dog1a.jpg\" alt=\"\" \/><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">You can now see that most predictions are correct, but once I add in a few animals outside the expected categories, the predictions get a little off. The rest of this post uses visualization to find out <em>why<\/em> the models predicted the way they did.<\/span>\r\n<h6><\/h6>\r\n<h1>Popular Visualization Techniques<\/h1>\r\n<h2>...and how to use them<\/h2>\r\n<span style=\"font-size: 14px;\">Techniques like LIME, Grad-CAM, and occlusion sensitivity can give you insight into the network and why it chose a particular option.<\/span>\r\n<h6><\/h6>\r\n<h2>LIME<\/h2>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\"><span style=\"font-family: courier;\">imageLIME<\/span> is a newer technique, added to MATLAB in R2020b. 
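<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">A quick note before we dive in: the look of the LIME map can be tuned through name-value options controlling the segmentation and the simple surrogate model. The values below are purely illustrative, not tuned for this data, so treat this as a sketch:<\/span>\r\n<pre><span class=\"comment\">% optional knobs for imageLIME (illustrative values, not tuned)<\/span>\r\nlabel = classify(net2,img);\r\nmap = imageLIME(net2,img,label, ...\r\n    'Segmentation','grid', ... <span class=\"comment\">% 'superpixels' (default) or 'grid'<\/span>\r\n    'NumFeatures',64, ...      <span class=\"comment\">% more features = finer map, but slower<\/span>\r\n    'Model','tree');           <span class=\"comment\">% interpretable surrogate: 'tree' or 'linear'<\/span><\/pre>\r\n<span style=\"font-size: 14px;\">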
Let's investigate the coyote that was called a bear:<\/span>\r\n\r\n&nbsp;\r\n<table style=\"height: 227px;\" width=\"698\">\r\n<tbody>\r\n<tr>\r\n<td><span style=\"font-size: 14px;\">Start by displaying the predictions and the confidences.<\/span>\r\n<h6><\/h6>\r\n<pre>[YPred,scores] = classify(net2,img);\r\n[~,topIdx] = maxk(scores, 3);\r\ntopScores = scores(topIdx);\r\nclasses = net2.Layers(end).Classes; % class names from the output layer\r\ntopClasses = classes(topIdx);\r\n\r\nfigure; imshow(img)\r\ntitleString = compose(\"%s (%.2f)\",topClasses,topScores');\r\ntitle(join(titleString, \"; \"));<\/pre>\r\n<\/td>\r\n<td><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-5881 size-medium\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/coyoteLime1-300x294.png\" alt=\"\" width=\"300\" height=\"294\" \/><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<h6><\/h6>\r\n<table style=\"height: 352px;\" width=\"745\">\r\n<tbody>\r\n<tr>\r\n<td style=\"width: 60%;\"><span style=\"font-size: 14px;\">Then, use <span style=\"font-family: courier;\">imageLIME<\/span> to visualize the output.<\/span>\r\n<h6><\/h6>\r\n<pre>map = imageLIME(net2,img,YPred);\r\n\r\nfigure\r\nimshow(img,'InitialMagnification',150)\r\nhold on\r\nimagesc(map,'AlphaData',0.5)\r\ncolormap jet\r\ncolorbar\r\n\r\ntitle(sprintf(\"Image LIME (%s)\", YPred))\r\nhold off<\/pre>\r\n<\/td>\r\n<td><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-5883 size-medium\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/coyoteLime2-300x273.png\" alt=\"\" width=\"300\" height=\"273\" \/><\/td>\r\n<\/tr>\r\n<tr>\r\n<td colspan=\"2\">\r\n<span style=\"font-size: 14px;\">Here, imageLIME indicates that the reason for the prediction is in the lower corner of the image. The prediction is clearly incorrect, and the strongest features are not even on the coyote. 
This would indicate those features were learned incorrectly for the bear class.<\/span><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n\r\n<h6><\/h6>\r\n<h2>Grad-CAM<\/h2>\r\n<span style=\"font-size: 14px;\">Let's focus on the bears using Grad-CAM, another available visualization technique. Here I think you'll find some very interesting things about the models we're using.<\/span>\r\n\r\n<span style=\"font-size: 14px;\">Keep in mind: <strong>net1<\/strong> detects bear or not-bear, and <strong>net2<\/strong> detects bear, cow, or sheep.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">Let's set up for Grad-CAM.<\/span>\r\n<h6><\/h6>\r\n<pre><span class=\"comment\">% create a layer graph from the network and remove its final classification layer<\/span>\r\nlgraph = layerGraph(net1);\r\nlgraph = removeLayers(lgraph, lgraph.Layers(end).Name);\r\ndlnet = dlnetwork(lgraph);\r\n\r\n<span class=\"comment\">% you need to know the probability layer and a feature layer; for<\/span>\r\n<span class=\"comment\">% inceptionv3, use softmax and the last ReLU layer<\/span>\r\nsoftmaxName = 'predictions_softmax';\r\nfeatureLayerName = 'activation_94_relu';\r\n\r\n<span class=\"comment\">% the network input size, used to resize the map later<\/span>\r\ninputSize = net1.Layers(1).InputSize(1:2);<\/pre>\r\n<h6><\/h6>\r\n<pre>[classfn,score] = classify(net1,img); <span class=\"comment\">%bear1<\/span>\r\nimshow(img);\r\ntitle(sprintf(\"%s (%.2f)\", classfn, score(classfn)));\r\n\r\ndlImg = dlarray(single(img),'SSC');\r\n<span class=\"comment\">% the gradcam helper function is defined in the Grad-CAM doc example<\/span>\r\n[featureMap, dScoresdMap] = dlfeval(@gradcam, dlnet, dlImg, softmaxName, featureLayerName, classfn);\r\n\r\n<span class=\"comment\">% a few more lines for visualization<\/span>\r\ngradcamMap = sum(featureMap .* sum(dScoresdMap, [1 2]), 3);\r\ngradcamMap = extractdata(gradcamMap);\r\ngradcamMap = rescale(gradcamMap);\r\ngradcamMap = imresize(gradcamMap, inputSize, 'Method', 'bicubic');\r\n<\/pre>\r\n<span style=\"font-size: 14px;\">Running this code for bears 1, 2, and 3, using the bear \/ not-bear model, here are the results:<\/span>\r\n<table>\r\n<tbody>\r\n<tr>\r\n<td><img decoding=\"async\" 
loading=\"lazy\" width=\"527\" height=\"434\" class=\"alignnone size-full wp-image-5913\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bear1viz1.jpg\" alt=\"\" \/><\/td>\r\n<td><img decoding=\"async\" loading=\"lazy\" width=\"527\" height=\"434\" class=\"alignnone size-full wp-image-5915\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bear2viz2.jpg\" alt=\"\" \/><\/td>\r\n<td><img decoding=\"async\" loading=\"lazy\" width=\"527\" height=\"434\" class=\"alignnone size-full wp-image-5917\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bear3viz1.jpg\" alt=\"\" \/><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<span style=\"font-size: 14px;\">Nothing too exciting with the images above. Those are definitely bears, and the visualization is more or less targeting the bear.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">NOW! Let's use the other model. [Same code as above, just replacing net1 with net2] Remember from the very beginning: the network predicted all bears correctly as bears. So why spend time visualizing correct results?<\/span>\r\n<table>\r\n<tbody>\r\n<tr>\r\n<td><img decoding=\"async\" loading=\"lazy\" width=\"527\" height=\"434\" class=\"alignnone size-full wp-image-5921\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bear1viz2.jpg\" alt=\"\" \/><\/td>\r\n<td><img decoding=\"async\" loading=\"lazy\" width=\"527\" height=\"434\" class=\"alignnone size-full wp-image-5923\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bear2viz3.jpg\" alt=\"\" \/><\/td>\r\n<td><img decoding=\"async\" loading=\"lazy\" width=\"527\" height=\"434\" class=\"alignnone size-full wp-image-5925\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bear3viz2.jpg\" alt=\"\" \/><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<span style=\"font-size: 14px;\">Do you see it? This model is always activating on the lower left corner! 
So the model is predicting correctly, but <em>why<\/em> it predicted a bear is suspect.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">One more thing before we move on: what if we just crop out the strongest\/incorrect features the network is focused on?<\/span>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"527\" height=\"434\" class=\"alignnone size-full wp-image-5929\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/cownotcow.jpg\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">... interesting! Removing the strongest features in the image caused the network to predict incorrectly. This suggests the network is likely not learning real animal features, and is instead focusing on other aspects of the images. Most often I find that visualizations are great for gaining insight into a model, but rarely have I been able to prove that an \"accurate\" model is not really predicting accurately at all! My work here is done.<\/span>\r\n<h6><\/h6>\r\n<h2>Occlusion Sensitivity<\/h2>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">The final output visualization we'll cover today is occlusion sensitivity. This is by far the easiest to implement code-wise.<\/span>\r\n<pre>label = classify(net1,img);\r\n\r\nscoreMap = occlusionSensitivity(net1,img,label);\r\n\r\nfigure\r\nimshow(img)\r\nhold on\r\nimagesc(scoreMap,'AlphaData',0.5);\r\ncolormap jet<\/pre>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"527\" height=\"434\" class=\"alignnone size-full wp-image-5933\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bhsviz-1.jpg\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">Here you can see that the reason the sheep is labeled \"not bear\" is in the center of the animal. 
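<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">The granularity of the score map is controlled by the occluding mask: a smaller mask and stride give a finer (but slower) map, since the network is evaluated once per mask position. A hedged sketch with illustrative values, not tuned for this data:<\/span>\r\n<pre><span class=\"comment\">% smaller mask\/stride = finer occlusion map, more network evaluations<\/span>\r\nscoreMap = occlusionSensitivity(net1,img,label, ...\r\n    'MaskSize',30, ...\r\n    'Stride',10);<\/pre>\r\n<span style=\"font-size: 14px;\">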
I was semi-disappointed that those big horns didn't light up, but what can you do?<\/span>\r\n<h6><\/h6>\r\n<h2>Gradient Attribution Techniques<\/h2>\r\n<span style=\"font-size: 14px;\">One more note: there are additional visualization techniques, such as gradient attribution, but I think we've covered a lot already. To learn more, you can check out this <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ug\/investigate-classification-decisions-using-gradient-attribution-techniques.html\">doc example<\/a>.<\/span>\r\n<table style=\"height: 303px;\" width=\"629\">\r\n<tbody>\r\n<tr>\r\n<td><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-5939 size-full\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bear3a.jpg\" alt=\"\" width=\"527\" height=\"434\" \/><\/td>\r\n<td><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-5935\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bearGradient.png\" alt=\"\" width=\"442\" height=\"428\" \/><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<h6><\/h6>\r\n<h1>Inside the Network Insights<\/h1>\r\n<h6><\/h6>\r\n<h2>Deep Dream<\/h2>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">Here we can visualize the network's learned features. 
Let's visualize the first 25 features learned by the network at the second convolutional layer.<\/span>\r\n<h6><\/h6>\r\n<table style=\"height: 227px;\" width=\"698\">\r\n<tbody>\r\n<tr>\r\n<td>\r\n<h6><\/h6>\r\n<pre>layer = 'conv2d_2';\r\nchannels = 1:25;\r\n\r\nI = deepDreamImage(net2,layer,channels, ...\r\n    'PyramidLevels',1, ...\r\n    'Verbose',0);\r\n\r\nfigure\r\nfor i = 1:25\r\n    subplot(5,5,i)\r\n    imshow(I(:,:,:,i))\r\nend\r\n<\/pre>\r\n<\/td>\r\n<td><img decoding=\"async\" loading=\"lazy\" width=\"300\" height=\"225\" class=\"alignnone size-medium wp-image-5949\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/dd1-300x225.jpg\" alt=\"\" \/><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">I'll be honest: while I find deep dream visually pleasing, I don't tend to use it very often as a debugging technique, though I'd be interested to hear from anyone who has an example of success. Sometimes, deep dream can be helpful for later layers of the network, where other techniques often fail.<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">However, in this particular example, I wasn't able to find anything particularly appealing even after running deep dream for 100 iterations.<\/span>\r\n<pre>iterations = 100;\r\nlayerName = 'new_fc';\r\n\r\nI = deepDreamImage(net1,layerName,1, ...\r\n    'Verbose',false, ...\r\n    'NumIterations',iterations);\r\n\r\nfigure\r\nimshow(I)<\/pre>\r\n<h6><img decoding=\"async\" loading=\"lazy\" width=\"601\" height=\"602\" class=\"alignnone size-full wp-image-5963\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/dd3.jpg\" alt=\"\" \/><\/h6>\r\n<span style=\"font-size: 14px;\">This image visualizes the \"bear\" class. 
I'm not seeing anything particularly insightful to comment on, though I have seen very attractive deep dream images created from other pretrained networks in <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ug\/deep-dream-images-using-googlenet.html#DeepDreamImagesUsingGoogLeNetExample-3\"><u>this example<\/u><\/a>.<\/span>\r\n<h6><\/h6>\r\n<h2>Activations<\/h2>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">Similar to deep dream, you can use activations to visualize the input image after it passes through specific channels, which show the features the network has learned.<\/span>\r\n<pre>im = imread('bear2.JPG');\r\n\r\nimgSize = size(im);\r\nimgSize = imgSize(1:2);\r\n\r\nact1 = activations(net1,im,'conv2d_7');\r\n\r\nsz = size(act1);\r\nact1 = reshape(act1,[sz(1) sz(2) 1 sz(3)]);\r\n\r\nI = imtile(mat2gray(act1),'GridSize',[7 7]);\r\nimshow(I)\r\n<\/pre>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1334\" height=\"751\" class=\"alignnone size-full wp-image-5955\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bearsActivations7.jpg\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">Then, you can use activations to quickly pull the channel with the strongest activation.<\/span>\r\n<h6><\/h6>\r\n<pre><span class=\"comment\">% find the strongest channel<\/span>\r\n[maxValue,maxValueIndex] = max(max(max(act1)));\r\nact1chMax = act1(:,:,:,maxValueIndex);\r\nact1chMax = mat2gray(act1chMax);\r\nact1chMax = imresize(act1chMax,imgSize);\r\n\r\nI = imtile({im,act1chMax});\r\nimshow(I)\r\n<\/pre>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1508\" height=\"495\" class=\"alignnone size-full wp-image-5957\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bearsActivations7max.jpg\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">See a related <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ug\/visualize-activations-of-a-convolutional-neural-network.html\">documentation 
example<\/a> for more ways to use activations.<\/span>\r\n<h6><\/h6>\r\n<h2>TSNE<\/h2>\r\n<span style=\"font-size: 14px;\">Maria wrote a <a href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2019\/01\/18\/neural-network-feature-visualization\/\">blog post<\/a> about this a while back, and I'm happy to report a new example is in documentation <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ug\/view-network-behavior-using-tsne.html\"><strong>here<\/strong><\/a>. This can help show similarities and differences between classes.<\/span>\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/01\/clustering1.png\" width=\"560\" height=\"420\" \/>\r\n<h6><\/h6>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px;\">Any other visualizations you like that I should add to this collection? Let me know in the comments below!<\/span>\r\n<h6><\/h6>\r\n<p><a href=\"https:\/\/twitter.com\/jo_pings?ref_src=twsrc%5Etfw\" class=\"twitter-follow-button\" data-size=\"large\" data-show-count=\"false\">Follow @jo_pings<\/a><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2021\/01\/bear2.jpg\" onError=\"this.style.display ='none';\" \/><\/div><p>Evaluating deep learning model performance can be done a variety of ways. A confusion matrix answers some questions about the model performance, but not all. How do we know that the model is... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2021\/01\/26\/deep-learning-visualizations\/\">read more >><\/a><\/p>","protected":false},"author":156,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[9],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/5835"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/users\/156"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/comments?post=5835"}],"version-history":[{"count":86,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/5835\/revisions"}],"predecessor-version":[{"id":6067,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/5835\/revisions\/6067"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/media?parent=5835"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/categories?post=5835"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/tags?post=5835"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}