{"id":3141,"date":"2019-11-25T07:15:05","date_gmt":"2019-11-25T07:15:05","guid":{"rendered":"https:\/\/blogs.mathworks.com\/deep-learning\/?p=3141"},"modified":"2021-04-06T15:49:16","modified_gmt":"2021-04-06T19:49:16","slug":"scene-classification-using-deep-learning","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/deep-learning\/2019\/11\/25\/scene-classification-using-deep-learning\/","title":{"rendered":"Scene Classification Using Deep Learning"},"content":{"rendered":"<span style=\"font-family: courier;\">This is a post from <a href=\"http:\/\/www.ogemarques.com\/\">Oge Marques, PhD<\/a>, Professor of Engineering and Computer Science at FAU, famous on this blog for his post on\u00a0<a href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2019\/08\/22\/data-augmentation-for-image-classification-applications-using-deep-learning\/\">image augmentation<\/a>. He's back to talk about scene classification, with great code for you to try. You can also follow him on Twitter (<a href=\"https:\/\/twitter.com\/ProfessorOge\">@ProfessorOge<\/a>).<\/span>\r\n<h6><\/h6>\r\nAutomatic scene classification (sometimes referred to as <em>scene recognition<\/em> or <em>scene analysis<\/em>) is a longstanding research problem in computer vision, which consists of assigning a label such as 'beach', 'bedroom', or simply 'indoor' or 'outdoor' to an image presented as input, based on the image's overall contents.\r\n<h6><\/h6>\r\nIn this blog post, I will show you how to design and implement a computer vision solution that can classify an image of a scene into its <strong>category<\/strong> (<em>bathroom<\/em>, <em>kitchen<\/em>, <em>attic<\/em>, or <em>bedroom<\/em> for indoor; <em>hayfield<\/em>, <em>beach<\/em>, <em>playground<\/em>, or <em>forest<\/em> for outdoor) (Fig. 
1) using a deep neural network.\r\n<h6><\/h6>\r\n<table style=\"width: 75%;\">\r\n<tbody>\r\n<tr>\r\n<td><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-3169 size-thumbnail aligncenter\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/10\/Places365_val_00002197-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<p style=\"text-align: center;\">forest<\/p>\r\n<\/td>\r\n<td><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-3161 size-thumbnail aligncenter\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/10\/bedroom1-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<p style=\"text-align: center;\">bedroom<\/p>\r\n<\/td>\r\n<td><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-3163 size-thumbnail aligncenter\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/10\/kitchen1-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<p style=\"text-align: center;\">kitchen<\/p>\r\n<\/td>\r\n<td><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-3165 size-thumbnail aligncenter\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/10\/Places365_val_00000106-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<p style=\"text-align: center;\">hayfield<\/p>\r\n<\/td>\r\n<\/tr>\r\n<tr>\r\n<td><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-3157 size-thumbnail aligncenter\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/10\/attick1-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n<p style=\"text-align: center;\">attic<\/p>\r\n<\/td>\r\n<td style=\"text-align: center;\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-3167 size-thumbnail\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/10\/Places365_val_00000119-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n\r\nbeach<\/td>\r\n<td style=\"text-align: center;\"><img decoding=\"async\" 
loading=\"lazy\" class=\"alignnone wp-image-3171 size-thumbnail\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/10\/playground1-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\n\r\nplayground<\/td>\r\n<td style=\"text-align: center;\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-3159 size-thumbnail\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/10\/bathroom1-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" \/>\r\nbathroom<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\nFig. 1: Examples of images from the MIT Places dataset [1] with their corresponding categories.\r\n<h6><\/h6>\r\nFirst, let me guide you through the basics of scene recognition by humans and the history of scene recognition using computer vision.\r\n<h6><\/h6>\r\n<h2>Scene recognition by humans<\/h2>\r\nFor the sake of this discussion, let\u2019s use a working definition of a <em>scene<\/em> as \"a view of a <strong>real-world environment <\/strong>that contains <strong>multiple surfaces and objects, <\/strong>organized in a <strong>meaningful way.<\/strong>\"<strong> [2]<\/strong>\r\n\r\nHumans are capable of recognizing and classifying scenes in a tenth of a second or less, thanks to our ability to capture the <em>gist<\/em> of the scene, even though this usually means missing many of its details <strong>[3]<\/strong>. 
For example, we can tell an image of a <em>bathroom<\/em> from one of a <em>bedroom<\/em> quickly, but would be dumbfounded if asked (after the image is no longer visible) about specifics of the scene (for example: how many nightstands or sinks did you see?).\r\n<h6><\/h6>\r\n<h2><strong>Scene recognition in computer vision, before and after deep learning<\/strong><\/h2>\r\nPrior to deep learning, early efforts included the design and implementation of a computational model of holistic scene recognition based on a very low-dimensional representation of the scene, known as its <em>Spatial Envelope<\/em> <strong>[3]<\/strong>. This line of research also gave us access to important datasets (such as <em>Places365<\/em><strong> [1]<\/strong>), which have been crucial to the success of deep learning in scene recognition research. The training set of <em>Places365-Standard<\/em> has ~1.8 million images from 365 scene categories, with as many as 5000 images per category.\r\n\r\nThe use of deep learning, particularly Convolutional Neural Networks (CNNs), for scene classification has received great attention from the computer vision community <strong>[4]<\/strong>. Several baseline CNNs pretrained on the <em>Places365-Standard<\/em> dataset are available at <a href=\"https:\/\/github.com\/CSAILVision\/places365\">https:\/\/github.com\/CSAILVision\/places365<\/a>.\r\n<h6><\/h6>\r\n<h2><strong>Scene recognition using deep learning in MATLAB<\/strong><\/h2>\r\nNext, I want to show how to implement a scene classification solution using a subset of the MIT Places dataset <strong>[1]<\/strong> and a pretrained model, Places365GoogLeNet <strong>[5, 6]<\/strong>. To maximize the learning experience, we will build, train, and evaluate different CNNs and compare the results. In \"Part 1\", we will build a simple CNN from scratch, train it, and evaluate it. In \"Part 2\", we will use a pretrained model, Places365GoogLeNet, \"as is\". 
In \"Part 3\", we follow a <em>transfer learning<\/em> approach that demonstrates some of the latest features and best practices for image classification in MATLAB. Finally, in \"Part 4\", we employ image data augmentation techniques to see whether they lead to improved results.\r\n<h6><\/h6>\r\n<h4><em>Data Preparation<\/em><\/h4>\r\n<ol>\r\n \t<li>We build an <strong>ImageDatastore<\/strong> consisting of eight folders (corresponding to the eight categories: 'attic', 'bathroom', 'beach', 'bedroom', 'forest', 'hayfield', 'kitchen', and 'playground') with 1000 images each.<\/li>\r\n \t<li>We split the data into training (70%) and validation (30%) sets.<\/li>\r\n \t<li>We create an\u00a0<strong>augmentedImageDatastore<\/strong> to handle image resizing, specifying the training images and the size of output images, which must be compatible with the size expected by the input layer of the neural network. This is more elegant and efficient than running batch image resizing (and saving the resized images back to disk).<\/li>\r\n<\/ol>\r\nCreate image datastore\r\n<code>imds = imageDatastore(fullfile('MITPlaces'),...\r\n'IncludeSubfolders',true,'FileExtensions','.jpg','LabelSource','foldernames');\r\n<\/code>\r\nCount number of images per label and save the number of classes\r\n<code>labelCount = countEachLabel(imds);\r\nnumClasses = height(labelCount);\r\n<\/code>\r\nCreate training and validation sets\r\n<code>[imdsTraining, imdsValidation] = splitEachLabel(imds, 0.7);\r\n<\/code>\r\nUse image data augmentation to handle the resizing, since the original images are 256-by-256. 
The input layer of the CNNs used in this example expects them to be 224-by-224.\r\n<code>inputSize = [224,224,3];\r\naugimdsTraining = augmentedImageDatastore(inputSize(1:2),imdsTraining);\r\naugimdsValidation = augmentedImageDatastore(inputSize(1:2),imdsValidation);\r\n<\/code>\r\n<h6><\/h6>\r\n<h4><em>Model development \u2013 Part 1 (Building and training a CNN from scratch)<\/em><\/h4>\r\nWe build a simple CNN from scratch (Fig. 3), specify its training options, train it, and evaluate it.\r\n<h6><\/h6>\r\nDefine Layers\r\n<code>layers = [\r\nimageInputLayer([224 224 3])\r\nconvolution2dLayer(3,16,'Padding',1)\r\nbatchNormalizationLayer\r\nreluLayer\r\nmaxPooling2dLayer(2,'Stride',2)\r\nconvolution2dLayer(3,32,'Padding',1)\r\nbatchNormalizationLayer\r\nreluLayer\r\nmaxPooling2dLayer(2,'Stride',2)\r\nconvolution2dLayer(3,64,'Padding',1)\r\nbatchNormalizationLayer\r\nreluLayer\r\nfullyConnectedLayer(8)\r\nsoftmaxLayer\r\nclassificationLayer];\r\n<\/code>\r\nSpecify Training Options\r\n\r\n<code>options = trainingOptions('sgdm',...\r\n'MaxEpochs',30, ...\r\n'ValidationData',augimdsValidation,...\r\n'ValidationFrequency',50,...\r\n'InitialLearnRate', 0.0003,...\r\n'Verbose',false,...\r\n'Plots','training-progress');\r\n<\/code>\r\n\r\nTrain network\r\n\r\n<code>baselineCNN = trainNetwork(augimdsTraining,layers,options);<\/code>\r\n\r\nClassify and Compute Accuracy\r\n\r\n<code>predictedLabels = classify(baselineCNN,augimdsValidation);\r\nvalLabels = imdsValidation.Labels;\r\nbaselineCNNAccuracy = sum(predictedLabels == valLabels)\/numel(valLabels);\r\n<\/code>\r\n\r\n&nbsp;\r\n\r\n&nbsp;\r\n\r\n&nbsp;\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"600\" height=\"341\" class=\"size-full wp-image-3387 alignleft\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/11\/fig3_cnnbaseline2.png\" alt=\"\" \/>\r\n\r\n&nbsp;\r\n\r\n&nbsp;\r\n\r\nFig. 
3: A baseline CNN used in \"Part 1\".\r\n<h6><\/h6>\r\n&nbsp;\r\n\r\n&nbsp;\r\n\r\n&nbsp;\r\n\r\n&nbsp;\r\n\r\n&nbsp;\r\n\r\n&nbsp;\r\n\r\n&nbsp;\r\n\r\n&nbsp;\r\n\r\n&nbsp;\r\n\r\n<img decoding=\"async\" loading=\"lazy\" class=\"aligncenter wp-image-3205 size-full\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/10\/fig4_trainingprogress.png\" alt=\"\" width=\"1730\" height=\"910\" \/>\r\nFig. 4: Learning curves for the baseline CNN. Notice the telltale signs of overfitting: accuracy and loss keep improving for the training data but have already flattened out for the validation dataset.\r\n<h6><\/h6>\r\nUnsurprisingly, the network's accuracy is modest (~60%) and it suffers from overfitting (Fig. 4).\r\n<h6><\/h6>\r\n<h4><em>Model development \u2013 Part 2 (Using a pretrained model, Places365GoogLeNet, \"as is\")<\/em><\/h4>\r\nWe use a pretrained model, <em>Places365GoogLeNet<\/em>, \"as is\". Since the model has been trained as a 365-class classifier, its performance will be suboptimal (validation accuracy ~53%), in part due to cases in which the model predicted a related\/more specific category with greater confidence than any of the 8 categories selected for this exercise (Fig. 5).\r\n<h6><\/h6>\r\nLoad pretrained Places365GoogLeNet. First, download and install the Deep Learning Toolbox Model for GoogLeNet Network support package. 
See <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/googlenet.html\">https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/googlenet.html<\/a> for instructions.\r\n\r\n<code>places365Net = googlenet('Weights','places365');<\/code>\r\n\r\nClassify and Compute Accuracy\r\n\r\n<code>YPred = classify(places365Net,augimdsValidation);\r\nYValidation = imdsValidation.Labels;\r\nplaces365NetAccuracy = sum(YPred == YValidation)\/numel(YValidation);<\/code>\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"500\" height=\"449\" class=\"alignnone size-full wp-image-3407\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/11\/fig5_predictions2-1.png\" alt=\"\" \/>\r\n\r\nFig. 5: Using a pretrained CNN \"as is\": examples of classification errors resulting from predicting similar (or more specific) classes.\r\n<h6><\/h6>\r\n<h4><em>Model development \u2013 Part 3 (Transfer Learning)<\/em><\/h4>\r\nWe will now follow a principled <em>transfer learning<\/em> approach. We start by locating the last learnable layer and the classification layer using <strong>layerGraph<\/strong> and <strong>findLayersToReplace<\/strong>.\r\n<h6><\/h6>\r\n<code>lgraph = layerGraph(places365Net);\r\n[learnableLayer,classLayer] = findLayersToReplace(lgraph);\r\n[learnableLayer,classLayer]<\/code>\r\n\r\nNext, we replace them with appropriate equivalent layers (following the example at <strong>[7]<\/strong>) and update the network's graph with two calls to replaceLayer. We also freeze the initial layers (i.e., set their learning rates to zero) using freezeWeights: this can significantly speed up network training and, since our new dataset is small, can also prevent those layers from overfitting to the new dataset. 
<strong>[7]<\/strong>\r\n\r\n<code>newLayer = fullyConnectedLayer(numClasses, ...\r\n'Name','new_fc', ...\r\n'WeightLearnRateFactor',10, ...\r\n'BiasLearnRateFactor',10);\r\n\r\nlgraph = replaceLayer(lgraph,learnableLayer.Name,newLayer);\r\n\r\nnewClassLayer = classificationLayer('Name','new_classoutput');\r\nlgraph = replaceLayer(lgraph,classLayer.Name,newClassLayer);\r\n\r\n<span class=\"comment\">% Freeze initial layers<\/span>\r\nlayers = lgraph.Layers;\r\nconnections = lgraph.Connections;\r\n\r\nlayers(1:10) = freezeWeights(layers(1:10));\r\nlgraph = createLgraphUsingConnections(layers,connections);<\/code>\r\n\r\nWe then train the network and evaluate the classification accuracy on the validation set: ~95%. The resulting confusion matrix (Fig. 6) gives us additional insight into which categories the model misclassifies most frequently \u2013 in this case, bathroom scenes classified as kitchen (18 instances) and bedroom scenes labeled as attic (12 cases).\r\n\r\n&nbsp;\r\n<h6><img decoding=\"async\" loading=\"lazy\" width=\"400\" height=\"395\" class=\"alignnone size-full wp-image-3409\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/11\/fig6_confusion2.png\" alt=\"\" \/><\/h6>\r\nFig. 6: Confusion matrix for the scene classification solution using a pretrained model, Places365GoogLeNet, and best practices in transfer learning.\r\n<h6><\/h6>\r\nUpon inspecting some of the misclassified images, you can see that they result from a combination of incorrect labels, ambiguous scenes, and \"non-iconic\" images <strong>[8]<\/strong> (Fig. 
7).\r\n<table>\r\n<tbody>\r\n<tr>\r\n<td><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-3419 size-full\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/11\/fig7_result1_2.png\" alt=\"\" width=\"200\" height=\"186\" \/><\/td>\r\n<td><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-3421 size-full\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/11\/fig7_result2_2.png\" alt=\"\" width=\"200\" height=\"188\" \/><\/td>\r\n<td><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-3423 size-full\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/11\/fig7_result3_2.png\" alt=\"\" width=\"200\" height=\"203\" \/><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\nFig. 7: Examples of classification errors from a retrained Places365GoogLeNet, due to (left to right, respectively): incorrect labels, ambiguous scenes (a bedroom in the attic), and \"non-iconic\" images.\r\n<h6><\/h6>\r\n<h4><em>Model development \u2013 Part 4 (Data Augmentation)<\/em><\/h4>\r\nWe employ data augmentation (covered in detail <a href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2019\/08\/22\/data-augmentation-for-image-classification-applications-using-deep-learning\/\">in my previous post<\/a><strong> [9]<\/strong>) by specifying <em>another<\/em> <strong>augmentedImageDatastore<\/strong> (which randomly applies left-right flips, translations, and scaling to the training images) as the data source for the\u00a0<strong>trainNetwork<\/strong> function.\r\n<h6><\/h6>\r\nThe resulting classification accuracy and confusion matrix turn out to be almost identical to the ones obtained without data augmentation, which shouldn\u2019t come as a surprise, since our analysis of the classification errors (such as the ones displayed in Fig. 
7) suggests that the reasons why our model's predictions are occasionally incorrect (wrong labels, ambiguity in the scenes, and \"non-iconic\" images) are not mitigated by offering additional variations (scaled, flipped, translated) of each image to the model during training. This reinforces Andrew Ng's advice to invest time in performing human error analysis, tabulating the reasons behind a machine learning solution's mistakes, before deciding on the best ways to improve it <strong>[10]<\/strong>.\r\n<h6><\/h6>\r\nThe complete code and images are available at the MATLAB File Exchange<strong> [11]<\/strong>. You can adapt it to use different pretrained CNNs, datasets, and\/or model parameters and hyperparameters. If you do, drop us a note in the comments section telling us what you did and how well it worked.\r\n<h6><\/h6>\r\nTo summarize, this blog post has shown how to use MATLAB and deep neural networks to perform scene classification on images from a publicly available dataset. The references below provide links to materials to learn more details.\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n&nbsp;\r\n<h1>References<\/h1>\r\n<ul>\r\n \t<li style=\"list-style-type: none !important;\">[1] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba, \"Places: A 10 Million Image Database for Scene Recognition\", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017. <a href=\"http:\/\/places2.csail.mit.edu\/\">http:\/\/places2.csail.mit.edu\/<\/a><\/li>\r\n \t<li style=\"list-style-type: none !important;\">[2] A. Oliva, \"Visual Scene Perception\", <a href=\"http:\/\/olivalab.mit.edu\/Papers\/VisualScenePerception-EncycloPerception-Sage-Oliva2009.pdf\">http:\/\/olivalab.mit.edu\/Papers\/VisualScenePerception-EncycloPerception-Sage-Oliva2009.pdf<\/a><\/li>\r\n \t<li style=\"list-style-type: none !important;\">[3] A. Oliva and A. Torralba (2001). 
\"Modeling the shape of the scene: a holistic representation of the spatial envelope\", International Journal of Computer Vision, Vol. 42(3): 145-175.\r\nPaper, dataset, and MATLAB code available at: <a href=\"http:\/\/people.csail.mit.edu\/torralba\/code\/spatialenvelope\/\">http:\/\/people.csail.mit.edu\/torralba\/code\/spatialenvelope\/<\/a><\/li>\r\n \t<li style=\"list-style-type: none !important;\">[4] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva, \"Learning Deep Features for Scene Recognition using Places Database\", NIPS 2014.<\/li>\r\n \t<li style=\"list-style-type: none !important;\">[5] MathWorks. \"googlenet: Pretrained GoogLeNet convolutional neural network\". <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/googlenet.html\">https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/googlenet.html<\/a><\/li>\r\n \t<li style=\"list-style-type: none !important;\">[6] MathWorks. Pretrained Places365GoogLeNet convolutional neural network (code). <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/70987-deep-learning-toolboxtm-model-for-places365-googlenet-network\">https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/70987-deep-learning-toolboxtm-model-for-places365-googlenet-network<\/a><\/li>\r\n \t<li style=\"list-style-type: none !important;\">[7] MathWorks. \"Train Deep Learning Network to Classify New Images\". <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/examples\/train-deep-learning-network-to-classify-new-images.html\">https:\/\/www.mathworks.com\/help\/deeplearning\/examples\/train-deep-learning-network-to-classify-new-images.html<\/a><\/li>\r\n \t<li style=\"list-style-type: none !important;\">[8] T.-Y. Lin et al., \"Microsoft COCO: Common Objects in Context\", ECCV 2014.<\/li>\r\n \t<li style=\"list-style-type: none !important;\">[9] O. 
Marques, \"Data Augmentation for Image Classification Applications Using Deep Learning\", <a href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2019\/08\/22\/data-augmentation-for-image-classification-applications-using-deep-learning\/\">https:\/\/blogs.mathworks.com\/deep-learning\/2019\/08\/22\/data-augmentation-for-image-classification-applications-using-deep-learning\/<\/a><\/li>\r\n \t<li style=\"list-style-type: none !important;\">[10] A. Ng, \"Machine Learning Yearning\" <a href=\"https:\/\/www.deeplearning.ai\/machine-learning-yearning\/\">https:\/\/www.deeplearning.ai\/machine-learning-yearning\/<\/a><\/li>\r\n \t<li style=\"list-style-type: none !important;\">[11] Oge Marques (2019). Scene Classification Using Deep Learning (<a href=\"https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/73333-scene-classification-using-deep-learning\">https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/73333-scene-classification-using-deep-learning<\/a>), MATLAB Central File Exchange.<\/li>\r\n<\/ul>","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2019\/10\/Places365_val_00002197-150x150.jpg\" onError=\"this.style.display ='none';\" \/><\/div><p>This is a post from Oge Marques, PhD and Professor of Engineering and Computer Science at FAU, and of course [MathWorks blog] famous for his post on\u00a0image augmentation. He's back to talk about scene... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2019\/11\/25\/scene-classification-using-deep-learning\/\">read more >><\/a><\/p>","protected":false},"author":156,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[9],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/3141"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/users\/156"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/comments?post=3141"}],"version-history":[{"count":62,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/3141\/revisions"}],"predecessor-version":[{"id":6075,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/3141\/revisions\/6075"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/media?parent=3141"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/categories?post=3141"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/tags?post=3141"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}