{"id":460,"date":"2018-07-20T18:00:51","date_gmt":"2018-07-20T18:00:51","guid":{"rendered":"https:\/\/blogs.mathworks.com\/deep-learning\/?p=460"},"modified":"2021-04-06T15:51:49","modified_gmt":"2021-04-06T19:51:49","slug":"deep-learning-in-action-part-2","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/deep-learning\/2018\/07\/20\/deep-learning-in-action-part-2\/","title":{"rendered":"Deep Learning in Action &#8211; part 2"},"content":{"rendered":"<span style=\"font-size: 14px\">\r\nHello everyone! It's <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/profile\/authors\/4758135-johanna-pingel\">Johanna<\/a>, and Steve has allowed me to take over the blog from time to time to talk about deep learning.\r\n<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\nI'm back for another episode of:\r\n<\/span>\r\n<p style=\"margin: 1% 13%;background-color: #86c5da;text-align: center\"><span style=\"color: #ffffff;font-size: 20px\">\u201cDeep Learning in Action:<\/span>\r\n<span style=\"color: #ffffff;font-size: 16px\">Cool projects created at MathWorks\u201d\r\n<\/span><\/p>\r\n&nbsp;\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\nThis series aims to give you insight into what we\u2019re working on at MathWorks: I\u2019ll show some demos, give you access to the code, and maybe even post a video or two.\r\n<\/span>\r\n<span style=\"font-size: 14px\">\r\nToday\u2019s demo is called <strong>\"Sentiment Analysis\"<\/strong> and it\u2019s the second article in a series of posts, including:<\/span>\r\n<h6><\/h6>\r\n&nbsp;\r\n<ul>\r\n \t<li>3D Point Cloud Segmentation using CNNs<\/li>\r\n \t<li>GPU Coder<\/li>\r\n \t<li>Age Detection<\/li>\r\n \t<li><a href=\"https:\/\/blogs.mathworks.com\/deep-learning\/\">Pictionary<\/a><\/li>\r\n<\/ul>\r\n<span style=\"font-size: 14px\">The developer of the demo is <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/profile\/authors\/3968667-heather-gorr\">Heather Gorr<\/a>, who's been doing lots of work with 
Deep Learning lately. She is a demo creator and presenter extraordinaire. She creates demos that we present to customers, but for those of you not able to attend one of these presentations, we'd like to show you one here. This example is based on the new capabilities in the Text Analytics Toolbox, proving that this blog isn't always about images. <\/span>\r\n\r\n<hr width=\"50%\/\" \/>\r\n\r\n<span style=\"color: #e67e22;font-size: 20px\"><strong>Demo: Sentiment Analysis<\/strong><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\nImagine typing in a term and instantly getting a sense of how that term is perceived. That's what we're going to do today.<\/span>\r\n\r\n<span style=\"font-size: 14px\">\r\nWhat better place to start when talking about sentiment than Twitter? Twitter is filled with positively and negatively charged statements, and companies are always looking for insight into how they are perceived without reading every tweet. Sentiment analysis has many practical applications, such as branding, political campaigning, and advertising.<\/span>\r\n<h6><\/h6>\r\n\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\nIn this example we'll analyze Twitter data to see whether the sentiment surrounding a specific term or phrase is generally positive or negative. \r\n<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\nMachine learning was (and still is) commonly used for sentiment analysis, but it typically analyzes individual words. Deep learning can be applied to complete sentences, which greatly increases accuracy.\r\n<\/span>\r\n\r\n<span style=\"font-size: 14px\">Here is an app that Heather built to quickly show sentiment analysis in MATLAB. 
It ties into live Twitter data, shows a word cloud of the popular words associated with a term, and displays the overall sentiment score: <\/span>\r\n\r\n<h6><\/h6>\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"962\" height=\"777\" class=\"alignnone size-full wp-image-469\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2018\/07\/SentimentApp.png\" alt=\"Live Twitter Analysis App\" \/>\r\n\r\n\r\n<h6><\/h6>\r\n<h6><\/h6>\r\n<h4><\/h4>\r\n\r\n<hr width=\"50%\/\" \/>\r\n\r\n<span style=\"font-size: 13px\"><em>Before we get into the demo, I have two shameless plugs:<\/em><\/span>\r\n\r\n<ul>\r\n\t<li><span style=\"font-size: 13px\"><em>If you are doing any sort of text analysis, you should really check out the new <a href=\"https:\/\/www.mathworks.com\/products\/text-analytics.html\">Text Analytics Toolbox<\/a>. I\u2019m not a text analytics expert, and I take for granted all the processing that needs to happen to turn natural language into something a computer can understand. In this example, there are functions that take all the hard work out of processing text, something that will save you hundreds of hours.<\/em><\/span><\/li>\r\n\t<li><span style=\"font-size: 13px\"><em>Secondly, I just discovered that if you want to plug into live Twitter feed data, we have a toolbox for that too! This is the <a href=\"https:\/\/www.mathworks.com\/products\/datafeed.html\">Datafeed Toolbox<\/a>. It lets you access live feeds like Twitter, as well as real-time market data from leading financial data providers. Any day traders out there? Might be worth considering this toolbox!<\/em><\/span><\/li>\r\n<\/ul>\r\n\r\n\r\n<hr width=\"50%\/\" \/>\r\n\r\n<h6><\/h6>\r\n<span style=\"color: #e67e22;font-size: 18px\"><strong>Training Data<\/strong><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\nThe original dataset contains 1.6 million pre-classified tweets; this demo uses a subset of 100,000 tweets. 
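<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\nIf you want to follow along, a labeled CSV like this can be pulled into MATLAB with readtable. This is only an illustrative sketch: the file name and column names below are assumptions, so adjust them to match your copy of the data.\r\n<\/span>\r\n<h6><\/h6>\r\n<pre>% Hypothetical file and column names -- adjust to match your download\r\ndata = readtable(<span style=\"color: #a020f0\">'tweets.csv'<\/span>,<span style=\"color: #a020f0\">'TextType','string'<\/span>);\r\ntweets = data.SentimentText;            % raw tweet text\r\nlabels = categorical(data.Sentiment);   % pre-classified labels\r\n<\/pre>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\n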
The original dataset can be found at <a href=\"http:\/\/thinknook.com\/twitter-sentiment-analysis-training-corpus-dataset-2012-09-22\/\">http:\/\/thinknook.com\/twitter-sentiment-analysis-training-corpus-dataset-2012-09-22\/<\/a>.\r\n<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">Here is a sampling of training tweets: <\/span>\r\n<h6><\/h6>\r\n<table style=\"border: 2px solid black\">\r\n<tbody>\r\n<tr>\r\n<td style=\"border: 1px solid black;padding: 5px\"><strong>Tweet<\/strong><\/td>\r\n<td style=\"border: 1px solid black;padding: 5px\"><strong>Sentiment<\/strong><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"border: 1px solid black;padding: 5px\">\"I LOVE @Health4UandPets u guys r the best!! \"<\/td>\r\n<td style=\"border: 1px solid black;padding: 5px\">Positive<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"border: 1px solid black;padding: 5px\">\"@nicolerichie: your picture is very sweet \"<\/td>\r\n<td style=\"border: 1px solid black;padding: 5px\">Positive<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"border: 1px solid black;padding: 5px\">\"Dancing around the room in Pjs, jamming to my ipod. Getting dizzy. Well twitter, you asked! \"<\/td>\r\n<td style=\"border: 1px solid black;padding: 5px\">Positive<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"border: 1px solid black;padding: 5px\">\"Back to work! \"<\/td>\r\n<td style=\"border: 1px solid black;padding: 5px\">Negative<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"border: 1px solid black;padding: 5px\">\"tired but can't sleep \"<\/td>\r\n<td style=\"border: 1px solid black;padding: 5px\">Negative<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"border: 1px solid black;padding: 5px\">\"Just has the worst presentation ever! \"<\/td>\r\n<td style=\"border: 1px solid black;padding: 5px\">Negative<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"border: 1px solid black;padding: 5px\">\"So it snowed last night. Not enough to call in for a snow day at work though. 
\"<\/td>\r\n<td style=\"border: 1px solid black;padding: 5px\">Negative<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\nHow were these classified as positive or negative? Great question! After all, we could debate whether \u201cBack to work!\u201d could also be a positive tweet.\r\n<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\nIn fact, even in the link to the training data, the author suggests: <\/span>\r\n<blockquote>\u202610% of sentiment classification by humans can be debated\u2026<\/blockquote>\r\n\r\n<span style=\"font-size: 14px\">\r\nThe categories of training data can be determined through manual labeling, using emojis to label sentiment, using a machine learning or deep learning model to determine sentiment, or a combination of these. \r\n<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\nI'd recommend verifying that you agree with the dataset categories, depending on what you want the outcome to be. 
For example: move \"Back to work!\" into the positive category if you think this is a positive statement; it's up to you to determine how your model is going to respond.\r\n<\/span>\r\n\r\n<h6><\/h6>\r\n<span style=\"color: #e67e22;font-size: 18px\"><strong>Data Prep<\/strong><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\nWe first clean the data by removing punctuation and URLs:\r\n<\/span>\r\n<h6><\/h6>\r\n<pre> % Preprocess tweets\r\n tweets = lower(tweets);\r\n tweets = eraseURLs(tweets);\r\n tweets = removeHashtags(tweets);\r\n tweets = erasePunctuation(tweets);\r\n t = tokenizedDocument(tweets);<\/pre>\r\n<span style=\"font-size: 14px\">\r\nWe can also remove \u201cstop words\u201d such as \u201cthe\u201d and \u201cand\u201d, which do not add information that helps the algorithm learn.\r\n<\/span>\r\n<h6><\/h6>\r\n<pre>% Edit stop words list to take out words that could carry important meaning\r\n% for sentiment of the tweet\r\nnewStopWords = stopWords;\r\nnotStopWords = [<span style=\"color: #a020f0\">\"are\", \"aren't\", \"arent\", \"can\", \"can't\", \"cant\", ...\r\n\"cannot\", \"could\", \"couldn't\", \"did\", \"didn't\", \"didnt\", \"do\", \"does\",...\r\n\"doesn't\", \"doesnt\", \"don't\", \"dont\", \"is\", \"isn't\", \"isnt\", \"no\", \"not\",...\r\n\"was\", \"wasn't\", \"wasnt\", \"with\", \"without\", \"won't\", \"would\", \"wouldn't\"<\/span>];\r\nnewStopWords(ismember(newStopWords,notStopWords)) = [];\r\nt = removeWords(t,newStopWords);\r\n\r\nt = removeWords(t,{<span style=\"color: #a020f0\">'rt','retweet','amp','http','https',...\r\n'stock','stocks','inc'<\/span>});\r\nt = removeShortWords(t,1);\r\n<\/pre>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\nAnd then we perform \u201cword embedding,\u201d which has been explained to me as turning words into vectors that can be used for training. More formally, a word embedding maps each word to a vector based on unsupervised learning of word co-occurrences in text. 
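<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\nTo make \u201cwords into vectors\u201d a little more concrete: once an embedding emb has been trained (the trainWordEmbedding call is shown below), each word maps to a numeric vector, and nearby vectors tend to correspond to words used in similar contexts. A quick, illustrative sketch (the word 'good' is just an example):\r\n<\/span>\r\n<h6><\/h6>\r\n<pre>vec = word2vec(emb,<span style=\"color: #a020f0\">'good'<\/span>);   % 1-by-Dimension numeric vector\r\nnearby = vec2word(emb,vec,5)    % the 5 words closest to that vector\r\n<\/pre>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\n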
Word embeddings are typically unsupervised models. They can be trained in MATLAB, or you can use one of several pre-trained word vector models, with varying vocabulary sizes and dimensions, built from sources like Wikipedia and Twitter.\r\n<\/span>\r\n\r\n<span style=\"font-size: 14px\">\r\nAnother shameless plug for the Text Analytics Toolbox, which makes this step very simple.\r\n<\/span>\r\n<h6><\/h6>\r\n<pre>embeddingDimension = 100;\r\nembeddingEpochs = 50;\r\nemb = trainWordEmbedding(tweetsTrainDocuments, ...\r\n<span style=\"color: #a020f0\">'Dimension'<\/span>,embeddingDimension, ...\r\n<span style=\"color: #a020f0\">'NumEpochs'<\/span>,embeddingEpochs, ...\r\n<span style=\"color: #a020f0\">'Verbose'<\/span>,0)\r\n<\/pre>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\nWe can then set up the structure of the network.<\/span>\r\n\r\n<span style=\"font-size: 14px\">\r\nIn this example we'll use a long short-term memory (LSTM) network, a recurrent neural network (RNN) that can learn dependencies over time. At a high level, LSTMs are good for classifying sequence and time-series data. 
For text analytics, this means that an LSTM will take into account not only the words in a sentence, but also the structure and combination of words.<\/span>\r\n<h6><\/h6>\r\n&nbsp;\r\n\r\n<span style=\"font-size: 14px\">The network itself is very simple:<\/span>\r\n\r\n&nbsp;\r\n<h6><\/h6>\r\n<pre>layers = [ sequenceInputLayer(inputSize)\r\nlstmLayer(outputSize,<span style=\"color: #a020f0\">'OutputMode','last'<\/span>)\r\nfullyConnectedLayer(numClasses)\r\nsoftmaxLayer\r\nclassificationLayer ]\r\n<\/pre>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\nWhen run on a GPU, it trains very quickly, taking just 6 minutes for 30 epochs (complete passes through the data).\r\n<\/span>\r\n\r\n<h5><\/h5>\r\n\r\n<div id=\"attachment_471\" style=\"width: 1303px\" class=\"wp-caption alignnone\"><img aria-describedby=\"caption-attachment-471\" decoding=\"async\" loading=\"lazy\" class=\"wp-image-471 size-full\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2018\/07\/trainingProgress.png\" alt=\"Training Plot Progress tool in MATLAB\" width=\"1293\" height=\"772\" \/><p id=\"caption-attachment-471\" class=\"wp-caption-text\"><span style=\"font-size: 12px\"> Here is our ever-famous training plot. 
Just in case you haven\u2019t tried this: in trainingOptions, set 'Plots','training-progress'<\/span><\/p><\/div>\r\n\r\n<h6><\/h6>\r\n<span style=\"color: #e67e22;font-size: 18px\"><strong>Test the Model<\/strong><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\n\r\nOnce we've trained the model, we can see how well it predicts on new data.\r\n<h6><\/h6>\r\n<\/span>\r\n<pre>\r\n[YPred,scores] = classify(net,XTest);\r\naccuracy = sum(YPred == YTest)\/numel(YPred)\r\n<\/pre>\r\n\r\n<span style=\"font-size: 12px;color: #7c7f84\">\r\naccuracy = 0.6606\r\n<\/span>\r\n\r\n<pre>\r\nheatmap(table(YPred,YTest,'VariableNames',{'Predicted','Actual'}),...\r\n    'Predicted','Actual');\r\n<\/pre>\r\n\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1053\" height=\"652\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2018\/07\/heatmap.png\" alt=\"\" class=\"alignnone size-full wp-image-473\" \/>\r\n\r\n<span style=\"font-size: 14px\">\r\nI spoke with Heather about the results of this model: there are lots of ways to interpret the results, and you can spend lots of time improving these models. For example, these are results on very generic tweets; you could make the training data more application-specific if you wanted to bias the model toward particular kinds of results. \r\n\r\nAlso, using the Stanford word embedding increases the model accuracy to 75%.\r\n<\/span>\r\n<h6><\/h6>\r\n\r\n<span style=\"font-size: 14px\">\r\nBefore you judge the results, it\u2019s often helpful to try the model out on the kind of data you want it to succeed on.  \r\nWe can take a few sample tweets, or make our own! Here are a few that I decided to try. \r\n<\/span>\r\n\r\n<h6><\/h6>\r\n\r\n<pre>\r\ntw = [<span style=\"color: #a020f0\">\"I'm really sad today. I was sad yesterday too\"\r\n    \"This is super awesome! 
The best!\"\r\n    \"Everyone should be buying this!\"\r\n    \"Everything better with bacon\"\r\n    \"There is no more bacon.\"<\/span>];\r\ns = preprocessTweets(tw);\r\nC = doc2sequence(emb,s);\r\n[pred,score] = classify(net,C)\r\n<\/pre>\r\n<h6><\/h6>\r\n<span style=\"font-size: 12px\">\r\n\r\n<table>\r\n<tr>\r\n<td style=\"padding: 5px\"> pred = 5x1 categorical array<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding-left: 20px\">neutral <\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding-left: 20px\">  positive<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding-left: 20px\">  positive <\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding-left: 20px\">     positive <\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding-left: 20px\">   negative <\/td>\r\n<\/tr>\r\n<\/table>\r\n\r\n<table>\r\n<tr>\r\n<td style=\"padding: 5px\"> score = 5\u00d73 single matrix<\/td>\r\n<\/tr>\r\n<\/table>\r\n<table>\r\n<tr>\r\n<td style=\"padding-left: 16px\">0.4123 <\/td>\r\n<td style=\"padding-left: 16px\">0.5021 <\/td>\r\n<td style=\"padding-left: 16px\">0.0856 <\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding-left: 16px\">0.0206 <\/td>\r\n<td style=\"padding-left: 16px\">0.0370 <\/td>\r\n<td style=\"padding-left: 16px\">0.9424 <\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding-left: 16px\">0.2423 <\/td>\r\n<td style=\"padding-left: 16px\">0.0526 <\/td>\r\n<td style=\"padding-left: 16px\">0.7052 <\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding-left: 16px\">0.1280 <\/td>\r\n<td style=\"padding-left: 16px\">0.0180 <\/td>\r\n<td style=\"padding-left: 16px\">0.8540 <\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding-left: 16px\">0.9696 <\/td>\r\n<td style=\"padding-left: 16px\">0.0278 <\/td>\r\n<td style=\"padding-left: 16px\">0.0027 <\/td>\r\n<\/tr>\r\n<\/table>\r\n\r\n\r\n<\/span>\r\n\r\n<h6><\/h6>\r\n<h6><\/h6>\r\n\r\n<pre>\r\ntotalScore = calculateScore(score)\r\n<\/pre>\r\n<table>\r\n<tr>\r\n<td style=\"padding: 5px\"> totalScore = 5\u00d71 single column vector<\/td>\r\n<\/tr>\r\n<tr>\r\n<td 
style=\"padding-left: 16px\">-0.0043<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding-left: 20px\">  0.9259<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding-left: 20px\">   0.8949<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding-left: 20px\">  0.9641<\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding-left: 20px\"> 0.9444<\/td>\r\n<\/tr>\r\n<\/table>\r\n\r\n\r\n<h6><\/h6>\r\n<span style=\"color: #e67e22;font-size: 18px\"><strong>Q&amp;A with Heather<\/strong><\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\n1. When choosing training data, did you pick random samples of all tweets, or did you capture a certain category of tweets, for example political, scientific, celebrities, etc.?\r\n<\/span>\r\n<h6><\/h6>\r\n\r\n<table>\r\n<tr>\r\n<td style=\"border-left: 2px solid #b245ad;padding: 10px\"><strong>If I'm working with finance data, I pick stock price discussions to make sure we're getting the right content and context. But it's also good practice to include some generic text as well, so that your model has samples of \"normal\" language too. \r\n<h6><\/h6>\r\nIt's just like any other example of deep learning - if you're really intent on identifying cats and dogs in pictures, make sure you have lots of pictures of cats and dogs. You may also want to throw in some images that could easily confuse the model, so that it learns those differences.<\/strong><\/td>\r\n<\/tr>\r\n<\/table>\r\n\r\n\r\n\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\n2. Is this code available?\r\n<\/span>\r\n<h6><\/h6>\r\n\r\n\r\n\r\n<table>\r\n<tr>\r\n<td style=\"border-left: 2px solid #b245ad;padding: 10px\"><strong>I will have the demo on <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/profile\/authors\/3968667-heather-gorr\">FileExchange<\/a> (in about a week or two), but the live Twitter portion has been disabled, since you'll need the Datafeed Toolbox and your own Twitter credentials for that to work. 
If you're serious about that part, you can look at the demos in the <a href=\"https:\/\/www.mathworks.com\/help\/datafeed\/twitter.html\">Datafeed Toolbox<\/a>, which will walk you through those steps.<\/strong><\/td>\r\n<\/tr>\r\n<\/table>\r\n\r\n\r\n\r\n\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\n3. Is the calculateScore function something you created? How do you determine the score?\r\n<\/span><h6><\/h6>\r\n\r\n<table>\r\n<tr>\r\n<td style=\"border-left: 2px solid #b245ad;padding: 10px\"><strong>I made it up! There are lots of more sophisticated ways of scoring, but since the model brings back the probabilities of the 3 classes, I set neutral to zero and normalized the positive and negative probabilities to create a final score.<\/strong><\/td>\r\n<\/tr>\r\n<\/table>\r\n\r\n\r\n<span style=\"font-size: 14px\">\r\n<h6><\/h6>\r\n4. Why text analytics?\r\n<\/span>\r\n<h6><\/h6>\r\n\r\n\r\n\r\n<table>\r\n<tr>\r\n<td style=\"border-left: 2px solid #b245ad;padding: 10px\"><strong>Text analytics is a really interesting and rich research area, and there's still new research coming out. Preprocessing text offers a completely different set of challenges than preprocessing images. Numbers are more predictable, and in some ways easier. How you preprocess the text can have a huge impact on the results of the model.<\/strong><\/td>\r\n<\/tr>\r\n<\/table>\r\n\r\n<span style=\"font-size: 14px;color: #b245ad\">\r\n\r\n<\/span>\r\n<h6><\/h6>\r\n<span style=\"font-size: 14px\">\r\n5. Can you really predict stocks with this information?\r\n<\/span>\r\n\r\n<h6><\/h6>\r\n\r\n\r\n<table>\r\n<tr>\r\n<td style=\"border-left: 2px solid #b245ad;padding: 10px\"><strong>Tweets really do track with finance data, and can give you insight into certain stocks. Bloomberg provides social sentiment analytics, and has written a few articles about the topic. So you could just use their score, but you have no control over the model and the data. 
Plus we already mentioned that preprocessing can have a huge effect on the outcome of the score. There's a lot more insight if you are actually doing it yourself.  <\/strong><\/td>\r\n<\/tr>\r\n<\/table>\r\n\r\n\r\n<h6><\/h6>\r\n\r\n<span style=\"font-size: 14px\">\r\nThanks to Heather for the demo, and taking the time to walk me through it! I hope you enjoyed it as well. Anything else you'd like to ask Heather? What type of demo would you like to see next? Leave a comment below!\r\n<\/span>\r\n<h6><\/h6>\r\n<h6><\/h6>\r\n","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2018\/07\/SentimentApp.png\" onError=\"this.style.display ='none';\" \/><\/div><p>\r\nHello Everyone! It's Johanna, and Steve has allowed me to take over the blog from time to time to talk about deep learning.\r\n\r\n\r\n\r\nI'm back for another episode of:\r\n\r\n\u201cDeep Learning in... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2018\/07\/20\/deep-learning-in-action-part-2\/\">read more >><\/a><\/p>","protected":false},"author":156,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[9],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/460"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/users\/156"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/comments?post=460"}],"version-history":[{"count":2,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/460\/revisions"}],"predecessor-version":[{"id":651,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/460\/revisions\/651"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/media?parent=460"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/categories?post=460"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/tags?post=460"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}