{"id":4067,"date":"2018-10-22T12:00:23","date_gmt":"2018-10-22T17:00:23","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=4067"},"modified":"2018-10-23T00:58:17","modified_gmt":"2018-10-23T05:58:17","slug":"teaching-a-newcomer-about-teaching-calculus-to-a-deep-learner","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2018\/10\/22\/teaching-a-newcomer-about-teaching-calculus-to-a-deep-learner\/","title":{"rendered":"Teaching a Newcomer About Teaching Calculus to a Deep Learner"},"content":{"rendered":"<div class=\"content\"><!--introduction--><p>Two months ago I wrote a blog post about <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2018\/08\/06\/teaching-calculus-to-a-deep-learner\/\">Teaching Calculus to a Deep Learner<\/a>.  We wrote the code for that post in one afternoon in the MathWorks booth at the SIAM Annual Meeting. Earlier that day, during his invited talk, MIT Professor Gil Strang had spontaneously wondered if it would possible to teach calculus to a deep learning computer program.  None of us in the booth were experts in deep learning.<\/p><p>But MathWorks does have experts in deep learning.  When they saw my post, they did not hesitate to suggest some significant improvements. In particular, Conor Daly, in our MathWorks UK office, contributed the code for the following post.  Conor takes up the Gil's challenge and begins the process of learning about derivatives.<\/p><p>We are going to employ two different neural nets, a convolutional neural net, which is often used for images, and a recurrent neural net, which is often used for sounds and other signals.<\/p><p>Is a derivative more like an image or a sound?<\/p><!--\/introduction--><h3>Contents<\/h3><div><ul><li><a href=\"#ff53f0cc-defe-49db-b9b1-f0182ddee2da\">Functions and their derivatives<\/a><\/li><li><a href=\"#b3d22c44-0f84-4ab9-a092-308904fc8d97\">Parameters<\/a><\/li><li><a href=\"#f60a48e8-f4d0-4be3-af54-b7faa9403013\">Training Set<\/a><\/li><li><a href=\"#42d6d7a5-5e8d-42ed-a672-b8e3ab9262df\">Convolutional Neural Network (CNN)<\/a><\/li><li><a href=\"#896d2a10-6069-41b2-be0e-e93aaca62608\">Train CNN<\/a><\/li><li><a href=\"#e8373a55-c8cb-4b53-a121-70f0272cf924\">Plot Test Results<\/a><\/li><li><a href=\"#125f2287-950c-4233-a9fe-3c991a6f0c68\">Recurrent Neural Network (RNN)<\/a><\/li><li><a href=\"#ebecfbc2-9b0c-4ded-9c67-187bd1195079\">Train RNN<\/a><\/li><li><a href=\"#5634450b-19b6-479f-bc67-4076a4288e27\">Plot Test Results<\/a><\/li><li><a href=\"#b5817b90-d6fa-48e5-af9b-94f84466158e\">Convert data to CNN format<\/a><\/li><li><a href=\"#c3586ba9-b392-4b89-9706-c1580f57bed3\">Conclusions<\/a><\/li><\/ul><\/div><h4>Functions and their derivatives<a name=\"ff53f0cc-defe-49db-b9b1-f0182ddee2da\"><\/a><\/h4><p>Here are the functions and derivatives that we are going to consider.<\/p><pre class=\"codeinput\">F =  {@(x) x, @(x) x.^2, @(x) x.^3, @(x) x.^4, <span class=\"keyword\">...<\/span>\r\n      @(x) sin(pi*x), @(x) cos(pi*x) };\r\ndF = { @(x) ones(size(x)), @(x) 2*x, @(x) 3*x.^2, @(x) 4*x.^3, <span class=\"keyword\">...<\/span>\r\n       @(x) pi.*cos(pi.*x), @(x) -pi*sin(pi*x) };\r\n\r\nFchar = { <span class=\"string\">'x'<\/span>, <span class=\"string\">'x^2'<\/span>, <span class=\"string\">'x^3'<\/span>, <span class=\"string\">'x^4'<\/span>, <span class=\"string\">'sin(\\pi x)'<\/span>, <span class=\"string\">'cos(\\pi x)'<\/span> };\r\ndFchar = { <span class=\"string\">'1'<\/span>, <span class=\"string\">'2x'<\/span>, <span class=\"string\">'3x^2'<\/span>, <span 
class=\"string\">'4x^3'<\/span>, <span class=\"string\">'\\pi cos(\\pi x)'<\/span>, <span class=\"string\">'-\\pi sin(\\pi x)'<\/span> };\r\n<\/pre><h4>Parameters<a name=\"b3d22c44-0f84-4ab9-a092-308904fc8d97\"><\/a><\/h4><p>Set some parameters. First, the random number generator state.<\/p><pre class=\"codeinput\">rng(0)\r\n<\/pre><p>A function to generate uniform random variables on [-1, 1].<\/p><pre class=\"codeinput\">randu = @(m,n) (2*rand(m,n)-1);\r\n<\/pre><p>A function to generate random +1 or -1.<\/p><pre class=\"codeinput\">randsign = @() sign(randu(1,1));\r\n<\/pre><p>The number of functions.<\/p><pre class=\"codeinput\">m = length(F);\r\n<\/pre><p>The number of repetitions, i.e. independent observations.<\/p><pre class=\"codeinput\">n = 500;\r\n<\/pre><p>The number of samples in the interval.<\/p><pre class=\"codeinput\">nx = 100;\r\n<\/pre><p>The white noise level.<\/p><pre class=\"codeinput\">noise = .001;\r\n<\/pre><h4>Training Set<a name=\"f60a48e8-f4d0-4be3-af54-b7faa9403013\"><\/a><\/h4><p>Generate the training set predictors <tt>X<\/tt> and the responses <tt>T<\/tt>.<\/p><pre class=\"codeinput\">X = cell(n*m, 1);\r\nT = cell(n*m, 1);\r\n<span class=\"keyword\">for<\/span> j = 1:n\r\n    x = sort(randu(1, nx));\r\n    <span class=\"keyword\">for<\/span> i = 1:m\r\n        k = i + (j-1)*m;\r\n\r\n        <span class=\"comment\">% Predictors are x, a random vector from -1, 1, and +\/- f(x).<\/span>\r\n        sgn = randsign();\r\n        X{k} = [x; sgn*F{i}(x)+noise*randn(1,nx)];\r\n\r\n        <span class=\"comment\">% Responses are +\/- f'(x)<\/span>\r\n        T{k} = sgn*dF{i}(x)+noise*randn(1,nx);\r\n    <span class=\"keyword\">end<\/span>\r\n<span class=\"keyword\">end<\/span>\r\n<\/pre><p>Separate the training set from the test set.<\/p><pre class=\"codeinput\">idxTest = ismember( 1:n*m, randperm(n*m, n) );\r\nXTrain = X( ~idxTest );\r\nTTrain = T( ~idxTest );\r\nXTest = X( idxTest );\r\nTTest = T( idxTest );\r\n<\/pre><p>Choose some test indices to plot.<\/p><pre class=\"codeinput\">iTest = find( idxTest );\r\nidxM = mod( find(idxTest), m );\r\nidxToPlot = zeros(1, m);\r\n<span class=\"keyword\">for<\/span> k = 0:(m-1)\r\n    im = find( idxM == k );\r\n    <span class=\"keyword\">if<\/span> k == 0\r\n        idxToPlot(m) = im(1);\r\n    <span class=\"keyword\">else<\/span>\r\n        idxToPlot(k) = im(1);\r\n    <span class=\"keyword\">end<\/span>\r\n<span class=\"keyword\">end<\/span>\r\n<\/pre><h4>Convolutional Neural Network (CNN)<a name=\"42d6d7a5-5e8d-42ed-a672-b8e3ab9262df\"><\/a><\/h4><p>Re-format the data for CNN.<\/p><pre class=\"codeinput\">[XImgTrain, TImgTrain] = iConvertDataToImage(XTrain, TTrain);\r\n[XImgTest, TImgTest] = iConvertDataToImage(XTest, TTest);\r\n<\/pre><p>Here are the layers of the CNN architecture.  
Here are the layers of the CNN architecture. Notice that the `'ReLU'`, or "rectified linear unit," that I was so proud of in my previous post has been replaced by the more appropriate leaky ReLU, which does not completely cut off negative values.

```matlab
layers = [ ...
    imageInputLayer([1 nx 2], 'Normalization', 'none')
    convolution2dLayer([1 5], 128, 'Padding', 'same')
    batchNormalizationLayer()
    leakyReluLayer(0.5)
    convolution2dLayer([1 5], 128, 'Padding', 'same')
    batchNormalizationLayer()
    leakyReluLayer(0.5)
    convolution2dLayer([1 5], 1, 'Padding', 'same')
    regressionLayer() ];
```

Here are the training options for the CNN. The solver is `'sgdm'`, which stands for "stochastic gradient descent with momentum."

```matlab
options = trainingOptions( ...
    'sgdm', ...
    'MaxEpochs', 30, ...
    'Plots', 'training-progress', ...
    'MiniBatchSize', 200, ...
    'Verbose', false, ...
    'GradientThreshold', 1, ...
    'ValidationData', {XImgTest, TImgTest} );
```

#### Train CNN

Train the network. This requires a little over 3 minutes on my laptop. I don't have a GPU.

```matlab
convNet = trainNetwork(XImgTrain, TImgTrain, layers, options);
```

![Training progress for the CNN](http://blogs.mathworks.com/cleve/files/deepLearningDerivatives_01.png)
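Before looking at individual plots, a single number summarizes how well the CNN does on data it has never seen. This root-mean-square error over the whole test set is my addition, not part of Conor's original code.

```matlab
% RMSE of the CNN predictions over all 500 test examples.
PImg = convNet.predict(XImgTest);
rmseCNN = sqrt(mean((PImg(:) - TImgTest(:)).^2))
```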
#### Plot Test Results

Here are plots of randomly selected results. The limits on the y axes are set to the theoretical max and min. Three of the six plots have their signs flipped; those test examples happened to be generated with a random sign of -1 in front of f(x), so the target derivative is flipped as well.

```matlab
PImgTest = convNet.predict( XImgTest );
for k = 1:m
    subplot(3, 2, k);
    plot( XImgTest(1, :, 1, idxToPlot(k)), TImgTest(1, :, 1, idxToPlot(k)), '.' )
    hold on   % keep the targets visible while adding the predictions
    plot( XImgTest(1, :, 1, idxToPlot(k)), PImgTest(1, :, 1, idxToPlot(k)), 'o' )
    hold off
    title([ '(' Fchar{k} ')'' = ' dFchar{k} ] );
    switch k
        case {1,2}, set(gca,'ylim',[-2 2])
        case {3,4}, set(gca,'ylim',[-k k],'ytick',[-k 0 k])
        case {5,6}, set(gca,'ylim',[-pi pi],'ytick',[-pi 0 pi], ...
                'yticklabels',{'-\pi' '0' '\pi'})
    end
end
```

![CNN test results: targets (dots) and predictions (circles) for the six derivatives](http://blogs.mathworks.com/cleve/files/deepLearningDerivatives_02.png)

#### Recurrent Neural Network (RNN)

Here are the layers of the RNN architecture, including `'bilstm'`, which stands for "bidirectional long short-term memory."

```matlab
layers = [ ...
    sequenceInputLayer(2)
    bilstmLayer(128)
    dropoutLayer()
    bilstmLayer(128)
    fullyConnectedLayer(1)
    regressionLayer() ];
```
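The RNN consumes the cell arrays `XTrain` and `TTrain` directly; each observation is a sequence of `nx` time steps with two channels, and the bidirectional LSTM layers scan it both forwards and backwards, so the predicted derivative at each point can depend on the whole curve. A quick look at one observation (my addition) confirms the shapes.

```matlab
% Each predictor sequence has 2 channels (x and f(x)) and nx = 100 steps;
% each response sequence has 1 channel (f'(x)).
size(XTrain{1})   % expected: 2 x 100
size(TTrain{1})   % expected: 1 x 100
```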
Here are the RNN options. `'adam'` is not an acronym; it is an extension of stochastic gradient descent derived from adaptive moment estimation.

```matlab
options = trainingOptions( ...
    'adam', ...
    'MaxEpochs', 30, ...
    'Plots', 'training-progress', ...
    'MiniBatchSize', 200, ...
    'ValidationData', {XTest, TTest}, ...
    'Verbose', false, ...
    'GradientThreshold', 1);
```

#### Train RNN

Train the network. This takes almost 22 minutes on my machine. It makes me wish I had a GPU.

```matlab
recNet = trainNetwork(XTrain, TTrain, layers, options);
```

![Training progress for the RNN](http://blogs.mathworks.com/cleve/files/deepLearningDerivatives_03.png)

#### Plot Test Results

```matlab
PTest = recNet.predict( XTest );
for k = 1:m
    subplot(3, 2, k);
    plot( XTest{idxToPlot(k)}(1,:), TTest{idxToPlot(k)}(1,:), '.' )
    hold on   % overlay predictions on targets
    plot( XTest{idxToPlot(k)}(1,:), PTest{idxToPlot(k)}(1,:), 'o' )
    hold off
    title([ '(' Fchar{k} ')'' = ' dFchar{k} ] );
    switch k
        case {1,2}, set(gca,'ylim',[-2 2])
        case {3,4}, set(gca,'ylim',[-k k],'ytick',[-k 0 k])
        case {5,6}, set(gca,'ylim',[-pi pi],'ytick',[-pi 0 pi], ...
                'yticklabels',{'-\pi' '0' '\pi'})
    end
end
```

![RNN test results: targets (dots) and predictions (circles) for the six derivatives](http://blogs.mathworks.com/cleve/files/deepLearningDerivatives_04.png)
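The same RMSE summary, applied to the RNN, gives a head-to-head comparison with `rmseCNN` computed after the CNN was trained. Like that check, this block is my addition rather than part of the original post.

```matlab
% RMSE of the RNN predictions over all 500 test sequences.
PSeq = recNet.predict(XTest);
sqErrs = cellfun(@(p,t) (p - t).^2, PSeq, TTest, 'UniformOutput', false);
rmseRNN = sqrt(mean(cat(2, sqErrs{:})))
```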
#### Convert data to CNN format

```matlab
function [XImg, TImg] = iConvertDataToImage(X, T)
    % Re-format cell arrays of 2-by-nx (or 1-by-nx) observations into
    % 1-by-nx images, with channels in the third dimension and
    % observations in the fourth, as imageInputLayer expects.
    XImg = cat(4, X{:});
    XImg = permute(XImg, [3 2 1 4]);
    TImg = cat(4, T{:});
    TImg = permute(TImg, [3 2 1 4]);
end
```

#### Conclusions

I used to teach calculus. I have been critical of the way calculus is sometimes taught and more often learned. Here is a typical scenario.

_Instructor_: What is the derivative of $x^4$?

_Student_: $4x^3$.

_Instructor_: Why?

_Student_: You take the $4$, put it in front, then subtract one to get $3$, and put that in place of the $4$ . . .

I am afraid we're doing that here. The learner is just looking for patterns. There is no sense of _velocity_, _acceleration_, or _rate of change_. There is little chance of differentiating an expression that is not in the training set. There is no _product rule_, no _chain rule_, no _Fundamental Theorem of Calculus_.

In short, there is little _understanding_. But maybe that is a criticism of machine learning in general.

_Published with MATLAB® R2018b_