{"id":2225,"date":"2017-01-05T12:00:52","date_gmt":"2017-01-05T17:00:52","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=2225"},"modified":"2017-05-16T07:34:45","modified_gmt":"2017-05-16T12:34:45","slug":"fitting-and-extrapolating-us-census-data","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2017\/01\/05\/fitting-and-extrapolating-us-census-data\/","title":{"rendered":"Fitting and Extrapolating US Census Data"},"content":{"rendered":"<div class=\"content\"><!--introduction--><p>A headline in the <a href=\"https:\/\/www.nytimes.com\/2016\/12\/22\/us\/usa-population-growth.html\"><i>New York Times<\/i><\/a> at the end of 2016 said  \"Growth of U.S. Population Is at Slowest Pace Since 1937\". This prompted me to revisit an old chestnut about fitting and extrapolating census data.  In the process I have added a couple of nonlinear fits, namely the logistic curve and the double exponential Gompertz model.<\/p><!--\/introduction--><h3>Contents<\/h3><div><ul><li><a href=\"#df897051-32e8-44d1-95b3-de44f9179c9e\">Oldie, But Goodie<\/a><\/li><li><a href=\"#f45a43de-8e59-4639-bd31-3c4b92067929\">Data<\/a><\/li><li><a href=\"#006e057d-7eac-49f3-bf87-8075091be676\">App<\/a><\/li><li><a href=\"#daecab2f-864f-4b1f-a3f3-12700f8794f5\">Models<\/a><\/li><li><a href=\"#6fb28651-8b67-43f1-92a7-12b2c77b1082\">2016<\/a><\/li><li><a href=\"#1955c6bf-e409-4e6c-abed-05c7fc0968f2\">Polynomials<\/a><\/li><li><a href=\"#4504a66d-7c19-4d6c-918e-5f4479dbae67\">Splines<\/a><\/li><li><a href=\"#527effe2-ef70-4433-b01f-158222da9742\">Three Exponentials<\/a><\/li><li><a href=\"#9fdaaee4-13cb-455d-bbd2-28556dc454bf\">Exponential<\/a><\/li><li><a href=\"#f32a6bc8-2463-4da8-b50f-1b0787f7bb0d\">Logistic<\/a><\/li><li><a href=\"#81531f7c-8adc-4bcc-bfcf-c879a047520f\">Gompertz<\/a><\/li><li><a href=\"#6190dc42-e4a0-450a-a202-9b4eae1225f8\">Results<\/a><\/li><li><a href=\"#15d6695d-e30d-4f4a-a44d-85c8d61ed6a6\">Software<\/a><\/li><li><a 
href=\"#2c422679-6357-4031-a43c-071ffc57ceff\">Homework<\/a><\/li><\/ul><\/div><h4>Oldie, But Goodie<a name=\"df897051-32e8-44d1-95b3-de44f9179c9e\"><\/a><\/h4><p>This experiment is older than MATLAB.  It started as an exercise in <i>Computer Methods for Mathematical Computations<\/i>, by Forsythe, Malcolm and Moler, published by Prentice-Hall 40 years ago. We were using Fortran back then. The data set has been updated every ten years since. Today, MATLAB graphics makes it easier to vary the parameters and see the results, but the underlying mathematical principles are unchanged:<\/p><div><ul><li>Using polynomials of even modest degree to predict   the future by extrapolating data is a risky business.<\/li><\/ul><\/div><p>Recall that the famous computational scientist Yogi Berra said<\/p><div><ul><li>\"It's tough to make predictions, especially about the future.\"<\/li><\/ul><\/div><h4>Data<a name=\"f45a43de-8e59-4639-bd31-3c4b92067929\"><\/a><\/h4><p>The data are from the decennial census of the United States for the years 1900 to 2010.  The population is in millions.<\/p><pre>   1900    75.995\r\n   1910    91.972\r\n   1920   105.711\r\n   1930   123.203\r\n   1940   131.669\r\n   1950   150.697\r\n   1960   179.323\r\n   1970   203.212\r\n   1980   226.505\r\n   1990   249.633\r\n   2000   281.422\r\n   2010   308.746<\/pre><p>The task is to extrapolate beyond 2010.  Let's see how an extrapolation of just six years to 2016 matches the Census Bureau announcement. Before you read any further, pause and make your own guess.<\/p><h4>App<a name=\"006e057d-7eac-49f3-bf87-8075091be676\"><\/a><\/h4><p>Here is the opening screen of the January 2017 edition of my <tt>censusapp<\/tt>, which is included in <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/59085-cleve-laboratory\">Cleve's Laboratory<\/a>. The plus and minus buttons change the extrapolation year in the title. 
If you go beyond 2030, the plot zooms out.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/fig0.png\" alt=\"\"> <\/p><h4>Models<a name=\"daecab2f-864f-4b1f-a3f3-12700f8794f5\"><\/a><\/h4><p>The pull-down menu offers these models.  Forty years ago we had only polynomials.<\/p><pre class=\"codeinput\">   models = {<span class=\"string\">'census data'<\/span>,<span class=\"string\">'polynomial'<\/span>,<span class=\"string\">'pchip'<\/span>,<span class=\"string\">'spline'<\/span>, <span class=\"keyword\">...<\/span>\r\n             <span class=\"string\">'exponential'<\/span>,<span class=\"string\">'logistic'<\/span>,<span class=\"string\">'gompertz'<\/span>}'\r\n<\/pre><pre class=\"codeoutput\">models =\r\n  7&times;1 cell array\r\n    'census data'\r\n    'polynomial'\r\n    'pchip'\r\n    'spline'\r\n    'exponential'\r\n    'logistic'\r\n    'gompertz'\r\n<\/pre><h4>2016<a name=\"6fb28651-8b67-43f1-92a7-12b2c77b1082\"><\/a><\/h4><p>The Census Bureau <a href=\"http:\/\/census.gov\/newsroom\/press-releases\/2016\/cb16-214.html\">news release<\/a> that prompted the story in <i>The Times<\/i> said the official population in 2016 was 323.1 million.  That was on Census Day, which is April 1 of each year. The Census Bureau also provides a dynamic <a href=\"http:\/\/www.census.gov\/popclock\/?intcmp=w_200x402\">Population Clock<\/a> that operates continuously.  But let's stick with the 323.1 number.<\/p><h4>Polynomials<a name=\"1955c6bf-e409-4e6c-abed-05c7fc0968f2\"><\/a><\/h4><p>Polynomials like to wiggle.  Constrained to match data in a particular interval, they go crazy outside that interval.  Today, there are 12 data points.  The app lets you vary the polynomial degree between 0 and 11. Polynomials with degree less than 11 approximate the data in a least squares sense.  The polynomial of degree 11 interpolates the data exactly.  
As the degree is increased, the approximation of the data becomes more accurate, but the behavior beyond 2010 (or before 1900) becomes more violent.  Here are degrees 2, 7, 9, and 11, superimposed on one plot.<\/p><p>The quadratic fit is the best behaved.  When evaluated at year 2016, it misses the target by six million.  Of course, there is no reason to believe that the US population grows like a second degree polynomial in time.<\/p><p>The interpolating polynomial of degree 11 tries to escape even before it gets to  2010, and it goes negative late in 2014.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/fig1.png\" alt=\"\"> <\/p><h4>Splines<a name=\"4504a66d-7c19-4d6c-918e-5f4479dbae67\"><\/a><\/h4><p>MATLAB has two piecewise cubic interpolating polynomials. The classic <tt>spline<\/tt> is smooth because it has two continuous derivatives. Its competitor <tt>pchip<\/tt> sacrifices a continuous second derivative to preserve shape and avoid overshoots.  I blogged about <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2012\/07\/16\/splines-and-pchips\">splines and pchips<\/a> a few years ago.<\/p><p>Neither is intended for extrapolation, but we will do it anyway.  Their behavior beyond the interval is determined by their end conditions. The classic <tt>spline<\/tt> uses the so-called <i>not-a-knot<\/i> condition. It is actually a single cubic in the last two subintervals.  That cubic is also used for extrapolation beyond the endpoint.  <tt>pchip<\/tt> uses just the last three data points to create a different shape-preserving cubic for use in the last subinterval and beyond.<\/p><p>Let's zoom in on the two.  Both are predicting a decreasing rate of growth beyond 2010, just as the Census Bureau is observing.  
<tt>pchip<\/tt> gets lucky and comes within 0.2 million of the announcement for 2016.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/fig2.png\" alt=\"\"> <\/p><h4>Three Exponentials<a name=\"527effe2-ef70-4433-b01f-158222da9742\"><\/a><\/h4><p>As I said, there is no good reason to model population growth by a polynomial, piecewise or not.  But because the rate of growth can be expected to be proportional to the size of the population, there is good reason to use an exponential.<\/p><p>$$ p(t) \\approx a e^{bt} $$<\/p><p>There have been many proposals for ways to modify this model to avoid its unbounded growth.  I have just added two of these to <tt>censusapp<\/tt>. One is the logistic model.<\/p><p>$$ p(t) \\approx \\frac{a}{1+b e^{-ct}} $$<\/p><p>And the other is the Gompertz double exponential model, named after Benjamin Gompertz, a 19th century self-educated British mathematician and astronomer.<\/p><p>$$ p(t) \\approx a e^{-b e^{-ct}} $$<\/p><p>In both of these models the growth is limited because the approximating term approaches $a$ as $t$ approaches infinity.<\/p><p>In all three of the exponential models, the parameters $a$, $b$, and possibly $c$, appear nonlinearly. In principle, I could use <tt>lsqcurvefit<\/tt> to search in two or three dimensions for a least squares fit to the census data.  But I have an alternative.  By taking one or two logarithms, I have a <i>separable least squares<\/i> model where at most one parameter, $a$, appears nonlinearly.<\/p><h4>Exponential<a name=\"9fdaaee4-13cb-455d-bbd2-28556dc454bf\"><\/a><\/h4><p>For the exponential model, take one logarithm.<\/p><p>$$ \\log{p} \\approx \\log{a} + bt $$<\/p><p>Fit the logarithm of the data by a straight line and then exponentiate the result.  
No search is required.<\/p><h4>Logistic<a name=\"f32a6bc8-2463-4da8-b50f-1b0787f7bb0d\"><\/a><\/h4><p>For the logistic model, take one logarithm.<\/p><p>$$ \\log{(a\/p-1)} \\approx \\log{b} - ct $$<\/p><p>For any value of $a$, the parameters $\\log{b}$ and $c$ appear linearly and can be found without a search.  So use a one-dimensional minimizer to search for $a$.  I could use <tt>fminbnd<\/tt>, or its textbook version, <tt>fmintx<\/tt>, from <i>Numerical Methods with MATLAB<\/i>.<\/p><h4>Gompertz<a name=\"81531f7c-8adc-4bcc-bfcf-c879a047520f\"><\/a><\/h4><p>For the Gompertz model, take two logarithms.<\/p><p>$$ \\log{\\log{a\/p}} \\approx \\log{b} - ct $$<\/p><p>Again, do a one-dimensional search for the minimizing $a$, solving for $\\log{b}$ and $c$ at each step.<\/p><h4>Results<a name=\"6190dc42-e4a0-450a-a202-9b4eae1225f8\"><\/a><\/h4><p>Here are the three resulting fits, extrapolated over more than 200 years to the year 2250.  The pure exponential model reaches 5 billion people by that time, and is growing ever faster.  I think that's unreasonable.<\/p><p>The value of $a$ in the Gompertz fit turns out to be 4309.6, so the population will be capped at 4.3 billion.  But it has only reached 1.5 billion two hundred years from now.  Again unlikely.<\/p><p>The value of $a$ in the logistic fit turns out to be 756.4, so the predicted US population will slightly more than double over the next two hundred years.  Despite the Census Bureau's observation that our rate of growth has slowed recently, we are not yet even half the way to our ultimate population limit.  
I'll let you be the judge of that prediction.<\/p><p><img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/fig3.png\" alt=\"\"> <\/p><h4>Software<a name=\"15d6695d-e30d-4f4a-a44d-85c8d61ed6a6\"><\/a><\/h4><p>I have recently updated <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/59085-cleve-laboratory\">Cleve's Laboratory<\/a> in the MATLAB Central file exchange. One of the updates changes the name of <tt>censusgui<\/tt> to <tt>censusapp<\/tt> and adds the two exponential models.  If you do install this new version of the Laboratory, you can answer the following question.<\/p><h4>Homework<a name=\"2c422679-6357-4031-a43c-071ffc57ceff\"><\/a><\/h4><p>The fit generated by <tt>pchip<\/tt> defines a cubic for use beyond the year 2000 that predicts the population will reach a maximum in the not too distant future and decrease after that.  What is that maximum and when does it happen?<\/p><script language=\"JavaScript\"> <!-- \r\n    function grabCode_e7d64edc40e34ac2bc76929ae8594791() {\r\n        \/\/ Remember the title so we can use it in the new page\r\n        title = document.title;\r\n\r\n        \/\/ Break up these strings so that their presence\r\n        \/\/ in the Javascript doesn't mess up the search for\r\n        \/\/ the MATLAB code.\r\n        t1='e7d64edc40e34ac2bc76929ae8594791 ' + '##### ' + 'SOURCE BEGIN' + ' #####';\r\n        t2='##### ' + 'SOURCE END' + ' #####' + ' e7d64edc40e34ac2bc76929ae8594791';\r\n    \r\n        b=document.getElementsByTagName('body')[0];\r\n        i1=b.innerHTML.indexOf(t1)+t1.length;\r\n        i2=b.innerHTML.indexOf(t2);\r\n \r\n        code_string = b.innerHTML.substring(i1, i2);\r\n        code_string = code_string.replace(\/REPLACE_WITH_DASH_DASH\/g,'--');\r\n\r\n        \/\/ Use \/x3C\/g instead of the less-than character to avoid errors \r\n        \/\/ in the XML parser.\r\n        \/\/ Use '\\x26#60;' instead of '<' so that the XML parser\r\n      
  \/\/ doesn't go ahead and substitute the less-than character. \r\n        code_string = code_string.replace(\/\\x3C\/g, '\\x26#60;');\r\n\r\n        copyright = 'Copyright 2017 The MathWorks, Inc.';\r\n\r\n        w = window.open();\r\n        d = w.document;\r\n        d.write('<pre>\\n');\r\n        d.write(code_string);\r\n\r\n        \/\/ Add copyright line at the bottom if specified.\r\n        if (copyright.length > 0) {\r\n            d.writeln('');\r\n            d.writeln('%%');\r\n            if (copyright.length > 0) {\r\n                d.writeln('% _' + copyright + '_');\r\n            }\r\n        }\r\n\r\n        d.write('<\/pre>\\n');\r\n\r\n        d.title = title + ' (MATLAB code)';\r\n        d.close();\r\n    }   \r\n     --> <\/script><p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\"><br><a href=\"javascript:grabCode_e7d64edc40e34ac2bc76929ae8594791()\"><span style=\"font-size: x-small;        font-style: italic;\">Get \r\n      the MATLAB code <noscript>(requires JavaScript)<\/noscript><\/span><\/a><br><br>\r\n      Published with MATLAB&reg; R2017a<br><\/p><\/div><!--\r\ne7d64edc40e34ac2bc76929ae8594791 ##### SOURCE BEGIN #####\r\n%% Fitting and Extrapolating US Census Data\r\n% A headline in the <http:\/\/nyti.ms\/2hcZUoE _New York Times_> at the end\r\n% of 2016 said  \"Growth of U.S. Population Is at Slowest Pace Since 1937\".\r\n% This prompted me to revisit an old chestnut about fitting and\r\n% extrapolating census data.  In the process I have added a couple of\r\n% nonlinear fits, namely the logistic curve and the double exponential\r\n% Gompertz model.\r\n\r\n%% Oldie, But Goodie\r\n% This experiment is older than MATLAB.  
It started as an exercise in\r\n% _Computer Methods for Mathematical Computations_, by Forsythe,\r\n% Malcolm and Moler, published by Prentice-Hall 40 years ago.\r\n% We were using Fortran back then.\r\n% The data set has been updated every ten years since.\r\n% Today, MATLAB graphics makes it easier to vary the parameters and see\r\n% the results, but the underlying mathematical principles are unchanged:\r\n%\r\n% * Using polynomials of even modest degree to predict\r\n%   the future by extrapolating data is a risky business.\r\n%\r\n \r\n%%\r\n% Recall that the famous computational scientist Yogi Berra said\r\n%\r\n% * \"It's tough to make predictions, especially about the future.\"\r\n\r\n%% Data\r\n% The data are from the decennial census of the United States for the\r\n% years 1900 to 2010.  The population is in millions.\r\n%\r\n%     1900    75.995\r\n%     1910    91.972\r\n%     1920   105.711\r\n%     1930   123.203\r\n%     1940   131.669\r\n%     1950   150.697\r\n%     1960   179.323\r\n%     1970   203.212\r\n%     1980   226.505\r\n%     1990   249.633\r\n%     2000   281.422\r\n%     2010   308.746\r\n\r\n%%\r\n% The task is to extrapolate beyond 2010.  Let's see how an extrapolation\r\n% of just six years to 2016 matches the Census Bureau announcement.\r\n% Before you read any further, pause and make your own guess.\r\n\r\n%% App\r\n% Here is the opening screen of the January 2017 edition of my\r\n% |censusapp|, which is included in\r\n% <https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/59085-cleve-laboratory\r\n% Cleve's Laboratory>.\r\n% The plus and minus buttons change the extrapolation year in the title.\r\n% If you go beyond 2030, the plot zooms out.\r\n%\r\n% <<fig0.png>>\r\n\r\n%% Models\r\n% The pull-down menu offers these models.  
Forty years ago we\r\n% had only polynomials.\r\n\r\n   models = {'census data','polynomial','pchip','spline', ...\r\n             'exponential','logistic','gompertz'}'\r\n\r\n%% 2016\r\n% The Census Bureau\r\n% <http:\/\/census.gov\/newsroom\/press-releases\/2016\/cb16-214.html \r\n% news release> that prompted the story in _The Times_ said\r\n% the official population in 2016 was 323.1 million.  That was on\r\n% Census Day, which is April 1 of each year.\r\n% The Census Bureau also provides a dynamic\r\n% <http:\/\/www.census.gov\/popclock\/?intcmp=w_200x402 Population Clock>\r\n% that operates continuously.  But let's stick with the 323.1 number.\r\n\r\n%% Polynomials\r\n% Polynomials like to wiggle.  Constrained to match data in a particular\r\n% interval, they go crazy outside that interval.  Today, there are 12 data\r\n% points.  The app lets you vary the polynomial degree between 0 and 11.\r\n% Polynomials with degree less than 11 approximate the data in a least\r\n% squares sense.  The polynomial of degree 11 interpolates the data\r\n% exactly.  As the degree is increased, the approximation of the data\r\n% becomes more accurate, but the behavior beyond 2010 (or before 1900)\r\n% becomes more violent.  Here are degrees 2, 7, 9, and 11, superimposed on\r\n% one plot.\r\n\r\n%%\r\n% The quadratic fit is the best behaved.  When evaluated at year 2016,\r\n% it misses the target by six million.  Of course, there is no\r\n% reason to believe that the US population grows like a second degree\r\n% polynomial in time.\r\n\r\n%%\r\n% The interpolating polynomial of degree 11 tries to escape even before\r\n% it gets to  2010, and it goes negative late in 2014.\r\n%\r\n% <<fig1.png>>\r\n\r\n%% Splines\r\n% MATLAB has two piecewise cubic interpolating polynomials.\r\n% The classic |spline| is smooth because it has two continuous derivatives.\r\n% Its competitor |pchip| sacrifices a continuous second derivative to\r\n% preserve shape and avoid overshoots.  
I blogged about\r\n% <https:\/\/blogs.mathworks.com\/cleve\/2012\/07\/16\/splines-and-pchips\r\n% splines and pchips> a few years ago.\r\n\r\n%%\r\n% Neither is intended for extrapolation, but we will do it anyway.  Their\r\n% behavior beyond the interval is determined by their end conditions.\r\n% The classic |spline| uses the so-called _not-a-knot_ condition.\r\n% It is actually a single cubic in the last two subintervals.  That\r\n% cubic is also used for extrapolation beyond the endpoint.  |pchip| uses\r\n% just the last three data points to create a different shape-preserving\r\n% cubic for use in the last subinterval and beyond.\r\n\r\n%%\r\n% Let's zoom in on the two.  Both are predicting a decreasing rate of\r\n% growth beyond 2010, just as the Census Bureau is observing.  |pchip|\r\n% gets lucky and comes within 0.2 million of the announcement for 2016.\r\n%\r\n% <<fig2.png>>\r\n\r\n%% Three Exponentials\r\n% As I said, there is no good reason to model population growth by\r\n% a polynomial, piecewise or not.  But because the rate of growth can\r\n% be expected to be proportional to the size of the population, there is\r\n% good reason to use an exponential.\r\n%\r\n% $$ p(t) \\approx a e^{bt} $$\r\n%\r\n\r\n%%\r\n% There have been many proposals for ways to modify this model to avoid\r\n% its unbounded growth.  I have just added two of these to |censusapp|.\r\n% One is the logistic model.\r\n%\r\n% $$ p(t) \\approx \\frac{a}{1+b e^{-ct}} $$\r\n%\r\n% And the other is the Gompertz double exponential model, named after \r\n% Benjamin Gompertz, a 19th century self-educated British mathematician\r\n% and astronomer.\r\n%\r\n% $$ p(t) \\approx a e^{-b e^{-ct}} $$\r\n%\r\n\r\n%%\r\n% In both of these models the growth is limited because the approximating\r\n% term approaches $a$ as $t$ approaches infinity.\r\n\r\n%%\r\n% In all three of the exponential models, the parameters $a$, $b$, \r\n% and possibly $c$, appear nonlinearly. 
In principle, I could use\r\n% |lsqcurvefit| to search in two or three dimensions for a least squares\r\n% fit to the census data.  But I have an alternative.  By taking one or\r\n% two logarithms, I have a _separable least squares_ model where at most\r\n% one parameter, $a$, appears nonlinearly.\r\n\r\n%% Exponential\r\n% For the exponential model, take one logarithm.\r\n%\r\n% $$ \\log{p} \\approx \\log{a} + bt $$\r\n%\r\n% Fit the logarithm of the data by a straight line and then exponentiate\r\n% the result.  No search is required.\r\n\r\n%% Logistic\r\n% For the logistic model, take one logarithm.\r\n%\r\n% $$ \\log{(a\/p-1)} \\approx \\log{b} - ct $$\r\n%\r\n% For any value of $a$, the parameters $\\log{b}$ and $c$ appear linearly\r\n% and can be found without a search.  So use a one-dimensional minimizer\r\n% to search for $a$.  I could use |fminbnd|, or its textbook version,\r\n% |fmintx|, from _Numerical Methods with MATLAB_.\r\n\r\n%% Gompertz\r\n% For the Gompertz model, take two logarithms.\r\n%\r\n% $$ \\log{\\log{a\/p}} \\approx \\log{b} - ct $$\r\n%\r\n% Again, do a one-dimensional search for the minimizing $a$, solving\r\n% for $\\log{b}$ and $c$ at each step.\r\n\r\n%% Results\r\n% Here are the three resulting fits, extrapolated over more than 200 years\r\n% to the year 2250.  The pure exponential model reaches 5 billion people by\r\n% that time, and is growing ever faster.  I think that's unreasonable.\r\n\r\n%%\r\n% The value of $a$ in the Gompertz fit turns out to be 4309.6,\r\n% so the population will be capped at 4.3 billion.  But it has only\r\n% reached 1.5 billion two hundred years from now.  Again unlikely.\r\n\r\n%%\r\n% The value of $a$ in the logistic fit turns out to be 756.4,\r\n% so the predicted US population will slightly more than double over the\r\n% next two hundred years.  
Despite the Census Bureau's observation\r\n% that our rate of growth has slowed recently, we are not yet even\r\n% half the way to our ultimate population limit.  I'll let you be the\r\n% judge of that prediction.\r\n%\r\n%\r\n% <<fig3.png>>\r\n\r\n%% Software\r\n% I have recently updated \r\n% <https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/59085-cleve-laboratory\r\n% Cleve's Laboratory> in the MATLAB Central file exchange.\r\n% One of the updates changes the name of |censusgui| to |censusapp|\r\n% and adds the two exponential models.  If you do install this new version\r\n% of the Laboratory, you can answer the following question.\r\n\r\n%% Homework\r\n% The fit generated by |pchip| defines a cubic for use beyond the year\r\n% 2000 that predicts the population will reach a maximum in the \r\n% not too distant future and decrease after that.  What is that maximum\r\n% and when does it happen?\r\n\r\n\r\n\r\n##### SOURCE END ##### e7d64edc40e34ac2bc76929ae8594791\r\n-->","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/fig1.png\" class=\"img-responsive attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"\" decoding=\"async\" loading=\"lazy\" \/><\/div><!--introduction--><p>A headline in the <a href=\"https:\/\/www.nytimes.com\/2016\/12\/22\/us\/usa-population-growth.html\"><i>New York Times<\/i><\/a> at the end of 2016 said  \"Growth of U.S. Population Is at Slowest Pace Since 1937\". This prompted me to revisit an old chestnut about fitting and extrapolating census data.  In the process I have added a couple of nonlinear fits, namely the logistic curve and the double exponential Gompertz model.... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/cleve\/2017\/01\/05\/fitting-and-extrapolating-us-census-data\/\">read more >><\/a><\/p>","protected":false},"author":78,"featured_media":2229,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[11,12,4,16],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/2225"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/comments?post=2225"}],"version-history":[{"count":3,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/2225\/revisions"}],"predecessor-version":[{"id":2486,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/2225\/revisions\/2486"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media\/2229"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media?parent=2225"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/categories?post=2225"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/tags?post=2225"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}