{"id":11276,"date":"2024-05-19T12:55:03","date_gmt":"2024-05-19T16:55:03","guid":{"rendered":"https:\/\/blogs.mathworks.com\/cleve\/?p=11276"},"modified":"2024-05-19T21:54:59","modified_gmt":"2024-05-20T01:54:59","slug":"a-sixty-year-old-program-for-predicting-the-future","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/cleve\/2024\/05\/19\/a-sixty-year-old-program-for-predicting-the-future\/","title":{"rendered":"A Sixty-Year Old Program for Predicting the Future"},"content":{"rendered":"\n<div class=\"content\"><!--introduction-->\n<p>The graphics in <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2024\/05\/04\/r-squared-is-bigger-better\/\">my post about <tt>R^2<\/tt><\/a> were produced by an updated version of a sixty-year old program involving the U.S. census. Originally, the program was based on census data from 1900 to 1960 and sought to predict the population in 1970. The software back then was written in Fortran, the predominate technical programming language a half century ago. 
I have updated the MATLAB version of the program so that it now uses census data from 1900 to 2020.<\/p>\n<!--\/introduction-->\n<h3>Contents<\/h3>\n<div>\n<ul>\n<li>\n<a href=\"#ed0d1704-7246-4ff0-aee9-9201ae55dd94\"><tt>censusapp2024<\/tt><\/a>\n<\/li>\n<li>\n<a href=\"#5a2ec27b-78d2-4b75-8f0e-6f7c19b85732\">Risky Business<\/a>\n<\/li>\n<li>\n<a href=\"#0d1987d5-e344-48c8-b9bf-e2cfc496958c\">Splines<\/a>\n<\/li>\n<li>\n<a href=\"#50e89d1a-1806-49a3-b8cc-5059019cfcac\">Exponentials<\/a>\n<\/li>\n<li>\n<a href=\"#d2bb8851-f7b5-48e2-837c-c2f17cb1d372\">Predictions<\/a>\n<\/li>\n<li>\n<a href=\"#e6446a7e-0465-4aec-afc6-18dc3e5b4d55\">Conclusion<\/a>\n<\/li>\n<li>\n<a href=\"#f9c00d28-d078-4a51-b5f0-209a9edb1805\">Blogs<\/a>\n<\/li>\n<li>\n<a href=\"#c66d0525-cfb3-4ecd-960d-abbc70e23220\">FMM<\/a>\n<\/li>\n<li>\n<a href=\"#74a17bf3-790c-4b0e-b897-45be8acd2c4d\">Software<\/a>\n<\/li>\n<\/ul>\n<\/div>\n<h4>\n<tt>censusapp2024<\/tt><a name=\"ed0d1704-7246-4ff0-aee9-9201ae55dd94\"><\/a>\n<\/h4>\n<p>The latest version of the census application is now available at <a href=\"https:\/\/blogs.mathworks.com\/cleve\/files\/censusapp_2024.m\">censusapp2024<\/a>. 
Here are the data and the opening screenshot.<\/p>\n<pre class=\"codeinput\">[t,p]=UScensus;fprintf(<span class=\"string\">'%12d%12.3f\\n'<\/span>,[t,p]')\n<\/pre>\n<pre class=\"codeoutput\">        1900      75.995\n        1910      91.972\n        1920     105.711\n        1930     123.203\n        1940     131.669\n        1950     150.697\n        1960     179.323\n        1970     203.212\n        1980     226.505\n        1990     249.633\n        2000     281.422\n        2010     308.746\n        2020     331.449\n<\/pre>\n<p>\n<img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/screenshot.png\" alt=\"\"> <\/p>\n<h4>Risky Business<a name=\"5a2ec27b-78d2-4b75-8f0e-6f7c19b85732\"><\/a>\n<\/h4>\n<p>Today, MATLAB makes it easier to vary parameters and visualize results, but the underlying mathematical principles are unchanged:<\/p>\n<div>\n<ul>\n<li>Using polynomials to predict the future by extrapolating data is a risky business.<\/li>\n<\/ul>\n<\/div>\n<p>One new observation is added to the data every 10 years, when the United States does the decennial census. Originally there were only 7 observations; today there are 13. The program now allows you to fit the data exactly by interpolation with a polynomial of degree 12 or fit it approximately by polynomials of degree less than 12.<\/p>\n<p>Here are the least-squares fits with linear, cubic, and degree seven polynomials and the interpolating polynomial. 
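The fitting idea is easy to reproduce outside the app. Here is a rough sketch in Python with NumPy, not the MATLAB source of <tt>censusapp2024<\/tt>; the centering and scaling of the year values is my own choice, made to keep the Vandermonde matrix well conditioned.

```python
# A rough sketch (Python + NumPy, not the MATLAB app) of fitting the
# census data with least-squares polynomials of increasing degree.
import numpy as np

# U.S. census years and populations in millions, from the table above
t = np.arange(1900, 2021, 10)
p = np.array([75.995, 91.972, 105.711, 123.203, 131.669, 150.697, 179.323,
              203.212, 226.505, 249.633, 281.422, 308.746, 331.449])

# Center and scale the years so the Vandermonde matrix is well conditioned
s = (t - 1960) / 40.0

def r_squared(degree):
    # Least-squares polynomial fit, then R^2 = 1 - SSE / SST
    c = np.polyfit(s, p, degree)
    sse = np.sum((p - np.polyval(c, s)) ** 2)
    sst = np.sum((p - p.mean()) ** 2)
    return 1.0 - sse / sst

# R^2 climbs toward one as the degree increases; with 13 observations,
# the degree-12 fit interpolates the data exactly
for degree in (1, 3, 7, 12):
    print(degree, r_squared(degree))
```

At degree 12 the residuals vanish, so <tt>R^2<\/tt> reaches one, the limiting case discussed here.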
As the polynomial degree increases, so does <tt>R^2<\/tt>, until <tt>R^2<\/tt> reaches one with the exact fit.<\/p>\n<p>Do any of these fits look like they could be used to predict future population growth?<\/p>\n<p>\n<img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/polys.png\" alt=\"\"> <\/p>\n<h4>Splines<a name=\"0d1987d5-e344-48c8-b9bf-e2cfc496958c\"><\/a>\n<\/h4>\n<p>In addition to polynomials, you can choose interpolation by three different <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2019\/04\/29\/makima-piecewise-cubic-interpolation\/\">piecewise Hermite cubics<\/a>.<\/p>\n<div>\n<ul>\n<li>\n<tt>spline<\/tt> Continuous second derivative, \"not-a-knot\" end condition.<\/li>\n<li>\n<tt>pchip<\/tt> Continuous first derivative, strictly shape-preserving.<\/li>\n<li>\n<tt>makima<\/tt> Continuous first derivative, relaxed shape-preserving.<\/li>\n<\/ul>\n<\/div>\n<p>Since these fits interpolate the data, all their <tt>R^2<\/tt> values are one. But before 1900 and after 2020 these functions are cubic polynomials that are not designed for extrapolation.<\/p>\n<p>\n<img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/splines.png\" alt=\"\"> <\/p>\n<h4>Exponentials<a name=\"50e89d1a-1806-49a3-b8cc-5059019cfcac\"><\/a>\n<\/h4>\n<p>It is also possible to do nonlinear least-squares fits by an exponential, a logistic sigmoid, and an exponential of an exponential known as the Gompertz model.<\/p>\n<div>\n<ul>\n<li>\n<tt>exponential exp(b*t+c)<\/tt>\n<\/li>\n<li>\n<tt>logistic a.\/(1+exp(-b*(t-c)))<\/tt>\n<\/li>\n<li>\n<tt>gompertz a*exp(-b*exp(-c*t))<\/tt>\n<\/li>\n<\/ul>\n<\/div>\n<p>An article by Kathleen and Even Tj&oslash;rve, from the Inland Norway University of Applied Sciences in Elverum, Norway, in the journal <a href=\"https:\/\/journals.plos.org\/plosone\/article?id=10.1371\/journal.pone.0178691\" target=\"_blank\">PLOS ONE<\/a> has this to say about Gompertz. 
\"The Gompertz model has been in use as a growth model even longer than its better known relative, the logistic model. The model, referred to at the time as the Gompertz theoretical law of mortality, was first suggested and first applied by Mr. Benjamin Gompertz in 1825. He fitted it to the relationship between increasing death rate and age, what he referred to as 'the average exhaustions of a man&rsquo;s power to avoid death&rdquo; or the 'portion of his remaining power to oppose destruction.' \"<\/p>\n<p>\n<img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/expos.png\" alt=\"\"> <\/p>\n<h4>Predictions<a name=\"d2bb8851-f7b5-48e2-837c-c2f17cb1d372\"><\/a>\n<\/h4>\n<p>Which fits are suitable for predicting future population size?<\/p>\n<p>Despite their large R^2 values, polynomials of any degree are not suitable because outside of the time interval they behave like polynomials and do not provide realistic predictions.<\/p>\n<p>Splines were never intended for extrapolation.<\/p>\n<p>That leaves the exponentials. The simple exponential model grows exponentially and is not suitable. The Gompertz fit does approach a finite asymptotic limit, but the value is an astronimical <tt>a<\/tt> = 2101, corresponding to 2.1 $\\times 10^9$ inhabitants. Hopefully, that is out of the question.<\/p>\n<p>The logistic fit has an asymptotic limit of <tt>a<\/tt> = 655.7. We recently passed the value of <tt>t<\/tt> where <tt>p(t)<\/tt> reaches <tt>a\/2<\/tt>, namely <tt>c<\/tt> = 2018. So, the logistic model predicts that the long-term size of the U.S. population will be about twice its current value. Is that realistic? 
Probably not.<\/p>\n<p>\n<img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/expos_future.png\" alt=\"\"> <\/p>\n<h4>Conclusion<a name=\"e6446a7e-0465-4aec-afc6-18dc3e5b4d55\"><\/a>\n<\/h4>\n<p>The British statistician George Box once said, \"All models are wrong, but some are useful.\" This is true of the models of the U.S. Census that I have discussed over the past sixty years.<\/p>\n<p>Here is <tt>censusapp2024<\/tt> after all its buttons have been pushed. The extrapolation date is set to 2040. White noise has been added to the data. The model is a fourth-degree polynomial with an <tt>R^2<\/tt> = 0.99. The <tt>R^2<\/tt> value and the error estimates produced by <tt>errs<\/tt> account for errors in the data, but not in the model.<\/p>\n<p>This particular model does a lousy job of predicting even twenty years in the future. Some of the other models are better, many are worse. Hopefully, their study is worthwhile.<\/p>\n<p>\n<img decoding=\"async\" vspace=\"5\" hspace=\"5\" src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/predict.png\" alt=\"\"> <\/p>\n<h4>Blogs<a name=\"f9c00d28-d078-4a51-b5f0-209a9edb1805\"><\/a>\n<\/h4>\n<p>I have made blog posts about the census before, in <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2020\/11\/06\/anticipating-official-u-s-census-for-2020\">2020<\/a> and in <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2017\/01\/05\/fitting-and-extrapolating-us-census-data\">2017<\/a>.<\/p>\n<h4>FMM<a name=\"c66d0525-cfb3-4ecd-960d-abbc70e23220\"><\/a>\n<\/h4>\n<p>Predicting population growth is featured in <i>Computer Methods for Mathematical Computations<\/i>, by George Forsythe, Mike Malcolm and myself, published by Prentice-Hall in 1977. 
That textbook is now available from an interesting smorgasbord of sources, including <a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=rldfxOMAAAAJ&amp;citation_for_view=rldfxOMAAAAJ:buQ7SEKw-1sC\" target=\"_blank\">Google Scholar<\/a>, <a href=\"https:\/\/www.amazon.com\/exec\/obidos\/ASIN\/0131653326\/acmorg-20\" target=\"_blank\">Amazon<\/a>, <a href=\"https:\/\/www.etsy.com\/listing\/1676520741\/vintage-textbook-computer-methods-for\" target=\"_blank\">dizhasneatstuff<\/a>, <a href=\"https:\/\/www.abebooks.com\/servlet\/BookDetailsPL?bi=22650690419\" target=\"_blank\">Abe Books<\/a>, <a href=\"https:\/\/archive.org\/details\/computermethodsf00fors\/page\/18\/mode\/2up\" target=\"_blank\">Internet Archive<\/a>, <a href=\"https:\/\/www.pdas.com\/fmm.html\" target=\"_blank\">PDAS<\/a>, <a href=\"https:\/\/search.worldcat.org\/title\/1150302502\" target=\"_blank\">WorldCat (Chinese)<\/a>.<\/p>\n<h4>Software<a name=\"74a17bf3-790c-4b0e-b897-45be8acd2c4d\"><\/a>\n<\/h4>\n<p>\n<tt>censusapp2024<\/tt> is available at <a href=\"https:\/\/blogs.mathworks.com\/cleve\/files\/censusapp_2024.m\">censusapp2024<\/a>.<\/p>\n<script language=\"JavaScript\"> <!-- \n    function grabCode_5c4db23ec2b5483abf7a7a087b9cd04e() {\n        \/\/ Remember the title so we can use it in the new page\n        title = document.title;\n\n        \/\/ Break up these strings so that their presence\n        \/\/ in the Javascript doesn't mess up the search for\n        \/\/ the MATLAB code.\n        t1='5c4db23ec2b5483abf7a7a087b9cd04e ' + '##### ' + 'SOURCE BEGIN' + ' #####';\n        t2='##### ' + 'SOURCE END' + ' #####' + ' 5c4db23ec2b5483abf7a7a087b9cd04e';\n    \n        b=document.getElementsByTagName('body')[0];\n        i1=b.innerHTML.indexOf(t1)+t1.length;\n        i2=b.innerHTML.indexOf(t2);\n \n        code_string = b.innerHTML.substring(i1, i2);\n        code_string = code_string.replace(\/REPLACE_WITH_DASH_DASH\/g,'--');\n\n        \/\/ Use 
\/x3C\/g instead of the less-than character to avoid errors \n        \/\/ in the XML parser.\n        \/\/ Use '\\x26#60;' instead of '<' so that the XML parser\n        \/\/ doesn't go ahead and substitute the less-than character. \n        code_string = code_string.replace(\/\\x3C\/g, '\\x26#60;');\n\n        copyright = 'Copyright 2024 The MathWorks, Inc.';\n\n        w = window.open();\n        d = w.document;\n        d.write('<pre>\\n');\n        d.write(code_string);\n\n        \/\/ Add copyright line at the bottom if specified.\n        if (copyright.length > 0) {\n            d.writeln('');\n            d.writeln('%%');\n            if (copyright.length > 0) {\n                d.writeln('% _' + copyright + '_');\n            }\n        }\n\n        d.write('<\/pre>\\n');\n\n        d.title = title + ' (MATLAB code)';\n        d.close();\n    }   \n     --> <\/script>\n<p style=\"text-align: right; font-size: xx-small; font-weight:lighter;   font-style: italic; color: gray\">\n<br>\n<a href=\"javascript:grabCode_5c4db23ec2b5483abf7a7a087b9cd04e()\"><span style=\"font-size: x-small;        font-style: italic;\">Get \n      the MATLAB code <noscript>(requires JavaScript)<\/noscript>\n<\/span><\/a>\n<br>\n<br>\n      Published with MATLAB&reg; R2024a<br>\n<\/p>\n<\/div>\n<!--\n5c4db23ec2b5483abf7a7a087b9cd04e ##### SOURCE BEGIN #####\n%% A Sixty-Year-Old Program for Predicting the Future\n% The graphics in\n% <https:\/\/blogs.mathworks.com\/cleve\/2024\/05\/04\/r-squared-is-bigger-better\/\n% my post about |R^2|> were produced by an updated version of\n% a sixty-year-old program involving the U.S. census.\n% Originally, the program was based on census data from 1900 to 1960\n% and sought to predict the population in 1970.\n% The software back then was written in Fortran, \n% the predominant technical programming language a half century ago. 
\n% I have updated the MATLAB version of the program\n% so that it now uses census data from 1900 to 2020.\n\n%% |censusapp2024|\n% The latest version of the census application is now available at\n% <https:\/\/blogs.mathworks.com\/cleve\/files\/censusapp_2024.m\n% censusapp2024>.  Here are the data and the opening screenshot.\n\n   [t,p] = UScensus; fprintf('%12d%12.3f\\n',[t,p]')\n\n%%\n% <<screenshot.png>>\n\n%% Risky Business\n% Today, MATLAB makes it easier to  vary parameters and visualize\n% results, but the underlying mathematical principles are unchanged:\n%\n% * Using polynomials to predict\n%    the future by extrapolating data is a risky business.\n%\n% One new observation is added to the data \n% every 10 years, when the United States does the decennial census.  \n% Originally there were only 7 observations; today there are 13.\n% The program now allows you to fit the data exactly by interpolation with\n% a polynomial of degree 12 or fit it approximately by\n% polynomials of degree less than 12. \n%\n% Here are the least-squares fits with linear, cubic, and degree seven \n% polynomials and the interpolating polynomial.\n% As the polynomial degree increases, so does |R^2|, until |R^2| reaches\n% one with the exact fit.  
\n%\n% Do any of these fits look like they could be used to predict future\n% population growth?\n%\n% <<polys.png>>\n\n%% Splines\n% In addition to polynomials, you can choose\n% interpolation by three different \n% <https:\/\/blogs.mathworks.com\/cleve\/2019\/04\/29\/makima-piecewise-cubic-interpolation\/\n% piecewise Hermite cubics>.\n%\n% * |spline|    Continuous second derivative, \"not-a-knot\" end condition.\n% * |pchip|     Continuous first derivative, strictly shape-preserving.\n% * |makima|    Continuous first derivative, relaxed shape-preserving.\n%\n% Since these fits interpolate the data, all their |R^2| values are one.\n% But before 1900 and after 2020 these functions are cubic polynomials\n% that are not designed for extrapolation.\n%\n% <<splines.png>>\n\n%% Exponentials\n% It is also possible to do nonlinear least-squares fits by an exponential,\n% a logistic sigmoid, and an exponential of an exponential known as the\n% Gompertz model.\n%\n% * |exponential   exp(b*t+c)|\n% * |logistic      a.\/(1+exp(-b*(t-c)))|\n% * |gompertz      a*exp(-b*exp(-c*t))|\n% \n% An article by Kathleen and Even Tj\u00f8rve, from the \n% Inland Norway University of Applied Sciences in Elverum, Norway,\n% in the journal\n% <https:\/\/journals.plos.org\/plosone\/article?id=10.1371\/journal.pone.0178691\n% PLOS ONE> has this to say about Gompertz.\n% \"The Gompertz model has been in use as a growth model even longer\n% than its better known relative, the logistic model. The model, \n% referred to at the time as the Gompertz theoretical law of mortality, was \n% first suggested and first applied by Mr. Benjamin Gompertz in 1825. \n% He fitted it to the relationship between increasing death rate and age, \n% what he referred to as 'the average exhaustions of a man\u2019s power to \n% avoid death' or the 'portion of his remaining power to oppose \n% destruction.' 
\"\n%\n% <<expos.png>>\n\n%% Predictions\n% Which fits are suitable for predicting future population size?\n%\n% Despite their large R^2 values, polynomials of any degree\n% are not suitable because outside of the time interval they behave\n% like polynomials and do not provide realistic predictions.\n%\n% Splines were never intended for extrapolation.\n%\n% That leaves the exponentials. The simple exponential model grows\n% exponentially and is not suitable. The Gompertz fit does approach\n% a finite asymptotic limit, but the value is an astronimical |a| = 2101,\n% corresponding to 2.1 $\\times 10^9$ inhabitants. Hopefully, that is\n% out of the question.\n%\n% The logistic fit has an asymptotic limit of |a| = 655.7.\n% We recently passed the value of |t| where |p(t)| reaches |a\/2|,\n% namely |c| = 2018.  So, the logistic model predicts that\n% the long-term size of the U.S. population will be about twice its\n% current value.  Is that realistic?  Probably not.\n%\n% <<expos_future.png>>\n%\n\n%% Conclusion\n% The British statistician George Box once said, \"all models are wrong,\n% some are useful.\"  This is true of the models of the\n% U. S. 
Census that I have discussed over the past sixty years.\n%\n% Here is |censusapp2024| after all its buttons have been pushed.\n% The extrapolation date is set to 2040.\n% White noise has been added to the data.\n% The model is a fourth-degree polynomial with an |R^2| = 0.99.\n% The |R^2| value and the error estimates produced by |errs|\n% account for errors in the data, but not in the model.\n%\n% This particular model does a lousy job of predicting even twenty\n% years in the future.\n% Some of the other models are better, many are worse.\n% Hopefully, their study is worthwhile.\n%\n% <<predict.png>>\n\n%% Blogs\n% I have made blog posts about the census before, in\n% <https:\/\/blogs.mathworks.com\/cleve\/2020\/11\/06\/anticipating-official-u-s-census-for-2020\n% 2020> and in\n% <https:\/\/blogs.mathworks.com\/cleve\/2017\/01\/05\/fitting-and-extrapolating-us-census-data\n% 2017>.\n\n%% FMM\n% Predicting population growth is featured in\n% _Computer Methods for Mathematical Computations_, \n% by George Forsythe, Mike Malcolm and myself,\n% published by Prentice-Hall in 1977.\n% That textbook is now available from an interesting smorgasbord of \n% sources, including\n% <https:\/\/scholar.google.com\/citations?view_op=view_citation&hl=en&user=rldfxOMAAAAJ&citation_for_view=rldfxOMAAAAJ:buQ7SEKw-1sC\n% Google Scholar>,\n% <https:\/\/www.amazon.com\/exec\/obidos\/ASIN\/0131653326\/acmorg-20 Amazon>,\n% <https:\/\/www.etsy.com\/listing\/1676520741\/vintage-textbook-computer-methods-for\n% dizhasneatstuff>,\n% <https:\/\/www.abebooks.com\/servlet\/BookDetailsPL?bi=22650690419\n% Abe  Books>,\n% <https:\/\/archive.org\/details\/computermethodsf00fors\/page\/18\/mode\/2up\n% Internet Archive>,\n% <https:\/\/www.pdas.com\/fmm.html PDAS>,\n% <https:\/\/search.worldcat.org\/title\/1150302502 WorldCat (Chinese)>.\n\n%% Software\n% |censusapp2024| is available at\n% <https:\/\/blogs.mathworks.com\/cleve\/files\/censusapp_2024.m\n% censusapp2024>.\n##### SOURCE END ##### 
5c4db23ec2b5483abf7a7a087b9cd04e\n-->\n","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img src=\"https:\/\/blogs.mathworks.com\/cleve\/files\/prevue.png\" class=\"img-responsive attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"\" decoding=\"async\" loading=\"lazy\" \/><\/div><!--introduction-->\n<p>The graphics in <a href=\"https:\/\/blogs.mathworks.com\/cleve\/2024\/05\/04\/r-squared-is-bigger-better\/\">my post about <tt>R^2<\/tt><\/a> were produced by an updated version of a sixty-year-old program involving the U.S. census. Originally, the program was based on census data from 1900 to 1960 and sought to predict the population in 1970. The software back then was written in Fortran, the predominant technical programming language a half century ago. I have updated the MATLAB version of the program so that it now uses census data from 1900 to 2020.... <a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/cleve\/2024\/05\/19\/a-sixty-year-old-program-for-predicting-the-future\/\">read more 
>><\/a><\/p>","protected":false},"author":78,"featured_media":11282,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[5,23,4,16,48],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/11276"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/comments?post=11276"}],"version-history":[{"count":6,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/11276\/revisions"}],"predecessor-version":[{"id":11309,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/posts\/11276\/revisions\/11309"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media\/11282"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/media?parent=11276"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/categories?post=11276"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/cleve\/wp-json\/wp\/v2\/tags?post=11276"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}