{"id":17762,"date":"2025-07-14T20:23:25","date_gmt":"2025-07-15T00:23:25","guid":{"rendered":"https:\/\/blogs.mathworks.com\/deep-learning\/?p=17762"},"modified":"2025-07-21T09:29:52","modified_gmt":"2025-07-21T13:29:52","slug":"physics-informed-machine-learning-methods-and-implementation","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/deep-learning\/2025\/07\/14\/physics-informed-machine-learning-methods-and-implementation\/","title":{"rendered":"Physics-Informed Machine Learning: Methods and Implementation"},"content":{"rendered":"<h6><\/h6>\r\n<em>This blog post is from <\/em><a href=\"https:\/\/www.linkedin.com\/in\/mae-markowski-a461b7224\/\">Mae Markowski<\/a><em>, Senior Product Manager at MathWorks.<\/em>\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\nIn our <a href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2025\/06\/23\/what-is-physics-informed-machine-learning\/\">previous post<\/a>, we laid the groundwork for physics-informed machine learning, exploring what it is, why it matters, and how it can be applied across different science and engineering domains. We used a pendulum example to make the concepts discussed more concrete.\r\n<h6><\/h6>\r\nIn this post, we\u2019ll dive deeper into specific physics-informed machine learning methods, categorized by their primary objectives: <strong>modeling complex systems from data<\/strong>, <strong>discovering governing equations<\/strong>, and<strong> solving known equations<\/strong>.\r\n<h6><\/h6>\r\nTo illustrate the main ideas behind each method, we will apply them to the familiar pendulum example with accompanying MATLAB code snippets. 
For the full set of examples featured in this post, see <a href=\"https:\/\/github.com\/matlab-deep-learning\/SciML-and-Physics-Informed-Machine-Learning-Examples\/tree\/main\/phiml-blog-supporting-code\">Physics-Informed Machine Learning Methods and Implementation supporting code<\/a>, and for more advanced examples, check out the GitHub repository <a href=\"https:\/\/github.com\/matlab-deep-learning\/SciML-and-Physics-Informed-Machine-Learning-Examples\/tree\/main\/inverse-problems-using-physics-informed-neural-networks\">SciML and Physics-Informed Machine Learning Examples<\/a>.\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 20px; color: #c04c0b;\"><strong>Modeling Unknown Dynamics from Data<\/strong><\/p>\r\n<p style=\"font-size: 18px;\"><strong>Neural Ordinary Differential Equation<\/strong><\/p>\r\nWhen modeling physical systems like a pendulum, we often use differential equations of the form:\r\n\\[\\dot{x} = f(x, u) ,\\]\r\n<h6><\/h6>\r\nwhere \\(x \\)\u00a0represents the state (like the pendulum\u2019s angular position and velocity), \\(u \\)\u00a0represents any external inputs, and \\(f \\)\u00a0represents the system\u2019s dynamics. However, in many scenarios, the dynamics function \\(f \\)\u00a0is either unknown or too complex to describe analytically.\r\n<h6><\/h6>\r\n<a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ug\/dynamical-system-modeling-using-neural-ode.html\">Neural Ordinary Differential Equations (Neural ODEs)<\/a> address this challenge by using a neural network to learn \\(f \\) directly from data. This makes neural ODEs a powerful tool for modeling systems with unknown dynamics, especially when working with time-series data that is irregularly sampled. 
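At its core, prediction with a neural ODE is just an ordinary ODE solve whose right-hand side is a neural network. The following language-agnostic sketch (NumPy, with a random untrained 2-16-2 tanh network and a hand-rolled RK4 integrator standing in for the trained model and solver) illustrates that prediction step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the learned dynamics f(x): a tiny 2-16-2 tanh MLP.
# (Random weights here; in a real neural ODE they are trained so that
# integrated trajectories match the measured data.)
W1, b1 = rng.normal(size=(16, 2)), np.zeros(16)
W2, b2 = rng.normal(size=(2, 16)), np.zeros(2)

def f(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

def rk4_trajectory(f, x0, t):
    """Integrate dx/dt = f(x) with classic RK4, returning the state at each time in t."""
    xs = [np.asarray(x0, dtype=float)]
    for t0, t1 in zip(t[:-1], t[1:]):
        h, x = t1 - t0, xs[-1]
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        xs.append(x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.stack(xs)

t = np.linspace(0, 1, 50)
traj = rk4_trajectory(f, [0.5, 0.0], t)  # predicted states, shape (50, 2)
```

During training, the network weights are adjusted so that the integrated trajectory matches the measured states, with gradients obtained by backpropagating through the solver or via the adjoint method.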
While neural ODEs do not embed known physical laws directly into the model, they can be generalized to incorporate partially known dynamics, such as in Universal Differential Equations, which we\u2019ll discuss later in the post.\r\n<h6><\/h6>\r\nFor example, suppose you have measurements of a pendulum\u2019s angular position and velocity but lack the explicit governing equations. You can train a neural ODE to learn the pendulum dynamics from the given trajectory data. Below is a code snippet illustrating how to set up a neural ODE using Deep Learning Toolbox.\r\n<pre>% Model unknown dynamics f(x) using multilayer perceptron (MLP)\r\nfLayers = [\r\n    featureInputLayer(2, Name=\"input\")\r\n    fullyConnectedLayer(32)\r\n    geluLayer\r\n    fullyConnectedLayer(32)\r\n    geluLayer\r\n    fullyConnectedLayer(2, Name=\"output\")];\r\n\r\nfNet = dlnetwork(fLayers);\r\n\r\n% Construct a neural ODE using neuralODELayer to solve ODE system for x\r\nnODElayers = [\r\n    featureInputLayer(2, Name=\"IC_in\")\r\n    neuralODELayer(fNet, tTrain, ...\r\n    GradientMode=\"adjoint\", ...\r\n    Name=\"ODElayer\")];\r\n\r\nnODEnet = dlnetwork(nODElayers);\r\n<\/pre>\r\n<h6><\/h6>\r\nThis code defines the neural network <span style=\"font-family: Consolas, monospace;\">fNet<\/span> to represent the unknown pendulum dynamics, then uses a <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/nnet.cnn.layer.neuralodelayer.html\">neuralODELayer<\/a> to solve the ODE system \\(\\dot{x} = f(x) \\), where the right-hand side is given by the output of <span style=\"font-family: Consolas, monospace;\">fNet<\/span>. 
Once trained, the model predicts the system\u2019s states over a specified time interval by numerically integrating the ODE forward in time from a given initial state, using an ODE solver like <a href=\"https:\/\/www.mathworks.com\/help\/matlab\/ref\/ode45.html\">ode45<\/a>.\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-17786\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/07\/NeuralODE_image.png\" alt=\"\" width=\"400\" height=\"257\" \/>\r\n<h6><\/h6>\r\n<em><strong>Figure 1:<\/strong> Neural ODE predicts the pendulum\u2019s trajectory from noisy measurement data.<\/em>\r\n<h6><\/h6>\r\nWhile Neural ODEs model how system states evolve, they don\u2019t account for the variables we want to observe, which are essential for control, estimation, and system identification. Neural state-space models address this by also learning an observation equation that relates states to outputs, which we will explore next.\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 18px;\"><strong>Neural State-Space<\/strong><\/p>\r\n<a href=\"https:\/\/www.mathworks.com\/help\/ident\/ug\/training-a-neural-state-space-model-for-a-simple-pendulum-system.html\">Neural State-Space models<\/a> extend neural ODEs by incorporating a state-space structure:\r\n\\[\\dot{x} = f(x, u), \\quad y = g(x, u) ,\\]\r\nwhere the state dynamics function \\(f\\) and the output function \\(g\\) are each modeled using neural networks.\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1073\" height=\"219\" class=\"alignnone size-full wp-image-17798\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/07\/Neural_SS_idea.jpeg\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<em><strong>Figure 2:<\/strong> Neural state-space predictions of the system\u2019s state and output. 
Here,\u00a0\\( x_1 \\)\u200b and\u00a0\\( x_2 \\)\u00a0\u200brepresent the angular position and angular velocity, and\u00a0\\( y_1 \\)\u00a0and \\( y_2 \\)\u00a0correspond to the horizontal and vertical positions of the pendulum\u2019s point mass, respectively. For the full example, see\u00a0<a href=\"https:\/\/www.mathworks.com\/help\/ident\/ug\/training-a-neural-state-space-model-for-a-simple-pendulum-system.html\">Neural State-Space Model of Simple Pendulum System<\/a>.<\/em>\r\n<h6><\/h6>\r\nConsider a pendulum system where the states are angular position and velocity, the input is an applied torque, and the outputs are the horizontal and vertical positions of the mass. You can create a neural state-space model using System Identification Toolbox to learn both the state dynamics and output function of the system directly from data.\r\n<pre>% Create neural state-space model with 2 states, 1 input, 4 outputs\r\nsys = idNeuralStateSpace(2,NumInputs=1,NumOutputs=4);\r\n\r\n% State network\r\nsys.StateNetwork = createMLPNetwork(sys,'state', ...\r\n   LayerSizes=[128 128], ...\r\n   Activations=\"tanh\");\r\n\r\n% Output network\r\nsys.OutputNetwork(2) = createMLPNetwork(sys,'output', ...\r\n   LayerSizes=[128 128], ...\r\n   Activations=\"tanh\");\r\n<\/pre>\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"560\" height=\"420\" class=\"alignnone size-full wp-image-17804\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/07\/Neural_SS_prediction.png\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<em><strong>Figure 3:<\/strong> Neural state-space predictions of the state (angular position and velocity) and output (horizontal and vertical positions of point mass). 
For the full example, see <a href=\"https:\/\/www.mathworks.com\/help\/ident\/ug\/training-a-neural-state-space-model-for-a-simple-pendulum-system.html\">Neural State-Space Model of Simple Pendulum System<\/a>.<\/em>\r\n<h6><\/h6>\r\nNeural state-space models are especially useful in controls, estimation, optimization, and <a href=\"https:\/\/www.mathworks.com\/help\/ident\/ug\/reduced-order-modeling-of-electric-vehicle-battery-system-using-neural-state-space-model.html\">reduced order modeling<\/a>. However, neural state-space models, like neural ODEs, treat the system\u2019s dynamics as entirely unknown even when parts of the dynamics are well known, a limitation that is addressed by Universal Differential Equations.\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 18px;\"><strong>Universal Differential Equation<\/strong><\/p>\r\nFor some systems, part of the dynamics may be well understood, while other effects are difficult to capture with traditional physics-based models. In such cases, <a href=\"https:\/\/github.com\/matlab-deep-learning\/SciML-and-Physics-Informed-Machine-Learning-Examples\/tree\/main\/universal-differential-equations\">Universal Differential Equations (UDEs)<\/a> offer a hybrid approach that combines known physics with machine-learned components.\r\n<h6><\/h6>\r\nFor the pendulum, you might know the main dynamics (e.g. angular acceleration \\( \\ddot{\\theta} \\) and restoring force \\( -\\omega_0^2 \\sin \\theta \\)) but suspect there is unmodeled friction. 
You can write the system as:\r\n\r\n\\[\r\n\\frac{d}{dt}\r\n\\begin{bmatrix}\r\n\\theta \\\\\r\n\\dot{\\theta}\r\n\\end{bmatrix}\r\n=\r\n\\begin{bmatrix}\r\n\\dot{\\theta} \\\\\r\n- \\omega_0^2 \\sin \\theta + h(\\theta, \\dot{\\theta})\r\n\\end{bmatrix}\r\n=\r\ng(\\theta, \\dot{\\theta}) +\r\n\\begin{bmatrix}\r\n0 \\\\\r\nh(\\theta, \\dot{\\theta})\r\n\\end{bmatrix},\r\n\\]\r\n<h6><\/h6>\r\nwhere \\(g\\) represents the known dynamics and \\(h \\) represents the unknown friction force, which is learned from data using a neural network.\r\n<h6><\/h6>\r\nWith Deep Learning Toolbox, you can implement a UDE for this problem by defining the known physics with a <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/nnet.cnn.layer.functionlayer.html\">functionLayer<\/a>, modeling the unknown friction with a neural network, combining them into a single network, and then wrapping in a neuralODElayer to solve the full system.\r\n<pre>% Define the known physics in functionLayer\r\ngFcn = @(X) [X(2,:); -omega0^2*sin(X(1,:))];\r\ngLayer = functionLayer(gFcn, Acceleratable=true, Name=\"g\");\r\n\r\n% Model unknown friction force using a multilayer perceptron (MLP)\r\nhLayers = [\r\n    fullyConnectedLayer(16,Name=\"fc1\")\r\n    geluLayer\r\n    fullyConnectedLayer(1,Name=\"h\")];\r\n\r\n% Combine known and unknown components\r\ncombineFcn = @(x, y) [x(1,:); x(2,:) + y];\r\ncombineLayer = functionLayer(combineFcn,Name=\"combine\",Acceleratable=true);\r\nfNet = [featureInputLayer(2,Name=\"input\") \r\n    gLayer\r\n    combineLayer];\r\n\r\nfNet = dlnetwork(fNet,Initialize=false);\r\nfNet = addLayers(fNet,hLayers);\r\nfNet = connectLayers(fNet,\"input\",\"fc1\");\r\nfNet = connectLayers(fNet,\"h\",\"combine\/in2\");\r\n\r\n% Wrap in neuralODELayer\r\nnODElayers = [\r\n    featureInputLayer(2, Name=\"IC_in\")\r\n    neuralODELayer(fNet, tTrain, ...\r\n    GradientMode=\"adjoint\", ...\r\n    Name=\"ODElayer\")];\r\n\r\nnODEnet = 
dlnetwork(nODElayers);\r\n<\/pre>\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-17813\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/07\/UDE_image.png\" alt=\"\" width=\"400\" height=\"259\" \/>\r\n<h6><\/h6>\r\n<em><strong>Figure 4:<\/strong> UDEs model the unknown damping force of the pendulum to predict its trajectory from data.<\/em>\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 18px;\"><strong>Hamiltonian Neural Network<\/strong><\/p>\r\nUnlike the previous methods we discussed which learn the system\u2019s state evolution directly, <a href=\"https:\/\/github.com\/matlab-deep-learning\/SciML-and-Physics-Informed-Machine-Learning-Examples\/tree\/main\/hamiltonian-neural-network\">Hamiltonian Neural Networks (HNNs)<\/a> learn the Hamiltonian \\(H \\), a function that represents the system\u2019s total energy in terms of position \\( q \\) and momentum \\( p \\). Once trained, the system state is recovered by applying Hamilton\u2019s equations to the learned Hamiltonian:\r\n\\[\r\n\\dot{q} = \\frac{\\partial H}{\\partial p}, \\quad\r\n\\dot{p} = -\\frac{\\partial H}{\\partial q}.\r\n\\]\r\n<h6><\/h6>\r\nThis added structure makes HNNs well-suited for modeling systems where total energy is conserved, like the undamped pendulum.\r\n<h6><\/h6>\r\nTo train the model, a custom loss function penalizes the difference between the observed values of \\( \\dot{q} \\) and \\( \\dot{p} \\)\u00a0and the corresponding partial derivatives of the network, computed using <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ug\/deep-learning-with-automatic-differentiation-in-matlab.html\">automatic differentiation<\/a>.\r\n<h6><\/h6>\r\nThe code snippet below constructs a network to learn the Hamiltonian from the undamped pendulum trajectory data, defines a custom loss function based on Hamilton\u2019s equations, and shows how to make predictions with the trained network using an ODE solver like 
ode45.\r\n<pre>% Define a neural network to learn the Hamiltonian function H(q,p)\r\nfcBlock = [\r\n    fullyConnectedLayer(64)\r\n    tanhLayer];\r\nlayers = [\r\n    featureInputLayer(2)\r\n    repmat(fcBlock,[2 1])\r\n    fullyConnectedLayer(1)];\r\n\r\nnet = dlnetwork(layers);\r\nnet = dlupdate(@double, net);\r\n\r\n% Define custom loss function to penalize deviations from Hamilton\u2019s equations\r\nfunction [loss,gradients] = modelLoss(net, qp, qpdotTarget)\r\nqpdot = model(net, qp);\r\nloss = l2loss(qpdot, qpdotTarget, DataFormat=\"CB\");\r\ngradients = dlgradient(loss,net.Learnables);\r\nend\r\n\r\nfunction qpdot = model(net, qp)\r\nH = forward(net,dlarray(qp,'CB'));\r\nH = stripdims(H);\r\nqp = stripdims(qp);\r\ndH = dljacobian(H,qp,1);\r\nqpdot = [dH(2,:); -1.*dH(1,:)];\r\nend\r\n\r\n% Enforce Hamiltonian structure by solving Hamilton\u2019s equations with learned H \r\naccModel = dlaccelerate(@model);\r\ntspan = tTrain;\r\nx0 = dlarray([q(1); p(1)]); % initial conditions\r\nodeFcn = @(ts,xs) dlfeval(accModel, net, xs);\r\n[~, qp] = ode45(odeFcn, tspan, x0);\r\nqp = qp'; % Transpose to return to (2)x(N)\r\n<\/pre>\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-17816\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/07\/HNN_image.png\" alt=\"\" width=\"400\" height=\"257\" \/>\r\n<h6><\/h6>\r\n<em><strong>Figure 5:<\/strong> HNNs predict the pendulum\u2019s trajectory and are designed to conserve energy.<\/em>\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 20px; color: #c04c0b;\"><strong>Discovering Equations<\/strong><\/p>\r\n<p style=\"font-size: 18px;\"><strong>Sparse Identification of Nonlinear Dynamics<\/strong><\/p>\r\nWhile the methods we\u2019ve discussed so far are effective at capturing system behavior, they don\u2019t necessarily yield interpretable models. 
In some applications, like <a href=\"https:\/\/www.mathworks.com\/discovery\/quantitative-systems-pharmacology.html\">quantitative systems pharmacology<\/a>, understanding the dynamics can be just as important as learning them. In these cases, equation discovery techniques like <a href=\"https:\/\/github.com\/matlab-deep-learning\/SciML-and-Physics-Informed-Machine-Learning-Examples\/tree\/main\/universal-differential-equations\">Sparse Identification of Nonlinear Dynamics (SINDy)<\/a> offer a way to extract mathematical models describing the dynamics directly from data.\r\n<h6><\/h6>\r\nSINDy assumes that the system\u2019s dynamics can be expressed as a sparse linear combination of candidate functions, such as polynomials and trigonometric functions, and identifies the most relevant terms using sparse regression. This results in a compact, interpretable model that captures the underlying physics.\r\n<h6><\/h6>\r\nFor example, in the case of the damped pendulum, we can combine the previously described UDE approach with SINDy to recover a mechanistic form for the friction force. 
In this example, the true friction term is\r\n\\[ h(\\theta, \\dot{\\theta}) = -(c_1 + c_2 \\dot{\\theta}) \\dot{\\theta},\r\n\\]\r\n<h6><\/h6>\r\nwhere \\(c_1=0.2\\) and \\(c_2= 0.1\\).\r\n<h6><\/h6>\r\nThe following code extracts the learned friction term from the UDE and uses SINDy to uncover a mathematical model that describes it.\r\n<pre>% Extract trained network representing the learned friction from the UDE\r\nfNetTrained = nODEnet.Layers(2).Network;\r\nhNetTrained = removeLayers(fNetTrained,[\"g\",\"combine\"]);\r\nlrn = hNetTrained.Learnables;\r\nlrn = dlupdate(@dlarray, lrn);\r\nhNetTrained = initialize(hNetTrained);\r\nhNetTrained.Learnables = lrn;\r\n\r\n% Evaluate the learned friction term at the training data points\r\nhEval = predict(hNetTrained,Y',InputDataFormats=\"CB\");\r\n% Define candidate basis functions for SINDy: omega, omega^2, omega^3 \r\ne1 = @(X) X(2,:);\r\ne2 = @(X) X(2,:).^2;\r\ne3 = @(X) X(2,:).^3;\r\nE = @(X) [e1(X); e2(X); e3(X)];\r\n\r\n% Evaluate the basis functions at the training points\r\nEEval = E(Y');\r\n\r\n% Sequentially solve h = W*E with thresholding to induce sparsity\r\niters = 10;\r\nthreshold = 0.05;\r\nWs = cell(iters,1);\r\n\r\n% Initial least-squares solution for W\r\nW = hEval\/EEval;\r\nWs{1} = W;\r\nfor iter = 2:iters\r\n    % Zero out small coefficients for sparsity\r\n    belowThreshold = abs(W)&lt;threshold;\r\n    W(belowThreshold) = 0;\r\n    % Recompute nonzero coefficients using least squares\r\n    for i = 1:size(W,1)\r\n        aboveThreshold_i = ~belowThreshold(i,:);\r\n        W(i,aboveThreshold_i) = hEval(i,:)\/EEval(aboveThreshold_i,:);\r\n    end\r\n    Ws{iter} = W;\r\nend\r\n\r\n% Display the identified equation for the friction term\r\nWidentified = Ws{end};\r\nfprintf(...\r\n    \"Identified h = %.2f y + %.2f y^2 + %.2f y^3 \\n\", ...\r\n    Widentified(1), Widentified(2), Widentified(3))\r\n<\/pre>\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1701\" height=\"530\" 
class=\"alignnone size-full wp-image-17825\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/07\/SINDy.png\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<em><strong>Figure 6:<\/strong> Here SINDy is used to identify a mechanistic form for the friction from pendulum trajectory data.<\/em>\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 20px; color: #c04c0b;\"><strong>Solving Known Equations<\/strong><\/p>\r\n<p style=\"font-size: 18px;\"><strong>Physics-Informed Neural Network<\/strong><\/p>\r\nSo far, we\u2019ve looked at methods that learn the system\u2019s dynamics. Rather than learning the dynamics themselves, <a href=\"https:\/\/www.mathworks.com\/discovery\/physics-informed-neural-networks.html\">Physics-Informed Neural Networks (PINNs)<\/a> learn the <em>solution<\/em> to a known differential equation by embedding this equation directly into the loss function. This is typically achieved using automatic differentiation or other numerical differentiation techniques. PINNs incorporate the governing equations as soft constraints in the loss function, penalizing deviations from the physical laws during training. 
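In generic numerical terms, the soft constraint is just the mean-squared residual of the governing equation evaluated at collocation points. The sketch below (NumPy, using central finite differences in place of automatic differentiation, and the small-angle solution \( \theta(t) = 0.1\cos(\omega_0 t) \) as a stand-in for a network\u2019s output) shows the quantity being penalized:

```python
import numpy as np

omega0 = 3.0
t = np.linspace(0, 3, 301)   # collocation points
dt = t[1] - t[0]

# Candidate solution evaluated at the collocation points. Here the
# small-oscillation solution theta(t) = 0.1*cos(omega0*t) stands in for a
# network's output, so the residual should be nearly zero.
theta = 0.1 * np.cos(omega0 * t)

# Second derivative via central finite differences (a numerical stand-in
# for the automatic differentiation a PINN would use).
theta_tt = (theta[2:] - 2 * theta[1:-1] + theta[:-2]) / dt**2

# Residual of theta'' + omega0^2 * sin(theta) at interior points.
residual = theta_tt + omega0**2 * np.sin(theta[1:-1])
physics_loss = np.mean(residual**2)
```

For this near-exact candidate the residual, and hence the physics loss, is close to zero; a poor candidate solution would produce a large loss, which is exactly the signal used to update the network during training.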
They can also easily incorporate available measurement data of the solution function into the loss function as an additional, supervised term.\r\n<h6><\/h6>\r\nFor the pendulum, you can use Deep Learning Toolbox to define a custom loss function that penalizes deviations from the governing equation \\(\\ddot{\\theta} = -\\omega_0^2 \\sin \\theta \\):\r\n<pre>function loss = physicsInformedLoss(net,T,omega0)\r\n\tTheta = forward(net,T);\r\n\tThetatt = dllaplacian(stripdims(Theta),stripdims(T),1);\r\n\tresidual = Thetatt + omega0^2*sin(Theta);\r\n\tloss = mean(residual.^2,'all');\r\nend\r\n<\/pre>\r\n<h6><\/h6>\r\nFunctions like <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/dlarray.dljacobian.html\">dljacobian<\/a> and <a href=\"https:\/\/www.mathworks.com\/help\/releases\/R2025a\/deeplearning\/ref\/dlarray.dllaplacian.html\">dllaplacian<\/a> make it straightforward to compute the derivative terms needed for PINNs. Alternatively, you can generate a PINN loss function directly from a symbolic differential equation using the functionality in <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/172049-pinn-loss-function-generation-with-symbolic-math\">PINN Loss Function Generation with Symbolic Math<\/a>, available for download on File Exchange. 
The code snippet below provides an example usage of this functionality for the same pendulum equation:\r\n<pre>syms theta(t)\r\npendulumODE = diff(theta,t,t) - omega0^2*sin(theta(t)) == 0;\r\nphysicsInformedLoss = ode2PinnLossFunction(pendulumODE,ComputeMeanSquaredError=true);\r\n<\/pre>\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-17834\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/07\/PINN_image.png\" alt=\"\" width=\"400\" height=\"257\" \/>\r\n<h6><\/h6>\r\n<em><strong>Figure 7:<\/strong> PINN predicts the angle over time using the governing pendulum equation and observations of the pendulum\u2019s angle.<\/em>\r\n<h6><\/h6>\r\nWhile PINNs may not outperform traditional numerical methods for simple, low-dimensional problems like the pendulum, they can provide advantages for <a href=\"https:\/\/blogs.mathworks.com\/finance\/2025\/01\/07\/physics-informed-neural-networks-pinns-for-option-pricing\/\">high-dimensional partial differential equations (PDEs)<\/a>, <a href=\"https:\/\/github.com\/matlab-deep-learning\/SciML-and-Physics-Informed-Machine-Learning-Examples\/tree\/main\/inverse-problems-using-physics-informed-neural-networks\">inverse problems<\/a>, and cases with a blend of data and physical equations.\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 18px;\"><strong>Neural Operator<\/strong><\/p>\r\nNeural operators, such as the <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ug\/solve-pde-using-fourier-neural-operator.html\">Fourier Neural Operator (FNO)<\/a>, are a class of neural PDE solvers designed to learn mappings between function spaces. 
Unlike PINNs, which are typically trained to solve a specific instance of an ODE or PDE, neural operators learn a mapping from the space of input functions (such as initial conditions or forcing functions) directly to the space of solution functions, enabling fast prediction for new scenarios without the need for retraining.\r\n<h6><\/h6>\r\nAlthough neural operators do not inherently encode physical laws, their architectures are inspired by the mathematical structure of PDEs. When the governing equations are known, neural operators can be combined with the PINN methodology by embedding those equations into the loss function, resulting in physics-informed neural operators.\r\n<h6><\/h6>\r\nFor example, consider a pendulum that is subject to a time-dependent forcing function \\( f(t) \\), so that\r\n\\[\\ddot{\\theta} + \\omega_0^2 \\sin \\theta = f(t) .\\]\r\n<h6><\/h6>\r\nA neural operator can be applied to learn the map from the space of forcing functions to the corresponding space of solution functions. 
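Training such an operator requires many input-output pairs, each consisting of a forcing function together with the solution it induces. One such pair can be generated with a classical solver; the sketch below (NumPy, with an illustrative forcing \( f(t) = 0.5\sin 2t \) and a hand-rolled RK4 integrator) shows the kind of data involved:

```python
import numpy as np

omega0 = 3.0
forcing = lambda t: 0.5 * np.sin(2 * t)   # one sample input function f(t)

def pendulum_rhs(t, x):
    # State x = [theta, omega]; forced pendulum dynamics.
    theta, omega = x
    return np.array([omega, -omega0**2 * np.sin(theta) + forcing(t)])

def rk4(rhs, x0, t):
    """Integrate dx/dt = rhs(t, x) with classic RK4 over the time grid t."""
    xs = [np.asarray(x0, dtype=float)]
    for t0, t1 in zip(t[:-1], t[1:]):
        h, x = t1 - t0, xs[-1]
        k1 = rhs(t0, x)
        k2 = rhs(t0 + h / 2, x + h / 2 * k1)
        k3 = rhs(t0 + h / 2, x + h / 2 * k2)
        k4 = rhs(t1, x + h * k3)
        xs.append(x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.stack(xs)

t = np.linspace(0, 10, 1001)
theta = rk4(pendulum_rhs, [0.0, 0.0], t)[:, 0]
# (forcing(t), theta) is one input-output sample; an operator is trained on
# many such pairs, each with a different forcing function.
```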
Once trained, the model can rapidly predict the pendulum\u2019s response to new forcing functions.\r\n<h6><\/h6>\r\nFourier Neural Operators can be implemented in MATLAB by defining a <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ug\/define-custom-deep-learning-layers.html\">custom layer<\/a> in Deep Learning Toolbox, as shown in the documentation example <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ug\/solve-pde-using-fourier-neural-operator.html\">Solve PDE Using Fourier Neural Operator<\/a>.\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1671\" height=\"839\" class=\"alignnone size-full wp-image-18044\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/07\/FNO_image_v2.png\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<em><strong>Figure 8:<\/strong> Fourier Neural Operator learns mappings between function spaces, such as the map from forcing function to angular position in the pendulum example.<\/em>\r\n<h6><\/h6>\r\nWhile neural operators are not necessary for simple problems like the pendulum, where re-solving the equation for a new \\(f\\) is computationally inexpensive, they are especially useful for repeated large-scale PDE simulations where traditional solvers can require substantial computation time.\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 18px;\"><strong>GNN for Geometric Deep Learning<\/strong><\/p>\r\nWhile the FNO is well-suited for problems defined on regular grids, many engineering applications involve data on complex or irregular geometries. In these cases, <a href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2025\/02\/04\/graph-neural-networks-in-matlab\/\">Graph Neural Networks (GNNs)<\/a> can provide a powerful alternative by operating directly on mesh or graph-based representations. 
This makes GNNs particularly effective for large-scale PDE simulations on complex domains.\r\n<h6><\/h6>\r\nWhile GNNs may not be relevant for this simple 1D pendulum example, they become valuable tools for more complex domains, such as predicting displacement fields in a robotic arm for different geometric designs. Once trained, GNNs can deliver rapid predictions for new designs, enabling real-time exploration of \u201cwhat-if\u201d scenarios. To learn how to train a GNN on finite element analysis data for PDE simulations, see the example <a href=\"https:\/\/www.mathworks.com\/help\/pde\/ug\/solve-heat-equation-using-graph-neural-network.html\">Solve Heat Equation Using Graph Neural Network<\/a>.\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 20px; color: #c04c0b;\"><strong>Summary of Methods<\/strong><\/p>\r\nSee the following summary table for a quick reference of all the physics-informed machine learning methods we discussed throughout this post.\r\n<h6><\/h6>\r\n<table width=\"90%;\">\r\n<tbody>\r\n<tr style=\"border: solid 1px #bfbfbf; background-color: #555555; color: white; font-size: 110%;\">\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\"><strong>Objective<\/strong><\/td>\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\"><strong>Approach<\/strong><\/td>\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\"><strong>What\u2019s Learned?<\/strong><\/td>\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\"><strong>Physics Embedded?<\/strong><\/td>\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\"><strong>How is Physics Embedded?<\/strong><\/td>\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\"><strong>Example Systems<\/strong><\/td>\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\"><strong>Notes<\/strong><\/td>\r\n<\/tr>\r\n<tr style=\"border: solid 1px #bfbfbf; 
border-bottom: solid 2px #bfbfbf;\">\r\n<th style=\"padding: 10px; text-align: left;\" rowspan=\"4\"><strong>Modeling unknown dynamics<\/strong><\/th>\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\"><strong>Neural ODE<\/strong><\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\">Dynamics function\r\n\r\n\\(f\\)<\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-17873 aligncenter\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/07\/circle.png\" alt=\"\" width=\"27\" height=\"27\" \/><\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\"><strong>Architecture:<\/strong> ODE structure<\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\">Any ODE system<\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\">Flexible for any ODE; handles irregularly sampled time-series data<\/td>\r\n<\/tr>\r\n<tr style=\"border: solid 1px #bfbfbf;\">\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\"><strong>Neural State-Space<\/strong><\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\">State-update\/output functions<\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-17873 aligncenter\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/07\/circle.png\" alt=\"\" width=\"27\" height=\"27\" \/><\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\"><strong>Architecture:<\/strong> ODE &amp; state-space structure<\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\">Any ODE system with observable outputs<\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\">Can be continuous and discrete; handles 
irregular sampling if continuous<\/td>\r\n<\/tr>\r\n<tr style=\"border: solid 1px #bfbfbf;\">\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\"><strong>UDE<\/strong><\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\">Unknown part of dynamics function<\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-17882 aligncenter\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/07\/tick.png\" alt=\"\" width=\"34\" height=\"34\" \/><\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\"><strong>Objective:<\/strong> blends physics with learned corrections; <strong>Architecture:<\/strong> ODE structure<\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\">ODE systems with partially known dynamics<\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\">Leverages partial physics; handles irregular sampling<\/td>\r\n<\/tr>\r\n<tr style=\"border: solid 1px #bfbfbf;\">\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\"><strong>HNN<\/strong><\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\">Hamiltonian\r\n\r\n\\(H\\)<\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-17882 aligncenter\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/07\/tick.png\" alt=\"\" width=\"34\" height=\"34\" \/><\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\"><strong>Loss:<\/strong> Hamilton\u2019s equations encoded as soft constraints; <strong>Architecture:<\/strong> Network predicts Hamiltonian \\(H \\)<\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\">Mechanical<\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 
1px #bfbfbf;\">Accounts for conservation of energy through model structure<\/td>\r\n<\/tr>\r\n<tr style=\"border: solid 1px #bfbfbf;\">\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\"><strong>Discovering equations<\/strong><\/td>\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\"><strong>SINDy<\/strong><\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\">Sparse coefficients (equations)<\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-17882 aligncenter\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/07\/tick.png\" alt=\"\" width=\"34\" height=\"34\" \/><\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\"><strong>Architecture:<\/strong> Sparse regression over function library<\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\">Any measurable system<\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\">Produces interpretable models<\/td>\r\n<\/tr>\r\n<tr style=\"border: solid 1px #bfbfbf;\">\r\n<th style=\"padding: 10px; text-align: left;\" rowspan=\"3\"><strong>Solving known equations<\/strong><\/th>\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\"><strong>PINN<\/strong><\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\">Solution to governing equation<\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-17882 aligncenter\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/07\/tick.png\" alt=\"\" width=\"34\" height=\"34\" \/><\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\"><strong>Loss: <\/strong>PDE\/ODE residuals<\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px 
Mass-spr">
#bfbfbf;\">Mass-spring, Navier-Stokes<\/td>\r\n<td style=\"padding: 10px; text-align: left; border: solid 1px #bfbfbf;\">Requires explicit equations; particularly suited for limited data, inverse problems, high-dimensional PDEs<\/td>\r\n<\/tr>\r\n<tr style=\"border: solid 1px #bfbfbf;\">\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\"><strong>FNO<\/strong><\/td>\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\">Solution operator<\/td>\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-17873 aligncenter\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/07\/circle.png\" alt=\"\" width=\"27\" height=\"27\" \/><\/td>\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\"><strong>Architecture: <\/strong>Motivated by spectral methods for PDEs<\/td>\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\">PDE families, fluid and wave equations<\/td>\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\">Operates on uniform grids; capable of generalizing across grid resolutions; enables fast what-if analysis after training<\/td>\r\n<\/tr>\r\n<tr style=\"border: solid 1px #bfbfbf;\">\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\"><strong>GNN<\/strong><\/td>\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\">Solution on irregular geometries<\/td>\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-17873 aligncenter\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/07\/circle.png\" alt=\"\" width=\"27\" height=\"27\" \/><\/td>\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\"><strong>Architecture: <\/strong>Message-passing on graph\/mesh<\/td>\r\n<td style=\"padding: 
10px; border: 1px solid #bfbfbf; text-align: left;\">Heat transfer, structural mechanics, CFD on mesh data<\/td>\r\n<td style=\"padding: 10px; border: 1px solid #bfbfbf; text-align: left;\">Well-suited for complex, irregular geometries; enables fast what-if analysis after training<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<h6><\/h6>\r\n<strong><em>Table 1:<\/em><\/strong><em> Summary of physics-informed machine learning approaches for modeling unknown dynamics, identifying governing equations, and solving known PDEs and ODEs. The \u201cPhysics Embedded?\u201d column indicates the degree to which physical knowledge is incorporated: <\/em><em>a circle denotes a weak embedding (e.g., physics-inspired architecture), while <\/em><em>a checkmark indicates direct and explicit use of physics in the model objective, structure, or loss. The \u201cNotes\u201d column highlights typical contexts where each approach is particularly suitable.<\/em>\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 20px; color: #c04c0b;\"><strong>Conclusion and Final Thoughts<\/strong><\/p>\r\nThroughout this two-part blog series, we have surveyed the scientific and engineering tasks suited to physics-informed machine learning, the types of physics knowledge that can be incorporated, and the ways this knowledge can be embedded, providing educational MATLAB examples along the way. Together, these posts have shown how physics-informed machine learning bridges the gap between data-driven modeling and established scientific principles, and can lead to more accurate, reliable, and interpretable predictions.\r\n<h6><\/h6>\r\nWhether you are just starting to explore this area or looking to implement advanced techniques in your own work, I hope this series has provided you with a solid foundation and practical guidance in the ever-evolving field of physics-informed machine learning. 
Stay tuned for future posts, and let us know in the comments if there are any topics or techniques that you\u2019d like to learn more about!\r\n<h6><\/h6>","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/07\/Neural_SS_idea.jpeg\" class=\"img-responsive attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"\" decoding=\"async\" loading=\"lazy\" \/><\/div><p>\r\nThis blog post is from Mae Markowski, Senior Product Manager at MathWorks.\r\n\r\n&nbsp;\r\n\r\nIn our previous post, we laid the groundwork for physics-informed machine learning, exploring what it is, why... <a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2025\/07\/14\/physics-informed-machine-learning-methods-and-implementation\/\">read more >><\/a><\/p>","protected":false},"author":194,"featured_media":17798,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[9,12],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/17762"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/users\/194"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/comments?post=17762"}],"version-history":[{"count":88,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/17762\/revisions"}],"predecessor-version":[{"id":18221,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/17762\/revisions\/18221"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/media\/17798"}],"wp:attachment":[{"h
ref":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/media?parent=17762"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/categories?post=17762"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/tags?post=17762"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}