Constrained deep learning is an advanced approach to training robust deep neural networks by incorporating constraints into the learning process. These constraints can be based on physical laws, logical rules, or other domain-specific knowledge. For example, if you are designing a deep learning model that estimates the state of charge (SoC) of a mobile phone's battery, it makes sense that the SoC should be monotonically increasing while the phone is plugged in and charging, and monotonically decreasing while the phone is unplugged and in use.
Imposing constraints in the learning process guarantees that certain desirable properties are present in the trained neural network by design. This makes it easier to ensure that the network meets the specified requirements, and therefore easier to verify. Constrained deep learning is particularly relevant when developing deep learning models in safety-critical or regulated industries, such as aerospace, automotive, healthcare, and finance. To learn more about Verification and Validation for AI, check out this blog post series.
A newly available repository provides code, examples, and technical information for designing constrained models that exhibit the desired behavior. More specifically, the repository shows how to embed monotonicity, convexity, and Lipschitz continuity as constraints in deep learning models.
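As a quick refresher on these three properties: a function f is monotonically increasing in an input x if its output never decreases as x grows; it is convex if f(a*x1 + (1-a)*x2) <= a*f(x1) + (1-a)*f(x2) for all a in [0,1]; and it is Lipschitz continuous with constant L if ||f(x1) - f(x2)|| <= L*||x1 - x2||, which bounds how quickly the output can change with the input.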
You have two options to access the repository: clone it from GitHub, or open it directly in MATLAB Online.
The repository provides introductory examples to get you started with constrained deep learning. Check out these examples to learn how to use the buildConstrainedNetwork function to design monotonic, convex, and Lipschitz continuous neural networks.
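For instance, a convex network can be built with the same function. The following is a minimal sketch; the "fully-convex" constraint type is an assumption based on the repository's documentation, so check the repository for the exact argument names.

% Sketch: a 1-D fully input-convex network built with the same function.
% The "fully-convex" constraint type is an assumption based on the
% repository's documentation; check the repo for exact option names.
inputSize = 1;
numHiddenUnits = [16 8 4 1];
ficnn = buildConstrainedNetwork("fully-convex",inputSize,numHiddenUnits)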
One of these introductory examples shows how to create a 1-D fully monotonic neural network (FMNN) architecture. Fully monotonic neural networks belong to a specific class of architectures with constraints applied to both the network structure and the weights. You can build a simple FMNN using fully connected layers and fullsort, a gradient norm preserving activation function. Specify ResidualScaling to balance monotonic growth against smoothness of the solution, and set MonotonicTrend to "increasing".
% Build a 1-D fully monotonic network: fullsort activations, residual
% scaling of 2, and an increasing monotonic trend
inputSize = 1;
numHiddenUnits = [16 8 4 1];
fmnnet = buildConstrainedNetwork("fully-monotonic",inputSize,numHiddenUnits, ...
    Activation="fullsort", ...
    ResidualScaling=2, ...
    MonotonicTrend="increasing")
fmnnet = 
  dlnetwork with properties:

         Layers: [11x1 nnet.cnn.layer.Layer]
    Connections: [11x2 table]
     Learnables: [8x3 table]
          State: [0x3 table]
     InputNames: {'input'}
    OutputNames: {'addition'}
    Initialized: 1

  View summary with summary.
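Because monotonicity is enforced by construction, you can informally sanity check it even before training. The following quick check on hypothetical data is a sketch and not part of the repository's example.

% Informal monotonicity check on hypothetical data (not from the
% repository): an increasing 1-D input should give nondecreasing
% predictions, even before the network is trained
x = dlarray(linspace(-1,1,100),"CB");   % 1 channel, 100 observations
y = predict(fmnnet,x);
all(diff(extractdata(y)) >= 0)          % expected: logical 1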
The repository also provides longer workflow examples, such as the Remaining Useful Life Estimation Using Monotonic Neural Networks example, which shows how to guarantee monotonically decreasing predictions on a remaining useful life (RUL) task by combining partially and fully monotonic networks.
The results for a specific test engine show that the fully monotonic neural network (FMNN) performs well in estimating the RUL for turbofan engine data. The FMNN outperforms the CNN (also trained in this example) for this test engine, most likely because it is guaranteed by construction to always produce a monotonically decreasing solution. Additionally, even though no restriction was placed on the FMNN requiring a linear output, the FMNN displays linear behavior and closely follows the true RUL.
This example demonstrates a viable approach to obtaining an overall monotonic RUL network. Note that the results could be further improved by preprocessing the signal data with denoising, smoothing, or other techniques.
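To give a flavor of how a decreasing constraint is specified, here is a minimal sketch with hypothetical sizes; it is not the example's full workflow.

% Minimal sketch (hypothetical sizes, not the example's full workflow):
% a fully monotonic network constrained to a decreasing trend
numFeatures = 14;   % hypothetical number of sensor channels
rulNet = buildConstrainedNetwork("fully-monotonic",numFeatures,[32 16 1], ...
    Activation="fullsort", ...
    MonotonicTrend="decreasing");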
If you want to delve deeper into the techniques for, and applications of, building robust neural networks with constrained deep learning, the repository provides comprehensive technical articles. You don't have to understand or even read the technical articles to use the code in the repository. However, if you are feeling brave or curious enough, these articles explain key concepts of AI verification in the context of constrained deep learning. They include discussions on how to achieve the specified constraints in neural networks at construction and training time, as well as how to derive and prove useful properties of constrained networks in AI verification applications.
So, check out the AI Verification: Constrained Deep Learning repository to learn how to build robust neural networks with constrained deep learning, try out the code included in the repository, and comment below to tell your fellow AI practitioners how and where you applied constrained deep learning.