Today, Ajay Puvvala is back to talk about testing.
In last week's post, we looked at how to apply the MATLAB Unit Testing Framework in a Simulink context. We authored a scripted test to verify the output of the generated code of a simple model against a normal mode simulation. In that test, we:
1. Simulated the system under test in normal and software-in-the-loop (SIL) modes to obtain the expected and actual simulation results
2. Leveraged the MATLAB Unit qualification method verifyEqual to compare the numeric results, with and without a tolerance
3. Developed a simple Simulation Data Inspector (SDI) diagnostic to visualize the signal differences for failure investigation
As you probably remember, steps 2 and 3 required writing a fair amount of code to extract the signals to compare and to define a diagnostic that could help with failure analysis. That isn't unexpected: the MATLAB Unit Testing Framework is a general-purpose tool for testing any software with a MATLAB API. It doesn't specialize in a particular application domain.
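For reference, the core of such a scripted test looks roughly like this. This is a minimal sketch; the model name, signal indexing, and tolerance value are illustrative, not taken from last week's actual code:

```matlab
% Sketch of a class-based MATLAB Unit test comparing SIL output against
% a normal mode simulation. Model name and tolerance are illustrative.
classdef tEquivalence < matlab.unittest.TestCase
    methods (Test)
        function silMatchesNormalMode(testCase)
            in = Simulink.SimulationInput('myModel');  % hypothetical model

            % Expected results: normal mode simulation
            expectedOut = sim(in.setModelParameter( ...
                'SimulationMode', 'normal'));

            % Actual results: software-in-the-loop simulation
            actualOut = sim(in.setModelParameter( ...
                'SimulationMode', 'software-in-the-loop (sil)'));

            % Compare the logged output with an absolute tolerance
            testCase.verifyEqual( ...
                actualOut.yout{1}.Values.Data, ...
                expectedOut.yout{1}.Values.Data, ...
                'AbsTol', 1e-6);
        end
    end
end
```

Extracting the right signals from the two SimulationOutput objects, and building a useful diagnostic on failure, is exactly the boilerplate that Simulink Test removes.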
That's where Simulink Test comes into the picture, bringing specialization for testing Simulink models. It provides tools for authoring, executing, and managing simulation-based tests of models and generated code. It also lets you create test harnesses that assist with testing selected components of a model independently. Its integration with products such as Simulink Verification and Validation makes it a great platform for model-based testing.
Haven't tried it yet? I strongly recommend checking out the product page.
Our goal is the same as last week: validate the results of the generated code, via software-in-the-loop (SIL) simulation, against a normal mode simulation of a simple model.
Test Manager Setup
After launching the Test Manager from the Analysis menu of the model, I select to create a new Test File From Model:
and I specify the type of test:
Selecting an equivalence test lets you define simulation settings for two simulations. In this case, we define Simulation 1 as the normal mode simulation and Simulation 2 as the software-in-the-loop simulation.
In the Equivalence Criteria section, you select which signals you want to compare. When you click the Capture button, the Test Manager analyzes the model and lists all the logged signals that could potentially be used in the comparison. In our case, it finds the Outport block, for which logging is enabled.
Once the signals are in the table, you can specify tolerances. As we did last week, we specify an absolute tolerance to allow for the small differences expected between the two simulations.
Running the Test
Now it's time to click the Run button:
When the test finishes, we can inspect the results. The integration of the Simulation Data Inspector into the Test Manager makes it convenient to inspect the results without writing a single line of code. In our example, we can see that the small difference is within the specified tolerance.
Notice the Highlight in Model button. It is convenient for analyzing failures in large models where many signals are logged.
I want to mention a few more items I find very useful in the Test Manager:
- Debug: If you enable the Debug button, the Test Manager will set everything up and pause the simulation at t = 0, allowing you to step through the simulation to understand what is going wrong.
- Parallel: With Parallel Computing Toolbox, this button runs the tests in parallel, potentially saving you lots of time.
- Time-Based Tolerances: When specifying the tolerances in the equivalence criteria, it is possible to specify leading and lagging tolerances. For example, if I am simulating a vehicle and testing which gear the transmission is in, I probably want to allow the shift to happen slightly before or after the baseline.
- Programmatic API: Once your tests are defined and saved, it is easy to run them programmatically. With just three lines of code, you can load the test file, run the tests, and view the results.
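Those three lines look roughly like this (the test file name is illustrative):

```matlab
% Load a saved Test Manager file, run its tests, and open the results
% in the Test Manager. The file name is a placeholder.
tf = sltest.testmanager.load('myEquivalenceTests.mldatx');
results = sltest.testmanager.run;
sltest.testmanager.view;
```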
Now it's your turn
Let us know what you think of the Test Manager by leaving a comment below.
4 Comments
Not sure if there is another article planned, so I thought I would ask a question about how the two testing frameworks can be connected. Since the MATLAB Unit Testing Framework supports CI integration with systems like Jenkins, does that make it easier to support with Simulink Test? Is there a best practice for using both tools, or is it better to stay in one or the other? Thanks.
Yes, we are working on a follow-up post about CI integration with systems like Jenkins. It will be coming soon.
In the meantime (while we put a CI post together), check out https://www.mathworks.com/help/sltest/ug/run-test-files-using-matlab-unit-test.html if you haven't had a chance yet. The section "Test a Model for Continuous Integration Systems" should give you an idea of how Simulink Test, the MATLAB Unit Test framework, and CI fit together.
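The basic idea from that page is to treat the Test Manager file as a MATLAB Unit Test suite and attach a CI-friendly report plugin. A rough sketch, with an illustrative test file name:

```matlab
% Run a Simulink Test file through the MATLAB Unit Test framework and
% produce a TAP report that a CI system such as Jenkins can consume.
import matlab.unittest.TestRunner
import matlab.unittest.plugins.TAPPlugin
import matlab.unittest.plugins.ToFile

suite = testsuite('myEquivalenceTests.mldatx');  % Simulink Test file
runner = TestRunner.withTextOutput;
runner.addPlugin(TAPPlugin.producingOriginalFormat(ToFile('results.tap')));
results = runner.run(suite);
```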
Thank you for this post introducing the Simulink Test Manager, which greatly simplifies Simulink testing.
One example in which I have used the Simulink Test Manager and MATLAB Unit Test together is to define "custom criteria", a section available in the test definition in the Test Manager. Say I want to inject a chirp signal into my model to get a frequency response and verify that it lies within expected bounds. I can use the full power of MATLAB to process the data and generate the frequency response and figure, and then use the MATLAB Unit Test verifyLessThan / verifyGreaterThan syntax to compare the result with the bounds. The result is neatly returned to the Test Manager, and I can get this result and the Bode/Nichols figure included in the test report without any additional coding required. Powerful and neat.
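For readers who haven't tried custom criteria: inside that section, the variable `test` acts like a MATLAB Unit Test case, so the qualification methods are available directly. A sketch of what such a script might look like, where the frequency-response variables and bounds are illustrative names computed earlier in the script:

```matlab
% Sketch of a Test Manager custom criteria script. "test" is provided by
% the Test Manager; freqResponse, upperBoundDB, and lowerBoundDB are
% hypothetical variables computed from the logged simulation data.
magDB = 20*log10(abs(freqResponse));  % magnitude of the response in dB

test.verifyLessThan(max(magDB), upperBoundDB, ...
    'Frequency response exceeds the upper bound');
test.verifyGreaterThan(min(magDB), lowerBoundDB, ...
    'Frequency response falls below the lower bound');
```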