By Guy Rouleau
This week I received a large model that gave different results when a referenced model was simulated in Accelerator mode compared to Normal mode. To give you an idea, the top model in this application looked like this:
After a quick inspection of the model, I found nothing obvious. Accelerator mode uses code generation technology to run the simulation. Real-Time Workshop Embedded Coder offers the Code Generation Verification (CGV) API to verify the numerical equivalence of the generated code and the simulation results. I decided to use this API to test the code generated for this referenced model. Here is how it works.
Log Input Data
To begin, I enabled signal logging for the signals entering the referenced model in Normal mode. Then I simulated the top model. This data allows me to test the Controller subsystem alone, without the plant model.
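If you prefer scripting this step, signal logging can also be enabled programmatically. Here is a minimal sketch; the model name 'TopModel' and the block path 'TopModel/Sensors' are placeholders for illustration:

```matlab
% Enable logging on the output port feeding the referenced model
% ('TopModel/Sensors' is a placeholder block path):
ph = get_param('TopModel/Sensors', 'PortHandles');
set_param(ph.Outport(1), 'DataLogging', 'on');

% Simulate the top model; logged signals end up in the 'logsout' dataset
simOut = sim('TopModel', 'SignalLogging', 'on', ...
             'SignalLoggingName', 'logsout');
```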
Set up the model to be analyzed
From now on, I only need the referenced model. In this model, I enabled data import in the model configuration so that the previously logged data can be read by the Inport block.
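As a sketch, this configuration can also be done from the command line, assuming the referenced model is named 'Isolated' and the logged input data has been placed in a workspace variable 'in' (both names are assumptions):

```matlab
% Load external input data through the Inport block
% ('in' is a placeholder workspace variable holding the logged data):
set_param('Isolated', 'LoadExternalInput', 'on', ...
          'ExternalInput', 'in');
```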
Then I enabled signal logging for the signals I want to compare. For a first run, I logged only the controller output, to confirm that this setup reproduces the issue.
Write a test script for SIL testing
The CGV API is designed to perform Software-In-the-Loop (SIL) and Processor-In-the-Loop (PIL) testing for a model. To write a script validating the numerical equivalence of the generated code, I started with the documentation section Example of Verifying Numerical Equivalence Between Two Modes of Execution of a Model. Based on this example, it took me only a few minutes to write the following:
cgvModel = 'Isolated';

% Configure the model
cgvCfg = cgv.Config(cgvModel, 'Connectivity', 'sil');
cgvCfg.configModel();

% Execute the simulation
cgvSim = cgv.CGV(cgvModel, 'Connectivity', 'sim');
result1 = cgvSim.run();

% Execute the generated code
cgvSil = cgv.CGV(cgvModel, 'Connectivity', 'sil');
result2 = cgvSil.run();

% Get the output data
simData = cgvSim.getOutputData(1);
silData = cgvSil.getOutputData(1);

% Compare results
[matchNames, ~, mismatchNames, ~] = ...
    cgv.CGV.compare(simData, silData, 'Plot', 'mismatch');
At the completion of this script, the following figure appeared, confirming that the results from the generated code are different from the simulation.
Identify the origin of the difference
To identify the origin of the difference, I enabled logging for more signals in the model. After a few iterations, without modifying the above script, I identified the following subsystem, where the input was identical but the output was different:
When looking at the generated code, I found that the line generated for this subsystem is:
(*rty_Out2) = 400.0F * (*rtu_In1) * 100.0F;
I made a quick test and manually changed the code to store the intermediate result in a local variable:

real32_T tmp = 400.0F * (*rtu_In1);
(*rty_Out2) = tmp * 100.0F;
Surprisingly, this modified code produces results identical to simulation!
Consult an expert
I have to admit, I had no idea how two such similar pieces of code could lead to different results, so I asked one of our experts in numerical computation.
I learned that some compilers use what is called extended precision. When a line of code includes more than one operation, the compiler can keep intermediate results in a larger container, such as an 80-bit floating-point register, instead of rounding each one to the declared single precision. The goal is to provide more accurate results; in this case, however, it also leads to surprises.
After understanding this behavior, we recommended a few options to the user to avoid this type of expression folding in the generated code and, consequently, this behavior of the compiler.
Without the Code Generation Verification API, I would have spent a lot of time wiring debugging signals in the model. This tool helped me to quickly identify the root cause of the problem without modifying the model. Note that the CGV API can do a lot more than what I showed here. Look at the CGV documentation for more details.
Now it's your turn
How do you verify that the generated code is numerically equivalent to your model? Leave a comment here.
8 Comments
You state that you made some recommendations based on this analysis. I would really like to see your recommendations for the system you described.
Yes, what are the recommendations that you made?
This is a tough problem that unfortunately occurs often.
Thank you Jim and Thierry for your interest. We recommended two approaches, one affecting the entire model, one affecting only a specific signal:
- You can prevent block computations from being collapsed into single expressions in the generated code by disabling the optimization option “Eliminate superfluous local variables (Expression folding)”:
- Since expression folding can dramatically improve the efficiency of the generated code, it might not be appropriate to disable it for your entire model. In that case, you can declare a specific signal as a Test Point. This disables code generation optimizations for that signal without affecting the rest of the model:
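For example, a test point can also be set programmatically; a sketch, with a placeholder block path:

```matlab
% Mark the output port of the suspect block as a test point
% ('Isolated/Controller/Gain2' is a placeholder path):
ph = get_param('Isolated/Controller/Gain2', 'PortHandles');
set_param(ph.Outport(1), 'TestPoint', 'on');
```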
@Jim Ross, @Thierry – Thanks for the questions… Guy and I talked about this some more and have updated the post with some clarifying comments. Mostly repeated here:
The root cause of this difference is not the RTW Expression Folding option. Turning expression folding off just happens to work around the behavior of the compiler. The right way to prevent the compiler from using extended precision is to provide the compiler flags that force it to do the computation as written.
We see this problem often and our models are so complicated we usually have no idea where to begin to resolve it by setting test points or looking at generated code.
Unfortunately we also don’t have time to change a model to model reference and test each block individually to see if it starts to deviate from compiled code.
Model Advisor usually advises the user to turn on these options which not only cause differences, but also can cause repeated recompiles when parameters are changed. Some thought should be given to adjusting Model Advisor to offer advice to solve this problem.
Hi, I have a small problem with Simulink. When I first simulated my induction motor block diagram, it gave an error stating there is no valid compiler. I am presently using MATLAB 7.6 (R2008a), so I installed Microsoft Visual C++ and the SDK from the web. Now when I enter mex -setup, it still does not detect any compiler. How do I interface the present compiler with MATLAB? Any help will be appreciated. Thanks in advance.
@Abirami, You can find the list of supported compilers on the MathWorks website. In your case, the compilers supported for R2008a are listed here:
Hi again. I need to know one more thing. On the web page you mentioned, it says I should use Microsoft Visual C++ 2005 or 2008 (Professional Edition), but when I searched the supported compilers site, since mine is a 64-bit platform, it says I should download and install Microsoft Visual C++ 2010 along with SDK 7.1. Can you please be more specific about what I should do? And should I download a separate C compiler as well? It is mentioned on your site.