Developer Zone

Advanced Software Development with MATLAB

Comma Separated Goodness 4

Posted by Andy Campbell

Hi folks, today I'd like to introduce ChangQing Wang. ChangQing is the lead developer on MATLAB's performance framework, and in addition to all the great performance testing features he has delivered, he has also found a really easy way to integrate performance results into Jenkins. Check it out!


MATLAB performance testing on Jenkins

"Is it build 3301 or build 3319?" CQ scratched his head, confusion written all over his face. He had noticed a significant increase in the code's runtime, but had no clue which change caused it. He wished he had logged the performance of every change in the project.

As continuous integration is becoming one of the key principles in Agile processes, and as more and more products are adopting continuous delivery practices, performance testing is a crucial step to add to the workflow. CQ never wanted to introduce a performance regression, yet realized too late that there is a real risk of this every time he touches the code for a bug fix or a new feature. "A passing build does not mean everything is OK", CQ pondered, "How can I monitor the performance of my MATLAB project on a CI system?"

The answer to this question is actually twofold:

  1. He needs to add performance tests for the project.
  2. He needs to schedule performance testing runs for each build and report the result.

If you have a MATLAB project and wonder how to write a performance test using the latest (and coolest) testing framework in MATLAB, this page is a good starting point. You can also look at the other blog posts we've made in the past on the topic. In this post, I will not go through the details of how to write performance tests, but rather show an example project with performance tests already written. The project I am using to highlight this example is a super lightweight "library" for three different matrix operations: computing the mean, sum, and eigenvalues of a given matrix.


function out = matrix_mean(M)
% CQ's library for matrix operation

total = matrix_sum(M); % named "total" to avoid shadowing the built-in SUM
nrow = size(M, 1);
ncol = size(M, 2);
out = total/(nrow*ncol);



function out = matrix_sum(M)
% CQ's library for matrix operation

out = 0;
nrow = size(M,1);
ncol = size(M,2);
for i = 1:nrow
    for j = 1:ncol
        out = out + M(i,j);
    end
end



function out = matrix_eig(M)
% CQ's library for matrix operation

out = roots(round(poly(M)));



classdef tMatrixLibrary < matlab.perftest.TestCase
    properties (TestParameter)
        TestMatrix = struct('midSize', magic(600),...
            'largeSize', magic(1000));
    end
    methods (Test)
        function testSum(testCase, TestMatrix)
            testCase.startMeasuring();
            matrix_sum(TestMatrix);
            testCase.stopMeasuring();
        end
        function testMean(testCase, TestMatrix)
            testCase.startMeasuring();
            matrix_mean(TestMatrix);
            testCase.stopMeasuring();
        end
        function testEig(testCase, TestMatrix)
            testCase.assertReturnsTrue(@() size(TestMatrix,1) == size(TestMatrix,2), ...
                'Eig only works on square matrix');
            testCase.startMeasuring();
            matrix_eig(TestMatrix);
            testCase.stopMeasuring();
        end
    end
end

The performance test tMatrixLibrary has one parameterized test for each of the three source files. Notice in testEig, we use an assertReturnsTrue qualification to guarantee the matrix passed into the test is square, as well as start/stopMeasuring to designate the measurement boundary in the test point. There are multiple ways to run the performance tests in MATLAB, but the easiest is probably to use runperf to obtain the results. Once we have the results it is easy to get a high level overview using sampleSummary:

results = runperf('tMatrixLibrary.m')
Running tMatrixLibrary
Done tMatrixLibrary

results = 

  1×6 MeasurementResult array with properties:

    Name
    Valid
    Samples
    TestActivity

Totals:
   6 Valid, 0 Invalid.

sampleSummary(results)

ans =

  6×7 table
                        Name                         SampleSize      Mean       StandardDeviation       Min        Median         Max   
    _____________________________________________    __________    _________    _________________    _________    _________    _________

    tMatrixLibrary/testSum(TestMatrix=midSize)            7        0.0021399       0.00013117        0.0020023    0.0020896    0.0023467
    tMatrixLibrary/testSum(TestMatrix=largeSize)         17        0.0082113       0.00092846        0.0050781    0.0084503    0.0095599
    tMatrixLibrary/testMean(TestMatrix=midSize)          12        0.0021527       0.00020086        0.0019554    0.0021054     0.002559
    tMatrixLibrary/testMean(TestMatrix=largeSize)         8        0.0085206       0.00062801        0.0077265    0.0084615    0.0093073
    tMatrixLibrary/testEig(TestMatrix=midSize)            4          0.15444        0.0010901          0.15364      0.15405      0.15604
    tMatrixLibrary/testEig(TestMatrix=largeSize)          4          0.41783         0.013677          0.40623      0.41421      0.43668

These are nice numbers for evaluating the performance of the project from the MATLAB Command Window. Now let's see how we can report them in a CI system. Using Jenkins as an example, we can create a "Simple Matrix Library" project containing the source and test files shown above.

As a prerequisite, to enable logging of performance data on Jenkins, you can use the Performance plugin, which can be found and installed from the Jenkins plugin manager. It enables a post-build process to capture reports from major testing tools and then generates trend plots over the build history. In addition, it allows setting the latest build status to passed, unstable, or failed based on the reported error percentage. There are several supported report formats, including Final Stats XML, JMeter format, JUnit XML, and so forth. However, we pick the JMeter CSV format for our MATLAB project, since the measurement result object returned by runperf already stores its information in tabular form and, as you will see, it is quite straightforward to generate a JMeter CSV out of these tables. Here are the detailed steps:
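For orientation, a JMeter-style CSV is simply a header row listing the fields, followed by one row per measured sample. An illustrative fragment (the numbers here are made up; we will generate the real rows from the measurement results below) could look like:

```
timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success,failureMessage,bytes,sentBytes,grpThreads,allThreads,latency,idleTime,connect
1525182937123,2,"tMatrixLibrary/testSum(TestMatrix=midSize)",0,,,,true,,0,0,1,1,0,0,0
```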

Step 1: Convert the performance results to CSV format

To kick off, we will create a JMeter CSV file from a measurement result object. First we need to gather the required information. The standard JMeter CSV format includes 16 variables: timeStamp, elapsed, label, responseCode, responseMessage, threadName, dataType, success, failureMessage, bytes, sentBytes, grpThreads, allThreads, latency, idleTime, and connect. Some of these variables are important for our use case and some we can ignore. Four of them are available in the TestActivity table from the measurement result: timeStamp, elapsed (from "MeasuredTime"), label (from "Name") and success (from "Passed"). So let's use the results from our runperf call above. We can extract these columns into a samplesTable and rename the variables:

activityTable = vertcat(results.TestActivity);
activityTable.Properties.VariableNames'

ans =

  12×1 cell array

    {'Name'         }
    {'Passed'       }
    {'Failed'       }
    {'Incomplete'   }
    {'MeasuredTime' }
    {'Objective'    }
    {'Timestamp'    }
    {'Host'         }
    {'Platform'     }
    {'Version'      }
    {'TestResult'   }
    {'RunIdentifier'}
samplesTable = activityTable(activityTable.Objective == categorical({'sample'}),:);
nrows = size(samplesTable, 1);

% Trim the table and change variable names to comply with JMeter CSV format
samplesTable = samplesTable(:, {'Timestamp', 'MeasuredTime', 'Name', 'Passed'});
samplesTable.Properties.VariableNames = {'timeStamp', 'elapsed', 'label', 'success'};

A couple of things to note: the timestamp in JMeter is in Unix-style format, and the elapsed time in JMeter is reported in milliseconds, both of which differ from the MATLAB measurement result. Also, for failed cases, we need to replace the missing values (NaN and NaT) in the measurement result with values acceptable to JMeter. Let's address both of these cleanup items:

% Convert timestamp to unix format, and fill NaT with previous available time
samplesTable.timeStamp = fillmissing(samplesTable.timeStamp,'previous');
samplesTable.timeStamp = posixtime(samplesTable.timeStamp)*1000;

% Convert MeasuredTime to millisecond, and fill NaN with 0
samplesTable.elapsed = fillmissing(samplesTable.elapsed,'constant',0);
samplesTable.elapsed = floor(samplesTable.elapsed*1000);

The "Passed" column stores logical values by default; we need to convert them to strings for the JMeter CSV:

% Convert pass/fail logical to string
samplesTable.success = string(samplesTable.success);

Next, we need to create some default values for the 12 other variables that are less important for us, and append them to the samplesTable:

% Generate additional columns required in JMeter CSV format
responseCode = zeros(nrows, 1);
responseMessage = strings(nrows, 1);
threadName = strings(nrows, 1);
dataType = strings(nrows, 1);
failureMessage = strings(nrows, 1);
bytes = zeros(nrows, 1);
sentBytes = zeros(nrows, 1);
grpThreads = ones(nrows, 1);
allThreads = ones(nrows, 1);
latency = zeros(nrows, 1);
idleTime = zeros(nrows, 1);
connect = zeros(nrows, 1);

auxTable = table(responseCode, responseMessage, threadName, dataType, ...
    failureMessage, bytes, sentBytes, grpThreads, allThreads, ...
    latency, idleTime, connect);

% Append additional columns to the original table
JMeterTable = [samplesTable, auxTable];

Voila! We now have a table in JMeter format with the full set of 16 variables, and we can simply use the writetable function to write it to a CSV file. Notice the strings are quoted to ensure the commas in the test names are not treated as delimiters.

% Write the full table to a CSV file
writetable(JMeterTable, 'PerformanceTestResult.csv', 'QuoteStrings', true);
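For use in the next step, it helps to collect all of the conversion code above into a single function. A minimal sketch, simply gathering the snippets from this step under the name convertToJMeterCSV that we use in the build command below, might look like:

```matlab
function convertToJMeterCSV(results)
% Convert an array of MeasurementResult objects into a JMeter-style CSV
% file that the Jenkins Performance plugin can parse.

% Keep only the sample measurements from the test activity
activityTable = vertcat(results.TestActivity);
samplesTable = activityTable(activityTable.Objective == categorical({'sample'}),:);
nrows = size(samplesTable, 1);

% Trim the table and rename variables to comply with the JMeter CSV format
samplesTable = samplesTable(:, {'Timestamp', 'MeasuredTime', 'Name', 'Passed'});
samplesTable.Properties.VariableNames = {'timeStamp', 'elapsed', 'label', 'success'};

% Unix-style timestamps and elapsed times in milliseconds, with missing
% values replaced by JMeter-acceptable ones
samplesTable.timeStamp = posixtime(fillmissing(samplesTable.timeStamp,'previous'))*1000;
samplesTable.elapsed = floor(fillmissing(samplesTable.elapsed,'constant',0)*1000);
samplesTable.success = string(samplesTable.success);

% Default values for the 12 remaining JMeter variables
responseCode = zeros(nrows,1);    responseMessage = strings(nrows,1);
threadName = strings(nrows,1);    dataType = strings(nrows,1);
failureMessage = strings(nrows,1); bytes = zeros(nrows,1);
sentBytes = zeros(nrows,1);       grpThreads = ones(nrows,1);
allThreads = ones(nrows,1);       latency = zeros(nrows,1);
idleTime = zeros(nrows,1);        connect = zeros(nrows,1);
auxTable = table(responseCode, responseMessage, threadName, dataType, ...
    failureMessage, bytes, sentBytes, grpThreads, allThreads, ...
    latency, idleTime, connect);

% Write the full 16-variable table, quoting strings so commas in test
% names are not treated as delimiters
writetable([samplesTable, auxTable], 'PerformanceTestResult.csv', 'QuoteStrings', true);
end
```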

Step 2: Configure build and post-build actions

Now we can set up performance monitoring on Jenkins! The good news is that after the hard work in step 1, the rest is super easy. Just put the conversion code we've developed above into a function (we called it convertToJMeterCSV) and make sure it is available from the workspace of your Jenkins build. Then you just invoke that function as part of the Jenkins build. Open the project configuration page, add an "Execute Windows batch command" build step, and write the following into the command:

matlab -nodisplay -wait -log -r "convertToJMeterCSV(runperf('tMatrixLibrary.m')); exit"

The output from runperf will be converted and saved to "PerformanceTestResult.csv" locally.

Next, click "Add a post-build action". With the Performance plugin successfully installed on Jenkins, the "Publish Performance test result report" option should appear. Select that, and enter the CSV file name in the "Source data files" field. There are other options to tweak as well, but we will leave them as they are for now. Click the save button to exit the configuration page.

Step 3: Build the project and review the trend

Everything is done. You can build the project several times, then click the "Performance Trend" link on the left to view the trend plots of the response time and percentage of errors:

Notice the statistics in the response time trend are calculated over all tests, which is why the median value can be very different from the average. We can get another view of the trend by clicking into any build (say #14 in our case) and then clicking its "Performance Trend" link:

Here we see a nice summary table presenting the stats of all tests, with green/red indicators showing the difference in results compared to the previous build. All non-passing tests would show up in red; glad we don't have any of those.

That's how we can add a performance report for our MATLAB project on Jenkins. Isn't that easy? Share your thoughts on how the process can be improved. Are there any other performance statistic trends you would like to follow for your MATLAB project?

Get the MATLAB code

Published with MATLAB® R2018a

4 Comments

Michael Wutz replied:
I am just trying to extend my tests from TAP reporting to coverage and performance test reporting. However, while TAP and coverage can be handled by "addPlugin" on the runner object, I don't find this feature for performance testing (R2018a). Is "runperf" currently the only way to run the performance tests? Maybe it's against the nature of performance tests to be run along with coverage and TAP tests at once? While this post is really useful and I can just copy & paste the transformation to the JMeter file into my own runner object, it would be cool to have this integrated into a MATLAB runner object in the first place. Currently my runner object accepts inputs like:
 myRunner = runner('TapTesting',true,'PerformanceTesting', false, 'CoverageTesting', false). 
and all the addPlugin calls, runperf, etc. are handled inside the runner object. In addition, I don't see a way to pass MATLAB suites as an input parameter to runperf:
 mySuite = [matlab.unittest.TestSuite.fromClass(?myTest)]; 
Hey Michael, great questions. Performance testing is different from functional testing, even in terms of the test environment setup and execution. So, while runperf _uses_ a TestRunner (that usually can have plugins added to it) under the hood, it is not really running the tests to determine pass/fail type of functionality. Instead it is more of a measurement operation. We are running these tests to measure something, and in this case what we are measuring is subject to noise, so we need to measure it multiple times (take multiple samples).

Anyway, because these are really two different types of runs, I suggest using two separate build steps for them. You can have one build step which runs your tests and controls whether the job fails or passes, and then a second build step which runs your performance tests to gather and report on the performance data as a separate operation. In fact, you may want to consider having a performance test suite that is different from your functional test suite. This may be important because you want to carefully choose your performance test bed. For performance, you really want to focus on the most time-critical areas of your code; if inconsequential sections of code get slower, it may not be worth looking into or getting notified about. It is OK for things that aren't in your critical workflow to get slower, and you don't want the noise of performance regressions that don't really matter in your workflows to drown out the important regressions that you _DO_ want to be notified about. Note this is different from functional testing, where the software needs to be bug free in every single corner case. The takeaway is that since these purposes are different, you should consider separating your "performance" test suite from your "functional" test suite. Then you can run them as different build steps.
Also, there is an API for running performance tests from TestSuites: you need to create a TimeExperiment, which is analogous to a TestRunner. The default experiment used by runperf is a TimeExperiment.limitingSamplingError. Currently plugins cannot be added to TimeExperiments, but we can consider that for a future release if we understand what the needs are. One puzzle is that many plugins might not do what you expect when running performance tests because, for example, performance testing runs each of the tests multiple times in order to collect multiple samples of the measurement. The TAPPlugin, for instance, would then print a line for every single iteration of each test, which might not be desirable. Hope that helps! Andy
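As a rough sketch of the TestSuite-based API described above (using the tMatrixLibrary class from this post):

```matlab
% Build a suite explicitly, then run it through a time experiment,
% which is the performance-testing analog of a TestRunner.
suite = matlab.unittest.TestSuite.fromClass(?tMatrixLibrary);
experiment = matlab.perftest.TimeExperiment.limitingSamplingError;
results = run(experiment, suite);

% Summarize the measurements, just like with runperf output
sampleSummary(results)
```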