# An Ode to Configuration as Code

It was a dark and stormy night. The dedicated engineer careened through the rain and mud to deliver her critical package across the challenging terrain of the notorious Pipeline Traverse. She had a delivery of the utmost importance, and time was of the essence. Her confidence, however, was strong. After all, she had delivered similar packages hundreds of times. She quickly but carefully ushered this important delivery along, navigating around all of the familiar potholes, hairpin turns, and imposing cliff faces. She was just beginning to really enjoy the thrill of the efficiency and precision of her path when disaster struck.

You see, the familiar path of the Pipeline Traverse had changed! It was no longer the route she was expecting, and unfortunately she was never notified of the change. The crash was big. And costly. But worst of all, she now found herself stranded in the middle of an unfamiliar path with no idea how to complete her delivery. She was lost, and the package could not be delivered as needed.

Let me rein myself in before I get too engrossed in this story and quit my day job to pursue cheeseball novels. Cheese aside, isn't our protagonist's plight relatable to anyone involved in managing a software delivery pipeline? It seems disaster strikes just when we are most confident in our delivery pipeline (usually at 4:30 on a Friday afternoon, amirite?).

And if the problem is in your pipeline or your pipeline configuration, when it strikes it can be really challenging to debug. This is especially true if your CI/CD pipeline is buried in the UI. Have you ever found yourself muttering things like:

"Where did I set this pipeline up?"

"Who changed my pipeline?!"

"I wish I didn't need this boilerplate for all my projects!"

...or, if you ever have the unpleasant experience of data loss or corruption of your CI config data...

"How in the world did I set that up again (over the course of 3 years and multiple changes)?"

Well, these sentiments are what invariably lead to a better way. Sooner or later, most projects address the problem of pipeline management through the principle of Configuration as Code. This is one of the many blog topics I promised we would discuss here, and so finally here we are.

Configuration as Code follows the principle that CI configuration kept in a CI server's UI should be minimized. What we do in the pipeline should not be defined as obscure, opaque metadata held by a CI server instance somewhere (where? who knows!). We should instead define that process and pipeline with the code - where the changes happen, where the changes are recorded and can be reverted, and where the configuration is completely independent of a particular CI server instance and its state.

Back to our protagonist delivering along the Pipeline Traverse: she may not have been able to avoid every crash, but when a problem does show up, it is much easier to diagnose it, pinpoint the cause, and ultimately find the way out of the canyon.

Our Jenkins plugin supports this, and most of the other platforms we support, such as CircleCI, Travis CI, Azure DevOps, and GitHub Actions, follow this principle as well.

To see how this works with our Jenkins plugin, let me show you how to translate the project we discussed here into a Jenkinsfile, using the plugin's support for a feature called Jenkins Pipelines. This project includes a MEX compile step and a subsequent step that runs the tests and produces test result and coverage artifacts.

It's actually quite easy. This project had the following steps in its pipeline:

1. Tell Jenkins where MATLAB is (there's actually more news which makes this easier to manage as well, but maybe that's a future blog post)
2. Compile the mex files
3. Run the tests
4. Process the test result and coverage artifacts

Now, instead of defining these in the UI, we are going to create a Jenkinsfile that contains these pipeline steps, which we can check into our project right alongside our code.

Let's start with a basic pipeline that specifies it can use any available Jenkins agent and has a placeholder to add our stages to the pipeline.

```groovy
pipeline {
    agent any
    stages {
    }
}
```


### Step 1

Remember our pipeline had 4 steps, so let's start with step 1: we need to tell Jenkins where MATLAB is. Really, this just means setting up the environment so that the machine itself knows where MATLAB is on the system PATH. When invoking MATLAB, the plugin simply calls MATLAB from the shell. Enabling this to work correctly means prepending the PATH environment variable with the MATLAB bin folder for the duration of this pipeline, so that invoking matlab from the shell launches the desired MATLAB installation on the build agent. This looks like so:

```groovy
environment {
    PATH = "/Applications/MATLAB_R2020b.app/bin:${PATH}"
}
```

### Step 2

The next step is to build the MEX files. As a reminder, the example project we are working with here, libDirectional, has a nice function called compileAll that does the trick for us. Remember, we showed here that putting this in the form of a MATLAB project helps get our environment set up correctly. So our command is just a quick opening of the project and a call to the nifty compileAll function. Here we use the runMATLABCommand step that comes as part of the plugin. We don't need to worry about how MATLAB launches (which varies with different releases); we just worry about the command we need to invoke. It's as easy as adding a stage with this step inside of it:

```groovy
stage('Compile MEX files') {
    steps {
        runMATLABCommand 'openProject(pwd); compileAll'
    }
}
```

### Step 3

Now let's run the tests! Last time we touched on this, we ran all the tests defined in the project and generated test results in the TAP format and code coverage in the Cobertura format. I am going to make a slight tweak here, because the TAPPlugin, which processes these TAP results, doesn't support Jenkins pipelines as well as the JUnit plugin does. As you might be starting to see, this can be done with some straightforward pipeline syntax that matches the same kind of thing we can do through the UI:

```groovy
stage('Run MATLAB tests') {
    steps {
        runMATLABTests(
            testResultsJUnit: 'matlabTestArtifacts/junittestreport.xml',
            codeCoverageCobertura: 'matlabTestArtifacts/cobertura.xml'
        )
    }
}
```

### Step 4

...and finally, now that we have built our project, tested it, and generated some test result and coverage artifacts, we can process those artifacts using other plugins meant for that purpose (the JUnit plugin and the Code Coverage API plugin). I'll just add these steps to the same stage as the test run:

```groovy
stage('Run MATLAB tests') {
    steps {
        runMATLABTests(
            testResultsJUnit: 'matlabTestArtifacts/junittestreport.xml',
            codeCoverageCobertura: 'matlabTestArtifacts/cobertura.xml'
        )
        junit 'matlabTestArtifacts/junittestreport.xml'
        publishCoverage adapters: [coberturaAdapter('matlabTestArtifacts/cobertura.xml')]
    }
}
```

That's it, that's all we need. Here is the whole Jenkinsfile for reference:

```groovy
pipeline {
    agent any
    environment {
        PATH = "/Applications/MATLAB_R2020b.app/bin:${PATH}"
    }
    stages {
        stage('Compile MEX files') {
            steps {
                runMATLABCommand 'openProject(pwd); compileAll'
            }
        }
        stage('Run MATLAB tests') {
            steps {
                runMATLABTests(
                    testResultsJUnit: 'matlabTestArtifacts/junittestreport.xml',
                    codeCoverageCobertura: 'matlabTestArtifacts/cobertura.xml'
                )
                junit 'matlabTestArtifacts/junittestreport.xml'
                publishCoverage adapters: [coberturaAdapter('matlabTestArtifacts/cobertura.xml')]
            }
        }
    }
}
```
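One portability note: the PATH value above is macOS-specific. If your build agent runs Linux, the same idea applies with that platform's install location; the path below is an assumption, so adjust it to wherever MATLAB actually lives on your agent:

```groovy
environment {
    // Assumed default Linux install location for R2020b -- adjust for your agent.
    PATH = "/usr/local/MATLAB/R2020b/bin:${PATH}"
}
```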


Now just check something like that into the root of your repository, and you will be good to go. To create a Jenkins job that uses it, create a new Pipeline job and configure it with "Pipeline script from SCM", pointing at your repository:

...and now you can see the pipeline runs just fine and dandy, pulling its pipeline instructions from the Jenkinsfile instead of the Jenkins UI:

...and now, since this pipeline configuration is checked in, it travels with the code, it can be reverted with the code, it can be reviewed with the code, and it can be reasoned about with the code. Code is beautiful, especially when backed by a little bit of source control, MATLAB projects, and what we hope are useful CI platform integrations.

Now let's all burst out in song!!

Published with MATLAB® R2020b
