This week’s blog post is by the 2019 Gold Award winner of the Audio Engineering Society MATLAB Plugin Student Competition.
My name is Christian Steinmetz and I am currently a master’s student at Universitat Pompeu Fabra studying Sound and Music Computing. I have experience both as an audio engineer, working to record, mix, and master music, and as a researcher, building new tools for music creators and audio engineers. My interests center on applying signal processing and machine learning to problems in music production. The project I will share here, flowEQ, is my latest attempt at using machine learning to make an existing audio signal processing tool, the parametric equalizer, easier to use.
This project was my entry in the Audio Engineering Society MATLAB Plugin Student Competition. I presented this work at the 147th AES Convention in New York City, along with other students from around the world (all of the entries can be found here), and my project was selected for the Gold Award.
The goal of the competition was to use the Audio Toolbox to build a real-time audio plugin that helps audio engineers achieve something new. The Audio Toolbox is unique in that it allows you to write MATLAB code defining how you want to process the audio, and then automatically compile it to a VST/AU plugin that can be used in most digital audio workstations (DAWs). This made it fairly straightforward to build the plugin, so we could focus more on developing the algorithms.
The goal of flowEQ is to provide a high-level interface to a traditional five-band parametric equalizer, simplifying the process of applying timbral processing (i.e. changing how a recording sounds) for novice users. To effectively utilize a parametric EQ, an audio engineer must have an intimate understanding of the gain, center frequency, and Q controls, as well as how multiple bands can be used in tandem to achieve a desired timbral adjustment. For the amateur audio engineer or musician, this often presents too much complexity; flowEQ aims to solve this problem by providing an intelligent interface geared towards these users. In addition, this interface gives experienced engineers a new way to search across multiple timbral profiles very quickly, and it has the potential to unlock new creative effects.
To achieve this, flowEQ uses a disentangled variational autoencoder (β-VAE) to construct a low-dimensional representation of the parameter space of the equalizer. By traversing the learned latent space of the decoder network, the user can quickly search through the configurations of a five-band parametric equalizer. This methodology encourages using one’s ears to determine the proper equalizer settings rather than looking at transfer functions or specific frequency controls.
Here is a demonstration of the final plugin in action. You can see that as the sliders on the left are moved, the frequency response of the equalizer shown in the top right changes smoothly. The five bands (five biquad filters in series) together produce the overall frequency adjustment, and what once required changing 13 parameters at the same time can now be achieved by adjusting two sliders (in the 2-dimensional mode). In the following sections we will get into the details behind how this works from a high level, as well as how it was implemented. (To hear what this sounds like, check out the short live plugin demonstration video.)
To train any kind of model we need data. For this project we use the SAFE-DB equalizer dataset, which features a collection of settings used by real audio engineers with a five-band parametric equalizer, along with semantic descriptors for each setting. Each sample in the dataset contains a configuration of the equalizer (settings for the 13 parameters) as well as a semantic descriptor (e.g. warm, bright, sharp, etc.).
In our formulation, we observe that the parameter space of the equalizer is very large (if each parameter could take on 20 different values, that would give us ~4e15 possible configurations, more than the number of cells in the human body). We then make the assumption that the samples in the dataset represent the portion of this parameter space most likely to be utilized while audio engineers are processing music signals. We then aim to build a model that learns a well-structured, low-dimensional organization of this space so we can sample from it.
To achieve this we use a variational autoencoder. For a good introduction to the topic I recommend this YouTube video from the Arxiv Insights channel. An autoencoder is a unique formulation for learning about a data distribution in an unsupervised manner. This is done by forcing the model to reconstruct its own input, after passing the input through a bottleneck (so the model cannot simply pass the input to the output). The variational autoencoder extends the general autoencoder formulation to provide some nice characteristics for our use case. Here I will provide a brief overview of how we use this model to build the core of the plugin.
During training, our model learns to reconstruct the 13 parameters of the equalizer after passing the original input through a lower-dimensional bottleneck (1, 2, or 3 dimensions). We measure the error between the output and the input (the reconstruction loss), and then update the weights of the encoder and decoder to decrease this error for the current example.
While this may not seem like a useful task, we find that if we use the decoder portion of the model, which takes as input a low-dimensional vector, we can reconstruct a wide range of equalizer curves using only a very small number of knobs (1, 2, or 3, depending on the dimensionality of the latent space). The diagram below demonstrates this operation. Here we have discarded the encoder; we sample points from a 2-dimensional plane and feed these points to the decoder, which then attempts to reconstruct the full 13 parameters. This lower-dimensional latent space provides an easy way to search across the space of possible equalizer parameters.
To provide the user with more flexibility and to experiment with the complexity of the latent space, we train models with different latent space dimensionalities (1, 2, and 3). In the plugin, the user can select among these, which changes the number of sliders needed to control the entire equalizer. For example, in the 1-dimensional case the user need only move a single slider to control the equalizer.
We extend this even further by introducing the disentangled variational autoencoder (β-VAE), which makes a slight modification to the loss function (see the paper for details). The important bit is that this provides us with a new hyperparameter, β, to adjust during training in order to change what kind of representation the model will learn. Therefore, we train a total of 12 models, at different values of β and different latent space dimensionalities. We then provide all of these models in the plugin, so the user can select among them and evaluate them by listening.
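For orientation, the β-VAE objective simply adds a β-weighted KL divergence term to the reconstruction loss. Below is a minimal Python sketch of that objective, assuming a diagonal-Gaussian latent posterior; the function name and the use of mean squared error are illustrative assumptions, not taken from the project’s actual training code.

```python
import math

def beta_vae_loss(x, x_hat, mu, logvar, beta):
    """Reconstruction term plus a beta-weighted KL divergence.

    x, x_hat : lists of equalizer parameters (targets and reconstructions)
    mu, logvar : mean and log-variance of the diagonal-Gaussian latent posterior
    beta : weight on the KL term (beta = 1 recovers the standard VAE)
    """
    # Mean squared error between the input parameters and their reconstruction
    recon = sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)
    # KL divergence between N(mu, sigma^2) and the unit Gaussian N(0, I)
    kl = -0.5 * sum(1 + lv - m ** 2 - math.exp(lv) for m, lv in zip(mu, logvar))
    return recon + beta * kl
```

With β = 0 the model is free to spread the latent codes out however it likes; larger β values pull the posterior towards the unit Gaussian, which is exactly the regularization effect discussed in the evaluation section below.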
Now that we understand the model at a high level, we will briefly go over some of the implementation details. The encoder and decoder each have just a single fully connected hidden layer with 1024 units and a ReLU activation. The central bottleneck layer has either 1, 2, or 3 hidden units with a linear activation. The final output layer of the decoder is a fully connected layer with 13 units and a sigmoid activation function (all inputs have been normalized between 0 and 1). This makes for a really small model (about 30k parameters), but due to the nonlinearities we can learn a powerful mapping (more powerful than PCA, for example). A small model is nice in that we can train it faster, and the inference time is much shorter as well. A forward pass through the decoder network takes only about 300 μs on CPU.
The models were implemented and trained with the Keras framework and you can see all the code for training the model along with the final weights in the train directory of the GitHub repository. These models were later implemented in MATLAB so they could be included in the plugin. See the Challenges section below for details on how we achieved that.
The plugin can be divided into two main sections: the filters and the trained decoder. We implement a basic five band parametric equalizer, which is composed of five biquad filters placed in series (this mirrors the construction of the equalizer used in the process of building the original training data). The lowest and highest bands are shelving filters, and the center three bands are peaking filters. For more details on the filter implementation see the classic Audio EQ cookbook. The shelving filters have two controls: gain and cutoff frequency, while the peaking filters have three: gain, cutoff frequency, and Q. These controls make up the 13 parameters of the equalizer. We use the value of these parameters and the aforementioned filter formulae to calculate the coefficients for all the filters whenever they are changed and then use the basic filter function in MATLAB to apply the filter to a block of audio.
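To make the filter cascade concrete, here is a hedged Python sketch of the same math: the peaking-filter coefficient recipe from the Audio EQ cookbook and a biquad applied band by band in series. This is a generic illustration rather than the plugin’s MATLAB implementation, and the sample rate, frequency, and Q values used below are invented.

```python
import math

def peaking_coeffs(fs, f0, gain_db, q):
    """Audio EQ cookbook peaking filter; returns (b, a) normalized so a[0] == 1."""
    amp = 10.0 ** (gain_db / 40.0)              # square root of the linear gain
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1 + alpha * amp, -2 * math.cos(w0), 1 - alpha * amp]
    a = [1 + alpha / amp, -2 * math.cos(w0), 1 - alpha / amp]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def biquad(x, b, a):
    """Direct-form II transposed biquad over a list of samples (a[0] must be 1)."""
    z1 = z2 = 0.0
    y = []
    for s in x:
        out = b[0] * s + z1
        z1 = b[1] * s - a[1] * out + z2
        z2 = b[2] * s - a[2] * out
        y.append(out)
    return y

def cascade(x, bands):
    """Run the signal through several biquads in series (one per EQ band)."""
    for b, a in bands:
        x = biquad(x, b, a)
    return x
```

At the center frequency the cookbook peaking filter gives exactly the requested boost (e.g. a 6 dB setting yields a linear magnitude of 10^(6/20) ≈ 2.0 at f0), and a 0 dB band degenerates to a pass-through, which is a handy sanity check.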
Now we implement the decoder in MATLAB and connect its output to the controls of the equalizer. When the user moves the x, y, z latent space sliders, these values are passed through the decoder to generate the corresponding equalizer parameters, and new filter coefficients are calculated. There are two main modes of operation: Traverse and Semantic.
The Traverse mode allows the user to freely investigate the latent space of the models. In this mode the three x, y, z sliders can be used to traverse the latent space of the decoder. Each latent vector decodes to a set of values for the 13 parameters in the five band equalizer.
The Semantic mode allows for a different method of sampling from the latent space. The x, y, z sliders are deactivated, and the Embedding A and Embedding B combo boxes are used, along with the Interpolate slider. After training, the semantic labels are used to identify relevant clusters within the latent space. These clusters represent areas of the latent space which are associated with certain semantic descriptors. The Interpolate control allows users to seamlessly move between the two semantic descriptors in the latent space.
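One simple way to picture Semantic mode is this: summarize each descriptor’s cluster by its centroid, then linearly blend between two centroids with the Interpolate value before feeding the result to the decoder. The Python sketch below assumes this centroid-plus-linear-interpolation scheme; the numbers and descriptor labels are made up for illustration and are not from the trained models.

```python
def centroid(points):
    """Mean of a set of latent vectors (one per example with a given label)."""
    dims = len(points[0])
    return [sum(p[d] for p in points) / len(points) for d in range(dims)]

def interpolate(z_a, z_b, t):
    """Blend two latent vectors; t = 0 gives z_a, t = 1 gives z_b."""
    return [(1.0 - t) * a + t * b for a, b in zip(z_a, z_b)]

# Hypothetical 2-D latent codes tagged 'warm' and 'bright' by the encoder
warm = centroid([[-1.2, 0.4], [-0.8, 0.6]])
bright = centroid([[1.0, -0.5], [1.4, -0.3]])
halfway = interpolate(warm, bright, 0.5)  # what the decoder would receive
```

Moving the Interpolate slider then traces a straight line through the latent space from one semantic cluster to the other, with every intermediate point decoding to a full set of equalizer parameters.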
The block diagram above provides an overview of all the elements we have mentioned so far. The top portion shows the audio signal processing path with the five cascaded biquad filters. The central portion shows the decoder, which has its output connected to the parameters of the filters. At the bottom, we see the controls that the user can adjust from the plugin interface (shown below in detail). The Tuning parameters select among the 12 different trained decoder models irrespective of the mode, and the parameters for the Traverse and Semantic modes are shown as well. In Manual mode the original 13 parameters shown at the bottom of the interface are active to control the equalizer instead of using the decoder.
Evaluation of a model of this nature is challenging since we do not have an objective measure of a ‘good’ latent representation. Remember, our goal is not necessarily to create a model that can perfectly reconstruct any set of parameters, but instead to have a nicely structured latent representation that allows the user to search the space quickly and find the sound they are looking for. Therefore, our first means of evaluation is to visually inspect the manifold of the latent space. We will do this for the 2-dimensional models, since these are the easiest to visualize.
Here we show the frequency response curve (with the y-axis as gain from −20 dB to +20 dB and the x-axis as frequency from 20 Hz to 20 kHz) for each point in the 2-dimensional latent space from −2 to 2 in both the x and y dimensions, which gives us this grid of examples. We do this for each of our 2-dimensional models at all of the values of β that we trained with. This lets us see the effect of β as well as the structure of the latent space for each model.
We observe that as β increases, the latent space becomes more regularized; in the case where β = 0.020, many of the points appear to decode to the same equalizer transfer function, which is not the behavior we want. This is an example of what might be considered over-regularization. Therefore, the model with β = 0.001 may be the best choice, since it shows the greatest diversity while maintaining a coherent and smooth structure. This means that as the user searches across the space, the sound will change in an interpretable way that is not too abrupt. In short, using larger values of β forces the model to structure the latent space in the shape of a unit Gaussian, and hence lose some of its expressivity.
The best method for evaluation of these different models would be to conduct a user study where audio engineers are blindly given different models and asked to achieve a certain target. The model in which users can find their desired target the fastest would be the best model. For this reason, we include all 12 of the models in the plugin, in hopes that we can get user feedback on which models work the best.
One of the current challenges with implementing deep learning models developed in commonly used open-source frameworks (e.g. Keras) within audio software (e.g. audio plugins or embedded software) is the lack of an automated method to transfer these networks to C or C++. MATLAB provides an automated method to construct and run our Keras model with the importKerasNetwork function from the Deep Learning Toolbox. This loads the HDF5 model weights and architecture after training in Keras and implements the model as a DAGNetwork object. Unfortunately, those objects don’t currently support automatic generation of generic C++ code (although other types of network architectures and layers from Deep Learning Toolbox can generate optimized CUDA and C code for running on GPUs, ARM, or Intel cores). For our audio plugin we ultimately required target-independent C++ code to run on different CPU targets across operating systems.
To solve this, we implement the network in pure MATLAB code ourselves. This is fairly simple since our network is relatively small. We first convert the .h5 files with the saved weights from Keras to .mat files and then load these weights as matrices (W1 and W2 for the decoder hidden layer and output layer). The prediction function is shown below and is composed of just a few matrix operations with the input latent variable z and the layer weights, plus the activation functions. To see the entire implementation of the model see the Decoder class we built.
function y_hat = predict(obj, z)
    % Takes a latent vector z with the appropriate dimensionality.
    % Output is a 13x1 vector of normalized (0 to 1) equalizer parameters.
    z1 = (z * obj.W1) + obj.b1;
    a1 = obj.ReLU(z1);
    z2 = (a1 * obj.W2) + obj.b2;
    a2 = obj.sigmoid(z2);
    y_hat = a2;
end
Incidentally, I have also come across a tool developed internally by MathWorks which is able to automatically generate low-level MATLAB code, similar to the snippet above, from high-level deep network objects. For this project, that would have further simplified the transition from the trained Keras model to the plugin implementation. I understand that tool isn’t currently released with any official MATLAB add-on product, but you may want to reach out to MathWorks if you are interested.
Implementing deep learning models in real-time audio plugins remains relatively unexplored territory. We are still without clear methods for achieving this with minimal friction, regardless of which framework is used. Real-time audio applications also impose strict runtime constraints, which means that our deep models must be fast enough not to cause interruptions in the audio stream, or a poor user experience with audible lag as the user changes parameters.
flowEQ is still very much a proof of concept, and the current implementation is somewhat limited by the MATLAB framework. Below are some future areas of development to further improve the plugin and expand its functionality.
If you found this project interesting, follow me on Twitter @csteinmetz1 for updates on my latest projects, and also check out the other projects I’ve worked on with MATLAB + Deep Learning + Audio, like NeuralReverberator, which synthesizes new reverb effects using a convolutional autoencoder.
This blog post is by Liping Wang, the technical evangelist of student competitions in China.
When I was a signal and information processing student, I knew that MATLAB provides a series of powerful signal processing toolboxes, such as Signal Processing Toolbox and Wavelet Toolbox. However, I learned only recently that, besides these toolboxes, MATLAB also provides a series of interactive apps whose user interfaces make this functionality more convenient to use. This helps users who are not familiar with MATLAB commands carry out their work quickly. These apps span all of MATLAB’s application areas, such as signal processing and communications; math, statistics, and optimization; and machine learning and deep learning.
Last August, as a member of the MathWorks Student Competition team, I was honored to participate in the judging process of the MATLAB award in the China Graduate Electronics Design Contest (GEDC) finals. After attending the event and talking to the students, I felt that the new capability of using apps for signal processing could help the students at GEDC accelerate their tasks. Hence, we made a video on how to use the MATLAB apps for signal processing.
In general, signal processing tasks begin with a set of basic operations: visualizing signals, inspecting their time-frequency content, and extracting regions of interest.
These basic operations in the signal processing workflow can be completed through the Signal Analyzer app. With this app, we can quickly visualize signals by dragging a signal to the display area. The app shows the spectrogram and scalogram of signals with one click. It can also be used to extract regions of interest for further analysis and to generate MATLAB scripts.
For the most common filter design and analysis tasks in signal processing, we can easily design and analyze digital filters through pull-down menus and parameter fields using the Filter Builder and Filter Designer apps.
In situations where you want to carry out multiresolution analysis of signals, the Signal Multiresolution Analyzer could be helpful. Or if you want to denoise the signals, the Wavelet Signal Denoiser may be of assistance.
If you find an introduction through text boring, please watch the video we made on how to use the MATLAB signal processing apps with a demo on preprocessing and analyzing an ECG signal here as well as posted below. You can also find the Chinese version of the video here.
Through the video, we can see that with a friendlier user interface, MATLAB apps let us conduct signal analysis and processing tasks more easily. In addition, graphical user interfaces provide us with a convenient way to adjust the parameters of different algorithms. We hope the video will help not only the GEDC participants but also other signal processing students and engineers.
You can learn more about our support for student competitions on signal processing from our website. As always, feel free to leave us a comment below or email us at studentcompetitions@mathworks.com.
This week’s post is by Owen Paul, who works on the MathWorks Student Competitions Program team.
Many companies, particularly in the automotive, robotics, and aerospace fields, use Simulink for model-based design. But I bet if you were to survey most engineering students on what Simulink is used for, they might not know what to say. This is one of the main reasons we run the Simulink Student Challenge each year. We at MathWorks want to highlight students using Simulink and inspire you to use Simulink in your projects. With that being said, you might be getting ready to exit out of this blog because you think I’m going to spend the next 1,000 words convincing you why you should enter the competition. NOPE! In this blog I will discuss the two first-place winning videos of the 2019 Simulink Student Challenge and the cool projects these students are working on.
But before I dive into the projects, let me briefly describe the Simulink Student Challenge for some background. The Simulink Student Challenge is an online competition that MathWorks hosts annually. In this challenge, we ask college students around the world one simple question: how do you use Simulink? To answer it, students make a short video showcasing a project in which they’ve used Simulink and post it to YouTube with the tag #SimulinkChallenge2019. We then judge these videos on three categories: appropriateness of the entry to the contest theme, creativity and originality of the video, and depth of product knowledge demonstrated in the challenge solution. Now that you know what the challenge is, let’s dive in!
How would you feel if you bought a brand new car and a few days later noticed a bump on one of the passenger doors? Probably not very happy. To avoid this anger and frustration, quality testing of materials is extremely important. Our first first-place winner has a solution that can help with this quality assurance. Felix Schneider at Bochum University of Applied Sciences is developing a high-accuracy optical length and velocity sensor named ‘VADER.’ This VADER sensor includes a bright LED to illuminate the surface of an object and a line camera to take pictures of the moving surface (figure 1).
At a high level, it works like this: sheets of metal move across an assembly line at a certain velocity. The VADER sensor faces the sheet of metal, with the line camera oriented in the direction the sheet metal is moving. As the material moves past the camera, imperfections are found using spatial filtering velocimetry techniques. If this doesn’t sound complex enough, Felix also had to account for the fact that the camera records data at a high rate of 1.6 Gb/s, and this data must be preprocessed and filtered. To solve this complex problem, Felix turned to Simulink.
To process the data coming in from the VADER sensor, filtering is done on a custom-built circuit board using a field-programmable gate array (FPGA). This FPGA has three main purposes: to decode the data coming from the line camera, to filter the data, and to process the data to be output to a Texas Instruments (TI) development board, specifically a TI board with a digital signal processor (DSP) chip. To solve all these tasks, Felix modelled the FPGA system in Simulink, separating each of these tasks into subsystems (figure 2).
Something I found particularly interesting in this model is the filtering subsystem. This is where the spatial filtering mentioned previously is implemented. Because Felix needed to use 8 parallel spatial filters, he used a For Each Subsystem block, allowing him to automatically run the data through the filter 8 times using only one block.
An edge detection algorithm was also added in the filter subsystem to detect where the material starts and ends. From this information, you can easily derive the material’s length as well as the velocity at which it is moving. To implement the edge detection, Felix first used Simulink logic blocks (figure 3), which he said became a “large design that’s hard to verify.”
After discovering this, he turned to Stateflow, “which made things a lot easier.” At first glance we can already see that the model using Stateflow (figure 4) uses far fewer blocks and is easier to read. But what really made this implementation better for Felix is that he could verify that the output of the edge detection is the correct response. With this new implementation, plots were created showing the data before and after the edge detection algorithm was applied. From these plots it became intuitive to decipher where the material started and ended.
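Conceptually, the edge detector is a tiny state machine over the thresholded filter output: wait for a rising edge (material enters the camera’s view), then a falling edge (material leaves), and convert the interval between them into a length using the known line velocity. Here is a hedged Python sketch of that general idea only; it is not Felix’s actual Stateflow chart, and the threshold, sample rate, and units are invented.

```python
def find_edges(samples, threshold):
    """Return (rise_index, fall_index) for the first pulse above threshold."""
    state = "idle"
    rise = fall = None
    for i, v in enumerate(samples):
        if state == "idle" and v >= threshold:
            state, rise = "material", i        # rising edge: material detected
        elif state == "material" and v < threshold:
            fall = i                           # falling edge: material gone
            break
    return rise, fall

def material_length(rise, fall, velocity, sample_rate):
    """Length = time between the two edges times the known line velocity."""
    return (fall - rise) / sample_rate * velocity
```

For example, a pulse lasting 100 samples at a 1 kHz sample rate with material moving at 2 m/s corresponds to a 0.2 m long sheet.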
As mentioned previously, the FPGA board would have to handle a high rate of data but also interface with a TI board. Simulink Test was used to ensure any issues or bugs in the model could be debugged before manufacturing the FPGA board or testing on any hardware. Using Simulink Test, Felix was able to feed real or simulated camera data into the model and test how accurate the results were and where errors might occur.
Once the models were properly tested and Felix knew they would work, it was time to start writing the C and HDL code for the hardware… Oh wait, he didn’t have to write any code? That’s right: not a single line of code was written, because HDL Coder was used to deploy the FPGA Simulink model onto the FPGA board. Felix also had a Simulink model for the TI board, from which he used Embedded Coder to generate C code. Felix said that HDL Coder was probably the biggest benefit of using Simulink for this project. He stated,
“Developing test benches in HDL projects by hand is very tiresome and in contrast, in Simulink I can plot every signal, and, in most instances, the error can be spotted from the waveforms very quickly. The whole project wouldn’t have been possible without HDL Coder / Simulink Coder.”
Learn more about this project by watching the video here! And be sure to keep an eye on his work because according to Felix’s current simulations “there might be a new version of VADER soon, that reaches measurement errors far below those of commercially available sensors.”
Our next first-place winner gives us a glimpse into the not-too-distant future of self-driving cars. Mustafa Saraoğlu at Technische Universität Dresden is looking to create a ‘SafeTown’ in which vehicles are aware of each other’s positions and adjust where they go according to this information. There are two main elements to this problem. The first is the autonomous car element, in which the vehicle must follow the road while avoiding other vehicles. The other is intersections: when a vehicle arrives at an intersection, how does it know whether other vehicles are at the intersection and who should go first?
To solve the first problem, an autonomous vehicle model was developed using Simulink and Stateflow. This model wasn’t built from scratch, however. Mustafa’s team started with a simple line-tracking example for the LEGO MINDSTORMS EV3 robot the team was using. From there, a PID controller was added and its parameters were tuned until the team was satisfied with the vehicle’s line-following performance. After that, a Stateflow model was added to control switching between the following modes: line following, stopping, and crossing an intersection. Mustafa told us that using Simulink for the controls was key because they “could use a variety of different controllers and make quick assessments, tune if needed, or change [their] approach. [He] can’t think of another environment suitable for that much a rapid development with such possibilities.”
For the second problem, a camera was placed above the ‘town’ to track the position of the Lego robots and identify intersections. An image recognition algorithm was used to identify the Lego robots on the map. To create this algorithm, Mustafa’s team started by using the Ground Truth Labeler app to identify one Lego robot in any given frame of a video. Machine learning algorithms such as R-CNN, Fast R-CNN, Faster R-CNN, and ACF were then trained in MATLAB using the data generated from the Ground Truth Labeler app. These algorithms were then tested on a sample pre-recorded video with multiple Lego robots on the map (figure 5). Using this video, Mustafa’s team was able to find the best algorithm for the project, the ACF detector, and tune its parameters to ensure that it accurately identified the Lego robots, the intersections, and when Lego robots were at an intersection. To use the image recognition algorithm within the Simulink model, a MATLAB Function block was used.
Now that we’ve seen how Mustafa’s team tackled the two main problems, there is one more crucial element to think about: communication! The camera workstation can identify when the Lego robots are at an intersection, but it must then tell the robots whether they should wait or go. This communication was done over Wi-Fi using the User Datagram Protocol (UDP). Integrating UDP was made easier by the fact that there are pre-made blocks for this. The Lego robot has a UDP Simulink block provided in the hardware support package for the LEGO EV3 robots. As for the camera workstation, UDP blocks from the Instrument Control Toolbox were used. With these add-ons installed, all that had to be done was to drag the UDP blocks into the model and set the address and port.
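The wait/go hand-off boils down to one datagram sent by the workstation and one blocking receive on the robot. A minimal Python sketch of that pattern with the standard socket module follows; the message format (‘GO’/‘WAIT’) and the addresses are assumptions for illustration, not the project’s actual protocol.

```python
import socket

def send_command(command, address):
    """Camera workstation side: fire one datagram at a robot's (host, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(command.encode("ascii"), address)

def wait_for_command(sock, bufsize=16):
    """Robot side: block on a bound socket until a command datagram arrives."""
    data, _sender = sock.recvfrom(bufsize)
    return data.decode("ascii")
```

In practice the robot binds its socket to a known port once at startup and then sits in a loop calling the receive side; UDP carries no delivery guarantee, which is acceptable here since the workstation can simply resend.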
After watching the video, I became curious about the background of this project. When asked about project SafeTown, Mustafa said,
“SafeTown is a very useful project for students to test and try their algorithms on real hardware. This also contributes to the overall understanding of the concepts related to control and automation engineering. I had always wondered how control theory works in real life as I was a bachelor student. So now, granting that opportunity to undergraduate students, working together with them on such projects, makes me feel happy and satisfied. I hope we can improve it in different aspects as new students join to write their theses and add values collectively.”
To learn more about this project and watch the Lego robots in action click here!
Lastly, to watch the other winning videos or find out more about the competition click here.
Want to learn how to use Simulink yourself? Take the free Simulink Onramp course, and maybe next year I will be writing about your Simulink project.
Hello all, I am Neha Goel, Technical Lead for AI/Data Science competitions on the MathWorks Student Competition team. MathWorks is excited to support WiDS Datathon 2020 by providing complimentary MATLAB Licenses, tutorials, and getting started resources to each participant.
To request your complimentary license, go to the MathWorks site, click the “Request Software” button, and fill out the software request form. You will get your license within 72 business hours.
The WiDS Datathon 2020 focuses on patient health, using data from MIT’s GOSSIS (Global Open Source Severity of Illness Score) initiative. Brought to you by the Global WiDS team, the West Big Data Innovation Hub, and the WiDS Datathon Committee, the competition is open until February 24, 2020.
The Datathon task is to train a model that takes as input the patient record data and outputs a prediction of how likely it is that the patient survives. In this blog post I will walk through basic starter code in MATLAB. Additional resources for other training methods are linked at the bottom of the blog post.
Register for the competition and download the data files from Kaggle. “training.csv” is the training data file and “unlabeled.csv” is the test data.
Tip: Save the CSV files as .xlsx to avoid end-of-file blank rows.
Once you download the files, make sure that they are on the MATLAB path. Here I use the readtable function to read the files and store them as tables. TreatAsEmpty specifies the placeholder text used to mark empty values in the numeric columns of the file; table elements containing the characters ‘NA’ will be set to NaN when imported. You can also import data using the MATLAB Import tool.
TrainSet = readtable('training.xlsx','TreatAsEmpty','NA');
The biggest challenge with this dataset is that the data is messy: 186 predictor columns and 91,713 observations with a lot of missing values. Data transformation and modeling are the key areas to work on to avoid overfitting.
Using the summary function, I analyzed the types of the predictors; the min, max, and median values; and the number of missing values for each predictor column. This helped me make relevant assumptions for cleaning the data.
summary(TrainSet);
There are many different approaches to work with the missing values and predictor selection. We will go through one of the approaches in this blog. You can also refer to this document to learn about other methods: Clean Messy and Missing Data.
Note: The data-cleaning approach demonstrated here was chosen simply to cut down the number of predictor columns.
Remove the character columns of the table
The reason behind this is that the algorithm I chose to train the model, fitclinear, only accepts a numeric matrix as its input argument.
TrainSet = removevars(TrainSet, {'ethnicity','gender','hospital_admit_source','icu_admit_source',...
    'icu_stay_type','icu_type','apache_3j_bodysystem','apache_2_bodysystem'});
Remove minimum values from all the vitals predictors
After analyzing the WiDS Datathon 2020 dictionary.csv file provided with the Kaggle data, I noticed that the even-numbered columns from column 42 to 168 correspond to minimum values of predictors in the vitals category.
TrainSet = removevars(TrainSet, [42:2:168]);
Remove the observations which have 30 or more missing predictors
The other assumption I made is that observations (patients) with 30 or more missing predictor values can be removed.
TrainSet = rmmissing(TrainSet,1,'MinNumMissing',30);
Fill the missing values
The next step is to fill in all the NaN values. One approach is to use the fillmissing function to fill data using linear interpolation. Other approaches include replacing NaN values with mean or median values and removing outliers using the Curve Fitting app.
TrainSet = fillmissing(TrainSet,'linear');
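As a rough alternative sketch (not part of the original workflow), you could instead replace each numeric variable's NaN values with that variable's mean; since fillmissing has no plain 'mean' method, a simple loop over the table variables does the job:

```matlab
% Sketch: mean imputation per numeric variable (alternative to linear interpolation)
for k = 1:width(TrainSet)
    col = TrainSet.(k);                       % k-th table variable
    if isnumeric(col)
        col(isnan(col)) = mean(col,'omitnan');  % replace NaNs with the column mean
        TrainSet.(k) = col;
    end
end
```

Mean imputation is easy to reason about, but it flattens within-patient trends that interpolation can preserve.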
In this step, I move our label variable hospital_death to the last column of the table, because for some algorithms in MATLAB and in the Classification Learner app, the last column is the default response variable.
TrainSet = movevars(TrainSet,'hospital_death','After',114);
Once I have the cleaned training data, I separate the label hospital_death from the training set and create two separate tables: XTrain (predictor data) and YTrain (class labels).
XTrain = removevars(TrainSet,{'hospital_death'});
YTrain = TrainSet.hospital_death;
Download the unlabeled.csv file from Kaggle. Read the file using the readtable function to store it as a table.
XTest = readtable('unlabeled.xlsx','TreatAsEmpty','NA');
I used a similar approach for cleaning the test data as for the training data above. XTest is the test data with no label column.
Remove the character columns of the table
As the unlabeled.csv file contains the hospital_death column with NA values, I removed it along with the other character-type columns.
XTest = removevars(XTest, {'hospital_death','ethnicity','gender','hospital_admit_source',...
    'icu_admit_source','icu_stay_type','icu_type','apache_3j_bodysystem','apache_2_bodysystem'});
Remove minimum values from all the vitals predictors
After removing the hospital_death column, the minimum values of the vital category are now offset so they correspond to the odd columns from column 41 to 167.
XTest = removevars(XTest, [41:2:167]);
Fill the missing values
XTest = fillmissing(XTest,'linear');
In MATLAB you can train a model using two different methods.
Here I walk through the steps for both methods. I encourage you to try both approaches and to train the model using different algorithms and parameters. This will help with optimization and with comparing the scores of different models.
A binary classification problem can be approached using various algorithms, such as decision trees, SVM, and logistic regression. Here I train using the fitclinear classification model, which trains linear binary classification models on high-dimensional predictor data.
Convert the tables to numeric matrices, because the fitclinear function takes only a numeric matrix as an input argument.
XTrainMat = table2array(XTrain);
XTestMat = table2array(XTest);
Fit the model
The name-value pair input arguments of the function give you options for tuning the model. Here I use the 'sparsa' solver (Sparse Reconstruction by Separable Approximation), which uses lasso regularization by default. To optimize the model, I do some hyperparameter optimization.
Setting 'OptimizeHyperparameters' to 'auto' optimizes over {Lambda, Learner}, and the acquisition function name 'expected-improvement-plus' modifies the optimizer's behavior when it is overexploiting an area of the search space.
You can further cross validate the data within input arguments using crossvalidation options: crossval, KFold, CVPartition etc. Check out the fitclinear document to know about input arguments.
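As a quick illustrative sketch of those cross-validation options (not part of the original workflow), a 5-fold cross-validated linear model could be trained and scored like this:

```matlab
% Sketch: 5-fold cross-validation with fitclinear (assumes XTrainMat/YTrain from above)
CVMdl = fitclinear(XTrainMat,YTrain,'ObservationsIn','rows','KFold',5);
cvLoss = kfoldLoss(CVMdl)   % average misclassification rate over the 5 folds
```

Comparing kfoldLoss across candidate models is a cheap way to pick settings before fitting the final model on all the training data.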
Mdl = fitclinear(XTrainMat,YTrain,'ObservationsIn','rows','Solver','sparsa',...
    'OptimizeHyperparameters','auto','HyperparameterOptimizationOptions',...
    struct('AcquisitionFunctionName','expected-improvement-plus'))
Predict on the Test Set
Once you have your model ready, you can make predictions on your test set using the predict function. It takes as input the fitted model and test data with the same predictors as the training data. The outputs are the predicted labels and scores.
[labelOpt1,scoresOpt1] = predict(Mdl,XTestMat);
The second method of training the model is to use the Classification Learner app, which lets you interactively train, validate, and tune classification models. Let’s see the steps to work with it.
Predict on the Test Set
The exported model is saved as trainedModel in the workspace. You can then predict labels and scores using its predictFcn.
labelOpt2 contains the predicted labels on the test set, and scoresOpt2 contains the score of each observation for both the positive and negative class.
[labelOpt2,scoresOpt2] = trainedModel.predictFcn(XTest)
After a classification algorithm has been trained on data, we examine its performance on our test dataset. To inspect the classifier performance more closely, I plotted a Receiver Operating Characteristic (ROC) curve. By definition, a ROC curve shows the true positive rate versus the false positive rate for different thresholds of the classifier output.
The AUC (Area Under Curve) is the area enclosed by the ROC curve. A perfect classifier has AUC = 1 and a completely random classifier has AUC = 0.5, so the range of possible AUC values is [0, 1]; usually, your model will score somewhere in between.
A confusion matrix plot is used to understand how the currently selected classifier performed in each class. To view the confusion matrix after training a model, you can use the MATLAB plotconfusion function.
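As a sketch (assuming you have held-out true labels YTest and predicted labels labelOpt2, as defined elsewhere in this post), a confusion matrix can also be displayed with confusionchart:

```matlab
% Sketch: confusion matrix chart for held-out labels (confusionchart, R2018b+)
cm = confusionchart(YTest,labelOpt2);
cm.Title = 'Hospital Death Prediction';
cm.RowSummary = 'row-normalized';   % show per-class rates alongside raw counts
```

Row normalization makes class imbalance visible at a glance, which matters here since deaths are the minority class.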
To evaluate the model, MATLAB has the perfcurve function. It calculates the false positive rate, true positive rate, thresholds, and AUC score. The input arguments to the function are the test labels, the scores, and the positive class label. For self-evaluation purposes, you can create the test labels (YTest) by partitioning off a labeled subset of the training data. I used the scores generated by option 2 above, which correspond to the trainedModel created by the Classification Learner app.
[fpr,tpr,thr,auc] = perfcurve(YTest,scoresOpt2(:,2),'1');
I get an AUC of 0.85 and the ROC curve below.
Note: the AUC calculated through this function might differ from the AUC calculated on the Kaggle leaderboard.
Create a table of the results based on the IDs and prediction scores. The desired file format for submission is:
encounter_id, hospital_death
You can place all the test results in a MATLAB table, which makes it easy to visualize and to write to the desired file format. I stored the positive-class scores (the second column of scoresOpt2).
testResults = table(XTest.encounter_id,scoresOpt2(:,2),'VariableNames',{'encounter_id','hospital_death'});
Write the results to a CSV file. This is the file you will submit for the challenge.
writetable(testResults,'testResults.csv');
Thanks for following along with this code! We are excited to find out how you will modify this starter code and make it yours. I strongly recommend looking at our Resources section below for more ideas on how you can improve our benchmark model.
Feel free to reach out to us in the Kaggle forum or email us at studentcompetitions@mathworks.com if you have any further questions.
In this post, I will discuss robot modeling and simulation with Simulink®, Simscape™, and Simscape Multibody™. To put things in context, I will walk you through a walking robot example.
First of all… why simulate? I’ve broken down the benefits into two categories.
We will now look at a typical robot simulation architecture, which consists of multiple layers. Depending on your goals, you may only need to implement a subset of these for your simulation.
Simscape Multibody lets you model the 3D rigid body mechanics of your robot. There are two ways to do this.
Regardless of how you create the robot model, the next step is to add dynamics to it.
As shown in the simulation architecture diagram earlier, the actuator is the “glue” between the algorithm and the model (or robot). Actuator modeling consists of two parts: one on the controller side, and one on the robot side.
Different design tasks may need different levels of model detail. Depending on this, simulation speed could range from much faster than real time to much slower than real time, and this is an important tradeoff. Let’s take the following example. Suppose you’re designing a robot which has both a high-level motion planning algorithm and a low-level electronic motor controller with high-frequency pulse-width modulation (PWM).
Ideally, you’d like to have reusable and configurable model components for different scale simulations. Simulink facilitates this with modeling features such as variants, block libraries, and model referencing.
To see how this was done with the walking robot actuator models, watch the video below.
[Video] Modeling and Simulation of Walking Robots
Motion planning can be an open-loop or closed-loop activity.
You can read more about motion planning and control for walking robots in our next blog post. In this example, we already designed an initial open-loop walking pattern that makes our simulated robot walk stably. To further improve this walking pattern, you can add closed-loop components for stability and/or reference tracking, or use techniques such as optimization to refine the walking pattern.
Optimization tools are useful in many aspects of robot modeling and simulation, such as
Designing an open-loop motion profile through optimization can be a good start, but this may not be robust to variations in the physical parameters, terrain, or other external disturbances. In theory, you could use optimization and simulation to test against scenarios that cover all the challenges you expect in the real world. In practice, a closed-loop system (one that can react to the environment) is better suited to handle these challenges.
Closed-loop motion controllers require information about the environment through sensors. Common sensors for legged robots include joint position/velocity sensors, accelerometers/gyros, force/pressure sensors, cameras, and range sensors. An overall control policy can then be determined using model-based methods like Internal Model Control, or with machine learning techniques like reinforcement learning.
The video below shows how you can repeatedly simulate a model and collect results to optimize open-loop trajectories for a walking robot. Running simulations in batch can similarly help you perform tasks such as tuning controller or motion planning algorithms using optimization and machine learning.
[Video] Optimizing Walking Robot Trajectories
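The batch-optimization idea above can be sketched in a few lines. This is a toy illustration only: simulateAndScore is a hypothetical stand-in for "run the simulation with these parameters and return a cost", which in the real example would involve a Simscape Multibody model.

```matlab
% Toy sketch: tune trajectory parameters by repeatedly evaluating a simulation cost.
% simulateAndScore is a hypothetical stand-in for the real simulate-and-score step.
simulateAndScore = @(p) (p(1)-0.3)^2 + (p(2)-1.2)^2;   % e.g. fall penalty + effort
p0 = [0.1 1.0];                          % initial guess: [step length, step frequency]
pOpt = fminsearch(simulateAndScore,p0);  % derivative-free search over the parameters
```

A derivative-free optimizer like fminsearch is a common choice here because simulation outputs are rarely differentiable with respect to the design parameters.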
You have now seen how simulation can help you design and control a legged robot.
For more information, watch the videos above and read our next blog post on walking robot control. You can download the example files from the File Exchange or GitHub. You can also find a four-legged Running Robot example on the File Exchange.
Are you working on legged robot locomotion? We’d be interested in hearing from you.
– Sebastian
MATLAB and Simulink Release 2019b has been a major release regarding automotive features. The following article focuses on the automated driving highlights, namely the 3D simulation features.
Developing and tuning control algorithms for active safety or automated driving applications requires either a massive amount of logged sensor data or a virtual development environment. Today, both are required, as one approach can’t fully replace the other.
In the first method, sensor data is gathered while driving on real roads. This data is then used and, simply put, replayed into the control algorithm. The amount of data grows with increasing demands on algorithm robustness and safety, since you want to cover a sufficient number of roads under varying conditions. Alternatively, you may rely on a virtualized workflow, where conditions can be varied in software and typically with a lot less effort.
The base ingredients to virtually develop perception systems are these:
The Simulation 3D Camera sensor not only provides RGB image data of the world, but also a depth map and labeled information. Possible outputs are shown below, namely an RGB image (left), a depth map whose grayscale values represent distance (center), and the output of semantic segmentation (right). There are two types of cameras available, one with a standard focal length and one with a fisheye lens. Both come with a distortion model which you can calibrate to represent custom cameras. The depth map can be compared to what a stereo camera would output after post-processing. You may have used a stereo camera before, such as a Kinect in conjunction with your Xbox. Stereo vision as a concept aims to recover depth information by comparing two or more camera views of the same scene. The technique of semantic segmentation associates each pixel of an image with a class label, such as road, sky, traffic sign, car, or pedestrian. In order to be able to assign labels, a semantic segmentation network needs to be trained with example data, e.g. using deep learning.
Lidar sensors, sometimes called laser scanners, allow you to measure distance to a target by illuminating the target with laser light and measuring the reflected light with a sensor. The output of a 3D lidar is a point cloud, namely a set of data points in space, that gets updated based on the horizontal resolution of the device. (There are 2D lidars as well; in automated driving applications, mainly 3D lidars are used.) Find an example point cloud in the illustration below.
This example is a good starting point to explore the concept of developing a perception algorithm based on virtual lidar sensor data. As a side note, allow me to link two more examples showing what you can do with lidar in terms of tracking and map building: Track Vehicles Using Lidar and Build a Map from Lidar Data.
Concluding this section about 3D virtual environments and sensor models, I recommend checking out this example of a closed-loop control model called Lane-Following Control with Monocular Camera Perception, where virtual sensor data is used to control a car in a realistic driving situation.
The above examples use the Unreal Engine 4, which by itself is quite demanding in terms of compute resources. If you are in the early stages of automated driving development, you may prefer a simpler and faster environment. Typical use cases would be evaluating and comparing sensor configurations or algorithmic concepts. Here the Driving Scenario Designer comes into play. Among MathWorks staff, the tool is typically called the ‘cuboid world’ because actors are represented in a simplified manner as cuboids. Find below an example that shows how to model a radar’s hardware, signal processing, and propagation environment – and the cuboids of course (see the top left area).
The beauty of the tool comes from its simplicity. You can create road and actor models using a drag-and-drop interface. You can also import OpenDRIVE® data if it is available for your desired scenarios. In the context of safety-critical applications, such as emergency braking, emergency lane keeping, and lane keep assist systems, a library of prebuilt scenarios representing European New Car Assessment Programme (Euro NCAP®) test protocols is available.
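Scenarios can also be built programmatically with the drivingScenario API from Automated Driving Toolbox. The following is a minimal sketch; the road geometry and speed values are made up purely for illustration:

```matlab
% Minimal cuboid-world sketch: one straight road, one ego vehicle following it
scenario = drivingScenario;
road(scenario,[0 0; 100 0]);           % 100 m straight road along the x-axis
ego = vehicle(scenario,'ClassID',1);   % ClassID 1 corresponds to a car
trajectory(ego,[5 0 0; 95 0 0],15);    % follow the road at 15 m/s
while advance(scenario)                % step the scenario until the trajectory ends
    % plot(scenario) could be called here to visualize each step
end
```

Programmatic scenarios like this are handy for sweeping parameters (speeds, gaps, actor placements) in batch tests, while the Designer app is better for interactive authoring.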
Find below a flowchart of how you would use Driving Scenario Designer in conjunction with Simulink.
To keep the spirit of this blog post, I am also linking an example here where you can try out the functionality. It is called “Test Closed-Loop ADAS Algorithm Using Driving Scenario”.
Overall, I hope you found this blog post interesting and relevant for your work. Since automated driving is a huge topic, we actually welcome your comments and guidance in terms of what we should cover in future.
Thanks, and Best,
Christoph
Today we’re talking to Nathan Gyger from Speedgoat. As a student at Bern University of Applied Sciences between 2012 and 2015, Nathan participated in Formula Student Germany.
“By participating in an engineering competition it was possible to learn how to properly engineer.”
Before my studies I was working as a car mechanic and I always had the desire to be a part of the engineering that is required to build such a complex machine. In the Formula Student competition it is possible to contribute and implement your ideas and knowledge in the design of a race car. This possibility fascinated me and I was immediately sure that I would not want to miss this opportunity. I built soapboxes in my youth, but by participating in an engineering competition it was possible to learn how to properly engineer. In addition to the more theoretical lessons, some practical work motivated me to invest more in my studies.
I was part of the founding team of Bern Formula Student, and we took on the challenge of building the team’s first car. My main field of work was the design and development of the powertrain and the associated control units. We wanted to design and build our own control unit, but we struggled a lot. I wish I had known of the tools and hardware that are available from MathWorks together with the Speedgoat hardware. This would have simplified our work a lot, but unfortunately there was no such knowledge in our team at that time. Since we were quite a small and very inexperienced team, a lot of other problems had to be solved. I was always involved in both mechanical and electrical tasks.
After my studies I was working as a scientific employee at the university, where I had the privilege to take the role of an advisor for the team. With this I was able to transfer the learnings from our mistakes to the next teams, which I really enjoyed.
We used MATLAB and Simulink to set up a very basic vehicle model in order to determine the best transmission ratio for the gearbox. It was no fancy model, but it suited our needs and was easy to set up. A big advantage of Simulink compared to handwritten code is its readability for a new user. Especially for inexperienced students joining the Formula Student team, model-based design is very intuitive and much easier to understand than handwritten code.
“For my current job, in-depth knowledge of MATLAB and Simulink is indispensable.”
In order to master any tool, you have to use it and work with it. With time, your skills and knowledge will become better and better. By using MATLAB and Simulink during my studies, I gained experience in how to apply these tools to effectively master technical problems. Through the Formula Student competition, I could use these tools not only to solve mathematical homework, but also to develop and implement a product that would eventually become reality. When I started my first job, I already knew how to use these tools in a project together with a bigger team. For my current job, in-depth knowledge of MATLAB and Simulink is indispensable.
A unique feature of building a vehicle for this engineering competition is that you are involved at every stage of the vehicle’s development. In our case, we had to start from nothing, and all the design and communication processes had to be defined. This made it possible for every subject from my studies to find an application in our team. Alongside rather theoretical studies, such a practical application is the best supplement there can be. In a Formula Student team, I had to work towards a common goal and finish all the work by a fixed date. This improved my planning ability, and I learned how to deal with unexpected situations that had both technical and interpersonal origins.
In addition, I got the chance to work directly with our sponsors in a professional environment, and I learned to always keep the bigger picture of the problem we had to solve in mind. When I started in industry, I could already profit from my experiences in the team, and it was easier to understand the collaboration between the different departments. When we built our first car, we made a lot of mistakes, which is normal when you solve a technical problem for the first time. Since I had already had this experience, I knew very well where I had to pay particular attention to give my project a good chance of success.
“The model-based design workflow … makes Simulink a tool I use on a daily basis.”
At Speedgoat, we specialize in state-of-the-art systems for real-time testing using Simulink and Simulink Real-Time, the real-time operating system from MathWorks. In my role as a technical support and training engineer, I give training to and support customers who follow the model-based design workflow of Simulink together with the real-time target machines from Speedgoat. This makes Simulink a tool I use on a daily basis. Due to my past experiences, I’m now able to better understand the needs and struggles of customers who are using the same tools as I did during my involvement in the Formula Student team.
To learn more about Speedgoat support for student teams, see these stories from GreenTeam Uni Stuttgart (Formula Student) and TU Munich (RoboRace).
“Motivation is the key to success.”
In my experience, motivation is the key to success. If you are motivated and fascinated by a subject, everything associated with it will have a positive impact on your work. Others around you will notice that, and they will also benefit from your attitude. So it is important to know what you are motivated and passionate about, and then choose your field of work accordingly. Know your weaknesses; work on them, but don’t let them drag you down. Promote your strengths, invest in them proactively to get better, and don’t hide them when you have the chance. Always make sure that you see the reason for your work and are sure that it is worth investing so much of your time.
Formula Student gave me the chance to prove to myself and others that I’m capable of performing good engineering work, and it showed me where my weaknesses are. This is something no engineer should be denied, which is why I can recommend participation in the Formula Student competition to everyone. It has also shown me that my work can make a difference. I would like to thank all the companies that support a Formula Student team in any way, because what you learn as a student in this intensive time you will never forget; it will accompany you for a lifetime.
NOTE: While this post will talk specifically about manipulators, many of the concepts discussed apply to other types of systems such as self-driving cars and unmanned aerial vehicles.
Trajectory planning is a subset of the overall problem that is navigation or motion planning. The typical hierarchy of motion planning is as follows:
The biggest question is usually: “What’s the difference between path planning and trajectory planning?” If you take one thing away, let it be this: a trajectory is a description of how to follow a path over time, as shown in the following picture.
In this post, we will assume that a set of waypoints from our task planner is already available, and we want to generate a trajectory for a manipulator to follow these waypoints over time. We will look at various ways to build and execute trajectories and explore some common design tradeoffs.
One of the first design choices you have is whether you want to generate a joint-space or task-space trajectory.
The main difference is that task-space trajectories tend to look more “natural” than joint-space trajectories, because the end effector moves smoothly with respect to the environment even if the joints do not. The big drawback is that following a task-space trajectory involves solving inverse kinematics (IK) more often than a joint-space trajectory does, which means a lot more computation, especially if your IK solver is based on optimization.
[Left] Task-space trajectory [Right] Joint-space trajectory
You can read more about manipulator kinematics from our Robot Manipulation, Part 1: Kinematics blog post. The following table also lists the pros and cons of planning and executing trajectories in task space vs. joint space.
Task Space
  Pros: The end effector follows a natural, predictable path with respect to the environment.
  Cons: Inverse kinematics must be solved at many points along the trajectory, which is computationally expensive.
Joint Space
  Pros: Computationally cheaper to execute, since inverse kinematics is only needed at the waypoints; joint motion is smooth.
  Cons: The end-effector path between waypoints is harder to predict and can look less natural.
Regardless of whether you choose a task-space or joint-space trajectory, there are various ways to create trajectories that interpolate poses (or joint configurations) over time. We will now talk about some of the most popular approaches.
Trapezoidal velocity trajectories are piecewise trajectories of constant acceleration, zero acceleration, and constant deceleration. This leads to a trapezoidal velocity profile, and a “linear segment with parabolic blend” (LSPB) or S-curve position profile.
This parameterization makes them relatively easy to implement, tune, and validate against requirements such as position, speed, and acceleration limits.
With Robotics System Toolbox, you can use the trapveltraj function in MATLAB or the Trapezoidal Velocity Profile Trajectory block in Simulink.
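As a small sketch (the waypoints below are made up for illustration), a trapezoidal velocity trajectory through three 2-D waypoints can be generated like this:

```matlab
% Sketch: trapezoidal velocity trajectory through three 2-D waypoints
wpts = [0 1 4;       % x coordinates of the waypoints
        0 2 1];      % y coordinates of the waypoints
[q,qd,qdd,t] = trapveltraj(wpts,200);   % 200 samples along the trajectory
plot(t,qd(1,:))                         % x velocity shows the trapezoidal profile
```

Plotting qd is a quick way to check that the commanded velocities stay within actuator limits before running the trajectory on hardware.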
You can interpolate between two waypoints using polynomials of various orders. The most common orders used in practice are cubic, which matches positions and velocities at the waypoints, and quintic, which additionally matches accelerations. Similarly, higher-order trajectories can be used to match higher-order derivatives of positions at the waypoints.
Polynomial trajectories are useful for continuously stitching together segments with zero or nonzero velocity and acceleration, because the acceleration profiles are smooth, unlike with trapezoidal velocity trajectories. However, validating them is more difficult, because instead of directly tuning maximum velocities and accelerations, you are now setting boundary conditions that may be overshot between trajectory segments.
With Robotics System Toolbox, you can use the cubicpolytraj and quinticpolytraj functions in MATLAB, or the Polynomial Trajectory block in Simulink.
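A minimal sketch (waypoints and times made up for illustration):

```matlab
% Sketch: quintic polynomial trajectory through three 1-D waypoints
wpts = [0 2 1];          % waypoint positions
tpts = [0 2 4];          % times at which each waypoint is reached
tvec = 0:0.01:4;         % sample times
[q,qd,qdd] = quinticpolytraj(wpts,tpts,tvec);
plot(tvec,qdd)           % acceleration profile is smooth, unlike the trapezoidal case
```

By default the velocity and acceleration boundary conditions are zero; name-value arguments let you set nonzero boundary conditions to stitch segments together.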
The animation below compares a trapezoidal velocity trajectory with zero velocity at the waypoints (left) and a quintic polynomial trajectory with nonzero velocity at the waypoints (right).
Another way to build interpolating trajectories is through splines. Splines are also piecewise combinations of polynomials; but unlike polynomial trajectories, which are polynomials in time (one polynomial for each segment), splines are polynomials in space that can be used to create complex shapes. The timing aspect comes in by following the resulting spline at a uniform speed.
There are many types of splines, but one type commonly used for motion planning is the B-spline (or basis spline). B-splines are parameterized by intermediate control points; the spline does not pass exactly through these points, but is guaranteed to stay inside their convex hull. As a designer, you can tune the control points to meet motion requirements without worrying about the trajectory going outside those points.
Effects of modifying control points for a 2-D B-spline
In Robotics System Toolbox, you can use the bsplinepolytraj function.
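A minimal sketch (control points made up for illustration):

```matlab
% Sketch: 2-D B-spline trajectory from a set of control points
cpts = [1 4 4 3 -2 0;    % x coordinates of the control points
        0 1 2 4  3 1];   % y coordinates of the control points
tpts = [0 5];            % time interval over which to traverse the spline
tvec = 0:0.01:5;
[q,qd,qdd] = bsplinepolytraj(cpts,tpts,tvec);
plot(q(1,:),q(2,:), cpts(1,:),cpts(2,:),'o')  % path stays inside the control points' hull
```

Moving any single control point reshapes the curve locally, which is what makes B-splines convenient to tune interactively.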
So far, we have only shown trajectories in position, but you probably also want to control the orientation of the end effector. Unlike positions, interpolating orientations can be more challenging, since angles wrap around continuously, and with some orientation representations, like Euler angles, there are multiple representations for the same configuration.
One way around this is to interpolate orientations using quaternions, which represent orientation unambiguously. One such technique is called Spherical Linear Interpolation (Slerp), which finds the shortest path between two orientations at constant angular velocity about a fixed axis. You can learn more about these techniques from this paper by Ken Shoemake.
With Robotics System Toolbox, you can use the rottraj and transformtraj functions in MATLAB, or the Rotation Trajectory and Transform Trajectory blocks in Simulink, respectively.
Rotation trajectory on the end effector using the Slerp method
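A minimal sketch of Slerp using the quaternion data type (the angles below are made up for illustration; this assumes a toolbox that provides quaternion and slerp, such as Robotics System Toolbox):

```matlab
% Sketch: Slerp between two orientations expressed as quaternions
qA = quaternion([0 0 0],'eulerd','ZYX','frame');    % identity orientation
qB = quaternion([90 30 0],'eulerd','ZYX','frame');  % target orientation (degrees)
qMid = slerp(qA,qB,0.5);     % orientation halfway along the shortest rotation path
eulerd(qMid,'ZYX','frame')   % inspect the interpolated orientation as Euler angles
```

Sweeping the interpolation parameter from 0 to 1 traces the whole rotation; a nonuniform sweep is exactly the time scaling idea discussed next.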
While Slerp assumes linear interpolation at a constant velocity, you can incorporate what is known as time scaling to change the behavior of the trajectory. Instead of sampling the trajectory at uniform time spacing, you can apply some of the trajectory types discussed in the previous section to “warp” the time vector.
For example, a trapezoidal velocity time scaling will cause your trajectory to start and end each segment with zero velocity and reach its maximum velocity in the middle of the segment. With the following MATLAB commands, you can create and visualize transform trajectory with trapezoidal velocity time scaling.
T0 = trvec2tform([0 0 0]);
Tf = trvec2tform([1 2 3])*eul2tform([pi/2 0 pi/4],'ZYX');
tTimes = linspace(0,1,51);
tInterval = [0 5];
[s,sd,sdd] = trapveltraj([0 1],numel(tTimes));
[T,dT,ddT] = transformtraj(T0,Tf,tInterval,tTimes,'TimeScaling',[s;sd;sdd]);
plotTransforms(tform2trvec(T),tform2quat(T));
Transform trajectory with trapezoidal velocity time scaling
We have covered several ways to generate motion trajectories for robot manipulators. Since trajectories are parametric, they give us analytical expressions for position, velocity, and acceleration over time in either task space or joint space.
Having the reference derivatives of position available is helpful for verifying trajectories against safety limits, but it is also great for low-level control of your manipulator. For example, the velocity trajectory can serve as a direct input to the derivative branch of PID controllers; or you can use position, velocity, and acceleration to calculate forward dynamics for model-based controllers. If you want to know more about low-level control of robot manipulators, check out our Robot Manipulation, Part 2: Dynamics and Control blog post.
In this post, we started by designing trajectories on simple kinematic models. The next step is to try this in dynamic simulations, ranging anywhere from simple closed-loop motion models to a full 3D rigid body simulation.
Of course, the end goal is to try this on your favorite manipulator hardware.
If you want more in-depth knowledge on trajectory planning, I found this presentation to be a great resource. To learn more about trajectory planning with MATLAB and Simulink, watch our video below and download the files from the File Exchange.
For anything else, leave us a comment below or email us at roboticsarena@mathworks.com.
Today’s blog is written by Liping Wang, who joined the MathWorks Student Competition team in August of 2019.
One month after joining MathWorks, Liping had the opportunity to view the work done by student teams in the Innovate Malaysia Design Competition (IMDC) 2019.
IMDC is the largest design competition in Malaysia and is open to all third-year or final-year undergraduate science, engineering, computer science, and mathematics students. In this competition, MathWorks has a system design and modeling technical track where students use MATLAB and Simulink.
In 2019, 86 teams registered and 46 submitted entries; all 46 submitting teams used MATLAB and Simulink. Some of the most popular applications were connecting to low-cost hardware, Internet of Things (IoT), signal processing, and statistics and machine learning.
The best use of MathWorks tools came from Universiti Tun Hussein Onn Malaysia (UTHM) students, who developed a smart blood pressure monitoring system. The team used the Simulink Support Package for Arduino to acquire photoplethysmography signals, from which heart rate and blood pressure can be estimated. The signals were then processed in MATLAB to extract systolic and diastolic blood pressure readings, which were displayed on a ThingSpeak web page for the physician and in the ThingView mobile application for the patient.
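As a hedged illustration of the final stage of a pipeline like this one (the channel ID, write key, and measured values below are placeholders, not the team's actual setup), processed readings can be published to a ThingSpeak channel from MATLAB with thingSpeakWrite:

% Sketch: push processed blood pressure values to ThingSpeak.
% Channel ID, write key, and field layout are hypothetical assumptions.
channelID = 123456;          % placeholder ThingSpeak channel ID
writeKey  = 'YOURWRITEKEY';  % placeholder write API key
systolic  = 118;             % example processed values (mmHg)
diastolic = 76;

% Write both values in one update; fields 1 and 2 are assumed to be
% configured for systolic and diastolic pressure in the channel.
thingSpeakWrite(channelID,[systolic diastolic], ...
    'Fields',[1 2],'WriteKey',writeKey);

Once the data is in the channel, ThingSpeak's web views and apps such as ThingView can display it without further MATLAB code.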
Download the files for this submission from File Exchange.
The second prize winner of the MathWorks track combined electroencephalogram (EEG) and electromyography (EMG) signals in a brain-computer interface (BCI) to perform intelligent upper limb exoskeleton control, which could have great impact on society, especially for paralyzed patients. This project used MATLAB, IoT applications, signal processing, statistics and machine learning, and connections to both Arduino and Raspberry Pi 3.
Download the files for this submission from File Exchange.
In the project of the third prize winner of the MathWorks track, MATLAB was used to process electroencephalogram (EEG) signals from the brain, extracting and classifying features that are then used to control an external electronic device. Statistics and Machine Learning Toolbox was used to extract features and perform classification, while App Designer was used to create a user interface for simulation and presentation purposes.
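A workflow like the one described might look roughly as follows; this is only a sketch under stated assumptions (synthetic random data stands in for real EEG recordings, and the band-power features and SVM classifier are common choices, not necessarily what the team used):

% Hedged sketch of an EEG feature-extraction and classification workflow.
% All data here is synthetic; real trials would come from an EEG headset.
fs = 256;                          % assumed sampling rate (Hz)
nTrials = 40;
X = zeros(nTrials,2);              % feature matrix: one row per trial
y = [zeros(nTrials/2,1); ones(nTrials/2,1)];  % two command classes

for k = 1:nTrials
    eeg = randn(fs*2,1);                 % stand-in for a 2-second trial
    X(k,1) = bandpower(eeg,fs,[8 13]);   % alpha-band power feature
    X(k,2) = bandpower(eeg,fs,[13 30]);  % beta-band power feature
end

mdl = fitcsvm(X,y);                % train an SVM (Statistics and ML Toolbox)
cmd = predict(mdl,X(1,:));         % classify one trial into a command

The predicted class label would then be mapped to a control action for the external device.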
Download the files for this submission from File Exchange.
There were two other teams that won consolation prizes for their work using MATLAB and Simulink. You can see their presentations below.
In IMDC 2019, MathWorks provided complimentary software for the participants, as well as design resources that give students exposure to relevant cutting-edge technologies. We also engaged more closely with several selected teams, providing them with examples and guidance on functionality relevant to their specific challenges.
This year the winners of the MathWorks track were all related to digital healthcare. Of course, there was also excellent work in other areas such as precision farming, smart transportation, and smart manufacturing. You can find out more about this here.
The IMDC 2020 final will be held in July 2020. We plan to continue supporting this competition with software and technical expertise, and to hold more in-person meetings with the teams. We look forward to visiting this exciting event next year and to seeing more interesting work in other areas, such as automotive software design and development, for which MATLAB and Simulink products provide a variety of solutions.
You can learn more about our support for IMDC on our website. As always, feel free to leave us a comment below or email us at studentcompetitions@mathworks.com.
Today we’re talking to Roni Deb from Ather Energy. As a student at SRM University, Chennai between 2015 and 2017, Roni participated in Formula Student Germany, Formula Student India, and... read more >>
“We realised that in order to understand and predict the vehicle performances at an early concept stage, simulation was necessary.”
As a freshman in college, I was deeply taken with the concept of creating a racecar at the undergraduate level and competing with other universities. Things became more interesting when I learnt about the amount of planning, both monetary and in resources, that goes into building a single car. Hence, joining my college Formula Student team was a no-brainer.
Initially, as a first-year recruit, I was involved in odd jobs such as running errands for the team to get parts manufactured. As time went by, I gained an interest in Vehicle Dynamics. By my final year, I took up the responsibility of Chief Vehicle Dynamics Engineer, where I tried to ensure that the development of the chassis, suspension, and steering was aligned with our goals for performance and durability.
Yes. My team and I had an interest in building models using the most basic elementary algorithms first and slowly working our way up. We realised that in order to understand and predict the vehicle performances at an early concept stage, simulation was necessary.
“Projects, whether at the university level or the professional level, require the same level of commitment and determination.”
Yes, it has become easier for me to visualize the scope of any new project. On top of that, as we enter the professional industry, our previous experiences count a lot. Having a background in mathematical model development and simulation is a great start to one’s career, especially if you wish to continue working in a similar area.
What I have realised is that projects, whether at the university level or the professional level, require the same level of commitment and determination. The scale of monetary or technical resources might vary, but from an individual’s point of view, you need to be equally focused. So yes, it was a huge support in preparing for the professional industry.
Read the MathWorks user story from Ather Energy.
Our team uses MATLAB and Simulink on a daily basis to assess, understand, and optimise vehicle dynamics parameters at the conceptual phase. This is similar to what we did during our Formula Student days. Estimates made using hardware-in-the-loop (HIL) simulations gave us better visualisation of the mechanical strains on chassis and suspension components.
As of today, Ather Energy has built the first smart electric scooter to hit Indian roads. Our current goal is to attract the mass market with the benefits of going electric. I see tremendous potential in what we are trying to do.
“Be proactive. There is always something to learn.”
Be proactive. There is always something to learn; whether it is in the technicalities of your subsystem or the team and resource management discussions. Being enthusiastic about your desired field of work will surely land you in a good professional environment, or it will even help you create your own entrepreneurial environment.
MATLAB and Simulink have become essential tools throughout the technical industry, and the earlier the exposure, the better. Formula Student has given us the chance to showcase the capabilities of such simulation tools for practical purposes, which is definitely not possible in classrooms. On top of that, there is demand in industry right now for students with this kind of practical experience using simulation tools. The student support shown by MathWorks for Formula Student competitions is highly appreciated by all and should be utilised to the fullest. I heavily recommend the use of MATLAB and Simulink.