This week’s post is by Owen Paul, who works on the MathWorks Student Competitions Program team.
Many companies, particularly in the automotive, robotics, and aerospace industries, use Simulink for model-based design. But I bet if you were to survey engineering students on what Simulink is used for, most might not know what to say. This is one of the main reasons we run the Simulink Student Challenge each year: we at MathWorks want to highlight students using Simulink and inspire you to use it in your own projects. With that said, you might be getting ready to exit this blog because you think I'm going to spend the next 1,000 words convincing you to enter the competition. NOPE! In this post I will discuss the two first-place winning videos of the 2019 Simulink Student Challenge and the cool projects these students are working on.
But before I dive into the projects, let me briefly describe the challenge for some background. The Simulink Student Challenge is an online competition that MathWorks hosts annually. In it, we ask college students around the world one simple question: how do you use Simulink? To answer, students make a short video showcasing a project in which they've used Simulink and post it to YouTube with the tag #SimulinkChallenge2019. We then judge these videos on three criteria: appropriateness of the entry to the contest theme, creativity and originality of the video, and depth of product knowledge demonstrated in the challenge solution. Now that you know what the challenge is, let's dive in! 😊
Velocimeter using Advanced Digital Filters (VADER)
How would you feel if you bought a brand-new car and a few days later noticed a bump on one of the passenger doors? Probably not very happy. To avoid this anger and frustration, quality testing of materials is extremely important. Our first first-place winner has a solution that can help with this quality assurance. Felix Schneider at Bochum University of Applied Sciences is developing a high-accuracy optical length and velocity sensor named 'VADER.' The VADER sensor includes a bright LED to illuminate the surface of an object and a line camera to take pictures of the moving surface (figure 1).
At a high level, this works as follows: sheets of metal move across an assembly line at a certain velocity, and the VADER sensor is positioned facing the sheet, with the line camera oriented in the direction the sheet metal is moving. As the material moves past the camera, imperfections are found using spatial filtering velocimetry techniques. If this doesn't sound complex enough, Felix also had to account for the fact that the camera produces data at a rate of 1.6 Gbit/s, and this data must be pre-processed and filtered. To solve this complex problem, Felix turned to Simulink.
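The core idea behind spatial filtering velocimetry is fairly intuitive: a periodic weighting (a "grating") is applied across the line camera's pixels, and as the surface texture moves past, the weighted sum oscillates at a temporal frequency proportional to the velocity. The sketch below is my own illustration in NumPy, not Felix's FPGA implementation; the grating period, pixel scaling, and function name are all invented for the example:

```python
import numpy as np

def spatial_filter_velocity(frames, grating_period_m, frame_rate_hz):
    """Estimate surface velocity from a stack of 1-D line-camera frames
    using spatial filtering velocimetry.

    frames            -- 2-D array, shape (n_frames, n_pixels)
    grating_period_m  -- spatial period of the weighting grating, mapped
                         to the object plane (metres); assumed value
    frame_rate_hz     -- line-camera frame rate
    """
    n_frames, n_pixels = frames.shape
    # Periodic weighting across the pixel line (16 px per period is an
    # arbitrary choice for this sketch).
    grating_period_px = 16
    weights = np.cos(2 * np.pi * np.arange(n_pixels) / grating_period_px)

    # One scalar sample per frame: texture moving under the grating
    # produces an oscillation at f = v / grating_period_m.
    signal = frames @ weights
    signal -= signal.mean()

    # Find the dominant temporal frequency via FFT (skip the DC bin).
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(n_frames, d=1.0 / frame_rate_hz)
    f_peak = freqs[np.argmax(spectrum[1:]) + 1]

    return f_peak * grating_period_m  # velocity in m/s
```

A real implementation has to do this streaming on an FPGA at the sensor's full data rate, which is exactly the part Simulink and HDL Coder took care of for Felix.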
To process the data coming in from the VADER sensor, filtering is done on a custom-built circuit board using a Field Programmable Gate Array (FPGA). The FPGA has three main tasks: decode the data coming from the line camera, filter it, and process it for output to a Texas Instruments (TI) development board with a Digital Signal Processor (DSP) chip. To handle all these tasks, Felix modeled the FPGA system in Simulink, separating each task into its own subsystem (figure 2).
Something I found particularly interesting in this model is the filtering subsystem, where the spatial filtering mentioned previously is implemented. Because Felix needed 8 parallel spatial filters, he used a For Each Subsystem block, allowing him to run the data through the filter 8 times using only one block.
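In textual code, the For Each pattern amounts to "one filter definition, applied once per channel." A rough NumPy analogue of the idea (the channel count matches the post, but the kernel is a placeholder, not Felix's filter coefficients):

```python
import numpy as np

def bandpass(x, kernel):
    # Placeholder 1-D filter standing in for one spatial filter instance.
    return np.convolve(x, kernel, mode="same")

def filter_all_channels(channels, kernel):
    # "For each" over the 8 channels: a single filter definition is
    # instantiated per channel, analogous to one For Each Subsystem
    # block instead of 8 copy-pasted filter blocks.
    return np.stack([bandpass(ch, kernel) for ch in channels])
```

The payoff is maintainability: change the filter once, and all 8 instances change with it.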
An edge detection algorithm was also added in the filter subsystem to detect where the material starts and ends. From this information, you can easily derive the material's length as well as its velocity. To implement the edge detection, Felix first used Simulink logic blocks (figure 3), which he said became a "large design that's hard to verify."
After discovering this, he turned to Stateflow, "which made things a lot easier." At first glance we can already see that the Stateflow model (figure 4) uses far fewer blocks and is easier to read. But what really made this implementation better for Felix is that he could verify that the output of the edge detection is the correct response. With this new implementation, plots were created showing the data before and after the edge detection algorithm was applied, and from these plots it became intuitive to decipher where the material started and ended.
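A Stateflow chart like this boils down to a small state machine: wait for the signal to rise above a threshold (leading edge), then for it to fall back below (trailing edge), and derive the length from the time between the two and the known velocity. Here is a toy Python rendering of that logic, with made-up thresholds and signal names rather than anything from the actual model:

```python
from enum import Enum

class State(Enum):
    IDLE = 0      # no material under the sensor
    MATERIAL = 1  # leading edge has passed the camera

def detect_edges(samples, threshold, frame_rate_hz, velocity_m_s):
    """Detect the leading and trailing edges of the material in a
    per-frame intensity signal and return the material length (m),
    or None if no complete piece was seen."""
    state = State.IDLE
    start = end = None
    for i, s in enumerate(samples):
        if state is State.IDLE and s > threshold:
            state, start = State.MATERIAL, i   # leading edge
        elif state is State.MATERIAL and s <= threshold:
            end = i                            # trailing edge
            break
    if start is None or end is None:
        return None
    duration_s = (end - start) / frame_rate_hz
    return duration_s * velocity_m_s
```

Expressed as states rather than nested logic gates, each transition can be inspected and verified on its own, which is the advantage Felix described.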
As mentioned previously, the FPGA board has to handle a high data rate while also interfacing with a TI board. Simulink Test was used to ensure any issues or bugs in the model could be debugged before manufacturing the FPGA board or testing on any hardware. Using Simulink Test, Felix was able to feed real or simulated camera data into the model and check how accurate the results were and where errors might occur.
Once the models were properly tested and Felix knew they would work, it was time to start writing the C and HDL code for the hardware… Oh wait, he didn't have to write any code! That's right, not a single line of code was written by hand, because HDL Coder was used to deploy the FPGA Simulink model onto the FPGA board. Felix also had a Simulink model for the TI board, from which he used Embedded Coder to generate and deploy C code. Felix said that HDL Coder was probably the biggest benefit of using Simulink for this project. He stated that,
“Developing test benches in HDL projects by hand is very tiresome and in contrast, in Simulink I can plot every signal, and, in most instances, the error can be spotted from the waveforms very quickly. The whole project wouldn’t have been possible without HDL-Coder / Simulink Coder.”
Learn more about this project by watching the video here! And be sure to keep an eye on his work because according to Felix’s current simulations “there might be a new version of VADER soon, that reaches measurement errors far below those of commercially available sensors.”
SafeTown
Our next first-place winner gives us a glimpse into the near future of self-driving cars. Mustafa Saraoğlu at Technische Universität Dresden is looking to create a 'SafeTown' in which vehicles are aware of each other's positions and adjust where they go accordingly. There are two main elements to this problem. The first is the autonomous driving element: the vehicle must follow the road while avoiding other vehicles. The second is intersections: when a vehicle arrives at an intersection, how does it know whether other vehicles are there and who should go first?
To solve the first problem, an autonomous vehicle model was developed using Simulink and Stateflow. This model wasn't built from scratch, however. Mustafa's team started with a simple line-tracking example for the LEGO MINDSTORMS EV3 robot the team was using. From there, a PID controller was added and its parameters were tuned until the team was satisfied with the vehicle's line-following performance. After that, a Stateflow chart was added to control switching between three modes: line following, stopping, and crossing an intersection. Mustafa told us that using Simulink for the controls was key because they "could use a variety of different controllers and make quick assessments, tune if needed, or change [their] approach. [He] can't think of another environment suitable for [such] rapid development with such possibilities."
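In code terms, that control loop is a standard discrete PID plus a small mode switch. The sketch below is my own minimal illustration; the gains, signal names, and mode labels are placeholders, not the team's tuned values:

```python
class PID:
    """Discrete PID controller: steering command from line-offset error."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def drive_mode(at_intersection, go_signal):
    """Mode switch in the spirit of the team's Stateflow chart:
    follow the line, stop at an intersection, and cross only when
    the camera workstation says go."""
    if not at_intersection:
        return "LINE_FOLLOW"
    return "CROSS" if go_signal else "STOP"
```

Keeping the continuous control (PID) and the discrete logic (mode switch) separate mirrors the Simulink/Stateflow split the team used.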
For the second problem, a camera was placed above the 'town' to track the positions of the LEGO robots and identify intersections. An image recognition algorithm was used to identify the LEGO robots on the map. To create this algorithm, Mustafa's team started by using the ground truth labeling app to label a LEGO robot in each frame of a video. Object detectors such as R-CNN, Fast R-CNN, Faster R-CNN, and ACF were then trained in MATLAB using the data generated from the labeling app. These detectors were then tested on a pre-recorded sample video with multiple LEGO robots on the map (figure 5). Using this video, Mustafa's team found the best algorithm for the project, the ACF detector, and tuned its parameters to ensure it accurately identified the LEGO robots, the intersections, and when robots were waiting at an intersection. To use the image recognition algorithm in the Simulink model, a MATLAB Function block was used.
Now that we've seen how Mustafa's team tackled the two main problems, there is one more crucial element to think about: communication! The camera workstation can identify when the LEGO robots are at an intersection, but it must then tell the robots whether to wait or go. This communication was done over Wi-Fi using the User Datagram Protocol (UDP). Integrating UDP was made easier by the fact that there are pre-made blocks for it: the LEGO robot has a UDP Simulink block provided in the hardware support package for LEGO EV3 robots, and for the camera workstation, UDP blocks from the Instrument Control Toolbox were used. With these add-ons installed, all that had to be done was to drag the UDP blocks into the model and set the address and port.
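Under the hood, those blocks simply send and receive UDP datagrams. A plain-Python equivalent of the workstation-to-robot "go/wait" message looks like this (the port number and command bytes are made up for illustration):

```python
import socket

def send_command(command: bytes, robot_ip="127.0.0.1", port=25000):
    """Workstation side: fire-and-forget one datagram to a robot.
    UDP is connectionless, so there's no handshake and no delivery
    guarantee -- fine for frequently repeated go/wait commands."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(command, (robot_ip, port))

def receive_command(port=25000, timeout_s=1.0):
    """Robot side: block until one command datagram arrives."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout_s)
        sock.bind(("", port))
        data, _addr = sock.recvfrom(64)
        return data
```

The Simulink UDP blocks wrap exactly this kind of socket call, which is why configuring them reduces to setting an address and a port.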
After watching the video, I became curious about the background of this project. When asked about project SafeTown, Mustafa said,
“SafeTown is a very useful project for students to test and try their algorithms on real hardware. This also contributes to the overall understanding of the concepts related to control and automation engineering. I had always wondered how control theory works in real life as I was a bachelor student. So now, granting that opportunity to undergraduate students, working together with them on such projects, makes me feel happy and satisfied. I hope we can improve it in different aspects as new students join to write their theses and add values collectively.”
To learn more about this project and watch the Lego robots in action click here!
Lastly, to watch the other winning videos or find out more about the competition click here.
Want to learn how to use Simulink yourself? Take the free Simulink Onramp course, and maybe next year I'll be writing about your Simulink project. 😉