MathWorks was recently at RoboCup 2018 in Montreal, Canada. Over the 7 days of this event, we got a lot done. In this post, Sebastian Castro will discuss one of the collaboration efforts he worked on. Introduction One of my favorite things about working with student competitions is the chance to collaborate... read more >>

One of my favorite things about working with student competitions is the chance to collaborate with teams and organizers. Last year, I got in touch with two professors involved in the RoboCupRescue Simulation League. Allow me to introduce them:

- Arnoud Visser (University of Amsterdam)
- Luis Gustavo Nardin (Brandenburg University of Technology Cottbus)

We decided to focus on the Agent Simulation Competition. Participants of this competition need to program a collection of autonomous *agents* in a simulated disaster scenario; the overall goal being to save as many civilian lives as possible. There are 3 kinds of programmable agents in this challenge:

- **Ambulance team:** Pick up injured civilians and take them to shelters
- **Fire brigade:** Put out building fires to prevent them from spreading
- **Police force:** Clear road blockades to let agents move around the map

*Screenshot of a typical RoboCupRescue Agent Simulation*

All RoboCup major leagues strive to provide a platform for advancing robotics research. Autonomous behavior and decision-making is increasingly driven by machine learning, and it so happens that MATLAB contains design tools, models, and other functionality for machine learning. As a result, we decided to try integrating MATLAB with the RoboCupRescue Simulation (RCRS) Server using their Agent Development Framework (ADF). Both of these tools are written in Java.

After some time working together, both remotely and at the RoboCup German Open 2018 (Magdeburg, Germany), we came up with a solid proof of concept and the idea to deliver a workshop for competition participants. At RoboCup 2018 (Montreal, Canada), we presented a 2-hour “teaser” workshop, had a poster, and won 1st place in the RoboCupRescue Simulation Infrastructure competition! Now, we want to share our work with you.

*Team “Joint Rescue Forces” at RoboCup 2018: Sebastian Castro, Luis Gustavo Nardin, and Arnoud Visser*

Just as we did in our workshop, we will begin with an extremely high-level picture of what machine learning is, and how it fits in with its commonly associated buzzwords: artificial intelligence and deep learning. In the context of robotics, we present the following summaries.

- **Artificial intelligence:** Describes a broad set of problems where an agent has information about the environment and automatically takes action to achieve a goal.
- **Machine learning:** A subset of artificial intelligence, where an agent uses *data* to automatically train itself to take action.
- **Deep learning:** A subset of machine learning that specifically uses neural networks as mathematical models. “Deep” refers to a neural network with many layers, and is a nod to the recent resurgence of large-scale neural networks enabled by the computing power available nowadays.

*AI vs. Machine Learning vs. Deep Learning [Source]*

Regardless of the machine learning algorithm or model selected (see the next subsection), the same set of tools can be used to solve many types of problems. Below are the four main types of machine learning problems for robotics.

**1. Classification**

- Labeling input data from a known, finite set of categories
- Examples: Diagnosing disease, identifying types of animals (cats, dogs, horses, etc.)

**2. Regression**

- Predicting a continuous output from input data
- Examples: Predicting weather (temperature, % rainfall, etc.), calculating actuator forces/torques for robot locomotion

**3. Detection**

- Locating, counting, and identifying objects of interest in data
- Usually consists of some combination of classification and regression
- Examples: Pedestrian detection, finding key objects and grasping points in cluttered environments

**4. Generation**

- We can think of this as the “inverse” of classification: synthesizing representative data given a requested category
- Examples: Music/literature generation given a specified style, video game character generation

Recall that machine learning is defined by the fact that it relies on *data*. The basic idea is: we provide data to the agent and it forms a generalization, or model, of the problem it needs to solve. A good machine learning algorithm will be able to accept new, independent data and still solve the problem correctly.

Depending on the format or availability of data, machine learning algorithms can fall into various categories. The main types include:

**Unsupervised learning**

- Finding patterns from *unlabeled data*
- The agent develops its own insights and we have to make sense of them as best we can

*Unsupervised Learning Algorithms in MATLAB*

*[Left] K-Means Clustering for Simple 2D Data | [Right] Euclidean Distance Clustering for Point Cloud Data*

**Supervised learning**

- Determining a model, or fitting model parameters, from *labeled data*
- Since the data is labeled, humans can validate models by checking whether the trained model correctly identifies labels on independent test data

*Supervised Learning Algorithms in MATLAB*

*[Left] Decision Tree | [Right] Support Vector Machine (SVM)*

**Reinforcement learning**

- Technically, this is a type of supervised learning
- The “label” in this case is a mathematical reward function that the agent needs to maximize
- The agent repeatedly interacts with a physical system (simulated or real-world), evaluates its reward, and learns to maximize it over time

*DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills **[Source]*

**NOTE:** Deep learning is not a type of *algorithm*, but rather describes the type of *model* used by the agent. For example, you might see the term “deep reinforcement learning”. This means that the agent is applying reinforcement learning to tune parameters for its internal deep neural network model.

Now that we’ve briefly introduced machine learning, let’s discuss what we did with RoboCupRescue Agent Simulation.

All agents must navigate the roads in a city map to get to their targets. These maps are typically represented as undirected graphs. Graph search is considered an AI problem, but not a machine learning problem since no generalization to new data is required — you simply search over the whole map.

We implemented two alternative solutions for working with graphs: using MATLAB graph objects and graphs from Peter Corke’s Robotics Toolbox.

In both cases, we could generate a graph in MATLAB from the simulator, and then search for the shortest path between any two nodes on the graph using various algorithms. More importantly, once the graph was in MATLAB, each agent could add or remove nodes and edges based on new information (for example, road blockages). These are both important because the simulation:

- Has a precompute step in which the initial map can be calculated and shared among all agents
- Then, at each simulation step, each agent must work with the map independently, including on-the-fly modification and replanning
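As a sketch of the first approach, MATLAB graph objects support exactly this build/search/modify cycle. The node numbers and edge weights below are made up for illustration, not taken from an actual RCRS map:

```matlab
% Build a small road graph (hypothetical nodes/edges/weights):
s = [1 1 2 3 4 2];        % edge start nodes
t = [2 3 4 4 5 5];        % edge end nodes
w = [10 15 12 10 2 20];   % edge weights, e.g. road lengths
G = graph(s,t,w);

[path1,len1] = shortestpath(G,1,5);   % initial shortest path: 1-2-4-5

% A blockade is discovered on the road between nodes 4 and 5,
% so the agent removes that edge and replans on the fly:
G = rmedge(G,4,5);
[path2,len2] = shortestpath(G,1,5);   % replanned path: 1-2-5
```

The same pattern works with `addedge`/`addnode` when newly cleared roads or map updates arrive during the simulation.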

*[Left] Simple test map with two shortest path solutions*

Suppose that your police force consists of 5 police agents. You want to assign different “zones” to them so they can evenly distribute all the tasks that need to be done within the map. How do you position and dispatch your police agents so they can respond to blockades as quickly as possible?

Many teams are already doing this using *clustering*, which is a type of *unsupervised learning*. There are many built-in functions for cluster analysis in Statistics and Machine Learning Toolbox.

```matlab
buildings = importBuildingsData('data/unsupervised/buildings.csv'); % Generated by Import Tool
numMeans = 5;
[indices,centroids] = kmeans([buildings.x buildings.y],numMeans);
```

*Example map showing k-Means Clustering and the corresponding centroids*

Now for the final example. Suppose you are an ambulance agent and you are faced with a very uncomfortable, but perhaps realistic, decision: If you have 3 injured civilians to rescue, and some information about them, how do you decide which one to save first?

This is where we can use *supervised learning* for target selection. We recorded data from previous simulations to a file, which gives us historical information on whether or not a civilian survives being transported to a shelter. The factors, or features, that we logged include:

- Distance from the ambulance
- Health points and injury level when discovered
- The state of the building

An autonomous agent could use this information to make future predictions and prioritize which civilian to rescue, with the intent of maximizing the overall score of the rescue team. For example, the agent could favor rescuing civilians predicted to be in a critical state who would still survive the rescue mission.

For the input data, we first used the Import Tool in MATLAB to read spreadsheets and automatically generate a MATLAB function that converts the data to a table. We could then employ techniques such as dimensionality reduction or feature selection to reduce the amount of input data needed to train a model and make a prediction. Ideally, this would lead to a more computationally efficient model, with little to no impact on prediction accuracy.
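As one hedged illustration of that dimensionality-reduction step, principal component analysis could look like the sketch below. The variable `featureMatrix` is an assumed numeric matrix of the logged features, not a name from our actual code:

```matlab
% Sketch only: 'featureMatrix' is a hypothetical numeric matrix whose rows
% are logged rescue attempts and whose columns are features.
[coeff,score,~,~,explained] = pca(featureMatrix);

% Keep just enough principal components to explain 95% of the variance:
nKeep = find(cumsum(explained) >= 95, 1);
reducedFeatures = score(:,1:nKeep);
```

Feature selection functions such as `sequentialfs` offer an alternative that keeps the original, interpretable features instead of linear combinations of them.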

For the output data, we had to choose the type of machine learning problem to solve. Since our raw output was the number of hit points (HP), from 0 to 10000, we tried the following 3 approaches:

- **Binary classification:** Dead (0 HP) vs. Alive (1-10000 HP)
- **Multiclass classification:** Dead (0 HP), Critical (1-3000 HP), Injured (3001-7000 HP), and Stable (7001-10000 HP)
- **Regression:** Predict the actual HP value from 0 to 10000

The Classification Learner and Regression Learner apps allowed us to try different types of models and find the one with the best accuracy. Then, we could export our trained model and use it to predict on an independent test data set. If the test accuracy was good enough, we could integrate this model into the simulator so each agent could make predictions on new simulation runs.
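The apps can also export the equivalent training code. A minimal sketch of the same workflow done programmatically might look like this, assuming a hypothetical table `rescueData` with feature columns and a categorical `Status` label (these names are illustrative, not from our repository):

```matlab
% Hold out 20% of the (hypothetical) data for independent testing:
cv = cvpartition(height(rescueData),'HoldOut',0.2);
trainData = rescueData(training(cv),:);
testData  = rescueData(test(cv),:);

% Fit a K-Nearest Neighbors classifier to predict the 'Status' label:
mdl = fitcknn(trainData,'Status','NumNeighbors',5);

% Evaluate accuracy on the held-out test set:
predicted = predict(mdl,testData);
accuracy  = mean(predicted == testData.Status);
```

Swapping `fitcknn` for `fitctree`, `fitcsvm`, or `fitcensemble` is how one would compare model families outside the app.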

*Classification Learner app showing multiclass predictions and confusion matrix for our sample data set. We got a maximum accuracy of 78.9% with K-Nearest Neighbors (KNN), which is far better than the 25% we could get from random guessing.*

**NOTE:** We also tried deep learning on this dataset (because, why not?). Since we had a small number of data points and features, and all features were scalar, numeric data, we did not gain much accuracy from the added complexity and nonlinearity of neural networks. However, our repository includes deep learning examples and we encourage you to try them and improve on our work!

Finally, we wanted to discuss integrating the MATLAB-based machine learning work with the Java-based simulation framework.

The MATLAB Engine API for Java lets you call MATLAB code from Java, and pass information between MATLAB and Java, provided that a MATLAB session is currently open on your machine. This was a good first step for prototyping, and we were able to demonstrate this worked with path planning, resource allocation, and target selection tasks from the previous section.

Given the MATLAB Engine API functionality, we were able to explore some design tradeoffs:

- Multiple agents starting separate MATLAB sessions vs. connecting to a shared MATLAB session
- Evaluating MATLAB commands in the (shared) base workspace vs. calling functions, which have their own data scope
- Calling MATLAB code synchronously (waiting to receive output) vs. asynchronously (executing the code and getting the results later)

*By the way:* the MATLAB Engine API is available in many other languages as well, including C++ and Python. Refer to the MATLAB documentation for more information.

This approach worked during the precompute step of simulation, but would not scale well to multiple agents because it would require a large number of MATLAB sessions and/or multiple agents trying to access the same shared MATLAB session. Also, there are security/cheating concerns because, after the precompute step of simulation, agents are not allowed to share data with each other.

So, how do we handle multiple agents calling MATLAB code without sharing data or computational resources? The answer: don’t use MATLAB! (and yes, I still work at MathWorks)

Using MATLAB Coder, you can generate portable C/C++ code from the algorithms we described above. This could result in code only, or the code could be automatically compiled into an executable or shared library/object (depending on your operating system).
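As a sketch of that step, generating a shared library with MATLAB Coder looks roughly like the following. The entry-point function `selectTarget` and its input signature are hypothetical placeholders:

```matlab
% Hypothetical entry-point: 'selectTarget' takes a 1x3 feature vector.
cfg = coder.config('dll');   % target a shared library (.dll/.so/.dylib)
codegen -config cfg selectTarget -args {zeros(1,3)}
```

The `-args` example input tells the code generator the size and class of each argument, which it needs to produce statically typed C/C++ code.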

The best approach we found was generating a shared library and then calling it from each independent agent. No need to have even a single MATLAB instance open, the generated C/C++ code may run faster than the MATLAB code, and there is no need to worry about agents sharing data because each of them loads the library separately.

One *small* hurdle: Calling C/C++ code from Java requires the Java Native Interface (JNI). Luckily, there are tools available such as Simplified Wrapper and Interface Generator (SWIG) that can do the work for you. Reach out to us if you want to know more.

In this post, we introduced our own definition of machine learning and some of the common problems and algorithms associated with it. Then, we showed how MATLAB helped us go from design concept to integration with an external software framework… including all the design explorations and tradeoffs we performed along the way.

Our resources are all available online. You can download our code, read our paper, and access our presentation.

Hopefully, we’ve shown you some new things you didn’t know MATLAB could do. If you participate in RoboCup, we hope to see you in an upcoming workshop. Otherwise, we would still like to hear from you in the comments!

In today’s post, Wojciech Regulski introduces you to modeling fluid dynamics using MATLAB. Wojciech has a PhD in mechanical engineering from Warsaw University of Technology, Poland, and has specialized in Computational Fluid Dynamics (CFD) in his research work. Wojciech also co-founded the QuickerSim company that specializes in development of fluid flow simulation software. – ... read more >>


CFD modeling has become indispensable in many areas. It is used to determine flow conditions such as velocity, pressure, or temperature for diverse kinds of problems. Knowing these is pivotal for, e.g., automotive or aircraft aerodynamics, heating or cooling problems, and even weather forecasting or climatology.

Depending on the extent and complexity of its application, CFD requires:

- lots of computing resources,
- dedicated software and
- quite sophisticated engineering knowledge.

As you are reading this post, let’s not worry too much about #3. Requirement #1 matters less and less, as even laptops now come with multi-core CPUs and even GPUs. #2 will be tackled in this post: we want to give you an idea of a basic CFD simulation in MATLAB. We will also share examples and give insights into projects that student teams are currently conducting.

QuickerSim CFD Toolbox, a dedicated CFD toolbox for MATLAB, offers functions for performing standard flow simulations and associated heat transfer in fluids and solids. The toolbox is based on the Finite Element Method (FEM) and uses the MATLAB Partial Differential Equation Toolbox data format. It operates much like a standard CFD solver – a set of routines executes consecutive simulation steps, see Figure 1. First, the computational mesh is read in and the solution is initialized. Then the iterative process of solving the equations takes place. Finally, the user can resort to various post-processing tools and manipulate the data flow on their own.

Fig. 1. Script and illustration of a flow simulation past an array of pipes.
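The solver loop just described can be sketched as follows. Note that the function names here are placeholders to show the structure only, not the actual QuickerSim CFD Toolbox API (see the linked tutorials for the real functions):

```matlab
% Illustrative structure only -- all function names are hypothetical:
mesh = readMesh('pipes.msh');        % 1. read the computational mesh
u = initSolution(mesh);              % 2. initialize the solution
maxIter = 100;
for iter = 1:maxIter                 % 3. iteratively solve the equations
    u = solveFlowStep(mesh,u);
    if hasConverged(u), break; end
end
plotSolution(mesh,u);                % 4. post-process the results
```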

While practically any aspect of a car design is subject to CFD analysis, many areas are very demanding. Topics such as fuel combustion or noise generation are tackled only by large and experienced engineering departments. Also, the numerical models used there are very complex and have been validated using extensive experimental trials. Still, areas such as battery cooling or simple aerodynamic cases can be analyzed using standard tools by individuals in relatively short time frames.

Figure 2 shows the results of a 2D heat exchanger simulation where the fluid passes an array of pipes perpendicular to the flow direction, as already shown in Figure 1. Observe the development of the so-called thermal boundary layer on the pipe walls, whose thickness depends on the fluid properties. Resolving flow and heat phenomena in the vicinity of solid walls is critical for accurate prediction of heat fluxes and aerodynamic drag. The toolbox contains a set of functions that lets you generate and improve the boundary layer mesh.

Although the geometry here is simple, the basic rule remains the same for more complex cases. First, one resolves the fluid flow, which constitutes most of the computational effort. On top of that, the heat transfer equations are solved, which is less expensive computationally. All this is performed by 40 lines of code within a few minutes of calculation on a laptop.

You can get the code here: https://quickersim.com/cfdtoolbox/tutorial/tutorial-15-laminar-heat-exchanger/

Fig. 2. A 2D simulation of a laminar heat exchanger. Temperature fields for two different thermal conductivities

The SAE team Form UL from Université Laval, Québec, has created a numerical model of their racing car in MATLAB. One of its modules deals with the issue of unsteady heat transfer in the batteries shown in Figure 3. Pierre Olivier Cimon, one of the team leaders and the developer of this model, recognizes its limitations. “Standard engineering correlations do not account for the transient effects and the flow confinement in the compartment. Hence, we resorted to the QuickerSim CFD Toolbox in the hope of getting better heat transfer and pressure drop data.”

Fig. 3 The battery pack as used by the Form UL SAE Team from Canada

The work is still in progress, since the team would like to simulate the entire 30-minute-long run of the car. This can hardly be achieved with a direct simulation and rather requires a clever choice of simplifying assumptions.

The first step is the evaluation of the numerical model for the unsteady flow with conjugate heat transfer using just a few battery cells. You can check out the joint movie below for the initial results.

Team Strohm + Söhne from TH Nürnberg employed the toolbox in the design of their spoiler, see Figure 4. “We aim to increase the downward force of our vehicle while keeping the drag as low as possible,” says Andreas Fischer, one of the leaders of Strohm + Söhne. Simulating the entire component is a challenge of Formula 1-level complexity, but one can still get valuable insights using 2D simulations. The first results for their aileron-flap system are shown in Figure 5 below. You can notice the under-pressure zone on the lower side of the airfoil.

Figure 4. The perspective view of the racing car of Strohm + Söhne team.

The most challenging simulations will be those where the flap is significantly deflected. The flow will most likely become very chaotic, and steady CFD models will be of little use then. Up to that point, however, the steady-state turbulence models (the so-called RANS models) give accurate results.

Fig. 5 The velocity and pressure fields around the spoiler of the S+S Team from Nürnberg.

Indeed, one will often look for improvement of the airfoil performance by manipulating its shape. Usually, the goal is to maximize the lift-to-drag ratio (L/D). Such a procedure can be carried out automatically: a basic turbulent flow simulation from the toolbox is nested inside the MATLAB optimization code. In each optimization step, the mesh is deformed accordingly and another CFD simulation is carried out.

Below you can check out the movie showing the outcomes of consecutive steps of such a procedure. Watch the large initial variations of the trailing edge shape and the L/D ratio. After just a few steps the improvements are only slight and the process is terminated.

You can get the code here: https://quickersim.com/cfdtoolbox/tutorial/tutorial-22-automatic-shape-optimization/

To sum up, we hope that we managed to convince you that:

- Basic flow cases can be dealt with in MATLAB as part of a larger workflow.
- Even complex 3D flow problems can be approximated with 2D surrogates.

If you want to test the QuickerSim CFD Toolbox yourself, it is available here: https://quickersim.com/cfdtoolbox/download/. The basic version is free of charge; it deals with steady laminar flows and heat transfer in two dimensions only. If you want to unlock the full potential, just ask us for a trial license. Teams belonging to the Racing Lounge may get free access to the full license for a whole year – just contact us at contact@quickersim.com


Today, allow me to introduce the MathWorks online training on physical modeling, more specifically on vehicle modeling for student teams. I will give an overview of the learning outcomes and share how teams successfully used vehicle modeling. Why Vehicle Modeling? There are some scenarios where trial-and-error wouldn’t be an option, right?! Just... read more >>

There are some scenarios where trial-and-error wouldn’t be an option, right?! Just imagine a Mars orbiter with only one attempt to succeed. Similarly, think about huge systems like large ships where failing would cost a fortune.

*Artist rendition of Mars Reconnaissance Orbiter (image courtesy of NASA)*

Steve Miller, a fellow MathWorker, puts it like this: *“Good engineers can build something that will work at least once. Better engineers create something that works many times. Great engineers will find the best design […], and confidently rise above the competition. Those engineers reach for the run button.”*

Let’s see how some student teams excelled with modeling and simulation.

Imagine you want to evaluate design ideas but have no car or hardware to test them with. The team TUfast from TU München has developed a lap time simulator. They use it to virtually alter vehicle parameters, such as wheelbase, center of gravity, or aero package setup, and evaluate the performance of their racecar during early design stages.

Find out more details about their approach in this blog post including a video interview with the team.

Developing a racecar needs a wide range of expertise: software developers, aerodynamicists, and mechanical engineers need to collaborate effectively. Team Ka.Race.Ing from KIT united all the disciplines in simulation. The team developed their torque vectoring system in Simulink and deployed it using automatic code generation.

Here is a YouTube video outlining their approach:

Now that you are on-board with the idea of vehicle modeling ;-), I will guide you through what you can expect in our complimentary online training.

**Disclaimer:** In the following, vehicle modeling is conducted using the Simscape environment only. Certainly, you have more options than this. When in doubt, I suggest checking out this blog post first to get an overview of feasible modeling strategies.

The training will help you get started with modeling, simulating, and analyzing automotive systems. There are two parts to the training: longitudinal vehicle dynamics and 3D suspension modeling. The first part will enable you to set up a vehicle model and use it to predict, for example, lap times, fuel consumption, or battery life. This training is applicable to both combustion engine and electric powertrains. Part 2 is about 3D multibody dynamics simulation. Expect to evaluate the kinematics and dynamics of coupled bodies. Additionally, you will be exposed to the concept of design optimization using numerical optimization.

**Note:** Expect to spend 6-8 hours with the vehicle modeling training materials to ramp yourself up and become a proficient Simscape user.

- **Introduction to Simscape** – Explore the concept of plant modeling with Simscape and the physical network approach. Your first model will be a battery (electrical domain) coupled to a powertrain and gearbox (mechanical domain).
- **Simscape Fundamentals** – Learn the fundamental concepts of Simulink and Simscape, such as using foundation libraries, creating multidomain physical components, dividing components into subsystems, and setting initial conditions for physical variables.
- **Introduction to Vehicle Modeling** – See how you can model vehicle bodies, tires, and brakes, and how to incorporate wind and terrain effects.
- **Powertrain Modeling** – Learn about the specifics of powertrain modeling, such as how to actuate vehicle models with power sources, build driveline mechanisms, create multi-speed transmissions, and model engines.
- **Vehicle Drive and Control** – This part is about closing the loop and tying your vehicle model into your control systems. Learn about vehicle control concepts, including how to implement a DC motor drive mechanism and PWM (pulse width modulation) actuation, and how to run simulations with imported drive cycle data.

- **Introduction to Multibody Simulation** – Discover the concept of multibody modeling with Simscape Multibody, which extends Simscape with the ability to model rigid body mechanical systems in 3D.
- **Building Components** – We’ll show you how to create geometries, extruded and revolved solids, and compound bodies. You will gradually create your vehicle’s wishbone suspension.
- **Building Assemblies** – Learn to assemble components. You’ll see how to implement coordinate transforms and specify body interfaces for reusability. You will also sense and log simulation results and add internal mechanics to joints.
- **Importing CAD Models** – Learn to import CAD models for dynamic simulations. You’ll discover how to visualize bodies with CAD geometries, export models from CAD software, and import CAD models.
- **Design Optimization** – Learn how Simulink Design Optimization helps to select design parameters, set requirements or design goals, and optimize model parameters.

I hope that you found this post helpful. Let me again share the relevant link with you:

mathworks.com/academia/student-competitions/physical-modeling-training.html.

We are curious to hear from you and would be happy to give further advice. Let us know in the comment section of this page. Thanks!

In today’s blog post, Jose Avendano Arbelaez, who has already blogged in the Racing Lounge, will introduce you to a video series of training materials that will enable your team to get started with designing and simulating common mobile robotics algorithms in MATLAB and Simulink. MathWorks supports many different types of... read more >>


MathWorks supports many different types of student competitions. Students constantly impress us by building and programming cars, robots, boats, drones, and everything in between. One of the common trends in robotics competitions is that, regardless of the hardware, designs often must complete tasks by themselves, or autonomously. Knowledge of mobile robotics has transitioned from being an exclusive advantage to becoming an essential skillset. In real life, mobile robotics represents the building blocks of autonomous driving, swarm robotics, and industrial automation.

To get started programming mobile robots, you have to understand some robot dynamics and how to pair them with suitable logic operations and sensors. These are exactly the types of lessons that you will find in the complimentary Mobile Robotics Online Training created by the MathWorks Student Competition team. It covers everything necessary to program robots to orient themselves, follow lines, avoid obstacles, and transition between different modes of operation. Here is how we got to program a smart robot like the one below.

Depending on the competition, you will be loading up your robot with a different variety of sensors. You will likely need to get a sense of the robot’s position so that you can use reference points to navigate through a course or environment. Estimating where your robot is with respect to its previous position in space is called dead reckoning. To perform dead reckoning, you need to measure your displacement, and this often requires encoder sensors. These tell you how many rotations a motor shaft or a wheel has performed, which can really help to determine the robot’s position in space. Encoders can feed odometry systems that give you enough information to position your robot or navigate through reference points. Our student competitions mobile robotics training goes into detail on how to process encoder data into useful odometry, such as distance traveled and robot orientation.

Other common sensors are distance sensors, color sensors, and line sensors. In a basic implementation, the information obtained from these sensors is used in conjunction with logical statements to achieve some desired motion. Once you become a more advanced roboticist, you might also start using 3D scanners and lidars. Nevertheless, if you want your robot to be efficient and accurate, it will need to make smarter choices than those you can enclose in IF and ELSE statements. This is where you might have heard the term “PID controller” get tossed around in conversation.
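To make the dead-reckoning idea concrete, here is a minimal differential-drive odometry sketch in MATLAB. All geometry values and tick counts are illustrative, not from the training materials:

```matlab
% Illustrative robot geometry (hypothetical values):
ticksPerRev = 360; wheelRadius = 0.03; wheelBase = 0.15;
x = 0; y = 0; theta = 0;                      % initial pose

% Suppose the left/right encoders report these tick increments:
dTicksL = 180; dTicksR = 200;
dL = 2*pi*wheelRadius * dTicksL/ticksPerRev;  % left wheel travel [m]
dR = 2*pi*wheelRadius * dTicksR/ticksPerRev;  % right wheel travel [m]

dCenter = (dL + dR)/2;                        % robot center travel [m]
dTheta  = (dR - dL)/wheelBase;                % heading change [rad]

% Update the pose, using the midpoint heading for better accuracy:
x = x + dCenter*cos(theta + dTheta/2);
y = y + dCenter*sin(theta + dTheta/2);
theta = theta + dTheta;
```

Running this update at every sensor sample accumulates the robot’s pose over time, which is exactly why encoder drift matters: errors also accumulate.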

Robots must handle changing conditions and dynamic environments. Combine this uncertainty with sensor tolerances, and the resulting error makes it necessary to apply control theory to improve the robustness and response time of your robot. PID stands for Proportional-Integral-Derivative controller, and it is one of the most popular control approaches since it can achieve excellent results with some simple tuning. You will find this type of controller anywhere from machinery automation to aircraft control and complex robotic systems such as humanoid robots. In fact, this type of controller is so versatile that it can be used for both low- and high-level control of a system. All these applications make it incredibly important to understand the basics of this type of controller and build proficiency with its usage.

In the student competitions training you will find an extensive video lesson walking you from how to set up PID algorithms according to your hardware and requirements, to the significance of each of the control parameters. To make it easier, the mobile robotics training is accompanied by the Mobile Robotics Training Toolbox, which includes a robot simulator and sensors that enable you to immediately follow along with the exercises and understand the effects on robot motion when implementing various types of control algorithms.
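For readers who prefer code to acronyms, here is a minimal discrete PID loop in MATLAB. The gains, time step, and toy first-order plant are all illustrative placeholders, not values from the training:

```matlab
% Illustrative gains, setpoint, and sample time:
Kp = 2; Ki = 0.5; Kd = 0.1; dt = 0.01;
setpoint = 1; measurement = 0;
integral = 0; prevError = 0;

for k = 1:1000
    err = setpoint - measurement;            % proportional term input
    integral = integral + err*dt;            % accumulated (integral) error
    derivative = (err - prevError)/dt;       % rate of change of error
    u = Kp*err + Ki*integral + Kd*derivative;
    prevError = err;

    % Toy first-order plant standing in for the real robot dynamics:
    measurement = measurement + u*dt;
end
```

In a real robot the “plant” update is replaced by actuating motors and reading sensors, and anti-windup limits on `integral` are usually added.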

Once you have mastered working with sensors and setting up controllers for basic robot behavior, you will find yourself having to piece together information and controller actions. Perhaps your robot needs to reach a location first and then complete a task. Maybe it should also move to a different location afterwards. Conceptually, it can become a little confusing how this sequence of events should unfold, so it is always useful to draw diagrams to organize all the different actions. Stateflow is a great tool for prototyping complex robot behavior. It allows you to organize your logic and get instant debugging insight into your simulations. Take the obstacle detection example from the picture below: you can immediately relate the distance to an obstacle and the current execution state of the robot. Imagine piecing together multiple of these simple tasks, and suddenly being able to track the code executed in real time becomes a great time-saving tool.
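Stateflow charts are graphical, but the logic they encode resembles a switch-based state machine. Here is a plain-MATLAB sketch of the obstacle-detection idea; the sensor function name and distance thresholds are hypothetical:

```matlab
% One update of a two-state machine (hypothetical sensor and thresholds):
state = "DRIVE";
distToObstacle = readDistanceSensor();   % placeholder sensor read

switch state
    case "DRIVE"
        if distToObstacle < 0.2          % obstacle within 20 cm
            state = "AVOID";             % switch to avoidance behavior
        end
    case "AVOID"
        if distToObstacle > 0.5          % path is clear again
            state = "DRIVE";             % resume normal driving
        end
end
```

A Stateflow chart adds to this the graphical view, hierarchy, and live state highlighting during simulation.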

Stateflow is also a plug-and-play platform for any control algorithms that you develop using Simulink. It makes it seamless to call and integrate your PID algorithms so they operate within the larger tasks that your robot should achieve. Specifically, if you want hands-on experience piecing together multiple tasks, the mobile robotics training has self-paced lessons that will not only explain how to implement Simulink models on both VEX- and LEGO-based robots, but also cover how to program control algorithms for common competition challenges such as:

- Dead reckoning
- Obstacle avoidance
- Line following
- Path navigation
- Combinations of all the above
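To give a flavor of the first item, dead reckoning for a differential-drive robot boils down to integrating wheel speeds into a pose estimate. Below is a minimal sketch in plain MATLAB; the wheel radius and wheel base are assumed values, not taken from any specific VEX or LEGO build.

```matlab
% Dead reckoning sketch for a hypothetical differential-drive robot.
R = 0.03;    % wheel radius [m] (assumed)
L = 0.12;    % wheel base   [m] (assumed)
dt = 0.01;   % integration time step [s]
pose = [0; 0; 0];          % robot pose [x; y; theta]
wL = 10; wR = 10;          % equal wheel speeds [rad/s] -> straight line
for k = 1:100
    v = R*(wR + wL)/2;     % forward speed from wheel speeds
    w = R*(wR - wL)/L;     % turn rate from wheel speed difference
    pose = pose + dt*[v*cos(pose(3)); v*sin(pose(3)); w];
end
disp(pose)   % straight-line motion: x = v*t = 0.3, y = 0, theta = 0
```

Feeding unequal wheel speeds makes the robot arc, which is why dead reckoning drifts quickly on real hardware: any wheel-speed measurement error accumulates in the integral.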

Getting started programming mobile robots can be a daunting task. Having the right knowledge and a wide array of tools at your disposal can make the difference between qualifying for a competition and even finishing a robotics project on time. Make sure you understand the design of your robot and include the necessary sensors depending on the intent of your build. Use simulations to verify that your programmed algorithms behave as intended before moving to a trial-and-error approach on your hardware. Take advantage of tried-and-true pre-packaged algorithms such as PID controllers to improve the performance of your robots. Understanding and implementing all of the above proficiently will save development time and help you rise to the top of competition rankings. You can always sign up for the complimentary mobile robotics training provided by the student competitions team. It can serve both as a complement to your current robotics skills and a place to get started with robot simulations, or as a starting point for untethering your robots from human input and remote controls.

Looking forward to seeing what type of robots you can unleash – let us know about your ideas!

In today’s article, our guest blogger Connell D’Souza, who already introduced you to app building, will talk about how you can learn to use MATLAB for computer vision for autonomous vehicles.

Deep learning for vision promises quicker and more accurate detections. As expected, student competition teams have jumped right on the bandwagon and have begun including deep learning for vision in their workflows. But where does that leave classical computer vision? Over the last year I have tried to analyze how teams are incorporating deep learning into their workflows, and a lovely example is RoboSub. RoboSub is a competition run by RoboNation that challenges students to develop autonomous underwater vehicles that can perform tasks like object detection, classification, and avoidance. A perfect candidate for deep learning, right? Around 30% of the teams at the competition went down the deep learning route with varying degrees of success. They could set up deep learning networks using transfer learning on popular pre-trained networks like YOLO, AlexNet, GoogLeNet, or VGG-16. One piece of feedback from teams was that setting up these networks was time-consuming for two reasons. Firstly, deep learning has a “black-box” nature. Secondly, large amounts of data need to be collected, labelled, and pre-processed to train and test these networks. Another interesting point to note is that none of the finalists at the competition employed deep learning on their competition vehicles, and the consensus was “Deep learning is the way forward and we are exploring it, but we didn’t have the time to implement it given our constraints”.

Computer vision algorithms have matured well with time, and there is a lot of literature, technical articles and code available. Deep learning on the other hand is a lot younger and still in the exploration stage. As a competition team you should invest resources to investigate and research how deep learning can help you but given the 1-year time constraint to design, manufacture and test a vehicle or a robot, it may make sense to take a leaf out of the RoboSub finalists’ book and stick to classical computer vision until your team has done substantial research on deep learning.

The student competition team at MathWorks put together a training course on computer vision in MATLAB. This course is designed to teach you how to design and deploy computer vision algorithms and contains about 8 hours of self-paced video training material covering key fundamental concepts in image processing and computer vision, such as:

- image registration,
- feature extraction,
- object detection and tracking, and
- point cloud processing.

Most importantly this course is free. All you need to do is fill out a form, download all the files and code along with the video, easy!

I am going to try and give you a high-level overview of the workflow in a typical computer vision system application for autonomous systems and highlight how this training can help you.

The first step in designing any vision system is to import and visualize a video stream. MATLAB allows you to switch easily between working on video files in the prototyping stage or stream in video feed directly from a camera using the Image Acquisition Toolbox. You will find support packages for a variety of cameras that will help you interface with these devices with just one line of code. Once the video is imported the next step is to begin building an algorithm. This usually involves preprocessing like image resizing and filtering to enhance the image quality or remove noise. Even if you are using a deep neural net these are steps that you must take to ensure the image being fed into your network is compliant with its expected dimensions and image type.

Try the snippet below; I managed to catch my colleague Jose Avendano representing the MATLAB and Simulink Robotics Arena.

%% Capture an image from a camera and scale it
vidObj = imaq.VideoDevice('winvideo', 1);   % requires Image Acquisition Toolbox
img = step(vidObj);                         % acquire one frame
img = imresize(img, 0.5);                   % resize the image, not the video object
imshow(img);

An important step for autonomous vehicles is stitching together multiple frames of a video stream to create a picture that will have a wider field of vision than a single frame. Think of your robot or vehicle scanning a wide region to identify what task it needs to perform next. If the field of vision is restricted to a single frame, you might miss something that is in the robot’s blind spot.

When stitching multiple images to create a panorama, you first need to detect features of a common reference object, match them with the corresponding features in the next image, and perform geometric transforms, i.e., register the image. There are quite a few algorithms you can use, like Maximally Stable Extremal Regions (MSER) and Speeded Up Robust Features (SURF), each with their own tunable parameters. The training course contains videos that teach participants to use these algorithms in MATLAB and optimize them for your application. Estimating geometric transforms and feature detection are building blocks for many other computer vision algorithms.

Once the robot can see what its surroundings look like, the next step is to detect objects of interest in your field of vision, like traffic signs, lanes, text characters, etc. Object detection is the most critical computer vision application for autonomous systems, and it is where deep learning is making tremendous progress. This course will teach participants to use and tune object detection methods like blob analysis, template matching, and cascade object detectors to identify an object of interest in an image and report its location. You will learn how to implement important computer vision algorithms in MATLAB, such as the Viola-Jones algorithm, the Hough transform for line detection, and optical character recognition (OCR) for text recognition.

Another important technique is motion detection and tracking; this could be either detecting moving objects in the field of vision or identifying the direction of motion of your vehicle with respect to the environment. You will also want to track an object once it is detected. This training has examples showing you how to use foreground detection and optical flow to detect motion between successive frames in a video stream. Optical flow algorithms like Horn-Schunck, Lucas-Kanade, and Farneback, along with their implementation and tuning parameters, are discussed in detail.

Once the object is detected, you need to track it so that the system is aware the object still exists even if it gets occluded for a short period of time. Remember, with vision, what you see is literally what you get, so you should put in code to make sure there isn’t a threat to your system hiding behind another object or under a shadow. You can use Kalman filters for this, and the course will teach you how.
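As a minimal illustration of the idea (not the course’s implementation), here is a 1-D constant-velocity Kalman filter in plain MATLAB that coasts a track through a brief occlusion, marked by NaN measurements. All noise values are assumed for the sketch.

```matlab
% Minimal 1-D constant-velocity Kalman filter coasting through occlusion.
dt = 1;
F = [1 dt; 0 1];             % state transition for [position; velocity]
H = [1 0];                   % we only measure position
Q = 0.01*eye(2);             % process noise covariance (assumed)
Rm = 0.5;                    % measurement noise variance (assumed)
x = [0; 1]; P = eye(2);      % initial state estimate and covariance
meas = [1 2 3 NaN NaN 6];    % NaN = object occluded, no detection
for z = meas
    x = F*x;  P = F*P*F' + Q;           % predict: coast the track forward
    if ~isnan(z)                         % update only when a detection exists
        K = P*H'/(H*P*H' + Rm);          % Kalman gain
        x = x + K*(z - H*x);             % correct with the measurement
        P = (eye(2) - K*H)*P;
    end
end
disp(x(1))   % position estimate stays on the true track (6)
```

During the two occluded frames the filter simply propagates its motion model, so the track “survives” the occlusion and reacquires the object cleanly when detections resume.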

Finally, aside from all that is discussed above, if you are using stereo vision or LiDARs, this course teaches you to calibrate stereo cameras using the stereo calibration app and reconstruct a scene using stereo vision. The video on point clouds goes into some detail about downsampling, denoising, transforming, and fitting shapes to a point cloud.

So, what are you waiting for? You now know at a high level what setting up a vision system includes, sign-up and learn the fundamentals of Computer Vision in MATLAB! Also, we encourage you to get in touch with us, either using the comments below or sending an email to roboticsarena@mathworks.com.

In this blog post, Sebastian Castro will talk about robot manipulation with MATLAB and Simulink. The previous part discussed kinematics (if you have not read it, we recommend you do), while this part discusses dynamics.


To motivate the importance of low-level robot manipulator control, I want to introduce a couple of engineering archetypes.

- **Robot programmers** usually start with a robot that has controllable joint or end effector positions. If you are a robot programmer, you are probably implementing motion planning algorithms and integrating the manipulator with other software components, such as perception and decision-making.
- **Robot designers** have a goal of enabling robot programmers. If you are a robot designer, you need to deliver a manipulator that can safely and reliably accept joint or end effector commands. You will likely apply some of the control design techniques discussed in this post and implement these controllers on embedded systems.

Of course, nothing is quite as rigidly separated in real life. Chances are that robot manufacturers will provide their own controllers, but may also decide to expose control parameters, options, or maybe even a direct interface to the actuator torques.

To recap the previous part, kinematics maps the joint positions of a robot manipulator to the positions and orientation of a coordinate frame of interest – usually, the end effector. Dynamics, on the other hand, relates the joint forces and torques to the resulting joint positions, velocities, and accelerations.

To move from kinematics to dynamics, we need more information about the manipulator’s mechanics. Specifically, we need the following inertial properties:

- **Mass**: Newton’s second law relates mass to force and linear acceleration.
- **Inertia**: This is a 3×3 matrix, commonly called the inertia tensor, relating torque and angular acceleration. Since this matrix is symmetric, it can be defined with 6 parameters:
  - 3 diagonal elements, or **moments of inertia**, which relate torque about an axis with acceleration about that same axis.
  - 3 off-diagonal elements, or **products of inertia**, which relate torque about an axis with acceleration about the other two axes.
- **Center of mass**: If the center of mass is not located at the body coordinate frame we defined, we need to apply the parallel-axis theorem to convert the rotations about the center of mass to rotations about our coordinate frame of interest.
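As a small worked example of the last point, the parallel-axis theorem can be applied directly in MATLAB. The mass, inertia, and offset values below are illustrative only, not taken from any real robot.

```matlab
% Parallel-axis theorem sketch: shift an inertia tensor from the center
% of mass to another body frame. All values are assumed for illustration.
m = 2.0;                          % body mass [kg] (assumed)
Icom = diag([0.1 0.1 0.05]);      % inertia tensor about the center of mass
d = [0.05; 0; 0.1];               % offset from new frame to center of mass [m]
Inew = Icom + m*((d'*d)*eye(3) - d*d');
disp(Inew)   % still a symmetric 3x3 matrix, as any inertia tensor must be
```

Note that the result is always symmetric, which is also a quick sanity check when you import inertial data from a URDF or CAD file.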

Typically, you will import a robotics.RigidBodyTree from an existing manipulator description – for example, a URDF file. In this case, the inertial properties will be automatically placed in each robotics.RigidBody that comprises the tree.

A robot manipulator controller can contain the following components.

- **Feedback:** Uses desired and measured motion to compute joint inputs. This usually involves a control law that minimizes the error between the desired and measured motion.
- **Feedforward:** Uses desired motion only to compute joint inputs. This often – but not necessarily – involves a model of the manipulator mechanics to calculate an open-loop input.

In our video “Controlling Robot Manipulator Joints”, we explore two different examples of joint controllers, featuring the 4-DOF ROBOTIS OpenManipulator platform. You can also download the example files from the MATLAB Central File Exchange.

**[Video]** MATLAB and Simulink Robotics Arena: Controlling Robot Manipulator Joints

First, **inverse kinematics (IK)** is used to convert the reference end effector position to a set of reference joint angles. The controller then operates exclusively in the **configuration space** – that is, on joint positions.

- The **feedforward** term uses inverse dynamics on our manipulator model. This calculates the required joint forces/torques such that the manipulator follows the desired motion, as well as compensates for gravity.
- The **feedback** term uses PID control. Each joint (4 revolute joints + gripper) has an independent controller that minimizes the error between desired and measured motion.

For smooth motion, we typically want a closed-form trajectory such as a curve equation. This is because inverse dynamics requires positions, velocities, and accelerations to calculate required joint forces/torques. So, having a differentiable reference trajectory makes this much easier.
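For instance, a cubic point-to-point trajectory gives closed-form position, velocity, and acceleration, ready to feed an inverse-dynamics feedforward term. The start/end angles and duration below are assumed for illustration.

```matlab
% Cubic point-to-point trajectory sketch with zero start/end velocity.
q0 = 0; qf = pi/2;  T = 2;         % start/end joint angle [rad], duration [s]
t = linspace(0, T, 201);
% Coefficients of q(t) = q0 + a2*t^2 + a3*t^3 for the boundary conditions
a2 = 3*(qf - q0)/T^2;
a3 = -2*(qf - q0)/T^3;
q   = q0 + a2*t.^2 + a3*t.^3;      % position
qd  = 2*a2*t + 3*a3*t.^2;          % velocity  (analytic derivative)
qdd = 2*a2 + 6*a3*t;               % acceleration
disp([q(1) q(end) qd(1) qd(end)])  % endpoints: q0, qf, 0, 0
```

Because the velocity and acceleration come from analytic derivatives rather than numerical differencing, they are smooth and noise-free, exactly what inverse dynamics wants.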

Theoretically, inverse dynamics should be enough to control a robot arm. However, there are factors such as joint mechanics (stiffness, damping, friction, etc.), unmeasurable disturbances, sensor/actuator noise, or even numerical error, that can easily impact the robustness of a fully open-loop controller. Therefore, an additional feedback compensator is always recommended.

While the feedforward and feedback control portions are relatively easy to implement and computationally inexpensive, this controller structure relies on solving IK. As we discussed in the previous part, the Robotics System Toolbox implementation uses a numerical solution and therefore can require significant computation. You can address this by providing a good initial guess (usually the previous measurement), limiting the maximum number of iterations, or switching to an analytical IK solution.

This second controller performs the control in the **task space** – that is, on the end effector positions and orientations. In addition, it avoids the need for inverse kinematics by using the **geometric Jacobian.**

The geometric Jacobian is a function of the robot configuration **q** (joint angles/positions), which is why it is often denoted as **J(q)**. The Jacobian is a mapping from the joint velocities to the world velocities of a coordinate frame of interest. However, with a bit of math you can find that it also maps the joint forces/torques to world forces/torques. I found this blog post to be a helpful reference.
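As a concrete sketch of this mapping, consider a 2-link planar arm with assumed link lengths (this is a hand-derived toy Jacobian, not the full 3-D geometric Jacobian of a real manipulator). The transpose of J(q) converts a desired end effector force into joint torques.

```matlab
% Jacobian force mapping for a 2-link planar arm (link lengths assumed).
L1 = 0.3; L2 = 0.25;              % link lengths [m] (assumed)
q  = [pi/4; pi/6];                % joint angles [rad]
% Planar Jacobian: maps joint velocities to end effector (x, y) velocity
J = [-L1*sin(q(1)) - L2*sin(q(1)+q(2)),  -L2*sin(q(1)+q(2));
      L1*cos(q(1)) + L2*cos(q(1)+q(2)),   L2*cos(q(1)+q(2))];
F   = [0; -5];                    % desired downward force at the tip [N]
tau = J'*F;                       % joint torques that realize the tip force
disp(tau)
```

This tau = J(q)'*F relationship is exactly what the task-space controller uses to turn end effector force commands into joint torque commands.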

- The **feedforward** term in this controller only does one thing: it compensates for gravity.
- The **feedback** term performs PID control on the XYZ positions of the end effector (we ignore orientation here, but you really shouldn’t!) to calculate desired forces at the end effector coordinate frame. Then, the Jacobian converts the control outputs to joint torques and forces.

Below is a screenshot of the Simulink model for this example controller. Unlike the schematic above, the model contains other realistic artifacts such as filters, rate limiters, saturation, and a decoupled gripper controller with basic logic. You can download this model from the MATLAB Central File Exchange.

Once you have a model of your manipulator, there are many tools in MATLAB and Simulink that can help you design joint controllers. These include

- PID Tuner for single-input, single-output (SISO) compensators
- Control System Designer and Control System Tuner for multi-input, multi-output (MIMO) systems
- MPC Designer for model-predictive controllers

*PID Tuner output on the “shoulder” joint of the ROBOTIS OpenManipulator model*

Traditional control design relies on linearization, or finding a linear approximation of a nonlinear model about a specific operating point – for example, the “home”, or equilibrium, position of the manipulator. A controller designed about an approximate linear region can become less effective, and potentially unstable, as the robot state deviates from that region.

Nonlinear control techniques can address this issue by considering the measured state of the system (in our case, joint or end effector positions). Feedforward techniques like inverse dynamics, or calculating the geometric Jacobian on the fly, can ensure that the controller accounts for nonlinearities in the model. Another popular technique is gain scheduling, which can be used for both traditional controllers and MPC controllers.

Another alternative is to employ model-free techniques such as:

- **Optimization:** You can optimize control parameters using simulation, which is enabled by Simulink Design Optimization. While optimization makes no guarantees on stability, it lets you automatically tune a breadth of parameters such as gains, control effort/rate limits, thresholds, etc., which could lead to good results – especially on highly nonlinear systems.
- **Machine learning:** Reinforcement learning, or automatically learning by trial and error, is a common technique being employed for robot manipulation. For example, this paper and video show deep reinforcement learning – in other words, learning the parameters of a deep neural network using reinforcement learning techniques.

Now you’ve seen an overview of kinematics and dynamics for robot manipulator design. I hope this was a useful introduction to the language in this domain, some common techniques used in practice, and areas where MATLAB and Simulink can help you design and control robots.

We hope that Simulink can help you during your design phase as you explore different architectures, integrate supervisory logic, perform tradeoff studies, and more. Also, recall that Simulink lets you automatically generate standalone C/C++ code from your control algorithms, so they can be deployed to hardware or middleware such as ROS.

If you want to see more material on robot manipulation, or other topics in robotics, feel free to leave us a comment or email us at roboticsarena@mathworks.com. I hope you enjoyed reading!

– Sebastian

In this blog post, Sebastian Castro will talk about robot manipulation with MATLAB and Simulink. This part will discuss kinematics, and the next part will discuss dynamics.


Let’s start with a quick comparison of kinematics and dynamics.

- **Kinematics** is the analysis of motion without considering forces. Here, we only need geometric properties such as lengths and degrees of freedom of the manipulator bodies.
- **Dynamics** is the analysis of motion caused by forces. In addition to geometry, we now require parameters like mass and inertia to calculate the acceleration of bodies.

Robot manipulators are often composed of several **joints**, each providing a **revolute** (rotating) or **prismatic** (linear) **degree of freedom** (DOF). The joint positions can therefore be controlled to place the end effector of the robot in 3D space.

If you know the geometry of the robot and all its joint positions, you can do the math and figure out the position and orientation of any point on the robot. This is known as **forward kinematics (FK)**.
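For a simple 2-link planar arm, forward kinematics is just a couple of trigonometric terms. The link lengths below are assumed for illustration; real manipulators chain many such transforms, which is what a rigid body tree does for you.

```matlab
% Forward kinematics sketch for a 2-link planar arm (link lengths assumed).
L1 = 0.3; L2 = 0.25;              % link lengths [m] (assumed)
% Joint angles in -> end effector (x, y) position out
fk = @(q) [L1*cos(q(1)) + L2*cos(q(1)+q(2));
           L1*sin(q(1)) + L2*sin(q(1)+q(2))];
p = fk([0; pi/2]);                % first link along x, second bent 90 degrees
disp(p)                           % end effector at [0.3; 0.25]
```

Note that FK is a plain function evaluation: one configuration in, one pose out. Inverting this map is what makes IK, discussed next, the harder problem.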

The more frequent robot manipulation problem, however, is the opposite. We want to calculate the joint angles needed such that the end effector reaches a specific position and orientation. This is known as **inverse kinematics (IK)**, and is more difficult to solve.

Depending on your robot geometry, IK can either be solved analytically or numerically.

- **Analytic solutions** mean that you can derive, in closed form, an expression for the joint positions given the desired end effector position. This is beneficial because you do all the work offline and solving IK will be fast. As with everything in engineering: if you have an exact model of your system, you should take advantage of it!
- **Numerical solutions** are generally slower and less predictable than analytic solutions, but they can solve harder problems (we expand on this below). However, these solutions introduce uncertainty in the form of initial conditions, optimization algorithm choice, or even random chance. So, you may not get the answer you want.

The 3D pose of your end effector can be specified by 6 parameters: 3 for position and 3 for orientation. Technically, you can derive an analytical solution if there are up to 6 nonredundant joints in your manipulator, assuming the desired position is reachable.

Robot designers have been clever about ensuring their manipulators have high degrees of freedom for controllability, while still ensuring analytical IK solutions are possible. For example, I have been taking the Udacity Robotics Software Engineer Nanodegree, where one of the projects involves analytic IK for a KUKA KR210 6-DOF manipulator. This manipulator has a spherical wrist that decouples the position and orientation analytical IK problems. You can find my writeup on GitHub.

So… why would you choose a numerical solution? Here are a few ideas.

- Your manipulator has redundant degrees of freedom (always the case with 7 or more)
- You don’t want to derive the math and have the computational resources for a numeric solution
- Your target position is not valid, but you still want to get as close to it as possible
- There are multiple, or even infinite, analytic solutions
- You want to introduce multiple, complex constraints

*Cases where there are multiple solutions, which are relatively easy to handle with analytical IK.
(Left) IK has exactly two solutions – “over” or “under”.
(Right) IK has infinite solutions since any rotation of the base is valid.*

*Complex manipulation cases which are likely candidates for numerical solution.
(Left) 7-DOF manipulator can position the end effector with multiple valid solutions.
(Right) Example of position constraints between two coordinate frames on the manipulator. *

To summarize, solving IK analytically is fast, accurate, and reliable. However, as you move towards more difficult problems, numerical solutions are often easier to implement, or even necessary.

Now, you hopefully have a basic idea of why manipulator kinematics are important, and what kind of real-world problems they can solve. There are two built-in ways you can work with robot manipulator models in MATLAB and Simulink.

- **How:** Create a rigid body tree object
- **When to use:** Solving forward and inverse kinematics and dynamics, extracting mechanical properties (Jacobian, mass matrix, gravity torques, etc.)

- **How:** Create a Simscape Multibody model
- **When to use:** System-level dynamic simulation, integration with physical models of actuators, contact mechanics, etc.

Both rigid body tree objects and Simscape Multibody models can be created from scratch, or imported from Unified Robot Description Format (URDF) files. In addition, Simscape Multibody can also import 3D models from common CAD software. My colleague Christoph Hahn wrote a blog post on this.

Starting with release 2018a, Robotics System Toolbox includes a Manipulator Algorithms Simulink block library. These blocks allow you to perform kinematic and dynamic analysis on rigid body tree objects from Simulink, which makes the two representations above work together for system-level simulation and control design applications. You will learn more about this in Part 2.

… and yes, these blocks generate C/C++ code so you can deploy standalone algorithms outside of MATLAB and Simulink.

Robotics System Toolbox provides two numerical solvers for manipulator inverse kinematics:

- **Inverse Kinematics:** Enforces joint limits and lets you supply relative weights for each position and orientation target.
- **Generalized Inverse Kinematics:** Allows you to add multiple, and more complex, constraints such as relative position between coordinate frames, aiming at certain objects, or time-varying joint limits.

Below is some example MATLAB code and an animation of generalized IK on a model of a Rethink Sawyer, which has a 7-DOF arm. Here, we are setting a constraint on the end effector position, while simultaneously enforcing that the end effector points towards a separate target point near the ground.

sawyer = importrobot('sawyer.urdf', 'MeshPath', ...
    fullfile(fileparts(which('sawyer.urdf')),'..','meshes','sawyer_pv'));
gik = robotics.GeneralizedInverseKinematics('RigidBodyTree',sawyer, ...
    'ConstraintInputs',{'position','aiming'});
% Target Position constraint
targetPos = [0.5, 0.5, 0];
handPosTgt = robotics.PositionTarget('right_hand','TargetPosition',targetPos);
% Target Aiming constraint
targetPoint = [1, 0, -0.5];
handAimTgt = robotics.AimingConstraint('right_hand','TargetPoint',targetPoint);
% Solve Generalized IK
[gikSoln,solnInfo] = gik(sawyer.homeConfiguration,handPosTgt,handAimTgt)
show(sawyer,gikSoln);

*What other constraints can you think of to make the motion smoother?*

Once you’ve tested your IK solution, MATLAB and Simulink allow you to explore next steps towards building a complete robotic manipulation system, such as:

- Integrating IK with a simulation of the robot dynamics
- Adding other algorithms, such as supervisory logic, perception, and path planning
- Automatically generating standalone C/C++ code from your algorithms and deploying to hardware or middleware such as ROS

We discuss this in our video “Designing Robot Manipulator Algorithms “, which features the 4-DOF ROBOTIS OpenManipulator platform. You can download the example files from the MATLAB Central File Exchange.

**[VIDEO] **MATLAB and Simulink Robotics Arena: Designing Robot Manipulator Algorithms

Many of you are likely developing algorithms for existing robots that already have built-in joint torque controllers. From this perspective, you can assume that the robot joints will adequately track any valid setpoint you provide.

Kinematics alone can be useful to design motion planning algorithms, as well as performing analysis based solely on robot geometry – for instance, workspace analysis or collision avoidance.

In the next part, we’ll talk more about manipulator dynamics and how this facilitates lower-level control design applications with MATLAB and Simulink.

Feel free to leave us a comment or email us at roboticsarena@mathworks.com. I hope you enjoyed reading!

– Sebastian

I asked Connell D’Souza, today’s guest blogger, that exact question. His answer was short: “REPETITION. INTERACTION. AUTOMATION.” In the following, Connell will share his experience from a past job and illustrate some options we have. Enjoy!


Most of us use a multitude of apps every day for almost everything from waking up in the morning to watching cat videos at 2 am. But how do apps apply to student competitions? Being an alumnus of a student competition team, something that I learnt over time is that the easiest way to help your team do better every year is to not re-invent the wheel. It is easier to define your goals and work on making upgrades rather than spending months redesigning every component and here is one way in which apps can help.

MATLAB apps are interactive tools to perform technical computing tasks. For example, you can set up an app that does design calculations based on design constraints provided interactively by the user, a calculator of sorts. These apps can be kept within the team and used for years, allowing future generations of your team to concentrate on more involved technology upgrades. Coming back to my point on usability: when code is passed down through the years, the hardest part is making sure someone using the code in the future is still able to understand what every variable and function in your code does.

MATLAB has a few tools available for you to build your apps, each with its own benefits. You can put almost any kind of MATLAB code into an app. Let’s jump right in to the high-level workflow involved with app building in MATLAB.

MATLAB gives you a lot of flexibility in terms of functionality that can be used to build apps. You could build an app with MATLAB functionality or even Simulink models. My recommendation for picking code that should be put in an app is REPETITION, INTERACTION and AUTOMATION. A good indicator of code that can be wrapped into an app is code that you foresee will be run multiple times and requires user interaction to provide inputs, manipulate parameters and post-process results. Automate repetitive interactive code using apps. An example could be this piece of code below.

%% Clean Up
close all; clear; clc;
%% Create Video Input Object
videoInputNumber = 1;
vidObj = videoinput('winvideo', videoInputNumber);
preview(vidObj); % View the video stream
%% Initialise variables
numOfPictures = 5;
waitTime = 1;
mode = 1;
mkdir 'Session Data'
%% Click images
if mode == 1
    img = clickSingleImage(vidObj);
    imshow(ycbcr2rgb(img));
elseif mode == 2
    img = clickBurst(vidObj, numOfPictures, waitTime);
    imshow(ycbcr2rgb(img));
end
%% Delete Video Object
delete(vidObj)

This script uses a few tunable parameters like *numOfPictures*, mode, and *waitTime* to click either single images or a burst of images. Now someone who is not familiar with the code will have a hard time understanding the parameters to change to use this script effectively. When you scale this up to code that contains multiple tunable parameters, this problem gets magnified. There could also be the case where the user changes a non-tunable parameter, thus introducing an unwanted bug. Wrapping such code in an app is an easy way to expose users to only tunable parameters as well as streamlining their experience with using your code.

Now that you have selected your functionality, the next step is to build the app. MATLAB has a couple of different workflows, a programmatic and an interactive workflow. Both these workflows come with their own set of advantages. Let’s look at them in a little detail.

In the interactive workflow you can lay out components interactively and only write code for the functionality. App Designer is a tool that can help you do this. It was introduced in R2016a and is a rich development environment to build apps. App Designer provides a tightly integrated environment to lay out a set of ready to use components like buttons, checkboxes, edit fields, etc. and then write callbacks for these components within the same environment, much like an app to build apps. App inception anyone?

A good use case for App Designer is when you are not particularly concerned with using readymade components. The thing that I like about App Designer is when the components are laid out, the integrated MATLAB editor is automatically populated with the code that defines that component, which means, I as the user, only add the functionality for that component. You can watch this video to see how it took me under 10 minutes to put an app together with App Designer.

**[VIDEO] **MATLAB and Simulink Robotics Arena: Building Apps with MATLAB and App Designer

GUIDE is another tool that supports the interactive workflow. GUIDE has been around for a while and is similar to App Designer in that it also provides a drag-and-drop environment for laying out the components of your app, the difference being that you go back to the MATLAB editor to code the app’s behavior. While GUIDE continues to be a supported workflow, App Designer is the future of building apps with MATLAB. So, if you plan to build new apps in MATLAB, App Designer is our recommended tool. For a complete comparison between App Designer and GUIDE, you can view this page from the product documentation.

The programmatic workflow gives users the most flexibility in designing apps. However, it comes at the expense of having to write your own code for every graphical component along with the functionality. Apps designed this way can be highly customized, so this is the way to go if you intend to build a complex application with many interdependent components. A good example of an app built using this workflow is shown in this video.
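To give a flavor of the programmatic workflow, here is a minimal sketch of an app built entirely in code; the layout, component names, and callback are purely illustrative:

```matlab
% Minimal programmatic app: one figure, one label, one button.
% Every component and its behavior are written by hand in code.
fig = uifigure('Name', 'Image Capture', 'Position', [100 100 300 160]);
lbl = uilabel(fig, 'Text', 'Ready', 'Position', [120 100 100 22]);

% The callback updates the label when the button is pushed.
btn = uibutton(fig, 'push', ...
    'Text', 'Capture', ...
    'Position', [100 40 100 30], ...
    'ButtonPushedFcn', @(src, event) set(lbl, 'Text', 'Captured!'));
```

Every additional component means more layout and callback code, which is exactly the flexibility (and the cost) of this workflow.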

**[VIDEO] **MATLAB and Simulink Robotics Arena: Building Interactive Design Tools

Zachary built an app to help him and his AIAA Design/Build/Fly team design their model aircraft. As you can see, the app is quite complex and includes a visualization pane that can be used to interactively manipulate the shape of the aircraft and perform the necessary aerodynamic stability calculations. There is also functionality to import Digital DATCOM files and export the airplane design to a simulation environment, which makes this a robust, interactive design tool for model aircraft – and again, all built in MATLAB. This is a great example of how student competition teams can build apps that serve the team for generations to come.

Once the app is built, you can either share the MATLAB files directly with users, allowing for future editing of the app, or package the app from App Designer

or from the MATLAB toolstrip using the Package App option in the Add-Ons dropdown.

Packaging an app ensures that all the files it needs are bundled together automatically. This is the way to go if you have many dependent files or your users are not familiar with managing the MATLAB search path. Once the package is installed, MATLAB automatically adds all dependent files to the path and adds the app to the Apps tab.
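If you prefer to script these steps, the matlab.apputil functions let you package and install an app from the command line; the project and package file names below are placeholders:

```matlab
% Package an app described by an app project file (.prj).
% 'ImageCaptureApp.prj' is a placeholder name for illustration.
matlab.apputil.package('ImageCaptureApp.prj');

% Install the resulting .mlappinstall file; the app then appears in
% the Apps tab with its dependent files managed automatically.
matlab.apputil.install('ImageCaptureApp.mlappinstall');

% Confirm the installation by listing installed apps.
matlab.apputil.getInstalledAppInfo
```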

You can also add documentation to these apps like you would for any custom toolboxes.

- Apps are a nice way to make your code interactive and user friendly.
- There are a few different workflows that you can use depending on the complexity of your application and the programming skills of the developer. There is something for everyone.
- Apps are a great way of sharing interactive code with future generations of your team.

Today’s cars are far more complex than the vehicles we drove even 10 years ago, and with the move to advanced driver safety features, electric vehicles, and autonomous driving, there are no signs that this trend is slowing down. That said, car makers today aren’t just tapping the brains of seasoned... read more >>

]]>That said, car makers today aren’t just tapping the brains of seasoned engineers in Detroit, or Munich, or Tokyo. They are reaching deep into universities around the world to find the best student talent, because experience has shown us that these budding engineers are playing a vital role in shaping the car designs of tomorrow.

Think of it: 40 years ago many drivers had no trouble changing the oil, popping in an alternator, or swapping in new spark plugs. While the archetype of the hobbyist tinkering with his muscle car in his parents’ garage is still quite alive, today it’s more likely to be a team of engineering students competing to build the prototype of a single-seat electric race car.

© Formula Student Germany, Photographer: Shidhartha, Photo Link

Through global competitions such as Formula Student, student engineers are acquiring hands-on skills that are directly transferable to real-world automotive design. Along the way, they’re also learning important lessons about teamwork, collaboration across multiple engineering disciplines, project management, budgeting, and presentation skills – lessons that they will apply professionally no matter where they end up.

In fact, Formula Student has become a recognized proving ground for young automotive engineering talent. In part that’s because of the size of the competition – approximately 550 active teams each with as many as 30 members. Formula Student is also the only competition in the world where teams start with nothing and then must conceptualize, design, build, test and race their own formula-style vehicles, and then “show” their work to a panel of judges who determine how well each team explains the logic behind the design process.

You might ask how well this model works for the car industry and what value it actually delivers. As a Formula Student team member myself (2006-2008), I can personally attest that the skills I learned serve me well here at MathWorks 10 years later. It’s also instructive to note that the “Guinness Book of World Records” holder for fastest electric car acceleration from 0-100 km/h is a Formula Student team out of ETH Zurich, Switzerland (1.513 seconds) – and that the record was previously held by another Formula Student team, the GreenTeam, out of Stuttgart, Germany. Not bad for a bunch of kids.

[Video] AMZ – World Record! 0-100kph in 1.513 seconds

As a Formula Student corporate sponsor, MathWorks provides student teams with access to our MATLAB and Simulink computational software tools. The tools help in two ways. First, they allow students to simulate their designs and accelerate prototyping, which reduces the number of physical models they must produce. This lets them try many more experiments while still getting their projects across the finish line faster. Second, MATLAB and Simulink are a de facto industry standard used by nearly every car maker and automotive systems supplier, which means that student teams are gaining practical, hands-on experience with the same technology they will use in their professional lives.

Car makers know this about these students. They recruit heavily from the ranks of Formula Student graduates, who can now be found at almost every recognized vehicle brand, including exotic makes such as Ferrari and McLaren. In fact, the automotive industry has been hiring Formula Student graduates throughout the 20 years since the competition was founded.

MathWorks also supports student competitors by posting an instructional video podcast series on our MATLAB and Simulink Racing Lounge site, where students (and engineers in industry alike) can further hone their design skills. And we’re not alone: virtually every company with ties to the automotive industry sponsors student competitions in some shape or form. It’s a virtuous circle, and through MathWorks’ ongoing support of student competitions around the world, I’m happy to be able to give back as much as I received.

]]>Today, I am happy to introduce Andrea Casadio, a junior mechanical engineer and first-time guest on this blog. Andrea is going to describe his thesis work at Politecnico di Torino, in which he developed Simscape™ libraries for vehicle modeling. Thank you Andrea for providing the community with your work on... read more >>

]]>Thank you Andrea for providing the community with your work on **MATLAB Central FileExchange**.

– –

When I studied vehicle dynamics at university, I spent a lot of time modeling in Simulink^{®}. In essence, I created models based on fundamental equations, trying to mimic actual systems. This approach became increasingly time consuming as I adopted more advanced system modeling approaches. Ultimately, I spent more time building vehicle models than developing and optimizing simulations.

At this point, I discovered Simscape. Simscape lets me speed up the creation of physical system models within the Simulink environment. A single Simscape block typically provides the functionality of a whole system of Simulink blocks.

Anyway, describing the pros and cons of Simscape is beyond the scope of this post. So, let’s get to the work itself!

At the beginning of my project, I discovered that the default blocks of the Simscape Driveline library allow the simulation of longitudinal behavior only. The reason is that the tire blocks provided are based on mathematical models that only take into account the longitudinal forces exchanged between tire and road. Fortunately, MathWorks offers functionality that allows you to create custom components using the Simscape language.

For this reason, the first target was to develop customized tire blocks based on more involved mathematical models, like Pacejka ’89 and ’96, that also take into account the interaction between lateral and longitudinal forces.

Fig. 1 – Example of a customized tire block

Lateral and longitudinal forces are functions of lateral and longitudinal tire slip, which depend on translational and rotational speed of the wheel. To develop such kinds of blocks it was necessary to use Simscape language to define:

- Conserving ports to transfer speed and force information;
- Declaration of domains, variables, parameters and equations.

Conserving ports are the interface with other Simscape components. In the example shown above, “HX” and “HY” are conserving ports defined on the mechanical translational domain, and “A” is a conserving port defined on the mechanical rotational domain.
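To illustrate the structure of such a component, here is a simplified Simscape language skeleton; the component name, parameters, and variables are illustrative, not Andrea’s actual library code:

```matlab
component TireSketch
% Simplified tire component skeleton with one rotational and two
% translational conserving ports (all names are illustrative).
  nodes
    A  = foundation.mechanical.rotational.rotational;       % Wheel axle
    HX = foundation.mechanical.translational.translational; % Longitudinal hub
    HY = foundation.mechanical.translational.translational; % Lateral hub
  end
  parameters
    rw = {0.3, 'm'};   % Rolling radius
    Fz = {3000, 'N'};  % Vertical load (a constant block parameter)
  end
  variables
    Fx = {0, 'N'};     % Longitudinal tire force
    Fy = {0, 'N'};     % Lateral tire force
  end
  equations
    % The Pacejka force laws would go here, relating Fx and Fy to the
    % slip quantities computed from the port speeds and rw.
  end
end
```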

To validate the tire model, longitudinal and lateral motion was applied to it, producing the plots shown in Fig. 2 and Fig. 3.

Fig.2 – Lateral force as a function of lateral slip at different vertical loads – Pacejka ’89 model

Fig.2 displays the typical behavior of a tire when the longitudinal slip is zero (no braking or acceleration). The odd (antisymmetric) trend is a consequence of the Pacejka model. If you look at the block in Fig.1, there is no input for a vertical load variable; the vertical load is instead introduced as a block parameter.
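The curves in Fig.2 follow Pacejka’s “Magic Formula”. As a rough sketch of its general shape in plain MATLAB – with made-up B, C, D, E coefficients rather than the fitted, load-dependent ’89 values – consider:

```matlab
% Pacejka "Magic Formula": F = D*sin(C*atan(B*x - E*(B*x - atan(B*x))))
% B (stiffness), C (shape), D (peak), and E (curvature) below are
% illustrative values, not the coefficients of the '89 model.
B = 10; C = 1.3; D = 3000; E = 0.97;

alpha = linspace(-0.3, 0.3, 200);  % Lateral slip angle (rad)
Fy = D * sin(C * atan(B*alpha - E*(B*alpha - atan(B*alpha))));

plot(alpha, Fy), grid on
xlabel('Lateral slip \alpha (rad)'), ylabel('Lateral force F_y (N)')
```

Note that Fy is an odd function of the slip: negating alpha flips the sign of the force, which is the antisymmetric trend visible in Fig.2.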

Fig.3 – Lateral force as a function of longitudinal force at different lateral slip angles – Pacejka ’89 model

By applying different inputs to the block of Fig.1, it is possible to obtain the graph in Fig.3, which displays how the lateral force between road and tire changes as a function of the longitudinal force for different lateral slip conditions (alpha). When the longitudinal force is zero, the lateral force is maximal. This makes sense, because tire grip in corners is highest when the driver applies neither brakes nor throttle. The opposite condition is when the longitudinal force is maximal; in that case, the lateral force is zero. Even by common sense, you may agree that it is not a good idea to apply too much brake or throttle in corners.

As with the tires, starting from a 3-degree-of-freedom (DOF) model of a vehicle, it was possible to develop a block describing the vehicle body dynamics:

Fig.4 – 3 DOF Vehicle model

Here is a short description of the 3 DOF Vehicle model (Fig. 4):

- HXFR, HXFL, HXRR, HXRL are conserving ports defined on the mechanical translational domain. With these four connections it is possible to connect the vehicle body to other blocks (like tires), but only in the __longitudinal direction__. There is one port for every tire (FR means front right, RL means rear left, etc.)
- HYFR, HYFL, HYRR, HYRL are conserving ports defined on the mechanical translational domain. With these connections it is possible to connect the vehicle body to other blocks (like tires) in the __lateral direction__.
- NFR, NFL, NRR, NRL are physical signal outputs representing the vertical loads. These ports make the block connectable with the one shown in Fig.1. The 3-DOF model doesn’t take load transfer during longitudinal or lateral acceleration into account; each vertical load is considered a constant value, depending on the static load distribution of the vehicle without any slope.
- ST is the physical signal input representing the steering angle.
- PX, PY and YAW are three outputs representing the position along x, the position along y, and the yaw angle. Yaw is the rotation of the vehicle about the vertical axis through its center of gravity (COG).

[Click on image to enlarge]

Fig.5 – A complete 3-DOF project – 4WD model with 3 differential

Systems like the one in Fig.5 allow you to model vehicle behavior with:

- Input: steering wheel angle (ST);
- Output: coordinates of the center of mass and yaw angle (PX, PY, YAW).

To validate these blocks, four different configurations of a 4WD car with three differentials were defined, see Fig.6:

- All differentials open
- Central differential locked
- Central and rear differential locked
- All differentials locked

Fig.6 – The four different configurations used to simulate the 3-DOF model

By saving the center-of-mass coordinates to the MATLAB workspace during the simulation, it was possible to plot the trajectory of the vehicle’s center of mass (see Fig.7). As expected, this plot shows increasing understeer as more differentials are locked.

Fig.7 – Vehicle trajectories at different working conditions

Modeling vehicles with the previously defined custom blocks offers a cleaner representation of the system than plain Simulink models. However, it was not straightforward to define the vehicle dynamics block from the equations of the physical system. No worries: as Christoph pointed out, I have shared the material with you on MATLAB Central FileExchange.

An alternative is to use MathWorks’ multibody simulation tool, Simscape Multibody. Models look like the one shown in Fig.8, with its graphical representation in Fig.9.

[Click on image to enlarge]

Fig. 8 – A 6-DOF Vehicle model built with Simscape Multibody

Fig.9 – A representation of the 6-DOF vehicle model

In the end, admittedly, it is hard to give compact advice on whether to use Simscape or Simscape Multibody. In brief, Simscape Multibody automatically provides a graphical representation of your model as you build it. It also allows you to model contact – find here a detailed introduction to contact modeling. In exchange for these additional features, it requires more CPU time to solve the system equations than Simscape does.

Let me refer you to a previous article in the Racing Lounge focusing on vehicle modeling. In particular, it thoroughly discusses the pros and cons of the Simscape Multibody modeling approach. And even better, all models are provided on the MATLAB Central FileExchange.

If there is one key takeaway of my work, it is this: There are many opportunities to simulate vehicle models and investigate the influence of different suspension parameters. No matter whether you are working in Simulink, Simscape or Simscape Multibody, parameters can always be modified via the MATLAB workspace.
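As a sketch of what that looks like in practice, a parameter study can be driven entirely from a script; ‘vehicleModel’ and ‘springStiffness’ are placeholder names for illustration:

```matlab
% Sweep a suspension parameter from the MATLAB workspace and rerun the
% simulation for each value. The model is assumed to read the variable
% 'springStiffness' from the base workspace (placeholder names).
stiffnessValues = [20e3, 30e3, 40e3];  % N/m
results = cell(1, numel(stiffnessValues));

for k = 1:numel(stiffnessValues)
    springStiffness = stiffnessValues(k); %#ok<NASGU> % Read by the model
    results{k} = sim('vehicleModel');     % Returns a SimulationOutput
end
```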

This work was an interesting opportunity to see the great potential of MATLAB when it comes to vehicle modeling. There are certainly many possible future extensions, for example further tire models that consider load transfer, or the variation of the characteristic wheel angles during suspension travel. I would love to hear your feedback about my work.

Thank you MathWorks for the opportunity to publish in the racing lounge blog!

]]>