Autonomous Systems

Design, develop, and test autonomous systems with MATLAB

Practical Challenges in Deploying Autonomy to Offroad Vehicles

In this MathWorks fireside chat, MathWorks Robotics Industry Manager Dr. You Wu sat down with two distinguished leaders in the offroad/off-highway autonomy field, Prof. Matthew Travers of the Carnegie Mellon University (CMU) Robotics Institute and Dr. Mohammad M. Aref of Cargotec-Hiab, to hear about their experiences deploying autonomy in offroad environments. They discussed the tradeoff between optimality and practicality in algorithms, modular systems, verification of safety, interaction between humans and autonomous machines, and much more.

This blog is a partial excerpt from the fireside chat. To listen to the full talk, use this link.

 

About the speakers

Prof. Matthew Travers is a systems faculty member at the CMU Robotics Institute in Pittsburgh, Pennsylvania. Prof. Travers has been the Principal Investigator for many autonomy projects. Among them, he led the CMU team in the DARPA Subterranean Challenge (2021), where they successfully deployed teams of mobile robots into underground tunnels without maps, GPS, or prior knowledge. He also led the CMU team in the DARPA RACER program (2023), where they deployed an autonomous jeep to drive in the desert at high speed.

Dr. Mohammad M. Aref is currently with the Autonomous Technologies Team of Cargotec-Hiab as the Technical Lead, Robotics & AI. He helped shape the creation of several Cargotec products that benefited from mechatronic design, robotics, and AI. Prior to this role, he collaborated with numerous Nordic OEMs and engaged in international research projects with renowned organizations such as CERN, GSI Helmholtz, and ITER. These collaborations allowed him to engage in the cross-disciplinary design of intricate intelligent systems. His Ph.D. is in the field of mobile manipulation by heavy-duty machinery, and his MSc is in mechatronics. Dr. Aref has contributed to 65 scientific publications and patents. About 70% of all goods delivered around the world are, at some point, carried by Cargotec products. Dr. Aref contributed to the development of several of Cargotec’s automated heavy-duty machinery products, such as the terminal tractor, reach stacker, automated RTG, and hook lift.

[Question] In what ways have you found that making a robot work in the research lab differs from making a robot work in the field?

Prof. Travers:

(Robotics research) is really more focused on algorithmic development: optimizing and beating benchmarks on a data set, making mapping performance more accurate, getting better classifications. That is a fantastic pursuit, in terms of algorithmic development and trying to have the next seminal result.

When you have to start putting things on an actual robot, at least in my experience, you can’t just think in your own silo anymore. You can’t just say it’s the best mapping algorithm. Well, is it still the best if it eats up 50% of your computing resources? It’s the best by a particular set of metrics. At least in my experience, that set of metrics starts to change drastically when you start considering things like systems engineering and computer science. It’s not just a proof of demonstration; it’s going from proof of demonstration to actually demonstrating on a real machine, and having to think in terms of both practicality and optimality.

And as for the sets of challenges, it’s not like you can just take some of the pieces, put them all together, and it’s going to work the way you want it to. In my experience, only half the work gets done on the algorithmic side. The other half actually goes into just trying to get something coherent working on one machine or one set of machines.

Dr. Aref:

What you aim for as the TRL (technology readiness level) is fairly different when you want only a proof of concept versus when you want to hand a working machine to someone else. So the gaps are there. But I also see that Professor Travers came to almost the same practical understanding, mainly because the DARPA Challenge environment pushes you to consider a lot of realistic situations that you usually don’t consider in academic research.

In general, I second the statements coming from Professor Travers. We might be proud of a certain global optimization, but in practice, when you reduce dependencies and have simple and robust algorithms, you’re going to appreciate them in the field. Consider, for example, some buggy behavior during a deployment. The more complex your system and the hazier your interfaces, the more time you’ll spend on a customer site or on your R&D test yard. Either you have to be ready for that, or you have to accept the embarrassment of basically having 20 engineers watching while you are debugging your code.

[Question] Both Professor Travers and Dr. Aref mentioned this balance between optimality and practicality. What are other challenges when it comes to deploying those autonomous algorithms onto hardware?

Dr. Aref:

One aspect is that we usually have software as a complex system full of dependencies on hardware. What has helped us so far is that we isolate the hardware-dependent parts and try to keep the majority of the code purely algorithmic or purely logical. By doing that, we gain a lot of confidence before deployment, and when something goes wrong during the deployment, we have a better guess at where the source of the issue is.
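The isolation pattern Dr. Aref describes can be sketched in a few lines. The following Python example is purely illustrative (not Cargotec code; all names are hypothetical): the hardware-facing layer is reduced to a narrow interface, the algorithmic core takes plain data, and a simulated sensor lets the core be tested with no hardware attached.

```python
from abc import ABC, abstractmethod

class RangeSensor(ABC):
    """Hardware-facing interface: the only layer allowed to touch drivers."""
    @abstractmethod
    def read_ranges(self) -> list:
        ...

def min_obstacle_distance(ranges):
    """Pure algorithmic core: no hardware dependencies, easy to unit-test.
    Zero readings are treated as invalid returns from the sensor."""
    valid = [r for r in ranges if r > 0.0]
    return min(valid) if valid else float("inf")

class SimulatedSensor(RangeSensor):
    """Stand-in used before deployment; a real driver would implement the
    same RangeSensor interface, so the core code never changes."""
    def __init__(self, ranges):
        self._ranges = ranges
    def read_ranges(self):
        return self._ranges

sensor = SimulatedSensor([2.5, 0.0, 1.2, 3.8])
print(min_obstacle_distance(sensor.read_ranges()))  # -> 1.2
```

Because the core is a pure function, a failure in the field that contradicts its unit tests immediately points toward the driver layer, which is exactly the "better guess at the source of the issue" benefit described above.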

Another thing that I never thought about in my academic life, and it is one of the number-one issues in industrial life, is the lifetime of components. When you hand a device to a customer, you are promising that the machine is going to work for 10 years, because that is common practice in that market. Then you face the fact that one component, say a processing unit or a sensor from a supplier, is supported for 10 years, but you spent four years on product development, so only six years remain of that component’s lifetime support. That means before you have even introduced anything to the market, either you have to change your target hardware, which is going to be really painful if you are not careful, or, when the component reaches its end of life, you need to supply your existing customers with new, upgraded hardware. These are things that people usually haven’t planned for. Code generation, for example, is something that I believe will help take the systems that work now and keep them working in the future.

Prof. Travers:

At a certain level, one of the things we preached in our win strategy, as laid out in the proposal for the Subterranean Challenge, is what we called modular autonomy. It’s this grand idea that you’re going to write algorithms that are independent and will work on any sensor. You’ll use ROS, you’ll have your communication protocol, and you’ll have your drivers handle the low-level dependencies. I’ve certainly preached that for a long time, but in my opinion, or at least my experience, past a certain point there’s not very much you can do to eliminate some of the low-level dependencies between hardware and software.

As an example, take the way that feature matching happens in our SLAM algorithm. My group, and others related to us, do a lot of field robotics. Not everybody, but a lot of people in the Robotics Institute used a version of SLAM called LOAM, an industry-standard Lidar Odometry and Mapping algorithm. It is well known, but the way the feature matching happens inside the SLAM algorithm is innately optimized for the way things happen on a Velodyne lidar. So if you go to a different sensor, like an Ouster, and you just try to run LOAM out of the box, the feature matching doesn’t work, because domain knowledge about the actual physics of how the Velodyne scan happens was leveraged heavily in the way the feature matching was implemented. I’m sure somebody somewhere has figured out how to get around this, but it’s not an industry standard, at least not from where we sit.
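To see why this coupling is hard to remove, here is a heavily simplified sketch of the curvature (smoothness) computation that LOAM-style pipelines use to pick edge and planar features. This is not the LOAM implementation, just an illustration of the idea: each point is compared against its neighbors on the same scan line, which bakes in assumptions about the sensor's ring layout and angular resolution, so thresholds tuned for one lidar can fail on another.

```python
def scanline_curvature(points, k=5):
    """LOAM-style smoothness score along ONE scan line.

    points: list of (x, y, z) tuples, ordered as the sensor sweeps.
    For each interior point, sum the offsets to its k neighbors on each
    side; a straight (planar) stretch cancels to ~0, a corner does not.
    The choice of k and any downstream thresholds implicitly encode the
    sensor's angular resolution -- the hardware dependency in question.
    """
    n = len(points)
    curvatures = []
    for i in range(k, n - k):
        diff = [0.0, 0.0, 0.0]
        for j in range(i - k, i + k + 1):
            for axis in range(3):
                diff[axis] += points[j][axis] - points[i][axis]
        curvatures.append(sum(d * d for d in diff))
    return curvatures

# A straight scan segment scores ~0; the corner point scores highest.
line = [(float(x), 0.0, 0.0) for x in range(11)]
print(scanline_curvature(line, k=2))
```

Swap in a sensor with a different ring count or point spacing and the same k and thresholds no longer separate edges from planes, which is the out-of-the-box failure Prof. Travers describes.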

[Question] What are some of the gaps you wish researchers and the industry would address?

Dr. Aref:

It’s worth mentioning that, at least based on my experience, when you want to introduce anything to the market, you will not be able to sell anything unless safety and verification methods are in place. Going back to what I assumed a safe system was when I was studying, I took the reliability and deterministic outcome of an algorithm, or a stability proof, as making it safe. But the gap between what stable and reliable mean in academia versus what safety-compliant means in industry is not receiving enough attention. Roadmaps forecast that, in 10 years, autonomous vehicles will generate some two trillion dollars in turnover. If we want to really create that kind of turnover from this work, those forecasts might be valid only if we manage to address verification and validation properly.

[Question] To add to that, in the past we have seen software safety certifications. These make sure that the software follows the protocol, with no bugs, no place where it can get stuck forever, and no unexpected behaviors. With regard to autonomy for any kind of machinery, we may also be talking about operational safety: here are all the possible cases the vehicle may run into, and it needs to make decisions that address those situations properly, without safety concerns. Is that the certification and safety verification direction you are thinking of, Dr. Aref?

Dr. Aref:

It includes that, but there are also aspects of fault tolerance and keeping the system fail-operational, moving away from being merely fail-safe towards being fail-operational. That is one part. Another part touches on mathematics: if we are talking about a function being tested and verified, you have to prove that what you are doing is reliable over the entire input range of that function.
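The "entire range of a function" point can be made concrete with a toy example (my illustration, not from the talk): for a function over a small, bounded input domain, you can check a safety property exhaustively rather than only on a handful of test cases, which is closer to the verification standard industry expects.

```python
def clamp_speed(cmd):
    """Toy actuator limiter: saturate a commanded speed to [-100, 100].
    The safety property is |output| <= 100 for EVERY possible input."""
    return max(-100, min(100, cmd))

# Exhaustively verify the property over the whole 16-bit input range,
# instead of spot-checking a few values.
violations = [v for v in range(-32768, 32768) if abs(clamp_speed(v)) > 100]
print("violations:", len(violations))  # -> violations: 0
```

For real continuous or high-dimensional domains, exhaustive enumeration is replaced by formal methods or interval analysis, but the obligation is the same: the claim must hold over the whole range, not just over the test set.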

[Question] What do you see as the interaction between humans and machines in the future? Will autonomous machines take over most field operations, and what will humans do there?

Prof. Travers:

A lot of what I currently do on projects is trying to figure out how to have a single operator interact, in my case, with a fleet of heterogeneous vehicles. Instead of having an operator remote-control a vehicle, or even give it waypoints, the idea is to give a single operator access to a dictionary of different behaviors and create a closed-loop system between the operator and a set of autonomous vehicles. The vehicles communicate back to the operator when they have information or need a decision, and the operator then figures out how to do whatever the robots can’t.
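A minimal sketch of that operator-in-the-loop dispatch pattern might look like the following. This is my own hypothetical illustration (behavior names and return strings are invented, not from CMU's system): the operator issues named behaviors from a shared dictionary, and anything outside the dictionary flows back to the operator as a request for a decision.

```python
# Hypothetical behavior dictionary shared by the whole fleet; each entry
# maps a behavior name to a callable the vehicle can execute autonomously.
behaviors = {
    "explore": lambda robot: f"{robot} exploring frontier",
    "return_home": lambda robot: f"{robot} returning to base",
    "hold": lambda robot: f"{robot} holding position",
}

def dispatch(robot, behavior):
    """Single operator commands many vehicles by behavior name.
    Unknown requests close the loop back to the human for a decision."""
    if behavior not in behaviors:
        return f"{robot} -> operator: no behavior '{behavior}', awaiting input"
    return behaviors[behavior](robot)

print(dispatch("uav1", "explore"))
print(dispatch("ugv2", "dig"))  # not in the dictionary: escalates to operator
```

The design point is that the operator's bandwidth scales with the number of decisions, not with the number of vehicles, which is what makes one human per heterogeneous fleet plausible.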

Dr. Aref:

In our domain, autonomous load handling, there are really great trends. Roadmaps show that many roadblocks are fading away, but, at the same time, I would say the safety and regulatory aspects are quite different. What we also have as a challenge is human behavior. The problem is not the job being handled by an autonomous system; the problem is handling the job safely even when humans coexist with you in a shared environment. That becomes the challenge, because you then have to have a response for any behavior that comes from manual machines, or from humans not really obeying the agreed rules. So, in general, what we are facing is that we built an environment for humans and manual machines, and then we expect autonomous systems to operate in that environment the same way it was built for a human.

To hear more detailed stories from Prof. Travers and Dr. Aref, for example, Prof. Travers’ approach to the DARPA Subterranean Challenge, or Prof. Travers’ and Dr. Aref’s opinions on the engineering collaboration behind autonomous system development, listen to the full talk at https://www.mathworks.com/videos/practical-challenges-in-deploying-autonomy-to-offroad-vehicles-1699413980368.html

Are you interested in learning how to program autonomous navigation for offroad vehicles? Setting up scenario simulation to test and validate autonomous algorithms? Or modeling and controlling your excavator and other heavy-duty machinery? Try out these tutorials from MathWorks.com:

[code] Offroad Navigation for Autonomous Haul Trucks

[code] Excavator Design with Simscape

Designing and Simulating Autonomy for Construction Vehicles

