{"id":1849,"date":"2018-09-12T09:00:48","date_gmt":"2018-09-12T07:00:48","guid":{"rendered":"https:\/\/blogs.mathworks.com\/student-lounge\/?p=1849"},"modified":"2019-09-03T09:32:43","modified_gmt":"2019-09-03T07:32:43","slug":"autonomous-systems-sensors","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/student-lounge\/2018\/09\/12\/autonomous-systems-sensors\/","title":{"rendered":"How Do Autonomous Systems &#8220;See&#8221;?"},"content":{"rendered":"<p><a href=\"https:\/\/www.mathworks.com\/matlabcentral\/profile\/authors\/3069683-sebastian-castro\">Sebastian Castro<\/a> is back to talk about sensors in autonomous systems, supported by a few example algorithms and student competitions that use low-cost hardware platforms.<\/p>\n<h1>Introduction<\/h1>\n<p>There are many challenges around the world that focus on learning autonomous perception and navigation using low-cost ground vehicle platforms. Here are a few that we support, which involve similar tasks:<\/p>\n<ul>\n<li><a href=\"http:\/\/emanual.robotis.com\/docs\/en\/platform\/turtlebot3\/autonomous_driving\/\">TurtleBot3 AutoRace<\/a><\/li>\n<li><a href=\"https:\/\/qutrobotics.com\/droid-racing-challenge\/\">Droid Racing Challenge<\/a><\/li>\n<li><a href=\"https:\/\/www.rose-hulman.edu\/academics\/educational-outreach\/autonomous--vehicle-competition.html\">Rose-Hulman High School Autonomous Vehicle Challenge<\/a><\/li>\n<\/ul>\n<p>Our <a href=\"https:\/\/blogs.mathworks.com\/racing-lounge\/2018\/08\/29\/autonomous-systems-high-school-students\/\">previous blog post<\/a> covered the Rose-Hulman High School Autonomous Vehicle Challenge. In this post, we will dive deeper into the <strong>TurtleBot3 AutoRace<\/strong>. 
This student competition contains a ready-to-go <a href=\"https:\/\/github.com\/ROBOTIS-GIT\/turtlebot3_simulations\">simulation package<\/a> and a <a href=\"https:\/\/github.com\/ROBOTIS-GIT\/turtlebot3_autorace\">set of tutorials<\/a>.<\/p>\n<p>TurtleBot3 is also designed to be an educational low-cost platform for the <a href=\"http:\/\/www.ros.org\/\">Robot Operating System (ROS)<\/a>. ROS is a software framework that, among other things, lets users easily switch between simulation and hardware. As we have discussed in a <a href=\"https:\/\/blogs.mathworks.com\/racing-lounge\/2017\/11\/08\/matlab-simulink-ros\/\">previous blog post<\/a>, MATLAB and Simulink have an interface to ROS. This made it possible for us to collect data from the AutoRace simulation and put together some quick examples.<\/p>\n<p>ROS is also known for containing specialized message types that represent common sensors used in autonomous systems. At the same time, MATLAB and Simulink have toolboxes for <a href=\"https:\/\/www.mathworks.com\/products\/computer-vision.html\">computer vision<\/a>, <a href=\"https:\/\/www.mathworks.com\/products\/robotics.html\">motion planning<\/a>, and <a href=\"https:\/\/www.mathworks.com\/products\/automated-driving.html\">automated driving<\/a>. So, in the rest of this post I will bridge these two tools by exploring common sensor types through a few examples:<\/p>\n<ul>\n<li><strong>Vision sensors:<\/strong>\u00a0Various types of cameras (mono vs. 
infrared, etc.)<\/li>\n<li><strong>Line-of-sight sensors:<\/strong>\u00a0Ultrasonic sensors, radar, and lidar.<\/li>\n<\/ul>\n<p>First, we can download the TurtleBot3 AutoRace software packages and start up the Gazebo simulator.<\/p>\n<pre>$ roslaunch turtlebot3_gazebo turtlebot3_autorace.launch<\/pre>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter wp-image-1867 size-full\" src=\"https:\/\/blogs.mathworks.com\/racing-lounge\/files\/2018\/09\/autorace_gazebo-e1536676603751.png\" alt=\"\" width=\"799\" height=\"564\" \/><\/p>\n<p>Then, we can connect MATLAB to the simulator using ROS. Since I am using a Windows host and a Linux virtual machine, I am connecting to a ROS master with a specific IP address as shown below.<\/p>\n<pre>rosinit('192.168.119.129');<\/pre>\n<h1>Camera Example: Lane Detection<\/h1>\n<p>Let&#8217;s receive an image from the simulator.<\/p>\n<pre>imgSub = rossubscriber('\/camera\/image');\r\nimgMsg = receive(imgSub);\r\nrgbImg = readImage(imgMsg);<\/pre>\n<p>All the student competitions listed above have a lane detection and following component, so let&#8217;s use images to try detecting some lanes.\u00a0A typical workflow is:<\/p>\n<ol>\n<li>Transform the image to a bird&#8217;s-eye view<\/li>\n<li>Segment the image<\/li>\n<li>Detect the lanes<\/li>\n<li>Create a control algorithm to follow (or not follow) the lanes<\/li>\n<\/ol>\n<p>A bird&#8217;s-eye view is commonly used to make lane detection easier, because it removes the &#8220;perspective&#8221; effect of a camera by showing a top-down view of the image. To perform this transformation, we need to calculate a transformation matrix that maps pixel locations in the camera image to locations in the top-down view. To do this, you can either:<\/p>\n<ul>\n<li>Obtain intrinsic (focal length, distortion, etc.) 
and extrinsic (location of camera on the vehicle) parameters for the camera, and then calculate the transformation matrix<\/li>\n<li>Estimate a transformation matrix by mapping 4 points on the image to 4 known points in the real world<\/li>\n<\/ul>\n<p>To obtain the camera intrinsic parameters, you can either view the manufacturer datasheet or\u00a0<a href=\"https:\/\/www.mathworks.com\/help\/vision\/camera-calibration.html\">calibrate to a known pattern<\/a>\u00a0&#8212; usually a checkerboard. In this example, I used the second approach, which involved <a href=\"https:\/\/www.mathworks.com\/help\/images\/ref\/impoint.html\">manually selecting points<\/a> and mapping them to known &#8220;truth&#8221; values in the bird&#8217;s-eye view. The <a href=\"https:\/\/www.mathworks.com\/help\/vision\/ref\/estimategeometrictransform.html\">estimateGeometricTransform<\/a> function was key in this step.<\/p>\n<pre>% points: four manually selected pixel locations in the image\r\n% truth: the corresponding known locations in the bird's-eye view\r\ntform = estimateGeometricTransform(points,truth,'projective');\r\nwarpedImg = imwarp(rgbImg,tform);\r\nsubplot(1,2,1)\r\nimshow(rgbImg);\r\nsubplot(1,2,2)\r\nimshow(flipud(warpedImg));<\/pre>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter wp-image-1869 size-full\" src=\"https:\/\/blogs.mathworks.com\/racing-lounge\/files\/2018\/09\/autorace_birdseye-e1536669321204.png\" alt=\"\" width=\"801\" height=\"339\" \/><\/p>\n<p>To segment the image, you can use <a href=\"https:\/\/www.mathworks.com\/help\/images\/image-segmentation-using-the-color-thesholder-app.html\">color thresholding<\/a>. To make things a bit easier, these\u00a0student competitions use color to distinguish between left and right lane boundaries. 
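<\/p>\n<p>As a quick aside (a sketch, not part of the official AutoRace tutorials): color thresholding is often more robust in HSV space than in raw RGB channels, since hue varies less with lighting than the individual color channels do. The bounds below are illustrative placeholders, not values calibrated for this course.<\/p>\n<pre>% Hypothetical sketch: segmenting the lanes in HSV space\r\n% (threshold bounds are illustrative, not calibrated values)\r\nhsvImg = rgb2hsv(warpedImg);\r\nyellowIdx = hsvImg(:,:,1) &gt; 0.1 &amp; hsvImg(:,:,1) &lt; 0.2 &amp; hsvImg(:,:,2) &gt; 0.4;\r\nwhiteIdx = hsvImg(:,:,2) &lt; 0.2 &amp; hsvImg(:,:,3) &gt; 0.8;<\/pre>\n<p>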
In AutoRace, the lanes are yellow and white, so we can calibrate and define two sets of color thresholds.<\/p>\n<pre>yellowIdx = (warpedImg(:,:,1) &gt; 100 &amp; warpedImg(:,:,3) &lt; 100);\r\nwhiteIdx = (warpedImg(:,:,1) &gt; 100 &amp; warpedImg(:,:,3) &gt; 100);<\/pre>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter wp-image-1887 size-full\" src=\"https:\/\/blogs.mathworks.com\/racing-lounge\/files\/2018\/09\/autorace_threshold-e1536670319415.png\" alt=\"\" width=\"801\" height=\"334\" \/><\/p>\n<p>Finally, you can use the locations of the pixels to approximate the lanes as mathematical equations &#8212; typically polynomials. A straight lane can be represented as a line (a degree-1 polynomial), but curved lanes benefit from higher-order polynomials such as quadratics or cubics. To reduce the number of pixels for curve fitting, I first\u00a0<a href=\"https:\/\/www.mathworks.com\/help\/images\/skeletonization.html\">skeletonized the lines<\/a> to make them one pixel wide.<\/p>\n<pre>polyOrder = 3;\r\n[ry,cy] = find(bwskel(yellowIdx));\r\nyellowPoly = polyfit(ry,cy,polyOrder);<\/pre>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter wp-image-1873 size-full\" src=\"https:\/\/blogs.mathworks.com\/racing-lounge\/files\/2018\/09\/autorace_lanedetect-e1536669344560.png\" alt=\"\" width=\"800\" height=\"333\" \/><\/p>\n<p>There is also the problem of multiple lanes in an image. 
So, instead of fitting a polynomial to all the pixels that meet the given color threshold, you can apply techniques like <a href=\"https:\/\/www.mathworks.com\/help\/vision\/ref\/fitpolynomialransac.html\">random sample consensus (RANSAC)<\/a> to eliminate outliers and produce better results.<\/p>\n<pre style=\"text-align: left;\">polyOrder = 3;\r\nmaxDistance = 50;  % max distance (pixels) from the curve for inlier points\r\n[ry,cy] = find(bwskel(yellowIdx));\r\nyellowPoly = fitPolynomialRANSAC([ry cy],polyOrder,maxDistance);<\/pre>\n<p style=\"text-align: left;\"><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter wp-image-1875 size-full\" src=\"https:\/\/blogs.mathworks.com\/racing-lounge\/files\/2018\/09\/lanes_polyfit-e1536669361617.png\" alt=\"\" width=\"699\" height=\"282\" \/><\/p>\n<p style=\"text-align: center;\"><em>Lanes detected with regular polynomial fit, showing the effect of outliers<\/em><\/p>\n<p style=\"text-align: center;\"><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter wp-image-1877 size-full\" src=\"https:\/\/blogs.mathworks.com\/racing-lounge\/files\/2018\/09\/lanes_ransac-e1536669372579.png\" alt=\"\" width=\"700\" height=\"267\" \/><em>Lanes detected with RANSAC, showing the elimination of outliers<\/em><\/p>\n<p>For more advanced lane detection and following applications, you may want to look at some examples in Automated Driving System Toolbox:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.mathworks.com\/help\/driving\/examples\/visual-perception-using-monocular-camera.html\">Visual perception using monocular camera<\/a><\/li>\n<li><a href=\"https:\/\/www.mathworks.com\/help\/mpc\/ug\/lane-keeping-assist-with-lane-detection.html\">Lane keeping assist with lane detection<\/a><\/li>\n<\/ul>\n<h1>Lidar Example: Localization and Mapping<\/h1>\n<p>Unlike images, measurements from line-of-sight sensors do not contain color or intensity information. 
However, they are better at directly measuring distance and velocity between the vehicle and potential obstacles or landmarks.<\/p>\n<p>Ultrasonic sensors, radar, and lidar are fundamentally similar, except that they use sound, radio waves, and light, respectively. Since light waves are much shorter than sound or radio waves and can be focused into narrow beams, lidar can produce higher-resolution data, both in 2D and 3D. On the other hand, ultrasonic and radar sensors are less expensive and more robust to weather conditions, and radar in particular offers a longer operating range.<\/p>\n<p>Line-of-sight sensors are used for two main tasks:<\/p>\n<ul>\n<li>Obstacle detection<\/li>\n<li>Localization and mapping<\/li>\n<\/ul>\n<p>In this example, we will demonstrate with lidar. Let&#8217;s start by receiving a scan from the simulated lidar sensor and converting it to a <a href=\"https:\/\/www.mathworks.com\/help\/robotics\/ref\/lidarscan.html\">lidar scan object<\/a> for convenience.<\/p>\n<pre>scanSub = rossubscriber('\/scan');\r\nlidar = lidarScan(receive(scanSub));<\/pre>\n<p><strong>Obstacle detection<\/strong> generally involves separating the drivable terrain from the obstacles, if needed, and then using machine learning techniques such as clustering to gain more information about the structure of the environment. 
For more information, refer to <a href=\"https:\/\/www.mathworks.com\/help\/driving\/lidar-processing.html\">this documentation page<\/a>.<\/p>\n<pre>% Convert 2D scan to 3D point cloud and perform obstacle segmentation using Euclidean distance clustering\r\nptCloud = pointCloud([lidar.Cartesian zeros(size(lidar.Cartesian,1),1)]);\r\nminDist = 0.25;\r\n[labels,numClusters] = pcsegdist(ptCloud,minDist);<\/pre>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter wp-image-1893 size-full\" src=\"https:\/\/blogs.mathworks.com\/racing-lounge\/files\/2018\/09\/autorace_segmented-e1536674931633.png\" alt=\"\" width=\"800\" height=\"357\" \/><\/p>\n<p>As for <strong>localization<\/strong>, the simplest approach is &#8220;dead reckoning&#8221;, or counting the number of times the vehicle&#8217;s wheels have turned. Since this process requires integrating speed measurements to estimate position, it is highly sensitive to wheel slip and sensor noise\/drift. Lidar-based localization methods\u00a0are more complicated and computationally expensive, but because they do not depend on the entire history of sensor readings, they are better at avoiding drift in the estimated vehicle position.<\/p>\n<p>Localization with lidar depends on whether or not the map already exists:<\/p>\n<ul>\n<li>If the map is known, you can use probabilistic methods like <a href=\"https:\/\/www.mathworks.com\/help\/robotics\/examples\/localize-turtlebot-using-monte-carlo-localization.html\">particle filters<\/a> to figure out the most likely locations that would produce the current reading.<\/li>\n<li>If the map is not known, you can use optimization methods like <a href=\"https:\/\/www.mathworks.com\/help\/robotics\/ref\/matchscans.html\">scan matching<\/a> to figure out how far you moved between two consecutive scans.<\/li>\n<\/ul>\n<pre>% Create the velocity publisher and message\r\nvelPub = rospublisher('\/cmd_vel');\r\nvelMsg = rosmessage(velPub);\r\n\r\n% Get first scan\r\nlidar1 = lidarScan(receive(scanSub));\r\n\r\n% Move robot\r\nvelMsg.Linear.X = 0.1;\r\nvelMsg.Angular.Z = 
0.25;\r\nsend(velPub,velMsg);\r\npause(1);\r\nvelMsg.Linear.X = 0;\r\nvelMsg.Angular.Z = 0;\r\nsend(velPub,velMsg);\r\n\r\n% Get second scan\r\nlidar2 = lidarScan(receive(scanSub));\r\n\r\n% Perform scan matching to estimate the relative pose between the scans\r\nrelPoseScan = matchScans(lidar2,lidar1)<\/pre>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter wp-image-1897 size-full\" src=\"https:\/\/blogs.mathworks.com\/racing-lounge\/files\/2018\/09\/autorace_scanmatch-e1536675342546.png\" alt=\"\" width=\"801\" height=\"371\" \/><\/p>\n<p>Scan matching additionally lets you perform simultaneous localization and mapping (SLAM). By taking a sequence of lidar measurements, you can estimate both the vehicle trajectory and the obstacle positions in the map that would most likely generate these measurements. One way to do this is by building and optimizing a graph of robot poses (position and orientation) over time, which is known as graph-based SLAM and is <a href=\"https:\/\/www.mathworks.com\/help\/robotics\/examples\/implement-simultaneous-localization-and-mapping-slam-with-lidar-scans.html\">available in Robotics System Toolbox<\/a>.<\/p>\n<p>Below is the result of collecting 45 seconds&#8217; worth of data and estimating the robot trajectory and map only from lidar information. 
This is known as offline SLAM, in contrast to online SLAM, which does the same with live data while the vehicle is driving.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" width=\"652\" height=\"680\" class=\"aligncenter size-full wp-image-1901\" src=\"https:\/\/blogs.mathworks.com\/racing-lounge\/files\/2018\/09\/autorace_slamrun.gif\" alt=\"\" \/><\/p>\n<pre>% scansSampled is a vector of lidarScan objects collected while driving\r\n\r\n% Define the LidarSLAM object\r\nmaxRange = 2;     % meters\r\nresolution = 25;  % cells per meter\r\nslam = robotics.LidarSLAM(resolution,maxRange);\r\nslam.LoopClosureThreshold = 350;\r\nslam.LoopClosureSearchRadius = 5;\r\n\r\n% Build and optimize the pose graph\r\nnumIters = numel(scansSampled);\r\nfor idx = 1:numIters\r\n  addScan(slam,scansSampled(idx));\r\nend\r\n\r\n% Export the final results as an occupancy grid\r\n[slamScans, slamPoses] = scansAndPoses(slam);\r\nmap = buildMap(slamScans, slamPoses, resolution, maxRange);<\/pre>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter wp-image-1899 size-full\" src=\"https:\/\/blogs.mathworks.com\/racing-lounge\/files\/2018\/09\/autorace_slam-e1536675553531.png\" alt=\"\" width=\"900\" height=\"423\" \/><\/p>\n<p>In practice, it is common to perform sensor fusion, or combine the readings from multiple sensors, using techniques such as <a href=\"https:\/\/www.mathworks.com\/discovery\/kalman-filter.html\">Kalman filtering<\/a>. 
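<\/p>\n<p>To make the idea concrete, here is a toy, hand-rolled scalar example. It is not part of the AutoRace code, and odomDelta and zLidar are hypothetical vectors of odometry position increments and lidar-derived position fixes.<\/p>\n<pre>% Toy 1D Kalman filter: fuse drifting odometry with noisy position fixes\r\nx = 0;  P = 1;   % state estimate and its variance\r\nQ = 0.01;        % process noise added by each odometry step\r\nR = 0.25;        % measurement noise of each position fix\r\nfor k = 1:numel(odomDelta)\r\n    % Predict: integrate odometry; uncertainty grows\r\n    x = x + odomDelta(k);\r\n    P = P + Q;\r\n    % Update: correct with the measurement; uncertainty shrinks\r\n    K = P\/(P + R);              % Kalman gain\r\n    x = x + K*(zLidar(k) - x);\r\n    P = (1 - K)*P;\r\nend<\/pre>\n<p>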
This lets algorithms use the relative advantages of various sensors, provides redundant information in case some sensors produce inaccurate readings, and allows for more robust tracking of detected objects through time.<\/p>\n<p>Now that you have a map, you can refer to the following examples to see how to navigate within this map:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.mathworks.com\/help\/robotics\/examples\/path-planning-in-environments-of-different-complexity.html\">Path planning in environments of different complexity<\/a><\/li>\n<li>Automated parking valet <a href=\"https:\/\/www.mathworks.com\/help\/driving\/examples\/automated-parking-valet.html\">in MATLAB<\/a> and <a href=\"https:\/\/www.mathworks.com\/help\/driving\/examples\/automated-parking-valet-in-simulink.html\">in Simulink<\/a><\/li>\n<\/ul>\n<h1>What&#8217;s Next?<\/h1>\n<p>There is much more you can do with these sensors. One topic we did not discuss here is the use of machine learning to classify and locate other objects of interest &#8212; for example, traffic signs, pedestrians, or other vehicles. In fact, other\u00a0student competitions like the\u00a0<a href=\"https:\/\/www.duckietown.org\/research\/ai-driving-olympics\">AI Driving Olympics (AI-DO)<\/a>\u00a0are\u00a0encouraging such solutions over the more &#8220;traditional&#8221; techniques described above.<\/p>\n<p>In the examples above, we used ROS to stream sensor data to a separate computer that is running MATLAB &#8212; so the processing is done off-board. Small, low-cost platforms often cannot run a full install of MATLAB. Depending on the type of algorithms you create and hardware you are targeting, you can take advantage of C\/C++ code generation from MATLAB and Simulink, as well as its hardware support packages. 
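<\/p>\n<p>As a rough sketch of what that workflow can look like (detectLanes.m is a hypothetical function written with codegen-compatible syntax; this is not from the AutoRace tutorials):<\/p>\n<pre>% Hypothetical sketch: generate C code for a lane-detection function\r\ncfg = coder.config('lib');   % target a static library\r\ncodegen -config cfg detectLanes -args {zeros(480,640,3,'uint8')}<\/pre>\n<p>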
This lets you use MATLAB and Simulink for prototyping and design without having to later rewrite your design in a different language so it runs on your target system.<\/p>\n<p>The <a href=\"http:\/\/emanual.robotis.com\/docs\/en\/platform\/turtlebot3\/overview\/\">TurtleBot3 platform<\/a>, for instance, has a <a href=\"https:\/\/www.raspberrypi.org\/products\/raspberry-pi-3-model-b\/\">Raspberry Pi 3 Model B<\/a> running Linux and ROS. So, if your algorithm is written with functionality and syntax that supports code generation, you can follow a workflow similar to <a title=\"https:\/\/www.mathworks.com\/help\/supportpkg\/raspberrypi\/examples\/getting-started-with-robot-operating-system-ros-on-raspberry-pi-r.html (link no longer works)\">this example<\/a> and get your MATLAB code or Simulink models running standalone on the robot or autonomous vehicle.<\/p>\n<p>If you have any questions on anything discussed here, feel free to leave us a comment or get in touch with us via <a href=\"mailto:roboticsarena@mathworks.com\">email<\/a> or <a href=\"https:\/\/blogs.mathworks.com\/racing-lounge\/2018\/08\/15\/robocup-athome-education-2018\/\">Facebook<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img src=\"https:\/\/blogs.mathworks.com\/student-lounge\/files\/2018\/09\/autorace_gazebo-e1536676603751.png\" class=\"img-responsive attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"\" decoding=\"async\" loading=\"lazy\" \/><\/div>\n<p>Sebastian Castro is back to talk about sensors in autonomous systems, supported by a few example algorithms and student competitions that use low-cost hardware platforms.<br \/>\nIntroduction<br \/>\nThere are many&#8230; <a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/student-lounge\/2018\/09\/12\/autonomous-systems-sensors\/\">read more 
>><\/a><\/p>\n","protected":false},"author":155,"featured_media":1867,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[4,52,6,14],"tags":[177,189,175,191,173,187,102,185,179,181,193,94,183,171],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/posts\/1849"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/users\/155"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/comments?post=1849"}],"version-history":[{"count":26,"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/posts\/1849\/revisions"}],"predecessor-version":[{"id":3396,"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/posts\/1849\/revisions\/3396"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/media\/1867"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/media?parent=1849"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/categories?post=1849"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/student-lounge\/wp-json\/wp\/v2\/tags?post=1849"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}