{"id":1372,"date":"2018-03-16T13:06:35","date_gmt":"2018-03-16T13:06:35","guid":{"rendered":"https:\/\/blogs.mathworks.com\/headlines\/?p=1372"},"modified":"2019-10-22T15:17:52","modified_gmt":"2019-10-22T15:17:52","slug":"imaging-algorithm-lets-you-see-around-corners-with-laser-pulses","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/headlines\/2018\/03\/16\/imaging-algorithm-lets-you-see-around-corners-with-laser-pulses\/","title":{"rendered":"Imaging Algorithm Lets You See Around Corners with Laser Pulses"},"content":{"rendered":"<p>Stanford\u2019s new laser-based imaging technology could take blind spot detection in cars to a whole new level. Not only can it see things the driver can\u2019t see from the driver\u2019s seat, it can see things that aren\u2019t visible from anywhere on the car. It \u201csees\u201d things that are not in the line of sight.<\/p>\n<p>Blind spot detection relies on sensors mounted around the car to detect objects located to the driver&#8217;s side and behind the vehicle. Stanford\u2019s system can detect objects, in 3D, that are hidden behind walls and around corners. The system uses an algorithm, created with <a href=\"https:\/\/www.mathworks.com\/products\/matlab.html\" target=\"_blank\" rel=\"noopener\">MATLAB<\/a>, that computationally reconstructs objects hidden from view.<\/p>\n<p>&nbsp;<\/p>\n<p><div id=\"attachment_1378\" style=\"width: 360px\" class=\"wp-caption alignnone\"><a href=\"http:\/\/www.computationalimaging.org\/wp-content\/uploads\/2018\/03\/teaser-1024x768.png\" target=\"_blank\" rel=\"attachment noopener wp-att-1378\"><img aria-describedby=\"caption-attachment-1378\" decoding=\"async\" loading=\"lazy\" class=\"wp-image-1378\" src=\"https:\/\/blogs.mathworks.com\/headlines\/files\/2018\/03\/Set-up.png\" alt=\"Experimental setup shows how laser light scatters off the wall, reflects off the hidden rabbit, and returns to the wall. 
\" width=\"350\" height=\"263\" \/><\/a><p id=\"caption-attachment-1378\" class=\"wp-caption-text\">Experimental Setup:\u00a0The imaging system records the time it takes for laser light to scatter off the wall, reflect off the hidden rabbit, and return to the wall. By acquiring these timing measurements for different laser positions on the wall, the 3D geometry of the hidden object can be reconstructed. Image credit: Stanford Computational Imaging Lab.<\/p><\/div><\/p>\n<p>&nbsp;<\/p>\n<p>Stanford\u2019s system even goes beyond the state-of-the-art LIDAR being used in autonomous vehicles. <a href=\"https:\/\/www.mathworks.com\/products\/automated-driving\/code-examples.html?s_tid=srchtitle\" target=\"_blank\" rel=\"noopener\">LIDAR systems<\/a> send pulses of light into the surroundings and measure how long it takes for the light to bounce off an object and back to a sensor on the car. From this information, the LIDAR system calculates the 3D shape of the object in the path of the car, differentiating between other cars, road signs, and pedestrians. But LIDAR still requires line-of-sight conditions.<\/p>\n<p>&nbsp;<\/p>\n<p><div id=\"attachment_1392\" style=\"width: 360px\" class=\"wp-caption alignnone\"><a href=\"http:\/\/www.computationalimaging.org\/wp-content\/uploads\/2018\/03\/outdoor.gif\" target=\"_blank\" rel=\"attachment noopener wp-att-1392\"><img aria-describedby=\"caption-attachment-1392\" decoding=\"async\" loading=\"lazy\" class=\"wp-image-1392 size-full\" src=\"https:\/\/blogs.mathworks.com\/headlines\/files\/2018\/03\/outdoor_small.gif\" alt=\"Animation showing laser imaging system in outdoor demonstration, using wall to reflect photons.\" width=\"350\" height=\"197\" \/><\/a><p id=\"caption-attachment-1392\" class=\"wp-caption-text\">Outdoor Experiment:\u00a0Non-line-of-sight imaging is demonstrated outdoors. The imaging system captures measurements in indirect sunlight and robustly reconstructs the hidden \u201cS\u201d shape. 
Image credit:\u00a0Stanford Computational Imaging Lab.<\/p><\/div><\/p>\n<p>&nbsp;<\/p>\n<p>The Stanford imaging technology is similar to LIDAR, using a pulse of laser light. But it also captures light that scatters off a wall and reflects off objects that are hidden from view. It essentially treats a wall as a mirror. Since walls don\u2019t reflect light as well as a mirror does, the team reconstructs the image from the limited number of photons that the wall reflects back to the sensor.<\/p>\n<p>These photons are captured by a photon detector set up adjacent to the laser. The photon detector is so sensitive that it can detect a single photon. It creates a \u201cscan\u201d of the reflected light pulses.<\/p>\n<p style=\"padding-left: 30px;\">\u201cThese are, at most, a few photons we\u2019re recording, and they don\u2019t resemble the shape of the scene we\u2019re trying to recover,\u201d says Gordon Wetzstein, assistant professor of electrical engineering at Stanford University. \u201cSo, we need to build computational reconstruction methods to try to resolve these shapes.\u201d<\/p>\n<p>&nbsp;<\/p>\n<p><div id=\"attachment_1398\" style=\"width: 360px\" class=\"wp-caption alignnone\"><a href=\"http:\/\/www.computationalimaging.org\/wp-content\/uploads\/2018\/03\/output.gif\" target=\"_blank\" rel=\"attachment noopener wp-att-1398\"><img aria-describedby=\"caption-attachment-1398\" decoding=\"async\" loading=\"lazy\" class=\"wp-image-1398 size-full\" src=\"https:\/\/blogs.mathworks.com\/headlines\/files\/2018\/03\/cubes_small.gif\" alt=\"Image shows scanned reflected photons before the reconstruction as well as the reconstructed result. 
\" width=\"350\" height=\"197\" \/><\/a><p id=\"caption-attachment-1398\" class=\"wp-caption-text\">This image shows the scanned reflected photons before the reconstruction as well as the reconstructed result.\u00a0Image credit: Stanford Computational Imaging Lab.<\/p><\/div><\/p>\n<p>&nbsp;<\/p>\n<p>The computational reconstruction algorithm uses the information from the scan to infer the 3D shape of the hidden objects. According to <em><a href=\"https:\/\/news.stanford.edu\/2018\/03\/05\/technique-can-see-objects-hidden-around-corners\/\" target=\"_blank\" rel=\"noopener\">Stanford News<\/a><\/em>, \u201cOnce the scan is finished, the algorithm untangles the paths of the captured photons, and like the mythical enhancement technology of television crime shows, the blurry blob takes a much sharper form.\u201d<\/p>\n<p><a href=\"http:\/\/www.computationalimaging.org\/publications\/confocal-non-line-of-sight-imaging-based-on-the-light-cone-transform\/\" target=\"_blank\" rel=\"noopener\">The research<\/a> was published in <em><a href=\"https:\/\/www.nature.com\/articles\/nature25489\" target=\"_blank\" rel=\"noopener\">Nature.<\/a><\/em><\/p>\n<h2>Real-world applications<\/h2>\n<p style=\"padding-left: 30px;\">\u201cA benefit of our algorithm is that it is compatible with existing scanning LIDAR systems,\u201d explains David Lindell, Stanford University Ph.D. student.<\/p>\n<p>The\u00a0system was able to see a street sign that was hidden behind the wall. That would be valuable to an autonomous vehicle. 
But even more critically, it could see when a child or pet was about to dart out into traffic, even before they stepped into the line of sight.<\/p>\n<p>&nbsp;<\/p>\n<p><div id=\"attachment_1382\" style=\"width: 266px\" class=\"wp-caption alignnone\"><a href=\"http:\/\/www.computationalimaging.org\/wp-content\/uploads\/2018\/03\/spin_6.gif\" target=\"_blank\" rel=\"attachment noopener wp-att-1382\"><img aria-describedby=\"caption-attachment-1382\" decoding=\"async\" loading=\"lazy\" class=\"wp-image-1382 size-full\" src=\"https:\/\/blogs.mathworks.com\/headlines\/files\/2018\/03\/spin_6.gif\" alt=\"Reconstructed image of exit sign, spinning to show 3-d reconstruction.\" width=\"256\" height=\"192\" \/><\/a><p id=\"caption-attachment-1382\" class=\"wp-caption-text\">Image credit: Stanford Computational Imaging Lab.<\/p><\/div><\/p>\n<p>&nbsp;<\/p>\n<p>The authors of the paper note that the algorithm has uses beyond LIDAR systems in vehicles. Potential uses range from search and rescue, where victims may be hidden by tree canopy foliage, to microscopy, where objects of interest are obscured by larger items in the field of view.<\/p>\n<p style=\"padding-left: 30px;\">\u201cTo make \u2018imaging around corners\u2019 viable for real-world scenarios, we still need to shorten our procedure\u2019s acquisition time,\u201d says Matthew O\u2019Toole, a postdoctoral fellow in imaging technology at Stanford University. \u201cOur current prototype takes several minutes to collect enough photons to reconstruct images of objects hidden from sight. 
With better hardware such as a brighter laser, we believe this can be done within fractions of a second.\u201d<\/p>\n<p>MATLAB code and data are available <a href=\"https:\/\/drive.google.com\/file\/d\/1OoZ4JfkXY0bIGlb4dT22YjhhwplZjQOc\/view\" target=\"_blank\" rel=\"noopener\">here<\/a>.<\/p>\n<p>To learn more about the research, check out this video:<\/p>\n<p><iframe loading=\"lazy\" width=\"500\" height=\"375\" src=\"https:\/\/www.youtube.com\/embed\/lCJN_RwJPew?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen><\/iframe><\/p>\n","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img decoding=\"async\"  class=\"img-responsive\" src=\"https:\/\/blogs.mathworks.com\/headlines\/files\/2018\/03\/Set-up.png\" onError=\"this.style.display ='none';\" \/><\/div>\n<p>Stanford\u2019s new laser-based imaging technology could take blind spot detection in cars to a whole new level. Not only can it see things the driver can\u2019t see from the driver\u2019s seat, it can see things&#8230; <a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/headlines\/2018\/03\/16\/imaging-algorithm-lets-you-see-around-corners-with-laser-pulses\/\">read more 
>><\/a><\/p>\n","protected":false},"author":138,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/headlines\/wp-json\/wp\/v2\/posts\/1372"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/headlines\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/headlines\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/headlines\/wp-json\/wp\/v2\/users\/138"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/headlines\/wp-json\/wp\/v2\/comments?post=1372"}],"version-history":[{"count":16,"href":"https:\/\/blogs.mathworks.com\/headlines\/wp-json\/wp\/v2\/posts\/1372\/revisions"}],"predecessor-version":[{"id":2408,"href":"https:\/\/blogs.mathworks.com\/headlines\/wp-json\/wp\/v2\/posts\/1372\/revisions\/2408"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/headlines\/wp-json\/wp\/v2\/media?parent=1372"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/headlines\/wp-json\/wp\/v2\/categories?post=1372"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/headlines\/wp-json\/wp\/v2\/tags?post=1372"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}