Stanford’s new laser-based imaging technology could take blind spot detection in cars to a whole new level. Not only can it see things the driver can’t see from the driver’s seat, it can see things that aren’t visible from anywhere on the car. It “sees” things that are not in the line of sight.
Blind spot detection relies on sensors mounted around the car to detect objects located to the driver’s side and behind the vehicle. Stanford’s system can detect objects, in 3D, that are hidden behind walls and around corners. The system uses an algorithm, created with MATLAB, that computationally reconstructs objects hidden from view.
Stanford’s system even goes beyond the state-of-the-art LIDAR being used in autonomous vehicles. LIDAR systems send pulses of light into the surroundings and measure how long it takes for the light to bounce off an object and back to a sensor on the car. From this information, the LIDAR system calculates the 3D shape of the object in the path of the car, differentiating between other cars, road signs, and pedestrians. But LIDAR still requires line-of-sight conditions.
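The time-of-flight principle behind LIDAR is simple enough to sketch. The following toy Python snippet (not Stanford's code; the function name is invented for illustration) shows the core calculation: the pulse travels to the object and back, so the distance is half the round trip.

```python
# Toy illustration of LIDAR ranging: time a laser pulse's round trip,
# then halve the distance the light traveled.

C = 299_792_458.0  # speed of light in m/s

def lidar_range(round_trip_seconds):
    """Distance to a reflecting object, from the pulse's round-trip time."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after 200 nanoseconds indicates an object ~30 m away.
print(lidar_range(200e-9))  # → 29.9792458
```

Everything in line of sight can be ranged this way; the Stanford system's contribution is recovering useful timing information even after the light has bounced off a wall.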
The Stanford imaging technology is similar to LIDAR, using a pulse of laser light. But it also captures light that scatters off a wall and reflects off objects that are hidden from view. It basically treats a wall as a mirror. Since walls don’t reflect light as well as a mirror, the team reconstructs the image from the limited number of photons that are reflected back to the sensor by the wall.
These photons are captured by a photon detector set up adjacent to the laser. The detector is so sensitive that it can register a single photon. It builds a “scan” of the reflected light pulses.
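In single-photon systems of this kind, the “scan” is essentially a histogram of photon arrival times. The sketch below (a simplified illustration, not the actual detector interface; the function name is invented) shows the idea: individual arrival times are binned by time of flight, and the resulting counts are what the reconstruction algorithm works from.

```python
# Hedged sketch: bin single-photon arrival times into a time-of-flight
# histogram, the raw measurement a reconstruction algorithm starts from.

def arrival_histogram(arrival_times_ps, bin_width_ps, num_bins):
    """Bin photon arrival times (in picoseconds) into a histogram."""
    counts = [0] * num_bins
    for t in arrival_times_ps:
        b = int(t // bin_width_ps)
        if 0 <= b < num_bins:  # discard photons outside the time window
            counts[b] += 1
    return counts

# Five photons detected at scattered times, binned at 100 ps resolution:
print(arrival_histogram([120, 130, 450, 470, 480], 100, 6))
# → [0, 2, 0, 0, 3, 0]
```

With only a handful of photons per scan position, the histogram is sparse and noisy, which is exactly the problem Wetzstein describes below.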
“These are, at most, a few photons we’re recording, and they don’t resemble the shape of the scene we’re trying to recover,” says Gordon Wetzstein, assistant professor of electrical engineering at Stanford University. “So, we need to build computational reconstruction methods to try to resolve these shapes.”
The computational reconstruction algorithm uses the information from the scan to infer the 3D shape of the hidden objects. According to Stanford News, “Once the scan is finished, the algorithm untangles the paths of the captured photons, and like the mythical enhancement technology of television crime shows, the blurry blob takes a much sharper form.”
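One simple way to build intuition for this untangling is back-projection, a classic non-line-of-sight reconstruction idea. The sketch below is a toy illustration of that principle, not the Stanford team's algorithm (their paper describes a much faster closed-form inversion). Each photon's travel time says the hidden scatterer lies at a fixed distance from the wall point it bounced off; accumulating those constraints over many wall points localizes the object. All names here are invented for illustration.

```python
# Toy back-projection for non-line-of-sight imaging: each measured photon
# constrains the hidden scatterer to lie on a sphere around a wall point
# (confocal case: laser and detector share the wall point). Candidate
# scene points consistent with many measurements accumulate votes.
import math

C = 3e8  # speed of light in m/s (rounded for illustration)

def backproject(measurements, grid):
    """measurements: (wall_x, wall_y, travel_time) tuples, where travel_time
    covers the wall-to-object-to-wall round trip.
    grid: candidate (x, y) hidden-scene points. Returns votes per point."""
    votes = [0] * len(grid)
    for wx, wy, t in measurements:
        radius = C * t / 2.0  # one-way wall-to-object distance
        for i, (x, y) in enumerate(grid):
            if abs(math.hypot(x - wx, y - wy) - radius) < 0.05:  # within 5 cm
                votes[i] += 1
    return votes

# Simulate a hidden point at (1, 2) seen via three wall points on y = 0:
hidden = (1.0, 2.0)
walls = [(0.0, 0.0), (0.5, 0.0), (1.5, 0.0)]
meas = [(wx, wy, 2 * math.hypot(hidden[0] - wx, hidden[1] - wy) / C)
        for wx, wy in walls]
grid = [(1.0, 2.0), (0.0, 1.0)]  # true hidden point and a decoy
print(backproject(meas, grid))   # → [3, 0]: all votes land on the true point
```

In practice the grid is a dense 3D voxel volume and the measurements number in the millions, which is why the efficiency of the reconstruction algorithm matters so much.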
“A benefit of our algorithm is that it is compatible with existing scanning LIDAR systems,” explains David Lindell, Stanford University Ph.D. student.
The system was able to see a street sign hidden behind a wall. That alone would be valuable to an autonomous vehicle. But even more critically, it could detect a child or pet about to dart into traffic, before either entered the line of sight.
The authors of the paper note that the algorithm has uses beyond LIDAR systems in vehicles. Potential applications range from search and rescue, where it could reveal victims hidden beneath tree canopy foliage, to microscopy, where it could image objects obscured by larger items in the field of view.
“To make ‘imaging around corners’ viable for real-world scenarios, we still need to shorten our procedure’s acquisition time,” says Matthew O’Toole, a postdoctoral fellow in imaging technology at Stanford University. “Our current prototype takes several minutes to collect enough photons to reconstruct images of objects hidden from sight. With better hardware such as a brighter laser, we believe this can be done within fractions of a second.”
MATLAB code and data are available here.
To learn more about the research, check out this video: