Behind the Headlines

MATLAB and Simulink behind today’s news and trends

A praying mantis could teach robots a thing or two about 3D machine vision

What’s the best way to teach a robot or drone to see in 3D? Quite possibly, the answer is to teach it to think like an insect. A praying mantis, to be more specific.

A team at the Institute of Neuroscience at Newcastle University recently studied the stereoscopic vision of the praying mantis and found that its approach to depth perception is quite different from ours, and much more computationally efficient.

And what do you need to study praying mantises’ 3D vision? 3D glasses, of course!

 

Praying mantis with 3D glasses affixed to its face with beeswax.

Image Credit: Newcastle University

 

The team, led by behavioral ecologist Dr. Vivek Nityananda, discovered that mantis 3D vision works differently from all previously known forms of biological 3D vision. Mammals, birds, and amphibians with stereo vision compute the slight differences between the images seen by the right and left eyes; those differences pin down objects’ positions in 3D, giving the person or animal depth perception. To see how the mantis compares, the team put its 3D vision to the test.

“The researchers tested the mantises’ vision by simulating prey on a screen. The tests mirrored those carried out to investigate human 3D vision. The images on the screen, seen without the glasses, look like the familiar fuzzy bi-colored images you see when you accidentally stumble into a theater featuring a movie in 3D,” per the ZDNet article, A praying mantis wearing tiny glasses holds the key to robot vision. “What the researchers found was that the mantises only see objects in 3D when they’re moving.”

In their latest research, published in Current Biology, the team showed that the insects don’t process the details of the image. They simply look for areas where the image is changing. By focusing on movement alone, mantises can target their prey.

“This is a completely new form of 3D vision as it is based on change over time instead of static images,” said Nityananda. “In mantises, it is probably designed to answer the question ‘is there prey at the right distance for me to catch?’”

Mantises also use a lot less “computing power” than a human would to analyze the same scene. Given their much smaller brains, that efficiency is a necessity: mantises have fewer than 1 million neurons, compared with a human’s 85 billion. Yet in some ways, their stereo vision is more capable than ours.

“Even if we made the two eyes’ images completely different, mantises can still match up the places where things are changing,” stated Nityananda, “even though humans can’t.”
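The paper isn’t reproduced as code here, but the core idea, matching where the scene is changing rather than what it looks like, is simple enough to illustrate. The hypothetical MATLAB sketch below builds a synthetic pair of left- and right-eye frames with a moving dot, keeps only the pixels that changed between frames, and then finds the horizontal shift that best aligns the two eyes’ change maps. The dot, the threshold, and the matching-by-overlap step are illustrative assumptions, not the researchers’ model.

% Hypothetical sketch (not the Newcastle team's model): estimate binocular
% disparity by matching where the image CHANGES, not what it looks like.

rng(1);
bg    = rand(120, 160);      % shared static textured background
trueD = 12;                  % true horizontal disparity of the target, in pixels

% Two consecutive frames per eye: a small bright dot moves down and to the right.
L1 = insertDot(bg, 60,         40);   L2 = insertDot(bg, 63,         43);
R1 = insertDot(bg, 60 + trueD, 40);   R2 = insertDot(bg, 63 + trueD, 43);

% "Mantis-like" step: keep only the regions that changed over time.
changeL = abs(L2 - L1) > 0.1;
changeR = abs(R2 - R1) > 0.1;

% Match the two change maps by testing horizontal shifts of the right-eye map.
shifts = -30:30;
score  = zeros(size(shifts));
for k = 1:numel(shifts)
    shifted  = circshift(changeR, [0, -shifts(k)]);   % shift left by shifts(k) pixels
    score(k) = nnz(changeL & shifted);                % overlap of changed pixels
end
[~, best] = max(score);
fprintf('Estimated disparity: %d px (true disparity: %d px)\n', shifts(best), trueD);

function img = insertDot(img, x, y)
% Paint a small bright square "dot" centered at column x, row y.
r = 3;
img(max(1, y-r):min(size(img,1), y+r), max(1, x-r):min(size(img,2), x+r)) = 1;
end

The point of the sketch is the contrast with conventional stereo matching, which compares the detailed image content seen by the two eyes rather than just the pixels that changed.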

The research

The researchers created a mantis movie that gave the illusion of prey hovering right in front of the mantis. The mantis would try to catch the prey.

Then they fitted the mantis with special insect 3D glasses, temporarily attached with beeswax. Because each lens holds a different colored filter, the researchers could present a different image to each eye; combined, the two images create the illusion of depth.

 

Image Credit: Newcastle University, from the team’s Current Biology article.

 

In the image above, the targets in the left and right images move until they converge at the appropriate perceived depth, triggering the mantis’s strike.

“They played variations on the same film: a target dot that moved against a polka-dot background. The target dot and its 3-D motion were so convincing that the mantises attacked, like a cat hunting a laser pointer,” per The Washington Post.

All stimuli were custom-written in MATLAB with the Psychophysics Toolbox. Here’s a human version of the stimulus, rendered in red and blue; for the experiment, the video was rendered in blue and green so it would be visible to the mantis. Source: Newcastle University.
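The team’s stimulus code isn’t shown in the article, but a minimal, hypothetical Psychtoolbox sketch gives a feel for how an anaglyph stimulus like this is put together: each eye’s dot is drawn into its own color channel (red and blue here, as in the human demo; the mantis version used blue and green), and the horizontal separation between the two dots grows over time so the target appears to loom closer. The window setup, dot size, and disparity ramp below are placeholders, not the experiment’s parameters.

% Hypothetical anaglyph dot stimulus in MATLAB with Psychtoolbox-3
% (illustrative only; not the Newcastle team's experiment code).
AssertOpenGL;                                      % confirm Psychtoolbox is available

screenId    = max(Screen('Screens'));
[win, rect] = Screen('OpenWindow', screenId, 0);   % full-screen black window
Screen('BlendFunction', win, 'GL_ONE', 'GL_ONE');  % additive blending: overlapping dots mix channels
[cx, cy]    = RectCenter(rect);

dotRadius = 20;                                    % dot radius in pixels (placeholder)
baseRect  = [0 0 2 2] * dotRadius;

for frame = 1:300
    % Crossed disparity grows each frame, so the target appears to float
    % closer to the viewer (assuming the red filter covers the left eye
    % and the blue filter covers the right eye).
    disparity = 0.2 * frame;

    leftRect  = CenterRectOnPointd(baseRect, cx + disparity, cy);   % left-eye dot (red channel)
    rightRect = CenterRectOnPointd(baseRect, cx - disparity, cy);   % right-eye dot (blue channel)

    Screen('FillOval', win, [255 0 0], leftRect);
    Screen('FillOval', win, [0 0 255], rightRect);
    Screen('Flip', win);                           % show this frame; back buffer is cleared
end

Screen('CloseAll');                                % close the window

Viewed through the colored filters, each eye receives only its own dot, and the growing offset reads as a target approaching the viewer.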
