Robots, Drones, & Autonomous Vehicles Will Soon Have Depth Perception


Unlike humans, robots cannot yet judge how far away an object is from a single image. We do so naturally and immediately thanks to our binocular vision. Robots, in contrast, rely on complicated vision systems to see panoramically and to perceive depth. A team at Stanford University and the University of California San Diego is at the forefront of removing this limitation.

4D Camera Technology

The two universities have teamed up to develop a 4D camera that grants improved robotic vision. The camera uses a single-lens image-capture system: a panoramic light field camera that gives robots a 138-degree field of view and can quickly calculate the distance to an object. This achievement adds depth perception to robotic sight.


With current technology, robotic imaging systems gather multiple images and piece them together to create an entire scene composed of different perspectives. With this new 4D camera, a robot can gather the same information from a single image. This is made possible by the combination of the camera's spherical lens, digital signal processing, and light field photography.
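The depth-from-one-exposure idea rests on a classic geometric relation: a light field camera records the scene from several slightly shifted sub-aperture viewpoints, and the pixel shift (disparity) of a point between those views encodes its distance. The sketch below illustrates that relation only; the focal length, baseline, and disparity values are hypothetical and are not the specifications of the Stanford/UCSD camera or its actual processing pipeline.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole relation: depth = focal_length * baseline / disparity.

    focal_px     -- focal length in pixels (hypothetical value below)
    baseline_m   -- spacing between two sub-aperture views, in metres
    disparity_px -- pixel shift of a scene point between the two views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example with made-up numbers: a 1000-pixel focal length, 5 mm between
# sub-aperture views, and a 2-pixel shift for some scene point:
print(depth_from_disparity(1000, 0.005, 2.0))  # → 2.5 (metres)
```

Because a light field camera captures many such view pairs in a single exposure, this computation can be repeated per pixel to produce a full depth map from one image, which is the capability the article describes.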


Benefits of Depth Perception

With these improvements in depth perception and panoramic imaging, robots and drones will be far more capable in crowded areas and obscured landscapes. The same capabilities will let autonomous cars drive more safely and smoothly, and will help them navigate harsh terrain and snow.