Robot Vision and Perception

In navigating and manipulating our world, humans make use of several perceptual systems. Our sense of proprioception indicates our orientation and stability relative to the Earth’s pull. Our sense of touch tells us how well we can trust our footing, and helps us find and interact with actionable targets with our hands. Even our sense of smell occasionally tells us whether we are heading where we want to go.

The majority of humans, however, rely most heavily on the sense of sight. We use it to detect and recognize objects and other organisms, estimate their motion and shape, tell the time of day, judge whether we have completed a task, and so on. In the RRG, we seek to replicate this range of visual abilities in autonomous robots navigating, manipulating, and completing tasks in our world.

The RRG is currently focused on improving low-level vision using higher-level contextual and cognitive cues. In pursuit of this goal, we are developing algorithms for estimating depth and shape from natural RGB images, and for fusing the outputs of robot vision algorithms using estimates of their uncertainty. We are also exploring methods for conveying scene and task semantics from higher-level cognition blocks to lower-level perceptual blocks, drawing inspiration from reciprocal and lateral information flow in the primate brain.
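
As rough intuition for the uncertainty-aware fusion mentioned above, the sketch below combines two depth estimates of the same pixels by inverse-variance weighting, the standard Bayesian rule when both estimates carry independent Gaussian errors. It is a generic illustration under that assumption, not the RRG's specific method; the function name fuse_depths and the numeric values are hypothetical.

    import numpy as np

    def fuse_depths(d1, var1, d2, var2):
        # Inverse-variance weighting: more certain (lower-variance) estimates
        # receive proportionally more weight. Operates elementwise on depth maps.
        w1, w2 = 1.0 / var1, 1.0 / var2
        fused = (w1 * d1 + w2 * d2) / (w1 + w2)
        fused_var = 1.0 / (w1 + w2)  # fused estimate is at least as certain as either input
        return fused, fused_var

    # Hypothetical per-pixel estimates: a stereo depth map and a learned monocular one.
    stereo_d = np.array([2.1, 3.0]); stereo_var = np.array([0.04, 0.09])
    mono_d   = np.array([2.4, 2.7]); mono_var   = np.array([0.25, 0.25])

    depth, variance = fuse_depths(stereo_d, stereo_var, mono_d, mono_var)
    print(depth)     # pulled toward the lower-variance (stereo) estimates
    print(variance)

Because the weights scale with 1/variance, the noisier source is automatically downweighted wherever the other source is confident, and vice versa.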

Research Themes

Semantically-informed robot vision
Uncertainty-aware sensor fusion
Computer vision and machine learning
Human vision and primate neuroscience

People

Laura Brandt
Ph.D., 2024