Robot Perception of Context

Human environments are dynamic and always in flux. People constantly re-purpose spaces, rearrange objects, and alter their behaviors. Human perception is robust to these changes: it takes a context-based approach, watching, learning, and adapting dynamically. In contrast, most machine perception is content-based and ignores important perceptual cues; it succeeds only within known, static environments; and it tends to report artificially high success rates on biased datasets. This significantly limits robots’ perceptual autonomy when operating in real time in human environments, leaving them inflexible in the face of noise and change.

We have been working to address this gap on several fronts. First, we designed a model of context that is computationally fast and robust, drawing multidisciplinary inspiration from neuroscience and entomology (O’Connor and Riek, 2015). This first effort included a successful validation of context perception on a noisy YouTube dataset of human activities that mimics real-world operating conditions for robots (e.g., frequent occlusion, large variations in lighting and sound levels). We then built on this work to enable a mobile robot to automatically perceive context across noisy, busy locations on campus and use that perception to inform its behavior around people (Nigam and Riek, 2015).

Recently, we have developed new methods for generating object proposals (Chan, Taylor, and Riek, 2017), which enable robots to quickly perceive their environment. We are applying this work to enable robots to sense and partner with human teams in real time (Taylor and Riek, 2017, 2018).
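
For readers unfamiliar with object proposals, the sketch below illustrates the general idea: generating class-agnostic candidate bounding boxes that a downstream recognizer can then score. It uses OpenCV's off-the-shelf selective search purely for illustration; it is not the method from Chan, Taylor, and Riek (2017), and the image path is a placeholder.

```python
# Illustrative only: a generic object-proposal sketch using OpenCV's
# selective search (requires opencv-contrib-python). This is NOT the
# proposal method from Chan, Taylor, and Riek (2017); it simply shows
# what an object-proposal stage produces: class-agnostic candidate
# boxes a downstream recognizer can score.
import cv2


def generate_proposals(image_path: str, max_proposals: int = 200):
    """Return up to `max_proposals` candidate boxes as (x, y, w, h)."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)

    ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
    ss.setBaseImage(image)
    ss.switchToSelectiveSearchFast()  # faster, lower-recall mode
    boxes = ss.process()              # array of (x, y, w, h) proposals
    return boxes[:max_proposals]


if __name__ == "__main__":
    # "scene.jpg" is a placeholder path for illustration.
    for (x, y, w, h) in generate_proposals("scene.jpg", max_proposals=5):
        print(f"proposal at ({x}, {y}), size {w}x{h}")
```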