A Self-Calibrating, Vision-Based Navigation Assistant
Abstract
We describe a body-worn sensor suite, environment representation, set of algorithms, and graphical-aural interface designed to provide human-centered guidance to a person moving through a complex space. The central idea underlying our approach is to model the environment as a graph of visually distinctive places (graph nodes) connected by path segments (graph edges). During exploration, our algorithm processes multiple video-rate inputs to identify visual features and construct the “place graph” representation of the traversed space. The system then provides visual and/or spoken guidance in user-centered terms to lead the user along existing or newly synthesized paths. Our approach is novel in several respects: it requires no precise calibration of the cameras or multi-camera rig used; it generalizes to any number of cameras with any placement on the body; it learns the correlation between user motion and evolution of image features; it constructs the place graph automatically; and it provides only coarse (rather than precise metrical) guidance to the user. We present an experimental study of our methods applied to walking routes through both indoor and outdoor environments, and show that the system provides accurate localization and effective navigation guidance.
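
To make the place-graph representation concrete, the following minimal Python sketch illustrates the structure the abstract describes: nodes that hold visual-feature signatures of distinctive places, edges that record traversed path segments, and a graph search that yields a coarse, node-level route suitable for guidance. The paper does not publish code; all class and member names here (PlaceNode, PathEdge, PlaceGraph, route) are hypothetical illustrations, not the authors' implementation.

    # Illustrative sketch only; names and fields are assumptions, not the paper's API.
    from dataclasses import dataclass
    from collections import deque

    @dataclass
    class PlaceNode:
        node_id: int
        feature_signature: list          # e.g. aggregated image-feature descriptors

    @dataclass
    class PathEdge:
        src: int
        dst: int
        approx_length_m: float           # coarse estimate, not precise metrical distance

    class PlaceGraph:
        def __init__(self):
            self.nodes = {}              # node_id -> PlaceNode
            self.adj = {}                # node_id -> list of outgoing PathEdge

        def add_place(self, node: PlaceNode) -> None:
            self.nodes[node.node_id] = node
            self.adj.setdefault(node.node_id, [])

        def add_segment(self, edge: PathEdge) -> None:
            # Assumption: path segments are traversable in both directions.
            self.adj[edge.src].append(edge)
            self.adj[edge.dst].append(PathEdge(edge.dst, edge.src, edge.approx_length_m))

        def route(self, start: int, goal: int) -> list:
            # Breadth-first search over places: enough for coarse guidance,
            # since the system gives directions node by node rather than metrically.
            frontier = deque([[start]])
            visited = {start}
            while frontier:
                path = frontier.popleft()
                if path[-1] == goal:
                    return path
                for edge in self.adj[path[-1]]:
                    if edge.dst not in visited:
                        visited.add(edge.dst)
                        frontier.append(path + [edge.dst])
            return []                    # no known path between the two places

A newly synthesized path, in these terms, is simply a node sequence returned by the graph search that was never traversed as a whole during exploration, even though each of its constituent edges was.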