Using obstacles and road pixels in the disparity-space computation of stereo-vision based occupancy grids
Abstract
Occupancy grids have been used for a variety of applications in the field of robotics. These grids have typically been created from data provided by range sensors such as laser or ultrasound, following a probabilistic sensor model such as [1]. The use of stereo-vision to create occupancy grids is less common. This paper details a novel approach to compute occupancy grids, as applied to intelligent vehicles. Occupancy is initially computed directly in the stereoscopic sensor's disparity space, which allows occlusions in the observed area to be handled. The approach is also computationally efficient, since it uses the u-disparity representation to avoid processing a large point cloud. The occupancy calculation formally accounts for the detection of obstacles and the road in disparity space, as well as partial occlusions in the scene. In a second stage, this disparity-space occupancy grid is transformed into a Cartesian-space occupancy grid for use by subsequent applications. This transformation includes a filtering step to reduce discretization effects and to explicitly account for the relation between range and uncertainty in stereoscopic data. In this paper, we present the method and show the results obtained with real road data.
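The abstract does not give the exact formulation, but the two building blocks it mentions, the u-disparity representation and the disparity-to-Cartesian mapping whose uncertainty grows with range, can be illustrated with a minimal sketch. The code below assumes a standard pinhole stereo model; the parameters FOCAL_PX, BASELINE_M, CX and the function names are placeholders chosen for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical stereo rig parameters (illustrative only, not from the paper):
# focal length in pixels, baseline in metres, horizontal principal point.
FOCAL_PX = 700.0
BASELINE_M = 0.5
CX = 320.0


def u_disparity(disparity, max_d=128):
    """Accumulate, for each image column u, a histogram over disparity values.

    Cell (d, u) counts how many pixels in column u have (quantized) disparity d.
    In this u-disparity image, vertical obstacles show up as strong peaks at a
    single disparity, while the road spreads its pixels across disparities.
    """
    h, w = disparity.shape
    ud = np.zeros((max_d, w), dtype=np.int32)
    for u in range(w):
        col = disparity[:, u]
        valid = (col > 0) & (col < max_d)
        ud[:, u] = np.bincount(col[valid].astype(int), minlength=max_d)
    return ud


def disparity_cell_to_cartesian(u, d):
    """Map a (column, disparity) cell to (X, Z) ground-plane coordinates.

    Standard stereo triangulation: Z = f * b / d and X = (u - cx) * Z / f.
    Since dZ/dd = -f * b / d**2, a fixed disparity step covers an ever larger
    range interval as Z grows, which is the range/uncertainty relation the
    Cartesian-grid filtering step has to account for.
    """
    z = FOCAL_PX * BASELINE_M / d
    x = (u - CX) * z / FOCAL_PX
    return x, z


if __name__ == "__main__":
    # Toy disparity map, just to exercise the two helpers.
    rng = np.random.default_rng(0)
    disparity = rng.integers(0, 128, size=(480, 640)).astype(np.float32)
    ud = u_disparity(disparity)
    print("u-disparity shape:", ud.shape)
    print("cell (u=400, d=20) ->", disparity_cell_to_cartesian(400, 20))
```

This sketch only covers the geometric bookkeeping; the paper's contribution, computing occupancy probabilities over these disparity-space cells (including occlusion handling) and filtering them into a Cartesian grid, is not reproduced here.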