In this paper, experiments are presented in support of an adaptive color-depth (RGB-D) camera-based visual odometry algorithm. The goal of visual odometry is to estimate the egomotion of a robot using images from a camera attached to the robot. This type of measurement can be extremely useful when position sensor information, such as GPS, is unavailable and when other motion sensors (e.g., wheel encoders) produce inaccurate estimates (e.g., due to wheel slip). In the presented method, visual odometry algorithm parameters are adapted to ensure that odometry measurements are accurate while also considering computational cost. Live experiments demonstrate the feasibility of implementing the proposed algorithm on small wheeled mobile robots.
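
The abstract does not specify the adaptation rule, so the following Python sketch is only a rough illustration of the general idea rather than the authors' method: a hypothetical per-frame feature budget (n_features) is adjusted so that an accuracy proxy (here an assumed inlier ratio) stays high while per-frame computation time stays bounded. All function names, thresholds, and the simulated cost model are illustrative assumptions.

```python
# Conceptual sketch only (not the paper's implementation): adapt a visual-odometry
# parameter to balance accuracy against computational cost.
import time
import numpy as np

def estimate_motion(rgb_frame, depth_frame, n_features):
    """Placeholder frame-to-frame pose estimator.

    A real RGB-D visual-odometry front end would extract and match features and
    solve for the 6-DOF transform; here we only simulate its cost and return a
    dummy pose plus an inlier ratio used as an accuracy proxy (assumed model).
    """
    time.sleep(1e-5 * n_features)               # cost grows with the feature budget
    inlier_ratio = min(1.0, 0.4 + 0.001 * n_features)
    return np.eye(4), inlier_ratio               # identity transform as a stand-in

def adapt_feature_budget(n_features, inlier_ratio, elapsed_s,
                         min_inlier_ratio=0.7, max_time_s=0.05):
    """Simple rule-based adaptation (illustrative): raise the budget when accuracy
    is low, lower it when the frame took too long to process."""
    if inlier_ratio < min_inlier_ratio:
        return int(n_features * 1.25)
    if elapsed_s > max_time_s:
        return max(100, int(n_features * 0.8))
    return n_features

if __name__ == "__main__":
    n_features = 500
    pose = np.eye(4)
    for frame_idx in range(20):                  # stand-in for a live camera stream
        rgb, depth = np.zeros((480, 640, 3)), np.zeros((480, 640))
        t0 = time.perf_counter()
        delta, inliers = estimate_motion(rgb, depth, n_features)
        elapsed = time.perf_counter() - t0
        pose = pose @ delta                      # integrate estimated egomotion
        n_features = adapt_feature_budget(n_features, inliers, elapsed)
        print(f"frame {frame_idx:2d}: n_features={n_features}, inliers={inliers:.2f}")
```

On a resource-constrained wheeled robot, this kind of feedback loop lets the front end spend computation only when the motion estimate degrades, which is the trade-off the abstract highlights.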
