The goal of this paper is to perform a parametric study on a newly developed visual odometry algorithm for use with color-depth (RGB-D) camera pairs, such as the Microsoft Kinect. In this algorithm, features are detected in the color image and converted to 3D points using the depth image. These features are then described by their 3D location and matched across subsequent frames based on spatial proximity. The visual odometry is then calculated using a one-point inverse kinematic solution. The primary contribution of this work is the identification of critical operating parameters associated with the algorithm, the analysis of their effects on the visual odometry performance, and the verification of the analysis using experimentation.
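The feature pipeline described above (back-projecting color-image features through the depth image, then matching by 3D proximity) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pinhole intrinsics, function names, and nearest-neighbor distance threshold are all illustrative assumptions.

```python
import numpy as np

def backproject(pixels, depths, fx, fy, cx, cy):
    """Convert pixel features (u, v) with depth z to 3D camera-frame points
    via the pinhole model: X = (u - cx) z / fx, Y = (v - cy) z / fy, Z = z."""
    u, v = pixels[:, 0], pixels[:, 1]
    x = (u - cx) * depths / fx
    y = (v - cy) * depths / fy
    return np.column_stack([x, y, depths])

def match_by_proximity(prev_pts, curr_pts, max_dist=0.10):
    """Associate each current-frame 3D feature with the nearest previous-frame
    feature, keeping the pair only if it lies within max_dist (meters).
    The 0.10 m threshold is an illustrative choice, not the paper's value."""
    matches = []
    for j, p in enumerate(curr_pts):
        d = np.linalg.norm(prev_pts - p, axis=1)
        i = int(np.argmin(d))
        if d[i] < max_dist:
            matches.append((i, j))
    return matches

# Kinect-like intrinsics (illustrative values only).
fx = fy = 525.0
cx, cy = 319.5, 239.5

prev_px = np.array([[319.5, 239.5], [400.0, 300.0]])
curr_px = np.array([[321.0, 241.0], [401.5, 301.0]])
prev_3d = backproject(prev_px, np.array([2.0, 1.5]), fx, fy, cx, cy)
curr_3d = backproject(curr_px, np.array([2.0, 1.5]), fx, fy, cx, cy)
pairs = match_by_proximity(prev_3d, curr_3d)
```

The resulting 3D-to-3D correspondences are what a frame-to-frame motion estimate (here, the paper's one-point inverse kinematic solution) would consume.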
