In this research, color-depth (RGB-D) camera pairs, such as the Microsoft® Kinect™, are used to enable one-point visual odometry. In the proposed algorithm, features are detected in the color image using the speeded-up robust features (SURF) algorithm and converted into 3-dimensional (3D) feature locations using information from the depth image. These 3D features allow feature tracking between successive images based on spatial proximity. An inverse kinematic solution is then used to calculate the visual odometry between frames from the 3D feature matches. The proposed method supports visual odometry measurement with as few as a single feature correspondence between frames. The algorithm is implemented on a small, wheeled mobile robot (WMR) and evaluated experimentally in a series of representative indoor operational contexts. The preliminary results demonstrate that the proposed approach tracks the motion of the robot to within 4–6% error, which is comparable to other, more computationally expensive algorithms.
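The two geometric steps summarized above — back-projecting a detected 2D feature into a 3D point using the depth image, and matching 3D features between frames by spatial proximity — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the intrinsic parameters (`FX`, `FY`, `CX`, `CY`), the `max_dist` threshold, and the greedy nearest-neighbour pairing are all assumptions for demonstration; the actual calibration values and matching strategy come from the camera and the authors' method.

```python
import math

# Hypothetical pinhole intrinsics (focal lengths and principal point, in
# pixels); real values come from the RGB-D camera's calibration.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def backproject(u, v, depth):
    """Convert a pixel feature (u, v) and its depth reading (metres)
    into a 3D point in the camera frame via the pinhole model."""
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return (x, y, depth)

def match_by_proximity(prev_pts, curr_pts, max_dist=0.10):
    """Greedily pair 3D features between successive frames by nearest
    spatial neighbour, rejecting pairs farther apart than max_dist
    metres (an assumed threshold)."""
    matches = []
    used = set()
    for i, p in enumerate(prev_pts):
        best_j, best_d = None, max_dist
        for j, q in enumerate(curr_pts):
            if j in used:
                continue
            d = math.dist(p, q)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            used.add(best_j)
            matches.append((i, best_j))
    return matches
```

Given such 3D correspondences, a rigid-body transform between frames can be solved for even from one match plus the robot's kinematic constraints, which is what makes the one-point formulation possible.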
