This paper describes a computationally viable localization technique for an autonomous underwater vehicle (AUV) that carries visual inspection and nondestructive testing equipment inside large water conduits and tunnels without taking them out of service. The localization technique must estimate the instantaneous location of the robot in real time, with sufficient accuracy for the robot's control system. The proposed technique features a sensor fusion framework that combines a monocular camera with an inertial navigation system (INS). Localization is carried out using a standard Lucas-Kanade algorithm, which searches for a subset of matching pixels between two sequential images to estimate a motion vector for the time interval between them. The novelty of the proposed technique lies in using the INS to predict the rotation and translation between the two sequential images. This prediction is used to narrow the search region of the Lucas-Kanade algorithm and hence significantly reduce the computational load of the overall localization process. Experimental results on a dedicated testbed verify that the proposed system not only reduces the computational load but also improves accuracy, since a false match is unlikely within the narrowed search region.
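To make the idea concrete, the sketch below illustrates one plausible way to seed a Lucas-Kanade search with an INS motion prediction, using OpenCV's pyramidal implementation. It is not the authors' code: the planar-wall assumption, the camera intrinsics K, the assumed wall distance, and helper names such as predict_homography are illustrative choices made here, not details taken from the paper.

```python
# Minimal sketch of INS-seeded Lucas-Kanade tracking (illustrative, not the paper's code).
# Assumes the tracked surface (conduit wall) is roughly planar at a known distance,
# so the INS-predicted motion (R, t) induces a homography on the image plane.
import cv2
import numpy as np

def predict_homography(K, R, t, n=np.array([0.0, 0.0, 1.0]), d=1.0):
    """Homography induced by predicted camera motion (R, t) for a plane with
    normal n at distance d from the camera (plane parameters are assumptions)."""
    H = K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)
    return H / H[2, 2]

def track_with_ins_seed(prev_img, next_img, prev_pts, K, R_ins, t_ins, wall_dist):
    # Predict where each feature should land using the INS motion estimate.
    H = predict_homography(K, R_ins, t_ins, d=wall_dist)
    pts = prev_pts.reshape(-1, 1, 2).astype(np.float32)
    predicted = cv2.perspectiveTransform(pts, H)

    # Seed pyramidal Lucas-Kanade with the prediction: because the residual
    # motion left to search for is small, a small window and shallow pyramid suffice.
    next_pts, status, err = cv2.calcOpticalFlowPyrLK(
        prev_img, next_img, pts, predicted.astype(np.float32),
        winSize=(15, 15), maxLevel=2,
        flags=cv2.OPTFLOW_USE_INITIAL_FLOW,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 20, 0.03),
    )
    good = status.ravel() == 1
    return pts.reshape(-1, 2)[good], next_pts.reshape(-1, 2)[good]
```

Restricting the search to a neighborhood of the INS prediction is what reduces both the computation per frame and the chance of locking onto a visually similar but incorrect patch.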
