Most aircraft exterior inspections require human workers to visually detect defects such as dents, cracks, leaks, and broken or missing parts, and to manually measure the parameters of each identified defect. This process is time-consuming and error-prone when the inspector's attention lapses. Advances in computer vision and robotics can alleviate this situation, saving time and relieving human workers of such repetitive and fatiguing tasks. However, automated robotic inspection of the aircraft exterior remains challenging due to the very large surface to be inspected, the full-coverage requirement, and the frequent lack of a sufficiently detailed digital model for planning the inspection path. This paper presents a two-stage approach that automates visual inspection of the aircraft exterior surface in a static environment, such as an MRO shop, using a mobile manipulator equipped with a consumer-grade RGB-D camera. The manipulator follows an optimal inspection trajectory learned from a low-resolution point cloud model of the aircraft, which is useful when a proper CAD model of the aircraft is unavailable. In the first stage, a low-cost RGB-D camera acquires a coarse point cloud model of the whole aircraft or of the area of interest. In the second stage, a full-coverage inspection path is learned from the coarse model via reinforcement learning; this path-learning stage is the focus of this work.
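The second stage can be illustrated with a minimal sketch, not the authors' implementation: the inspected surface is abstracted as a small grid of candidate viewpoints (in practice these would be derived from the coarse point cloud), and tabular Q-learning rewards the agent for reaching uncovered viewpoints until full coverage is achieved. The grid size, reward values, and hyperparameters below are illustrative assumptions.

```python
# Hedged sketch: full-coverage path learning via tabular Q-learning on a
# toy 3x3 grid of viewpoints.  All names and parameters are illustrative.
import random
from collections import defaultdict

N = 3                               # grid side: 9 hypothetical viewpoints
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
FULL = (1 << (N * N)) - 1           # bitmask meaning "every viewpoint covered"

def step(pos, mask, a):
    """Move within the grid; reward new coverage, penalize revisits."""
    r, c = pos
    nr = min(max(r + a[0], 0), N - 1)
    nc = min(max(c + a[1], 0), N - 1)
    bit = 1 << (nr * N + nc)
    reward = 1.0 if not (mask & bit) else -0.1
    return (nr, nc), mask | bit, reward

def train(episodes=5000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Learn Q-values over (position, coverage-mask) states."""
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        pos, mask = (0, 0), 1       # start at a corner, already covered
        for _ in range(4 * N * N):  # step budget per episode
            if mask == FULL:
                break
            if rng.random() < eps:  # epsilon-greedy exploration
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[(pos, mask, i)])
            npos, nmask, r = step(pos, mask, ACTIONS[a])
            best = 0.0 if nmask == FULL else max(
                Q[(npos, nmask, i)] for i in range(len(ACTIONS)))
            Q[(pos, mask, a)] += alpha * (r + gamma * best - Q[(pos, mask, a)])
            pos, mask = npos, nmask
    return Q

def greedy_path(Q):
    """Roll out the learned policy; return the path and whether it covers all."""
    pos, mask, path = (0, 0), 1, [(0, 0)]
    for _ in range(4 * N * N):
        if mask == FULL:
            break
        a = max(range(len(ACTIONS)), key=lambda i: Q[(pos, mask, i)])
        pos, mask, _ = step(pos, mask, ACTIONS[a])
        path.append(pos)
    return path, mask == FULL
```

Including the coverage mask in the state keeps the problem fully observable, at the cost of a state space exponential in the number of viewpoints; a real aircraft-scale system would need function approximation or a coarser viewpoint decomposition rather than this tabular form.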
