Abstract

This paper presents a target detection technique that combines a supervised learning model with sensor data to eliminate false positives in a given input image frame. Such a technique aids selective docking procedures in environments where multiple robots are present, with the sensor data providing additional information for the decision-making process. Sensor accuracy plays a crucial role when the motion of the robot is driven by the data recorded by its sensors. Uncertainties in the sensory data, such as those caused by poor calibration, can produce misalignments that result in poor positioning of the robot relative to its target. Such misalignments become significant where a certain level of accuracy is required, so they must be minimized to ensure reliable interaction between the robot and its target. The work proposed in this paper achieves this accuracy with a vision-based approach that eliminates all false occurrences, leading to selective interactions with the target. The proposed methodology is validated on a self-reconfigurable mobile robot capable of hybrid wheeled-tracked mobility, as an application towards autonomous docking of mobile robotic modules.
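The abstract does not specify how the vision detections and the sensor data are fused; the following Python sketch is purely illustrative and assumes a hypothetical interface in which a classifier's detections (bearing plus confidence) are cross-checked against a range scan, keeping only candidates whose bearing coincides with a plausible range reading.

```python
# Illustrative sketch only: the paper does not describe this interface.
# All names (Detection, range_scan, expected_range) are hypothetical and
# show one possible way to gate vision detections with sensor data
# so that false positives are rejected before a docking attempt.
from dataclasses import dataclass
from typing import List, Tuple
import math


@dataclass
class Detection:
    """A candidate target reported by the supervised vision model."""
    bearing_rad: float   # bearing of the detection in the robot frame
    confidence: float    # classifier confidence in [0, 1]


def filter_false_positives(detections: List[Detection],
                           range_scan: List[float],
                           angle_min: float,
                           angle_increment: float,
                           expected_range: Tuple[float, float] = (0.3, 2.0),
                           min_confidence: float = 0.6) -> List[Detection]:
    """Keep only detections that the range sensor corroborates.

    A detection survives if (a) its classifier confidence clears a
    threshold and (b) the range reading nearest its bearing falls
    inside the distance band where a docking target is plausible.
    """
    confirmed = []
    for det in detections:
        if det.confidence < min_confidence:
            continue
        # Index of the scan beam closest to the detection's bearing.
        idx = round((det.bearing_rad - angle_min) / angle_increment)
        if 0 <= idx < len(range_scan):
            r = range_scan[idx]
            if math.isfinite(r) and expected_range[0] <= r <= expected_range[1]:
                confirmed.append(det)
    return confirmed
```

In such a scheme the range sensor acts as a consistency check on the vision model, so a detection without physical support at the expected distance is discarded rather than triggering a docking maneuver.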
