Abstract

Teleoperated robotic systems equipped with computer vision are expanding across industries, including surgical, industrial, and space and ocean exploration applications. While the benefits of vision-guided robot control are evident, traditional Cartesian workspace mapping methods struggle near singularity points. These limitations can cause jerky movements or system shutdowns at the remote end, presenting significant hurdles to seamless operation. This work integrates two algorithms into robot teleoperation: 1) an edge-drifting algorithm for teleoperation workspace mapping, and 2) a deep learning algorithm for obstacle recognition and collision avoidance. By combining these two approaches in robot control, we aim to cover the robot's workspace during task operation with improved accuracy and efficiency, ensuring both safety and seamless motion planning for robot teleoperation.
