Most industrial robots are taught using the teach-and-playback method, which makes them unsuitable for variable production systems. Although offline teaching methods have been developed, they have not been put into practical use because of the low accuracy of the position and posture of the end-effector. Many studies have therefore attempted to calibrate the position and posture, but these have not reached a practical level because they consider only the joint angles of the stationary robot rather than features captured during robot motion. With the spread of Internet of Things technologies, servo information is now easy to obtain during numerically controlled operation. In this study, we propose a method that acquires servo information during robot motion and converts it into images so that features can be extracted by a convolutional neural network (CNN). A large industrial robot was used, and the three-dimensional coordinates of its end-effector were measured with a laser tracker. The CNN accurately learned the positioning error of the robot, and we extracted features at the points where the positioning error was extremely large. Feature extraction for the X-axis positioning error identified the joint 1 current as a key feature, indicating that the vibration current in joint 1 contributes to the X-axis positioning error.
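The abstract does not specify how servo signals are converted into images for the CNN. The following is a minimal sketch of one plausible preprocessing step, assuming (hypothetically) that each motion segment yields several servo channels (e.g. joint currents) of varying length, which are resampled and normalized into a fixed-size grayscale image with one row per channel; the function name and layout are illustrative, not the authors' actual method.

```python
import numpy as np

def servo_to_image(channels, width=64):
    """Convert multi-channel servo time series into a fixed-size
    grayscale image (one row per channel; hypothetical layout).

    channels: list of 1-D arrays, e.g. joint currents sampled
              during one motion segment (lengths may differ).
    """
    rows = []
    for sig in channels:
        sig = np.asarray(sig, dtype=float)
        # Resample each signal to a common width by linear interpolation,
        # so segments of different duration map to the same image size.
        x_old = np.linspace(0.0, 1.0, len(sig))
        x_new = np.linspace(0.0, 1.0, width)
        sig = np.interp(x_new, x_old, sig)
        # Min-max normalize each channel to [0, 1] so that amplitude
        # differences between joints do not dominate the image.
        span = sig.max() - sig.min()
        rows.append((sig - sig.min()) / span if span > 0 else np.zeros(width))
    return np.stack(rows)  # shape: (n_channels, width)

# Example: two synthetic "joint current" traces of unequal length.
img = servo_to_image([np.sin(np.linspace(0.0, 6.0, 100)),
                      np.cos(np.linspace(0.0, 6.0, 80))])
print(img.shape)  # (2, 64)
```

An image built this way can then be fed to a standard 2-D CNN classifier or regressor; the per-channel normalization is a design choice that preserves waveform shape while discarding absolute amplitude.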
