Abstract
Advanced motion planning is crucial for safe and efficient robotic operation in smart manufacturing scenarios such as assembly, packaging, and palletizing. Compared with traditional motion planning methods, Reinforcement Learning (RL) adapts better to complex and dynamic working environments. However, training RL models is often time-consuming, and determining well-behaved reward function parameters is challenging. To tackle these issues, an adaptive robot motion planning approach based on Digital Twin and RL is proposed. The core idea is to adaptively select a geometry-based or an RL-based method for robot motion planning through a real-time distance detection mechanism, which reduces the complexity of RL model training and accelerates the training process. In addition, Bayesian Optimization is integrated into RL training to refine the reward function parameters. The approach is validated with a Digital Twin-enabled robot system on five kinds of tasks (Pick and Place, Drawer Open, Light Switch, Button Press, and Cube Push) in dynamic environments. Experimental results show that the proposed approach outperforms the traditional RL-based method, with faster training and guaranteed task performance. This work contributes to the practical deployment of adaptive robot motion planning in smart manufacturing.
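The switching mechanism summarized above can be pictured as follows. This is a minimal sketch under assumed names (`SAFE_DISTANCE`, `geometry_planner`, `rl_policy`) and an assumed point-sampling distance check; it illustrates the idea of distance-triggered planner selection, not the paper's actual implementation.

```python
# Hypothetical sketch of distance-based planner switching; all names and the
# threshold value are illustrative assumptions, not taken from the paper.
import numpy as np

SAFE_DISTANCE = 0.30  # meters; assumed switching threshold


def min_obstacle_distance(robot_points: np.ndarray,
                          obstacle_points: np.ndarray) -> float:
    """Smallest Euclidean distance between sampled robot and obstacle points."""
    # Pairwise differences: (N, 1, 3) - (1, M, 3) -> (N, M, 3)
    diffs = robot_points[:, None, :] - obstacle_points[None, :, :]
    return float(np.linalg.norm(diffs, axis=-1).min())


def plan_step(state, robot_points, obstacle_points, geometry_planner, rl_policy):
    """Select a planner per control step via real-time distance detection.

    Far from obstacles, a cheap geometry-based planner suffices; near
    obstacles, the learned RL policy handles local avoidance. Restricting the
    RL policy to the near-obstacle regime is what shrinks the training problem.
    """
    if min_obstacle_distance(robot_points, obstacle_points) > SAFE_DISTANCE:
        return geometry_planner(state)  # e.g., straight-line or interpolated motion
    return rl_policy(state)             # learned policy for cluttered regions
```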