Abstract

Snake robots, composed of sequentially connected joint actuators, have recently gained increasing attention in industrial applications such as life detection in narrow spaces. Such robots can navigate complex environments through the cooperation of multiple motors mounted along the backbone. However, controlling these robots in a physically constrained environment is challenging, and conventional control strategies can be energy-inefficient or even fail to reach the destination. This work develops a snake locomotion gait policy via deep reinforcement learning (DRL) for energy-efficient control. After the environment model is established, we apply a physics-constrained online policy gradient method based on the proximal policy optimization (PPO) objective, with each joint motor parameterized by its angular velocity. The DRL agent learns the standard serpenoid curve at each timestep, and the policy is updated from observations of the robot and estimates of the current state. The robot simulator and task environment are built on PyBullet. Compared with conventional control strategies, a snake robot controlled by the trained PPO agent achieves faster movement and a more energy-efficient locomotion gait. This work demonstrates that DRL provides an energy-efficient solution for robot control.
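For readers unfamiliar with the standard serpenoid curve mentioned above, the sketch below shows its usual joint-space form, phi_i(t) = alpha * sin(omega * t + i * beta) + gamma. This is a generic illustration of serpenoid gait generation, not the paper's implementation; all parameter names and values are assumptions chosen for the example.

```python
import math

def serpenoid_angles(n_joints, t, alpha=0.5, omega=2.0, beta=0.6, gamma=0.0):
    """Target angle for each joint of an n-joint snake robot at time t.

    phi_i(t) = alpha * sin(omega * t + i * beta) + gamma
    alpha: oscillation amplitude (rad), omega: temporal frequency (rad/s),
    beta: phase offset between adjacent joints, gamma: steering bias.
    Parameter values are illustrative, not taken from the paper.
    """
    return [alpha * math.sin(omega * t + i * beta) + gamma
            for i in range(n_joints)]

# Example: reference angles for an 8-joint snake at t = 0.0
angles = serpenoid_angles(8, 0.0)
```

In a DRL setup like the one described, a policy would typically output the gait parameters (or joint velocity commands) rather than track this fixed curve directly; the curve serves as the baseline locomotion pattern the agent learns.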
