Abstract

The obstacle avoidance problem for autonomous surface vessels (ASVs) has long attracted the attention of the marine control research community. For safety, an ASV must avoid many kinds of obstacles, such as shores, cliffs, floating objects, and other vessels; developing a heading and path planning strategy for the ASV is therefore the central task and the remaining challenge. Traditional obstacle avoidance algorithms incur heavy online computation in the working environment. This computational cost can be reduced by training an obstacle avoidance model with reinforcement learning (RL): the ASV then selects the most efficient action based on the experience it has accumulated in the past. In this paper, RL is adopted to design a decision-making agent for obstacle avoidance. To train the model under a sparse-feedback environment, a hierarchical reinforcement learning (HRL) method is applied, which achieves better obstacle avoidance performance and longer survival time. A replay memory pool and a target network are also used to stabilize the ASV's training process. Simulation results demonstrate that HRL makes the learning process of the unmanned ship's obstacle avoidance smoother and more effective, and that the survival time of the ASV is improved.
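To make the two stabilization tricks named above concrete, the sketch below shows a replay memory pool and a periodically synchronized target network in the simplest possible setting: tabular Q-learning on a hypothetical one-dimensional corridor with a collision state at one end and a goal at the other. This is an illustrative toy only; the corridor environment, reward values, and hyperparameters are assumptions for demonstration and are not the paper's ASV dynamics or its HRL hierarchy.

```python
import random
from collections import deque

# Toy environment (assumed for illustration): a 1-D corridor of 8 cells.
# Hitting cell 0 is a collision (large penalty), cell 7 is the goal.
N_STATES, OBSTACLE, GOAL, START = 8, 0, 7, 2
ACTIONS = (-1, 1)  # move left / move right

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + a))
    if s2 == OBSTACLE:
        return s2, -10.0, True   # collision ends the episode
    if s2 == GOAL:
        return s2, 10.0, True    # reaching the goal ends the episode
    return s2, -0.1, False       # small per-step cost otherwise

def train(episodes=500, gamma=0.95, alpha=0.2, eps=0.2,
          batch=16, sync_every=25, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    target = [row[:] for row in q]      # frozen copy used for bootstrapping
    memory = deque(maxlen=500)          # replay memory pool
    for ep in range(episodes):
        s, done = START, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < eps:
                ai = rng.randrange(len(ACTIONS))
            else:
                ai = max(range(len(ACTIONS)), key=lambda i: q[s][i])
            s2, r, done = step(s, ACTIONS[ai])
            memory.append((s, ai, r, s2, done))
            s = s2
            # learn from a random minibatch, bootstrapping from the
            # target table rather than the rapidly changing q table
            for st, at, rt, st2, dn in rng.sample(memory, min(batch, len(memory))):
                boot = 0.0 if dn else gamma * max(target[st2])
                q[st][at] += alpha * (rt + boot - q[st][at])
        if ep % sync_every == 0:
            target = [row[:] for row in q]  # periodic target-network sync
    return q

q = train()
```

After training, the greedy policy derived from `q` steers away from the collision cell and toward the goal. The same structure carries over to the deep RL setting: the table becomes a neural network, the target table becomes a delayed copy of its weights, and the replay pool decorrelates the minibatch samples.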
