Abstract
This study explores the design issues of a learning-based approach to a tri-finger robotic manipulation task, which requires complex movements and coordination among the fingers. Using reinforcement learning, we train an agent to acquire the skills necessary for proficient manipulation. To enhance learning efficiency, effectiveness, and robustness, two knowledge transfer strategies, fine-tuning and curriculum learning, are employed and compared within the soft actor-critic (SAC) architecture. Fine-tuning allows the agent to leverage pre-trained knowledge and adapt it to new tasks. Several task- and learning-related factors are investigated and evaluated, such as model versus policy transfer and within- versus across-task transfer. To eliminate the need for pre-training, curriculum learning decomposes the advanced task into simpler, progressive stages, mirroring how humans learn. The number of learning stages, the context of the subtasks, and the timing of stage transitions are examined as critical design parameters. The key design parameters of the two strategies and their effects are explored in both context-aware and context-unaware scenarios, allowing us to identify the conditions under which each method performs best, derive conclusive insights, and contribute to a broader range of learning-based engineering applications.