Abstract

In this work, we propose a teaching-learning framework that enables a robot to learn from multi-modal human demonstrations, build task knowledge, and assist its human partner in collaborative tasks. The multi-modal demonstrations are parameterized by natural language and forearm gestures. A Random Forest algorithm is employed for the robot to learn from the demonstrations and construct its task knowledge in assembly contexts. The experimental results suggest that the proposed approach not only allows the robot to gain task knowledge directly from human demonstrations but also provides a more natural and user-friendly teaching pattern for non-expert users. In addition, the proposed method allows users to customize the robot's motion pattern according to their working habits.
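The abstract names a Random Forest as the learner mapping multi-modal demonstration features to task knowledge. The following is a minimal, hypothetical sketch of that idea using scikit-learn; the feature layout (a language-feature vector concatenated with forearm-gesture features), the action labels, and the synthetic data are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: classify demonstrated assembly actions from
# multi-modal features. The split into 4 "language" dims and 6 "gesture"
# dims, and the three action classes, are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic demonstrations: each row concatenates a toy language-feature
# vector (4 dims) with toy forearm-gesture measurements (6 dims).
n_demos = 200
X = rng.normal(size=(n_demos, 10))
# Toy action labels for three assembly steps (e.g., pick, align, fasten).
y = rng.integers(0, 3, size=n_demos)

# Train the Random Forest on the demonstration data.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Infer the action for a new multi-modal observation.
pred = clf.predict(rng.normal(size=(1, 10)))
```

In a real system, the gesture features would come from forearm-worn sensors and the language features from a command parser, but the learning step would follow this shape.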
