Abstract

Learning from Demonstration (LfD) is an approach to robot programming in which the machine aims to replicate a task demonstrated by a human without being explicitly programmed to execute it. Although LfD is an effective way to create complex robot routines even for users without coding skills, most implementations rely heavily on sensors that let the robot capture the state of the surrounding world and the task being demonstrated.

In this paper we propose an alternative LfD approach based on a fully simulated 3D environment. We demonstrate how simulation can eliminate the need for real-life sensors on the robot, serve as a unified medium for recording demonstrated tasks, and facilitate sharing of the produced solutions between different types of robotic cells with minimal to no reconfiguration. A Virtual Reality interface allows the operator to interact with the LfD environment in a natural way when recording task demonstrations for the robot. The system is built on top of commonly available software such as the Unity engine, the Robot Operating System (ROS), ROS-Industrial, and the MoveIt motion planning framework. It will be published on the public GitHub page of TalTech IVAR Lab [1].
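To make the software stack concrete: the sketch below is purely illustrative and not taken from the paper. It shows how a pose recorded in the simulated environment could be replayed through MoveIt's Python interface; the planning group name "manipulator", the node name, and the target pose values are all assumptions.

    import sys
    import rospy
    import moveit_commander
    from geometry_msgs.msg import Pose

    # Initialize MoveIt's Python wrapper and a ROS node for the replay script.
    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("lfd_replay_sketch", anonymous=True)

    # "manipulator" is a common ROS-Industrial planning group name; the actual
    # group depends on the robot's MoveIt configuration.
    group = moveit_commander.MoveGroupCommander("manipulator")

    # An end-effector pose recorded during a VR demonstration (placeholder values).
    target = Pose()
    target.position.x, target.position.y, target.position.z = 0.4, 0.0, 0.3
    target.orientation.w = 1.0

    # Plan to the pose and execute the resulting trajectory.
    group.set_pose_target(target)
    success = group.go(wait=True)
    group.stop()
    group.clear_pose_targets()

Because MoveIt abstracts the robot behind its planning group, a script of this shape is what allows the same recorded demonstration to drive different robot models with little reconfiguration.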

We also provide a simple experimental procedure to validate the system's ability to generate programs for multiple robot models from a single task demonstration, showing its flexibility and ease of use for the operator.
