In this paper, we propose an exteroceptive-sensing-based framework to achieve safe human-robot interaction during shared tasks. Our approach allows a human to operate in close proximity to the robot, while pausing the robot's motion whenever a collision between the human and the robot is imminent. The human's presence is sensed by an N-range-sensor system, which consists of multiple range sensors mounted at various points on the periphery of the work cell. Each range sensor is based on a Microsoft Kinect sensor, observes the human, and outputs a 20 DOF human model. Positional data from these models are fused together to generate a refined human model. Next, the robot and the human model are approximated by dynamic bounding spheres, and the robot's motion is controlled by tracking collisions between these spheres. Whereas most previous exteroceptive methods relied on depth data from camera images, our approach is one of the first successful attempts to build an explicit human model online and use it to evaluate human-robot interference. Real-time behavior observed during experiments, in which a human safely performed shared assembly tasks with a 5 DOF robot, validates our approach.
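The sphere-based interference test described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the sphere representation (center, radius), and the pause decision are assumptions for exposition.

```python
import numpy as np

def spheres_collide(center_a, radius_a, center_b, radius_b):
    """Two spheres overlap when the distance between their centers
    is no greater than the sum of their radii."""
    dist = np.linalg.norm(np.asarray(center_a) - np.asarray(center_b))
    return dist <= radius_a + radius_b

def robot_should_pause(robot_spheres, human_spheres):
    """Pause the robot if any robot bounding sphere intersects any
    human bounding sphere. Each sphere is a (center, radius) pair."""
    return any(
        spheres_collide(c_r, r_r, c_h, r_h)
        for c_r, r_r in robot_spheres
        for c_h, r_h in human_spheres
    )

# Hypothetical example: one robot link sphere and one human-limb sphere.
robot = [((0.0, 0.0, 0.0), 0.3)]
human_near = [((0.4, 0.0, 0.0), 0.2)]   # within 0.5 m, overlaps
human_far = [((2.0, 0.0, 0.0), 0.2)]    # well separated
```

In practice, the sphere centers would be updated each control cycle from the robot's forward kinematics and the fused 20 DOF human model, so the pairwise check runs online at sensor rate.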
