This paper presents a framework for the detection, segmentation, and recognition of emblematic gestures for human-robot interaction. The framework is based on a new encoding of arm kinematics that reflects both the muscular activity of the performer and the appearance of the arm as seen by the recipient while a gesture is performed. Under this encoding, gestures can be viewed as sequences of torque activations that drive the arm's parts to express a comprehensible meaning. Moreover, these sequences have very stable topologies and shapes across performers, which facilitates generalization of the recognition process with minimal learning effort for online use. Promising results were obtained on a set of 5 gesture classes performed by 19 different persons.
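To make the idea of gestures as symbol sequences concrete, here is a minimal, hypothetical sketch (not the authors' implementation): each gesture is encoded as a sequence of discrete torque-activation symbols, and a new sequence is classified by its edit distance to one stored template per class, mirroring the abstract's claim that stable sequence shapes allow recognition with minimal learning. All symbol names and templates below are invented for illustration.

```python
def edit_distance(a, b):
    """Levenshtein distance between two symbol sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def classify(sequence, templates):
    """Return the label of the template nearest to the observed sequence."""
    return min(templates, key=lambda label: edit_distance(sequence, templates[label]))

# Toy templates: one torque-activation sequence per gesture class (invented symbols).
templates = {
    "wave":  ["shoulder+", "elbow+", "elbow-", "elbow+", "elbow-"],
    "point": ["shoulder+", "elbow+", "hold"],
}

observed = ["shoulder+", "elbow+", "elbow-", "elbow+"]
print(classify(observed, templates))  # -> wave
```

Because only one template per class is stored, "training" reduces to recording a single exemplar, which is one plausible reading of the minimal-learning-effort claim; the paper's actual coding and classifier may differ.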
