This paper compares two approaches to controlling virtual hands in a grasping simulation and investigates how well humans can control a virtual hand model with their own hands in manipulative tasks. In our setup, users observe their interaction in a desktop virtual reality environment, allowing us to assess the feasibility of user studies for user–virtual product interaction. Because no real force feedback is provided, the user can judge the correctness of a grasping posture only from visual feedback. The interaction between the hand and the virtual object is computed in a simulation, so the spatial positions of the real hand and fingers must be measured accurately in real time. Our simulation program uses the Nvidia PhysX SDK. We have implemented two control mechanisms that enable users of the system to manipulate the virtual hand. The first mechanism controls the motion of the virtual hand and the grasping forces based on the principles of kinematics and energy transfer via contact simulation. The second mechanism relies on the principles of multibody dynamics, controlling the motion of the hand with PD controllers and applying joint torques to the hand to exert forces on the grasped object. In this paper, we compare how well these principles perform in (a) accurately moving the virtual hand in the simulation space, (b) accurately positioning the fingers on the grasped objects, and (c) controlling the grasping forces on the objects.
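To illustrate the second mechanism described in the abstract, the sketch below shows a minimal PD controller driving a single hand joint toward a tracked target angle by applying a joint torque. All function names, gains, and the inertia value are illustrative assumptions, not the paper's actual implementation or the PhysX API.

```python
def pd_torque(target, angle, velocity, kp=50.0, kd=8.0):
    """PD control law: proportional error term plus derivative damping.
    kp and kd are illustrative gains, not values from the paper."""
    return kp * (target - angle) - kd * velocity

def simulate_joint(target, steps=2000, dt=0.001, inertia=0.01):
    """Forward-integrate a hypothetical 1-DOF joint under PD control
    using semi-implicit Euler steps (a stand-in for the physics engine)."""
    angle, velocity = 0.0, 0.0
    for _ in range(steps):
        torque = pd_torque(target, angle, velocity)
        velocity += (torque / inertia) * dt  # update velocity first
        angle += velocity * dt               # then position (symplectic)
    return angle

# Drive the joint toward a target flexion angle of 0.8 rad.
final_angle = simulate_joint(target=0.8)
```

With these gains the joint is heavily overdamped, so it settles on the target without overshoot; in the real system the target angle would come from the tracked finger pose, and the resulting joint torques are what transmit grasping forces to the object.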
