Efficient support of conceptual design requires dedicated computer-based systems that feature new kinds of interaction and visualization techniques. As input means for such systems, various modalities have been considered. Hand motions have been found to be especially efficient at describing shapes and expressing shape-related operations directly in 3D space. Therefore, the authors have earlier developed a formal hand motion language (HML). Computer interpretation of HML is challenging, however, not only because of the technological complexity of the problem, but also because of the need for real-time computation. Our hypothesis has been that the HML interpretation problem can be reduced to the sub-problems of motion detection, trajectory segmentation, hand posture recognition, and command mapping. The objective of trajectory segmentation is to find the non-transient parts of the hand motion that can be mapped to the words of HML. In this paper we propose a method that combines trajectory segmentation and hand posture recognition. Based on the postural information conveyed by the individual frames of the recorded motion, the beginning and the end of the meaningful segments are identified. In addition, the spatial and geometric information related to the formal HML words is also gathered. These pieces of information are combined in order to reconstruct and visualize the control commands in the shape conceptualization system. The current results show that the necessary computer algorithms are fast enough and do not impose restrictions on the process of hand motion interpretation. Future research will concentrate on the integration of hand motion detection and reconstruction with the visualization and manipulation of shape concepts in a fully volumetric imaging environment.
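The posture-driven segmentation step described above can be illustrated by a minimal sketch. This is not the authors' implementation: the `Frame` record, the posture labels, and the `min_len` stability threshold are all illustrative assumptions; the sketch only shows the general idea of grouping consecutive frames with a stable posture label into non-transient segments that could then be mapped to HML words.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical per-frame record: a 3D hand position plus a posture label
# (e.g. produced by a posture classifier). Names are illustrative only;
# "none" marks frames considered transient.
@dataclass
class Frame:
    position: Tuple[float, float, float]
    posture: str

def segment_by_posture(frames: List[Frame],
                       min_len: int = 3) -> List[Tuple[int, int, str]]:
    """Group consecutive frames that share a stable posture label into
    segments (start_index, end_index, posture). Runs shorter than
    min_len, or labeled "none", are treated as transient and dropped."""
    segments = []
    start = 0
    for i in range(1, len(frames) + 1):
        # A segment boundary is reached at the end of the sequence or
        # when the posture label changes between consecutive frames.
        if i == len(frames) or frames[i].posture != frames[start].posture:
            if i - start >= min_len and frames[start].posture != "none":
                segments.append((start, i - 1, frames[start].posture))
            start = i
    return segments
```

For example, a recording with postures `none, point, point, point, none, grasp, grasp, grasp, grasp` would yield two meaningful segments, one for each stable posture run; their spatial extents could then supply the geometric information attached to the corresponding HML words.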
