Existing interfaces for 3D CAD modeling software rely on 2D subspace inputs, such as the x and y axes of a mouse, to create 3D models. These interfaces are inherently modal, because the user must switch between subspaces, and they disconnect the input space from the modeling space. As a result, existing interfaces are tedious, complex, non-intuitive, and difficult to learn. In this paper, a multi-sensory, interactive, and intuitive 3D CAD modeling interface is presented to address these shortcomings. Three modalities (gestures, a brain-computer interface, and speech) are used to create the interactive and intuitive 3D CAD modeling interface. A DepthSense® camera from SoftKinetic is used to recognize gestures, an EEG neuro-headset from Emotiv® is used to acquire and process neuro-signals, and CMU Sphinx is used to recognize and process speech. Multiple CAD models created by several users with the proposed multi-modal interface are presented. In conclusion, the proposed system is easier to learn and use than existing systems.
