In the past two decades, various CAE technologies and tools have been developed for the design, development, and specification of the graphical user interface (GUI) of consumer products, both in and outside the automotive industry. As automotive manufacturers increasingly deploy speech interfaces, this work must be extended to speech interface modeling, an area where both technologies and methodologies are lacking.

This paper presents our recent work aimed at developing a speech interface integrated with an existing GUI modeling system. A multi-contour seat was used as the testbed for the work. Our prototype allows one to adjust the multi-contour seat with a touchscreen GUI, steering-wheel-mounted buttons coupled with an instrument-cluster display, or a speech interface.

The speech interface modeling began with an initial language model, which was developed by interviewing both expert and novice users. The interviews yielded a base corpus and the linguistic information needed for an initial speech grammar model and dialog strategy. After the module was developed, it was integrated into the existing GUI modeling system so that the human voice is treated as a standard input to the system, similar to a press on the touchscreen. The multimodal prototype was used in two customer clinics. In each clinic, we asked subjects to adjust the multi-contour seat using different modalities, including the touchscreen, the steering-wheel-mounted buttons, and the speech interface. We collected both objective and subjective data, including task completion time and customer feedback. Based on the clinic results, we refined both the language model and the dialog strategy. Our work has proven effective for developing a speech-centric, multimodal human-machine interface.
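The integration idea described above (voice treated as a standard input, interchangeable with a touchscreen press) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; all names, the toy grammar, and the seat controls are assumptions.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass(frozen=True)
class InputEvent:
    """Modality-neutral input event; illustrative only."""
    control: str   # e.g. "lumbar_support"
    action: str    # e.g. "increase" or "decrease"

class SeatController:
    """Single handler shared by all modalities: touchscreen,
    steering-wheel buttons, and speech all emit InputEvents."""
    def __init__(self) -> None:
        # Hypothetical seat state: control name -> position level.
        self.positions: Dict[str, int] = {"lumbar_support": 5}

    def handle(self, event: InputEvent) -> int:
        delta = {"increase": 1, "decrease": -1}[event.action]
        self.positions[event.control] += delta
        return self.positions[event.control]

# Toy speech grammar: recognized phrases map to the same events
# a touchscreen button press would emit.
SPEECH_GRAMMAR: Dict[str, InputEvent] = {
    "more lumbar support": InputEvent("lumbar_support", "increase"),
    "less lumbar support": InputEvent("lumbar_support", "decrease"),
}

def on_speech(utterance: str, controller: SeatController) -> int:
    return controller.handle(SPEECH_GRAMMAR[utterance])

def on_touch(event: InputEvent, controller: SeatController) -> int:
    return controller.handle(event)

controller = SeatController()
print(on_touch(InputEvent("lumbar_support", "increase"), controller))  # 6
print(on_speech("more lumbar support", controller))                    # 7
```

The point of the sketch is the design choice: because speech is normalized into the same event type as touch input, the existing GUI model needs no modality-specific branches.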
