ASME Press Select Proceedings
International Conference on Instrumentation, Measurement, Circuits and Systems (ICIMCS 2011)
By
Chen Ming
ISBN:
9780791859902
No. of Pages:
1400
Publisher:
ASME Press
Publication date:
2011

To enable a 3D facial visual speech system to quickly build a human face model and synthesize realistic animation suited to various portable devices for multi-modal interaction, a visual speech system based on the Candide-3 model is presented. The system first maps frontal face photographs onto the Candide-3 model to build the face model; it then uses the model to establish mouth definition tables, which adaptively determine the controllable regions of all mouth feature points and the mobility factors of the non-feature points controlled by them, so that the animation can be driven by the mouth feature points during pronunciation. Experimental results of 3D animation produced with the system demonstrate that the animation reaches an acceptable level in both subjective and objective assessments.
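The feature-point-driven deformation described above can be sketched in a few lines: each mouth feature point carries a displacement, and the non-feature vertices it controls move by that displacement scaled by their mobility factors. The table layout, index values, and factor values below are illustrative assumptions, not the paper's actual definition tables.

```python
import numpy as np

# Hypothetical mouth definition table (illustrative values only): for each
# feature point, the indices of the non-feature vertices it controls and
# their mobility factors in [0, 1].
mouth_table = {
    0: {"controlled": np.array([10, 11]), "mobility": np.array([0.8, 0.4])},
    1: {"controlled": np.array([11, 12]), "mobility": np.array([0.5, 0.9])},
}

def drive_mouth(vertices, feature_idx, displacements, table):
    """Apply each feature point's displacement, then propagate it to the
    non-feature vertices it controls, scaled by their mobility factors.
    Contributions from multiple feature points add up."""
    out = vertices.copy()
    for k, fi in enumerate(feature_idx):
        d = displacements[k]                       # (3,) displacement of feature point k
        out[fi] += d                               # move the feature point itself
        entry = table[k]
        # scale the same displacement per controlled vertex and accumulate
        out[entry["controlled"]] += entry["mobility"][:, None] * d
    return out
```

A vertex controlled by two feature points (index 11 above) receives the sum of both scaled displacements, which is one simple way such overlapping controllable regions could be blended.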

Abstract
Keywords:
Introduction
Texture Mapping
Mouth Definition Tables
Experiment
Conclusions
References