ASME Press Select Proceedings

International Conference on Information Technology and Computer Science, 3rd (ITCS 2011)

Editors
V. E. Muhin
National Technical University of Ukraine
W. B. Hu
Wuhan University
ISBN: 9780791859742
No. of Pages: 656
Publisher: ASME Press
Publication date: 2011

The size of most statistical language models trained on large-scale corpora now exceeds the storage capacity of many handheld devices. This paper proposes a language model compression method that combines domain compression with importance pruning, using a unit retained rate to control the size of the language model. We use perplexity to evaluate the performance of the language model obtained from the new compression method. The experimental results show that the new compression method is well adapted to the needs of handheld devices.
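The full text is available only via PDF, so the exact algorithms are not given here. As a rough illustration of the two ideas the abstract names, the sketch below shows generic importance pruning of n-gram entries controlled by a retained-rate parameter, plus a perplexity measure for comparing the model before and after pruning. The function names, the unigram simplification, and the fixed fallback log-probability for pruned entries are all assumptions for illustration, not the paper's method.

```python
import math

def prune_by_retained_rate(ngram_logprobs, retained_rate):
    """Keep only the top fraction of n-gram entries, ranked by log-probability.

    `retained_rate` in (0, 1] plays the role of the abstract's "unit retained
    rate": it directly controls the size of the compressed model.
    """
    ranked = sorted(ngram_logprobs.items(), key=lambda kv: kv[1], reverse=True)
    keep = max(1, int(len(ranked) * retained_rate))
    return dict(ranked[:keep])

def perplexity(logprobs, test_tokens, floor_logprob=-10.0):
    """Perplexity of a (unigram) model on a token sequence, using natural logs.

    Tokens pruned from the model fall back to a fixed floor log-probability,
    a crude stand-in for the backoff a real n-gram model would use.
    """
    total = sum(logprobs.get(tok, floor_logprob) for tok in test_tokens)
    return math.exp(-total / len(test_tokens))

# Toy model: four unigrams with probabilities summing to 1.
model = {"the": math.log(0.4), "cat": math.log(0.3),
         "sat": math.log(0.2), "mat": math.log(0.1)}

# Retain the top 50% of entries; perplexity on held-out text can only
# stay the same or rise, quantifying the compression/quality trade-off.
pruned = prune_by_retained_rate(model, 0.5)
test = ["the", "cat", "sat"]
print(len(pruned), perplexity(model, test) <= perplexity(pruned, test))
```

Evaluating perplexity at several retained rates would trace out the size-versus-quality curve that the paper's experiments on handheld-device constraints presumably explore.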

Abstract
Keywords
1 Introduction
2 Statistical Language Models
3 Language Model Compression
4 The Improved Compression Method
5 Experimental Results
Conclusion
Acknowledgments
Reference
This content is only available via PDF.