International Conference on Information Technology and Computer Science, 3rd (ITCS 2011)
Research on the Language Model Compression Combining Domain Compressing and Importance Pruning
Currently, the size of most statistical language models trained on large-scale corpora exceeds the storage capacity of many handheld devices. This paper proposes a language model compression method that combines domain compressing and importance pruning, using a unit retained rate to control the size of the language model. We use perplexity to evaluate the performance of the language model obtained with the new compression method. The experimental results show that the new method adapts well to the needs of handheld devices.
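The abstract does not give the paper's algorithm in detail, but the two quantities it names can be sketched. Below is a minimal, illustrative Python sketch (all names are hypothetical): "importance pruning" is approximated by ranking n-grams with a simple count-based importance score and keeping the fraction given by the unit retained rate, and perplexity is computed from per-token probabilities in the standard way.

```python
import math

# Illustrative sketch only; the paper's actual importance measure and
# domain-compression step are not reproduced here.

def prune_by_retained_rate(ngram_counts, retained_rate):
    """Keep the most 'important' n-grams, with raw count as a stand-in
    importance score. retained_rate is the fraction of units to keep.

    ngram_counts: dict mapping n-gram tuple -> count
    """
    ranked = sorted(ngram_counts.items(), key=lambda kv: kv[1], reverse=True)
    keep = max(1, int(len(ranked) * retained_rate))
    return dict(ranked[:keep])

def perplexity(probs):
    """Perplexity of a test sequence from its per-token probabilities:
    2 ** (-(1/N) * sum(log2 p_i))."""
    log_sum = sum(math.log2(p) for p in probs)
    return 2 ** (-log_sum / len(probs))
```

A smaller retained rate yields a smaller model but typically a higher (worse) perplexity; the paper's evaluation measures exactly this trade-off.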