Authors
Seunghak Yu, Nilesh Kulkarni, Haejun Lee, Jihie Kim
Publication date
2017/8/18
Conference
First Workshop on Subword and Character Level Models in NLP (SCLeM @ EMNLP 2017), p. 92
Description
Language models for agglutinative languages have long been hindered by the myriad agglutinated forms that can be derived from any given word through various affixes. We propose a method that mitigates the out-of-vocabulary problem by introducing an embedding derived from syllables and morphemes, which leverages the agglutinative property. Our model outperforms character-level embedding in perplexity by 16.87 with 9.50M parameters. The proposed method achieves state-of-the-art performance over existing input prediction methods in terms of Key Stroke Saving and has been commercialized.
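To make the core idea concrete, below is a minimal sketch of a syllable-derived word embedding, not the authors' exact architecture: each Hangul syllable gets its own embedding, and syllable vectors are composed into a word vector, so unseen agglutinated forms still receive meaningful representations instead of an unknown token. The mean-pooling composition, dimensions, and PyTorch framing are illustrative assumptions.

    # Sketch: syllable-level word embedding for an agglutinative language (Korean).
    # Assumptions (not from the paper): PyTorch, mean pooling, toy dimensions.
    import torch
    import torch.nn as nn

    HANGUL_BASE = 0xAC00       # first precomposed Hangul syllable code point
    NUM_SYLLABLES = 11172      # number of precomposed Hangul syllables
    PAD_ID = NUM_SYLLABLES     # padding index for variable-length words

    def syllable_ids(word: str) -> list[int]:
        """Map each Hangul syllable in `word` to an integer id; non-Hangul -> PAD."""
        return [ord(ch) - HANGUL_BASE if 0xAC00 <= ord(ch) <= 0xD7A3 else PAD_ID
                for ch in word]

    class SyllableWordEmbedding(nn.Module):
        """Compose a word vector by mean-pooling its syllable embeddings."""
        def __init__(self, dim: int = 128):
            super().__init__()
            self.emb = nn.Embedding(NUM_SYLLABLES + 1, dim, padding_idx=PAD_ID)

        def forward(self, batch: list[str]) -> torch.Tensor:
            max_len = max(len(w) for w in batch)
            ids = torch.full((len(batch), max_len), PAD_ID, dtype=torch.long)
            for i, w in enumerate(batch):
                s = syllable_ids(w)
                ids[i, :len(s)] = torch.tensor(s)
            vecs = self.emb(ids)                  # (batch, max_len, dim)
            mask = (ids != PAD_ID).unsqueeze(-1)  # ignore padding positions
            return (vecs * mask).sum(1) / mask.sum(1).clamp(min=1)

    # Two inflections of the same stem share syllable parameters, so the
    # out-of-vocabulary problem shrinks relative to word-level lookup tables.
    embedder = SyllableWordEmbedding()
    print(embedder(["간다", "갔습니다"]).shape)  # torch.Size([2, 128])

Because the syllable inventory is fixed and small, this keeps the parameter count modest while still covering every agglutinated surface form.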
Total citations
[Citations-per-year chart, 2017–2024]
Scholar articles
S Yu, N Kulkarni, H Lee, J Kim - arXiv preprint arXiv:1708.05515, 2017