Article | Open access | Peer-reviewed

Skip-Gram-KR: Korean Word Embedding for Semantic Clustering

2019; Institute of Electrical and Electronics Engineers; Volume: 7; Language: English

10.1109/access.2019.2905252

ISSN

2169-3536

Authors

Sun-Young Ihm, Jihye Lee, Young-Ho Park

Topic(s)

Text and Document Classification Technologies

Abstract

Deep learning algorithms are used in a wide range of applications, including pattern recognition, natural language processing, and speech recognition. Recent neural network-based natural language processing techniques rely on fixed-length word embeddings. Word embedding is a method of encoding a word at a specific position into a low-dimensional dense vector of fixed length while preserving the distributional similarity of its surrounding words. Currently, word embedding methods developed for other languages are applied to Korean; however, because these methods were originally designed for English, they do not reflect the word order and structure of Korean. In this paper, we propose a word embedding method for Korean, called Skip-gram-KR, together with a Korean affix tokenizer. Skip-gram-KR creates similar-word training data through backward mapping and a two-word skipping method. The experimental results show that the proposed method achieves the highest accuracy.
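As a rough illustration of how skip-gram training pairs are generated from a tokenized sentence, the minimal Python sketch below pairs each center word with context words up to two positions away. The abstract does not specify the exact backward-mapping or two-word-skipping rules of Skip-gram-KR, so the skipgram_pairs function and its skip parameter are hypothetical stand-ins showing standard skip-gram data construction, not the paper's actual method.

    # Minimal sketch of skip-gram training-pair generation. The `skip`
    # parameter is a hypothetical illustration: Skip-gram-KR's actual
    # backward-mapping and two-word-skipping rules are not given here.
    def skipgram_pairs(tokens, skip=2):
        """Yield (center, context) pairs within a +/- `skip` word window."""
        pairs = []
        for i, center in enumerate(tokens):
            for j in range(max(0, i - skip), min(len(tokens), i + skip + 1)):
                if j != i:
                    pairs.append((center, tokens[j]))
        return pairs

    # Example with morpheme-level Korean tokens, as a Korean affix
    # tokenizer might produce for "나는 학교에 간다" (I go to school);
    # the resulting pairs would feed a skip-gram embedding model.
    tokens = ["나", "는", "학교", "에", "가", "ㄴ다"]
    print(skipgram_pairs(tokens)[:5])
    # [('나', '는'), ('나', '학교'), ('는', '나'), ('는', '학교'), ('는', '에')]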